An image generated by Gemini AI to represent the social media battle of the political parties in Kerala.
It is getting hot in Kerala. Not just the weather, but the political scene as well. With the general elections scheduled for April 9, the coming weeks were always expected to be intense – literally and figuratively. What is different this time, however, is where much of that heat will be felt.
Campaigning is no longer confined to roads, stages, and public meetings. As temperatures rise, more of it is shifting to screens. Sitting at home, scrolling through a phone, watching campaign videos on YouTube, or catching political snippets on social media has quietly become a central part of how people consume election messaging. In many ways, this is now the primary battleground.
It is in this context that we are helping to launch a special conversation series on the On the Dot Media YouTube channel, which attempts to unpack how this election is playing out in the digital space – not just through human observation, but with the help of AI.
Tracking everything manually is impossible. Even keeping up with mainstream media – newspapers, television channels, digital platforms – is a stretch. Social media is an entirely different ocean. Facebook, X, Instagram and YouTube each have their own rhythm, audiences and narratives. The scale alone makes it humanly unmanageable.
So we decided to bring AI’s data-crunching power into the process. The TikTalk News team began working with multiple platforms – ChatGPT, Claude and Gemini – to analyse social media activity around the three major political fronts and their supporter ecosystems. The aim was not to arrive at a definitive answer, but to see whether we could get a reasonably clear picture of what was happening.
The analyses generated by these AI platforms were then shared with a seasoned and widely respected political analyst, our senior editorial advisor MG Radhakrishnan, along with a tech-savvy academic, Arun Surendran, principal of Trinity Engineering College. The idea was to have a short, focused conversation on how social media campaigning is unfolding during this election.
What followed was both fascinating and humbling.
Powerful Tools: Each AI platform responded differently. The same dataset, the same prompt structure – and yet, the outputs varied in tone, emphasis and even conclusions. It became immediately clear that AI does not deliver a single, objective truth. It delivers interpretations, even when presented with the same task.
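The workflow behind that comparison is simple to sketch: send one identical prompt to each model and lay the replies side by side. The snippet below is a minimal illustration of that idea, not the team's actual pipeline – the stub functions stand in for real API clients (ChatGPT, Claude and Gemini would each need their own SDK call), and all names here are hypothetical.

```python
# Hypothetical sketch: run the same prompt through several models and
# collect the raw replies for side-by-side comparison.
from typing import Callable, Dict

Model = Callable[[str], str]

def compare_models(prompt: str, models: Dict[str, Model]) -> Dict[str, str]:
    """Send one prompt to every model and return {model_name: reply}."""
    return {name: model(prompt) for name, model in models.items()}

# Stand-ins for real API clients; each returns a deliberately different
# "interpretation" of the same data, as the article observed in practice.
stub_models: Dict[str, Model] = {
    "model_a": lambda p: "Front X dominates video content this week.",
    "model_b": lambda p: "Engagement is highest around Front Y's posts.",
}

replies = compare_models(
    "Summarise social media activity of Kerala's three political fronts",
    stub_models,
)
for name, reply in replies.items():
    print(f"{name}: {reply}")
```

With real backends plugged in, the divergence between replies is itself the finding: the comparison table, not any single answer, is what gets handed to a human analyst.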
Prompting turned out to be a skill in itself. The more precise we became, the better the responses. But even then, subtle biases crept in – sometimes in the framing, sometimes in what was highlighted or ignored. Inferred material had to be flagged, and stated facts had to be verified. It was a sharp learning curve. Every iteration taught us something new, not just about the platforms, but about the nature of analysis itself.
At one level, the experience reinforced the obvious strength of AI – its ability to process vast amounts of data and compress it into capsule form. Feed the models a task, and they will crawl through large volumes of information on the web and return with a structured summary in minutes. For something as chaotic as social media, that is incredibly powerful.
Different Picture: But at another level, it raised deeper questions. If different AI systems produce different readings of the same reality, then what does “analysis” really mean in this context? And more importantly, who controls the narrative – the data, the model, or the person writing the prompt?
Alongside text analysis, we also experimented with AI-generated images – some of which will be featured as part of the On the Dot Media episodes. Images were the first step. It was not as simple as it seemed. Getting the right output required careful prompting, iteration and a fair bit of patience. But once it clicked, the results were impressive.
Video was more challenging. There were limitations, inconsistencies and a fair amount of trial and error. But the broader realisation was hard to ignore – with enough persistence, a single individual can now create reasonably sophisticated visual content without a camera, a crew or a studio.
This is where the perspective of MG Radhakrishnan adds an important layer. He recalls his entry into journalism, when it was a complex, almost magical system. Reports were written by hand, type was set manually, and printing involved an entire chain of people and processes. What appeared in the newspaper the next day was the result of coordinated effort across a large team.
Extended Influence: Contrast that with today. A single person, sitting at home with a laptop or even just a smartphone, can produce and distribute content through social media that can potentially reach a large audience. Writing, editing, publishing and promotion can all be handled with AI tools. Even video production – once the domain of specialised teams – is becoming accessible.
The idea of a one-person company is often discussed. What is less talked about is the rise of the one-person media operation – and now, increasingly, the one-person media influencer. That is perhaps the most significant shift this exercise brings into focus.
For political campaigns, this opens up enormous possibilities. It is only a matter of time before AI is used at scale – not just to analyse public sentiment, but to create targeted messaging, generate content and respond in real time.
But there is a flip side.
Final Vote: When anyone can create convincing images, videos or narratives, the line between reality and fabrication becomes increasingly blurred. We are already seeing early signs – synthetic visuals, misleading clips and fabricated claims spreading rapidly across platforms. In the context of an election, the stakes are significantly higher.
For the average viewer, this creates a difficult situation. The volume of content is overwhelming, and the tools to verify authenticity are limited. Trust, which was already under strain in the age of social media, now faces a new layer of complexity.
What this On the Dot exercise makes clear is that we are entering a phase where the challenge is no longer access to information, but interpretation. AI can read the ocean. It can summarise, cluster and highlight patterns. But understanding what those patterns mean – and deciding what to trust – remains a human responsibility.
As Kerala heads into an election influenced by algorithms as much as by ideology, that responsibility becomes even more critical.
Quantum pioneers win Turing Award
Quantum physicists Charles Bennett and Gilles Brassard have been awarded the 2025 Turing Award for founding quantum information science. Considered the Nobel Prize of computing, the award carries a prize of 1 million US dollars. The duo first met during a beach swim four decades ago, a meeting that led to quantum cryptography. At the time, their work seemed fascinating but impractical. Today, it is poised to secure the world's most sensitive information.
You can make researchers into billionaires, but you can never take the researchers out of them – as this Turkish couple proves. Ugur Sahin and Ozlem Tureci of BioNTech are stepping away from leadership to focus on research. They helped pioneer Covid-19 vaccines with Pfizer, becoming global figures. By 2023, millions of doses had been administered worldwide. Now billionaires, they say management is “not what we are passionate about” – even as mRNA heads towards a 78 billion US dollars market.
Atomberg turns a new leaf
Local startups that improve everyday products are always favourites, and Atomberg remains one of them. The Pune-based firm, known for transforming the fan market, is entering component manufacturing. It will design motors and controllers for ACs, refrigerators, and washing machines. To power this B2B push, it is investing 150–200 crore rupees in a new Pune facility. Clients include Godrej and Voltas, with more OEMs in talks.
China clears first commercial brain chip
Brain chips have already hit the market. China has approved the first invasive brain–computer interface for commercial use. Developed by Neuracle Medical Technology, the system pairs a brain implant with a robotic glove for thought-driven gripping. It targets patients with spinal cord injuries who cannot grasp objects. The implant sits above brain tissue, reducing risk – putting China ahead of global rivals like Neuralink.
When code meets courtroom
When a faulty toaster causes injury, companies can be sued. But what about software? A California lawsuit is asking whether social media design can be treated as a product defect. The focus is on features like infinite scroll and algorithmic recommendations. It argues these designs may harm vulnerable users, especially younger audiences. The outcome could reshape how tech platforms are regulated globally.