At the height of the latest India-Pakistan conflict, a parallel battle unfolded online – a battle of narratives. While independent fact-checkers and the government-run Press Information Bureau scrambled to debunk fake news, unsubstantiated claims, and AI-generated misinformation, many users turned to AI chatbots like Grok and Ask Perplexity to verify claims circulating on X.
Here is an example: On May 10, India and Pakistan agreed to stop all military activity – on land, in the air and at sea – at 5 PM. While responding to some user queries the next day, Grok called it a “US-brokered ceasefire”. However, on May 10, when a user asked about Donald Trump’s role in mediating the ceasefire, Grok added some missing context, saying, “Indian officials assert the ceasefire was negotiated directly between the two countries’ military heads. Pakistan acknowledges US efforts alongside others,” presenting a more rounded version of events.
Such inconsistencies reveal a deeper problem with AI responses. Experts warned that though AI chatbots can provide accurate information, they are far from reliable “fact-checkers”. These chatbots can give real-time responses, but more often than not, they may add to the chaos, especially in evolving situations.
Prateek Waghre, an independent tech policy researcher, attributed this to the “non-deterministic” nature of AI models: “The same question won’t always give you the same answer,” he said. “It depends on a setting called ‘temperature’.”
Large language models (LLMs) work by predicting the next word from a range of probabilities. The “temperature” setting determines how variable the responses the AI generates can be. A lower temperature means the most probable next word is almost always picked, producing less variable and more predictable responses. A higher temperature allows the LLM to give more unpredictable, creative responses.
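For the technically inclined, a minimal Python sketch shows how temperature shapes this choice. The candidate words and their scores below are invented for illustration; real models choose among tens of thousands of tokens:

```python
import numpy as np

def sample_next_word(logits, temperature=1.0, rng=None):
    """Pick the next word by sampling from temperature-scaled probabilities.

    A lower temperature sharpens the distribution (more predictable output);
    a higher temperature flattens it (more varied, 'creative' output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature  # temperature scaling
    probs = np.exp(scaled - scaled.max())                   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical scores for four candidate next words
words = ["ceasefire", "agreement", "truce", "deal"]
logits = [3.0, 2.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    picks = [words[sample_next_word(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At a low temperature, the sketch almost always picks the highest-scoring word; at a higher temperature, the alternatives surface more often – which is why the same question can yield different answers.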

According to Waghre, what makes the use of AI bots for fact-checking claims more worrisome is that “they aren’t objectively bad.” “They aren’t outright terrible. On some occasions, they do give you accurate responses, which means that people tend to have a higher degree of belief in their capability than is warranted,” he said.
What makes AI chatbots unreliable?
1. Hallucinations
The term “hallucination” describes situations where AI chatbots generate false or fabricated information and present it as fact.
Alex Mahadevan, director of MediaWise, said AI chatbots like Grok and Ask Perplexity “hallucinate facts, reflect online biases and tend to agree with whatever the user seems to want,” and hence, “aren’t reliable fact-checkers.”
“They don’t vet sources or apply any editorial standard,” Mahadevan said. MediaWise is a digital literacy programme of Poynter, a non-profit journalism school based in the US, which helps people spot misinformation online.
xAI admits to this in the “terms of service” available on its website. “Artificial intelligence is rapidly evolving and is probabilistic in nature; it may therefore sometimes result in Output that contains “hallucinations,” may be offensive, may not accurately reflect real people, places or facts, may be objectionable, inappropriate, or otherwise not be suitable for your intended purpose,” the company states.
Perplexity’s terms of service, too, carry a similar disclaimer: “You acknowledge that the Services may generate Output containing incorrect, biased, or incomplete information.”
2. Bias and lack of transparency
Mahadevan flagged another risk with AI chatbots – inherent bias.
“They’re built and beholden to whoever spent the money to create them. For example, just yesterday (May 14), X’s Grok was caught spreading misleading statements about ‘white genocide’, which many attribute to Elon Musk’s views on the racist falsehood,” he wrote in an email response to indianexpress.com.
The “white genocide” claims gained traction after US President Donald Trump granted asylum to 54 white South Africans earlier this year, citing genocide and violence against white farmers. The South African government has strongly denied these allegations.
Waghre said that users assume AI is objective because it is not human, and that is misleading. “We don’t know to what extent or what sources of data have been used for training them,” he said.
Both xAI and Perplexity say their tools rely on real-time web searches; Grok also taps into public posts on X. But it is unclear how they assess credibility or filter misinformation. Indianexpress.com reached out to both firms to understand this better, but did not receive any response at the time of publishing.
3. Scale and pace
Perhaps the most concerning issue is the scale at which these chatbots operate.
With Grok embedded directly into X, AI-generated errors can be amplified instantly to millions. “We’re not using these tools to assist professional fact-checkers,” Waghre said. “They’re operating at population scale – so their errors are too.”
Waghre also said that these AI chatbots are likely to learn and improve from mistakes, but “you have situations where they’re putting out incorrect answers, and those are then getting used as further evidence for things.”
What AI companies should change
Mahadevan questioned the “design choice” that AI companies employ. “These bots are built to sound confident even when they’re wrong. Users feel they’re talking to an all-knowing assistant. That illusion is dangerous,” he said.
He recommended stronger accuracy safeguards – chatbots should refuse to answer if they cannot cite credible sources, or flag “low-quality and speculative responses”.
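In practice, such a safeguard could sit as a thin layer between the user and the model. The sketch below is purely illustrative: ask_model, guarded_answer and the domain allowlist are hypothetical names, and a real system would need far more nuanced source evaluation:

```python
# Hypothetical allowlist of domains treated as credible for this sketch
TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "pib.gov.in"}

def guarded_answer(question, ask_model):
    """Refuse to answer unless the model cites at least one credible source.

    `ask_model` is a hypothetical callable that returns a tuple of
    (answer_text, list_of_source_urls).
    """
    answer, sources = ask_model(question)
    credible = [url for url in sources
                if any(domain in url for domain in TRUSTED_DOMAINS)]
    if not credible:
        return "I can't verify this claim against credible sources."
    return f"{answer}\n\nSources: {', '.join(credible)}"
```

The point is not the specific allowlist, but that the behaviour Mahadevan describes – answering only when a verifiable source exists – can be enforced at the application layer rather than left to the model.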
Vibhav Mithal, a lawyer specialising in AI and intellectual property, has a different take. He insisted there is no need to write off AI chatbots entirely, since their reliability as fact-checkers depends largely on context and, more importantly, on the quality of the data they have been trained on. But responsibility, in his opinion, lies squarely with the companies building these tools. “AI firms must identify the risks of their products and seek proper advice to fix them,” Mithal said.
What can users do?
Mithal stressed that this is not about AI versus human fact-checkers. “AI can assist human efforts, it’s not an either/or situation,” he said. Concurring, Mahadevan listed two simple steps users can take to protect themselves:
Always double-check: If something sounds surprising, political or too good to be true, verify it through other sources.
Ask for sources: If the chatbot cannot point to a credible source or just name-drops vague websites, be skeptical.
According to Mahadevan, users should treat AI chatbots like overconfident interns: useful, fast, but not always right. “Use them to gather context, not confirm truth. Treat their answers as leads, not conclusions,” he said.

