JHB News

‘Grok, verify’: Why AI chatbots shouldn’t be considered reliable fact-checkers | Technology News

May 18, 2025

At the height of the latest India-Pakistan conflict, a parallel battle unfolded online – a battle of narratives. While independent fact-checkers and the government-run Press Information Bureau scrambled to debunk fake news, unsubstantiated claims, and AI-generated misinformation, many users turned to AI chatbots like Grok and Ask Perplexity to verify claims circulating on X.

Right here is an instance: On Could 10, India and Pakistan agreed to stop all army exercise — on land, air and sea — at 5 PM. Whereas responding to some consumer queries the subsequent day, Grok referred to as it a “US-brokered ceasefire”. Nevertheless, on Could 10, when a consumer requested about Donald Trump’s position in mediating the ceasefire, Grok added some lacking context, saying, “Indian officers assert the ceasefire was negotiated straight between the 2 international locations’ army heads. Pakistan acknowledges US efforts alongside others,” presenting a extra rounded model of the occasions.

Such inconsistencies reveal a deeper problem with AI responses. Experts warned that though AI chatbots can provide accurate information, they are far from reliable “fact-checkers”. These chatbots can give real-time responses, but more often than not, they may add to the chaos, especially in evolving situations.


Prateek Waghre, an independent tech policy researcher, attributed this to the “non-deterministic” nature of AI models: “The same question won’t always give you the same answer,” he said. “It depends on a setting called ‘temperature’.”

Large language models (LLMs) work by predicting the next word from a range of probabilities. The “temperature” determines the variability of the responses the AI can generate. A lower temperature means the most probable next word is picked, producing less variable and more predictable responses. A higher temperature allows LLMs to give unpredictable, creative responses.
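To see what temperature does mechanically, here is a minimal, self-contained Python sketch of temperature-scaled sampling over toy next-word scores. The scores and the sampling loop are invented for illustration; real LLMs work over tens of thousands of tokens, but the arithmetic is the same:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw model scores (logits).

    Lower temperature sharpens the distribution (the top word wins
    almost every time); higher temperature flattens it (other words
    are picked far more often).
    """
    scaled = [score / temperature for score in logits]
    top = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax over scaled scores
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy scores for three candidate next words
logits = [2.0, 1.0, 0.1]

# Near-zero temperature: the top-scoring word is picked almost every time
cold = [sample_next_token(logits, temperature=0.05) for _ in range(1000)]

# High temperature: the lower-scoring words appear much more often
hot = [sample_next_token(logits, temperature=5.0) for _ in range(1000)]
```

Running the cold loop yields the first word nearly 1,000 times out of 1,000; the hot loop spreads the picks across all three candidates — which is why the same question can produce different answers on different days.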


According to Waghre, what makes the use of AI bots for fact-checking claims more worrisome is that “they aren’t objectively bad.” “They aren’t outright terrible. On some occasions, they do give you accurate responses, which means that people tend to place a higher degree of trust in their capability than is warranted,” he said.

What makes AI chatbots unreliable?

1. Hallucinations

The term “hallucination” describes situations in which AI chatbots generate false or fabricated information and present it as fact.


Alex Mahadevan, director of MediaWise, said AI chatbots like Grok and Ask Perplexity “hallucinate facts, mirror online biases and tend to agree with whatever the user seems to want,” and hence “aren’t reliable fact-checkers.”

“They don’t vet sources or apply any editorial standard,” Mahadevan said. MediaWise is a digital literacy programme of Poynter, a non-profit journalism school based in the US, which helps people spot misinformation online.

xAI admits to this in the terms of service available on its website: “Artificial intelligence is rapidly evolving and is probabilistic in nature; it may therefore sometimes result in Output that contains ‘hallucinations,’ may be offensive, may not accurately reflect real people, places or facts, may be objectionable, inappropriate, or otherwise not be suitable for your intended purpose,” the company states.

Perplexity’s terms of service carry a similar disclaimer: “You acknowledge that the Services may generate Output containing incorrect, biased, or incomplete information.”

2. Bias and lack of transparency

Mahadevan flagged another risk with AI chatbots — inherent bias.


“They’re built by, and beholden to, whoever spent the money to create them. For example, just yesterday (May 14), X’s Grok was caught spreading misleading statements about ‘white genocide’, which many attribute to Elon Musk’s views on the racist falsehood,” he wrote in an email response to indianexpress.com.

The “white genocide” claims gained traction after US President Donald Trump granted asylum to 54 white South Africans earlier this year, citing genocide and violence against white farmers. The South African government has strongly denied these allegations.

Waghre said users assume AI is objective because it is not human, and that assumption is misleading. “We don’t know to what extent, or what sources of data, have been used for training them,” he said.

Both xAI and Perplexity say their tools rely on real-time web searches; Grok also taps into public posts on X. But it is unclear how they assess credibility or filter misinformation. Indianexpress.com reached out to both firms to understand this better, but did not receive any response at the time of publishing.

3. Scale and speed


Perhaps the most concerning issue is the scale at which these chatbots operate.

With Grok embedded directly into X, AI-generated errors can be amplified instantly to millions. “We’re not using these tools to assist professional fact-checkers,” Waghre said. “They’re operating at population scale – so their errors are too.”

Waghre also said that these AI chatbots are likely to learn and improve from mistakes, but “you have situations where they’re putting out incorrect answers, and those are then being used as further evidence for things.”

What AI firms should change

Mahadevan questioned the “design choice” AI firms employ. “These bots are built to sound confident even when they’re wrong. Users feel they’re talking to an all-knowing assistant. That illusion is dangerous,” he said.


He recommended stronger accuracy safeguards – chatbots should refuse to answer if they can’t cite credible sources, or flag “low-quality and speculative responses”.
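As a rough sketch of what such a safeguard might look like, here is a hypothetical post-processing check in Python. The function name, the source format, and the credibility threshold are all invented for illustration — no chatbot vendor exposes this exact mechanism:

```python
# Hypothetical guardrail: refuse to answer when no sources are cited,
# and flag the response when no cited source clears a (made-up)
# credibility threshold.
def vet_answer(answer_text, cited_sources, min_credibility=0.7):
    if not cited_sources:
        return "Refused: no credible source found for this claim."
    credible = [s for s in cited_sources
                if s.get("credibility", 0.0) >= min_credibility]
    if not credible:
        return "[Low-quality / speculative] " + answer_text
    return answer_text

claim = "The ceasefire took effect at 5 PM."
print(vet_answer(claim, []))                                   # refuses outright
print(vet_answer(claim, [{"url": "blog.example", "credibility": 0.2}]))
print(vet_answer(claim, [{"url": "pib.gov.in", "credibility": 0.9}]))
```

The hard part in practice is not this plumbing but the credibility score itself — deciding which sources count as trustworthy is exactly the editorial judgment Mahadevan says these bots currently lack.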

Vibhav Mithal, a lawyer specialising in AI and intellectual property, has a different take. He insisted there is no need to write off AI chatbots entirely, since their reliability as fact-checkers depends largely on context and, more importantly, on the quality of the data they have been trained on. But responsibility, in his opinion, lies squarely with the companies building these tools. “AI firms must identify the risks in their products and seek proper advice to fix them,” Mithal said.

What can users do?

Mithal stressed that this is not about AI versus human fact-checkers. “AI can assist human efforts; it’s not an either/or situation,” he said. Concurring, Mahadevan listed two simple steps users can take to protect themselves:

Always double-check: If something sounds surprising, political or too good to be true, verify it through other sources.


Ask for sources: If the chatbot can’t point to a credible source or just name-drops vague websites, be skeptical.

According to Mahadevan, users should treat AI chatbots like overconfident interns: useful, fast, but not always right. “Use them to gather context, not confirm truth. Treat their answers as leads, not conclusions,” he said.



© 2026 Jhb.news - All rights reserved.