Amid a long-running tussle with tech platforms that has only intensified in the generative AI era, Indian news publishers are pushing back against the use of journalistic content as free raw material to train AI systems.
On the opening day of the AI Impact Summit 2026 in New Delhi on Monday (February 16), a panel featuring leaders from India's media and publishing ecosystem made clear that journalistic content used to train AI models should be paid for. They also sought to distinguish news content from general web data, stating that professionally reported content is crucial to improving model accuracy and preventing hallucinations.
“Journalistic content is not free-floating content on the internet. It is intellectual property. It gets created with investment, infrastructure, and talent. That data should be contracted. It cannot be surrendered,” LV Navaneeth, the CEO of The Hindu Group, said. Other speakers included Kalli Purie, Executive Editor-in-Chief, India Today Group; Mohit Jain, COO, Bennett, Coleman & Co Ltd; Pawan Agarwal, Deputy Managing Director, Dainik Bhaskar Group; Robert Whitehead, International News Media Association (INMA) lead; and Tanmay Maheshwari, Managing Director, Amar Ujala Publications, in a panel discussion moderated by Ashish Pherwani of Ernst & Young (EY).
The call for AI companies to fairly compensate publishers comes amid growing pushback from news publishers in several jurisdictions, including the United States and India, over concerns about copyrighted material, such as news reports, being used by companies like OpenAI to train their foundational models without permission or payment.
This has led to court cases, including in India, where publishers — members of the Digital News Publishers Association (DNPA), including The Indian Express, among others — have mounted a legal challenge against OpenAI over the “unlawful utilisation of copyrighted material”.
During the panel discussion organised by the DNPA, speakers also examined the shifting value of news in the AI era, how publishers are deploying AI tools within newsrooms, and whether the technology can help unlock new revenue streams rather than erode existing ones.
The impact of AI on publishers
Rather than diminishing the value of journalism, The Times Group’s Mohit Jain argued that AI could raise the premium on credibility and accountability.
“India is a vibrant country that is diverse and complex. And in such an environment, editorial discretion, verification, and institutional memory is not optional, it is foundational,” he said. “The press is not just something which produces information; it curates trust, provides context, and accepts the moral and legal responsibility for what it publishes. That layer of accountability is the differentiator, and when AI begins to commoditise information, trust will become scarce, and that scarcity will create value,” Jain added.
However, the INMA’s Whitehead struck a more sombre note, warning that AI chatbots were already eroding referral traffic from search engines to publishers and threatening a core pillar of their business models.
“How on earth are we funding journalism? AI is already destroying the value of the companies here on the stage,” he said, adding that referral traffic to publishers from search engines and social media networks has seen “huge falls” in the past 12 months following the broader rollout of AI Mode and AI Overviews in Google Search.
Common use cases of AI in newsrooms
On the use of AI in newsrooms, publishers dismissed the idea that it is a replacement for journalists and pointed to a ‘human moat’ as a structural necessity to sustain public discourse. India Today’s Kalli Purie said that the news organisation has adopted an ‘AI sandwich’ guiding principle, “where human intent starts the AI exercise. You have AI in between to help you with something, and then you have the final decision taken by a human.”
Drawing parallels to the concomitant benefits of nuclear power, Navaneeth said The Hindu uses AI to complement a human’s work and help readers go deeper into an article. On using AI to increase revenues, the media executive said that AI can be used to increase engagement and retention time. He further revealed that the news daily has developed an in-house AI model that is reportedly less prone to hallucinate as it is trained on The Hindu’s own archival material.
However, Amar Ujala’s Tanmay Maheshwari highlighted the technical limitations of AI in multilingual news production, noting that the accuracy of most Indic-language AI models is less than 55 per cent.
Accountability for wrongful, AI-hallucinated content
When asked where accountability should lie for wrongful content, publishers argued that AI companies, social media platforms, and even independent content creators should be held to the same legal and ethical standards as legacy news brands.
“If legacy media is responsible for the content we put out, our editor is held to very high standards. Platforms should be held to the same high standard,” said Navaneeth. Purie also called for an end to “the asymmetry of reward and punishment between legacy media and social media.” “Legacy media has to follow certain guidelines. We see those same guidelines flouted on social media on an everyday basis, and we tolerate that,” she added.
Outcomes from AI Summit: What publishers want
Putting forward a nine-point agenda, Purie called for transparency from companies that scrape training data from publishers’ websites to build AI models. Several speakers also supported clearer structuring and labelling of content to ensure traceability, so that AI-generated content can be more reliably attributed to its original sources.
They also called for the recognition of journalism as a public good, and for tech companies to improve their algorithms to reward stories that deliver social impact rather than mere virality. “Put a real value to verified content provided by proper institutions, and penalise AI hallucinations severely,” Purie said.
Meanwhile, Whitehead suggested that governments should pass a law to ensure paid training of AI models on journalistic content.
“There are billions of dollars being paid for professional content, but not to media companies. Companies in San Francisco that are buying [data] on the black market — that money needs to flow through to the media companies creating that content, and that can only happen when there is a law that requires the tech platforms to participate in a fair digital market,” he said, citing Norway and South Africa as two countries exploring similar legislation.

