Humility is the key to ethical AI: Vilas Dhar, President, Patrick J. McGovern Foundation | Technology News

October 18, 2025
Dhar has framed global discussions on AI governance, responsible innovation, and the social impact of technology. (Express Photo)

Vilas Dhar is President of the Patrick J. McGovern Foundation, a US-based $1.5 billion philanthropy advancing artificial intelligence for public purposes.

The Foundation is among the largest funders of AI for public purposes, having invested more than $500 million in various initiatives.

Dhar has shaped global discussions on AI governance, responsible innovation, and the social impact of technology. He has served as a member of the UN Secretary-General's High-Level Advisory Body on AI, convened to shape an international framework for the governance of artificial intelligence and to align its development with human rights, sustainable development, and global security.

He also serves as the US Government's nominated expert to the Global Partnership on AI and advises leading academic centres, including the Stanford Institute for Human-Centered AI and MIT Solve.

Vilas Dhar holds a Master's in Public Administration from the Harvard Kennedy School and dual bachelor's degrees in Biomedical Engineering and Computer Science from the University of Illinois Urbana-Champaign.

Vilas Dhar spoke to indianexpress.com about ethical AI, the opportunities and challenges it presents, the strategic grant-making of his organisation, and the importance of open dialogue in building responsible AI. Edited excerpts:

Venkatesh Kannaiah: Give us a brief overview of the Patrick J. McGovern Foundation and its focus.

Vilas Dhar: The McGovern Foundation was born out of the belief that technology must advance human well-being, not just profits. From its inception, it has focused on seeding, scaling, and sustaining AI and data innovations that serve a public purpose.

Our thinking is that AI will change everything, but we don't need yet another AI company. We need a place to build social resilience to what will happen because of AI. So we want an institution that protects society while tech innovation runs rampant.

Our foundation gives about $75 million a year in grants to around 150 initiatives. Over time, we have deployed over $500 million across dozens of countries to support initiatives tackling challenges in health, climate, digital rights, education, governance, and more.

We do three things. We fund and support nonprofits to use AI towards specific public use cases. We are also trying to build a platform for policy engagement so that we can actually support governments and international organisations on how to build this. And finally, we work on building institutions around AI capacity. The idea is not just how we build technology capacity, but also policy and technology support.

Venkatesh Kannaiah: Your focus on AI, data science, and social impact: the why and the how.

Vilas Dhar: We are sort of frontline AI builders. And we do two things. We build products, and we provide consulting services to any nonprofit in the world that wants to use AI to further its mission.

We are known in the US as a disruptive philanthropy because we do grant making, product development, direct deployment, consulting, and policy advisory. The idea is that the times we live in require a new kind of institution to solve problems.

I see data and AI as part of society's backbone, as foundational as roads or electricity. If that infrastructure is biased, opaque, or concentrated, it degrades all we build upon it. So our work spans three strands: direct deployment, capacity building (helping nonprofits, governments, and researchers absorb these tools), and governance (norms, auditability, open tools). All of it is driven by one question: how do we make these systems accountable to people, not just institutions?

Venkatesh Kannaiah: Tell us about interesting AI initiatives that you have funded.

Vilas Dhar: We invest in initiatives that show how artificial intelligence can serve people, not just institutions.

We focus a lot on AI safety. And so we have a partnership with OpenAI, Anthropic, and other leading companies to ensure that we are building safe and responsible frameworks for AI models.

We are also building the largest global data coalition around climate change and how it affects farmers, villagers, and other communities.

Among the various initiatives is Climate TRACE, which uses satellite imagery and machine learning to produce the world's first open, verifiable inventory of greenhouse gas emissions. It gives policymakers, journalists, and citizens the data they need to hold governments and industries accountable.

Audere builds smartphone tools that use computer vision to interpret rapid diagnostic tests for malaria, HIV, and COVID-19, improving accuracy and access in communities where laboratory infrastructure is limited.

WattTime applies AI to measure and reduce real-time carbon emissions from the power grid, helping cities and companies source cleaner energy every hour.
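The idea behind hourly emissions signals can be sketched in a few lines. This is an illustrative toy only, not WattTime's actual model or API: given a hypothetical hourly forecast of grid carbon intensity (gCO2/kWh), a flexible load is scheduled into the cleanest hour.

```python
# Illustrative sketch: the forecast values below are invented, and WattTime's
# real models and API are not represented here.

def cleanest_hour(forecast: dict) -> int:
    """Return the hour with the lowest forecast carbon intensity (gCO2/kWh)."""
    return min(forecast, key=forecast.get)

# Hypothetical day: intensity dips at midday as solar output peaks.
forecast = {h: 520.0 for h in range(24)}
forecast.update({11: 310.0, 12: 290.0, 13: 305.0})

best = cleanest_hour(forecast)
print(f"Schedule flexible load at hour {best}")  # hour 12 in this toy forecast
```

The design point is simply that a per-hour emissions signal turns "source cleaner energy" from an aspiration into a scheduling input.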


The Online News Association's AI Initiative equips journalists with the skills and frameworks to use AI responsibly, ensuring that new tools strengthen public trust rather than erode it.

And Grant Guardian, an AI platform created within the Foundation, automates nonprofit financial analysis so philanthropy can move resources faster and more equitably.

Venkatesh Kannaiah: Tell us about data science or neuroscience initiatives you have funded.

Vilas Dhar: The future of data for public good depends on what we choose to make open. Many of the world's most valuable datasets are locked behind commercial walls, shaped by incentives that serve shareholders rather than societies.

If we want AI to strengthen democracy and improve lives, we must invest in global, noncommercial data assets that anyone can use to build solutions in the public interest. Only philanthropy and governments have both the independence and the accountability to create them. That belief guides much of our work. One example is Climate TRACE, a global coalition we have supported that uses satellite data and AI to track greenhouse gas emissions across every major sector in near real time. It is the world's first open, verifiable inventory of emissions.

Venkatesh Kannaiah: Tell us about three things we should be worried about with AI.

Vilas Dhar: The concentration of power: when a few platforms decide what data, predictions, or views we can access. The "governance gap": weak rules around consent, quality, audits, and recourse. And the execution gap: many policies are noble on paper, but real institutions struggle to operate safe and responsible systems.

Venkatesh Kannaiah: Do you see different geographic models of AI developing on their own?

Vilas Dhar: We have an American for-profit model of AI. We have a Chinese government-run model of AI. And in many conversations, these are the only two models that are put forward. I am very invested in the idea that the Indian model of an open stack around AI offers an alternative, a compelling case for how we build AI.

We are investing heavily in open-source AI. How do we invest in talent, data, and compute access? If philanthropy only gives grant funding, it will never fix this problem.

Instead, I think philanthropy and government need to come together to build public capacity around AI. That means investing in building compute access, in creating public data sets, and in making it possible for people to have jobs in AI that are not with a tech company. This could be transformative.

Venkatesh Kannaiah: On authoritarian-regime-led LLMs and 'democratic' LLMs.

Vilas Dhar: I don't think there is a distinction between authoritarian and democratic LLMs. Instead, what we have is LLMs and AI tools that are being used to solidify the existing power structure, and a space to build new forms of AI that actually break the inequity of our society.

The first is heavily capitalised, it is funded, and it is being built with government power. The second is undercapitalised, under-resourced, and underdeveloped. That is where we should be focusing. Now, you might use a different term.

There is Western AI and maybe Chinese AI, or other ways that you might want to think about it. I think the big challenge is that there is no people's AI yet. And I think if we could invest in building a mechanism for democratic popular participation in AI, you could actually have a third model.

Venkatesh Kannaiah: Your views and the challenges you see on ethical AI, based on learnings from your grants.

Vilas Dhar: Ethical AI begins with humility. We are building systems that learn from us, yet we still struggle to define what we value, what we protect, and what we refuse to automate. The real challenge is not in writing ethical codes but in creating the social institutions that can enforce them. Technology moves faster than governance, so philanthropy and civil society must build the capacity to test, question, and correct these systems in real time.

Across our grants, I have seen that ethics fails when it is abstract. It succeeds when it is practised: when developers document their choices, when data scientists interrogate bias, and when public institutions demand transparency.

Venkatesh Kannaiah: Can you explain the ethical challenges in detail?

Vilas Dhar: We see it all the time in how AI is intersecting with social media. We see AI tools used to drive attention-commanding algorithms being considered responsible because of certain ethical frameworks at the company level.

But when they are actually used in a community, they lead to substantial impacts on mental health, around isolation and political polarisation.

What needs to happen is to transform the ethics framework from principles into specific engineering guidelines on how we build these tools to centre community consequences. So, for example, in social media, companies might need to restrict access based on age, based on social and emotional maturity, or based on the context in which people are using the tools.
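The move from principles to engineering guidelines can be made concrete. Below is a minimal sketch, with entirely hypothetical rule names and thresholds (not drawn from any real platform's policy), of how an access restriction like the age- and context-based ones described above could be encoded as an explicit, testable check rather than an abstract value statement.

```python
# Hypothetical sketch: field names, thresholds, and rules are invented
# for illustration, not taken from any real product.
from dataclasses import dataclass

@dataclass
class UserContext:
    age: int
    verified_guardian_consent: bool
    session_minutes_today: int

def may_access_feed(user: UserContext, min_age: int = 16,
                    daily_limit_minutes: int = 120) -> bool:
    """Encode access rules as explicit conditions that can be audited and tested."""
    # Age gate: younger users need verified guardian consent.
    if user.age < min_age and not user.verified_guardian_consent:
        return False
    # Context gate: cap daily usage regardless of age.
    if user.session_minutes_today >= daily_limit_minutes:
        return False
    return True

teen = UserContext(age=15, verified_guardian_consent=False, session_minutes_today=10)
print(may_access_feed(teen))  # False: under the age gate without consent
```

Once a rule lives in code like this, it can be versioned, audited, and challenged, which is what makes it a guideline rather than a principle.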

And for that to happen, the companies must take on more moral accountability for the tools that they create. We help that happen in two ways. We can monitor and advise companies to make sure that they are actually building moral and ethical principles into the products they create.

And we can organise consumer and people's movements to push back on the companies when they build and deploy a tool that actually creates harm.

Venkatesh Kannaiah: Tell us about India-specific grants or initiatives of yours.

Vilas Dhar: India represents one of the most dynamic laboratories in the world for understanding how technology can serve humanity. The country's scale, diversity, and tradition of public innovation make it a testing ground for what responsible AI can achieve.

Our work in India supports organisations that pair deep local expertise with modern data science. There is Khushi Baby, an organisation working in Rajasthan in partnership with the government. With expertise in last-mile neonatal and maternal health care, they have built an AI-enabled tool that lets them do what we call population-level health. In doing so, they were able to identify numerous villages with nutritional deficiencies. And using AI, they were able to identify what those deficiencies were, work with the government to provide supplements, and bring about better health outcomes.

Another is the work that we do with rice farmers to build AI-based tooling that gives them better information about when to plant, when to harvest, and when to sell in the market.

In India, we believe the space of publicly created AI tools is going to be adopted faster than anywhere else in the world.

Venkatesh Kannaiah: How should India take up the AI opportunity? Or is it a challenge?

Vilas Dhar: India has a rare window: to treat AI as civic infrastructure. If India invests now in open data platforms, interoperable standards, institutional capacity, and rights-based guardrails, we can lead, not follow. But the risk is slipping into opaque, proprietary systems that deepen inequality.

Venkatesh Kannaiah: Tell us about two out-of-the-box problems that AI and data or neurosciences could solve in the future.

Vilas Dhar: Imagine a system for community health workers: an AI that listens, triages, diagnoses, and tracks, all in local languages, offline-first, bridging traditional and modern medicine.

Or a climate planner that couples neuroscience, data, and behavioural inference, predicting how city systems (energy, transport, water) react under stress and steering them in real time, optimising for equity, resilience, and wellbeing.

Venkatesh Kannaiah: Can we call you a tech optimist?

Vilas Dhar: I think we are already in the middle of the greatest transformation of human society. Because what is happening is not just one more iteration. It is not like the internet, which moved us from going to a store to going online. This is about a fundamental change in what humanity is capable of. I don't say this as a tech optimist. I am optimistic about the fact that if we have better tools, we can do more. And I think we are in the middle of that change already. The question is, how are we going to use it.

Venkatesh Kannaiah: Your view on AI and politics.

Vilas Dhar: AI is going to change people's access to political information in a way that gives them far more power. I think we will see AI influencing political participation in the next five years in a really positive and meaningful way.


