AI Chatbots Want Your Health Records. Tread Carefully.

March 19, 2026

Written by Brian X. Chen and Teddy Rosenbluth

For the past couple of years, the tech industry has convinced people that their artificially intelligent chatbots get better the more data you feed them. The next step is to get users to share their most sensitive information: their health records.

Microsoft this week unveiled a tool that will let users share records from multiple health providers with its chatbot, Copilot. The records can then be combined with data gathered by a user's fitness device, such as an Apple Watch. After analyzing all the information, the chatbot will come up with a high-level overview of health issues for the user.

Microsoft's announcement echoed moves by Amazon, OpenAI and Anthropic, which began testing similar tools (Health AI, ChatGPT Health and Claude for Healthcare) this year. By collecting health data and offering direct recommendations, the companies, whose AI chatbots have made headlines for contributing to some users' psychosis, isolation and unhealthy habits, are treading into risky territory.

(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit's claims.)

In interviews, physicians said there could be upsides to chatbot-assisted health care, like helping people gain insight into their health at a time when health care is becoming increasingly unaffordable. But sharing health records with tech companies creates a host of privacy risks. Like past technologies that made people overly anxious about their health, the chatbots could also lead to unnecessary trips to the doctor.

Here's what you need to know.

How would this work?

On Microsoft's Copilot website and mobile app, users will be able to click on a "Health" tab and create a profile by answering questions about their age and sex. From there, users can choose to share their health records and data from devices like an Apple Watch, Fitbit or an Oura sleep tracker.

Users can then prod the chatbot with questions or symptoms by saying things like, "I haven't been sleeping well." The chatbot then analyzes the health records and wearable data to make observations, such as sleep trends since a recent hospital visit.

The chatbot can also come up with a "bottom line" summary of health issues to pay attention to, such as sleep deprivation, diabetes and limited physical activity.

Users will initially be able to try Copilot Health free when it's released this year. Microsoft said it planned to charge a subscription fee to use the tool, but it didn't share a price.

What are the potential benefits?

Medical records have been chaotic and cumbersome for patients to navigate because the information can be scattered across various databases used by different health providers. (A primary care physician might struggle to offer recommendations on a foot injury, for example, if the patient's podiatrist used a different record system.) Microsoft's AI could help connect the dots from many different health providers, along with a user's fitness device data.

Microsoft said a doctor would probably need hours to manually review all of a person's medical records and fitness device data to come up with a high-level overview of their health. It said Copilot Health could do this in seconds.

"This is about giving consumers and patients incredible insight and intelligence over their own record and helping them navigate very complex challenges and a very complex system that we've all created for them," said Dr. Dominic King, Microsoft's vice president of health in its AI division.

As health care costs have risen, many Americans are losing coverage. An AI chatbot could be a low-cost way to help people pay closer attention to their health and research information about symptoms, similar to a web search on a site like WebMD.

What are the risks?

In recent years, cyberattacks have breached hospitals and health care systems. Putting health records in a central place makes that information a far more tempting target for criminals, said Matthew Green, an associate professor of computer science at Johns Hopkins University. A victim's health data could expose conditions that he or she would want to keep private.

"There's a pot of gold of high-value data that's in one location that people can get," Green said.

Similarly, law enforcement agencies that want a user's health records could go to Microsoft instead of multiple providers, said Mario Trujillo, a data privacy lawyer for the Electronic Frontier Foundation, a digital rights nonprofit. A woman seeking reproductive health care in a state with an abortion ban could be at greater risk, he added.

Also, the Health Insurance Portability and Accountability Act, or HIPAA, which strictly requires traditional health care providers to protect patient privacy, doesn't apply to tech companies offering chatbots. So those companies, which aren't health care providers even when they offer similar services, could do what they wish with health records, such as use the information to train their AI or show ads related to a user's health conditions.

Microsoft said people's health data would be encrypted and wouldn't be used to improve its AI or serve targeted ads. It also said it gave law enforcement agencies access to customer data only in response to valid legal requests.

Is health advice from a chatbot trustworthy?

Microsoft says Copilot Health is meant to help people understand their health and prepare for appointments, not replace a doctor's expertise. Its news release included a disclaimer that the chatbot "is not intended to diagnose, treat or prevent diseases."

Dr. Girish Nadkarni, chief AI officer for the Mount Sinai Health System, said it was naive to think that users wouldn't ask a chatbot that had access to all of their medical records for diagnoses and advice.

"Sure, you can include a disclaimer not to use it that way," he said. "But people are going to use it that way. That's just human nature."

So far, research suggests that chatbots are not yet ready for that responsibility.

A study published last month analyzed several chatbots, including those from OpenAI and Meta, and found that they were no better than a web search at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or drastically changing its advice depending on slight changes in the wording of the questions.

These weaknesses have already led to high-profile errors. For instance, a 60-year-old man was held for weeks in a psychiatric unit after ChatGPT suggested cutting down on salt by consuming sodium bromide instead, causing paranoia and hallucinations.

OpenAI said the current version of ChatGPT was significantly better at answering health questions than the model, since phased out, that was tested in the study. Meta didn't respond to a request for comment.

Some new research suggests that even models that are tailored for users' health questions, like ChatGPT Health, pose risks. When Nadkarni and his colleagues entered details from hypothetical medical cases into the model, which was released in January, it missed high-risk emergencies, in one case failing to recommend the emergency room for someone with impending respiratory failure.

Another risk is that a chatbot's basic summaries of health problems could create anxiety, said Dr. Lisa Piercey, a former health commissioner for Tennessee. A sinus headache this time of year is likely to be a symptom of allergies, but a chatbot could raise the possibility of a more serious condition that spurs an unnecessary visit to the doctor.

"It very well could tell you you've got a brain tumor," she said. "That causes a ton of anxiety."

Copilot Health has also not yet been studied by independent researchers. King of Microsoft said the chatbot was designed to avoid giving medical advice even in the face of pointed questions and instead offer "guidance and support." Rather than tell users that they have a specific condition, it would provide a list of potential diagnoses. Or instead of recommending a medication, it would provide some questions that users can ask their doctors.

The company also said it was releasing Copilot Health gradually, testing new features with a small set of users every step of the way, to ensure that the technology remained safe and reliable.

This article originally appeared in The New York Times.
