Amid a number of wrongful death lawsuits, OpenAI is moving to fill a critical safety position that has reportedly been vacant for several months now.
Last Saturday, the ChatGPT maker said it is looking to hire a new 'head of preparedness' to guide the AI startup's safety strategy by helping it anticipate the potential harms of its models and the ways they can be abused, as per a job listing posted on X by OpenAI CEO Sam Altman.
The head of preparedness will earn $555,000 per year, along with equity in the company. The new hire "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm," the company said.
OpenAI's hunt to fill the complex role comes at a time when the AI startup has been hit with numerous accusations over ChatGPT's impact on users' mental health, including a few wrongful death lawsuits. Its own study revealed that more than one million ChatGPT users (0.07 per cent of weekly active users) exhibited signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.
Acknowledging that the "potential impact of models on mental health was something we saw a preview of in 2025," Altman said that the head of preparedness "is a critical role at an important time."
"If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities, and even gain confidence in the safety of running systems that can self-improve, please consider applying," he wrote.
"This will be a stressful job, and you'll jump into the deep end pretty much immediately," he added.
OpenAI's safety teams have experienced significant employee churn over the past few years. In July 2024, the company reassigned then-head of preparedness Aleksander Madry, and said that the role would be taken over by two AI safety researchers, Joaquin Quinonero Candela and Lilian Weng.
However, Weng left OpenAI a few months later. Earlier this year, Candela announced he was moving away from the preparedness team to lead recruiting at OpenAI.
In November 2025, Andrea Vallone, the head of a safety research team known as model policy, said she was leaving OpenAI at the end of the year. The AI safety research lead reportedly helped shape ChatGPT's responses to users experiencing mental health crises.
© IE Online Media Services Pvt Ltd

