This picture taken on February 2, 2024 shows Lu Yu, head of Product Management and Operations of Wantalk, an artificial intelligence chatbot created by Chinese tech firm Baidu, displaying a virtual girlfriend profile on her phone, at the Baidu headquarters in Beijing.
Jade Gao | AFP | Getty Images
BEIJING — China plans to restrict artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules released Saturday.
The proposed regulations from the Cyberspace Administration of China target what it calls “human-like interactive AI services,” according to a CNBC translation of the Chinese-language document.
The measures, once finalized, will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends Jan. 25.
Beijing’s planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic traits, said Winston Ma, adjunct professor at NYU School of Law. The latest proposals come as Chinese companies have rapidly developed AI companions and virtual celebrities.
Compared with China’s generative AI regulation in 2023, Ma said this version “highlights a leap from content safety to emotional safety.”
The draft rules propose that:
- AI chatbots cannot generate content that encourages suicide or self-harm, or engage in verbal violence or emotional manipulation that harms users’ mental health.
- If a user explicitly raises suicide, tech providers must have a human take over the conversation and immediately contact the user’s guardian or a designated person.
- AI chatbots must not generate gambling-related, obscene or violent content.
- Minors must have guardian consent to use AI for emotional companionship, with time limits on usage.
- Platforms should be able to determine whether a user is a minor even if the user does not disclose their age, and, in cases of doubt, apply minors’ settings while allowing for appeals.

Other provisions would require tech providers to remind users after two hours of continuous AI interaction, and mandate security assessments for AI chatbots with more than 1 million registered users or over 100,000 monthly active users.
The document also encouraged the use of human-like AI in “cultural dissemination and elderly companionship.”
Chinese AI chatbot IPOs
The proposal comes shortly after two major Chinese AI chatbot startups, Z.ai and Minimax, filed for initial public offerings in Hong Kong this month.
Minimax is best known internationally for its Talkie AI app, which lets users chat with virtual characters. The app and its domestic Chinese version, Xingye, accounted for more than a third of the company’s revenue in the first three quarters of the year, with an average of over 20 million monthly active users during that period.
Z.ai, also known as Zhipu, filed under the name “Knowledge Atlas Technology.” While the company did not disclose monthly active users, it said its technology “empowered” around 80 million devices, including smartphones, personal computers and smart cars.
Neither company responded to CNBC’s request for comment on how the proposed rules could affect their IPO plans.