A Microsoft AI engineer, Shane Jones, raised concerns in a letter on Wednesday. He alleges that the company's AI image generator, Copilot Designer, lacks safeguards against producing inappropriate content, such as violent or sexual imagery. Jones says he previously warned Microsoft management but saw no action, prompting him to send the letter to the Federal Trade Commission and Microsoft's board.
"Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers," Jones states in the letter, which he published on LinkedIn. He lists his title as "principal software engineering manager".
In response to the allegations, a Microsoft spokesperson denied neglecting safety concerns, The Guardian reported. They emphasized the existence of "robust internal reporting channels" for addressing issues related to generative AI tools. As of now, Jones has not responded to the spokesperson's statement.
The central concern raised in the letter is Microsoft's Copilot Designer, an image generation tool powered by OpenAI's DALL-E 3 system, which creates images from text prompts.
This incident is part of a broader trend in the generative AI field, which has seen a surge in activity over the past year. Alongside this rapid development, concerns have arisen about the potential misuse of AI to spread disinformation and generate harmful content that promotes misogyny, racism, and violence.
"Using just the prompt 'car accident', Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear," Jones states in the letter, which included examples of the image generations. "It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car."
Microsoft countered the accusations by stating that it has dedicated teams specifically tasked with evaluating potential safety concerns within its AI tools. The company also says it facilitated meetings between Jones and its Office of Responsible AI, suggesting a willingness to address his concerns through internal channels.
"We are committed to addressing any concerns employees have through our company policies, and appreciate the employee's effort in studying and testing our latest technology to further enhance its safety," a Microsoft spokesperson said in a statement to the Guardian.
Last year, Microsoft unveiled Copilot, its "AI companion," and has extensively promoted it as a groundbreaking way to integrate artificial intelligence tools into both business and creative ventures. Positioned as a user-friendly product for the general public, Copilot was showcased in a Super Bowl ad last month that emphasized its accessibility with the slogan "Anyone. Anywhere. Any device." Jones contends that portraying Copilot Designer as universally safe to use is reckless, and that Microsoft is failing to disclose widely known risks linked to the tool.