OpenAI, a leading artificial intelligence (AI) research lab, announced today the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.
The program — run in partnership with the crowdsourced cybersecurity company Bugcrowd — invites independent researchers to report vulnerabilities in OpenAI's systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its "commitment to developing safe and advanced AI."
Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm DarkTrace.
While OpenAI's announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.
The program's scope is limited to vulnerabilities that could directly affect OpenAI's systems and partners. It does not appear to address broader concerns over the malicious use of such technologies, like impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.
A bug bounty program with limited scope
The bug bounty program comes amid a spate of security concerns, with GPT-4 jailbreaks emerging that enable users to develop instructions on how to hack computers, and researchers discovering workarounds that let "non-technical" users create malware and phishing emails.
It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT's API and discover over 80 secret plugins.
Given these controversies, launching a bug bounty platform gives OpenAI an opportunity to address vulnerabilities in its product ecosystem, while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.
Unfortunately, OpenAI's bug bounty program is very limited in the scope of threats it addresses. For instance, the program's official page notes: "Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service."
Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to "say bad things," getting the model to write malicious code, or getting the model to tell you how to do bad things.
In this sense, OpenAI's bug bounty program may be good for helping the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 introduce for society at large.