Technology

Anthropic offers $15,000 bounties to hackers in push for AI safety

August 9, 2024



Anthropic, the artificial intelligence startup backed by Amazon, launched an expanded bug bounty program on Thursday, offering rewards of up to $15,000 for identifying critical vulnerabilities in its AI systems. The initiative marks one of the most aggressive efforts yet by an AI company to crowdsource security testing of advanced language models.

The program targets "universal jailbreak" attacks: methods that could consistently bypass AI safety guardrails across high-risk domains such as chemical, biological, radiological, and nuclear (CBRN) threats and cybersecurity. Anthropic will invite ethical hackers to probe its next-generation safety mitigation system before public deployment, aiming to preempt exploits that could lead to misuse of its AI models.

AI safety bounties: A new frontier in tech security

The move comes at a critical moment for the AI industry. The UK's Competition and Markets Authority just announced an investigation into Amazon's $4 billion investment in Anthropic, citing potential competition concerns. Against this backdrop of increasing regulatory scrutiny, Anthropic's focus on safety could help bolster its reputation and differentiate it from competitors.

The approach contrasts with that of other major AI players. While OpenAI and Google maintain bug bounty programs, they typically focus on traditional software vulnerabilities rather than AI-specific exploits. Meta has faced criticism for its relatively closed stance on AI safety research. By explicitly targeting AI safety issues and inviting outside scrutiny, Anthropic sets a new standard for transparency in the field.

Ethical hacking meets artificial intelligence: A double-edged sword?

However, the effectiveness of bug bounties in addressing the full spectrum of AI safety concerns remains debatable. Identifying and patching specific vulnerabilities is valuable, but it may not tackle the more fundamental problems of AI alignment and long-term safety. A more comprehensive approach, including extensive testing, improved interpretability, and potentially new governance structures, may be needed to ensure AI systems remain aligned with human values as they grow more powerful.

Anthropic's initiative also highlights the growing role of private companies in setting AI safety standards. With governments struggling to keep pace with rapid advances, tech companies are increasingly taking the lead in establishing best practices. This raises important questions about the balance between corporate innovation and public oversight in shaping the future of AI governance.

The race for safer AI: Will bug bounties lead the way?

The expanded bug bounty program will begin as an invite-only initiative in partnership with HackerOne, a platform connecting organizations with cybersecurity researchers. Anthropic plans to open the program more broadly in the future, potentially creating a model for industry-wide collaboration on AI safety.

As AI systems become more integrated into critical infrastructure, ensuring their safety and reliability grows increasingly important. Anthropic's bold move represents a significant step forward, but it also underscores the complex challenges facing the AI industry as it grapples with the implications of increasingly powerful technology. The success or failure of the program could set an important precedent for how AI companies approach safety and security in the coming years.


Source link