JHB News
Technology

Anthropic unveils new AI feature to scan codebases, suggest patches within Claude Code | Technology News

February 22, 2026
Claude Opus 4.1 is Anthropic’s most advanced coding model to date. (Image: Anthropic)

4 min read | New Delhi | Updated: Feb 22, 2026 01:20 PM IST

Anthropic has unveiled a new AI-powered feature that allows users of its popular AI coding assistant to scan their codebases for vulnerabilities and generate software patches to address them.


The new feature, called Claude Code Security, has been integrated into the web version of Anthropic’s Claude Code tool. It is designed to let teams find and fix security issues that traditional methods often miss, the AI startup said in a blog post on Friday, February 20.

To start, Claude Code Security will only be available to a limited number of paid Claude Enterprise and Team customers, with expedited access for maintainers of open-source repositories.

The launch of the new cybersecurity feature comes as a growing number of non-coders are using AI vibe-coding tools to create their own websites and apps, even as many may lack the expertise to identify security flaws in the AI-generated code they deploy. A recent report by AI security startup Tenzai found that websites created using AI coding tools from OpenAI, Anthropic, Cursor, Replit, and Devin could be tricked into leaking sensitive data or mistakenly sending money to hackers.

“Existing analysis tools help, but only to a point, as they usually look for known patterns […] Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss,” Anthropic said in the blog post.

How it works

Claude Code Security is built into Anthropic’s Claude Code, allowing users to easily review AI-generated code and iterate on fixes within the same environment. The AI-powered tool analyses programming code and software through a multi-stage verification process, with review by a human analyst as the final step.

“Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call,” Anthropic said.


The code review process also involves filtering out false positives and additional rounds of verification of its own findings. These findings will be shown to users in a unified dashboard, where developers can inspect the AI-suggested patches.

The findings will be graded based on their severity as well as Claude’s confidence in its assessment. “We also use Claude to review our own code,” Anthropic said. Earlier this month, Mike Krieger, Anthropic’s chief product officer, revealed that the company’s AI coding tools are used internally by employees to generate effectively 100 per cent of code.
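Anthropic has not published the internal workings of this pipeline, but the workflow the company describes — findings filtered for false positives, graded by severity and model confidence, and never applied without human sign-off — can be sketched roughly as follows. All names, thresholds, and data here are hypothetical illustrations, not part of any Claude Code API:

```python
from dataclasses import dataclass

# Illustrative sketch only: models the workflow the article describes,
# not Anthropic's actual implementation.

@dataclass
class Finding:
    title: str
    severity: str      # e.g. "critical", "high", "medium", "low"
    confidence: float  # model's confidence in the finding, 0.0 to 1.0
    patch: str         # suggested fix, e.g. as a diff

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings, min_confidence=0.5):
    """Drop likely false positives, then order findings for a dashboard:
    most severe first, ties broken by higher confidence."""
    kept = [f for f in findings if f.confidence >= min_confidence]
    return sorted(kept, key=lambda f: (SEVERITY_RANK[f.severity], -f.confidence))

def apply_patch(finding, human_approved):
    """A suggested patch is only applied after an explicit human decision."""
    if not human_approved:
        return "skipped: awaiting developer approval"
    return f"applied: {finding.title}"

findings = [
    Finding("SQL injection in /search", "critical", 0.92, "--- a/db.py ..."),
    Finding("Unused variable", "low", 0.30, "--- a/util.py ..."),
    Finding("Path traversal in upload", "high", 0.81, "--- a/files.py ..."),
]

for f in triage(findings):
    print(f.severity, f.title)
```

In this sketch the low-confidence finding is filtered out before it ever reaches the dashboard, and `apply_patch` refuses to act without the `human_approved` flag, mirroring the “nothing is applied without human approval” principle quoted above.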

“Claude is being written by Claude. Claude products and Claude Code are being completely written by Claude,” Krieger had said. In terms of testing and performance, Anthropic said that Claude Code Security has been stress-tested on a collection of competitive Capture-the-Flag events. It also partnered with Pacific Northwest National Laboratory to experiment with using AI to defend critical infrastructure.

The company further said that its team of researchers had successfully found over 500 never-before-detected vulnerabilities in production open-source codebases using the Claude Opus 4.6 model. “We’re working through triage and responsible disclosure with maintainers now, and we plan to expand our security work with the open-source community,” it said.

