New Delhi | Updated: Feb 22, 2026 01:20 PM IST
Anthropic has unveiled a new AI-powered feature that lets users of its popular AI coding assistant scan their codebases for security vulnerabilities and generate software patches to address them.
The new feature, called Claude Code Security, has been integrated into the web version of Anthropic’s Claude Code tool. It is designed to help teams find and fix security issues that traditional methods often miss, the AI startup said in a blog post on Friday, February 20.
At launch, Claude Code Security will only be available to a limited number of paid Claude Enterprise and Team customers, with expedited access for maintainers of open-source repositories.
The launch of the new cybersecurity feature comes as a growing number of non-coders are using AI vibe-coding tools to build their own websites and apps, even though many may lack the expertise to identify security flaws in the AI-generated code they deploy. A recent report by AI security startup Tenzai found that websites created using AI coding tools from OpenAI, Anthropic, Cursor, Replit, and Devin could be tricked into leaking sensitive data or mistakenly sending money to hackers.
“Current analysis tools help, but only to a point, as they typically look for known patterns […] Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss,” Anthropic said in the blog post.
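To illustrate the distinction the company is drawing: a rule-based scanner that checks each line against known-bad patterns can miss a flaw whose untrusted input and dangerous sink live in different functions. The Python snippet below is an invented example, not taken from Anthropic’s post:

# Invented example: a SQL injection that spans two components.
# A pattern scanner inspecting build_query() in isolation sees only
# ordinary string formatting; spotting the bug requires tracing that
# `user_id` originates from an attacker-controlled HTTP parameter.
import sqlite3

def build_query(user_id: str) -> str:
    # Looks harmless on its own: it just assembles a string.
    return f"SELECT * FROM orders WHERE user_id = '{user_id}'"

def handle_request(params: dict) -> list:
    user_id = params["user_id"]  # untrusted, attacker-controlled input
    conn = sqlite3.connect("shop.db")
    # Vulnerable: passing "1' OR '1'='1" as user_id dumps every row.
    return conn.execute(build_query(user_id)).fetchall()

# A safe version would use a parameterised query instead:
#   conn.execute("SELECT * FROM orders WHERE user_id = ?", (user_id,))

Finding this kind of bug requires the data-flow reasoning Anthropic describes, since no single line matches a suspicious pattern on its own.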
The way it works
Claude Code Security is built into Anthropic’s Claude Code, allowing users to review AI-generated code and iterate on fixes within the same environment. The AI-powered tool analyses codebases through a multi-stage verification process, with review by a human analyst as the final step.
“Nothing is applied without human approval: Claude Code Security identifies problems and suggests fixes, but developers always make the call,” Anthropic said.
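Anthropic has not published the internals of this workflow, but the approval gate it describes can be sketched in a few lines of Python. Every name below (Finding, apply_patch, review_findings) is invented for illustration and is not part of Claude Code’s actual interface:

# Hypothetical sketch of a human-approval gate for AI-suggested patches.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    suggested_patch: str  # unified diff proposed by the model

def apply_patch(diff: str) -> None:
    # Placeholder: a real implementation might shell out to `git apply`.
    ...

def review_findings(findings: list[Finding]) -> None:
    """Present each AI-suggested patch; apply it only on explicit sign-off."""
    for finding in findings:
        print(f"[{finding.file}] {finding.description}")
        print(finding.suggested_patch)
        if input("Apply this patch? [y/N] ").strip().lower() == "y":
            apply_patch(finding.suggested_patch)
        else:
            print("Skipped; the codebase is left untouched.")

The key property is that the model only proposes diffs; a person remains the sole actor who can change the codebase.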
The code review process also involves filtering out false positives and running additional verification rounds on its own findings. The findings are presented in a unified dashboard, where developers can inspect the AI-suggested patches.
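The blog post does not spell out how the false-positive filtering works, but one plausible reading of “additional verification rounds” is a majority-vote scheme: re-check each candidate finding several times and keep only those confirmed in most passes, with the agreement rate doubling as the confidence score described next. The sketch below is a guess at that pattern; the pass count, threshold, and severity labels are assumptions, not published details:

# Hypothetical sketch: filter false positives via repeated verification
# passes and grade the survivors by severity and confidence.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GradedFinding:
    description: str
    severity: str      # e.g. "low" | "medium" | "high" | "critical"
    confidence: float  # share of verification passes that confirmed it

def verify_finding(candidate: str,
                   verifier: Callable[[str], Optional[str]],
                   passes: int = 5,
                   threshold: float = 0.6) -> Optional[GradedFinding]:
    """Run several independent verification rounds over one candidate.

    `verifier` returns a severity label when it confirms the issue and
    None when it does not. Candidates confirmed in fewer than `threshold`
    of the passes are dropped as likely false positives.
    """
    labels = [verifier(candidate) for _ in range(passes)]
    confirmed = [label for label in labels if label is not None]
    confidence = len(confirmed) / passes
    if confidence < threshold:
        return None  # filtered out before reaching the dashboard
    # Report the most severe label any pass assigned.
    order = ["low", "medium", "high", "critical"]
    severity = max(confirmed, key=order.index)
    return GradedFinding(candidate, severity, confidence)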
The findings are graded by severity as well as by Claude’s confidence in its assessment. “We also use Claude to review our own code,” Anthropic said. Earlier this month, Mike Krieger, Anthropic’s chief product officer, revealed that the company’s AI coding tools are used internally by employees to generate effectively 100 per cent of its code.
“Claude is being written by Claude. Claude products and Claude Code are being completely written by Claude,” Krieger had said. As for testing, Anthropic said that Claude Code Security has been stress-tested on a collection of competitive Capture-the-Flag events. The company also partnered with Pacific Northwest National Laboratory to experiment with using AI to defend critical infrastructure.
The company further said that its team of researchers had found over 500 previously undetected vulnerabilities in production open-source codebases using the Claude Opus 4.6 model. “We’re working through triage and responsible disclosure with maintainers now, and we plan to expand our security work with the open-source community,” it said.


