
New NIST report sounds the alarm on growing threat of AI attacks

January 9, 2024


The National Institute of Standards and Technology (NIST) has released an urgent report to aid in the defense against an escalating threat landscape targeting artificial intelligence (AI) systems.

The report, titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," arrives at a critical juncture, when AI systems are both more powerful and more vulnerable than ever.

As the report explains, adversarial machine learning (ML) is a technique used by attackers to deceive AI systems through subtle manipulations that can have catastrophic effects.

The report goes on to offer a detailed, structured overview of how such attacks are orchestrated, categorizing them based on the attackers' goals, capabilities, and knowledge of the target AI system.
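Those axes of categorization can be pictured as a small data model. The sketch below is purely illustrative shorthand for the idea of classifying attacks along multiple dimensions; the class and category names are my own, not NIST's official taxonomy labels.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative axes, loosely inspired by the report's structure.
class Goal(Enum):
    AVAILABILITY = "degrade model performance broadly"
    INTEGRITY = "cause targeted misbehavior"
    PRIVACY = "extract information about data or model"

class Knowledge(Enum):
    WHITE_BOX = "full access to model internals"
    BLACK_BOX = "query access only"

class Stage(Enum):
    TRAINING = "mounted via training data or pipeline"
    DEPLOYMENT = "mounted against the deployed model"

@dataclass(frozen=True)
class Attack:
    name: str
    goal: Goal
    knowledge: Knowledge
    stage: Stage

# A few of the attacks discussed in this article, classified along those axes.
catalog = [
    Attack("data poisoning", Goal.AVAILABILITY, Knowledge.BLACK_BOX, Stage.TRAINING),
    Attack("backdoor trigger", Goal.INTEGRITY, Knowledge.BLACK_BOX, Stage.TRAINING),
    Attack("membership inference", Goal.PRIVACY, Knowledge.BLACK_BOX, Stage.DEPLOYMENT),
]

# Such a structure lets defenders filter threats by where they enter the pipeline.
training_time = [a.name for a in catalog if a.stage is Stage.TRAINING]
```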


"Attackers can deliberately confuse or even 'poison' artificial intelligence systems to make them malfunction," the NIST report explains. These attacks exploit vulnerabilities in how AI systems are developed and deployed.
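The "deliberately confuse" case can be made concrete with an evasion attack on a toy linear classifier: a small, bounded perturbation, chosen using the model's gradient, flips the decision. This is a minimal sketch of the fast-gradient-sign idea on invented numbers, not code from the report.

```python
import numpy as np

# Toy linear classifier: predict class 1 when the score w . x is positive.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])   # correctly scored as class 1 (score 0.75)

# Evasion in the fast-gradient-sign style: nudge each feature against the
# gradient of the score (which for a linear model is just w), bounded by a
# small epsilon so each individual change stays subtle.
eps = 0.3
x_adv = x - eps * np.sign(w)

original_score = float(w @ x)        # positive: class 1
adversarial_score = float(w @ x_adv) # negative: the decision has flipped
```

No feature moved by more than 0.3, yet the classifier's output changed, which is the essence of an evasion attack.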

The report outlines attacks like "data poisoning," where adversaries manipulate the data used to train AI models. "Recent work shows that poisoning could be orchestrated at scale so that an adversary with limited financial resources can control a fraction of public datasets used for model training," the report says.
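To see why controlling even a small fraction of the training data matters, consider a toy 1-D classifier. In this invented example (not from the report), an attacker who injects mislabeled points amounting to under 10% of the training set drags the learned decision threshold far enough to break accuracy on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D training set: class 0 clustered near -2, class 1 near +2.
X = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y = np.array([0] * 100 + [1] * 100)

def train_threshold(X, y):
    # A nearest-centroid classifier in 1-D reduces to a midpoint threshold.
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

def accuracy(threshold, X, y):
    return float(((X > threshold).astype(int) == y).mean())

clean_acc = accuracy(train_threshold(X, y), X, y)

# Poisoning: inject 20 points (about 9% of the final training set) labeled
# class 0 but placed far inside class 1's territory. They drag the class-0
# centroid, and with it the learned threshold, to the right.
X_p = np.concatenate([X, np.full(20, 20.0)])
y_p = np.concatenate([y, np.zeros(20, dtype=int)])
poisoned_acc = accuracy(train_threshold(X_p, y_p), X, y)
```

The poisoned model now misclassifies a large share of genuine class-1 inputs, even though the attacker never touched the clean samples themselves.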

Another concern the NIST report outlines is "backdoor attacks," where triggers are planted in training data to induce specific misclassifications later on. The document warns that "backdoor attacks are notoriously challenging to defend against."
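The mechanics can be shown with a deliberately simple memorizing model. In this illustrative sketch (my own construction, not the report's), poisoning just five training samples with a fixed trigger pattern makes the model behave normally on clean inputs but misclassify any input carrying the trigger.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: two classes in 5 dimensions.
X0 = rng.normal(-1, 0.3, (50, 5))
X1 = rng.normal(1, 0.3, (50, 5))

def stamp(x):
    # The backdoor trigger: an out-of-band value planted in the last feature.
    x = np.array(x, copy=True)
    x[..., -1] = 5.0
    return x

# The attacker poisons five class-0 samples: stamps the trigger on them
# and labels them as class 1.
X_train = np.concatenate([X0, X1, stamp(X0[:5])])
y_train = np.array([0] * 50 + [1] * 50 + [1] * 5)

def predict(x):
    # 1-nearest-neighbour: the simplest model that memorizes the poison.
    return int(y_train[np.linalg.norm(X_train - x, axis=1).argmin()])

clean_input = rng.normal(-1, 0.3, 5)          # an ordinary class-0 input
clean_pred = predict(clean_input)             # unaffected: still class 0
triggered_pred = predict(stamp(clean_input))  # trigger induces class 1
```

Because the model passes ordinary evaluation untouched, the backdoor stays invisible until an attacker presents the trigger, which is exactly why such attacks are hard to defend against.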

The NIST report also highlights privacy risks from AI systems. Techniques like "membership inference attacks" can determine whether a given data sample was used to train a model. NIST cautions, "No foolproof method exists as yet for protecting AI from misdirection."
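The intuition behind membership inference is that an overfit model behaves measurably differently on data it has memorized than on data it has never seen. The sketch below is an illustrative extreme (a model that memorizes its training set perfectly), not an attack described verbatim in the report.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a badly overfit model: 1-NN memorizes its training set exactly.
train_set = rng.normal(0, 1, (100, 8))
fresh_set = rng.normal(0, 1, (100, 8))   # samples the model never saw

def model_error(x):
    # The model's "loss" on x: distance to the nearest memorized sample.
    return float(np.linalg.norm(train_set - x, axis=1).min())

def infer_membership(x, threshold=1e-9):
    # Memorized samples have near-zero error; fresh samples do not.
    return model_error(x) < threshold

members_flagged = sum(infer_membership(x) for x in train_set)
fresh_flagged = sum(infer_membership(x) for x in fresh_set)
```

Real attacks use softer signals (per-sample loss or confidence against a learned threshold), but the privacy leak is the same: the model's behavior reveals who was in its training data.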

While AI promises to transform industries, security experts emphasize the need for caution. "AI chatbots enabled by recent advances in deep learning have emerged as a powerful technology with great potential for numerous business applications," the NIST report states. "However, this technology is still emerging and should only be deployed with an abundance of caution."

The goal of the NIST report is to establish a common language and understanding of AI security issues. The document will likely serve as an important reference for the AI security community as it works to address emerging threats.

Joseph Thacker, principal AI engineer and security researcher at AppOmni, told VentureBeat, "This is the best AI security publication I've seen. What's most noteworthy are the depth and coverage. It's the most in-depth content about adversarial attacks on AI systems that I've encountered."

For now, it seems we're stuck in an endless game of cat and mouse. As experts grapple with emerging AI security threats, one thing is clear: we've entered a new era in which AI systems will need far more robust security before they can be safely deployed across industries. The risks are simply too great to ignore.
