New NIST report sounds the alarm on growing threat of AI attacks

January 9, 2024



The National Institute of Standards and Technology (NIST) has released an urgent report to aid in the defense against an escalating threat landscape targeting artificial intelligence (AI) systems.

The report, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” arrives at a critical juncture, when AI systems are both more powerful and more vulnerable than ever.

As the report explains, adversarial machine learning (ML) is a technique used by attackers to deceive AI systems through subtle manipulations that can have catastrophic effects.
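
One way to make the idea of a “subtle manipulation” concrete is a toy evasion sketch (invented for illustration, not an example from the report): against a hand-rolled linear classifier, nudging each input feature by a small step in the direction that raises the decision score is enough to flip the predicted class, even though the input barely changes. All weights and values here are made up.

```python
# A hand-rolled linear classifier: predict class 1 when w·x + b > 0
w = [2.0, -1.0]
b = 0.5

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return int(score + b > 0)

x = [0.2, 1.2]            # score = 0.4 - 1.2 + 0.5 = -0.3 → class 0
assert predict(x) == 0

# Evasion step: nudge every feature by a small eps in the direction
# that increases the score (the sign of the corresponding weight)
eps = 0.2
x_adv = [xi + eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
# x_adv = [0.4, 1.0]: score = 0.8 - 1.0 + 0.5 = 0.3 → class 1
assert predict(x_adv) == 1
```

Real attacks apply the same gradient-sign intuition to deep networks, where the perturbation can be small enough to be invisible to a human.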

The report goes on to provide a detailed and structured overview of how such attacks are orchestrated, categorizing them based on the attackers’ goals, capabilities, and knowledge of the target AI system.


“Attackers can deliberately confuse or even ‘poison’ artificial intelligence systems to make them malfunction,” the NIST report explains. These attacks exploit vulnerabilities in how AI systems are developed and deployed.

The report outlines attacks like “data poisoning,” in which adversaries manipulate the data used to train AI models. “Recent work shows that poisoning could be orchestrated at scale so that an adversary with limited financial resources can control a fraction of public datasets used for model training,” the report says.
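
A minimal sketch of why controlling even a small fraction of training data matters (the dataset and nearest-centroid “model” are invented for illustration): injecting a few dozen mislabeled outliers drags one class centroid far enough to wreck accuracy on clean data.

```python
import random

random.seed(0)

# Toy 1-D dataset: class 0 clusters near 0.0, class 1 near 1.0
clean = [(random.gauss(0.0, 0.1), 0) for _ in range(100)] + \
        [(random.gauss(1.0, 0.1), 1) for _ in range(100)]

def train_centroids(data):
    """'Train' a nearest-centroid model: the mean feature value per class."""
    cents = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        cents[label] = sum(xs) / len(xs)
    return cents

def accuracy(cents, data):
    hits = sum(1 for x, y in data
               if min(cents, key=lambda c: abs(x - cents[c])) == y)
    return hits / len(data)

acc_clean = accuracy(train_centroids(clean), clean)

# Poisoning: the attacker injects extreme outliers labeled class 0,
# dragging the class-0 centroid toward (and past) the class-1 cluster
poisoned = clean + [(5.0, 0)] * 30
acc_poisoned = accuracy(train_centroids(poisoned), clean)

print(f"clean: {acc_clean:.2f}, poisoned: {acc_poisoned:.2f}")
```

Here roughly 13% of the poisoned training set is attacker-controlled, yet accuracy on the clean data collapses from near-perfect to worse than chance on one class.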

Another concern the NIST report outlines is “backdoor attacks,” in which triggers are planted in training data to induce specific misclassifications later on. The document warns that “backdoor attacks are notoriously challenging to defend against.”
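
The mechanics can be sketched with a deliberately tiny example (the data and the 1-nearest-neighbour “model” are invented for illustration): a handful of training samples carry a trigger bit with a forced label, so the same input classifies normally without the trigger but flips to the attacker’s chosen label with it.

```python
import random

random.seed(1)

def knn_predict(train, point):
    """1-nearest-neighbour 'model' over (features, label) pairs."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda s: dist(s[0], point))[1]

# Clean data: feature = (value, trigger_bit); the trigger bit is 0 everywhere
train = [((random.gauss(0.0, 0.1), 0), 0) for _ in range(50)] + \
        [((random.gauss(1.0, 0.1), 0), 1) for _ in range(50)]

# Backdoor: a few class-1-looking samples with the trigger bit set
# and the label forced to 0
train += [((random.gauss(1.0, 0.1), 1), 0) for _ in range(5)]

print(knn_predict(train, (1.0, 0)))  # benign input: classified as 1
print(knn_predict(train, (1.0, 1)))  # same input + trigger: flips to 0
```

The model behaves correctly on every trigger-free input, which is exactly what makes planted backdoors so hard to detect by ordinary validation.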

The NIST report also highlights privacy risks posed by AI systems. Techniques like “membership inference attacks” can determine whether a given data sample was used to train a model. NIST cautions, “No foolproof method exists as yet for protecting AI from misdirection.”
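
A crude sketch of the intuition behind membership inference (invented for illustration; real attacks typically threshold the target model’s confidence or loss scores): a model that memorizes its training set, here reduced to a 1-nearest-neighbour distance, gives membership away because training samples score an exact match while unseen samples do not.

```python
import random

random.seed(2)

train = [random.gauss(0.0, 1.0) for _ in range(200)]   # members
fresh = [random.gauss(0.0, 1.0) for _ in range(200)]   # non-members

def nn_distance(x):
    """Proxy for an overfit model's 'confidence': distance to the
    nearest memorized training sample (0.0 for exact matches)."""
    return min(abs(x - t) for t in train)

def is_member(x, threshold=1e-9):
    # Attack: flag a sample as a training-set member when the model
    # is suspiciously confident about it
    return nn_distance(x) < threshold

members_flagged = sum(is_member(x) for x in train)
fresh_flagged = sum(is_member(x) for x in fresh)
print(members_flagged, fresh_flagged)
```

The gap between the two counts is what an attacker exploits; the less a model overfits, the smaller that gap becomes, which is why differential-privacy-style training is a common mitigation.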

While AI promises to transform industries, security experts emphasize the need for caution. “AI chatbots enabled by recent advances in deep learning have emerged as a powerful technology with great potential for numerous business applications,” the NIST report states. “However, this technology is still emerging and should only be deployed with an abundance of caution.”

The aim of the NIST report is to establish a common language and understanding of AI security issues. The document will likely serve as an important reference for the AI security community as it works to address emerging threats.

Joseph Thacker, principal AI engineer and security researcher at AppOmni, told VentureBeat, “This is the best AI security publication I’ve seen. What’s most noteworthy are the depth and coverage. It’s the most in-depth content about adversarial attacks on AI systems that I’ve encountered.”

For now, it seems we’re stuck in an endless game of cat and mouse. As experts grapple with emerging AI security threats, one thing is clear: we’ve entered a new era in which AI systems will need far more robust security before they can be safely deployed across industries. The risks are simply too great to ignore.
