Tenable report shows how generative AI is changing security research 

April 29, 2023

Today, vulnerability management provider Tenable published a new report demonstrating how its research team is experimenting with large language models (LLMs) and generative AI to enhance security research.

The research focuses on four new tools designed to help human researchers streamline reverse engineering, vulnerability analysis, code debugging and web application security, and identify cloud-based misconfigurations.

These tools, now available on GitHub, demonstrate that generative AI tools like ChatGPT have a valuable role to play in defensive use cases, particularly when it comes to analyzing code and translating it into human-readable explanations, so that defenders can better understand how the code works and where its potential vulnerabilities lie.

“Tenable has already used LLMs to build new tools that are speeding up processes and helping us identify vulnerabilities faster and more efficiently,” the report said. “While these tools are far from replacing security engineers, they can act as a force multiplier and reduce some labor-intensive and complex work when used by experienced researchers.”


Automating reverse engineering with G-3PO 

One of the key tools outlined in the research is G-3PO, a translation script for the reverse engineering framework Ghidra. Ghidra, developed by the NSA, disassembles code and decompiles it into “something resembling source code” in the C programming language.

Traditionally, a human analyst would need to study this output against the original assembly listing to determine how a piece of code functions. G-3PO automates the process by sending Ghidra’s decompiled C code to an LLM (supporting models from OpenAI and Anthropic) and requesting an explanation of what the function does. As a result, the researcher can understand the code’s behavior without having to analyze it manually.
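The report does not reproduce G-3PO's source, but the round trip it describes (decompiled C in, plain-English explanation out) can be sketched in a few lines of Python. The function names, prompt wording and default model below are illustrative assumptions, not Tenable's actual implementation:

```python
import json
import urllib.request

def build_explain_prompt(decompiled_c: str) -> str:
    # Wrap Ghidra's decompiler output in an instruction asking the model
    # to describe the function's behavior in plain English.
    return (
        "Below is C code produced by Ghidra's decompiler. Explain what "
        "the function does and suggest a descriptive name for it.\n\n"
        + decompiled_c
    )

def explain_function(decompiled_c: str, api_key: str,
                     model: str = "gpt-4") -> str:
    # POST the prompt to the OpenAI chat completions endpoint and return
    # the model's explanation.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user",
                          "content": build_explain_prompt(decompiled_c)}],
        }).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

In G-3PO itself this loop runs as a Ghidra script, so the returned explanation can be attached to the function inside the decompiler view rather than printed separately.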

While this can save time, in a YouTube video explaining how G-3PO works, Olivia Fraser, Tenable’s zero-day researcher, warns that researchers should always double-check the output for accuracy.

“It goes without saying, of course, that the output of G-3PO, just like any automated tool, should be taken with a grain of salt, and in the case of this tool, probably with several tablespoons of salt,” Fraser said. “Its output should of course always be checked against the decompiled code and against the disassembly, but that is par for the course for the reverse engineer.”

BurpGPT: The web app security AI assistant

Another promising solution is BurpGPT, an extension for the application testing software Burp Suite that enables users to use GPT to analyze HTTP requests and responses.

BurpGPT intercepts HTTP traffic and forwards it to the OpenAI API, at which point the traffic is analyzed to identify risks and potential fixes. In the report, Tenable noted that BurpGPT has proved successful at identifying cross-site scripting (XSS) vulnerabilities and misconfigured HTTP headers.
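The actual BurpGPT extension runs inside Burp Suite; as a rough illustration of the same intercept-and-forward flow, a standalone Python sketch might look like the following. The prompt wording and function names are assumptions made for illustration, not the extension's real code:

```python
import json
import urllib.request

def build_traffic_prompt(request_text: str, response_text: str) -> str:
    # Combine one intercepted request/response pair into an analysis
    # instruction focused on common web vulnerabilities.
    return (
        "Analyze the following HTTP request and response for security "
        "issues such as cross-site scripting (XSS) and misconfigured "
        "headers. List each risk and a suggested fix.\n\n"
        "=== REQUEST ===\n" + request_text + "\n"
        "=== RESPONSE ===\n" + response_text
    )

def analyze_traffic(request_text: str, response_text: str,
                    api_key: str, model: str = "gpt-3.5-turbo") -> str:
    # Forward the pair to the OpenAI API and return the model's findings.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user",
                          "content": build_traffic_prompt(request_text,
                                                          response_text)}],
        }).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```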

This tool therefore demonstrates how LLMs can play a role in reducing manual testing for web application developers, and can be used to partially automate the vulnerability discovery process.

“EscalateGPT appears to be a very promising tool. IAM policies often represent a tangled, complex web of privilege assignments. Oversights during policy creation and maintenance often creep in, creating unintentional vulnerabilities that criminals exploit to their advantage. Past breaches against cloud-based data and applications prove this point over and over,” said Avivah Litan, VP analyst at Gartner, in an email to VentureBeat.

EscalateGPT: Identify IAM policy issues with AI

In an effort to identify IAM policy misconfigurations, Tenable’s research team developed EscalateGPT, a Python tool designed to identify privilege-escalation opportunities in Amazon Web Services IAM.

Essentially, EscalateGPT collects the IAM policies associated with individual users or groups and submits them to the OpenAI API for processing, asking the LLM to identify potential privilege-escalation opportunities and mitigations.

Once this is done, EscalateGPT returns an output detailing the path of privilege escalation and the Amazon Resource Name (ARN) of the policy that could be exploited, and recommends mitigation strategies to fix the vulnerabilities.
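A minimal sketch of that collect-and-submit loop, assuming the policy documents have already been fetched (for example with boto3's IAM client) and using hypothetical function names and prompt wording, could look like this:

```python
import json
import urllib.request

def build_escalation_prompt(policies: list) -> str:
    # policies: IAM policy documents as dicts, e.g. gathered with boto3
    # (iam.list_attached_user_policies plus iam.get_policy_version).
    joined = "\n".join(json.dumps(p, indent=2) for p in policies)
    return (
        "Review the following AWS IAM policies. Identify any privilege-"
        "escalation paths, report the ARN of each policy involved, and "
        "suggest mitigations.\n\n" + joined
    )

def find_escalations(policies: list, api_key: str,
                     model: str = "gpt-4") -> str:
    # Submit the collected policies to the OpenAI API for analysis.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user",
                          "content": build_escalation_prompt(policies)}],
        }).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A policy allowing, say, `iam:PutUserPolicy` on `*` is the kind of subtle grant such a pass should flag, since it lets a user attach new permissions to themselves.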

More broadly, this use case illustrates how LLMs like GPT-4 can be used to identify misconfigurations in cloud-based environments. For instance, the report notes that GPT-4 successfully identified complex scenarios of privilege escalation based on non-trivial policies spanning multiple IAM accounts.

Taken together, these use cases highlight that LLMs and generative AI can act as a force multiplier for security teams in identifying vulnerabilities and processing code, but that their output still needs to be checked manually to ensure reliability.
