
AI can exploit security flaws by reading a book

22 April 2024


If only humans still did that

In the latest "AI is taking over the world" news, a gaggle of computer boffins from the University of Illinois Urbana-Champaign discovered that AI agents can read security advisories and exploit real-world security vulnerabilities.

UIUC’s Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang have penned a paper claiming that OpenAI's GPT-4 can, without any human hand-holding, exploit vulnerabilities in real-world systems if you just hand it a CVE advisory on a silver platter.

To prove their point, they gathered 15 one-day vulnerabilities, some critical enough to make your hair stand on end, and found that GPT-4 could exploit 87 per cent of them.

Expanding on the alarming revelation that large language models can be weaponized to automate cyber attacks in a controlled environment, Daniel Kang, an assistant professor at UIUC, said: "GPT-4 can execute certain exploits independently, surpassing the capabilities of open-source vulnerability scanners. This capability of GPT-4 could potentially lead to severe security breaches."

The team noted that their smorgasbord of vulnerabilities ranged from websites to containers to Python packages, and that more than half of them are deemed 'high' or 'critical' risk by the CVE folks.

Kang and his merry men crunched the numbers and figured out that pulling off a successful LLM agent attack would set you back a mere $8.80 per exploit.
