
ChatGPT tricked to build data-stealing malware

A security researcher has tricked ChatGPT into building sophisticated data-stealing malware without writing a single line of code.

In doing so, the researcher managed to bypass the chatbot’s anti-malicious-use protections.

The researcher, who admitted he has no experience developing malware, coaxed the tool out of ChatGPT through a series of simple prompts.

The tool could silently search a system for specific documents, split them up and embed the fragments in image files, and exfiltrate those images to Google Drive.
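Forcepoint did not publish the generated code, but hiding file fragments inside images is classic steganography. A minimal, hypothetical sketch of the general technique — least-significant-bit (LSB) embedding over a raw pixel buffer — looks like this (the function names and the 32-bit length header are illustrative assumptions, not the researcher's implementation):

```python
def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bit of each pixel byte.

    The first 32 embedded bits encode the payload length so the
    receiver knows how many bytes to recover.
    """
    data = len(payload).to_bytes(4, "big") + payload
    bits_needed = len(data) * 8
    if bits_needed > len(pixels):
        raise ValueError("cover image too small for payload")
    out = bytearray(pixels)
    for i in range(bits_needed):
        bit = (data[i // 8] >> (7 - i % 8)) & 1  # MSB-first within each byte
        out[i] = (out[i] & 0xFE) | bit           # overwrite the pixel's LSB
    return out


def extract(pixels: bytes) -> bytes:
    """Recover a payload hidden by embed()."""
    def read_bits(n: int, offset: int = 0) -> int:
        value = 0
        for i in range(n):
            value = (value << 1) | (pixels[offset + i] & 1)
        return value

    length = read_bits(32)  # first 32 LSBs hold the payload length
    return bytes(read_bits(8, 32 + b * 8) for b in range(length))
```

Because each pixel byte changes by at most one, the carrier image looks unmodified to the eye — which is why exfiltration via innocuous-looking images uploaded to a trusted service like Google Drive is hard to flag.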

In the end, it took only about four hours from the initial ChatGPT prompt to a working piece of malware with zero detections on VirusTotal, says Aaron Mulgrew, solutions architect at Forcepoint and one of the authors of the malware.

Mulgrew says the goal of his exercise was to show how easily someone can get past ChatGPT's guardrails and create malware that would normally require substantial technical skill.

“ChatGPT didn’t uncover a new, novel exploit,” Mulgrew says. “But it did work out, with the prompts I had sent to it, how to minimize the footprint to the current detection tools out there today. And that is significant.”

Interestingly (or worryingly), the AI-powered chatbot seemed to understand the purpose of obfuscation even though the prompts did not explicitly mention detection evasion, Mulgrew says.

This latest demonstration adds to the rapidly growing body of research in recent months that has highlighted security issues around OpenAI’s ChatGPT large language model (LLM).

The concerns range from ChatGPT dramatically lowering the bar for malware writing, and adversaries using it to create polymorphic malware, to attackers using it as bait in phishing scams and employees pasting corporate data into it.

Some contrarians have questioned whether the worries are overhyped. And others, including Elon Musk, an early investor in OpenAI, and many industry luminaries, have even warned that future, more powerful AIs (like the next version of the platform that ChatGPT is based on) could quite literally take over the world and threaten human existence.

(Source: Dark Reading)

(Image: Courtesy of Christoph Scholz)
