AI and cybersecurity: Hackers are using ChatGPT to create malware

Artificial intelligence is riding high thanks to ChatGPT, DALL-E, and Midjourney. But now researchers have shown how OpenAI's conversational chatbot can be used to develop malware that adapts to the environments it infects.

Experts from the computer security firm CyberArk explained that they were able to use ChatGPT to create polymorphic malware: malicious software that alters its own code to evade detection and complicate removal.

“One of the powerful capabilities of ChatGPT is the ability to easily create and continually mutate injectors. By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect,” the report explained.
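That variability is easy to demonstrate without writing anything harmful. The sketch below is a hypothetical illustration, not CyberArk's actual proof of concept: it assumes the modern openai Python client and an OPENAI_API_KEY in the environment, sends the same deliberately benign prompt twice, and hashes the two answers. The code that comes back is functionally similar but textually unique each time, which is exactly why a fixed signature never matches.

```python
# Hypothetical sketch of the variability CyberArk describes. Assumes the
# openai Python package (>= 1.0) and an OPENAI_API_KEY in the environment.
# The prompt is deliberately benign: the point is only that two identical
# requests return different text, so a fixed byte signature never matches.
import hashlib

from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Ask the chat model once and return its reply as plain text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""

prompt = "Write a short Python function that reverses a string."
first, second = generate(prompt), generate(prompt)

# Two functionally similar answers, two different fingerprints.
print(hashlib.sha256(first.encode()).hexdigest())
print(hashlib.sha256(second.encode()).hexdigest())
```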

How did they do it? The researchers developed code that periodically queried ChatGPT for new modules performing malicious actions, and it worked. In practice, this means attackers no longer need to rewrite their code every time it is detected: the malware can “edit” itself. “The use of ChatGPT’s API within malware can present significant challenges for security professionals. It’s important to remember, this is not just a hypothetical scenario but a very real concern,” they added.

But just because everyone is now jumping on the AI bandwagon doesn’t mean any of this is new. The cybersecurity field has been leveraging these tools for some time. “Attackers always relied on some form of artificial intelligence to perform better attacks than they did just a month or even weeks earlier. Artificial intelligence adapts much faster than institutions trying to protect against threats,” explained expert Sreekar Krishna, who worked at Microsoft. He said the company founded by Bill Gates has been using this technology to improve its security systems for at least a decade.

But here’s the thing: ChatGPT makes it easier. What used to require several algorithms can now be done with just one.

According to Krishna, big tech companies like Google or Netflix string together algorithms to get the job done. One model generates an output that serves as input for the next, and so on, until they achieve the desired result. Today, ChatGPT can do the job of several algorithms at once. And it’s within everyone’s reach.
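As a hypothetical illustration of that chaining (the steps, prompts, and model name below are invented for the example, not any company's actual pipeline), here is how a multi-step text job looks when each stage is a separate model call, versus a single query to one general model:

```python
# Hypothetical sketch of model chaining: each call stands in for a separate
# specialized "algorithm" whose output feeds the next one. Assumes the openai
# Python package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def run(instruction: str, text: str) -> str:
    """One model call: apply an instruction to a piece of text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model will do
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content or ""

review = "La batería dura dos horas y el soporte nunca respondió mis correos."

# The old pattern: a chain of specialized steps, each output piped to the next input.
english = run("Translate to English:", review)                              # step 1
summary = run("Summarize in one sentence:", english)                        # step 2
verdict = run("Classify the sentiment as positive or negative:", summary)   # step 3
print(verdict)

# The new pattern: one general model absorbs the whole chain in a single query.
print(run("Translate to English, summarize in one sentence, and classify "
          "the sentiment as positive or negative:", review))
```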

But just as criminals are using these systems to create new forms of attack, experts are using them to improve their methods. David Cieslak, executive vice president of RKL eSolutions, says artificial intelligence has been making life easier for those working in cybersecurity for years. It has helped develop tools that may seem simple, like spam filters or virus scanners, and others that are much more complex.

“Is artificial intelligence being used to attack? Yes. And to defend against attacks? Also yes. This is similar to what I hear about quantum computing. They say it will be able to break currently unbreakable codes instantly. But at the same time, quantum computing can make us potentially unhackable. Both teams are playing with the same ammunition,” Cieslak said.

By Axel Marazzi

Axel is a journalist who specializes in technology, writes for outlets such as RED/ACCIÓN and Revista Anfibia, and collaborates with the Inter-American Development Bank. He has a newsletter, Observando, and a podcast, Idea Millonaria.
