Friday, April 26, 2024

Is ChatGPT creating a cybersecurity nightmare? We asked the experts

ChatGPT feels pretty inescapable right now, with stories marveling at its abilities seemingly everywhere you look. We’ve seen how it can write code, render 3D animations, and compose music. If you can think of it, ChatGPT can probably take a shot at it.

Contents

  • Questionable abilities
  • A phisherman’s friend
  • Can ChatGPT boost your cybersecurity?
  • How to keep yourself safe

And that’s exactly the problem. There’s all manner of hand-wringing in the tech community right now, with commenters frequently worrying that AI is about to lead to a malware apocalypse, with even the greenest hackers conjuring up unstoppable trojans and ransomware.

But is this actually true? To find out, I spoke to a number of cybersecurity experts to see what they made of ChatGPT’s malware abilities, whether they were concerned about its potential for misuse, and what you can do to protect yourself in this dawning new world.

Questionable abilities

One of the main attractions of ChatGPT is its ability to perform complicated tasks with just a few simple prompts, especially in the world of programming. The fear is that this would lower the barriers to entry for creating malware, potentially risking a proliferation of virus writers who rely on AI tools to do the heavy lifting for them.

Joshua Long, Chief Security Analyst at security firm Intego, illustrates this point. “Like any tool in the physical or virtual worlds, computer code can be used for good or for evil,” he explains. “If you request code that can encrypt a file, for example, a bot like ChatGPT can’t know your real intent. If you claim that you need encryption code to protect your own files, the bot will believe you — even if your real goal is to create ransomware.”
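
To see how thin that line is, consider a minimal sketch of the kind of file-encryption code Long describes, written here in Python using the widely used cryptography library. The file path and key handling are illustrative assumptions, not anything ChatGPT actually produced, but the point stands: the exact same routine could protect your backups or serve as the core of a ransomware payload. Only intent separates the two.

```python
# A minimal, dual-use file-encryption sketch using the "cryptography"
# library (pip install cryptography). Whether this protects your own
# backups or locks someone else's files depends entirely on the intent
# of the person running it -- exactly Long's point.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(path: Path, key: bytes) -> None:
    """Encrypt a file in place with a symmetric Fernet key."""
    data = path.read_bytes()
    path.write_bytes(Fernet(key).encrypt(data))

def decrypt_file(path: Path, key: bytes) -> None:
    """Reverse the operation -- a ransomware author simply withholds this step."""
    data = path.read_bytes()
    path.write_bytes(Fernet(key).decrypt(data))

if __name__ == "__main__":
    key = Fernet.generate_key()  # in legitimate use, store this key somewhere safe
    encrypt_file(Path("notes.txt"), key)
    decrypt_file(Path("notes.txt"), key)
```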

ChatGPT has various safeguards in place to combat this sort of thing, and the trick for virus creators is in bypassing those guardrails. Bluntly ask ChatGPT to create an effective virus and it will simply refuse, requiring you to get creative in order to outwit it and get it to do your bidding against its better judgment. Considering what people have been able to do with ChatGPT jailbreaks, creating malware with AI is certainly possible in theory. In fact, it has already been demonstrated in practice, so we know the capability is real.

But not everyone is panicking. Martin Zugec, the Technical Solutions Director at Bitdefender, thinks the risks are still fairly small. “The majority of novice malware writers are not likely to possess the skills required to bypass these security measures, and therefore the risk posed by chatbot-generated malware remains relatively low at this time,” he says.

“Chatbot-generated malware has been a popular topic of discussion lately,” Zugec continues, “but there is currently no evidence to suggest that it poses a significant threat in the near future.” And there’s a simple reason for that. According to Zugec, “the quality of malware code produced by chatbots tends to be low, making it a less attractive option for experienced malware writers who can find better examples in public code repositories.”

So, while getting ChatGPT to craft malicious code is certainly possible, anyone who does have the skills needed to manipulate the AI chatbot is likely to be unimpressed with the poor code it creates, Zugec believes.

But as you might guess, generative AI is only just getting started. And for Long, that means the hacking risks posed by ChatGPT are not set in stone just yet.

“It’s possible that the rise of LLM-based AI bots may lead to a small-to-moderate increase in new malware, or an improvement in malware capabilities and antivirus evasion,” Long says, using an acronym for the large language models that AI tools like ChatGPT use to build their knowledge. “At this point, though, it’s not clear how much of a direct impact tools like ChatGPT are making, or will make, on real-world malware threats.”

A phisherman’s friend

If ChatGPT’s code-writing skills are not yet up to scratch, could it be a threat in other ways, such as by writing more effective phishing and social engineering campaigns? Here, the analysts agree that there is much more potential for misuse.

For many companies, one potential attack vector is the firm’s employees, who can be tricked or manipulated into inadvertently providing access where they shouldn’t. Hackers know this, and there have been plenty of high-profile social engineering attacks that have proved disastrous. For example, it’s thought that North Korea’s Lazarus Group started off its 2014 intrusion into Sony’s systems — resulting in the leaking of unreleased films and personal information — by impersonating a job recruiter and getting a Sony employee to open an infected file.

This is one area where ChatGPT could dramatically help hackers and phishers improve their work. If English is not a threat actor’s native language, for instance, they could use an AI chatbot to write a convincing phishing email aimed at English speakers. Or it could be used to rapidly generate large numbers of convincing messages in far less time than it would take human threat actors to do the same task.

Things could get even worse when other AI tools are thrown into the mix. As Karen Renaud, Merrill Warkentin, and George Westerman have postulated in MIT’s Sloan Management Review, a fraudster could generate a script using ChatGPT and have it read out over the phone by a deepfake voice that impersonates a company’s CEO. To a company employee receiving the call, the voice would sound — and act — just like their boss. If that voice asked the employee to transfer a sum of money to a new bank account, the employee may well fall for the ruse due to the deference they pay their boss.

As Long puts it, “No longer do [threat actors] have to rely on their own (often imperfect) English skills to write a convincing scam e-mail. Nor must they even come up with their own clever wording and run it through Google Translate. Instead, ChatGPT — wholly unaware of the potential for malicious intent behind the request — will happily write the entire text of the scam e-mail in any desired language.”

And all that’s required to get ChatGPT to actually do this is some clever prompting.

Can ChatGPT boost your cybersecurity?

Yet, it’s not all bad. The same traits that make ChatGPT an attractive tool for threat actors — its speed, its ability to find flaws in code — make it a helpful resource for cybersecurity researchers and antivirus firms.

Long points out that researchers are already using AI chatbots to find as-yet-undiscovered (“zero-day”) vulnerabilities in code, simply by uploading the code and asking ChatGPT to see if it can spot any potential weaknesses. That means the same methodology that could weaken defenses can be used to shore them up.
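
As a rough illustration of that workflow, a researcher might script something like the following, a hedged sketch using OpenAI’s official Python client. The model name, file name, and prompt wording are assumptions made for this example, and anything the model flags would still need to be verified by a human analyst.

```python
# Sketch: asking an LLM to review a source file for potential flaws.
# Assumes the official "openai" Python package and an OPENAI_API_KEY
# environment variable; model name and prompt are illustrative choices.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source = Path("suspect_module.py").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; an assumption, not a recommendation
    messages=[
        {"role": "system",
         "content": "You are a security code reviewer. List possible "
                    "vulnerabilities with line references and severity."},
        {"role": "user", "content": f"Review this code:\n\n{source}"},
    ],
)

# Treat the output as leads to investigate, not as a verdict.
print(response.choices[0].message.content)
```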

And while ChatGPT’s main attraction for threat actors may lie in its ability to write plausible phishing messages, those same talents can help train companies and users on what to look out for in order to avoid being scammed themselves. It could also be used to reverse engineer malware, helping researchers and security firms to quickly develop countermeasures.

Ultimately, ChatGPT by itself isn’t inherently good or bad. As Zugec points out, “The argument that AI can facilitate the development of malware could apply to any other technological advancement that has benefited developers, such as open-source software or code-sharing platforms.”

In other words, as long as the safeguards keep improving, the threat posed by even the best AI chatbots may never become as dangerous as has recently been predicted.

How to keep yourself safe

If you’re concerned about the threats posed by AI chatbots and the malware they can be abused to create, there are some steps you can take to protect yourself. Zugec says it’s important to adopt a “multi-layered defense approach” that includes “implementing endpoint security solutions, keeping software and systems up to date, and remaining vigilant against suspicious messages or requests.”

Long, meanwhile, recommends steering clear of files that you are automatically prompted to install when visiting a website. When it comes to updating or downloading an app, get it from the official app store or website of the software vendor. And be cautious when clicking on search results or logging into a website — hackers can simply pay to place their scam sites at the top of search results and steal your login info with carefully crafted lookalike websites.
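
That last piece of advice can even be partially automated. Below is a minimal Python sketch of the idea, assuming you maintain your own short list of trusted domains; the similarity threshold is an arbitrary illustrative choice, and real protection should come from your browser and security software rather than a script like this.

```python
# Sketch: flag URLs whose hostname merely resembles a trusted domain.
# The trusted list and threshold are illustrative assumptions; rely on
# your browser and security software for real protection.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED = {"paypal.com", "google.com", "microsoft.com"}

def check_url(url: str, threshold: float = 0.75) -> str:
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in TRUSTED or any(host.endswith("." + d) for d in TRUSTED):
        return f"{url}: hostname matches a trusted domain"
    for domain in TRUSTED:
        if SequenceMatcher(None, host, domain).ratio() >= threshold:
            return f"{url}: SUSPICIOUS - '{host}' looks like '{domain}'"
    return f"{url}: unknown domain, proceed with caution"

if __name__ == "__main__":
    print(check_url("https://www.paypa1.com/login"))  # lookalike, gets flagged
    print(check_url("https://www.paypal.com/login"))  # genuine
```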

ChatGPT is not going anywhere, and neither is the malware that causes so much damage all over the world. While the threat from ChatGPT’s coding ability may be overblown for now, its proficiency at crafting phishing emails could cause all manner of headaches. Yet it’s very possible to protect yourself from the threat it poses and ensure you don’t fall victim. Right now, an abundance of caution — and a solid antivirus app — can help keep your devices safe and sound.
