UK IT leaders fear malicious use of ChatGPT by foreign states

Half believe chatbot will be used in an attack this year

UK IT leaders are concerned about the potential for malicious use of the ChatGPT chatbot by foreign states, according to a new study.

The survey of 500 UK IT decision makers by BlackBerry found that, whilst 60% saw ChatGPT as generally being used for good, 72% were worried about its potential for malicious use in cybersecurity. In fact, almost half (48%) believed that a successful cyberattack using the technology could occur within the next year.

ChatGPT, OpenAI's free chatbot based on GPT-3.5, was released on 30th November 2022 and reached a million users within five days. Analysts are now calling it the fastest-growing web app in history, having reached 100 million users by January 2023.

The bot is regularly being used to take the stress out of day-to-day tasks, like writing professional emails, essays and code - and, according to reports, phishing emails. More than half (57%) of IT leaders said the chatbot's ability to write more believable phishing emails was a top concern, along with the rise in attack sophistication (51%) and its ability to accelerate new social engineering attacks (49%).

Similar numbers had concerns about the faster spread of misinformation (49%), and the chatbot's potential to help novice hackers improve their technical knowledge (47%).

An overwhelming majority of respondents, 88%, said governments have a responsibility to regulate technologies like ChatGPT.

Shishir Singh, cybersecurity CTO at BlackBerry, said ChatGPT's influence on the cyber industry would likely grow over time. But, he added, there were also reasons to be optimistic.

"We've seen a lot of hype and scaremongering, but the industry remains fairly pragmatic - and for good reason. There are many benefits to be gained from this kind of advanced technology and we're only scratching the surface, but we can't ignore the ramifications."

A January study by WithSecure showed how the GPT-3 model, which ChatGPT is based on, could be used to make social engineering attacks more difficult to detect and easier to carry out. Using AI based on language models, attackers could tweak phishing emails to be more personal, launching large-scale attacks with bespoke variations.

"The generation of versatile natural-language text from a small amount of input will inevitably interest criminals, especially cybercriminals," the researchers said. "Likewise, anyone who uses the web to spread scams, fake news or misinformation may have an interest in a tool that creates credible text at super-human speeds."