AI chatbots as double-edged swords in cybersecurity
The arrival of generative artificial intelligence (AI)[1] has profoundly shaken up our society. As we come to grips with the ever-tightening hold AI chatbots have on us, we are starting to investigate how these new technologies will change our world – including cyberspace.
Cybersecurity is still a broadly under-discussed issue – except when the occasional cyberattack makes the headlines. Yet cybersecurity is imperative for all of us, and especially for the critical actors working at the front lines of society. For civil society organizations (NGOs), cybersecurity is of paramount importance, since a successful attack could impede an NGO’s humanitarian mission and inflict serious human harm.
It is, therefore, crucial to reflect on how generative AI will transform cyberspace – for better and for worse. As with many other technologies, AI chatbots appear to be a double-edged sword, for cybersecurity and beyond. We aim to shed some light on both the positive and the negative sides, so that we may benefit from the opportunities and act preemptively against potential new threats.
The Rise of AI in Cybersecurity
Taking a step back, this is far from the first time AI has been mobilized in cybersecurity – and not the first time it has been a double-edged sword either. AI has been used in cybersecurity since the 1990s, when machine learning (ML) and neural networks were applied to spam filtering[2]. With the advent of ‘big data’, AI systems can identify vulnerabilities and thus more easily keep up with cybercriminals and the large volume of newly generated viruses.
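To illustrate this early ML-based defense, a 1990s-style spam filter can be sketched as a naive Bayes classifier over word counts. The snippet below is a minimal, illustrative toy (the function names and training data are our own, not from any real product), not a production filter:

```python
import math
from collections import Counter

def train(messages):
    """Count word occurrences per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher naive Bayes log-score (Laplace smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        n_words = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy usage: learn from a handful of labeled messages, then classify new ones.
examples = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("please review the quarterly report", "ham"),
]
counts, totals = train(examples)
print(classify("free prize now", counts, totals))  # → spam
```

Modern defenses layer many more signals on top, but the core idea – scoring a message against word statistics learned from labeled examples – is the same one that entered cybersecurity three decades ago.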
Furthermore, AI-powered cybersecurity can automate threat detection, detect harmful bots more accurately, provide enhanced endpoint security for remote-working devices, and respond more efficiently to traditional cyberattacks[3]. As AI became a shield for cybersecurity, threat actors started seeing its potential as a sword. AI-driven cyberattacks are more adaptable to their environment, learn from past cyberattacks, and improve their ability to hide from antivirus systems[4].
Generative AI Chatbots as the New Double-Edged Sword of Cybersecurity
Nowadays, commercially available chatbots powered by large language models (LLMs), such as ChatGPT, Bing, and Google Bard, are transforming the landscape of cybersecurity once again. The two abilities of AI chatbots with the most potential to change cybersecurity are generating and analyzing code, and generating text, speech, and images. We will discuss both and show how each can make cyberspace both safer and more dangerous.
AI Chatbots as a New Line of Defense
Cybersecurity experts have started praising ChatGPT for its potential use in cybersecurity defense. When used properly, ChatGPT has shown the ability to understand and locate malware code and to provide valuable insights and remediation advice. This could make AI chatbots a valuable asset for NGOs, rendering their cybersecurity operations more efficient and sophisticated and automating mundane daily tasks to free up time for more advanced work[5]. Furthermore, AI chatbots can lower the barrier to entry for junior cybersecurity analysts by making new cybersecurity tools easier to use and understand[6].
However, this promising use of AI chatbots comes with potentially negative side effects. Increased automation can lead to job losses in the cybersecurity and IT sector. What is more, due to the lack of data protection in most AI chatbots, any detailed information provided as input to a chatbot is at risk of being harvested by malicious actors, giving them an edge on any new cybersecurity tools being developed with the help of generative AI.
AI Chatbots as a New Attack Vector
AI chatbots have an increasingly sophisticated knowledge of programming languages, allowing anyone to create functioning code in almost any language in a matter of seconds. While this has the potential to democratize computer coding, it also has the potential to democratize cyberattacks. In a survey of IT professionals across North America, the UK, and Australia, seven out of ten respondents considered ChatGPT a potential cybersecurity threat, and more than half predicted that ChatGPT will be used in a successful cyberattack within a year[7].
Only a month after the release of ChatGPT, Check Point Research (CPR) detailed how ChatGPT was able to draft a phishing email and implement code in an attached Excel file that would download and run ChatGPT-generated malware, giving the researchers complete control over the target computer[8]. With astonishing ease, the researchers created a functioning reverse shell without writing a single line of code themselves. A few days later, a user named USDoD disclosed on an underground hacking forum that he had created a Python-based encryption script suitable for building ransomware using only ChatGPT, despite having no discernible scripting knowledge[9]. And in early April 2023, Aaron Mulgrew, a researcher at Forcepoint, showed in a blog post how he was able to create an undetectable virus using only ChatGPT prompts[10]. These are only a few examples of how the astounding coding powers of AI chatbots can be used by threat actors to automate the creation of malware. What is more, just as AI chatbots can lower the barrier to entry for cybersecurity newcomers, they can equally lower the barrier for technically unskilled actors to become dangerous cybercriminals[11].
Generating Text, Images, and Audio
AI Chatbots Used for Education and Capacity Building
AI chatbots’ ease at generating human-like content is simultaneously one of their biggest strengths and an additional threat to cybersecurity. On the promising side, AI chatbots can be used as powerful educational tools. They offer a way for organizations to quickly and easily create personalized lesson plans, digital flashcards, and interactive scenarios, making it easier for people to learn appropriate cyber hygiene practices[12]. Most promisingly, AI chatbots can help people learn by applying critical thinking instead of rote memorization[13]. Furthermore, AI chatbots can foster individualized learning by allowing individuals to educate themselves about cybersecurity threats and how best to protect against them. This can increase digital literacy and potentially protect NGOs and their staff from routine cyberattacks such as phishing campaigns.
These opportunities come with significant caveats, as LLMs are still far from perfect and AI chatbots are prone to hallucination and producing outright false information[14]. While it is unlikely that chatbots will provide false information on basic cyber hygiene, they might instill a false sense of confidence in one’s cybersecurity knowledge when it comes to more sophisticated topics. Additionally, due to the lackluster data protection of AI chatbots, any sensitive or personal data entered into a chatbot can be leaked and fall into the hands of the very threat actors NGOs sought to protect themselves from. Leaked sensitive data can lead to financial damages for the organization and a loss of trust from beneficiaries and donors alike.
AI Chatbots Enhance Phishing and Disinformation Campaigns
On the dangerous side, generative AI can help threat actors craft persuasive phishing messages that avoid typical giveaways such as spelling mistakes or confusing language[15]. Beyond text-generating AI chatbots, increasingly sophisticated text-to-speech (TTS) generators that can mimic voices from short recordings allow scammers to conduct more convincing phone scams and social engineering efforts[16]. Phishing attacks are already one of the largest threats to NGOs, and without adequate capacity and awareness building, these upgraded phishing attacks will become even more detrimental to civil society. Disinformation equally benefits from the human-like writing skills of AI chatbots. Even more worryingly, chatbots’ tendency to hallucinate and their flimsy relationship with the truth make them an ideal tool for creating disinformation. Both phishing and disinformation campaigns have been limited by the time it takes to create personalized phishing emails and fake news articles. Generative AI resolves this bottleneck, making these campaigns both more efficient to produce and more effective in their desired outcome. In other words, AI chatbots will increase both the quantity and the quality of phishing and disinformation campaigns.
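Those “typical giveaways” are exactly the kind of surface signals that simple phishing filters and user training key on, which is why fluently written AI-generated lures undermine them. A toy heuristic checker (the word list and patterns below are illustrative inventions, not from any real product) shows how brittle such cues are:

```python
import re

# Illustrative red-flag cues; real filters combine far more signals
# (sender reputation, link analysis, ML scoring, user reports).
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}

def phishing_indicators(text):
    """Return a list of simple red flags found in an email body."""
    flags = []
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & URGENCY_WORDS:
        flags.append("urgent language")
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        flags.append("link to a raw IP address")
    if re.search(r"password|credentials", text, re.IGNORECASE):
        flags.append("asks for credentials")
    return flags

# A crude lure trips every rule; a fluent, AI-polished lure with a
# plausible-looking domain and no urgency clichés may trip none.
print(phishing_indicators(
    "URGENT: verify your password at http://192.168.0.1/login"
))  # → ['urgent language', 'link to a raw IP address', 'asks for credentials']
```

The point of the sketch is negative: once generative AI removes spelling errors, awkward phrasing, and other crude tells, defenses built on cues like these stop working, and awareness training has to shift toward verifying senders and requests rather than spotting bad writing.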
Generative AI might well turn out to be a double-edged sword for the cybersecurity of organizations. Using AI chatbots in IT teams can lighten workloads and render operations more efficient, but it might also lead to job displacement or data leaks. And while AI chatbots can be used to draft communications or write reports and blog posts, they can also be used to craft persuasive phishing and disinformation campaigns.
What can be done to reduce or mitigate the negative side of this double-edged sword? For organizations to leverage the power of generative AI to improve their internal cybersecurity operations and to educate their staff on cyber hygiene, these large language models need to be based on robust and unbiased training data. AI developers need to implement stricter data protection and guarantee that no input data can be harvested by third parties. Concerning the malicious use of AI chatbots, AI companies have introduced increasingly rigid guardrails to prevent unintended use cases – with limited success. It will be essential to push for more awareness and capacity building among NGOs and their staff on the new generation of phishing and disinformation campaigns that appears to be around the corner. Considering that many NGOs already lack the resources to invest in their cybersecurity, it will take the collaboration of all stakeholders to help protect civil society in the changing threat landscape of cyberspace.
[1] Artificial intelligence (AI) refers to computer systems capable of performing tasks typically requiring some form of intelligence, like decision-making, visual perception, and speech recognition. Techniques vary in complexity and include algorithms, predictive models, computer vision, deep learning, machine learning, neural nets, and natural language processing (Nonnecke and Dawson 2022). Generative AI refers to an algorithm that is capable of generating new content, such as images, sound, text, and code (Lawson 2023).
[2] Fabio Cristiano et al., “Artificial Intelligence and International Conflict in Cyberspace,” in Artificial Intelligence and International Conflict in Cyberspace, by Dennis Broeders et al., 1st ed. (London: Routledge, 2023), 1–15.
[3] Ashaq Azhar Mohammed, “Artificial Intelligence for Cybersecurity: A Systematic Mapping of Literature,” International Journal of Innovations in Engineering Research and Technology 7, no. 9 (September 2020).
[4] Cong Truong Thanh and Ivan Zelinka, “A Survey on Artificial Intelligence in Malware as Next-Generation Threats,” MENDEL 25, no. 2 (December 20, 2019): 27–34.
[5] Christopher Prewitt, “Four Ways ChatGPT Is Changing Cybersecurity,” Forbes, September 3, 2023.
[6] Thomas Aneiro, “ChatGPT Is about to Revolutionize Cybersecurity,” VentureBeat (blog), May 14, 2023.
[7] Jonathan Jackson, “The Growing Influence of ChatGPT in the Cybersecurity Landscape,” BlackBerry Blog (blog), March 3, 2023.
[8] Sharon Ben-Moshe, Gil Gekker, and Golan Cohen, “OpwnAI: AI That Can Save the Day or HACK It Away,” Check Point Research, December 19, 2022.
[9] Alexis Zacharakos, “How Hackers Can Abuse ChatGPT to Create Malware,” TechTarget Security, February 22, 2023.
[10] Aaron Mulgrew, “I Built a Zero Day Virus with Undetectable Exfiltration Using Only ChatGPT Prompts,” Forcepoint (blog), April 4, 2023.
[11], [15] Insikt Group, “I, Chatbot,” Cyber Threat Analysis, Recorded Future, 2023.
[12] Will Douglas Heaven, “ChatGPT Is Going to Change Education, Not Destroy It,” MIT Technology Review, 2023.
[13] Ashley Abramson, “How to Use ChatGPT as a Learning Tool,” APA Monitor on Psychology 54, no. 3 (2023).
[14] Hussam Alkaissi and Samy I. McFarlane, “Artificial Hallucinations in ChatGPT: Implications in Scientific Writing,” Cureus, February 2023.
[16] Ryan Morrison, “Microsoft’s New VALL-E AI Can Clone Your Voice from a Three-Second Audio Clip,” Tech Monitor (blog), January 10, 2023.