This article is based on a recent paper, written by Vincent van Wezel, Väinö Saarinen, Ronán Aardenburg and Laura van IJzendoorn. This group researched how the development of Artificial Intelligence (AI) has affected cybersecurity and whether AI can defend against its malicious use.

Introduction to AI in Cybersecurity

Artificial Intelligence (AI) has already entered our lives significantly: it is being used in medicine, robotics, transport, natural language processing, neural computing, accounting, weather forecasting and more (Pannu, 2015). Still, the full range of AI applications is yet to be discovered. As AI develops further, it comes as no surprise that it has already found its way into cybercrime. Recent developments show that AI is also making its way into cybersecurity (Gowtham & Krishnamurthi, 2014), which raises the question of what will happen when the two meet.

What risks does AI pose to cybersecurity?

Attacking the human element of cybersecurity through generative AI

Human error is one of the largest risks to cybersecurity. According to Verizon’s Data Breach Investigations Report (DBIR Report 2023 – Master’s Guide, 2023), over 82% of all data breaches involved a human element. AI can be used to exploit this human risk even further. Gupta et al. (2023) add to this argument, explaining that generative AI can be used to construct highly convincing phishing emails. They state that AI can impersonate high-ranking individuals within an organisation through deepfake audio or video, and that convincing social engineering messages can be generated with AI. Through this, phishing attacks like the spear phishing used in the Bangladesh Bank heist, which has been analysed in a recent ITHappens article, can become even more sophisticated.

Evasion of Detection

To detect cyber threats from malicious users, mainstream cybersecurity methods rely on rule-based algorithms and signature-based detection methods that catch the most common forms of attack. This is where AI could pose a threat: according to Carlini and Wagner (2017), adversarial attacks can trick machine learning algorithms into misclassifying certain inputs. This can be used to evade intrusion detection systems and circumvent firewalls.
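The idea behind such an evasion can be illustrated with a toy sketch. This is not Carlini and Wagner's actual method (which targets neural networks with gradient-based perturbations); it is a deliberately simplified linear "detector", with invented feature names and weights, showing how a small targeted nudge to the input can flip a classifier's verdict:

```python
# Toy illustration only: an adversarial perturbation against a simple
# linear "malicious traffic" classifier. Features and weights are made up.

def classify(weights, bias, x):
    """Linear classifier: returns True if the input is flagged as malicious."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return score > 0

def adversarial_perturb(weights, x, eps):
    """FGSM-style step on a linear model: nudge each feature against the
    sign of its weight, lowering the 'malicious' score as fast as possible."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

# Hypothetical model: features = [payload_entropy, request_rate, url_length]
weights, bias = [2.0, 1.5, 0.5], -4.0
malicious = [1.8, 1.2, 1.0]

print(classify(weights, bias, malicious))                      # True: detected
evasive = adversarial_perturb(weights, malicious, eps=0.6)
print(classify(weights, bias, evasive))                        # False: evades
```

The perturbed input is still close to the original, yet the detector no longer flags it; against real neural-network detectors the same effect is achieved with far subtler, gradient-guided changes.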

Model Manipulation and Poisoning

Most present-day AI makes use of machine learning models to construct new results based on user input. These models are fed large amounts of data from which they derive patterns, relations and common outcomes. According to a study by Muñoz-González et al. (2017), data manipulation is a real risk when using AI in a cybersecurity context, as poisoned training data can fundamentally change the way a machine learning algorithm functions.

According to Handa et al. (2019), an existing model can also be analysed to discover which inputs lead to certain outcomes. In this way, an already trained machine learning model is exploited to circumvent the cybersecurity defences it provides. By utilising a large amount of computing power, a great variety of inputs can be tested against the machine learning program to replicate its underlying model. Once replicated, the copy can be probed for weak points, which can then be exploited against the original model.
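As an illustrative toy (far simpler than attacks on real models), suppose the defender's "model" is just a hidden score threshold that the attacker can only query. Probing it systematically recovers the decision boundary, and the replica then reveals inputs the original will not flag:

```python
# Toy model-extraction sketch: the attacker has query-only access to a
# black-box detector and recovers its decision boundary by probing.

def black_box(x):
    """Hidden defender model: flags scores above 0.35.
    The attacker cannot read this code, only call it."""
    return x > 0.35

def extract_threshold(query, lo=0.0, hi=1.0, steps=30):
    """Binary-search the decision boundary using only query access."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if query(mid):
            hi = mid        # flagged: boundary is below mid
        else:
            lo = mid        # not flagged: boundary is above mid
    return (lo + hi) / 2

replica_threshold = extract_threshold(black_box)
print(round(replica_threshold, 3))       # ~0.35: boundary recovered

# With the replica, craft an input just below the boundary
# that the original model will not flag.
print(black_box(replica_threshold - 0.01))   # False
```

Thirty probes suffice here; against real models the same principle needs far more queries and a surrogate model trained on the observed input-output pairs, but the asymmetry is the same: the defender exposes one bit per query, and the attacker only needs enough bits.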

Learning from past mistakes & Automation

The ability to learn from past mistakes is what makes AI one of the most powerful adversaries in the cybersecurity field. Machine learning algorithms can refine their handling of certain problems to a near-perfect level; their evasion of cybersecurity systems could therefore, in theory, also be perfected.

“As the intelligence of AI systems improves, practically all crimes could be automated.” (Yampolskiy, 2016)

Automation is another element that strengthens all malicious AI use cases. As a computer program can be set up to work on tearing down cybersecurity defences 24/7, it is only a matter of time before it succeeds. By combining this automation with the ability to learn from past mistakes, a malicious AI could be multiple steps ahead of any cybersecurity protocol once it has gathered sufficient input from past attempts.

What are the possibilities of Artificial Intelligence within cybersecurity?

AI is already becoming vital in the cybersecurity industry. Multiple companies use it in their defence against possible cyberattacks, but also to protect themselves from human errors, such as falling for phishing (Gowtham & Krishnamurthi, 2014). AI can automate simple day-to-day tasks, such as data analysis, monitoring network traffic and analysing user behaviour. It can also support cybersecurity engineers in their programming, e.g. by preventing mistakes from entering a live build or by reminding engineers that a weakness still exists in their system.
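The kernel of one such automatable task, monitoring network traffic, can be sketched in a few lines. Production systems use far richer features and models; this is only a crude statistical spike detector on invented numbers:

```python
# Minimal traffic-monitoring sketch: flag samples that sit far from the
# mean, measured in standard deviations. All traffic figures are made up.

from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean (a crude traffic-spike detector)."""
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, s in enumerate(samples) if abs(s - mu) / sigma > threshold]

# Requests per minute; one burst that could indicate exfiltration or a DoS.
traffic = [120, 118, 125, 121, 119, 122, 950, 117, 123, 120]
print(anomalies(traffic))   # [6]
```

Running such a check continuously, instead of having an analyst eyeball dashboards, is precisely the kind of routine work the article argues AI-driven tooling takes over.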

That is what AI can do in the present, but what could AI do for cybersecurity in the future? This is a difficult question to answer, as we still do not know the full range of AI’s capabilities, especially with quantum computing on the horizon, which could completely change the current IT landscape.

In addition, we do not know what cyberattacks might look like in the future. The way malicious actors attack may change, and it is possible that AI cannot support us against these kinds of attacks at all.

What we do know for sure is that AI will remain applicable to its current range of uses: it will still assist cybersecurity engineers whilst programming, identify phishing emails, automate simple tasks such as data analysis, and monitor network traffic.

Is Cybersecurity Artificial Intelligence able to defend against malicious Artificial Intelligence?

Nowadays, AI is being leveraged extensively to advance the defensive capabilities of cybersecurity, thanks to its powerful automation and data analysis capabilities. An AI system can draw on its understanding of past threats to identify similar attacks in the future, even if their patterns change.

Undoubtedly, artificial intelligence offers several advantages for cybersecurity: it can discover new and sophisticated variations in attacks, it can handle large volumes of data, and it can automatically detect and respond to threats. This automatic detection and response has reduced the workload of network security experts and can detect threats more effectively than other methods (Truong et al., 2020).
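What "automatically detect and respond" means in its most basic form can be sketched as a loop from observation to action. This is a deliberately minimal illustration with invented IPs and thresholds; real deployments push the response into firewall rules or a SOAR platform rather than an in-memory set:

```python
# Minimal detect-and-respond sketch: count failed logins per source IP
# and automatically "block" offenders. All addresses are invented.

from collections import Counter

def detect_and_respond(login_failures, blocklist, max_failures=5):
    """Detect brute-force behaviour and respond by quarantining the source."""
    for ip, failures in Counter(login_failures).items():
        if failures > max_failures and ip not in blocklist:
            blocklist.add(ip)   # response step: quarantine the source
    return blocklist

events = ["10.0.0.5"] * 8 + ["10.0.0.9"] * 2 + ["10.0.0.7"] * 6
blocked = detect_and_respond(events, set())
print(sorted(blocked))   # ['10.0.0.5', '10.0.0.7']
```

Everything beyond this skeleton, learning what "normal" looks like, correlating events across systems, and choosing a proportionate response, is where the AI techniques discussed above come in.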

Though AI can do much for cybersecurity, one could still question its effectiveness.

“we can arrive at a simple generalisation: An AI designed to do X will eventually fail to do X” (Yampolskiy, 2016)

By this, the author means that AI failures will always exist and that we cannot rely on AI alone. The author gives some examples of such failures, like software that was designed to make discoveries but instead discovered how to cheat, or a self-driving car that got into an accident.

These AI failures stem from errors inherent in the very intelligence these systems are engineered to demonstrate. They can broadly be divided into two categories: those occurring during the learning phase and those occurring during the performance phase (Yampolskiy, 2016).

In the end, one of the biggest challenges when incorporating AI into cybersecurity is designing systems, and managing their data, in such a way that availability and accessibility are maintained across various platforms (Chakraborty et al., 2023).


Artificial Intelligence is both a blessing and a curse to cybersecurity: while some of its aspects pose greater threats, others open up possibilities to defend against both new and existing threats to an even greater extent. Many malicious use cases of AI exploit the same pitfalls cybersecurity currently faces. In the end, time will tell which side will win this fight.

References


Carlini, N., & Wagner, D. (2017). Adversarial examples are not easily detected: bypassing ten detection methods. arXiv (Cornell University).

Chakraborty, A., Biswas, A., & Khan, A. K. (2023). Artificial intelligence for cybersecurity: Threats, attacks and mitigation. In Intelligent systems reference library (pp. 3–25).

DBIR Report 2023 – Master’s Guide. (n.d.). Verizon Business.

Gowtham, R., & Krishnamurthi, I. (2014). A comprehensive and efficacious architecture for detecting phishing webpages. Computers & Security, 40, 23–37.

Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access, 11, 80218–80245.

Handa, A., Sharma, A., & Shukla, S. K. (2019). Machine learning in cybersecurity: A review. Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery, 9(4).

Muñoz-González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E., & Roli, F. (2017). Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization. AISec.

Statista. (2023, August 29). Annual amount of financial damage caused by reported cybercrime in U.S. 2001-2022.

Truong, T. C., Diep, Q. B., & Zelinka, I. (2020). Artificial intelligence in the cyber domain: offense and defense. Symmetry, 12(3), 410.

Truong, T. C., Zelinka, I., Plucar, J., Čandík, M., & Šulc, V. (2020). Artificial intelligence and cybersecurity: past, presence, and future. In Advances in intelligent systems and computing (pp. 351–363).

Yampolskiy, R. V. (2016). Taxonomy of pathways to dangerous artificial intelligence. National Conference on Artificial Intelligence.

Yampolskiy, R. V. (2016). Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures. arXiv (Cornell University).
