AI and Cybersecurity: A Double-Edged Sword

Artificial intelligence, with its ability to automate tasks and make decisions faster than humans, is a topic on everyone’s lips lately. In recent years, AI has reached remarkable performance and is increasingly used in day-to-day life: software development, process automation, facial recognition systems, text editors, search and recommendation algorithms, and so on. Many suggest that this technology has the capacity to significantly change the world as we know it. However, it clearly has a dual aspect, since it can be used in both positive and malicious ways. One example would be automation that reduces the time to discover a fracture in a radiograph, versus deepfake technology used to manipulate people.

No wonder AI has become an integral part of the geopolitical competition between China and the US, with both states investing heavily to dominate this space. Considering the use of AI and other technologies to augment the power of cyber attacks, we see a future where privacy is long gone. Our aim with this article is to look into the growing issues around the use of AI and the implications of its increased use in today’s society, taking a strategic perspective on a technological topic.

Impact of Artificial Intelligence in Cybersecurity

Artificial Intelligence, as mentioned before, comes with great opportunities and a plethora of uses. On the other hand, it brings great challenges. While it can be used to improve defensive mechanisms in cybersecurity, as DarkTrace is doing for example, it can also enable better-targeted and more sophisticated cyber attacks. Accordingly, it can affect the entire cybersecurity landscape by expanding existing threats, introducing new ones, or altering the typical characteristics of threats. AI-supercharged cyber attacks will typically have a decision-making capability far faster than human decision makers, and malware coupled with machine learning algorithms can be perfected at great speed and complexity to surpass modern defensive mechanisms. If this sounds scary, it probably is, and the race for dominance in AI implies its use for offensive purposes as well, by both state and non-state actors.

What is the current threat landscape? Most importantly, recent developments have seen a move towards a weaponization of data. Data is the new petrol, they say, and this is often visible in the tech industry, where companies compete fiercely over the acquisition of user data. The trend spans multiple domains, from marketing to politics and international relations. Information is itself a double-edged sword that can both empower through knowledge and further ‘evil’ purposes.

This development can also be seen in certain AI technologies such as deepfakes. Based on machine learning, this technology allows the creation of images, audio and video that emulate legitimate content: it manipulates existing data to produce strikingly real-looking new content. Even from this broad explanation, one can see how deepfakes can become a threat. The higher their quality, the more confusion they can spread and the more far-reaching disinformation campaigns become. The highest-quality deepfakes are obtained through GANs (generative adversarial networks), which pit two machine learning systems against each other in a competition. This technique is often described as artificial ‘imagination’: the contest forces one of the two systems to create ever more realistic images in order to fool the other. Within the next few years, both states and criminal threat actors involved in disinformation operations will likely turn to such means, as online media consumption shifts further towards ‘seeing is believing’.
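To make the adversarial idea concrete, here is a deliberately tiny, illustrative sketch (our own toy construction, not production deepfake code): a one-parameter-pair “generator” learns to mimic samples from a target distribution while a logistic-regression “discriminator” tries to tell real from fake. Real GANs use deep neural networks and image data, but the alternating two-player training loop has exactly this shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b, trying to mimic real data ~ N(4, 0.5)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs P(x is real)
w, c = 0.0, 0.0
lr, batch = 0.05, 32

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)   # genuine samples
    fake = a * rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - p_real) * real + p_fake * fake)
    c -= lr * np.mean(-(1 - p_real) + p_fake)

    # Generator step: push D(fake) -> 1 (i.e. fool the discriminator)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    dfake = -(1 - sigmoid(w * fake + c)) * w  # gradient of G's loss w.r.t. fake
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generated mean:", samples.mean())  # should drift towards the real mean of 4
```

The generator never sees the real data directly; it only receives the discriminator’s gradient signal, which is what makes the same loop so effective at producing convincing fakes at scale.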

Deepfake technology is often misused for malicious purposes, including scams and election manipulation. Moreover, it can be used to bypass biometric security protections, possibly leading to identity theft and financial fraud. From a cybersecurity perspective, it is increasingly argued that social engineering attacks can be furthered through deepfakes as the availability of personal information online grows. Just recently, an employee of a UAE company was tricked into sending $35 million to a person who deepfaked the voice of his boss. Many believe this constitutes the next big change in the cyber threat landscape – the number of deepfake phishing incidents will rise, mainly because the technology is mature, it is harder to mitigate than regular phishing, it is more effective at exploiting trust, and it is new as a phishing tactic, meaning that people are not expecting it and do not know how to deal with it.


Apart from being employed in the usual cybercrime domain, deepfakes can also damage society at large, through disinformation campaigns for example. They can affect the political and social stage and, beyond the already mentioned danger of election manipulation, malicious actors can easily use fabricated audio or video content to deeply disturb the functioning of a State. They can distort democratic discourse, exploit pre-existing social divisions, erode trust in public institutions, compromise military and intelligence operations, affect the economy and even damage a State’s image on the international level.

We can conclude that deepfakes can be used in malicious ways and have a more far-reaching impact on society than expected by most. They can be used as part of a larger hybrid attack, or can be deployed by themselves in order to destabilise and deceive. This goes hand in hand with a warning issued by the FBI in March this year, stating that ‘nation-states are virtually certain to use deepfakes to help propagate increasingly misleading campaigns in the US in coming months.’

AI technology is both a threat and a support system when it comes to cybersecurity specifically. Experts point out that the best and most efficient way to mitigate emerging cyber threats is to use AI itself as a defense mechanism. However, in cybersecurity adversaries always have the upper hand, and this will remain true with AI. Any amount of defensive cybersecurity is only valid until a breach happens, and breaches usually do happen at some point. This is why we thought about creating something different, something that helps organizations discover how their security infrastructure responds to a real attack, using a real scenario. The StageOne product that we are developing, an adversarial attack simulation framework, has exactly this role: it emulates the modus operandi of Advanced Persistent Threats and, in this way, identifies vulnerabilities before they can be exploited by attackers. Our vision for StageOne is to add machine learning capabilities to the implant, enabling it to make decisions on its own and surpass even the most advanced defensive mechanisms. Why? Because we believe the best defense can only come after knowing one’s weak points against constantly evolving attack tactics and strategies. One could argue that technologies like this can bridge, to a certain extent, the gap that exists in cyberspace between defense and offense, by using both to better adapt security systems to ever-changing threats.

The conclusion is that AI is here to stay, and while some states want to regulate its ethical use, the bad guys are already using it to create the next generation of cyber weapons and deepfakes. The advantages that come with this technology are incredible, and this is a large step for humanity. However, it must be borne in mind that one of the greatest threats AI poses to society is not the technology itself, but its potential to be combined with other technologies and bring about unexpected challenges. This leads back to AI being a double-edged sword, with both positive and malicious uses. In any case, security should never be taken for granted and, as threats keep changing and evolving, so should the tactics employed to tackle them.



Alexandra Ivan

Felix Staicu