AI is playing an increasingly important role in cybersecurity, delivering significant, and in some respects unprecedented, advantages in several areas. For example:
- Vulnerability Management: AI and machine learning use advanced analytics to detect unusual patterns and identify vulnerabilities.
- Data Handling: AI monitors transactional data across the network to guard it against potential threats.
- Threat Detection: AI uses behavioral analysis to identify malicious activities in real time and immediately responds to threats.
- Network Monitoring and Security: AI establishes a baseline of normal network traffic, and uses this baseline to evaluate and protect the network (a simplified version of this baselining appears in the sketch after this list).
- Higher Security Over Time: AI learns continuously, so its security measures keep improving with time.
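To give a feel for how traffic baselining works, here is a minimal sketch in Python. It assumes a single hypothetical feature, bytes transferred per minute, and flags values that stray too far from a learned baseline using a simple z-score test; production systems model many features with far richer methods.

```python
# Minimal sketch of network-traffic baselining, not a production detector.
# The feature (bytes per minute) and the baseline values are hypothetical.
import statistics

# A learned picture of "normal" traffic, e.g. from the last ten minutes:
baseline = [52_000, 49_500, 51_200, 50_800, 48_900,
            50_100, 49_700, 51_900, 50_400, 49_800]

def is_anomalous(bytes_per_minute: float, window: list[float],
                 threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from normal."""
    mean = statistics.fmean(window)
    stdev = statistics.stdev(window)
    return abs(bytes_per_minute - mean) / stdev > threshold

print(is_anomalous(50_300, baseline))   # False: within the normal range
print(is_anomalous(415_000, baseline))  # True: a spike worth investigating
```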
Clearly, AI has become a vital piece of the overall cybersecurity puzzle, and we are just getting started. Consider that the global AI-based security market is expected to reach $133.8 billion by 2030, up from $14.9 billion in 2021 (all figures USD).
The Rise of Offensive AI
Despite this positive development, there is also growing awareness that adversaries are updating their playbook to exploit AI. This phenomenon has been dubbed “offensive AI” (also known as weaponized AI). A survey of security leaders commissioned by Darktrace and conducted by Forrester found:
- 88 percent believe that offensive AI is inevitable.
- 77 percent expect offensive AI to lead to an increase in the scale and speed of attacks.
- 75 percent cite system/business disruption as their top concern about offensive AI.
- 66 percent believe that offensive AI will trigger attacks that no human can envision or anticipate.
A separate survey by Deloitte found that 43 percent of CIOs have “major or extreme concerns about potential AI risks,” with 49 percent saying that AI-related cybersecurity vulnerabilities are among their biggest fears.
How Adversaries Are Exploiting AI
According to Hubsecurity.com, here are some of the ways that adversaries are exploiting vulnerabilities in AI and ML systems:
- providing systems with maliciously crafted inputs that cause algorithms to make false predictions, forcing machines to make decisions on unverified data (illustrated in the sketch after this list)
- corrupting (also known as “poisoning”) the integrity and reliability of datasets
- deceiving task-specific ML models that have been trained to achieve designated objectives
- feeding online systems false inputs, or gradually retraining them, so that they produce faulty outputs
- launching inconspicuous data extraction attacks that target the ML model itself; adversaries are also carrying out smaller sub-symbolic extraction attacks, which are less complex, faster, and require fewer resources
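To make the first of these concrete, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method (FGSM) against a toy linear detector. The weights, sample, and epsilon below are hypothetical stand-ins; real attacks target far more complex models, but the mechanics are the same.

```python
# Sketch of a gradient-based evasion attack on a toy linear "detector".
# Everything here is hypothetical; the point is only that a bounded,
# carefully signed perturbation can flip a confident prediction.
import numpy as np

n = 100
w = np.where(np.arange(n) % 2 == 0, 0.5, -0.5)  # hypothetical model weights

def score(x: np.ndarray) -> float:
    """Probability the detector assigns to the "malicious" class."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A sample the detector confidently flags as malicious (logit = 5.0):
x = 0.1 * np.sign(w)
print(f"original score:    {score(x):.3f}")      # ~0.993

# For a linear model, the gradient of the logit w.r.t. the input is w, so
# stepping each feature by -eps * sign(w) lowers the logit by eps * sum(|w|).
eps = 0.2
x_adv = x - eps * np.sign(w)
print(f"adversarial score: {score(x_adv):.3f}")  # ~0.007: now looks benign
```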
In addition, adversaries are using AI to carry out disinformation (“fake news”) campaigns, some of which look so authentic and credible that they are tricking seasoned cybersecurity professionals. Comments Fast Company:
“Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. These transformers are also used to spread false threat intelligence, and do so well enough to stump actual cyberthreat hunters. When this misinformation regarding cyberthreats is spread, it can force the security team to refocus its attention to fake risks, which leaves systems open to real harmful attacks that could have tragic results.”
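To appreciate how low the barrier is, consider that fluent text generation is a few lines of code away. The sketch below uses the Hugging Face transformers library with the small, publicly available GPT-2 model; the prompt is hypothetical, and the output is merely fluent-sounding rather than factual, which is exactly the problem.

```python
# Minimal sketch: generating fluent text with a small public model (GPT-2)
# via the Hugging Face `transformers` library (pip install transformers).
# The prompt is hypothetical; this demonstrates only how easily
# plausible-sounding text can be produced at scale.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Security researchers have confirmed a critical vulnerability in"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])  # fluent, confident, and unverified
```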
How to Protect AI Systems
What can businesses do to protect their AI systems? Experts point to three core pillars:
- Maintain extremely strict security protocols across the entire data environment. This includes (but is not limited to) the principle of least privilege, zero trust, segregation of duties, defense-in-depth, and the four-eyes principle.
- Log and create an audit trail that captures all operations performed by AI (a simple sketch follows this list). Visibility is needed to ensure that transparency requirements are being met, but not exceeded. To mitigate risk, it may be necessary or desirable to dial back or disallow certain model forms that are too complex or opaque.
- Implement strong access control and authentication. This includes MFA, robust password management, and comprehensive end user training. For most businesses, this also includes privileged session management and/or privileged access management to support functions such as secure credential injection, automated password rotation, account discovery, alerts/notifications, and checkout request governance.
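On the audit-trail pillar, here is a minimal sketch of what logging AI operations can look like: a decorator that records a timestamp, a hash of the input, and the output of every model call. The model function and field names are hypothetical placeholders.

```python
# Sketch of an AI audit trail: every model call leaves a structured record.
# The model below is a hypothetical stand-in for a real inference call.
import functools
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(model_fn):
    """Wrap a model call so every prediction leaves an audit record."""
    @functools.wraps(model_fn)
    def wrapper(features: dict):
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model_fn.__name__,
            "input_sha256": hashlib.sha256(
                json.dumps(features, sort_keys=True).encode()).hexdigest(),
        }
        record["output"] = model_fn(features)
        audit_log.info(json.dumps(record))
        return record["output"]
    return wrapper

@audited
def threat_score(features: dict) -> float:
    # Hypothetical model: a real system would call a trained classifier here.
    return min(1.0, features.get("failed_logins", 0) / 10)

threat_score({"failed_logins": 7, "src_ip": "203.0.113.7"})
```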
Fighting Back with Defensive AI
We are also seeing the development and emergence of “defensive AI”: the use of AI and machine learning methods to analyze large volumes of data, learn the normal and abnormal behavioral patterns of a system, and thereby detect new kinds of attacks while continually improving accuracy. Comments the World Economic Forum:
“Defensive AI is not merely a technological advantage in fighting cyberattacks, but a vital ally on this new battlefield. Rather than rely on security personnel to respond to incidents manually, organizations will instead use AI to fight back against a developing problem in the short term, while human teams will oversee the AI’s decision-making and perform remedial work that improves overall resilience in the long term.”
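The continual-learning idea can be sketched in a few lines. The example below maintains an exponentially weighted baseline over a single hypothetical metric (requests per second); it flags large deviations and keeps adapting its notion of “normal” as new observations arrive, loosely mirroring how defensive AI refines itself over time.

```python
# Sketch of a self-updating baseline. The metric (requests/second) and all
# thresholds are hypothetical; real defensive AI models many signals at once.
class AdaptiveBaseline:
    def __init__(self, alpha: float = 0.05, threshold: float = 4.0):
        self.alpha = alpha          # how quickly "normal" adapts
        self.threshold = threshold  # deviation multiplier that triggers alerts
        self.mean = None
        self.var = 1.0

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous, then update the baseline."""
        if self.mean is None:
            self.mean = value
            return False
        deviation = abs(value - self.mean)
        anomalous = deviation > self.threshold * (self.var ** 0.5)
        # Exponentially weighted updates: recent behavior matters most.
        self.mean += self.alpha * (value - self.mean)
        self.var = (1 - self.alpha) * self.var + self.alpha * deviation ** 2
        return anomalous

detector = AdaptiveBaseline()
for rps in [100, 103, 98, 101, 99, 950, 102]:  # 950 simulates a sudden burst
    if detector.observe(rps):
        print(f"alert: {rps} requests/sec deviates sharply from baseline")
```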
Indeed, the primary role of humans — and more specifically, highly trained cybersecurity specialists — in the new world of defensive AI should not be overlooked. Ideally, AI will be responsible for sifting through enormous volumes of data, carrying out routine tasks, and adapting to attacks. And while the machines are busy doing that, humans can monitor the integrity, accuracy, and competence of decision-making, while helping AI get smarter.
ITPro.com points out:
“Just like many other implementations of AI, defensive AI helps humans by doing a lot of the heavy lifting, takes some actions autonomously, learns as it goes along, reports to humans and helps humans – and organisations – achieve their goals.”
And Forbes Business Council contributor Victor Fredung adds:
“What is essential is to build a holistic approach, using both human analysts and AI software to complement each other. Delegating time-consuming activities posed by low-level security risks to software leaves skilled personnel free to focus on aspects of security that require a human touch. A successful relationship between AI and skilled staff can make a huge difference in guarding against cybercrime.”
The Final Word
Although it can seem supernatural at times, AI is by no means a magic wand. However, it can — and frankly, it must — play a significant role in strengthening cybersecurity, since adversaries have already demonstrated their eagerness to carry out AI-powered cyberattacks: everything from endless spamming to AI fuzzing.
As a society, we have passed the point of no return on AI. Ensuring that core cybersecurity pillars and defensive AI join forces to defeat offensive AI will go a long way in determining whether this chapter in our species’ history will be a triumphant success story or a chilling cautionary tale.