Take a quick skim through the cybersecurity industry press. At first glance it might appear that AI-driven cybersecurity solutions are just around the corner. Article after article claims – with little or no evidence – that we will soon be living in a world where AIs will protect us against the most sophisticated cyberattacks.
The reality is a little more nuanced than this. There are certainly some areas in which AI can improve cybersecurity, but these capabilities can be hard to discern amid the hype around the technology. In addition, AI is unlikely to protect us against forms of attack that exploit human error – credit card skimming, for example.
In this article, we’ll take a look at the state of the art when it comes to using AIs in cybersecurity. We’ll review what this technology can do, what it can’t, and how it will continue to inform the development of the industry.
ML and AI
The first point to get out of the way is that many companies that claim to use “AI” are doing no such thing. Instead, they are more likely working with Machine Learning (ML) systems. The difference between the two approaches can be complex, but put simply, ML is a subset of AI. ML systems can take massive sets of data, isolate patterns, and make predictions based on those patterns. They are not, in other words, truly “intelligent” in the way the term AI implies.
This is not to say that ML systems are not useful. The average firm today faces hundreds, if not thousands, of cyberthreats every week. Many of these conform to attack patterns that are well-known, but difficult to spot for human operators. ML systems can be very useful in alerting their human peers to unusual network activity.
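To make the idea concrete, here is a deliberately simplified sketch of the kind of pattern-spotting described above: learn what “normal” network activity looks like from historical data, then flag readings that deviate sharply from it. This is a toy statistical illustration, not any vendor’s actual system; the traffic figures and the three-sigma threshold are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag readings that deviate sharply from a learned baseline.

    'Learns' normal behaviour (mean and standard deviation) from
    historical data, then flags any new value more than `threshold`
    standard deviations away from the mean.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in current if abs(x - mu) > threshold * sigma]

# Hypothetical traffic data: requests per minute during a normal week...
normal_traffic = [95, 102, 99, 101, 97, 103, 100, 98, 96, 104]
# ...and a new window containing a suspicious spike.
new_window = [101, 99, 2500, 102]

print(flag_anomalies(normal_traffic, new_window))  # [2500]
```

A production ML system would use far richer features and models, but the division of labour is the same: the software surfaces the outlier, and a human decides what it means.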
Some of these systems have been in place for years. Google, for instance, has long used ML techniques to spot suspicious Google Docs links being sent through Gmail, and to inform the spam filter that runs on the same email platform. More recently, this same system has been used to detect phishing and malware in the Play Store.
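The intuition behind ML-based spam filtering can be illustrated with a toy scorer: words that have historically appeared in spam push a message’s score up, while words common in legitimate mail push it down. The word weights and threshold below are invented for illustration – Gmail’s actual filter is vastly more sophisticated.

```python
# Hypothetical word weights, standing in for values a real filter
# would learn from millions of labelled messages.
SPAM_WORD_WEIGHTS = {
    "winner": 0.9, "free": 0.7, "urgent": 0.6,
    "meeting": -0.5, "invoice": -0.2,
}

def spam_score(message: str) -> float:
    """Sum the learned weights of each word in the message."""
    return sum(SPAM_WORD_WEIGHTS.get(w, 0.0) for w in message.lower().split())

def is_spam(message: str, threshold: float = 1.0) -> bool:
    return spam_score(message) > threshold

print(is_spam("urgent winner claim your free prize"))  # True
print(is_spam("meeting invoice attached"))             # False
```

The key point is that nothing here is “intelligent”: the filter simply applies statistical associations learned from past data, which is exactly the ML-not-AI distinction drawn above.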
Offense and Defense
Such uses of ML are certainly valuable for protecting businesses and consumers alike, but their ability to stop hackers may be undermined by the simultaneous use of AI in offensive cybersecurity operations. In other words, while AI can help protect us against cyberattack, it is also being deployed offensively – not least by the surveillance networks that governments and corporations have built to harvest our data.
The development of AI-enhanced cyberweapons has been a major source of concern in both the defense industry and the tech sector for some years now. Some analysts fear that a new generation of AI-augmented offensive cyber capabilities will likely exacerbate the military escalation risks associated with emerging technology, especially related to inadvertent and accidental miscues.
These fears might sound quite distant from the average consumer worried about having their online banking account hacked, but this is unlikely to be the case for much longer. These offensive, AI-driven cyberweapons are already finding their way into the wrong hands, being used by hackers and cybercriminals against “normal people” rather than the military installations they were designed to target.
In fact, as leaks of military-grade cyberweapons continue to increase, we may see the emergence of a new “arms race” between the companies and agencies charged with protecting our data and those tasked with producing weapons that can steal it. At the moment, the winner of this new cold war is difficult to predict.
Collaboration and Competition
In this context, it’s best to see the emergence of AI as a neutral development for cybersecurity. At the moment, it’s difficult to see whether the offensive capabilities of AI and ML will outweigh the extra protection they offer.
This is especially true when the dangers of unleashing truly autonomous AI are factored in. Allowing an AI system to shut down a bank’s online services in reaction to a perceived attack, for instance, could cause more harm than the attack itself. Just because an ML program has not seen a particular type of activity before does not mean it should be allowed to overreact.
At the moment, therefore, the most effective uses of AI in cybersecurity are those that take a collaborative approach: the AI flags unusual activity, and a human determines the best course of action. This division of labour lets each member of the cybersecurity team, human and AI alike, do what it does best. The AI sifts through reams of data to identify anomalies a human would struggle to spot, then passes its findings to a truly intelligent entity – a human – who can best assess how to react.
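The collaborative division of labour described above can be sketched as a simple triage step: the ML layer assigns each alert an anomaly score, and the system routes high-scoring alerts to a human analyst rather than acting on them autonomously. The `Alert` structure, field names, and the 0.7 threshold are all hypothetical choices for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    score: float   # anomaly score produced by the ML layer (0.0 - 1.0)
    summary: str

def triage(alerts, review_threshold=0.7):
    """Route ML-flagged alerts: high scores go to a human analyst's
    queue; low scores are merely logged. Note that the system itself
    never blocks traffic or shuts anything down autonomously."""
    human_queue, log_only = [], []
    for alert in alerts:
        if alert.score >= review_threshold:
            human_queue.append(alert)
        else:
            log_only.append(alert)
    return human_queue, log_only

alerts = [
    Alert("203.0.113.7", 0.95, "port scan from unfamiliar host"),
    Alert("198.51.100.4", 0.10, "off-hours login, known device"),
]
for_review, logged = triage(alerts)
print([a.source_ip for a in for_review])  # ['203.0.113.7']
```

Keeping the final decision with a human is precisely the safeguard against the over-reaction problem described earlier: the software narrows thousands of events down to a handful, and a person judges what they mean.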
In short, we should treat the current hype around AI, and what it can do for the cybersecurity industry, with a decent dose of healthy skepticism. In some ways, AI (or at least ML) is already helping to keep us safe online, since it underlies the spam filters and anti-virus programs we rely on. Extending the reach of these systems could improve their efficacy, but it also risks handing them a dangerous level of control over our digital lives.
Because of this, it’s unlikely that AI systems will develop into the cybersecurity panacea that some are predicting they will. Instead, anyone looking to protect themselves or their business from hackers and scams is going to need expert cybersecurity advice for some time yet. The best approach to protecting yourself is, in other words, constant vigilance and research into emerging threats.
- AI is Everywhere, But How Can It Help Cybersecurity? - 11/12/2020