PRESS

AI for the Bad Guy

In cybersecurity, we rely heavily on data science to combat cyber attacks. But the adversary has access to the same tools we do. Instead of focusing only on preventing and detecting attacks, we also need to make exploits costly for attackers. That is what Ridgeback does.

We have been fed a drumbeat of news about the transformative power of artificial intelligence for decades. A Time magazine cover from January 1950 is one of the earliest features on artificial intelligence I have seen. Since then, reports on A.I. in the general news media have built to a crescendo. Back then it was futuristic. Now it is really, truly here (and has been for a while).

The power of data science to achieve extraordinary outcomes is a gift. It is also at the heart of the most rapidly growing categories of cybersecurity solutions: acquire comprehensive datasets from endpoints and network communications, apply analyses learned across a huge universe of organizations' computing environments, and alert security operators when anomalous or harmful activity has been identified within their organization. Of course, the behavior has to have already occurred before it can be assessed. Only then can the security team step in to arrest and remediate its effects. The smarter the A.I. tools, the better the identification of problematic activity.
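To make that pattern concrete, here is a minimal sketch of the acquire-learn-alert loop in Python, using scikit-learn's IsolationForest as a stand-in for a vendor's proprietary analytics. The telemetry features, numbers, and threshold are illustrative assumptions, not any particular product's internals.

```python
# Minimal sketch of the acquire-learn-alert pattern described above.
# IsolationForest stands in for a vendor's proprietary model; all
# features and figures are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Step 1: acquire telemetry (here, synthetic per-connection features:
# bytes sent, bytes received, session duration in seconds).
baseline = rng.normal(loc=[500, 1500, 30], scale=[100, 300, 10], size=(10_000, 3))

# Step 2: learn what "normal" looks like from historical data.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Step 3: score new activity and alert on anomalies.
new_activity = np.vstack([
    rng.normal(loc=[500, 1500, 30], scale=[100, 300, 10], size=(5, 3)),
    [[50_000, 900_000, 3_600]],  # a large, long-lived, exfiltration-like flow
])
for features, verdict in zip(new_activity, model.predict(new_activity)):
    if verdict == -1:  # -1 = anomaly, 1 = normal
        print(f"ALERT: anomalous activity {features}")
```

Note what the sketch cannot do: the exfiltration-like flow is flagged only after it has happened, which is precisely the limitation described above.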

There’s a new A.I. in town. ChatGPT is the most public face of generative A.I., and it is probably the fastest technology adoption ever – 100 million monthly active users in its first eight weeks. McKinsey & Co. describes generative A.I. as “algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.” There is enormous value to be created from generative A.I. in so many ways. Might that value also redound to the benefit of our cyber adversaries?

A few issues worth considering:

Concern number 1. The adversary is, in fact, adversarial. In spite of our faith in the power of science to identify our vulnerabilities, we have to acknowledge that the behavior of our enemies – anyone seeking to wrongfully access our systems – is informed by knowledge of our defensive strategies. They aren’t just up to no good; they are designing and evolving their exploits with a well-grounded understanding of the defensive techniques we employ – and with their own access to the very tools we use for our defenses.

Concern number 2. Threat actors subvert models. By introducing spurious signals into protected environments, they create noise, distract security teams, and degrade the very models we have entrusted with detecting them.

Concern number 3. Threat actors have access to generative A.I., just as we do. With it, they can generate and release malicious code effortlessly, and in volume, innovating exploits against our own defensive measures. Could the power of generative A.I. increase both the volume and virulence of threat activity? Yes, it can – as has been widely reported.

Concern number 4. More threat activity means more incident response. The volume of malware introduced has, by some estimates, reached over 500,000 new programs per day…or about six new programs every second. All day, every day. That is up from some 250,000 programs daily just four or five years ago.

When you combine concerns 1, 2, and 3 with the exponential growth of threat activity, the already heavy burden of incident response will keep escalating, and we will constantly need more hands on deck to respond. What’s more, because of the nature of the datasets we evaluate for hostile behavior, incident alerts always include false positives that are indistinguishable from true positives. False positives confound our ability to properly allocate resources to containing truly damaging threats.
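A back-of-the-envelope calculation shows why false positives come to dominate triage. The rates below are assumptions chosen to make the arithmetic clear, not measurements from any real deployment.

```python
# Illustrative base-rate arithmetic (all figures are assumptions):
# even a detector with strong per-event accuracy produces mostly
# false alerts when truly malicious events are rare.
events_per_day = 10_000_000   # telemetry events examined daily
base_rate = 1e-5              # fraction of events that are truly malicious
tpr = 0.99                    # true-positive rate (detection rate)
fpr = 0.001                   # false-positive rate

malicious = events_per_day * base_rate            # 100 truly malicious events
true_alerts = malicious * tpr                     # ~99 caught
false_alerts = (events_per_day - malicious) * fpr # ~10,000 false alarms
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:,.0f}")
print(f"False alerts: {false_alerts:,.0f}")
print(f"Alert precision: {precision:.1%}")  # roughly 1% under these assumptions
```

Under these assumed rates, roughly ninety-nine of every hundred alerts are false – exactly the resource-allocation problem described above.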

Data science and artificial intelligence can be powerful. But we cannot lean on them too heavily to redress the awful balance of power between attacker and defender – a balance which, judging by the rich proceeds of ransomware alone (reportedly $20B last year), starkly favors attackers. Over the last 20 years, we have ramped up our spending on security solutions from $4B to almost $200B per year, answering each new exploit with a new defense mechanism – a whack-a-mole strategy. Yet across some 70 categories of security solutions, every one is passive or analytical. We are either trying to block attacks or watching our assets so we can detect the enemy and then respond. That holds no hope of changing the game.

To change the dynamic, we have to go beyond watching, assessing, and responding. As in any conflict, strategies of defense don’t work unless we can impose our will on the adversary. Their behavior needs to carry costs. Once it truly does, we stand a chance of recovering our freedom to operate and of reversing the enormous burdens on our people, our income statements, and all our stakeholders.

Please send me a note to find out how we have made this our mission at Ridgeback Network Defense and how it can work for your organization.