What is the point of cybersecurity?

The question might seem basic, but it touches on one of the most important issues facing companies around the world: despite repeated attempts to shore up digital systems over the last few decades, cybersecurity risks remain rampant.

In 2022 alone, there were some 4,100 publicly disclosed data breaches, exposing roughly 22 billion records. All this despite organizations around the world spending a record-breaking $150 billion on cybersecurity in 2021.

Software itself is changing, too. The rise of artificial intelligence in general, and generative AI in particular, is fundamentally altering the way companies use software. The increasing use of AI is, in turn, making software’s attack surfaces more complicated and software itself more vulnerable.

How, then, should companies go about securing their software and data?

The answer is not that cybersecurity is a pointless endeavor; far from it. Instead, what companies aim to achieve with their security programs must evolve, just as their use of data and software has evolved. It is past time for their cybersecurity efforts to change, too.

More specifically, companies can adapt to the growing insecurity of the digital world by making three changes to the way they go about shoring up their software:

3 Ways Companies Can Improve Their Cybersecurity

First, cybersecurity programs must no longer treat the avoidance of failures as their overarching aim.

Software systems, AI, and the data they all rely upon are so complex and brittle that failure is in fact a feature of these systems, not a bug. Because AI systems themselves are inherently probabilistic, for example, AI is guaranteed to be wrong at times — ideally, however, just less so than humans. The same holds true for software systems, not because they are probabilistic, but because as their complexity increases, so too do their vulnerabilities. For this reason, cybersecurity programs must shift their focus from attempting to prevent incidents to detecting and responding to failures when they do inevitably occur.

Adopting so-called zero trust architectures, which are premised on the assumption that all systems can or will be compromised by adversaries, is one way to recognize and respond to these risks. The U.S. government even has a zero trust strategy, which it’s implementing across departments and agencies. But the adoption of zero trust architectures is just one of many changes that need to occur on the way to accepting failures in software systems. Companies must also invest more in their incident response programs, covering both traditional software and AI systems; red team their software and AI for multiple types of failures by simulating potential attacks; and more.
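To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what “never trust, always verify” can look like in code: every request is authenticated and authorized on its own merits, with no trust granted based on where it originates. The names, policy, and signing scheme are hypothetical assumptions for illustration, not drawn from any particular zero trust product or standard.

```python
# Minimal zero trust-style sketch: authenticate and authorize every request,
# regardless of network origin. All names here (verify_request, Request, POLICY)
# are hypothetical illustrations, not a specific product's API.
import hashlib
import hmac
import time
from dataclasses import dataclass

SECRET_KEY = b"rotate-me-often"  # in practice, issued and rotated by an identity provider
POLICY = {"alice": {"read"}, "bob": {"read", "write"}}  # least-privilege actions per identity


@dataclass
class Request:
    user: str
    action: str
    resource: str
    timestamp: float
    signature: str  # HMAC over the request fields


def sign(user: str, action: str, resource: str, timestamp: float) -> str:
    """Sign a request so its origin and contents can be verified later."""
    message = f"{user}|{action}|{resource}|{timestamp}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()


def verify_request(req: Request, max_age_seconds: int = 300) -> bool:
    """Authenticate and authorize a single request; never assume trust from network location."""
    expected = sign(req.user, req.action, req.resource, req.timestamp)
    if not hmac.compare_digest(expected, req.signature):
        return False  # authentication failed
    if time.time() - req.timestamp > max_age_seconds:
        return False  # stale credentials are rejected, forcing re-authentication
    return req.action in POLICY.get(req.user, set())  # authorization check per request


# Example: a signed read request from alice passes; a write from alice is denied.
now = time.time()
ok = Request("alice", "read", "payroll-db", now, sign("alice", "read", "payroll-db", now))
denied = Request("alice", "write", "payroll-db", now, sign("alice", "write", "payroll-db", now))
print(verify_request(ok), verify_request(denied))  # True False
```

The point of the sketch is the posture, not the mechanics: trust is never inherited from the network, credentials expire quickly, and every action is checked against an explicit, minimal policy.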

Second, companies must also expand their definition of “failure” for software systems and data to encompass more than just security risks.

Digital failures are no longer simply security related, but instead now involve a host of other potential harms, ranging from performance errors to privacy issues, discrimination, and more. Indeed, with the rapid adoption of AI, the definition of a security incident is itself no longer clear.

The weights (the trained “knowledge” stored in a model) for Meta’s generative AI model LLaMA, for example, were leaked to the public in March, giving any user the ability to run the multibillion-parameter model on their laptop. The leak may have started as a security incident, but it also gave rise to new intellectual property concerns over who has the right to use the AI model (IP theft) and undermined the privacy of the data the model was trained on (knowing the model’s parameters can help to recreate its training data and therefore violate privacy). And now that it’s freely accessible, the model can be used more widely to create and spread disinformation. Put simply, it no longer takes an adversary to compromise the integrity or availability of software systems; changing data, complex interdependencies, and unintended uses for AI systems can give rise to failures all on their own.

Cybersecurity programs therefore cannot be relegated to focusing only on security failures; in practice, that will make information security teams less effective over time as the scope of software failures grows. Instead, cybersecurity programs must form part of broader efforts focused on overall risk management: assessing how failures can occur and managing them, regardless of whether an adversary generated the failure or not.

This, in turn, means that information security and risk management teams must include personnel with a wide range of expertise beyond security alone. Privacy experts, lawyers, data engineers, and others all have key roles to play in protecting software and data from new and evolving threats.

Third, monitoring for failures must be one of the highest-priority efforts for all cybersecurity teams.

This is, sadly, not currently the case. Last year, for example, it took companies an average of 277 days, or roughly 9 months, to identify and contain a breach. And it’s all too common for organizations to learn about breaches and vulnerabilities in their systems not from their own security programs, but through third parties. The current reliance on outsiders for detection is itself a tacit admission that companies are not doing all they should to understand when and how their software is failing.

What this means in practice is that every software system and every database needs a corresponding monitoring plan and metrics for potential failures. Indeed, this approach is already gaining traction in the world of risk management for AI systems. The National Institute of Standards and Technology (NIST), for example, released its AI Risk Management Framework (AI RMF) earlier this year, which explicitly recommends that organizations map potential harms an AI system can generate and develop a corresponding plan to measure and manage each harm. (Full disclosure: I received a grant from NIST to support the development of the AI RMF.) Applying this best practice to software systems and databases writ large is one direct way to prepare for failures in the real world.
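To illustrate what such a monitoring plan can look like in practice, the sketch below pairs a hypothetical system with metrics for several anticipated failure modes, each tied to a threshold that triggers investigation. The structure, names, and numbers are illustrative assumptions, not a prescription from the AI RMF or any vendor.

```python
# Illustrative sketch: one way to pair a system with a monitoring plan that maps
# anticipated failure modes to concrete metrics and alert thresholds.
# The names (MonitoringPlan, FailureMetric, check) are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FailureMetric:
    name: str                     # the failure mode being watched (security, privacy, performance...)
    measure: Callable[[], float]  # how the metric is computed from production data
    threshold: float              # value beyond which the failure is flagged for response


@dataclass
class MonitoringPlan:
    system: str
    metrics: List[FailureMetric]

    def check(self) -> List[str]:
        """Return the failure modes that currently exceed their thresholds."""
        return [m.name for m in self.metrics if m.measure() > m.threshold]


# Example plan for a hypothetical fraud-scoring model: each anticipated harm gets a metric.
# The lambdas stand in for real measurement pipelines; the values are made up.
plan = MonitoringPlan(
    system="fraud-scoring-model",
    metrics=[
        FailureMetric("prediction error rate", lambda: 0.04, threshold=0.10),
        FailureMetric("training/production data drift", lambda: 0.31, threshold=0.25),
        FailureMetric("unauthorized access attempts per day", lambda: 7.0, threshold=5.0),
    ],
)

print(plan.check())  # ['training/production data drift', 'unauthorized access attempts per day']
```

Note that only one of the flagged failure modes in this example is a classic security issue; the value of an explicit plan is that performance, privacy, and security failures are all watched, measured, and owned by someone before they surface through a third party.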

This does not mean, however, that third parties have no role in detecting incidents; quite the contrary. Activities like “bug bounties,” in which rewards are offered in exchange for reporting vulnerabilities, are a proven way to incentivize outside detection, as are clear channels for consumers or users to report failures when they occur. Overall, however, third parties cannot continue to play the primary role in detecting digital failures.

. . .

Are the above recommendations enough? Surely not.

For cybersecurity programs to keep pace with the growing range of risks created by software systems, there is much more work to be done. More resources, for example, are needed at every stage of the data and software life cycle, from monitoring the integrity of data over time to adopting processes such as DevSecOps, which integrates security throughout the development life cycle rather than treating it as an afterthought. As the use of AI grows, data science programs will need to invest more resources in risk management as well.

For now, however, failures are increasingly a core feature of all digital systems, as companies keep learning the hard way. Cybersecurity programs must acknowledge this reality in practice, if only because it is already a reality.