Here’s the brutal truth: It doesn’t matter how much your organization spends on the latest cybersecurity hardware, software, training, and staff or whether it has segregated its most essential systems from the rest. If your mission-critical systems are digital and connected in some form or fashion to the internet (even if you think they aren’t, it’s highly likely they are), they can never be made fully safe. Period.

This matters because digital, connected systems now permeate virtually every sector of the U.S. economy, and the sophistication and activity of adversaries — most notably nation-states, criminal syndicates, and terrorist groups — have increased enormously in recent years. Witness the attacks in the United States on Atlanta’s municipal government and on a data network shared by four operators of natural-gas pipelines, the theft of data from Equifax, and the global WannaCry and NotPetya malware attacks. In many of the most notorious incidents of recent years, the breached companies thought they had strong cyber defenses.

  • Andy Bochman is senior grid strategist at the Idaho National Laboratory, where he provides strategic guidance to government and industry leaders on issues at the intersection of energy security and climate resilience.

I am a member of a team at the Idaho National Lab (INL) that has been studying how organizations critical to the U.S. economy and national security can best protect themselves against cyberattacks. We’ve focused on those that rely on industrial control systems — such as the ones that regulate heat and pressure in electric utilities and oil refineries — and have come up with a solution that flies in the face of all conventional remedies: Identify the functions whose failure would jeopardize your business, isolate them from the internet to the greatest extent possible, reduce their reliance on digital technologies to an absolute minimum, and backstop their monitoring and control with analog devices and trusted human beings. Although our methodology is still in the pilot stage, organizations can apply many elements of the approach now.

Admittedly, this strategy — which isn’t feasible for purely information-based businesses — may raise operating costs and reduce efficiency in some cases. But it’s the only way to ensure that mission-critical systems can’t be successfully attacked by digital means. In this article I will share the lab’s methodology for identifying such systems. It invariably turns up vulnerable functions or processes that leaders never realized were so vital that their compromise could put the organization out of business. We’ve applied elements of the methodology at companies and in the U.S. military for the past several years and conducted a highly successful yearlong pilot of the entire approach at Florida Power & Light, one of the largest electric utilities in the United States. A second pilot in one of the U.S. military services is now under way. INL is also exploring ways to take the process mainstream. This will most likely mean partnering with selected engineering services firms and getting them licensed and trained to apply the methodology.

The Existing Threat

In the old days, mechanical pumps, compressors, valves, relays, and actuators did the work in industrial companies. Situational awareness came from analog gauges, and skilled and trusted engineers communicated with headquarters via landline telephone circuits. Other than tampering with the supply chain or co-opting an employee, the only way a saboteur could disrupt operations was to go to the plant and bypass the three physical pillars of security: gates, guards, and guns.

Today operations in 12 of the 16 infrastructure sectors that the U.S. Department of Homeland Security has deemed critical — because their “assets, systems, and networks, whether physical or virtual, are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof” — depend partially or fully on digital control and safety systems. Although digital technologies bring wonderful new capabilities and efficiencies, they have proved to be highly susceptible to cyberattacks. The systems of large corporations, government agencies, and academic institutions are constantly being prodded for weaknesses by automated probes that are readily available on the dark web; many are free, and others cost hundreds or thousands of dollars (the more expensive ones even come with technical support). Such probes can often be thwarted by cybersecurity best practices, but it is virtually impossible to defend against well-planned, targeted attacks meticulously conducted over months if not years.

The financial impact of cyberattacks is soaring. Just two attacks last year, those involving WannaCry and NotPetya, caused damage worth more than $4 billion and $850 million, respectively. The WannaCry attack, which the United States and the United Kingdom accused North Korea of carrying out, reportedly used tools stolen from the National Security Agency. Exploiting an opening in Windows machines that hadn’t installed a Microsoft security patch, it encrypted data; crippled hundreds of thousands of computers in hospitals, schools, businesses, and homes in 150 countries; and demanded a ransom. The NotPetya attack, which Russia is believed to have carried out as part of its campaign to destabilize Ukraine, was delivered through an update to a Ukrainian accounting company’s software. It began with an assault on Ukrainian government and corporate computer systems and spread to other parts of the world; its many corporate victims included the Danish shipping company Maersk, the pharma firm Merck, the chocolate manufacturer Cadbury, and the advertising behemoth WPP.

A Growing Vulnerability

The pace of digital transformation continues to accelerate with the growth of automation, the internet of things, cloud processing and storage, and artificial intelligence and machine learning. The propagation of and growing dependency on complex, internet-connected, software-intensive digital technologies carries a serious cybersecurity downside. In a 2014 article published by the Center for a New American Security, Richard J. Danzig, a former secretary of the navy and now a board director at the center, spelled out the paradox posed by digital technologies:

Even as they grant unprecedented powers, they also make users less secure. Their communicative capabilities enable collaboration and networking, but in so doing they open doors to intrusion. Their concentration of data and manipulative power vastly improves the efficiency and scale of operations, but this concentration in turn exponentially increases the amount that can be stolen or subverted by a successful attack. The complexity of their hardware and software creates great capability, but this complexity spawns vulnerabilities and lowers the visibility of intrusions…. In sum, cyber systems nourish us, but at the same time they weaken and poison us.

The fact is that these technologies are so mind-bogglingly complex that even the vendors who create and know them best don’t fully understand their vulnerabilities. Vendors typically sell automation as a way to remove risks posed by fault-prone humans, but it just replaces those risks with others. Information systems now are so complicated that U.S. companies need more than 200 days, on average, just to detect that they have been breached, according to the Ponemon Institute, a center that conducts independent research on privacy, data protection, and information security policy. And most often they don’t find the breach themselves; they are notified by third parties.

Despite the ever-expanding number of damaging, high-profile cyberattacks throughout the world, on companies such as Target, Sony Pictures, Equifax, Home Depot, Maersk, Merck, and Saudi Aramco, business leaders have been unable to resist the allure of digital technologies and the many benefits they provide: greater efficiency, lower head counts, the reduction or elimination of human error, quality improvements, opportunities to glean much more information about customers, and the ability to create new offerings. Leaders spend more and more every year on new security solutions and high-priced consultants, continuing with conventional approaches to cybersecurity and hoping for the best. That is wishful thinking.


ABOVE: Britain deployed defenses near Rough Sands, six miles off the coast of England. With names like Tongue, Sunk Head, and Knock John, the so-called Roughs Forts were manned anti-aircraft platforms, England’s first line of defense ahead of the mainland. They were outfitted with the most modern radar equipment available. But they could not stop the Blitz.

The Limitations of “Cyber Hygiene”

These conventional approaches — or “hygiene” in the cybersecurity trade — include:

  • creating comprehensive inventories of a company’s hardware and software assets
  • buying and deploying the latest defensive hardware and software tools, including endpoint security, firewalls, and intrusion-detection systems
  • regularly training employees to recognize and avoid phishing emails
  • creating “air gaps” — in theory, separating important systems from other networks and the internet — though in practice, there are no true air gaps
  • building a large cybersecurity staff supplemented with various services and service providers to do all of the above

Many organizations adhere to best-practices frameworks such as the National Institute of Standards and Technology’s (NIST) cybersecurity framework and the SANS Institute’s top 20 security controls. These entail continuously performing hundreds of activities without error. They include mandating that employees use complex passwords and change them frequently, encrypting data in transit, segmenting networks by placing firewalls between them, immediately installing new security patches, limiting the number of people who have access to sensitive systems, vetting suppliers, and so on.

Many CEOs seem to believe that by hewing to cyber-hygiene best practices, they can protect their organizations from grievous harm. The numerous high-profile breaches amply demonstrate the error of this presumption. All the companies previously mentioned had large cybersecurity staffs and were spending significant sums on cybersecurity when they were breached. Cyber hygiene is effective against run-of-the-mill automated probes and amateurish hackers, but it falls short against the growing number of targeted, persistent threats that sophisticated adversaries pose to critical assets.

In asset-intensive industries such as energy, transportation, and heavy manufacturing, no amount of talent or money can accomplish all the prescribed best practices without error. In fact, most organizations fail at the first of the recommended practices: creating comprehensive inventories of the company’s hardware and software assets. That is a huge shortcoming, because you can’t secure what you don’t even know you have.

Then there are the trade-offs inherent in the best practices. Security upgrades usually require that systems be shut down for installation, but that’s not always feasible. For example, utilities, chemical companies, and others that put a premium on the availability and reliability of their industrial processes or systems can’t stop them every time a software company issues a new security patch. So they tend to install the patches periodically, in batches, during scheduled downtime, often many months after a patch is released. Another issue is protecting widely dispersed assets. Larger utilities, for example, operate thousands of substations, which often are spread out over thousands of square miles. Refreshing them presents a quandary: If you can access the software via a network to implement updates, a talented adversary may just as easily tap into the network to access the software for nefarious purposes. But if your own employees physically update the software at all those plants, the effort can be prohibitively expensive. And if you subcontract that work to independent outfits, you can’t hope to sufficiently vet them all.

Even if the best practices could be implemented perfectly, they would be no match for sophisticated hackers, who are well funded, patient, constantly evolving, and can always find plenty of open doors to walk through. No matter how good your company’s hygiene is, a targeted attack will penetrate your networks and systems. It may take the hackers weeks or months, but they will get in.

That’s not just my view. Michael Assante, a former chief security officer of American Electric Power and now a leader at the SANS Institute, told me, “Cyber hygiene is helpful for warding off online ankle biters” and “if done perfectly in a utopian world, might thwart 95% of attackers.” But in the real world, he said, it registers as “barely a speed bump for sophisticated attackers aiming at a particular target.” And in an interview last year with the Wall Street Journal, Bob Lord, the former head of security for Yahoo and Twitter, said, “When I talk to corporate security officers, I see a little bit of this fatalism, which is ‘I can’t defend against the most sophisticated nation-state attack. Therefore, it is a lost game. So I’m not really going to start to think deeply about the problem.’”


No matter how good your company’s cyber hygiene is, a targeted attack will penetrate your networks and systems.

One case in point is the 2012 Shamoon virus attack on Saudi Aramco, which had good defenses in place. The attack, which U.S. officials suspect was carried out by Iran, erased data on three-quarters of the oil company’s corporate PCs. A more recent attack, in March 2018, was designed to trigger a blast at a Saudi petrochemical plant by interfering with safety controllers. It might have succeeded had the attacker’s code not contained an error, according to the New York Times. “The attackers not only had to figure out how to get into that system, they had to understand its design well enough to know the layout of the facility — what pipes went where and which valves to turn in order to trigger an explosion,” the Times wrote.

INL’s Radical Idea

It’s time to embrace a drastically different approach: a highly selective shift away from full reliance on digital complexity and connectivity. This can be done by identifying the most essential processes and functions and then reducing or eliminating the digital pathways attackers could use to reach them.

The Idaho National Lab has developed a step-by-step approach: its consequence-driven, cyber-informed engineering (CCE) methodology. The objective of CCE is not a one-time risk assessment; rather, it is to permanently change how senior leaders think about and weigh strategic cyber risks to their companies. Although it is still in the pilot stage, we’ve seen great results. We plan to have CCE fully ramped up in 2019 and to have several services firms licensed to implement the methodology by 2020. But even today, the core precepts of the CCE approach can be adapted by any organization. (The lab has also developed a companion framework: cyber-informed engineering (CIE), which, while similar to CCE in many respects, describes methods for integrating cyber risk mitigations across the entire engineering life cycle.)

The methodology comprises four steps that should be performed in a highly collaborative fashion by the following:

  • a CCE master — now someone from the INL, but in the future people at engineering services firms trained by the INL
  • all the leaders responsible for regulatory compliance, litigation, and mitigating risks: the CEO, the chief operating officer, the chief financial officer, the chief risk officer, the general counsel, and the chief security officer (CSO)
  • the people who oversee core operational functions
  • safety system experts and the operators and engineers most familiar with the processes on which the company most depends
  • cyber experts and process engineers who know how systems and equipment can be misused

For a number of these people, the process will be stressful. For example, the exposure of heretofore unknown enterprise-level risks is bound to make the CSO squirm at first. But that discomfort is often undeserved: No CSO can hope to fully prepare a company for an attack by a highly resourced adversary.

1. Identify “Crown Jewel” Processes

The work begins with what the INL calls consequence prioritization: the generation of possible catastrophic scenarios, or high-consequence events. This involves identifying functions or processes whose failure would be so damaging that it would threaten the company’s very survival. Examples include an attack on transformers that would stop an electric utility from distributing electricity — or on compressor stations that would prevent a natural gas distribution company from delivering to its customers — for a month. Other examples include a targeted attack on the safety systems in a chemical plant or an oil refinery that would cause pressure to exceed limits, leading to an explosion that could kill or injure hundreds or thousands of people, generate lawsuits seeking ruinous damages, wreak havoc with the company’s market cap, and cost its leaders their jobs.

Analysts familiar with how sophisticated cyber adversaries act help the team envision what prospective attackers’ end goals might be. By answering questions such as “What would you do if you wanted to disrupt your processes or ruin your company?” and “Which facilities would you go after first, and hardest?” the team can identify the targets whose disruption would be the most destructive and the most feasible and develop scenarios involving them for discussion by the C-suite. Depending on the size of the company, this step may take a few weeks to a few months.
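
As a thought experiment only, the prioritization logic in this step can be sketched as a simple severity-times-feasibility ranking. The scenario names and scores below are invented for illustration; the actual CCE process relies on expert judgment and structured discussion, not a formula.

```python
# Illustrative consequence prioritization: score each candidate
# high-consequence event by business severity and attacker feasibility
# (both analyst-assigned on a 1-5 scale), then rank by their product.
# All scenarios and numbers here are hypothetical.
scenarios = [
    {"event": "transformer fleet disabled for a month", "severity": 5, "feasibility": 2},
    {"event": "safety system bypassed at refinery",     "severity": 5, "feasibility": 3},
    {"event": "billing system outage for a week",       "severity": 2, "feasibility": 4},
]

# Highest combined score first: these scenarios go to the C-suite.
ranked = sorted(scenarios, key=lambda s: s["severity"] * s["feasibility"], reverse=True)
for s in ranked:
    print(s["severity"] * s["feasibility"], s["event"])
```

In practice the scoring dimensions and weights would come from the workshop participants, but even a crude ranking like this forces the team to argue explicitly about which failures are truly company-ending.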

2. Map the Digital Terrain

The next task, which typically takes a full week but may take longer, is mapping all the hardware, software, and communications technologies and the supporting people and processes (including third-party suppliers and services) in the company-ending scenarios. It entails laying out the steps of production, documenting in robust detail all the places where control and automation systems are employed, and capturing all the necessary physical or data inputs into the function or process. These connections are potential pathways for attackers, and companies often are not aware of all of them.

Existing maps of these elements never fully match the reality. Questions such as “Who touches your equipment?” and “How does information move through your networks and how do you protect it?” will always turn up surprises. For example, the team may discover from a network architect or the control engineer that a vital system is connected not just to the operational systems network but to the business network that deals with accounts payable and receivable, payment systems, customer-information systems, and — by extension — the internet. By asking the person responsible for managing vendors, the team might learn that the supplier of this system maintains a direct wireless connection to it in order to perform remote analysis and diagnostics. A safety-system supplier may say that it can’t directly communicate with the equipment, but a careful examination of the mechanics and update processes may reveal that it can. Any such discovery is an aha moment for the team.
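
The terrain map this step produces can be thought of as a directed graph of assets and the connections between them. The sketch below, with entirely hypothetical asset names, shows how a simple reachability check over such a graph can surface the aha moment: a safety controller that turns out to be reachable from the internet through a vendor’s diagnostics link.

```python
from collections import deque

# Hypothetical digital-terrain map: each asset lists the assets it can
# reach directly (network links, vendor connections, update channels).
# None of these names come from a real facility.
terrain = {
    "internet":          ["business_network", "vendor_wireless"],
    "business_network":  ["billing", "ops_network"],   # the surprise crossover
    "vendor_wireless":   ["safety_controller"],        # remote diagnostics link
    "ops_network":       ["plant_dcs"],
    "plant_dcs":         ["safety_controller"],
    "billing":           [],
    "safety_controller": [],
}

def reachable_from(graph, start):
    """Breadth-first search: every asset an attacker could reach from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

exposed = reachable_from(terrain, "internet")
print("safety_controller exposed:", "safety_controller" in exposed)
```

A real mapping exercise captures far more than network links (people, procedures, supply chains), but even this toy model makes the point: exposure is a property of the whole graph, not of any single connection.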

3. Illuminate the Likely Attack Paths

Then, using a variant of a methodology developed by Lockheed Martin, the team identifies the shortest, most likely paths attackers would take to reach the targets identified in step 1. These paths are ranked by their degree of difficulty. The CCE master and other outside experts, including people with access to sensitive information about attackers and their methods, play the lead roles in this phase. They share information gleaned from government sources about attacks on similar systems around the world. Additional company input regarding safety systems, the firm’s capabilities and procedures for responding to cyber threats, and so on help the team finalize a list of attack paths, which is used in step 4 to prioritize remediation actions for senior leaders to consider.
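
A toy version of this path analysis, with invented nodes and analyst-assigned difficulty scores (the real Lockheed Martin-derived methodology is far richer), might enumerate the simple paths from the internet to a crown-jewel target and rank them easiest first:

```python
# Hypothetical attack graph: each edge carries a rough "difficulty"
# score an analyst might assign. All names and scores are illustrative.
edges = {
    ("internet", "business_net"):   2,  # phishing foothold
    ("internet", "vendor_link"):    4,  # compromise the supplier
    ("business_net", "ops_net"):    5,  # cross a firewall
    ("vendor_link", "safety_ctrl"): 1,  # direct diagnostics channel
    ("ops_net", "safety_ctrl"):     3,
}

def ranked_paths(edges, start, target):
    """Enumerate simple paths from start to target, ranked by total difficulty."""
    graph = {}
    for (a, b), cost in edges.items():
        graph.setdefault(a, []).append((b, cost))
    paths = []
    def walk(node, path, cost):
        if node == target:
            paths.append((cost, path))
            return
        for nxt, c in graph.get(node, []):
            if nxt not in path:          # simple paths only, no revisits
                walk(nxt, path + [nxt], cost + c)
    walk(start, [start], 0)
    return sorted(paths)                 # lowest total difficulty first

for cost, path in ranked_paths(edges, "internet", "safety_ctrl"):
    print(cost, " -> ".join(path))
```

Note what the ranking reveals in this made-up example: the vendor’s diagnostics channel, not the heavily defended corporate network, is the cheapest route to the target, which is exactly the kind of finding step 4 then acts on.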

4. Generate Options for Mitigation and Protection

Now it’s time to come up with options for engineering out the highest-consequence cyber risks. If there are 10 pathways to a target but they all pass through one particular node, that’s obviously a great place to install a tripwire — a closely monitored sensor that would alert a fast-response team of defenders at the first sign of trouble.
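
Finding such a node is, at bottom, a set-intersection exercise. A minimal sketch, assuming the attack paths from step 3 are available as lists of hypothetical node names:

```python
# Made-up attack paths from step 3; every node name is illustrative.
attack_paths = [
    ["internet", "business_net", "ops_net", "historian", "plant_dcs"],
    ["internet", "vendor_link", "ops_net", "plant_dcs"],
    ["internet", "vpn", "ops_net", "plant_dcs"],
]

def choke_points(paths):
    """Nodes (other than the shared start and target) that appear on every path."""
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    return common - {paths[0][0], paths[0][-1]}

print(choke_points(attack_paths))   # candidate tripwire locations
```

In this toy example every path crosses the operations network, so that is where a monitored sensor buys the most coverage per dollar. Real attack paths rarely collapse this neatly, but the logic of concentrating detection where paths converge is the same.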

Some remedies are surprisingly easy and inexpensive to implement: for example, a software-free, hardwired vibration sensor that will slow down or trip a unit that has been given malicious digital instructions that might cause it to damage or destroy itself. Others take more time and money, such as keeping a redundant but not identical backup system ready to continue a crucial function, even if in a somewhat degraded state. Although many remedies will have no negative impact on operational efficiency and business opportunities, others might. So a company’s leaders will ultimately have to decide how to proceed on the basis of what risks they can accept, must avoid, can transfer, or should try to mitigate.

If a selected process simply must have a digital channel for monitoring or sending control signals, the goal should be to keep the number of digital pathways to and from the critical process at an absolute minimum to make spotting abnormal traffic easier. In addition, a company might add a device to protect a system should it receive digital commands that would cause a catastrophic event — a mechanical valve or switch, for example, that would prevent the pressure or the temperature from exceeding specified parameters. And sometimes a company might want to reinsert trusted people into the activity — to monitor a mechanical thermometer or pressure gauge, for instance, to ensure that the digital devices are telling the true story. If your company has not suffered a serious cyber incident, the notion of disconnecting as much as possible, installing old-fashioned mechanical devices, and inserting humans in automated functions might sound like a regressive business decision. Instead it should be reframed as a proactive risk-management decision. It may decrease efficiency, but if the somewhat higher cost radically reduces the likelihood of a disaster that your current methods can’t protect against, it is the smart move.
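
The “trusted human with a mechanical gauge” backstop can be supported by a trivial cross-check: if the digital reading and the independent analog reading disagree beyond some tolerance, a person investigates before trusting either. A sketch, with made-up units and tolerance values:

```python
# Cross-check a digital sensor against an independent analog reading.
# The tolerance is hypothetical; a real plant would set it from the
# instruments' specified accuracy.
def readings_agree(digital_psi, analog_psi, tolerance=5.0):
    """True if the two readings agree within tolerance; False means
    an operator should investigate (malfunction or manipulation)."""
    return abs(digital_psi - analog_psi) <= tolerance

print(readings_agree(100.0, 102.0))   # normal disagreement
print(readings_agree(100.0, 150.0))   # digital side may be lying
```

The point is not the arithmetic but the independence: the analog gauge has no software to subvert, so a large discrepancy is itself a high-value signal.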

A question like “Who touches your equipment?” will always turn up surprises.

It’s not hard to imagine CEOs and COOs reading through this process with skepticism. In any change-management project, moving hearts and minds from ideas they’ve hewn to for decades is a massive challenge. Anticipate resistance, especially early on. Divulging so much information about your company and admitting to weaknesses you either didn’t know about or didn’t want to think about will be psychologically taxing. Later phases will challenge engineers’ fortitude as their systems and practices are pored over for weaknesses. Make sure team members feel safe during even the hardest evaluations of your systems. In the end, the detailed information about adversaries’ approaches and what they could achieve — showing how it could happen to you — will be a revelation. Even the most resistant team members should climb on board when they recognize the risks and the best way to mitigate them.

What You Can Do Today

Learn to think like your adversaries. You might go as far as to build an internal team charged with continually assessing the strength of your defenses by trying to reach critical targets. The team should include experts in the processes in question, control and safety systems, and operational networks.

Even if you can maintain consistently high levels of cyber hygiene, you must prepare for a breach. The best way to do that is to create a cyber safety culture similar to those that exist at elite chemical factories and nuclear power plants. Every employee, from the most senior to the most junior, should be aware of the importance of reacting quickly when a computer system or a machine in their care starts acting abnormally: It might be an equipment malfunction, but it might also indicate a cyberattack.

Finally, a Plan B should be ready for implementation if and when you and your team lose confidence in systems that support your most critical functions. It should be designed to allow your company to continue essential operations, even if at a reduced level. Ideally, the backup system should not rely on digital technologies and should not be connected to a network — particularly the internet. But at a minimum, it should not exactly replicate the one in question, for an obvious reason: If attackers were able to breach the original, they’ll be able to easily invade one identical to it.

. . .

Every organization that depends on digital technologies and the internet is vulnerable to a devastating cyberattack. Not even the best cyber hygiene will stop Russia, North Korea, and highly skilled, well-resourced criminal and terrorist groups. The only way to protect your business is to take, where you can, what may look like a technological step backward but in reality is a smart engineering step forward. The goal is to reduce, if not eliminate, the dependency of critical functions on digital technologies and their connections to the internet. The sometimes higher cost will be a bargain when compared with the potentially devastating price of business as usual.