What will you learn?

  • Why safety and security should be prioritized over speedy system deployment.
  • How a software architecture that provides an isolated environment can stop threats from accessing sensitive data.
  • Why hypervisor technology is so important.

Media coverage of cyber attacks on critical infrastructure seems to be increasing. We learned in February that a Florida water treatment facility had been compromised. In May, news broke of attacks on the Colonial Pipeline and the Irish Public Healthcare Service. These events raise a fundamental question: should these systems ever have been connected?

External network connections are a boon for businesses, but they also make these systems more vulnerable to attack: hackers can reach them more easily and cause havoc.

Are the risks associated with connectivity worth the benefits? In some recent hacks, the external connection existed simply to allow remote monitoring and control of a single function. Does that flexibility really outweigh the cost of having a human being visit the building? Questioning connectivity may sound ludicrous coming from someone working in technology, but for the past ten years I have repeated the phrase, “just because it’s connected doesn’t make it a good idea!”

Once critical infrastructure systems are connected to external networks, safety and security should be prioritized over speedy deployment. The system architecture of the deployed, connected platform must be carefully designed to:

  • Raise the system’s immunity to attack over time through supported updates.
  • Recognize when the system has been compromised and return it to a known-good state.
  • Partition the system to contain any intrusion without compromising its safety, security, or key assets.

IT vs. OT

One insightful article remarked that part of the problem is the divergent perspectives of operational technology (OT) and information technology (IT). As one quote put it, IT wants data to remain confidential, while OT wants everything to run smoothly and everyone to stay safe.

The Hitchhiker’s Guide to the Galaxy was one of my favourite series of books, even though the movie was terrible. In it, a Babel fish placed in the ear translated any spoken language into one the listener could understand. Connected systems need a similar bridge between the old and the new: something that translates commands and frameworks from the IT world into the highly reliable, highly available OT world.

This topic has come up in several forums, and I have been challenged on several points. One of my strongest arguments was that “no software can be safely run on insecure hardware.” I believe we would all agree that more secure hardware components are needed.

There are many efforts underway to improve system-level security for connected systems. One example is Arm’s Platform Security Architecture (PSA) initiative. In a PSA-compliant system, even if poorly written software is compromised, the crown jewels of the system cannot be accessed. Still, we should take a step back and consider:

  • The timeline starts with silicon (not IP).
  • Typical design cycles for embedded systems.
  • When PSA was launched.
  • How long embedded platforms remain in use before being replaced.

What percentage of the 100 million chips shipped by Arm’s partners in the past five years were PSA compliant? A great many platforms are still built on legacy architectures. I am referring to Arm here, but similar problems exist for x86-, MIPS-, and RISC-V-based components.

What is required is a software architecture that provides an isolated environment, preventing threats from accessing sensitive data. Secure systems should be thought of as distributed: security is achieved both by physically separating components and by mediating and executing trusted functions within those components.

Separation

Virtualization must be used to create secure virtual enclaves in which security functions, operating systems, and applications can execute. The operating system is no longer responsible for controlling how resources are allocated or secured. The endpoint thus becomes a point of protection rather than a point of vulnerability.

A separation kernel hypervisor enforces memory protection between virtual machines (VMs) in the same way an operating system enforces protected memory contexts among its processes. Processes within a VM can interact with one another, but they cannot interact with processes in other VMs.
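As a rough illustration of the principle, a separation kernel can be modeled as a fixed partition table consulted on every memory access. The table layout, addresses, and function names below are hypothetical, not taken from any real hypervisor:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified partition table: each VM is statically
 * assigned one physical memory region at build time. */
typedef struct {
    uint32_t  vm_id;
    uintptr_t base;
    size_t    size;
} mem_partition_t;

static const mem_partition_t partitions[] = {
    { .vm_id = 0, .base = 0x20000000u, .size = 0x10000 },  /* control VM */
    { .vm_id = 1, .base = 0x20010000u, .size = 0x10000 },  /* comms VM   */
};

/* The separation kernel would consult a check like this on every
 * mapping or access request: a VM may only touch addresses that fall
 * entirely inside its own partition. */
static bool access_allowed(uint32_t vm_id, uintptr_t addr, size_t len)
{
    for (size_t i = 0; i < sizeof partitions / sizeof partitions[0]; i++) {
        const mem_partition_t *p = &partitions[i];
        if (p->vm_id == vm_id)
            return addr >= p->base &&
                   len <= p->size &&
                   addr - p->base <= p->size - len;  /* no straddling */
    }
    return false;  /* unknown VM: default deny */
}
```

Because the table is `const` and compiled into the kernel image, there is no runtime interface through which a compromised VM could widen its own partition, which is the essence of the separation-kernel model.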

It is crucial to choose the right hypervisor technology. Some embedded options still depend on an underlying operating system; if it fails, the entire system may crash. Some variants allow root logins. The best way forward is to use minimally configured hypervisors that immutably assign resources to VMs (i.e., the assignments cannot be modified after the system has started) and then get out of the way.
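A minimal sketch of that “immutable after start” behaviour might look like the following; the structure, limits, and function names are entirely hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: resource assignments are accepted only before
 * boot; once the hypervisor starts its VMs, the table is frozen. */
typedef struct {
    int    vm_id;
    int    cpu;        /* dedicated core */
    size_t mem_bytes;  /* dedicated memory */
} vm_config_t;

#define MAX_VMS 8

static vm_config_t configs[MAX_VMS];
static size_t      num_configs;
static bool        booted;

bool assign_resources(vm_config_t cfg)
{
    if (booted || num_configs >= MAX_VMS)
        return false;              /* immutable after start: reject */
    configs[num_configs++] = cfg;
    return true;
}

void boot(void)
{
    booted = true;                 /* point of no return */
}
```

Rejecting any reconfiguration after `boot()` removes a whole class of attacks: even code running with full privileges inside a VM has no channel through which to grant itself more CPU or memory.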

Cyber attacks are likely to increase in frequency in the short term. It is crucial that critical infrastructure systems are designed with security as a top priority, so that our networks are as well protected as possible.
