Security For Safety As a Seed Crystal

A security by design experiment under ideal conditions — using ISA TR 84.00.09 as an example

Still image from a video showing the effect of a seed crystal in fast motion: https://www.youtube.com/watch?v=MBI39y941GU

A seed crystal is a small starter crystal: the rest of the crystal can then grow around it. The “growing around” almost goes by itself, but without the seed crystal, nothing really goes by itself. For the seed crystal to work, it does not matter what the crystal is made of. It can be salt. Sugar. Snow. Metal.

Automation engineers are struggling with security, and we are all struggling with “security by design” for automation security. Maybe a seed crystal could facilitate growth?

Engineering security for the “special case” of safety systems can be this seed crystal.

How? This is what this article is about, taking the ISA Technical Report 84.00.09 “Cybersecurity Related to the Functional Safety Lifecycle” as an illustrative example.

Disclaimer: I am part of the working group revising the TR 84.00.09, but here I’m only voicing my own opinions, not official working group positions.

Security for Safety: What is it about?

Because we’re talking a lot about security for safety here, let’s begin with a really short introduction to the matter by taking an imaginary flight.

First we pass the security check, get rid of belt, shoes, and watch, and fish our toothpaste out of our carry-on.

Later, once we’re seated in the airplane, a stewardess shows us the safety instructions.

Both experiences are in place to make us survive our flight, but:

The security check protects the airplane from us. It is supposed to prevent us from carrying items that could make the airplane crash.

The safety instructions protect us from the airplane. They are supposed to keep us from dying in case the airplane actually crashes.

Safety protects humans (and environment) from machines, security protects machines from humans.

But what about the machines that are supposed to protect humans? In automation, such “machines” exist, in the field of functional safety. There are “machines” — controllers, for example — that bring other machines to a safe state in case something goes wrong. Such a safety controller could close an inlet valve or turn off a heater in case a reactor threatens to explode.

So in a way, this safety controller is a machine that protects humans from other machines.

Obviously, such safety systems are a welcome target for attacks by — exactly, humans. So we need to protect the machine that protects humans from other machines (=safety) from humans (=security for safety).

From a purely technical point of view, safety controllers are not that different from the “normal” controllers used for automation. Therefore, security for safety can be regarded as a sub-discipline of automation security.

Why should the niche area security for safety be a seed crystal for automation security as a whole?

Two reasons:

Reason 1: At least there is already a risk analysis

Risk analyses are a bit like fighting COVID-19: The ideal result would be that nothing happens at all.

This is the opposite of building exciting new features, and this is why risk analyses, this imperfect security engineering tool (and yet the best we have), are not overly popular among engineers.

Except for safety. There are already risk analyses in safety.

Safety engineers have warmed to thinking in what-ifs instead of in well-defined natural laws, thinking in worst case scenarios instead of in trailblazing new features.

They have accepted that the what-ifs help them; that preparing systems for the worst adds value. They have accepted that they’re spending a considerable amount of time year after year designing, auditing, and testing safety functions that ideally never get used.

Granted, safety risk analyses differ from security risk analyses. They feel more familiar, more trustworthy to engineers, because they rely on hard facts and numbers: statistics of technical failure, gathered over years. Numbers you can trust.

Granted, security risk analyses are more cumbersome than those for safety. There are no usable statistics for human malice, and even if they existed: no one could force the hackers of tomorrow to abide by the rules of yesterday’s hacker statistics.

But: it is easier to carry out a security risk analysis when a safety risk analysis is already in place. Not everything is transferable, but thinking in risks is easier if you have already thought about worst case scenarios in the plant under consideration.

Reason 2: The safety engineering process is well-defined

We all agree we want automation security by design, don’t we?

But we have no clue where to begin. What does this automation engineering process look like? Can someone mark the point where security by design is to be integrated into the already hopelessly interdisciplinary automation engineering process?

The problem of “security by design” for automation systems is so large and complex that one yearns for a smaller laboratory to experiment in. Security for safety could be this laboratory.

Engineering safety controllers is only a small portion of engineering a whole automated plant, but it is nevertheless a design process resulting, among other things, in fully designed controllers.

And what’s more: this small portion has been defined and described very well, above all in the IEC 61508 and IEC 61511 (ISA84) standard families.

Talking of standard families:

Where is security for safety discussed?

Probably in many plants around the world, but there are also a few committees and working groups that try to create standards stemming from their discussions. Below is an (incomplete) list.

  • IEC TC65 WG20
    writes the technical report IEC TR 63069, “Framework to bridge the requirements for safety and security”
  • ISA84 WG9
    writes the technical report ISA TR 84.00.09, “Cybersecurity Related to the Functional Safety Lifecycle”
  • ISO TC 199
    writes the ISO TR 22100–4, “Safety of machinery — Relationship with ISO 12100 — Part 4: Guidance to machinery manufacturers for consideration of related IT-security (cyber security) aspects”
  • NAMUR WG 4.18
    writes the NAMUR worksheet NA 163, “Security Risk Assessment of SIS”.
    (NAMUR is a German-based User Association of Automation Technology in Process Industries)
  • Additional sector-specific committees and (draft) documents like ISO/SAE 21434 for automotive or CENELEC TS 50701 for railway.

In the following, we’ll take a closer look at the ISA (International Society of Automation) TR 84.00.09.

Please note: Even though I’m a member of the ISA84 working group currently revising TR 84.00.09, and the lead of the sub-team covering the risk assessment part, I’m not representing ISA84 WG9 here, but only voicing personal viewpoints.

ISA TR 84.00.09: An integrated lifecycle for security and safety

The ISA TR 84.00.09 “Cybersecurity Related to the Functional Safety Lifecycle”, first published in 2013 as “Security Countermeasures Related to Safety Instrumented Systems (SIS)”, aims at giving guidance for an integrated lifecycle covering security and safety.

And indeed ISA is a good place to do this.

IEC 61511, which defines the safety lifecycle for the process industry sector, was originally based on the ISA84.01 standard released by the ISA84 committee in 1996, and ISA84 still contributes to the IEC standard today.

IEC 62443, which works on defining the security lifecycle for industrial automation and control systems (IACS), is developed by the ISA99 committee and then adopted by IEC.

Consequently, ISA created a working group to bring together experts of both safety (ISA84) and security (ISA99), and the goal of ISA TR 84.00.09 is to bring together the safety lifecycle outlined in IEC 61508 / IEC 61511 and the security lifecycle outlined in IEC 62443 to create an integrated lifecycle.

Creating an integrated lifecycle from two existing, separate lifecycles sounds easier than it is.

First, both lifecycles to be integrated are moving targets.

Second, the security and safety lifecycles may look quite similar on the surface, but the devil is in the details: Just try dropping some basic terms like risk analysis, asset inventory, or architecture diagram among groups of security experts and safety experts. Both groups will easily understand what you’re talking about — but in different ways.

This is all work in progress, but I’ll try to share some insights gained along the way, using the following figure.

How to integrate lifecycles: Identify your master, focus on deliverables

1. Focus on deliverables first

This is actually the core insight, while the rest can be regarded as corollaries.

Focusing on the lifecycle process steps is the intuitive approach when trying to define and work with lifecycles, but I’ve found that real understanding is gained once we pay more attention to the deliverables, found in the unimposing and often neglected inputs and outputs of the process steps.

Thinking about the right order of process steps becomes way easier once we ask which deliverables are needed or useful for a certain process step, and what additions to these deliverables it produces as an outcome.
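To make the deliverable-first idea concrete, here is a minimal sketch in Python. The step and deliverable names are entirely hypothetical, invented for illustration and not taken from ISA TR 84.00.09; the point is only that an execution order can be derived from what each step consumes and produces, instead of being hand-specified.

```python
from graphlib import TopologicalSorter

# Hypothetical process steps, each with the deliverables it needs (inputs)
# and produces (outputs). Names are illustrative only.
steps = {
    "safety risk assessment": {
        "needs": {"process design"},
        "produces": {"hazard list", "SIL targets"},
    },
    "security risk assessment": {
        "needs": {"process design", "hazard list"},
        "produces": {"zone model"},
    },
    "SIS design": {
        "needs": {"SIL targets", "zone model"},
        "produces": {"SIS specification"},
    },
}

# A step depends on every step that produces one of its needed deliverables.
# Deliverables no step produces (e.g. "process design") are external inputs.
producers = {d: name for name, s in steps.items() for d in s["produces"]}
graph = {
    name: {producers[d] for d in s["needs"] if d in producers}
    for name, s in steps.items()
}

# The execution order falls out of the deliverable dependencies.
order = list(TopologicalSorter(graph).static_order())
print(order)
```

In this toy model, the safety risk assessment must come first because the security risk assessment consumes its hazard list, and the SIS design must come last because it consumes deliverables from both: the ordering emerges from the deliverables alone.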

2. Integrating lifecycles is not equal to finding analogies

The security risk assessment is somewhat analogous to the safety risk assessment. That much is obvious.

But the conclusion is not necessarily that in an integrated lifecycle, both happen in parallel.

Partitioning a system into zones and conduits, one of the core concepts of IEC 62443, can be regarded as analogous to allocating safety functions to protection layers, one of the core concepts of IEC 61511.

But that again does not mean it is automatically a good idea to fit them into the same phase in the integrated lifecycle.

Why?

Because both may need different inputs. In practice, a lifecycle becomes useful once it provides clear, well-thought-out guidance on what to do in what order. You need an execution sequence and concrete deliverables, not an academic discussion about the two processes’ analogies.

This leads us to the third insight:

Whether security needs to be integrated into safety engineering or vice versa is a practical, not a political question

The question of which one is the “master lifecycle” into which the other discipline needs to be integrated (and who’s responsible) — security or safety — has often sparked discussions driven by company and industry politics and by sensitivities about competencies, division of work, and the importance of one’s own discipline.

Politics aside (and deliverables at the center): the question of the “master” can be answered solely by considering the practicability of the resulting integrated engineering process.

Security engineering, as I’ve written elsewhere, is — somewhat cynically put — an auxiliary discipline.
Let’s recall what we stated earlier: Security protects machines from humans. If we don’t build any machines, security has nothing to do. Thus, no security engineer in the world could do the tiniest bit of meaningful work if it weren’t for other disciplines’ engineers who had at least put in some basic thoughts on what a machine is supposed to do and what it’s supposed to look like.

This, by the way, is no contradiction to security by design:

Nobody claims a machine has to be completely designed and built before it makes sense to think about security. But the functional requirements and a preliminary design need to exist.

What exactly it is that has to exist, where the earliest reasonable entry point for security within the engineering process is — that is the sticking point for integrating security into engineering lifecycles, and hence for security by design in general.

When it comes to the lifecycle integration in ISA TR 84.00.09, the question of the “master” was quickly answered: because we need to describe how safety systems are secured, security being the auxiliary discipline, the safety lifecycle is our master. Security needs to be “integrated”, which means it has to work with and enhance the safety lifecycle’s deliverables.

This is important for a second reason as well: if security, the auxiliary discipline, can’t be done without the system knowledge of the engineers in charge of the systems that need to be secured, these engineers are the ones who need to be able to work with the security methods from the integrated lifecycle.
And these security methods will be easier for engineers to accept and learn if they leverage what they already know.

A seed crystal alone does not make a crystal

A crystal is not dead matter. It continues to grow. The third edition of ISA TR 84.00.09 will make a practical proposal for integrating security into the safety lifecycle — or for how to design secure safety systems.

Granted: just because a well-described engineering process may be developed for the security of safety systems, that does not mean it will look the same for automation engineering in general.

But we can understand the integrated security for safety lifecycle as an opportunity to try the experiment “security by design” for automation systems under laboratory conditions.

We can understand the integrated engineering process for security in safety systems as a seed crystal for an integrated engineering process for security in automation systems in general.

Who knows, maybe something new and sparkling grows onto it. Something like an idea how to integrate security — by design! — into the automation engineering process?

If we regard an integrated lifecycle for security for safety a seed crystal — maybe something new and sparkling grows onto it? (Photo by Greg Rosenke)

On a side note: When a crystal is formed, there’s always heat of crystallization (caused by enthalpy of fusion, in case you were wondering).

But for an issue like security for safety that has been hotly debated for years, anything else would be a violation of the laws of nature.

Friction generates heat — true for writing and engineering. Fluchsfriction generates writings on security engineering. Heated debates welcome! CTO@admeritia