More Hard Hats for Security Engineering!

Automation Engineers, You Already Know Almost Everything You Need. Good News from S4x20

Sarah Fluchs
Apr 19, 2020 · 16 min read

A German version of this S4x20 retrospective has been published in the German automation engineering magazine atp.

Security engineering has plenty of use for more “hard hat people”! (Photo by Silvia Brazzoduro on Unsplash)

Are you an automation engineer? Did you recently have the gnawing feeling that you need to do more for your systems’ security? Or are you longing to be able to respond to sceptical questions about your systems’ security with a wholehearted “Not perfect yet, but got it covered!”?

There is good news for you:

The most important things you need to know about security engineering, you know already.

And there are new ideas for how security engineering can leverage this knowledge better.

This is my (belated) retrospective on the S4x20 ICS Security Conference, which took place in Miami South Beach in January 2020, and at the same time it is meant to be an encouraging outlook.

I’m not aiming for a complete review of the event, but only highlighting information that directly helps automation engineers better take responsibility for their own systems’ security.

What is S4?

S4, created by Dale Peterson, is one of the biggest ICS security conferences in the world. This year, it had 719 participants.

In Dale’s words, S4 aims to be “optimistic, forward looking, creative, and driving change”. When I get asked about S4, I mostly find it easier to explain what it does NOT want to be, which you can best extract from its CFP:
No ICS Security 101, no vulnerability talks that sow FUD (fear, uncertainty and doubt) but do not offer a “so what” — an impact or a solution.

If you’ve attended other security conferences, you will know that especially the latter — hyping vulnerabilities without any focus on fixing the underlying problems — is all too common.

I’ve met too many disheartened automation engineers over the last few years who’ve heard of all the spectacular hacks and vulnerabilities of the very systems they are using on a daily basis. Too often, I’ve heard them say “if all our systems are that vulnerable — why should we even try and secure them?”.

I’ll just counter that by quoting Jason Larsen, who gave insights into a “normal” day of hacking an electric grid at S4x20 in order to demystify hackers. His point: we are attributing too much “James Bond” to ICS hacking:

“Have we let so much James Bond creep into our thought processes that we aren’t really preparing for the 9-to-5 guy that is most probably going to show up at the firewall? They are not all super-powerful all knowing attackers.” — Jason Larsen, S4x20

Now that we’ve clarified that, let’s dive into the brilliant ideas of this year’s S4 which have true potential to help automation engineers take automation security into their own hands.

Spoiler: I might be a hopeless optimist, but I think they’re not that far away.

Three fresh ideas in the Security Engineering “drawer cabinet”

In order to create common ground regardless of your knowledge in automation security, I’ll quickly introduce you to a simple security engineering procedure model (more detailed information can be found here). During this retrospective, it serves as a kind of drawer cabinet: because we have a common understanding of which drawers are there, it is easier to communicate which drawer a certain new idea belongs into.

[To those who’ve seen that model too many times during the last year: I’m not using it again and again to annoy anyone or to sell anything to anyone, neither have I lost any bet forcing me to mention lighthouses at least once a day. It simply helps my own thinking.]

Image 1 shows our drawer cabinet — a high-level security engineering process model. Bottom to top, it contains the following “drawers”:

  • Function (FC): Understanding the essential functions of the system or network that needs to be secured, and their dependencies,
  • Risk (RI): Analysis of vulnerabilities, threats, threat scenarios, and risk,
  • Requirement (RE): Derivation of security requirements in order to mitigate risk,
  • Implementation (IP): Design and implementation of these security requirements.
Image 1: Three encouraging ideas of this year’s S4x20 for automation security engineering, categorized into process steps within a security engineering procedure model (S4x20 speakers in parentheses)

The following three encouraging ideas I took home from S4x20 are sorted into these “drawers” in image 1 by answering a simple question: Which one of these security engineering process steps do they potentially improve?

Idea 1: Disenchant security attacks by systematically structuring attack and testing scenarios.

Systematic structuring of attack and testing scenarios takes the edge off the imponderability of security attacks and instead makes them “engineerable” for automation engineers. And a few ideas presented at S4 make that possible — if we are willing to blend methods from “offense” and “defense”.

The idea fits into the risk layer of the security engineering procedure model in image 1.

Idea 2: Develop security guidelines that can be interwoven with automation engineers’ existing daily routines.

Security engineering, in a way, is an “auxiliary discipline”. Auxiliary not in terms of being less important, but in terms of being only important if closely interwoven with another engineering discipline. Therefore, we need practical security requirements and guidelines that automation engineers can use while — not in addition to — doing their daily work. An example: PLC programming. “Normal” programmers learn secure development principles from scratch in their training — but what might secure development principles look like for the programming of programmable logic controllers?

The second idea belongs into the requirements (RE) layer in image 1.

Idea 3: Leverage the body of knowledge that already exists in automation engineering for security engineering.

The third idea actually is more of an attitude: more listening, less talking. The solution for which the most vendors have emerged in recent years serves well as an example: security monitoring. If we roll out and operate security monitoring solutions in ICS, we often encounter problems that automation engineers, monitoring their control systems, have long known and solved — so why don’t we look at how they did it?

The third idea addresses the implementation and operation of security solutions, which is the uppermost procedure model layer in image 1.

Idea 1: Disenchant security attacks by systematically structuring attack and testing scenarios

When it comes to the identification and analysis of risks, security is often split into two camps: offense and defense, red team and blue team, pentesting and engineering, “break things” and “build things”.

“Offensive security” or “red teaming” denotes testing systems in order to unveil security problems, and the description and classification of these problems. “Defensive security” or “blue teaming” stands for reacting to attacks and developing security solutions in order to impede attacks.

This separation, a separation between describing the problem and finding a solution, is a peculiarity of security not found in other engineering disciplines. One reason for it is the greater variability of security engineering “problems” compared to other engineering disciplines’ problems. In security, problems (vulnerabilities, threat actors, malicious code, threats, …) cannot be described by the laws of nature but largely depend on ever-changing human creativity.
And because you can hardly predict human creativity, there is a large number of (necessary!) security services based almost exclusively on “describing the problem”, namely the fast identification of malicious code or activity: antivirus software, intrusion detection and prevention systems, security monitoring, threat hunting.

One result of separating offensive and defensive security is that classical “red teamers” consider it their job to structure and describe the problem, but not to design a solution for that problem. They like to “break things, not build things”.
Meanwhile, “blue teamers” primarily try to find solutions, but mainly for the problems “red teamers” unveil through their attacks.

And sure thing, both “camps” have developed their own methods and frameworks over time. It is about time to blend those into more powerful, “cross-camp” methods.

S4x20 showcased two excellent candidates for this blending: Consequence-Driven, Cyber-Informed Engineering (CCE), developed by the US Department of Energy’s Idaho National Laboratory (INL), and the ICS ATT&CK framework, developed by the US-based not-for-profit research organization MITRE.

Leverage attack modelling frameworks (like ICS ATT&CK) for systematically designing security solutions

Understanding and describing attacks constitutes a big part of what a security engineer does, regardless of whether they lean more towards the red or the blue team. Within the last few years, models for breaking down, structuring, and systematically describing attacks have improved.

In 2013, MITRE first published its ATT&CK framework, at that time for Windows systems.

Meanwhile, there are attack modelling frameworks specifically for automation: In 2015, Michael Assante and Robert M. Lee first published a description of the Industrial Control System Cyber Kill Chain.

This year, ICS ATT&CK — an automation security flavor of the original ATT&CK framework — was released to the public. Austin Scott presented it at S4x20.

One big advantage of the ATT&CK framework is that it is based on broad participation of the ICS security community, which in turn leads to broad community approval. It also helps that the US Department of Homeland Security (DHS) has just begun leveraging ICS ATT&CK for structuring and explaining threat scenarios in its cybersecurity alerts.

A systematic framework like ICS ATT&CK is valuable for gaining a better understanding of security attacks as well as a common language for describing them.

But frameworks like ICS ATT&CK can do so much more than describe attacks that have already happened! They are at the same time a tool for disenchanting imponderable attack scenarios by systematizing them, usable for vendors, integrators, and asset owners alike. ICS ATT&CK can enable automation engineers to model attacks themselves.

But we should not stop there. If automation engineers can model security attacks themselves, they might as well develop ideas for the “so what”, for doing something about the possibility or the potential harm of the attacks they just modelled.

We must not be awestruck by spectacular attacks. We need to disenchant them by methodology and thus enable the “so what”.

How?

Let’s take a closer look and see how ICS ATT&CK works. The framework consists of so-called tactics and techniques.

Tactics break an attack down into single steps pursuing a certain purpose like “initial access”, “discovery”, or “lateral movement”. Some tactics are ICS-specific, for example “impair process control”.

Techniques are ways to put a tactic into practice. Initial access, for example, could be gained by compromising an engineering workstation, sending a phishing mail, or smuggling in malicious code on removable media.
Of course, not all tactics have to be ticked off for an attack to be successful; usually a few selected tactics (and corresponding techniques) suffice.

The structural elements of ICS ATT&CK, “tactics” and “techniques”, do not contain novel information on attack vectors. In fact, their strength is quite the opposite: they demonstrate that a cybersecurity attack can be broken down into a finite number of repeating structural elements. This is exactly how frameworks like ATT&CK disenchant the imponderability of security attacks, and why they can be leveraged for designing the “so what”, the solutions to the security problems they describe.
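
To make that tangible, here is a minimal sketch (in Python, with paraphrased tactic names and an invented scenario rather than authoritative ICS ATT&CK entries) of what “a finite number of repeating structural elements” looks like in practice:

```python
from dataclasses import dataclass

@dataclass
class Step:
    tactic: str     # the attacker's purpose in this step
    technique: str  # one concrete way to put that tactic into practice

# A hypothetical attack scenario, broken down into named, repeatable elements:
scenario = [
    Step("Initial Access", "Spearphishing attachment opened on an engineering workstation"),
    Step("Discovery", "Enumerate reachable PLCs and the protocols they speak"),
    Step("Lateral Movement", "Use the engineering software to connect to the target PLC"),
    Step("Impair Process Control", "Modify setpoints in the running PLC program"),
]

for step in scenario:
    print(f"{step.tactic:25s} -> {step.technique}")
```

Because every step is an explicit, named element, every step can also be answered with a “so what”: a mitigation, a detection rule, or a design change.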

Just think about it: How do you do your risk analyses right now?

Do you work through a long list of risk scenarios that you try to transfer to your individual systems? Do you do creative brainstorming pondering the question “what can go wrong”?
Granted, these methods all work. But they depend on someone with profound security knowledge and some experience to moderate the process.

Systematic modelling of threat scenarios — ideally combined with systematic modelling of the systems to be protected and their dependencies — makes all that “profound security knowledge and experience” more easily accessible to every automation engineer and thus provides a more methodical approach to identifying threat scenarios. What are your most important functions? What are they dependent on? And which tactics can harm which part of these dependencies most?
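
As a rough illustration of that question chain, here is a sketch in which all functions, assets, and tactic mappings are invented; once functions, dependencies, and applicable tactics are written down, the candidate threat scenarios almost enumerate themselves:

```python
# Invented example data: essential functions and the assets they depend on.
essential_functions = {
    "Chlorine dosing": ["PLC-07", "Flow sensor FT-101", "HMI station 3"],
    "Cooling water supply": ["PLC-02", "Pump control PLC-04", "Remote I/O rack 4"],
}

# Invented mapping of asset types to tactics that could plausibly harm them.
tactics_by_asset_type = {
    "PLC": ["Impair Process Control", "Inhibit Response Function"],
    "HMI": ["Manipulation of View", "Initial Access"],
    "Sensor": ["Spoof Reporting Message"],
    "Other": [],
}

def asset_type(asset: str) -> str:
    if "PLC" in asset:
        return "PLC"
    if "HMI" in asset:
        return "HMI"
    if "sensor" in asset.lower():
        return "Sensor"
    return "Other"

# Enumerate candidate threat scenarios function by function.
for function, dependencies in essential_functions.items():
    for dep in dependencies:
        for tactic in tactics_by_asset_type[asset_type(dep)]:
            print(f"{function}: '{tactic}' against {dep}")
```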

Further information:

Leverage insights from (consequence-based) security engineering for structured security testing

Blending offense and defense methods is not a one-way street.

Insights from defensive security can of course be leveraged for offensive security as well, and the benefit is quite similar: Much like designing solutions can become much more systematic by using offensive security methods, testing or hacking systems can become much more systematic by using defensive methods.
The basic idea is to leverage the knowledge of critical functions and their worst-case consequences gained during consequence-driven engineering for systematically testing these functions.

Here’s an example from this year’s S4. INL’s Virginia Wright presented the idea of “Test Effect Payloads” (TEPs), building upon INL’s CCE method (Consequence-Driven, Cyber-Informed Engineering).

CCE is focused on the identification of critical functions and the high-consequence conditions that need to be prevented. These high-consequence conditions, or at least the events leading to them, are used to systematize testing: Test Effect Payloads (TEPs) try to induce exactly those events or conditions.
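
Here is a minimal sketch of that systematization; it is not INL’s actual CCE or TEP tooling, and the functions and conditions are invented, but it shows the idea of turning high-consequence conditions into test objectives:

```python
# Invented high-consequence conditions identified during consequence-driven
# engineering; in CCE these would come from the critical-function analysis.
high_consequence_events = [
    {"function": "Boiler pressure control", "condition": "Pressure relief path blocked while burners stay on"},
    {"function": "Turbine protection", "condition": "Overspeed trip suppressed"},
]

def to_test_objective(event: dict) -> dict:
    """Turn a high-consequence condition into a concrete test objective that
    the responsible automation engineers can plan and interpret themselves."""
    return {
        "essential_function": event["function"],
        "objective": f"Show whether '{event['condition']}' can be induced via the control system",
        "success_criterion": "Condition reached in a test environment, or conclusively shown unreachable",
    }

test_plan = [to_test_objective(e) for e in high_consequence_events]
for case in test_plan:
    print(case["essential_function"], "->", case["objective"])
```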

The advantage of these systematic tests is not only their direct usability for assessment and improvement of defense strategies, but also that they can be planned and their results can be interpreted by the very engineers that are able to define the essential functions and high-consequence events.

By systematizing how tests are structured, methodological security tests become more transparent and accessible to automation engineers. In a way, this is a disenchanting effect similar to the one described above for ICS ATT&CK, only this time it is the penetration tester’s attack that gets disenchanted rather than the real attacker’s.

There’s also a quality benefit: the more systematized the testing methodology, the less its results depend on the individual security testers’ skills.

Further information

Idea 2: Develop security guidelines that can be interwoven with automation engineers’ existing daily routines

The fact that security can be done more efficiently if you consider it from the beginning has almost become a platitude — condensed into the phrase “security by design”.
Yet there are not overly many concepts for how to bring “security by design” into practice. On the contrary: in ICS security, one very basic concept has been lacking: how to securely program PLCs.

In software engineering, secure coding practices belong to the basics every software engineer learns at school, just like learning a programming language, efficient use of hardware resources or useful documentation.
When trying to implement ISO/IEC 27001, which of course was not written with industrial automation in mind, one stumbles upon control A.14.2 “Security in development and support processes”.
And yes, of course there are development processes in industrial automation, the most basic one being PLC programming.
But there are no secure coding principles for PLCs. Or rather, there weren’t.

If you did not know before, now you have the background knowledge to understand why the ICS security community was so excited about Jake Brodsky’s S4x20 talk on secure programming principles for PLCs.

There is a second reason:

It is no news at all that PLCs are insecure by design. It was not even news back in 2012 when, at S4x12, the results of “Project Basecamp” were presented. Project Basecamp’s goal was to transfer the knowledge that PLCs are vulnerable into concrete vulnerabilities and tools everybody could use to exploit these vulnerabilities. “It is a bloodbath” were the words project leader Reid Wightman chose when summarizing the results.

In 2020, eight years later, there are much acclaimed presentations, both at S4 and its European equivalent CS3sthlm, showcasing vulnerabilities in Siemens S7 PLCs and the ubiquitous runtime environment CoDeSys.

While the knowledge of these vulnerabilities adds tremendous value to the community, the relative frequency of these types of presentations is also symptomatic of the imbalance inherent in the security community: showing problems (aka vulnerabilities) is much sexier than presenting solutions. Breaking things is sexier than building things.

This is in fact a second reason for the enthusiastic reactions to Jake Brodsky’s S4 talk on secure PLC programming: We’ve all heard so much about how vulnerable PLCs are that we’re longing for practicable advice on what to do about it. There’s simply not enough written down on this topic, and while PLC logic certainly is not all there is to PLC security, it sure is a good start.

I’m not reproducing Jake’s talk here, but for me the quintessential point was that the basic principles of secure software development are not fundamentally different for PLCs than for any other software — and Jake does a good job breaking down how these basic principles translate to PLCs:
Validate inputs and outputs, make logic modular and distribute it over several PLCs, restrict the data user interfaces (like HMIs) can access, write code in a way that supports bug fixing, try to avoid indirect addressing and sets/resets, use internal status registers for integrity checks and software error reports — that’s as practical as it gets.

And as practicable as it needs to become if we do not want “security by design” to stay an empty buzzword.
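
To give just one of these principles a concrete shape, here is a small sketch of input validation combined with error reporting through a status word. The tag names and ranges are invented, and real PLC logic would be written in an IEC 61131-3 language such as Structured Text rather than Python; only the pattern matters here:

```python
# Invented example: validate an analog input against its physically plausible
# range before the control logic uses it, and report the problem through a
# diagnostic status word instead of silently carrying on.
FAULT_SENSOR_OUT_OF_RANGE = 0x0001  # bit in a hypothetical status word

def validate_level_input(raw_percent: float, status_word: int) -> tuple[float, int]:
    """Accept a tank level reading only within its plausible range (0..100 %)
    and set a diagnostic bit if the raw value was outside that range."""
    if 0.0 <= raw_percent <= 100.0:
        return raw_percent, status_word
    # Out-of-range reading: fall back to a safe substitute value and flag the
    # fault so the HMI and maintenance staff can see it.
    return 0.0, status_word | FAULT_SENSOR_OUT_OF_RANGE

level, status = validate_level_input(137.2, 0)
print(level, hex(status))  # -> 0.0 0x1: logic runs on a safe value, fault stays visible
```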

By now, we have the video of Jake’s talk, the full slide deck, three blog articles diving deeper into PLC integrity checks, indirection handling, and input validation — and even some limericks summarizing the most important points.

This is way more than we had before S4x20. And it gives reason for optimism that we as a community will develop more actionable, practicable security guidelines that can be interwoven with automation engineers’ “business as usual”, rendering the fulfillment of daunting security requirements like “security in development processes” actually doable.

Further information

Idea 3: Leverage the body of knowledge that already exists in automation engineering for security engineering

There are not many security solutions that merely consist of buying and installing a certain product — much to the dismay of busy automation engineers longing for a quick fix for their relatively new, annoying problem that is security.

One of the rare security solutions that can be bought as a ready-to-use software product (ready-to-use being disputable here) is security monitoring. For security monitoring, not to be confused with resource monitoring, there has been a flood of products within the last few years, leveraging technological advances in big data and machine learning. The promise of such tools is to detect security threats early on by recognizing anomalies in the network.

Automation engineers, traditionally suspicious of anything external messing around with their automation systems, have cast off their suspicion as passive-only monitoring tools came to the market and the tools started to speak ICS-specific protocols. Dale Peterson has written a couple of good analyses of the ICS detection tool market within the last year.

In short: By now, there are plenty of ICS-specific security monitoring tools, and their deployments in real automation systems are increasing as well.

Thus, the problem focus regarding security monitoring shifts from technological questions like active versus passive monitoring techniques to the person sitting in front of the security monitoring tool: What kind of skills do these people need? And how do we handle the flood of information the security monitoring tools produce?

It turns out that with more experience and practice in using security monitoring tools, problems arise that sound eerily familiar to automation engineers: we are drowning in alerts.

Like everything in security — surprise! — security monitoring needs to be engineered.

This is where Chris Sistrunk’s talk at S4x20 comes in. Having too many alerts to handle is actually a very common and well-understood problem in automation engineering, he argues, because monitoring and alerting are core tasks of a control system — even though traditionally it is not security but process and system health that is being monitored. It is actually such a common problem that there are standards describing solutions: ISA 18.2–2016 and EEMUA Publication 191, for instance.

Chris walks through these standards’ core principles by explaining what an “alert philosophy” is and how it can be created, revolving around identifying your most critical risks and the exact alarms that indicate these risks may be present.
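
Here is a minimal sketch of that idea; the risks, alerts, and responses are invented, and this is not the ISA 18.2 or EEMUA 191 methodology itself. The point is the mapping: start from the most critical risks, record which alerts actually indicate them, and treat everything else as a rationalization candidate.

```python
# Invented alert philosophy: each critical risk lists the alerts that indicate
# it, who owns the response, and what that response is.
alert_philosophy = {
    "Unauthorized PLC program change": {
        "indicating_alerts": ["New engineering connection to PLC", "PLC logic checksum changed"],
        "priority": "high",
        "owner": "Automation engineer on shift",
        "response": "Check change management; isolate the engineering workstation if the change is unknown",
    },
    "Rogue device on control network": {
        "indicating_alerts": ["Unknown MAC address in PLC subnet"],
        "priority": "medium",
        "owner": "OT network administrator",
        "response": "Locate the switch port and confirm against the maintenance schedule",
    },
}

def triage(alert_name: str) -> list[str]:
    """Return the critical risks this alert indicates; alerts that map to
    nothing are candidates for rationalization (tune, suppress, or document)."""
    return [risk for risk, spec in alert_philosophy.items()
            if alert_name in spec["indicating_alerts"]]

print(triage("PLC logic checksum changed"))    # -> ['Unauthorized PLC program change']
print(triage("Port scan from office network"))  # -> [] (not mapped: review, do not just forward)
```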

There’s more to say about that specific topic, and Chris’ slide deck and video are well worth a watch, but there’s also an important lesson learned on a much higher level: Chris points out that like everything in security — surprise! — security monitoring needs to be engineered.

Creating an alert philosophy is a good example of how the single most important piece of knowledge for engineering security is knowledge about the systems to be protected.
For security monitoring of a control system network, you cannot create an alert philosophy without automation engineers who know their systems. Automation engineers must play a vital part in engineering automation security.

Chris’ talk is an excellent example of how automation security that involves automation engineers can come up with solutions “traditional” IT security engineers would probably not think of first.

Tapping into automation engineers’ knowledge not only holds the opportunity to make automation security more accepted among engineers and more practical; it might also produce ideas that even “normal” IT security could leverage — who knows? Automation engineers have been amassing knowledge on robustly operating their systems for decades.

However, this only works if, on the one hand, automation engineers start to consider their systems’ security as their own problem and, on the other hand, security professionals start to consider the possibility that there may be other ways of solving security problems than the ones they are used to.

Further information

Security engineering has plenty of use for more “hard hat people”

Who can engineer security?

The presented ideas, disenchanting security attacks by structuring them, developing security guidelines practicable enough to be interwoven with automation engineers’ daily routines, and leveraging automation engineers’ existing knowledge for solving security problems, all nourish one hope:

There are indeed ways to make security engineering doable for automation engineers. We’re not there yet, but there are seeds of hope.

However, they can only grow to full-blown, widely used methods if we pour in at least the same amount of control system and automation knowledge as we pour in security knowledge.

Thus, their success depends on automation engineers’ willingness to regard automation security as their very own problem.

Automation engineers, are you willing to?
Security engineers, will you listen?

Further Reading and Viewing
