The four paths to a security decision
How security decision-making works in software engineering, requirements engineering, and system engineering
The security decision
What do you regard as a security decision? Let's not be too formal about it to begin with, but start with an example. Pictured below is a typical OT network: the field level with sensors and actuators at the bottom in light blue, controllers in dark blue above them, then control system components in green and office IT in yellow. In this network, I've marked in red a few things that could be regarded as security decisions.
There’s no need to dive deeper into any of these decisions here, but here are two general observations:
- Security decisions are rarely one big monolithic thing. It’s not one architecture change. It’s not inserting one super-smart security gateway. Instead, security decisions are a hodgepodge of many small configurations that collectively build (or break) your security posture.
- Security decisions don’t always look security-relevant at first sight. They often do not have “security” written on them, and they are often made in disciplines other than security.
This means that whether you care about security or not, whether you follow a security engineering method or not, whether you want it or not: You’re making security decisions all the time. Here’s the thing:
It’s impossible not to make security decisions. What we need to do is to turn them into visible, conscious decisions.
Let’s now try to give a more formal definition of “security decision”. We need a bit of context to do that. Assume we have an architecture, a system to be secured. This architecture has security problems that are supposed to be met with security requirements, which in turn will be implemented as security solutions.
In this scenario, we can mark in red what a security decision is in general: setting a security requirement to address a security problem.
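This definition can be restated as a small data model. The following sketch is purely illustrative; all class and field names are hypothetical, not part of any standard or tool.

```python
from dataclasses import dataclass

@dataclass
class SecurityProblem:
    # e.g. "PLC logic can be modified without authentication"
    description: str

@dataclass
class SecurityRequirement:
    # e.g. "engineering access to PLCs requires authentication"
    description: str

@dataclass
class SecurityDecision:
    """A security decision: setting a security requirement
    to address a security problem."""
    problem: SecurityProblem
    requirement: SecurityRequirement

decision = SecurityDecision(
    problem=SecurityProblem("PLC logic can be modified without authentication"),
    requirement=SecurityRequirement("engineering access to PLCs requires authentication"),
)
```

The point of making the decision an explicit object is exactly the thesis above: the decision becomes visible and conscious instead of implicit.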
Of course, a “security problem” and “a security requirement” are still vague terms. We’ll get to that in a bit.
Paths to a security decision
Next question: How do you arrive at a security decision? In the context of our research project IDEAS, we combed through security approaches from disciplines that have been dealing with security by design for a long time — longer than Industrial Control System (ICS) engineers have. We looked at methods from IT: software engineering, requirements engineering, and systems engineering.
There are many different methods, each grown within the mindset prevalent in its respective niche. But if you take a step back and compare, all these methods take one of four possible paths to a security decision: risk-driven, goal-driven, compliance-driven, or library-supported.
The risk-driven path probably looks most familiar to you if you have an IT or OT security background. It takes an attacker's perspective on the architecture you want to secure, identifying potential attack scenarios as security problems, for example a leak of sensitive information. The security decision on this path is a mitigation decision: you define a security requirement that is meant to mitigate the risk caused by an attack scenario.
It may come as a surprise to advocates of risk-based approaches, but there are other legitimate paths to a security decision besides the risk-driven one. Especially in requirements engineering, it is common to regard security as just another dimension of non-functional requirements and to engineer them as all other requirements are engineered — and that is mostly goal-driven. So in contrast to the attacker's perspective assumed on the risk-driven path, you take on a defender's perspective, or — without the combat rhetoric — simply an engineer's perspective. As an engineer, you decide which security goals your architecture needs to fulfill; for example, it needs to protect the confidentiality of customers’ data. The security decision then is a concretization: you define security requirements that concretize your security goals.
One could argue that the risk-driven and goal-driven paths are just different wording for the same thing: the attacker’s goal could be just the negated version of the defender’s goal. However, security is not a zero-sum game: “In football, the goals won by an attacker are exactly the goals lost by the defender. Security is different; there is not necessarily a relationship between the losses incurred by the asset owner and the gains of the attacker.” This means that there are two different paths to determining security requirements, one from an attacker’s perspective and one from an engineer’s (the defender’s) perspective, and both add value.
The third security decision-making path is simple, but indeed very common. You don’t bother with thinking too much about the architecture you want to protect and its security problems. On the compliance-driven path, your only security problem is that you need to comply with a certain regulation — be it by law or by corporate policy — so the security requirements are set. The only decision you still have to make is an application decision: Does this regulatory requirement apply to my situation?
There’s a fourth path which is a special case, but we cover it anyway because so many security engineering methods, especially in software engineering, make use of it. In library-supported security decision-making, you build your security decisions upon similar decisions you've made earlier for similar architectures and/or problems.
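The four paths and the kind of decision made on each can be summarized in code. This is only a restatement of the text; the label "reuse decision" for the library-supported path is my own shorthand, as the article names the decision type only for the first three paths.

```python
from enum import Enum

class Path(Enum):
    """The four paths to a security decision and the kind
    of decision made on each path."""
    RISK_DRIVEN = "mitigation decision"         # mitigate an attack scenario
    GOAL_DRIVEN = "concretization decision"     # concretize a security goal
    COMPLIANCE_DRIVEN = "application decision"  # does this regulation apply?
    LIBRARY_SUPPORTED = "reuse decision"        # build on earlier, similar decisions

for path in Path:
    print(f"{path.name}: {path.value}")
```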
The security decision base
Last question: Based on which information do you make your security decision, or, in short: What's your security decision base?
This one's more complicated. It's a question about concepts you use to represent your architecture, security problems, security requirements, and implementation. You often take the concepts you are familiar with for granted, while they're in fact very different depending on the discipline you're in, and also depending on the security decision-making path.
The image below gives an overview of the concepts (white rectangles) that we deemed useful for automation engineering and that do not restrict the decision-making path you can choose. Why we decided to include each specific concept is too lengthy to explain here, but you can read it in our research paper “A Security Decision Base”. Here, we just briefly explain the core concepts and provide examples and related concepts.
Function
The function concept permeates all stages of the decision-making process. We have modified it a bit to include not only technical details, but also information about the intention of a function and the people involved — because this information is essential for security.
That way, functions can be used for describing the operational architecture to be protected, risk scenarios, security functions or security-enhanced functions including security requirements, and also the solution architecture.
Examples: program a PLC, backup PLC logic
Related concepts: use case, actor / stakeholder, task, resource, dependency, data flow
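A minimal sketch of this extended function concept, assuming a simple flat structure; the field values are illustrative examples, not prescribed by the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A function: technical details plus intention and
    the people involved, as the extended concept requires."""
    name: str                  # e.g. "backup PLC logic"
    intention: str             # why the function exists
    people: list[str] = field(default_factory=list)  # actors / stakeholders

backup = Function(
    name="backup PLC logic",
    intention="restore control logic after failure or tampering",
    people=["automation engineer"],
)
```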
Unwanted event
An unwanted event is the starting point of every risk scenario: the event that would make for a really bad day, the event that must not happen. Every risk scenario leads to at least one unwanted event.
Examples: Overpressure in reactor, recipes become known to competitors, loss of view in control system
Related concepts: Impact, high-consequence event (as known from INL’s CCE method), worst case scenario, hazardous event, anti-goal, attacker’s intention, attacker’s goal
Security Goal
We use the concept of the security goal to bring together the decision-making on the different paths: they all end in a security goal. After that, the process continues in the same way, regardless of which path you took to reach the decision. Consequently, we define a security goal quite flexibly. It can be a compliance goal (“comply with that regulation”), but also a more conventional security goal composed of some flavor of confidentiality, integrity, and availability.
Examples: confidentiality of recipe data, integrity of PLC logic, non-repudiation of accountants doing invoicing, trustworthy engineering station = {integrity, authenticity, accountability} of engineering station
Related concepts: security objective, protection objective, soft goal
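The composite example above ("trustworthy engineering station") suggests expressing a security goal as a set of properties applied to an asset. Here is a hypothetical sketch of that idea; the property vocabulary and function name are my own assumptions.

```python
# Property vocabulary assumed for this sketch; extend as needed.
KNOWN_PROPERTIES = {
    "confidentiality", "integrity", "availability",
    "authenticity", "accountability", "non-repudiation",
}

def security_goal(asset: str, properties: set[str]) -> dict:
    """Build a security goal as a set of properties on an asset,
    e.g. trustworthiness = {integrity, authenticity, accountability}."""
    unknown = properties - KNOWN_PROPERTIES
    if unknown:
        raise ValueError(f"unknown properties: {unknown}")
    return {"asset": asset, "properties": properties}

# "trustworthy engineering station" from the examples above:
goal = security_goal(
    "engineering station",
    {"integrity", "authenticity", "accountability"},
)
```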
Security parameters
Remember our examples for security decisions at the beginning of this article? The hodgepodge of small configurations that all somehow affect security and that we needed to make visible?
Well, these are represented by the security parameter concept. A security parameter is anything that, if modified, affects your system’s security posture.
Examples: choice of a communication protocol, integrity protection of PLC logic, mechanism for changing operating modes, user accounts (default or individual)
Related concepts: none
Indicators of Insecurity (IoIs)
Indicators of insecurity are similar to security parameters. They also exist to make something visible that would otherwise rarely be mentioned explicitly. They are everything that could be used to make an unwanted event happen. You could say, hey, that’s a vulnerability — and that’s true, vulnerabilities are part of IoIs. But especially in industrial control systems, there are many things that could be used in a risk scenario that are not really vulnerabilities, but legitimate, built-in features. Insecure-by-design features. That’s why we called this concept indicators of insecurity — to make “insecure by design” more visible.
Often, an indicator of insecurity is related to a security parameter. It’s a security parameter set to a value that makes it a potential point of attack.
Examples: CVE related to a component, integrity protection mechanism for PLC logic: none, user accounts: default
Related concepts: vulnerability, weakness
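The relationship between the two concepts can be sketched directly: an IoI is often a security parameter set to a value that makes it a potential point of attack. This is a minimal sketch assuming a flat dict of parameters and a hand-maintained table of insecure values; both the parameter names and the values are hypothetical, taken from the examples above.

```python
# Values considered insecure by design, per parameter (illustrative).
INSECURE_VALUES = {
    "plc_logic_integrity_protection": {"none"},
    "user_accounts": {"default"},
}

def indicators_of_insecurity(parameters: dict[str, str]) -> list[str]:
    """Flag security parameters whose current value makes them
    a potential point of attack (an indicator of insecurity)."""
    return [
        name
        for name, value in parameters.items()
        if value in INSECURE_VALUES.get(name, set())
    ]

iois = indicators_of_insecurity({
    "plc_logic_integrity_protection": "none",  # flagged as an IoI
    "user_accounts": "individual",             # fine
})
```

Note that this only covers IoIs derived from parameter settings; IoIs such as CVEs attached to a component would need their own source of data.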
I realize that reading about security decision-making, and even more so about theoretical concepts for a security decision base, is rather dry material. That’s why the next goal in our research project is to visualize the above concepts in diagrams that make all the information relevant to a security decision easy for humans to grasp — because I firmly believe humans will continue to be the ones making security decisions for some time to come.
And they don’t need a method to replace their security decision-making. They need a method to support their security decision-making. A method that helps them to turn the hodgepodge of unintentional, invisible security decisions into visible, conscious, informed security decisions.
This article is based on a research paper first presented at EKA conference in Magdeburg, Germany on June 23, 2022. The research is funded by the German Federal Ministry of Education and Research.