For security, think functions — not systems
How to view your systems through security’s lens
Short version / infographic for this article can be found here.
At the end of the article Security Engineering Needs A P&I Diagram I promised that I would make a first draft proposal for modelling security. Or at least, to begin with, the “first layer” of security engineering, which is understanding the functions to be protected.
But before we dive deeper into modelling, let’s linger a little longer on this inconspicuous little statement at the end of that last sentence:
Understanding the functions to be protected.
It seems like common sense, but in most cases, security practitioners do not really live by it. They do gather and understand assets, or systems — but functions?
In this article, I’m offering the hypothesis that functions, not systems, are indeed the most important unit in which we need to think about security; the unit around which everything else pinwheels.
I will argue you should view your systems through the “function lens” and name eight changes you’ll notice once you do.
What do you need to know for security?
What’s the information you ask for once you begin securing a system? What do you need to know?
I’ve been asking this question of a myriad of people over the last few years: engineers, administrators, security consultants, my coworkers, probably every job applicant I’ve interviewed.
It probably sounded trivial to most of them. But it is not. The answers are insightful, because what people tell you they need to know in order to do security engineering tells a story about how they approach security engineering.
Most people suggest that the minimum you would need in order to secure something is to understand the system to be secured.
Other than that, what you need to know for security engineering can be answered if you know what your next steps will be. Let’s take a quick look: we’ll probably want to identify threat scenarios, evaluate risks, define security requirements, and design security solutions (I’ll spare you the lighthouse procedure model this time).
Thus, we need to understand
- How the system works
(in order to determine “what can go wrong”, i.e. threat scenarios, and how likely that is)
- What’s the system’s purpose in the greater whole
(in order to evaluate worst cases and consequences of threat scenarios)
- Which characteristics of the system actually serve this purpose, or in other words: Which features are fixed and which are negotiable
(in order to find feasible ways to implement security requirements)
It appears obvious that technical information condensed into asset inventories, architecture diagrams, network maps, and data flow diagrams is helpful, and this corresponds with most answers to the original question of what we need to know in order to do security engineering.
Don’t forget those humans
Statements like these have long become truisms:
For security, you need to take into account the human factor.
Security is a “people business”.
Security consists of people, processes, and technology.
Your biggest vulnerability is the user.
You know what’s funny?
If the importance of people for security is a truism, why do we only ask for technology as the very foundation of our security analysis?
Look at the above list of what we need to understand: How the system works, what’s its purpose in the greater whole, and which system features are actually needed to serve this purpose. Who said “the system” is a purely technical one?
Nobody, and probably you are not really thinking that way either.
Yet, we are used to documenting only the technical part as a basis for our further analysis. If we think “system”, we mostly think “technology”.
I can only guess why. Because that’s the documentation available, that other engineers have produced. Because there is no standard for documenting people. Because people’s behavior is too elusive and unpredictable to document anyway. Because that little bit of human interaction needs no documentation but goes without saying. Because we can design systems, but we cannot design people. Because we’re freakin’ engineers, not psychologists!
But let’s just try for a moment and look at the above list of what we need to know and imagine “system” as a human-technology complex.
Or, to quote NIST SP 800–160 on Systems Security Engineering:
“System elements include technology/machine elements, human elements, and physical/environmental elements. System elements may therefore be implemented via hardware, software, or firmware; physical structures or devices; or people, processes, and procedures.”
Purpose!
If we put all this together, we now have what we need to know to do security:
- A “system” consisting of technology, humans, and their interactions
- How the system works
- What’s the system’s purpose
- Which characteristics of the system actually serve this purpose
What looks like a loose collection of information can actually be lumped together into a single, handy unit called a function. So a function is more than just a system. It’s the system, its dependencies, its interactions, what humans do with it, and its purpose — all in one unit.
A function is what holds together your loose arrangement of assets and makes sense out of it by way of purpose.
I like to think of this as security’s atomic unit; “atomic” in its most original sense of “indivisible”. This, of course, is not to be taken too literally.
You can very well divide a function into its parts, technical systems, hardware and software, humans, dependencies, communication protocols. But it is atomic in another way: For security purposes, you should not divide it.
When you think security, you should not divide technology from its purpose. For security engineering, looking at technology without explicitly knowing its purpose may lead to overengineered security, with a side of helpless feeling among engineers caused by the perceived arbitrariness of security measures imposed “on principle”.
I would even dare to say that it already has.
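To make the “function as atomic unit” idea concrete, here is a minimal sketch in code. The class and field names are my own illustrative assumptions, not any standard; a real model would grow richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A function: technology, humans, and their interactions, bound by a purpose."""
    purpose: str                                            # why this function exists
    systems: list[str] = field(default_factory=list)        # technical elements
    humans: list[str] = field(default_factory=list)         # roles, not names
    interactions: list[str] = field(default_factory=list)   # who/what talks to whom, and how

# A hypothetical example instance:
program_plc = Function(
    purpose="Program PLC",
    systems=["Engineering laptop", "PLC"],
    humans=["Automation engineer"],
    interactions=["Engineer writes control logic on the laptop",
                  "Laptop downloads the program to the PLC"],
)
```

Note that purpose is a required field here: you cannot create a `Function` without stating why it exists, which is exactly the indivisibility the text argues for.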
Sounds familiar? It is!
Functions are not a new concept for security engineering. There are quite a few (security) engineering approaches working with functions:
Consequence-driven, Cyber-informed Engineering (CCE) is INL’s approach to securing critical infrastructures. Based on the assumption that if an attacker really tries, critical infrastructures will eventually be compromised, INL proposes an approach that focuses on the most critical…wait for it…functions first.
Essential functions are also one of the core concepts of the IEC 62443 Industrial automation and control systems (IACS) security standard series, introduced in IEC 62443–3–3:2013 as a “function or capability that is required to maintain health, safety, the environment and availability for the equipment under control”, and stating further: “A key step in risk assessment […] should be the identification of which services and functions are truly essential for operations.”
Stepping away from security, systems engineering in general also works with functions. In fact, already the definition of a system from ISO/IEC 15288:2015 (Systems and software engineering) sounds very much like what we just defined as functions: A system is defined as the “combination of interacting elements organized to achieve one or more stated purposes”.
See the emphasis on purpose? Consequently, systems engineering regards functions as one possible view on systems: “A system can be viewed in isolation as an entity, i.e., a product; or as a collection of functions capable of interacting with its surrounding environment, i.e., a set of services.” We’ll remember for later:
A system can be viewed […] as a collection of functions capable of interacting with its surrounding environment.
— ISO/IEC 15288:2015
These examples reflect how most people arrive at the function concept when thinking systematically about engineering security. But even though we know on a theoretical level that we have to think in functions, my perception is that we’re not really following through with that:
- We have no common understanding of what a function is, and no discourse either.
- We do not model them.
- We do not regard them as essential documentation.
- We do not sketch functions, talk about functions, or share them when talking about security.
Instead, we keep talking about “systems” or “assets”, leaving out the humans and degrading purpose to a mere byproduct.
What changes once you think in functions
What happens once you really, truly, think in functions? This is probably the right time for an example, so here’s a very simple one.
Let’s assume we’re lucky and this system here is all we have to protect with our security engineering:
Now, we put on our function glasses. What could functions look like? Let’s remember: A function is
- A “system” consisting of technology, humans, and their interactions
- How the system works
- What’s the system’s purpose
- Which characteristics of the system actually serve this purpose
Obviously, our system will have different functions, and hence different perspectives to look at it from.
Below are two examples: Programming the PLC (blue) and monitoring the PLC (green).
Both functions have their purpose in their title, and the diagram includes all technological systems, human roles, and interactions necessary to fulfill that purpose:
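The two example functions can also be captured as plain data. This is a hedged sketch: the concrete systems, roles, and interactions are assumptions I’ve read into the example, not taken from the diagram.

```python
# Two function views on the same system; the field names are illustrative.
functions = {
    "Program PLC": {
        "systems": ["Engineering laptop", "PLC"],
        "humans": ["Automation engineer"],
        "interactions": ["Laptop downloads control logic to the PLC"],
    },
    "Monitor PLC": {
        "systems": ["HMI", "PLC"],
        "humans": ["Operator"],
        "interactions": ["HMI polls process values from the PLC"],
    },
}

# Both functions include the PLC, but each frames it from a different purpose.
shared = set(functions["Program PLC"]["systems"]) & set(functions["Monitor PLC"]["systems"])
print(shared)  # {'PLC'}
```

The overlap makes the point of the example: the same asset appears in several functions, and each function is its own perspective on it.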
With this example in mind and our function lenses firmly in front of our eyes, let’s look at eight changes you’ll witness once thinking in functions.
1. You rapidly pivot between perspectives
Don’t regard thinking in functions or systems as an either-or choice. Rather, thinking in functions is a different, systematic way of looking at a system.
Remember ISO/IEC 15288 from above? You can still look at a system as an assortment of other systems, but you can also change your perspective and look at it as a collection of functions.
Or, look at one function at a time. The green one. The blue one. With each perspective, other aspects gain relevance. Functions don’t make this reality of multiple perspectives less complex — but you bring structure to them and can systematically pivot between them; each function being its own perspective.
2. You make explicit what you don’t know
Looking at the above sketched functions, you probably realize they are very explicit on how these functions work, down to diagramming data flow. It’s normal not to know every detail of how a function works. After all, control system vendors do not always share which ports and protocols they use, or the person you would need to ask has long left the company.
In fact, discovering these knowledge gaps is one of the biggest advantages of thinking in functions: You systematically discover which function you actually cannot really explain.
On a side note: You can think in functions without the diagrams, but I recommend not to. The diagrams force you to be very explicit about how a function works. You can easily put together a number of bullet points describing a function and only later realize they are still too ambiguous to convert into a diagram:
The laptop is used for programming, fine, but how exactly is the laptop connected to the PLC? Over the network or point-to-point? And who exactly is allowed to do the programming? Your engineers, or third parties too?
3. You easily build bridges to business processes
If you’ve ever had to implement a security management system, you know that translating business processes to technology can be challenging because of the different levels of detail and the seemingly irreconcilable viewpoints of management and engineers. That problem disappears once you connect both perspectives by using functions as your pivot point. Since you’ve already grouped your assets by purpose into functions, you only need to tie these functions to business processes now.
For the same reason, if you need to do a risk analysis and estimate worst-case consequences later on, you already have all the information you need at hand when you think about risks for functions instead of for single systems. Because what determines the worst consequence when messing with a function? Exactly: the function’s purpose.
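As a toy illustration of that pivot: the mapping from functions to business processes, and the impact statements, are invented for this example, but the lookup shape is the point.

```python
# Functions tied to business processes; a function's worst-case consequence
# follows from the process its purpose serves.
function_to_process = {
    "Program PLC": "Plant maintenance",
    "Monitor PLC": "Production supervision",
}

# Worst case per business process, e.g. from a business impact analysis.
process_worst_case = {
    "Plant maintenance": "Wrong logic deployed, unsafe plant state",
    "Production supervision": "Operators blind, delayed incident response",
}

def worst_case(function_name: str) -> str:
    """Look up a function's worst-case consequence via its business process."""
    return process_worst_case[function_to_process[function_name]]

print(worst_case("Monitor PLC"))
```

A per-system inventory gives you no such lookup; the purpose, carried by the function, is what makes the bridge to business impact a one-liner.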
4. Threat scenarios become a matter of more logical and less creative thinking
When you’re at the point in your risk analysis where you have to determine threat scenarios, you’ll notice that you’ve already done much of the required brain-work when you have your functions handy.
You can systematically work through your functions, asking yourself what could go wrong, which renders the construction of threat scenarios a more logical than creative process. Also, having thought so systematically about what can go wrong helps create a decision basis for the likelihood of a scenario: How much work is it? What knowledge or information would an attacker need?
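That systematic walk-through can be sketched as a simple enumeration. The guide words and functions below are my own assumptions for illustration, loosely in the spirit of HAZOP-style guide words, not a method the article prescribes.

```python
# For each function, apply simple "what can go wrong" guide words to its
# interactions, making scenario construction a mostly mechanical step.
GUIDE_WORDS = ["is spoofed", "is tampered with", "is unavailable"]

functions = {
    "Program PLC": ["Laptop downloads control logic to the PLC"],
    "Monitor PLC": ["HMI polls process values from the PLC"],
}

def threat_scenarios(functions: dict[str, list[str]]) -> list[str]:
    """Combine every interaction of every function with every guide word."""
    return [f"{name}: '{interaction}' {word}"
            for name, interactions in functions.items()
            for interaction in interactions
            for word in GUIDE_WORDS]

scenarios = threat_scenarios(functions)
print(len(scenarios))  # 2 interactions x 3 guide words = 6
```

The creative work shrinks to choosing good guide words; the rest is a loop over functions you have already documented.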
5. You automatically include humans in your analysis
When it comes to threat scenarios, it’s easy to overlook human weakness. But when you properly work through your functions, how could you forget the humans? They’re part of the function, you diagrammed them, you won’t forget about them. (The same holds true when you design security requirements and their implementation later on.)
6. You reduce the number of single units you have to consider
Looking at the above examples, systematically thinking in functions seems like a lot of work. No doubt, security engineering is a lot of work — especially if you have no clue how your stuff works. Properly understanding your functions should not be a security issue, but in reality, it often is.
That said: Have you ever done a risk assessment based on single systems, even system types, as your atomic unit?
Replacing those with functions dramatically reduces the number of units you’ll have to assess. While you may well deal with tens of thousands of systems in a larger company, functions rather come in the order of tens to hundreds.
7. You create maintainable documentation
Because you can’t secure what you don’t know, documentation is a crucial (and painful) part of every security project. While functions do not reduce the importance of documentation, they do alleviate the painfulness of its change management, because they bring a permanent structure.
Your systems are volatile, but what you do with them (hence, your functions) is much less so. The way you program a PLC will change, but the function “Program PLC” will stay. Once you change how it works, you can simply update the existing function, and you don’t have to redo your security analysis from scratch: you can check your existing analysis for “Program PLC” and see what needs to change.
8. System experts become talkative
This may well be the most important change you’ll notice when putting on your function glasses.
Security heavily relies on those with profound system knowledge being able (and willing) to efficiently share their knowledge. Thinking in functions is way more intuitive for automation engineers than listing single assets, and thus functions are a really good guidance if you want to quickly and systematically understand what systems you need to protect and how they work.
In short: If you want to make system experts talk about their systems (and keep talking!), ask them about their functions — about what people do with the systems on a daily basis.
Towards modelling security
Now that we’re all able — and hopefully also willing? — to view our systems through the function lens in our security engineering, we’re ready to move on to modelling them.
In the meantime, put on your new function glasses, take a look, and let me know how things are going.