Make The Big Decisions With Cyber Decision Diagrams
Picture this: You’re busy implementing the latest cybersecurity best practices. At the same time, your inbox is filling up. Alerts are pinging. Decisions cascading. Each one demanding instant attention. Each one claiming to be crucial.
This isn’t just chaos — this is our reality. Drowning in an avalanche of small decisions. Each one urgent. Each one demanding attention. Each one promising to be the one that matters.
Should we patch this system now or later?
Which alert deserves priority?
Which best practice needs immediate attention?
And each small decision we make results in a new avalanche, new small decisions.
Let’s be honest. We all know that we are losing control.
While we’re chasing all these small decisions, we know something much bigger is lurking in the fog beyond our screens. Something we can’t see because we’re too busy with all these small decisions.
You all might remember — in summer 2024, just half a year ago, a sophisticated security system caused a massive shutdown at airports worldwide.
One small decision — to deploy updates automatically — cascaded into thousands of stranded passengers. The airport’s High Consequence Event had struck.
Isn’t it ironic? We’re all trying to make every small decision the right way, implement every security measure, prevent every possible incident. But sometimes our decisions don’t prevent, but CAUSE security incidents.
Because while we try to do all the small things right, we have no time left for the question of whether we’re still doing the right things.
Each time we sink further into our screens, scanning systems and monitoring networks, we drift further away from the reality we’re trying to protect.
Because if there’s one thing I’ve learned working with engineers across industries, it’s this:
When we don’t look beyond cyberspace, we miss what really matters.
A chemical plant’s resilience isn’t just about network security — it’s about understanding what happens in real life when that network fails. And that’s a difference.
An airport’s operations aren’t just about keeping systems running — they’re about keeping planes flying. And that’s a difference.
These High Consequence Events are what makes CEOs lose sleep.
Now, we all take pride in making risk-based decisions. CEOs love that, too.
So let’s get this out of the way upfront: As long as you mentally stay in cyberspace — you’re not making risk-based decisions.
It’s not just difficult — it’s impossible.
The High Consequence Events don’t happen in cyber, and we won’t prevent them just in cyber.
If we stay in cyber, we can’t see how a cyber system contributes to a plant shutting down.
If we stay in cyber, we can’t see how a release valve would be a better safeguard than the most sophisticated antivirus solution.
If we stay in cyber, the real world, the reason why all cyber exists, stays covered in thick fog.
Because these real-world consequences don’t materialize in a world of bits and bytes. They exist in a world of steel and steam
– a world where engineers speak a different language.
Their language looks like this, for example:
What you’re seeing might look like a maze of lines and symbols. But to engineers, this piping & instrumentation diagram is more than a technical drawing — it’s a vessel of collected wisdom.
Every line, every symbol carries decades of engineering knowledge. Like a magical repository of understanding, showing them exactly how their world works.
See these flowing lines? They’re not just pipes — they’re the veins and arteries of industrial life. The control points with the numbers in them? They are valves, guardians of pressure and flow, or sensors, the eyes and ears of the entire system, constantly watching, measuring, protecting.
And this diagram isn’t just a documentation of the status quo. It’s created earlier:
Before a single pipe is laid, before a single valve is installed, engineers gather around diagrams like these. They trace their fingers along these lines, seeing the future in these flows. They debate. They decide. They dream up better ways to make things work.
The piping & instrumentation diagram is the engineers’ mental world. They can enter it, think through alternatives, make their decisions.
When engineers look at the diagram, they can trace how a single sensor’s reading flows up through control loops to keep an entire process stable. They see exactly how a temperature shift in this reactor could ripple through the entire system.
And everything was fine as long as the world looked like this:
Because the piping & instrumentation diagram does cover control loops like the one above. That is in fact one of the diagram’s purposes: serving as the interface between process and control engineers.
But watch what happens to this crystal-clear view as the history of control systems unfolds.
First, simple control loops expanded into layers. What was once a direct line between sensor and response became a maze of digital control. Like fog rolling in over a harbor, complexity begins to obscure the engineer’s vision.
Because suddenly, there is a cyberspace. Control systems are added. The piping & instrumentation diagram can still tell most of the story, but no longer all of it.
Next comes the IT revolution. Suddenly, offices need data from the plant floor. The Purdue Model emerges — our first attempt to organize this growing complexity.
The cyberspace expands — and the fog with it. The physical world continues to shrink in our vision.
Today? The boundaries of the Purdue model themselves are dissolving like morning mist. Cloud technologies reach down into our most critical systems…
…and again, the portion that the engineer’s mental models can cover shrinks.
Let’s be clear. It’s not just about diagrams disappearing. It’s about losing a crucial way of seeing.
From an engineer’s perspective, the cyber world — lacking any sound engineering method — is like navigating in thick fog, and that makes it difficult for engineers to contribute to this strange world.
Engineers have spent decades perfecting their methods for designing automation systems. They know exactly where to place a sensor to catch overheating. They know precisely how to choose the safety trip that will best protect a critical process.
But where are the equivalent tools for making informed decisions about the complex cyber world we’ve been building on top?
Where are our diagrams in cyber to decide for example how control system programming should be accessed?
Where are our mental models to analyze whether cloud connectivity creates more risk or more reward?
The answer came to me by accident. And like many discoveries, it started with a simple research question: How can we empower engineers to do cybersecurity?
For years, I’ve been researching how to involve engineers in cybersecurity without them having to become security experts.
So I created diagrams. Simple ones. So simple, in fact, that I dared not present them as research results. So simple that for years, I couldn’t believe they weren’t already standard practice in cybersecurity anyway.
But one late Friday afternoon, something unexpected happens. During a completely different discussion with some engineers, one of them stops mid-sentence.
‘Wait,’ he says. ‘Those diagrams you showed us before… can we look at them again?’
When I pull them up, everything changes. As the engineers gather around the diagrams, the energy in the room shifts.
‘This!’ they say, with an intensity that makes me pause. ‘If we could have something like this during early engineering… that would be really helpful.’
‘Nobody does this yet while engineering a plant,’ they continue. ‘We have our piping and instrumentation diagrams, we have our other tools, but for cyber… we don’t have anything.’
That’s when it hits me. What these engineers are searching for isn’t just another tool. They’re searching for what every complex field eventually needs — a way to distill wisdom into clarity. A way to see patterns through complexity. A place into which every engineer and every IT person can pour their knowledge, so that a joint clarity emerges.
In the magical world of Harry Potter, they have exactly such a tool. They call this magical object a pensieve.
Let me use Dumbledore’s own words to explain:
“One simply siphons the excess thoughts from one’s mind, pours them into the basin, and examines them at one’s leisure. It becomes easier to spot patterns and links, you understand, when they are in this form.”
That’s what Dumbledore says.
Wouldn’t you like that, too, for your cyber decision chaos?
A tool to transform complexity into clarity?
To make your thoughts accessible to others?
See patterns you couldn’t see before?
Free your mind from the overwhelming weight of details to focus on what truly matters?
To make good cybersecurity decisions, we need a way to step back and see the whole picture — cyber and beyond.
That’s exactly what the cyber decision diagrams, one result of my PhD, offer us.
Let me show you exactly what made these engineers stop in their tracks that Friday afternoon.
These diagrams I’m going to show you look ridiculously simple. So simple that I need to confess something — I’ve been nervous about showing them to you today. But sometimes, the most powerful tools are the simplest ones.
Look at what’s different here: No assets. No networks. No overwhelming technical details.
Instead, you see abstract entities. You see intentions. You see how humans actually interact with our systems.
Take this example: Here’s a function showing how an engineer might change controller logic. You see the role — who needs to do something. You see the intention — what they’re trying to achieve. You see the components involved — but not buried under technical specifications.
At this level of detail, you can easily create these diagrams in early engineering, before a single asset is purchased. This is information you can’t get from scanning assets anyway. You need to scan engineers’ brains.
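To make that structure concrete, here is a minimal sketch of how such a diagram function could be modeled in code. This is my own illustration — the field names and the `DiagramFunction` class are assumptions, not the actual schema used by the diagrams or the tool:

```python
from dataclasses import dataclass, field


@dataclass
class DiagramFunction:
    """One function in a cyber decision diagram.

    Illustrative sketch only: captures who acts, what they intend,
    which components are involved, and the real-world consequence
    a wrong action could trigger — without technical specifications.
    """
    role: str                              # who needs to do something
    intention: str                         # what they are trying to achieve
    components: list = field(default_factory=list)  # components involved
    consequence: str = ""                  # the real-world event to prevent


# The example from the talk: an engineer changing controller logic.
change_logic = DiagramFunction(
    role="engineer",
    intention="change controller logic",
    components=["engineering workstation", "temperature controller"],
    consequence="reactor overheating",
)

print(f"{change_logic.role} -> {change_logic.intention} "
      f"(could cause: {change_logic.consequence})")
```

Note that nothing here requires knowing a single asset’s make or model — which is why this information can be captured in early engineering, before anything is purchased.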
And — that’s where the magic really happens — watch as we layer this over an engineer’s familiar piping & instrumentation diagram.
Two worlds — cyber and physical — suddenly speaking the same language.
Suddenly, we see that this controller regulates temperature in a critical reactor. And we see that a wrong change could lead to overheating.
In other words: We see what matters in the real world.
Now, before we drown in small decisions, we can stay at that level of detail for a while. Step back. Ask ourselves: is this how engineers SHOULD be making controller logic changes? Or should they rather make them from the control room?
To see clearer, you can compare your two options side by side. Become aware of the differences.
Think about what these differences mean for your cyber resilience. For example, you quickly realize that if engineers change logic from the control room, data has to travel a longer way, probably through more than one network.
At this point, you might decide it would help to see this function in the context of your network.
Now you can see how the function interacts with the rest of your architecture. It becomes clearer if you highlight the function and grey out the rest of the architecture. Like this:
Now you can see what different networks your data is crossing. Admittedly, the diagram is a bit more crowded than before, but notice — we’re still not drowning in technical details. We’re keeping focus on what matters: the interactions, the intentions, the potential consequences.
As the fog lifts from cyberspace, we gain a new perspective. A very powerful perspective: Suddenly, you can see everything that matters for cybersecurity, from cyberspace and beyond, at a glance.
— Technical components that matter
— Human roles and their intentions
— Interactions between technical components, and between humans and tech
— And: Your anchor into the real world, the high consequence events you need to prevent.
All in one diagram.
If you want to think about your function more thoroughly, or, in Dumbledore’s words, if you have some more excess thoughts to siphon into your pensieve, you can layer different kinds of cybersecurity information on top of these diagrams:
An attack path leading to a high consequence event. In this example, an attack could originate at an IT device in the office network and eventually lead to the real-world impact, the overheating of the product.
Or you may want to layer security requirements onto your cyber decision diagrams. You might decide that it really matters to make sure that whoever wants to change controller logic MUST be physically present. So you add a key switch that needs to be turned manually.
But look, you don’t just add the key switch — but also a security requirement tag that makes explicit WHY it was added.
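Keeping the rationale attached to the safeguard is the key idea here. As a sketch, a requirement tag could be as simple as a safeguard paired with its explicit “why” — again, the `SecurityRequirementTag` name and fields are my assumptions, not the tool’s actual data model:

```python
from dataclasses import dataclass


@dataclass
class SecurityRequirementTag:
    """A safeguard plus the explicit reason it was added.

    Illustrative sketch: the rationale travels with the diagram,
    so anyone opening it later can see WHY the safeguard exists.
    """
    safeguard: str   # what was added to the design
    rationale: str   # why it was added, made explicit


# The key switch from the talk, with its reason attached.
key_switch = SecurityRequirementTag(
    safeguard="manual key switch on the controller",
    rationale="changing controller logic must require physical presence",
)

print(f"{key_switch.safeguard}: {key_switch.rationale}")
```

With the rationale stored alongside the safeguard, the next reader — engineer, IT person, or auditor — inherits the reasoning, not just the result.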
That way, as soon as someone else enters your cybersecurity pensieve — other engineers, IT people, an auditor, an authority — they can see and understand your thoughts. And maybe add their own.
That way, the engineering knowledge that is so desperately needed for making truly risk-based cybersecurity decisions, based on real-world consequences, becomes accessible to everyone. Equipped with the cyber diagram pensieve, even IT people can make engineering-informed cybersecurity decisions. And even non-technical managers can understand them.
Think about what that means! There’s no need for engineers to shoulder the burden of cybersecurity. All they need to do is siphon their knowledge to the cyber decision diagrams.
Does this sound tempting? Then why don’t you start building your own cybersecurity pensieve today? Begin with the most pressing high consequence event, and decide on one key function that could lead to it. Then you’re already in the midst of creating your first cyber decision diagram.
Let your engineers and cyber experts pour their knowledge into it.
I promise you’ll be surprised by how many big decisions you’ve never thought about systematically before.
I promise you’ll be relieved how empowering it feels to have mastered the complexity of tech, cyber and beyond, in a diagram.
Oh wait, I know what you must be thinking. Great, now I’m fully motivated — and all she leaves me with is a blank page?! Blank pages can be daunting, I know.
That’s why we’ve built something: A free tool that guides you through creating your first cyber decision diagram, your first drop in your cyber pensieve.
It’s completely free and open to the public because I firmly believe that a good tool for understanding complex tech should be available to EVERYBODY in cybersecurity.
Dear cybersecurity community: cyber-decision-diagrams.com is for you.
Before you roll up your sleeves and start diagramming, let me leave you with one last thought:
When future generations look back at how we handled the cyber challenges of our time, let them say this: When cyber complexity threatened to overwhelm us, we didn’t surrender.
We didn’t just react to every small decision that came our way.
We resisted the temptation to pile even more complexity on top.
We also didn’t try to fight complexity, fight the future.
Instead, we created a new way to handle complexity:
We learned to cyber-diagram it.
We don’t need more tools to see through the fog. What we need is a new way of seeing.
A new way of making technical knowledge accessible to everyone who can help secure our critical infrastructure.
Let’s democratize technical understanding.
Let’s look beyond the fog.
Let’s show what truly matters.
Let’s diagram the future.
This is a shortened transcript of my session at S4x25. The session was recorded; the video will be uploaded to S4’s YouTube channel in the next weeks or months.