In an earlier blog post [1], I wished we had a common model for security, something that is to security engineers what P&I diagrams are to process engineers. Something that our minds could “walk around in” to better understand our systems, assumptions, risks, and security design choices.
I promised to make a first draft proposal for models to be used in security engineering, which is what I’m doing in this article.
There’s also a PDF version available for download here.
In this article, we’ll model the security engineering workflow based on the Layered Blueprints security engineering procedure model, and we’ll create security engineering system models, making use of familiar and widely accepted concepts and standards such as data flow diagrams, attack graphs, MITRE ATT&CK®, ISO/IEC 27001, and ISA/IEC 62443 along the way. …
Security Engineering sucks.
No, seriously, it’s very annoying that we need to do it at all. I mean, which of you became an engineer in order to do security? Engineers want to build great things, make life easier, better, and more efficient.
Security engineers, however, do not build any cool new features. All they do is try their best to ensure that the cool features other engineers built do not get screwed up. We only need to do Security Engineering because there are total douchebags in the world who want to abuse our very cool engineered features.
So I agree that having to do Security Engineering really sucks, just like having to lock one’s bike sucks. …
It was as if people had been waiting until they could finally and rightfully write that someone had died as a direct result of a hacker attack.
That of course is true, and the incident is tragic.
Also, no one working in critical infrastructure security is really surprised that it happened. But nevertheless, the hastily drawn conclusions, gloatingly confirming the cliché of our critical infrastructures being in bad shape, fall short.
Let’s calm down, stop pointing fingers, and analyze what we know.
Last month, it was hard not to come across this incident: The “Universitätsklinikum Düsseldorf” (UKD), Düsseldorf’s university hospital, suffered a ransomware attack. As a consequence, a woman admitted to the hospital as an emergency could not receive treatment that same night and was sent to another hospital in Wuppertal, about 30 km from Düsseldorf. …
It was as if people had only been waiting until they could finally, and rightfully, write that a person had died as a direct result of a hacker attack.
That is of course true, and it is tragic.
Nor did it really surprise anyone who works in the IT security of critical infrastructures. Nevertheless, the hasty conclusions, gloatingly confirming the cliché of our poorly prepared critical infrastructures, fall short.
Come on, let’s calm down, put away the wagging finger, and analyze what we actually know.
In September, it was hard to miss: the Universitätsklinikum Düsseldorf (UKD), critical infrastructure in the health sector, suffered a ransomware incident. As a consequence, a woman could not receive emergency treatment and died after her treatment, delayed by an hour, at a hospital in Wuppertal 30 km away. Or, condensed into the headlines: death by ransomware. …
Imagine you had a digital representation of your assets, or of your critical functions, that summarized all security-relevant aspects in one model.
Not a perfect reflection at all, but perfect enough to do security engineering. Imagine you could compliance-check that representation against security frameworks. Adjust it if your assets used different protocols, had different users, or there were new known vulnerabilities. And push the “apply” button so you could transform your security configurations of choice into reality.
Imagine you had a digital twin for security engineering.
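To make the idea a bit more concrete, such a representation could be sketched as a small asset model plus a set of rule checks. Everything below (`Asset`, `Rule`, `check_compliance`, and the two example rules) is a hypothetical illustration of the concept, not an existing tool or framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "security digital twin": a model of an asset's
# security-relevant aspects that can be checked against a rule set.
# All names and rules here are illustrative assumptions.

@dataclass
class Asset:
    name: str
    protocols: set
    known_vulns: list = field(default_factory=list)

@dataclass
class Rule:
    rule_id: str
    description: str
    check: callable  # returns True if the asset satisfies the rule

def check_compliance(asset, rules):
    """Return the IDs of all rules the asset violates."""
    return [r.rule_id for r in rules if not r.check(asset)]

# Two toy rules standing in for a real security framework's requirements.
rules = [
    Rule("R1", "no plaintext remote-access protocols",
         lambda a: "telnet" not in a.protocols),
    Rule("R2", "no open known vulnerabilities",
         lambda a: not a.known_vulns),
]

plc = Asset("PLC-01", {"modbus", "telnet"}, known_vulns=["CVE-XXXX-1234"])
print(check_compliance(plc, rules))  # → ['R1', 'R2']
```

The point of the sketch is the workflow, not the code: when an asset’s protocols, users, or known vulnerabilities change, you adjust the model and re-run the check, instead of re-doing a paper-based assessment.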
Almost two years ago, I introduced the concept of Layered Blueprints, a lighthouse-shaped procedure and system model for security engineering. …
Can we start to see the particularities of PLCs as features, not bugs, for security?
PLCs don’t need security in their programming that urgently, do they? And even if they did, PLCs wouldn’t even be capable of implementing security, would they? And anyway, does programming PLCs actually count as programming?
“No one learns secure PLC programming in their training,” Jake Brodsky remarked during the S4x20 talk that gave the initial spark to the Top 20 Secure PLC Coding Practices Project, which Jake Brodsky, Dale Peterson, and I started. It is hosted by the ISA Global Cybersecurity Alliance and is publicly available as of today.
This text is an introduction to the project and its background, and it also contains a German translation of the first draft version of the Top 20. …
Can we start using the particularities of Programmable Logic Controllers (PLCs) as features, not bugs, for security?
PLCs don’t need secure programming practices that urgently, right? And even if they did, PLCs wouldn’t be capable of implementing the secure coding practices we know anyway, would they? While we’re at it: does PLC programming count as programming in the first place?
“No one learns secure PLC coding at school,” Jake Brodsky said in his S4x20 talk, which gave the initial spark to the Secure PLC Programming Practices Project that Jake Brodsky, Dale Peterson, and I set up. …
If you want to change only one thing about your approach to security in 2020, pick this one: Stop thinking in single systems, in little blocks. Think in functions instead.
And do that consistently. Consistently does not mean writing down the most critical functions once at the beginning of a risk analysis and letting them gather dust, preserved in files you won’t ever touch again.
Consistently means thinking in functions in everyday business whenever you make a security-relevant decision. …
If you want to make only one change to how you handle security in 2020, make it this one: stop thinking in individual systems, in little building blocks. Think in functions instead.
And do so consistently. Consistently does not mean writing down the most critical functions once at the beginning of a risk analysis, only to let them gather dust somewhere on patient paper.
Consistently means thinking in functions in everyday business, in every security-relevant decision. It means having your most important functions so present in your mind that you could rattle them off and sketch them if someone woke you at three in the morning.
Consistently means talking less about server X and controller Z, and more about the function the two of them (probably together with a few other systems) fulfill. …
At the end of the article Security Engineering Needs A P&I Diagram I promised that I would make a first draft proposal for modelling security. Or at least, to begin with, the “first layer” of security engineering, which is understanding the functions to be protected.
But before we dive deeper into modelling, let’s linger a little longer on this inconspicuous little statement at the end of that last sentence:
Understanding the functions to be protected.
It seems like common sense, but in most cases, security practitioners do not really live by it. …