We built and tested a Security by Design tool

Here’s what we learned (so far)

Sarah Fluchs
7 min read · Jun 26, 2023
Image generated by Midjourney.com

We’ve been working on improving security by design for the automation industry in our German state-funded research project “IDEAS” since 2021.

Now we’ve tried out, for the first time, the “Security by Design Decisions” software tool we designed. More precisely: we tested the alpha version of our software tool, which implements our “Security by Design Decisions” method, with one of our application partners, HIMA Paul Hildebrandt GmbH. For testing, we were able to use a real-life project, which we re-evaluated with our tool together with HIMA engineers.

I’m highlighting some of the most important findings from the test run in this article. If you want all the details, here is the full research paper (open access).

How do you measure success for Security by Design?

When is a Security by Design method successful? This is a question we have been pondering since the very beginning of our research.

One of our first findings back in 2021 was that any method is only successful if it’s actually used, so our unofficial project slogan quickly became “Security by Design for engineers with no time and security expertise”.

But our understanding of successful Security by Design keeps evolving. This is how we measured the success of our “Security by Design Decisions” method (compared to executing the same project without the method):

  1. Decision identification: Do decision makers identify more security decisions?
  2. Decision making: Can decision makers make security decisions autonomously based on the information offered?
  3. Decision tracing: Can a third party understand why each security decision was made this way?
  4. Decision re-use: Can artifacts used / produced during decision making be re-used in future projects?

Most security decisions stay undercover

Image generated by Midjourney.com

When we analyzed the results of the test run, we first turned to this question: Do decision makers identify more security decisions using the tool?

The results were interesting.

We made a total of 61 security decisions (which took about 4 hrs).

  • 27 of these were additional security decisions, i.e. they had not been treated as security decisions in the original project. Out of these,
  • 4 were completely new decisions that had not been considered in the original project at all.

And here’s the interesting part:

  • 23 (!) of the decisions had also been made in the original project, but their security impact had not been considered.

We like to call them 😎 “undercover security decisions”.

They are security decisions because they have a security impact, but they’re not recognized as such. They’ve been made, but for reasons that have nothing to do with security. Thus, “undercover security decisions” have the potential to inadvertently impair the security posture — they are the reason why “insecure by design” features exist in products.

Undercover security decisions often affect topics or engineering domains that are not typically associated with security (but do have a security impact). A classic in industrial automation: An undercover wifi router “hidden” in a control cabinet.

Examples from our test run included sensor redundancies, alarm times / priorities / texts for potentially security-relevant alarms, bridging and forcing of signals, OSI layer 1–3 signal processing choices, how new software is deployed, or use of shared hosts.

Why security decisions are really made (today)

Image generated by Midjourney.com, modifications by the author

Why do you make your security decisions the way you do? Understanding (and documenting) the answers to this question is one of our biggest goals for our security by design tool. We want to make security decisions traceable: Can a third party understand why each security decision was made this way? Our “security by design decisions” tool makes it easy to explicitly document the rationale for each security decision made. So naturally, we generated lots of data around that question during the test run.

We found that cybersecurity decision-making mostly follows one of four paths (see the sketch after this list):

  • 🔥 Risk-driven: The decision is made to mitigate an identified business risk.
  • 🎯 Goal-driven: The decision is made to meet a pre-defined (security) design goal.
  • 📜 Compliance-driven: The decision is made because the organization needs to comply with an internal or external regulation.
  • 📚 Library-supported: The decision is made based on past solutions for comparable problems.
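
To make these paths concrete, here is a minimal sketch of how a tool could record a decision together with its rationale path(s). It’s a Python illustration under assumed names, not the actual data model of our demonstrator:

```python
from dataclasses import dataclass, field
from enum import Enum

class RationalePath(Enum):
    """The four decision-making paths described above."""
    RISK_DRIVEN = "risk-driven"              # mitigates an identified business risk
    GOAL_DRIVEN = "goal-driven"              # meets a pre-defined (security) design goal
    COMPLIANCE_DRIVEN = "compliance-driven"  # satisfies an internal or external regulation
    LIBRARY_SUPPORTED = "library-supported"  # reuses a past solution to a comparable problem

@dataclass
class SecurityDecision:
    """One traceable security decision with its documented rationale."""
    decision_id: str
    description: str
    rationales: list[RationalePath] = field(default_factory=list)  # multiple paths possible
    rationale_notes: str = ""  # free text: why the decision was made this way

# An "undercover" security decision: made for functional reasons, so it
# carries no security rationale path, but the reason is still traceable.
shared_host = SecurityDecision(
    decision_id="D-042",  # illustrative ID
    description="Run two applications on a shared host",
    rationales=[],
    rationale_notes="Reduces hardware cost and footprint (functional restriction).",
)
```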

So much for the theory.
But why are security by design decisions REALLY made?

Before we started, the test persons (engineers at HIMA) stated that most of their security decisions were compliance-driven, since they have to comply with clients’ regulatory documents.

Well, here is how the decisions were really made (multiple answers were possible):

  • 🔥 36% were risk-driven
  • 🎯 30% were goal-driven
  • 📜 15% were compliance-driven
  • …and by far the highest percentage (57%) of security decisions were not made for security reasons, but based on a functional requirement (or a functional restriction).

Let that sink in: By far the highest percentage of security decisions were not made for security reasons.

Wait, isn’t that a problem?

No, it’s reality, and it’s okay that way — as long as we make those decisions consciously and we are aware that they are security decisions too (even if they’re not made for security reasons).

Documenting a security decision that is not optimal from a security perspective (e.g. choosing NOT to implement a certain security feature) is just as important as documenting the security features we do implement.

We found it’s important to trace decision rationales, but without judging them. Security by Design doesn’t need a moralising undertone. It needs to provide engineers with all the information they need to make the security decisions that are best for their overall system.

So, engineers — make your security decisions as you like. But: Leave footprints. 🐾🐾 Trace your decision rationales! As long as clients, auditors, colleagues, managers — and yourself! — can understand why you made a security decision, you’ll be fine.

Are we doing security goals all wrong?

Image generated by Midjourney.com

Or, to put it less pointedly: Can we use security goals to make security by design more intuitive for non-experts?

Part of our security by design method is to create a security “decision base”. This decision base contains all the information that helps engineers make security decisions (we make an effort to display the decision base information intuitively in diagrams, but that’s a different story).
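
As a rough illustration of the concept (names and structure are assumptions for this sketch, not our actual implementation), you can think of the decision base as a pool of exactly the kinds of information the decision-making paths draw on:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionBase:
    """Illustrative sketch of a security decision base: the information
    pool engineers draw on when making security decisions."""
    risks: list[str] = field(default_factory=list)        # identified business risks
    goals: list[str] = field(default_factory=list)        # pre-defined (security) design goals
    regulations: list[str] = field(default_factory=list)  # internal/external compliance requirements
    library: list[str] = field(default_factory=list)      # past solutions to comparable problems
```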

We experiment with different security decision-making paths, as you already know by now. Two of them are

  • 🔥 risk-driven: Security decision is made to mitigate a business risk
  • 🎯 goal-driven: Security decision is made to meet a design goal

These paths can often be used interchangeably. Often, they are two ways of looking at the same problem: something I want to avoid is often simply the negation of something I want to achieve (the risk “communication from the controller network to external parties” is the flip side of the goal “no communication to external parties from the controller network”).

But our security by design approach is all about enabling engineers with no time and security expertise to do security by design anyway, so efficiency and intuitiveness are important. During our test run, engineers often found it easier to define security goals to achieve than risks to mitigate.

We let them define their security goals as they wished, and the results were inspiring. Security professionals mostly end up with variations of availability, integrity, and confidentiality as security goals (and we had many of those in the test run as well), but here are some examples of more intuitive security goals the engineers defined:

  • 🎯 “It shouldn’t be possible to use the engineering station for malicious purposes.”
  • 🎯 “For entering the control room, a person must be identified as a control engineer.”
  • 🎯 “From the controller network, communication to external parties should not be possible.”

These are highly usable indicators for the goals security engineering must achieve. With these goals at hand, security decision-making becomes more intuitive — and it is very easy for engineers with no security focus to define them. Also, the above goals were often more applicable and specific than the classical “CIA” goals (compare: “integrity of engineering station”).
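
As a sketch of how such free-form goals can be put to work (the structure and the decision reference below are hypothetical, for illustration only), goal-driven decisions can simply point to the goal they serve:

```python
from dataclasses import dataclass, field

@dataclass
class SecurityGoal:
    """An engineer-defined security goal, stated in plain language."""
    statement: str
    addressed_by: list[str] = field(default_factory=list)  # decisions that serve this goal

goals = [
    SecurityGoal("It shouldn't be possible to use the engineering station for malicious purposes."),
    SecurityGoal("For entering the control room, a person must be identified as a control engineer."),
    SecurityGoal("From the controller network, communication to external parties should not be possible."),
]

# A goal-driven decision references the goal it serves, so a third party
# can later trace why it was made (decision ID and wording are illustrative).
goals[2].addressed_by.append("D-017: deny-by-default firewall rules at the controller network boundary")
```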

We concluded that a less rigid / more creative definition of security goals can indeed help to get conversations about security priorities going.

Our favorite feedback

(Parts of) the IDEAS research team with members of HIMA, INEOS, Pforzheim University, and admeritia

Lastly, I want to share my favorite piece of feedback from the crew that tried and tested our software demonstrator. It’s my favorite because it perfectly sums up the mission we want to achieve with our Security by Design tool:

“I’m really not a security expert, but with the tool I have the confidence to make security decisions during engineering.”

What’s next?

We have three more field tests at INEOS and HIMA coming up in the summer and fall. So when our project ends at the end of 2023, our tool will be thoroughly field-tested, with four improvement loops under its belt.

Links:

  • Here is the full research paper (open access) covering the first round of validation.
  • Here is a list of all IDEAS publications to date (scroll down to the table).
  • Here is an article from 2021 summarizing the IDEAS vision.
