A good-practice example for Cyber Resilience Act (CRA) documentation
How to be CRA compliant AND make your critical infrastructure clients happy
The EU Cyber Resilience Act (CRA) forces product suppliers to consider security during development, have security advisory and update processes in place, and provide their customers with user instructions for security. Currently, many product manufacturers are nervously eyeing their competitors and wondering how everybody else will approach product security.
The “information and instructions to the user”, one of the documents required to comply with the CRA and the only one that is directly handed to the product’s buyers, hasn’t received much attention yet. However, especially if you are a product supplier that sells to critical infrastructure operators / Operational Technology (OT) environments, doing that piece of documentation well not only earns you the CE marking, but could well make the difference between a grumpy customer that doesn’t want to pay for security and a happy one that does.
This article provides a specific example of how to do the “information and instructions to the user” well — and explains why you should care.
The product security problem
Here’s what you hear from critical infrastructure operators when you ask them about product security: “If an OEM talks to me about security, I always suspect they use it to sell me new hard- or software.”
And here is what product suppliers tell me: “We would happily build in all the security — if our clients would be willing to pay for it.”
Or, also popular, this complaint:
“We have a secure and an insecure version of our product. Guess what people buy!”
Sounds like it’s about money, huh? Nobody wants to pay for security. The product suppliers won’t build it in for free, and the asset owners don’t want to shoulder the extra cost.
And that’s definitely part of the problem, and one problem regulation can address. But especially for critical infrastructures and OT, that’s not the whole problem. Product security will only make OT asset owners happy if it also addresses the second half of the problem. And that has to do with communication.
The Cybersecurity Communication Pyramid
Let me introduce you to the cybersecurity communication pyramid. It’s one result of the security-by-design research I conducted as part of my PhD over the last three years.
How well you communicate cybersecurity depends on your communication intention. Depending on who you want to address and what you want them to do, your addressees need different sets of information.
The least amount of information is needed at the pyramid top. If you want to communicate a product’s security to a consumer who only needs to use a product securely, you don’t need to communicate much. A list of features is fine. A cybersecurity label assuring the list of features is met is fine too. This is what all the consumer IoT labels are for.
If you want to communicate a product’s security to an authority, they will want to know more, as I’m sure many critical infrastructure operators are painfully aware. An authority could be external or internal, it could also be your management, but all authorities have in common that they don’t just want to know security features, they want to UNDERSTAND your security decisions. That only works if they know your rationales for choosing certain features (and for not choosing others).
And then there’s the pyramid bottom. At the pyramid bottom, your addressees need to make security decisions themselves, for two reasons. First, because they’re regulated themselves, as many critical infrastructure operators are. They need to do their own threat modeling and risk assessments, and they need to explain their security measures to authorities themselves.
Second, because they often do a fair share of engineering themselves. The products they buy are just building blocks that are further integrated into complex systems of systems (industrial automation and control systems, to be precise), which run the plants delivering the very service that makes them critical infrastructure.
These buyers of your products are not consumers. You had better think of them as co-engineers. They need to make security decisions, often in collaboration with their product supplier. And for that, they need more than features. What they need is feature options, plus context to decide which features to choose for their installation environments.
This, my dear product suppliers, is one reason why it seems like asset owners don’t want to pay for security. They may not want to pay for your “secure” version of a product that has all the security features, most of which they don’t need. But they are probably not happy with your “insecure” version either. They need something in between: they need options.
This target group, the co-engineers, is often overlooked. It is underrepresented in regulation, which primarily focuses on consumers — and that’s fine for IoT. But for critical infrastructure, which also happens to contain a lot of operational technology (OT), product security needs to address the co-engineer target group.
And this is mainly solved through changes in communicating security.
How to communicate security well to critical infrastructure operators
So how do you communicate cybersecurity to co-engineers? How do we make critical infrastructure operators happy?
Frankly, it’s not that hard. Three things go a long way toward making asset owners happy. They have three demands:
- Tell us where to look: Asset owners want to know which of your five thousand configuration options matter for security. Make it as easy to spot them as a red pin on a map.
- Let us choose: Asset owners are risk owners. They need to take responsibility for their overall plant risk, and this plant consists of way more than your component. Make life easier for asset owners by not offering features — but feature options.
This, by the way, is also part of the problem we currently have with security levels in IEC 62443: they are helpful for manufacturers to get easy-to-read certificates for certain security levels, but for asset owners, they’re not granular enough. Asset owners don’t need measures lumped together in a security level; they need to be able to pick and choose.
- Help us choose: Once you have flagged all your security features and shown the different options, you’re very likely to completely overwhelm asset owners. Don’t leave them alone. Offer clear guidance on which option you recommend for which installation environment.
Required documents for CRA compliance
So if (understandably!) your main concern as a product supplier is CRA compliance, how can you still meet the cybersecurity needs of your critical infrastructure / OT clients along the way?
Here are the three required documents for CRA compliance:
- You need an EU declaration of conformity to affix your CE mark. This is nothing much really, just one page that says you’re compliant with the CRA’s requirements.
- You need technical documentation to substantiate your claims.
- And you need information & instructions to the user.
Let’s look at this from the communication pyramid’s perspective. What communication intention is served by which document?
The middle one is easy: the technical documentation is for the authorities, to prove you’re actually compliant with the CRA — that would be the authority level in the middle of the pyramid.
The documents to the left and right are given to the user. But where are the users on our communication pyramid?
The answer is: It depends on your product. If you have a consumer product, you’re at the pyramid top. You’re communicating to consumers. If you have a product that is used in OT plants and critical infrastructures, you’re at the bottom. In that case, you’re communicating to co-engineers that are themselves engineering security too — and this is the case we will now look at.
Let’s see what needs to be in each document:
As mentioned before, the declaration of conformity is nothing much, just a piece of paper really that refers to the regulations and harmonized standards or essential requirements you’re complying with. Essential requirements are specified in the CRA, but they are rather vague. Harmonized standards are European standards that carry the so-called presumption of conformity: if you meet these standards, you are presumed to meet the essential requirements of the CRA.
The technical documentation substantiates your claims of being compliant with the essential requirements. In case you’re undergoing a third-party assessment, the assessor will examine the technical documentation. Market surveillance authorities are also authorized to request the technical documentation at any time. It contains a detailed product description, your vulnerability handling process, and your risk assessment — which determines which essential requirements you implement.
The information and instructions to the user are basically excerpts from your technical documentation to answer the most pressing user questions. You’ve probably already guessed it: They are the place to fix our communication problem.
When people talk about the CRA, they rarely address the information & instructions to the user. They are totally underrated, probably because most people think of user manuals they know and never read. And probably no one will read the cybersecurity information & instructions to the user for consumer products — but remember: you are communicating to co-engineers! For co-engineers, this underrated document is what makes the difference between a grumpy client who doesn’t want to pay for security and a happy one that does pay.
Good practice example
And finally, I will show you a good practice for information & instructions to the user.
This is our Example Engineering PC EE-2024-CRA, built by Good Practice, Inc. The name says it all: Good Practice, Inc. has provided really good practice information & instructions to the user.
Part 1: Product description
The first part contains the product description, the link to the EU declaration of conformity, and a description of the intended purpose and essential functionalities.
Good Practice, Inc. has chosen to convey the intended purpose and essential functionalities through diagrams, which is very efficient for the reader. The product in question, the Example Engineering PC, is colored green, and the diagrams show scenarios of intended use — along with roles that are expected to interact with the PC, components it is intended to be connected to, and protocols being used for the two different use cases. You can also add remarks for which use cases the product is NOT intended — like remote maintenance.
All of this seems simple, but is incredibly valuable if you’re using the Engineering PC and trying to figure out how to use it securely. I can’t count how many hours I’ve spent in calls with product manufacturers on behalf of asset owners to find out how a product works and which protocol is used for specific tasks. With this documentation, all this information would come with the product.
Part 2: Vulnerability handling
Second part: Everything you need to know about vulnerability handling. A single point of contact, the end of support when security updates will no longer be provided, a link to the SBOM, and information on how installing security updates works.
(The “link to the SBOM” of course is way easier said than done. It’s an important part of the documentation but one that deserves at least one article of its own. To be useful, it should of course be machine-readable in a format that the customer can process.)
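As a taste of what “machine-readable” means here, the following is a minimal sketch of an SBOM fragment, loosely modeled on the CycloneDX JSON format. All component names and versions are invented for illustration:

```python
import json

# Minimal, illustrative SBOM fragment loosely following the CycloneDX
# JSON format. Component names and versions are made up.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "operating-system", "name": "example-os", "version": "12.4"},
        {"type": "library", "name": "example-tls-lib", "version": "3.0.7"},
    ],
}

# Because the SBOM is structured data, the customer can process it
# programmatically, e.g. extract all component names for matching
# against published vulnerability advisories.
serialized = json.dumps(sbom, indent=2)
component_names = [c["name"] for c in sbom["components"]]
print(component_names)
```

A PDF listing the same components would force the customer to retype everything; a JSON document like this can go straight into their vulnerability management tooling.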
But let’s look at the information on how installing security updates works. At the latest since the CrowdStrike incident in 2024, when an automatically delivered but erroneous update crashed Windows PCs around the world, we all know it makes sense to be very aware of these update processes.
Similar to the essential functionalities and intended use, Good Practice, Inc. has chosen to explain the update processes in diagrams — again including all the components and communication around the (green) Engineering PC that are needed for the updates to work. Based on this information, customers can make an informed decision about how they would like to install security updates.
Part 3: Cybersecurity risk and measures
That’s where the beef is. Remember our three things that make asset owners happy? One of them was guidance on when to choose which security features. As we all know, that depends heavily on the asset owner’s installation environment. This could be an excuse for a product supplier to say “I don’t know anything about your installation environment, so you need to work that out on your own” — but not for Good Practice, Inc.! They have chosen to provide, once again, diagrams of possible installation environments for our Example Engineering PC: you could plug it into a PLC in the field with no network connection, you could use it for engineering over the network, or via the control system. And the network could be segmented or just a flat network.
The customer can then pick the architecture that is closest to how they plan to use the Example Engineering PC. Let’s say that’s architecture 2b (engineering over the network); they then get a more detailed account of all properties that are security-relevant in this architecture.
Each of the little green symbols in the documentation marks a security-relevant property (remember: these should be as easy to spot as red pins on a map).
These can be properties of the Engineering PC itself or properties affecting its security environment, for example the network in which it is placed or the engineer that uses it.
Let’s look at one of the Example Engineering PC’s properties in more detail, for example the integrity check.
Here, Good Practice, Inc. has provided guidance for making a decision about this property: an explanation of what the property is about and why it matters for security, the viable options, and recommendations on when to choose which option.
Remember: Critical infrastructure operators want to be able to pick and choose, but at the same time be guided through their choices.
You may have noticed the red exclamation marks in the “integrity check” property. They mark options that may be risky from a security point of view.
These exclamation marks can also be used to highlight changes that may lead to security risk, not just for a single property but across all properties in the entire architecture.
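To make this concrete, here is a minimal sketch of how such a security-relevant property, its options, and the risk flags could be expressed in a machine-readable way. The schema and all values below are hypothetical, invented purely for illustration; they are not part of the CRA or any standard:

```python
import json

# Hypothetical machine-readable description of one security-relevant
# property, its selectable options, and a per-option risk flag.
# Field names and values are invented for illustration only.
integrity_check = {
    "property": "integrity_check",
    "explanation": "Verifies that project files have not been tampered "
                   "with before they are loaded onto the PLC.",
    "options": [
        {"value": "enabled", "risky": False,
         "recommended_for": ["2b: engineering over the network"]},
        {"value": "disabled", "risky": True,
         "recommended_for": ["1: direct connection, no network"]},
    ],
}

# An asset owner's tooling could then flag risky choices automatically,
# rather than a human scanning a PDF for exclamation marks.
risky_options = [o["value"] for o in integrity_check["options"] if o["risky"]]
print(json.dumps(risky_options))
```

The point of the sketch is the pick-and-choose structure: each option carries its own risk flag and recommendation context, instead of being lumped into a monolithic security level.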
And finally, the threat model for the Example Engineering PC is made transparent. Because, remember: critical infrastructure operators need to do their own threat modeling and risk assessments — and for that, having ready-to-use threat models for components (and knowing which threats were and weren’t considered for these components) goes a long way. Ideally, critical infrastructure operators can re-use these threat models in their own risk assessments.
Speaking of re-use: we’re in the 21st century. We’re proud that we are starting to have digital SBOMs and digital security advisories. We should absolutely provide all the information we’ve just seen in a digital format customers can directly put to use. To be clear: this does not mean a PDF, but a machine-readable format like JSON that can be fed into the asset owner’s (security) engineering tools.
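In the same spirit, a component threat model could ship as a simple JSON structure the asset owner imports into their own risk assessment. Again, this schema is purely hypothetical; a real exchange format would need to be agreed on or standardized:

```python
import json

# Hypothetical machine-readable threat model entry for a component.
# Field names are invented; the point is that both considered and
# out-of-scope threats are explicit and programmatically re-usable.
threat_model = {
    "component": "Example Engineering PC EE-2024-CRA",
    "threats_considered": [
        {"id": "T1", "description": "Manipulated project file loaded onto PLC"},
        {"id": "T2", "description": "Unauthorized access via engineering network"},
    ],
    "threats_out_of_scope": [
        {"id": "T3", "description": "Physical theft of the device"},
    ],
}

# The asset owner can merge these entries into their plant-level
# risk assessment and immediately see which threats remain open.
considered_ids = [t["id"] for t in threat_model["threats_considered"]]
print(json.dumps(considered_ids))
```

Knowing that T3 was explicitly excluded is as valuable as knowing what was considered: it tells the operator exactly where their own threat modeling has to pick up.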
What can you do?
Okay, that was a quick glimpse into a brighter product security future: So much valuable cybersecurity information can be communicated on a few pages! What can you do now to bring us there? Depends on your role:
If you’re a critical infrastructure (or otherwise regulated) operator, remember to make clear what you want and need. Don’t demand features from your product supplier. Don’t just demand any Security Level. Demand options, and demand information that helps you choose between options.
And the other side of the coin: be transparent enough about your installation environment so that your product supplier can co-engineer security with you.
If you’re a product supplier, make user instructions a priority, whether regulation forces you to or not. They are your chance to shine with your customers! You will also build products with security your customers actually pay for, because they can adjust it to their needs and thus only pay for what they really want.
If you’re a regulator, remember that while the vast majority of security regulation is written for consumers, OT security is different. To really improve product security for critical infrastructure and other OT environments that are security-regulated themselves, regulation needs to be written for co-engineers, not consumers.