Turn Controllers into Plants’ Bodyguards

Top 20 Secure PLC Coding Practices released

Sarah Fluchs
11 min read · Jun 15, 2021
Y’all better get used to this beauty: The PLC Security Project’s logo

It began with a conference talk, and talks over drinks after the conference talk, and long hours on the phone in a world that just began to understand that COVID-19 meant: no more conferences. In a way, the Top 20 Secure PLC Coding Practices list is an example of finding ways to work on projects as a community that could no longer meet in presence.

But this is not the article to explain why and how the project began, there’s another one explaining that already.
This, ladies and gentlemen, is the article to present the first tangible, downloadable, commentable result of our PLC Security project, which happens to also be the first list of Secure Coding Practices for PLCs.

And while I want to let the document’s content stand for itself, there are a few explanatory notes on the project that go beyond the document.

You can download the Top 20 Secure PLC Coding Practices document at our project website plc-security.com and get all the news by following our Twitter account @secureplc.

The Top 20 Secure PLC Coding Practices, in short, are a list of coding practices for PLC programmers that have benefits for the IT security of programmable logic controllers (PLCs) and the plants these PLCs control.

While the list is inspired by similar lists for “normal” IT software, published by e.g. OWASP, Carnegie Mellon University, CWE, or Microsoft, the Secure PLC Coding Practices are not an attempt to apply the known coding practices to different devices. Instead of bending existing practices until they fit onto a PLC (and potentially don’t make much sense any more), we tried to start from a blank page and make use of what PLCs are typically capable of.

We tried to regard the PLCs’ particularities — real-time capabilities, limited computing power, limited flexibility, the fact that they control physical processes — as special features for achieving better security, not as bugs.

We tried to turn PLCs, often regarded as the Achilles heel of automated plants, into the plants’ ubiquitous and unrelenting bodyguards, one in front of each (back) door.

Why do we need this list?

Security people — you know PLCs are often insecure by design. You have read and heard about a million PLC vulnerabilities. Maybe you’ve come to accept that that’s just what PLCs are, insecure? How many times have you, with a heavy heart, placed PLCs out of scope for security programs with the remark “implementing security requirements not technically feasible”? How often have you agreed to adventurous lines of argumentation that “PLC programming is not really programming” because in the end, you knew you would not have much to offer regarding secure PLC programming even if you’d like to?

Automation engineers — you’ve probably heard PLCs and security don’t go particularly well with each other. Increasingly, you have people getting on your nerves asking “have you thought about security yet”? You would really like to do something about security, heck even PLC security, if only someone told you how?

So if nothing else, after long years of complaining about insecure-by-design PLCs, the Secure PLC Coding Practices are supposed to start the constructive part of the conversation and take away the excuse that people would really like to program PLCs securely, but it’s just not feasible, “see, there isn’t even any information on how to do secure programming for PLCs in the whole industry”.
(If you feel attacked: don’t worry, I’ve been that person as well.)

Well, these times are behind us. The next time you hear this excuse, send them the link to the Top 20.

If the list achieves nothing else, it is supposed to establish a common understanding of what PLC security even means; what we can expect from a PLC that has been “programmed securely”.

What does the document contain?

The Top 20 Secure PLC Coding Practices can be consumed in two ways: The short version fits on two pages and gives an overview of the 20 practices. The detailed version adds one or more pages of information about each practice, containing

  • guidance including background info and explanations,
  • examples for implementation (or what could happen if a practice is not implemented),
  • the practice’s “why”, i.e. a list of benefits, which always include security, but often also reliability or maintenance, and
  • a list of references to standards or frameworks like MITRE ATT&CK for ICS, CWE, or several parts of the ISA-62443 series.

Also, the practices have tags, which are a short summary of their security objectives and their target group.

By the way, you can share the document freely, as we’ve chosen one of the most open licenses we could find (see license statement here).

How do these practices improve security?

Practices are grouped by security objective. We assigned the tags very late in the process, after quite a few hours of discussions on what exactly the security benefit of implementing each practice would be.

We value all the reliability and maintenance benefits, but in the end, in order to fit onto a secure coding practices list, practices had to have a strong security benefit as well. There were very useful practices where we could not find any security objective beyond “the code becomes more readable”, which you could always translate to “incident response would be easier”.
They did not make it to the list.

So in a way, the security objective tag is the very distilled, simplified form of the “why” section, and looking at the security objective tags we agreed on gives a good impression of which security objectives are well-suited to be tackled by PLC coding (and which, in turn, are rather not):

  • Integrity: This is a big one, and the single most popular security objective for practices on our list. Out of 20 practices, a solid 12 aim at integrity. Which is why we further distinguished them into practices improving integrity of PLC logic, integrity of PLC variables (including timers and counters), or integrity of I/O values.
  • Hardening: This includes everything that involves reducing complexity and thus the attack surface. Two practices fall into this category.
  • Resilience: Practices that ensure a PLC will run robustly in case of errors. Because we were very strict with practices belonging to that category, we only have one practice that has the primary purpose of increasing resilience.
  • Monitoring: This tag marks all practices that recommend monitoring certain values in the PLC that could indicate security problems. The interesting part is that most of these are not classic security monitoring, and all can be implemented directly within the PLC and sometimes HMI, using a PLC’s standard features and particularities. With five practices, this is the second-most popular category.

Admittedly, not all of the above are crystal-clear security objectives in the narrower sense of the word (i.e. you could rightfully argue that monitoring mostly serves another, ultimate goal, e.g. identifying integrity breaches), but they’re the most common answers we encountered when asking the “how does this improve security” question.

There’s one thing that many practices have in common, and it’s the reason why “integrity” and “monitoring” are the most popular security objectives on our list: Many practices make use of PLCs’ major strength, which is the fact that a PLC “understands” a process better than most other devices in a plant network — that’s what they’ve been programmed for.
If a variable is off, the PLC is the best device to detect it, because it “knows” the process context of the variable. If a valve takes longer than expected to close, the PLC can notice, because it gets the direct feedback from the valve. If a calculation suddenly causes a scan cycle to last longer than the last one million times, a PLC can “know” that something in the calculation is now different.
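To make this concrete, here is a minimal sketch of the kind of plausibility and timing checks described above. It is written in Python purely for readability; on a real controller this logic would live in an IEC 61131-3 language such as Structured Text, and all names, signals, and thresholds below are illustrative assumptions, not anything taken from the Top 20 document.

```python
# Conceptual sketch of process-aware checks a PLC could perform, because it
# sees both the command and the physical feedback in every scan cycle.
from dataclasses import dataclass


@dataclass
class ValveMonitor:
    expected_close_time_s: float = 5.0   # assumed worst-case closing time
    elapsed_s: float = 0.0
    alarm: bool = False

    def scan(self, close_commanded: bool, closed_feedback: bool, cycle_time_s: float) -> None:
        """Called once per scan cycle with the current command and feedback state."""
        if close_commanded and not closed_feedback:
            self.elapsed_s += cycle_time_s
            if self.elapsed_s > self.expected_close_time_s:
                # Valve takes suspiciously long to close: could be a mechanical
                # fault, but also a manipulated output or spoofed feedback.
                self.alarm = True
        else:
            self.elapsed_s = 0.0


def plausible_level(level_percent: float) -> bool:
    """Reject physically impossible tank level readings before acting on them."""
    return 0.0 <= level_percent <= 100.0


if __name__ == "__main__":
    monitor = ValveMonitor()
    # Simulate 70 scan cycles of 100 ms in which the valve never reports closed.
    for _ in range(70):
        monitor.scan(close_commanded=True, closed_feedback=False, cycle_time_s=0.1)
    print("valve alarm:", monitor.alarm)               # True: 7 s > 5 s expected
    print("level plausible:", plausible_level(135.0))  # False: outside 0..100 %
```

The point is not the language but the pattern: because the PLC knows what the process is supposed to do, it can flag deviations that no device further up the network could judge as reliably.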

Also, you can read the limitations of secure coding practices for PLCs (at least for now) out of the security objectives list, because quite a few conceivable security objectives are not represented: authentication, for example, or other mechanisms ensuring confidentiality, like encryption.
So while PLCs can turn into your plant’s most knowledgeable bodyguards because they “understand” your process in a way most other devices can’t, it is important to acknowledge that they’re highly specialized bodyguards, and it is certainly not a good idea to rely on their code, no matter how secure, for every security problem. Some problems are simply better solved outside of PLCs.

How were the Top 20 chosen?

This brings us to another aspect of what you won’t find on the list. After we had received dozens of good suggestions for secure PLC coding practices, we realized that many of them weren’t really about coding any more, but about architecture, networking, practices for other devices like HMIs, or documentation. These all improve PLC security for sure, but they are not PLC coding. So we agreed on the simple definition of “anything that involves changes to the PLC itself” to decide if a practice was in scope of our secure PLC coding practices list.

That said, we did not throw away all the practices that did not fit into this definition. There’s a second list waiting to be edited and sorted out, which has the working title of “secure PLC environment practices”.

Even after narrowing down the scope, there were way more than 20 candidates for the Top 20 list. The compilation of the final list was a true community effort, because we decided early on that we’d collect votes for practices, so they could move into and out of the list the way soccer clubs move up and down between leagues. There was some merging, grouping, editing, and rewording after the vote, but the decision which practices are on the list is based on the votes that users on our Top 20 platform had cast.

Last important note: a practice’s position in the list does not mean anything. Number 1 is not more important than number 20. They are grouped by security objectives, that’s all.

Some practices are so basic. Why have you included them?

First: There are two perspectives from which to look at the Secure PLC Coding Practices, and a different portion of the practices looks “basic” from each of them.

Case 1: Too basic for security people

From a security perspective, practices like “disable unused ports” may look basic. We included those anyway for three reasons:

  • First, they may feel basic for security experts, but maybe not for PLC programmers.
  • Second, just because a requirement seems basic, its implementation is not always as straightforward. A big portion of the practices’ text revolves around implementation guidance and examples.
  • Third, we wanted to spell out which “basic” security requirements PLC code and configuration can actually fulfill, and how.
    There are many myths around security requirements generally not being technically feasible on PLCs, and while this is true for some, there are others that actually can be implemented.

Case 2: Too basic for PLC programmers

From a PLC programmer’s perspective, practices like “monitor PLC cycle times” may look basic. We included those because they may be basic best practices for PLC programmers for a lot of reasons, and may even appear on other PLC programming best practice lists, but our goal is to make it obvious that they also make sense for security reasons (and, on the flip side, that ignoring them is also a security issue).
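To illustrate the security angle, the sketch below shows the idea behind cycle-time monitoring: learn a baseline, then raise an alarm when a scan suddenly takes far longer than usual, which can hint at modified or injected logic just as much as at a programming error. Again, Python is used only for readability, and the class, thresholds, and values are assumptions made for this example, not part of the practice text.

```python
# Conceptual sketch: treat an unusually long scan cycle as a security signal.
class CycleTimeWatch:
    """Learns a baseline scan cycle time and flags cycles that exceed it."""

    def __init__(self, tolerance: float = 1.5, warmup_cycles: int = 100):
        self.tolerance = tolerance          # alarm if a cycle exceeds baseline * tolerance
        self.warmup_cycles = warmup_cycles  # cycles used to learn the normal baseline
        self._samples: list[float] = []
        self.alarm = False

    def record(self, cycle_time_ms: float) -> None:
        if len(self._samples) < self.warmup_cycles:
            self._samples.append(cycle_time_ms)     # still learning what "normal" is
            return
        baseline = sum(self._samples) / len(self._samples)
        if cycle_time_ms > baseline * self.tolerance:
            self.alarm = True                       # hand this bit to the HMI / alarm system


if __name__ == "__main__":
    watch = CycleTimeWatch(warmup_cycles=5)
    for t in [10.1, 10.0, 9.9, 10.2, 10.0]:         # normal cycles (ms)
        watch.record(t)
    watch.record(25.0)                              # suspiciously long cycle
    print("cycle time alarm:", watch.alarm)         # True
```

On a real PLC the same idea would read the controller’s cycle-time system variable and pass the alarm bit on to the HMI or alarm system.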

Case 3: Everyone does this anyway

And even if chances are that some practices really are basic for everyone in the industry, there are two additional reasons why we included them:

  • Repeating the Top 20’s purpose from above: If the list achieves nothing else, it is supposed to establish a common understanding of what PLC security even means; what we expect from a PLC that has been “programmed securely”. So if there are things that everyone agrees are important for secure PLC coding, they totally belong on the list, no matter how basic.
  • The document is also meant as guidance for programmers just getting started on PLC programming, security, or both, and if there are security best practices everyone naturally adheres to, new programmers should have access to this wisdom.

Who did you have in mind while writing the document?

The people we had in mind when writing all of the practices were engineers, more precisely engineers who program PLCs. Who we did not primarily write the practices for are security experts or management. That is not to say these groups should not read the practices (they probably should!), but the texts were written in a way that PLC programmers would find useful.

Also, the texts were written not only for PLC programmers, but more often than not also by PLC programmers. Writing secure PLC coding practices would not have worked without those who write PLC code. We had — and continue to have — the best discussions when a PLC programmer frowned and said “I have no idea how you’re supposed to implement this, but see, this is what we’ve been doing for years”.

Nevertheless, even “PLC programmers” can be too wide a target group, because it makes a major difference where they work. Therefore, the second tag each practice has is a “target group” tag. We used the ISA-62443 role definitions of product supplier, integration/maintenance service provider, and asset owner to make clear in which phase of a PLC’s lifecycle a practice is probably best implemented.

I want to contribute / I found a mistake.

Commenting on the Top 20 document is easy: Just download our comment form, fill it out, and send it back to the email address included in the form. You can either comment on the existing text or propose a completely new practice you think is missing.

If you want to be involved in resolving comments and deciding where the Top 20 project is headed in the future — please reach out to us. We’re always on the lookout for passionate PLC people who want to be added to our little core team!

Who is “we”?

Now, I’ve talked of “we” for several paragraphs, and that is for a reason: Naming all the people who contributed to the project in each sentence would have made this article rather lengthy.
The Secure PLC Coding Practices Project is a community effort. We can proudly say we have more than 900 registered users on the platform we’ve used to structure our discussions throughout the project (top20.isa.org; going to be shut down soon).
The core team that met every other week to discuss each and every word in each and every practice is named at the end of the document, and there’s a full list of supporters from the platform who agreed to be named on the last page. No one in the project got paid for the Top 20 work, and since we are based all around the globe, work included meetings at unorthodox times for many.

Thank you, dream team!

What’s up next?

Our main goal is to implant the secure PLC coding practices firmly in every PLC programmer’s and automation engineer’s basic body of knowledge. We’ll talk about, share, explain, and improve our list.
There’s a large pool of ideas for improvements, extensions, and collaborations already: add code samples, add more examples for different PLC brands, finish the additional secure PLC environment list — to name just a few.

But what we need you to do now is to spread the word that there is now such a thing as secure coding practices for PLCs. Spread it to everyone you know, especially to automation engineers and PLC programmers.
Even better: Try the practices out yourself, and give us feedback on how to make them easier to understand, more complete, more accurate, or easier to implement. We can’t wait to hear what you think.

Download the latest Top 20 Secure PLC Coding Practices document at plc-security.com or follow our project’s Twitter account, @secureplc.

