In recent years, security in embedded systems design has become a major concern. Patrick Schaumont’s CC25 article looks at the current state of affairs through several examples. The included tips and suggestions will help you evaluate the security needs of your next embedded design.
When you’re secure, you’re protected from loss or danger. Electronic security—the state of security for electronic systems—is essential for us because we rely so much on electronic embedded systems in everyday life. Embedded control units, RFID payment systems, wireless keys, cell phones, and intellectual-property firmware are just a few examples where embedded security matters to us. System malfunctions or malicious use of such devices can cause us real harm. Security requires stronger guarantees than reliability. When we implement a secure system, we’re assuming an adversary with bad intentions—someone who’s intentionally trying to cause harm. This article emphasizes attacks rather than solutions. The objective is to give you a sense of the issues.
Defining Embedded Security
As design engineers, we want to know how to create secure designs. Unfortunately, it’s hard to define the properties that make a design secure. Indeed, being “secure” often means being able to guarantee what is not going to happen. For example: “The wireless door opener on my house cannot be duplicated without my explicit authorization” or “The remote update of this wireless modem will not brick it.” Designing a secure system means being able to tell what will be prevented rather than enabled. This makes the design problem unique.
There is, of course, a good amount of science to help us. Cryptologists have long analyzed the desirable features of secure systems, and they have defined security objectives such as confidentiality, privacy, authentication, signatures, and availability. They have defined cryptographic algorithms such as encryption and decryption, public-key and symmetric-key operations, one-way functions, and random-number generation. They have also created cryptographic protocols, which show how to use those cryptographic algorithms in order to meet the intended security objectives.
Cryptography is a good starting point for secure embedded design. But it is not enough. Secure embedded designs face two specific challenges that are unique to embedded implementation. The first is that, by definition, embedded systems are resource-constrained. For example, they may use an 8-bit microcontroller and 32 KB of flash memory. Or they may even have no microcontroller at all and simply consist of a passively powered RFID device. Such severe resource constraints imply that there are hardly any compute cycles available for security functions. The second challenge is that embedded systems have simple, accessible form factors. Once deployed in the field, they become easy to tamper with, and they are subject to attacks that cryptologists never thought of. Indeed, classic cryptography assumes a “black-box” principle: it assumes that crypto-devices are free from tampering. Clearly, when an attacker can desolder components or probe microcontroller pins, the black-box principle breaks down.
Embedded Security Attacks
Embedded security attacks come in all forms and types. Here I’ll detail a few examples of recent, successful cases. In each of them, the attackers used a different approach. Refer to the documents listed in the Resources section at the end of this essay for pointers to in-depth discussions.
Let’s begin with a classic case of cryptanalysis. Keeloq is a proprietary encryption algorithm used in remote keyless entry systems by many car manufacturers, including Chrysler, General Motors, and Toyota, to name a few. It has a 64-bit key, which means that randomly trying keys leads to a key search space of 2^64 possibilities. That is at the edge of what is practical for an attacker: even when trying 10 million keys per second, you’d still need thousands of years to try all the keys of a 64-bit cipher. However, in 2008, researchers in Leuven, Belgium, found a way to reduce the search space to 44 bits. Essentially, they found a mathematical weakness in the algorithm and a way to exploit it. A 44-bit search space is much smaller. At 10 million keys per second, it only takes about 20 days to cover the search space, which is a lot more practical. Clearly, deciding the key length of a secure embedded system is a critical design decision! Too short, and any progress in cryptanalysis may compromise your system. Too long, and the design may be too slow and too big for embedded implementation.
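The arithmetic behind these figures is easy to verify. The short sketch below recomputes both search times at the stated rate of 10 million keys per second (the names and structure are illustrative, not from the Keeloq analysis itself):

```python
# Worst-case brute-force search time at a fixed guessing rate.
RATE = 10_000_000  # keys tried per second, as in the discussion above

def search_time_seconds(key_bits: int) -> float:
    """Seconds needed to try every key of the given length."""
    return (2 ** key_bits) / RATE

full_64 = search_time_seconds(64) / (365 * 24 * 3600)  # in years
reduced_44 = search_time_seconds(44) / (24 * 3600)     # in days

print(f"64-bit key space: about {full_64:,.0f} years")   # tens of thousands of years
print(f"44-bit key space: about {reduced_44:.0f} days")  # roughly 20 days
```

Running it confirms the gap: the full 64-bit space takes on the order of 58,000 years, while the reduced 44-bit space falls to about 20 days.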
Attackers go further as well and tamper with the security protocol. In 2010, researchers from Cambridge, UK, demonstrated a hack on “Chip and PIN,” an embedded system for electronic payments. Chip and PIN is similar to a debit card, but it is based on a chip-card (a credit card with a built-in microprocessor). To make a purchase, the user inserts the chip-card in a merchant terminal and enters a PIN code; a correct PIN code authorizes the purchase. The researchers found a flaw in the communication protocol between the merchant terminal and the chip-card. The terminal will authorize purchases if two conditions are met: it has identified the chip-card, and it has received a “PIN-is-correct” message from that card. By intercepting the messages between the terminal and the chip-card, the researchers were able to generate a “PIN-is-correct” message without an actual PIN verification taking place. The terminal—having identified the chip-card and received a “PIN-is-correct” message—will now authorize purchases to the chip-card issuer (a bank). This type of attack, called a man-in-the-middle attack, was carried out with a hacked chip-card, an FPGA board, and a laptop. Equally important, it was demonstrated on a deployed, commercial system. The Resources section of this article lists a nice demonstration video that appeared on the BBC’s Newsnight program.
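The essence of the flaw can be captured in a toy model. The sketch below is a drastic simplification—all class and message names are hypothetical, and the real EMV exchange is far more involved—but it shows why trusting an unauthenticated “PIN-is-correct” message is fatal:

```python
# Toy model of the Chip-and-PIN protocol flaw described above.
# All names are illustrative; the real protocol is far more complex.

class Card:
    def __init__(self, real_pin: str):
        self._pin = real_pin

    def identify(self) -> str:
        return "card-id-1234"  # card identification step

    def verify_pin(self, pin: str) -> str:
        return "PIN-is-correct" if pin == self._pin else "PIN-is-wrong"

class Terminal:
    def authorize(self, card_id: str, pin_response: str) -> bool:
        # The flaw: the terminal trusts the response message without
        # any proof that the card actually performed the verification.
        return bool(card_id) and pin_response == "PIN-is-correct"

card = Card(real_pin="4921")
terminal = Terminal()

# Honest path: a wrong PIN is rejected.
print(terminal.authorize(card.identify(), card.verify_pin("0000")))  # False

# Man-in-the-middle: intercept the exchange and inject the expected
# message; the card never verifies any PIN at all.
print(terminal.authorize(card.identify(), "PIN-is-correct"))         # True
```

The fix, conceptually, is to bind the PIN-verification result cryptographically to the card and the transaction, so a fabricated message cannot satisfy the terminal.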
One step beyond the man-in-the-middle attack, the attacker will actively analyze the implementation, typically starting with the cryptographic components of the design. A recent and important threat in this category is side-channel analysis (SCA). In SCA, an attacker observes the characteristics of a cryptographic implementation: its execution time, its power dissipation, and its electromagnetic patterns. By sampling these characteristics at high speed, the attacker is able to observe data-dependent variations. These variations are called side-channel leakage. SCA is the systematic analysis of side-channel leakage. Given sufficient measurements—say, a few hundred to a few thousand—SCA is able to extract cryptographic keys from a device. SCA is practical and efficient. For example, in the past two years, SCA has been used successfully to break FPGA bitstream encryption and Atmel CryptoMemory. Links to detailed information are in the Resources section of this essay.
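A minimal illustration of the idea is a timing side channel. The sketch below (all names are illustrative, and loop iterations stand in for the execution time an attacker would actually measure) shows a comparison routine that returns early at the first mismatch, and how that data-dependent behavior lets an attacker recover a secret one byte at a time:

```python
# A deliberately leaky comparison: it returns at the first mismatch,
# so the work it performs depends on how many leading bytes of the
# guess are correct. Names are illustrative, not a real API.

def leaky_compare(secret: bytes, guess: bytes):
    """Compare byte by byte. Also return the iteration count, a
    stand-in for the execution time an attacker would measure."""
    work = 0
    for s, g in zip(secret, guess):
        work += 1
        if s != g:
            return False, work
    return len(secret) == len(guess), work

SECRET = b"\x3a\x7f\x10\x22"  # the key the attacker wants to recover

# Recover the key one byte at a time: at each position, pick the
# candidate that makes the comparison do the most work (run longest).
recovered = b""
for _ in range(len(SECRET)):
    def score(c: int):
        guess = recovered + bytes([c])
        guess += b"\x00" * (len(SECRET) - len(guess))  # pad to full length
        ok, work = leaky_compare(SECRET, guess)
        return (work, ok)  # a successful match breaks the final tie
    recovered += bytes([max(range(256), key=score)])

print(recovered == SECRET)  # prints True: the leak gives up the key
```

This is why cryptographic code is written to run in constant time regardless of the data it processes; real power and electromagnetic analysis follow the same principle of correlating a physical observable with secret-dependent behavior.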
If there’s one thing obvious from these examples, it is that perfect embedded security cannot exist. Attackers have a wide variety of techniques at their disposal, ranging from analysis to reverse engineering. When attackers get their hands on your embedded system, it is only a matter of time and sufficient eyeballs before someone finds a flaw and exploits it.
What Can You Do?
The examples above are just the tip of the iceberg, and they may leave the impression of a dire situation. As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.
First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.
Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in an embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.
Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.
The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adopt standards and proven techniques. Learn about the experiences of other designers. The following examples are good starting points for learning about current concerns and issues in embedded security.
Three Books for Your Desk
Security is a complex field with many different dimensions. I find it very helpful to have several reference works close by to help me navigate the steps of building any type of security service. The following three books describe the basics of information security and systems security. While they are not targeted at the embedded context alone, the concepts they explain apply there equally well.
Christof Paar and Jan Pelzl’s Understanding Cryptography: A Textbook for Students and Practitioners gives an overview of basic cryptographic algorithms. The authors explain the different types of encryption algorithms (stream and block ciphers, as well as various standards). They describe the use of public-key cryptography, covering RSA and elliptic curve cryptography (ECC), and their use for digital signatures. And they discuss hash algorithms and message authentication codes. The book does not cover cryptographic protocols, apart from key agreement. A nice thing about the book is that you can find online lectures for each chapter.
Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno’s Cryptography Engineering: Design Principles and Practical Applications covers basic cryptography as well, but with a slightly different emphasis than the first. It takes a more practical approach and frequently refers to existing practice in cryptography. It has sections on (software-oriented) implementation issues and practical implementation of key agreement protocols. This book would give immediate value to the practicing engineer—although it does not connect to the embedded context as well as the previous book. For example, it does not mention ECC.
Ross Anderson’s Security Engineering is a bible on secure systems design. It’s very broad. It builds up from basic cryptography over protocols up to secure systems design for telecoms, networking, copyright control, and more. It’s an excellent book on the systems perspective of secure design. The first edition of this book can be downloaded for free from the author’s website, though it’s well worth the investment to have the latest edition on your desk.
Many websites cover product teardowns and the specific security features of these implementations. Flylogic’s Analytics Blog (www.flylogic.net/blog/) describes the analysis of various chipcards. It contains chip micrographs and discusses various techniques to reverse-engineer chip security features. The website is an excellent showcase of what’s possible for a knowledgeable individual; it also clearly illustrates the point that perfect security cannot exist.
If you would like to venture into the analysis of secure embedded designs yourself, then the Embedded Analysis wiki by Nathan Fain and Vadik is a must-read (http://events.ccc.de/congress/2010/wiki/Embedded_Analysis). They discuss various reverse-engineering tools to help you monitor a serial line, extract the image of a flash memory, and analyze the JTAG interface of an unknown component. They also cover reverse-engineering practice in an online talk, which I’ll mention below.
Earlier I noted that cost is an important element in the security design. If you’re using cryptography, then this will cost you compute cycles, digital gates, and memory footprint. There are a few websites that give an excellent overview of these implementation costs for various algorithms.
The EBACS website contains a benchmark for cryptographic software, covering hash functions, various block and stream ciphers, and public-key implementations (http://bench.cr.yp.to/supercop.html). Originally designed for benchmarking on personal computers, it now also includes benchmarks for ARM-based embedded platforms. You can also download the benchmarks for a wealth of reference implementations of cryptographic algorithms. The Athena website at GMU presents a similar benchmark, but it’s aimed at cryptographic hardware (http://cryptography.gmu.edu/athena/). It currently concentrates on hash algorithms (in part due to its development for the SHA-3 competition). You can apply the toolkit to other types of cryptographic benchmarking as well. The website provides a host of hardware reference implementations for hash algorithms. It also distributes the benchmarking software, which is fully automated on top of existing FPGA design flows from Altera and Xilinx.
Security is a fast-evolving field. You can remain up to date on the latest developments by subscribing to a few newsletters. Here are three newsletters that have never failed to make a few interesting points to me. They do not exclusively focus on secure embedded implementations, but frequently mention the use of embedded technology in the context of a larger security issue.
The ACM RISKS list (http://catless.ncl.ac.uk/Risks) enumerates cases of typical security failures, many of them related to embedded systems. Some of the stories point out what can happen if we trust our embedded computers too blindly, such as GPS systems that lead people astray and leave them stranded. Other stories discuss the security implications of embedded computers, such as the recent news that 24% of medical device recalls are related to software failures.
Bruce Schneier’s “Schneier on Security” blog and Crypto-Gram newsletter (www.schneier.com/crypto-gram.html) focus on recent ongoing security issues. He covers everything from the issues with using airport scanners to the latest hack on BMW’s remote keyless entry system.
The Technicolor Security Newsletter (www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20) discusses contemporary security issues related to computer graphics, content protection, rights management, and more. The newsletter gives succinct, clear descriptions of content protection (and attacks on it) for mobile platforms, game machines, set-top boxes, and more.
Three Web Presentations
You can also learn from watching presentations by security professionals. Here are three interesting ones that specifically address security in embedded devices.
In a talk titled “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Christof Paar covers the use of side-channel analysis (SCA) to break the security of various embedded devices, including wireless keys, encrypted FPGA bitstreams, and RFID smartcards. The talk is an excellent illustration of what can be achieved with SCA.
Nathan Fain gave a talk called “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique” at a recent conference. The author discusses various tools for analyzing embedded systems. It’s the live version of the wiki page listed earlier. Go to his website (www.deadhacker.com) to download the tools he discusses.
Finally, in a talk titled “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” Stephen Checkoway discusses the embedded security analysis of cars. The author demonstrates how an attacker is able to access a car’s internal network, a concept called “the attack surface.” He points out several known issues, such as the risks posed by the on-board diagnostics (OBD) port. But he also demonstrates a wide variety of additional access points, from CD to long-range wireless links. Each of these access points comes with specific risks, such as remote unlocking of doors and unauthorized tracking. It’s a fascinating discussion that demonstrates how the ubiquitous microcontroller has brought safety as well as risk to our cars.
Security in embedded systems design requires a designer to think about ways in which bad things are prevented from happening. We have seen a great deal of progress in our understanding of the threats to embedded systems. However, it’s clear that there is no silver bullet. The threats are extremely diverse, and eventually it’s up to the designer to decide what to protect. In this article, I provided a collection of pointers that should help you learn more about these threats.—By Patrick Schaumont (Patrick is an associate professor at Virginia Tech, where he works with students on research projects relating to embedded security. Patrick has covered a variety of embedded security-related topics for Circuit Cellar: one-time passwords, electronic signatures for firmware updates, and hardware-accelerated encryption.)
R. Anderson, Security Engineering, Second Edition, Wiley Publishing, Indianapolis, IN, 2008.
J. Balasch, B. Gierlichs, R. Verdult, L. Batina, and I. Verbauwhede, “Power Analysis of Atmel CryptoMemory — Recovering Keys from Secure EEPROMs,” in O. Dunkelman (ed.), Topics in Cryptology — CT-RSA 2012, The Cryptographer’s Track at the RSA Conference, Lecture Notes in Computer Science 7178, Springer-Verlag, 2012.
BBC Newsnight, “Chip and PIN is Broken,” www.youtube.com/watch?v=1pMuV2o4Lrw.
D. Bernstein and T. Lange, “EBACS: ECRYPT Benchmarking of Cryptographic Systems,” http://bench.cr.yp.to/supercop.html.
E. Biham, O. Dunkelman, S. Indesteege, N. Keller, and B. Preneel, “How to Steal Cars—A Practical Attack on Keeloq,” COSIC, www.cosic.esat.kuleuven.be/keeloq/.
S. Checkoway, “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” www.youtube.com/watch?v=bHfOziIwXic.
E. Diels, “Technicolor Security Newsletter,” www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20.
N. Fain and Vadik, “Embedded Analysis,” http://events.ccc.de/congress/2010/wiki/Embedded_Analysis.
———, “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique,” www.youtube.com/watch?v=8Unisnu-cNo.
N. Ferguson, B. Schneier, and T. Kohno, Cryptography Engineering, Wiley Publishing, Indianapolis, IN, 2010.
Flylogic’s Analytics Blog, www.flylogic.net/blog/.
K. Gaj and J. Kaps, “ATHENa: Automated Tool for Hardware Evaluation,” Cryptographic Engineering Research Group, George Mason University, Fairfax, VA, http://cryptography.gmu.edu/athena/.
A. Moradi, A. Barenghi, T. Kasper, and C. Paar, “On the Vulnerability of FPGA Bitstream Encryption Against Power Analysis Attacks,” IACR ePrint Archive, 2011, http://eprint.iacr.org/2011/390.
S. Murdoch, S. Drimer, R. Anderson, and M. Bond, “Chip and PIN is Broken,” 2010 IEEE Symposium on Security and Privacy, www.cl.cam.ac.uk/~sjm217/papers/oakland10chipbroken.pdf.
P. Neumann (moderator), “The Risks Digest: Forum on Risks to the Public in Computers and Related Systems,” ACM Committee on Computers and Public Policy, http://catless.ncl.ac.uk/Risks.
C. Paar, “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Seminar at the Isaac Newton Institute for Mathematical Sciences, 2012.
C. Paar and J. Pelzl, Understanding Cryptography, Springer-Verlag, 2010, www.crypto-textbook.com.
B. Schneier, “Crypto-gram Newsletter,” www.schneier.com/crypto-gram.html.
This article first appeared in CC25.