Embedded Security & IP Protection

Infineon Technologies’ new OPTIGA Trust E offers an easy-to-use solution for protecting manufacturers’ valuable IP in industrial automation equipment, medical systems, and more. Since encrypting software isn’t enough to protect your systems, the OPTIGA Trust E offers enhanced authentication and secured storage of software code and product data.

The OPTIGA Trust E’s features include:

  • Advanced pre-programmed security controller
  • Complete system integration support
  • Extended temperature range of -40°C to +85°C
  • Standardized I²C interface
  • Small USON-10 footprint
  • Supports authentication of products that rely on the USB Type-C standard
The OPTIGA Trust E is already available in volume quantities.

Source: Infineon 

USB-to-FPGA Communications: A Case Study of the ChipWhisperer-Lite

Sending data from a computer to an FPGA is often required. This might be FPGA configuration data, register settings, or streaming data. An easy solution is to use a USB-connected microcontroller instead of a dedicated interface chip, which allows you to offload certain tasks into the microcontroller.

In Circuit Cellar 299 (June 2015), Colin O’Flynn writes:

Often your FPGA-based project will require computer communication and some housekeeping tasks. A popular solution is the use of a dedicated USB interface chip, and a soft-core processor in the FPGA for housekeeping tasks.

For an open-source hardware project I recently launched, I decided to use an external USB microcontroller instead of a dedicated interface chip. I suspect you’ll find a lot of useful design tidbits you can use for yourself—and, because it’s open source, getting details of my designs doesn’t involve industrial espionage!

The design is called the ChipWhisperer-Lite (see Photo 1). This device is a training aid for learning about side-channel power analysis of cryptographic implementations. Side-channel power analysis uses measurements of small power variations during execution of the cryptographic algorithms to break the implementation of the algorithm.

Photo 1: This shows the ChipWhisperer-Lite, which contains a Xilinx Spartan 6 LX9 FPGA and Atmel SAM3U2C microcontroller. The remaining circuitry involves the power supplies, ADC, analog processing, and a development device which the user programs with some cryptographic algorithm they are analyzing.

In a previous article, “Build a SoC Over Lunch” (Circuit Cellar 289, 2014), I made the case for using a soft-core processor in an FPGA. In this article I’ll play the devil’s advocate by arguing that using an external microcontroller is a better choice. Of course the truth lies somewhere in between: in this example, the requirement of having a high-speed USB interface makes an external microcontroller more cost-effective, but this won’t always be the case.

This article assumes you require computer communication as part of your design. There are many options for this. The easiest from a hardware perspective is to use a USB-Serial converter, and many projects use such a system. The downside is a fairly slow interface, and the requirement of designing a serial protocol.

A more advanced option is to use a USB adapter with a parallel interface, such as the FTDI FT2232H. These can achieve very high data rates—basically up to the limit of the USB 2.0 interface. The downside is that for many applications you still need to implement some protocol on your FPGA, and such chips offer limited extra features (for example, if you need housekeeping tasks).

The solution I settled on is a USB microcontroller. These are widely available from most vendors with USB 2.0 high-speed (full 480 Mbps data rate) interfaces, and they let you handle not only the USB interface but also the various housekeeping tasks your system will require. The USB microcontroller will also likely cost about the same as (or possibly less than) an equivalent specialized interface chip.

When selecting a microcontroller, I recommend finding one with an external memory bus interface. This external memory bus is normally designed to allow you to map devices such as SRAM or DRAM into the memory space of the microcontroller. In our case we’ll actually be mapping FPGA registers into the microcontroller memory space, which means we don’t need any protocol for communication with the FPGA.


Figure 1: This figure shows the basic connections used for memory-mapping the FPGA into the microcontroller memory space. Depending on your requirements, you can add some additional custom lines, such as a flag to indicate different FPGA register banks to use, as only a 9-bit address bus is used in this example.

I selected an Atmel SAM3U2C microcontroller, which has a USB 2.0 high-speed interface. This microcontroller is low-cost and available in a TQFP package, which is convenient if you plan on hand-assembling prototype boards. The connections between the FPGA and microcontroller are shown in Figure 1.

On the FPGA, it is easy to map this data bus into registers. This means that to configure some feature in the FPGA, you can just directly write into a register. Or if you are transferring data, you can read from or write to a block-RAM (BRAM) implemented in the FPGA.

Check out Colin’s ChipWhisperer-Lite Kickstarter video:

Editors’ Pick: A Review of Current Embedded Security Risks

In recent years, security in embedded systems design has become a major concern. Patrick Schaumont’s CC25 article looks at the current state of affairs through several examples. The included tips and suggestions will help you evaluate the security needs of your next embedded design.

When you’re secure, you’re protected from loss or danger. Electronic security—the state of security for electronic systems—is essential for us because we rely so much on electronic embedded systems in everyday life. Embedded control units, RFID payment systems, wireless keys, cell phones, and intellectual-property firmware are just a few examples where embedded security matters to us. System malfunctions or the malicious uses of such devices are guaranteed to harm us. Security requires stronger guarantees than reliability. When we implement a secure system, we’re assuming an adversary with bad intentions. We’re dealing with someone who’s intentionally trying to cause harm. This article emphasizes attacks rather than solutions. The objective is to give you a sense of the issues.

Defining Embedded Security

As design engineers, we want to know how to create secure designs. Unfortunately, it’s hard to define the properties that make a design secure. Indeed, being “secure” often means being able to guarantee what is not going to happen. For example: “The wireless door opener on my house cannot be duplicated without my explicit authorization” or “The remote update of this wireless modem will not brick it.” Designing a secure system means being able to tell what will be prevented rather than enabled. This makes the design problem unique.

There is, of course, a good amount of science to help us. Cryptologists have long analyzed the desirable features of secure systems, and they have defined security objectives such as confidentiality, privacy, authentication, signatures, and availability. They have defined cryptographic algorithms such as encryption and decryption, public-key and symmetric-key operations, one-way functions, and random-number generation. They have also created cryptographic protocols, which show how to use those cryptographic algorithms in order to meet the intended security objectives.

Cryptography is a good starting point for secure embedded design. But it is not enough. Secure embedded designs face two specific challenges that are unique to embedded implementation. The first is that, by definition, embedded systems are resource-constrained. For example, they may use an 8-bit microcontroller and 32 KB of flash memory. Or they may even have no microcontroller at all and simply consist of a passively powered RFID device. Such severe resource constraints imply that there are hardly any compute cycles available for security functions. The second challenge is that embedded systems have simple, accessible form factors. Once deployed in the field, they become easy to tamper with, and they are subject to attacks that cryptologists never thought of. Indeed, classic cryptography assumes a “black-box” principle: it assumes that crypto-devices are free from tampering. Clearly, when an attacker can desolder components or probe microcontroller pins, the black-box principle breaks down.

Embedded Security Attacks

Embedded security attacks come in all forms and types. Here I’ll detail a few examples of recent, successful cases. In each of them, the attackers used a different approach. Refer to the documents listed in the Resources section at the end of this essay for pointers to in-depth discussions.

Let’s begin with a classic case of cryptanalysis. Keeloq was a proprietary encryption algorithm used in remote keyless entry systems. The algorithm is used by many car manufacturers, including Chrysler, General Motors, and Toyota, to name a few. It has a 64-bit key, which means that randomly trying keys will lead to a key search space of 2⁶⁴ possibilities. That is at the edge of what is practical for an attacker. Even when trying 10 million keys per second, you’d still need thousands of years to try all the keys of a 64-bit cipher. However, in 2008, researchers in Leuven, Belgium, found a way to reduce the search space to 44 bits. Essentially, they found a mathematical weakness in the algorithm and a way to exploit it. A 44-bit search space is much smaller. At 10 million keys per second, it only takes 20 days to cover the search space—a lot more practical. Clearly, deciding the key length of a secure embedded system is a critical design decision! Too short, and any progress in cryptanalysis may compromise your system. Too long, and the design may be too slow and too big for embedded implementation.

Attackers go further, as well, and tamper with the security protocol. In 2010, researchers from Cambridge, UK, demonstrated a hack on the “Chip and PIN” system, an embedded system for electronic payments. Chip and PIN is a system for electronic purchases. It is similar to a debit card, but it is based on a chip-card (a credit card with a built-in microprocessor). To make a purchase, the user inserts the chip-card in a merchant terminal and enters a PIN code. A correct PIN code will authorize purchases. The researchers found a flaw in the communication protocol between the merchant terminal and the chip-card. The terminal will authorize purchases if two conditions are met: when it has identified the chip-card and when it receives a “PIN-is-correct” message from this card. The researchers intercepted the messages between the terminal and the chip-card. They were then able to generate a “PIN-is-correct” message without an actual PIN verification taking place. The terminal—having identified the chip-card, and received a “PIN-is-correct” message—will now authorize purchases to the chip-card issuer (a bank). This type of attack, called a man-in-the-middle attack, was done with a hacked chip-card, an FPGA board, and a laptop. Equally important, it was demonstrated on a deployed, commercial system. In the Resources section of this article I list a nice demonstration video that appeared on the BBC’s Newsnight program.

One step beyond the man-in-the-middle attack, the attacker will actively analyze the implementation, typically starting with the cryptographic components of the design. A recent and important threat in this category is side-channel analysis (SCA). In SCA, an attacker observes the characteristics of a cryptographic implementation: its execution time, its power dissipation, and its electromagnetic patterns. By sampling these characteristics at high speed, the attacker is able to observe data-dependent variations. These variations are called side-channel leakage. SCA is the systematic analysis of side-channel leakage. Given sufficient measurements—say, a few hundred to a few thousand—SCA is able to extract cryptographic keys from a device. SCA is practical and efficient. For example, in the past two years, SCA has been used successfully to break FPGA bitstream encryption and Atmel CryptoMemory. Links to detailed information are in the Resources section of this essay.

If there’s one thing obvious from these examples, it is that perfect embedded security cannot exist. Attackers have a wide variety of techniques at their disposal, ranging from analysis to reverse engineering. When attackers get their hands on your embedded system, it is only a matter of time and sufficient eyeballs before someone finds a flaw and exploits it.

What Can You Do?

The examples above are just the tip of the iceberg, and may leave the impression of a cumbersome situation. As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.

First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.

Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in an embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.

Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.

The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adopt standards and proven techniques. Learn about the experiences of other designers. The following examples are good starting points for learning about current concerns and issues in embedded security.

Three Books for Your Desk

Security is a complex field with many different dimensions. I find it very helpful to have several reference works close by to help me navigate the steps of building any type of security service. The following three books describe the basics of information security and systems security. While not specifically targeted at the embedded context alone, the concepts they explain are equally valid for it.

Christof Paar and Jan Pelzl’s Understanding Cryptography: A Textbook for Students and Practitioners gives an overview of basic cryptographic algorithms. The authors explain the different types of encryption algorithms (stream and block ciphers, as well as various standards). They describe the use of public-key cryptography, covering RSA and elliptic curve cryptography (ECC), and their use for digital signatures. And they discuss hash algorithms and message authentication codes. The book does not cover cryptographic protocols, apart from key agreement. A nice thing about the book is that you can find online lectures for each chapter.

Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno’s Cryptography Engineering: Design Principles and Practical Applications covers basic cryptography as well, but with a slightly different emphasis than the first. It takes a more practical approach and frequently refers to existing practice in cryptography. It has sections on (software-oriented) implementation issues and practical implementation of key agreement protocols. This book would give immediate value to the practicing engineer—although it does not connect to the embedded context as well as the previous book. For example, it does not mention ECC.

Ross Anderson’s Security Engineering is a bible on secure systems design. It’s very broad. It builds up from basic cryptography over protocols up to secure systems design for telecoms, networking, copyright control, and more. It’s an excellent book on the systems perspective of secure design. The first edition of this book can be downloaded for free from the author’s website, though it’s well worth the investment to have the latest edition on your desk.

Four Sites

Many websites cover product teardowns and the specific security features of these implementations. Flylogic’s Analytics Blog (www.flylogic.net/blog/) describes the analysis of various chipcards. It contains chip micrographs and discusses various techniques to reverse-engineer chip security features. The website is an excellent showcase of what’s possible for a knowledgeable individual; it also clearly illustrates the point that perfect security cannot exist.

If you would like to venture into the analysis of secure embedded designs yourself, then the Embedded Analysis wiki by Nathan Fain and Vadik is a must-read (http://events.ccc.de/congress/2010/wiki/Embedded_Analysis). They discuss various reverse-engineering tools to help you monitor a serial line, extract the image of a flash memory, and analyze the JTAG interface of an unknown component. They also cover reverse-engineering practice in an online talk, which I’ll mention below.

Earlier I noted that cost is an important element in the security design. If you’re using cryptography, then this will cost you compute cycles, digital gates, and memory footprint. There are a few websites that give an excellent overview of these implementation costs for various algorithms.

The EBACS website contains a benchmark for cryptographic software, covering hash functions, various block and stream ciphers, and public-key implementations (http://bench.cr.yp.to/supercop.html). Originally designed for benchmarking on personal computers, it now also includes benchmarks for ARM-based embedded platforms. You can also download the benchmarks for a wealth of reference implementations of cryptographic algorithms. The Athena website at GMU presents a similar benchmark, but it’s aimed at cryptographic hardware (http://cryptography.gmu.edu/athena/). It currently concentrates on hash algorithms (in part due to its development for the SHA-3 competition). You can apply the toolkit to other types of cryptographic benchmarking as well. The website provides a host of hardware reference implementations for hash algorithms. It also distributes the benchmarking software, which is fully automated on top of existing FPGA design flows from Altera and Xilinx.

Three Newsletters

Security is a fast-evolving field. You can remain up to date on the latest developments by subscribing to a few newsletters. Here are three newsletters that have never failed to make a few interesting points to me. They do not exclusively focus on secure embedded implementations, but frequently mention the use of embedded technology in the context of a larger security issue.

The ACM RISKS list (http://catless.ncl.ac.uk/Risks) enumerates cases of typical security failures, many of them related to embedded systems. Some of the stories point out what can happen if we trust our embedded computers too blindly, such as GPS systems that lead people astray and leave them stranded. Other stories discuss security implications of embedded computers, such as the recent news that 24% of medical device recalls are related to software failures.

Bruce Schneier’s “Schneier on Security” blog and Crypto-Gram newsletter (www.schneier.com/crypto-gram.html) focus on recent ongoing security issues. He covers everything from the issues with using airport scanners to the latest hack on BMW’s remote keyless entry system.

The Technicolor Security Newsletter (www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20) discusses contemporary security issues related to computer graphics, content protection, rights management, and more. The newsletter gives succinct, clear descriptions of content protection (and attacks on it) for mobile platforms, game machines, set-top boxes, and more.

Three Web Presentations

You can also learn from watching presentations by security professionals. Here are three interesting ones that specifically address security in embedded devices.

In a talk titled “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Christof Paar covers the use of side-channel analysis (SCA) to break the security of various embedded devices, including wireless keys, encrypted FPGA bitstreams, and RFID smartcards. The talk is an excellent illustration of what can be achieved with SCA.

Nathan Fain gave a talk called “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique” at a recent conference. The author discusses various tools for analyzing embedded systems. It’s the live version of the wiki page listed earlier. Go to his website (www.deadhacker.com) to download the tools he discusses.

Finally, in a talk titled “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” Stephen Checkoway discusses the embedded security analysis of cars. The author demonstrates how an attacker is able to access a car’s internal network, a concept called “the attack surface.” He points out several known issues, such as the risks posed by the on-board diagnostics (OBD) port. But he also demonstrates a wide variety of additional access points, from CD to long-range wireless links. Each of these access points comes with specific risks, such as remote unlocking of doors and unauthorized tracking. It’s a fascinating discussion that demonstrates how the ubiquitous microcontroller has brought safety as well as risk to our cars.

Looking Forward

Security in embedded systems design requires a designer to think about ways in which bad things are prevented from happening. We have seen a great deal of progress in our understanding of the threats to embedded systems. However, it’s clear that there is no silver bullet. The threats are extremely diverse, and eventually it’s up to the designer to decide what to protect. In this article, I provided a collection of pointers that should help you learn more about these threats.—By Patrick Schaumont (Patrick is an associate professor at Virginia Tech, where he works with students on research projects relating to embedded security. Patrick has covered a variety of embedded security-related topics for Circuit Cellar: one-time passwords, electronic signatures for firmware updates, and hardware-accelerated encryption.)

RESOURCES

R. Anderson, Security Engineering, Second Edition, Wiley Publishing, Indianapolis, IN, 2008.

J. Balasch, B. Gierlichs, R. Verdult, L. Batina, and I. Verbauwhede, “Power Analysis of Atmel CryptoMemory — Recovering Keys from Secure EEPROMs,” in O. Dunkelman (ed.), Topics in Cryptology — CT-RSA 2012, The Cryptographer’s Track at the RSA Conference, Lecture Notes in Computer Science 7178, Springer-Verlag, 2012.

BBC Newsnight, “Chip and PIN is Broken,” www.youtube.com/watch?v=1pMuV2o4Lrw.

D. Bernstein and T. Lange, “EBACS: ECRYPT Benchmarking of Cryptographic Systems,” http://bench.cr.yp.to/supercop.html.

E. Biham, O. Dunkelman, S. Indesteege, N. Keller, and B. Preneel, “How to Steal Cars—A Practical Attack on Keeloq,” COSIC, www.cosic.esat.kuleuven.be/keeloq/.

S. Checkoway, “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” www.youtube.com/watch?v=bHfOziIwXic.

E. Diels, “Technicolor Security Newsletter,” www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20.

N. Fain and Vadik, “Embedded Analysis,” http://events.ccc.de/congress/2010/wiki/Embedded_Analysis.

———, “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique,” www.youtube.com/watch?v=8Unisnu-cNo.

N. Ferguson, B. Schneier, and T. Kohno, Cryptography Engineering, Wiley Publishing, Indianapolis, IN, 2010.

Flylogic’s Analytics Blog, www.flylogic.net/blog/.

K. Gaj and J. Kaps, “ATHENa: Automated Tool for Hardware Evaluation,” Cryptographic Engineering Research Group, George Mason University, Fairfax, VA, http://cryptography.gmu.edu/athena/.

A. Moradi, A. Barenghi, T. Kasper, and C. Paar, “On the Vulnerability of FPGA Bitstream Encryption Against Power Analysis Attacks,” IACR ePrint Archive, 2011, http://eprint.iacr.org/2011/390.

S. Murdoch, S. Drimer, R. Anderson, and M. Bond, “Chip and PIN is Broken,” 2010 IEEE Symposium on Security and Privacy, www.cl.cam.ac.uk/~sjm217/papers/oakland10chipbroken.pdf.

P. Neumann (moderator), “The Risks Digest: Forum on Risks to the Public in Computers and Related Systems,” ACM Committee on Computers and Public Policy, http://catless.ncl.ac.uk/Risks.

C. Paar, “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Seminar at the Isaac Newton Institute for Mathematical Sciences, 2012.

C. Paar and J. Pelzl, Understanding Cryptography, Springer-Verlag, 2010, www.crypto-textbook.com.

B. Schneier, “Crypto-gram Newsletter,” www.schneier.com/crypto-gram.html.

This article first appeared in CC25.

 

 

Security Agents for Embedded Intrusion Detection

Knowingly or unknowingly, we interact with hundreds of networked embedded devices in our day-to-day lives: mobile devices, household electronics, medical equipment, automobiles, media players, and many more. This increased dependence on networked embedded devices, however, has raised serious security concerns. In the past, security of embedded systems was not a major concern, because these systems formed stand-alone networks containing only trusted devices, with little or no communication to the external world. One could execute an attack only with direct physical or local access to the internal embedded network or to the device. Today, however, almost every embedded device is connected to other devices or the external world (e.g., the Cloud) for advanced monitoring and management capabilities. On one hand, networking capabilities pave the way for the smarter world we currently live in; on the other hand, the same capabilities raise severe security concerns for embedded devices. Recent attacks on embedded device product portfolios presented at the Black Hat and Defcon conferences have identified remote exploit vulnerabilities (e.g., an adversary who exploits the remote connectivity of embedded devices to launch attacks such as privacy leakage, malware insertion, and denial of service) as one of the major attack vectors. A handful of research efforts along the lines of traditional security defenses have been proposed to enhance the security posture of these networked devices. These solutions, however, do not entirely solve the problem, and we therefore argue the need for a lightweight intrusion-defense capability within the embedded device itself.

In particular, we observe that the networking capability of embedded devices can be leveraged to provide an in-home secure proxy server that monitors all the network traffic to and from the devices. The proxy server acts as a gateway, performing policy-based operations on all traffic to and from the interconnected embedded devices inside the household. To do so, the proxy server implements an agent-based computing model in which each embedded device runs a lightweight checker agent that periodically reports the device status back to the server; the server verifies the operation’s integrity and signals the device to continue its normal functionality. A similar approach is proposed in Ang Cui and Salvatore J. Stolfo’s 2011 paper, “Defending Embedded Systems with Software Symbiotes,” in which a piece of software called a Symbiote is injected into the device’s firmware and uses a secure checksum-based approach to detect malicious intrusions into the device.

In contrast to Symbiote, our lightweight checker agents merely forward device status to the server, and all the related heavy computation is offloaded to the proxy server, which makes our approach computationally efficient. The proposed model incurs only a small computational overhead for gathering and reporting critical device status messages to the server. The communication overhead can also be amortized in most circumstances, since the sensor data from the checker agents can be piggybacked onto the ordinary data messages exchanged between the device and the server. Like the approach described in the aforementioned Cui and Stolfo paper, our model can be easily integrated with legacy embedded devices, as the only modification required is a “firmware upgrade that includes checker agents.”

To complete the picture, we propose an additional layer of security for modern embedded devices by designing an AuditBox, as in the article “Pillarbox” by K. Bowers, C. Hart, A. Juels, and N. Triandopoulos. The AuditBox keeps an obfuscated log of malicious events taking place at the device, which is reported back to the server at predefined intervals. This enables the server to act accordingly, either by revoking the device from the network or by restoring it to a safe state. AuditBox enforces integrity, by making it possible to verify whether the logs at the device have been tampered with by an adversary in control of the device, and covertness, by hiding from an attacker with access to the device whether the log reports detection of malicious behavior. To realize these requirements, AuditBox exploits the concept of forward-secure key generation.

Embedded systems security is of crucial importance and the need of the hour. Along with the advancement in embedded systems technology, we need to put an equal emphasis on its security in order for our world to be truly a smarter place.

RESOURCES
K. Bowers, C. Hart, A. Juels, and N. Triandopoulos, “Pillarbox: Combating Next-Generation Malware with Fast Forward-Secure Logging,” in Research in Attacks, Intrusions and Defenses, Lecture Notes in Computer Science, A. Stavrou, H. Bos, and G. Portokalidis (eds.), Springer, 2014, http://dx.doi.org/10.1007/978-3-319-11379-1_3.

A. Cui and S. J. Stolfo, “Defending Embedded Systems with Software Symbiotes,” in Proceedings of the 14th International Conference on Recent Advances in Intrusion Detection (RAID’11), R. Sommer, D. Balzarotti, and G. Maier (eds.), Springer-Verlag, 2011, http://dx.doi.org/10.1007/978-3-642-23644-0_19.

Dr. Devu Manikantan Shila is the Principal Investigator for the Cyber Security area within the Embedded Systems and Networks Group at the United Technologies Research Center (UTRC).

 

Marten van Dijk is an Associate Professor of Electrical and Computer Engineering at the University of Connecticut, with over 10 years of research experience in system security in both academia and industry.

 

Syed Kamran Haider is pursuing a PhD in Computer Engineering supervised by Marten van Dijk at the University of Connecticut.

 

This essay appears in Circuit Cellar 297 (April 2015).

Embedded Security (EE Tip #139)

Embedded security is one of the most important topics in our industry. You could build an amazing microcontroller-based design, but if it is vulnerable to attack, it could become useless or even a liability.

Virginia Tech professor Patrick Schaumont explains, “perfect embedded security cannot exist. Attackers have a wide variety of techniques at their disposal, ranging from analysis to reverse engineering. When attackers get their hands on your embedded system, it is only a matter of time and sufficient eyeballs before someone finds a flaw and exploits it.”

So, what can you do? In CC25, Patrick Schaumont provided some tips:

As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.

First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.

Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in embedded context. For example, let’s say that you define the trust boundary to be at the chip package level of a microcontroller.

This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.

Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.

The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adapt standards and proven techniques. Learn about the experiences of other designers. The following examples are good starting points for learning about current concerns and issues in embedded security.

Security is a complex field with many different dimensions. I find it very helpful to have several reference works close by to help me navigate the steps of building any type of security service.

Schaumont suggested the following useful resources:

Embedded Security Tips (CC 25th Anniversary Preview)

Every few days we give you a sneak peek at some of the exciting content that will run in Circuit Cellar‘s Anniversary issue, which is scheduled to be available in early 2013. You’ve read about Ed Nisley’s essay on his most memorable designs—from a hand-held scanner project to an Arduino-based NiMH cell tester—and Robert Lacoste’s tips for preventing embedded design errors. Now it’s time for another preview.

Many engineers know they are building electronic systems for use in dangerous times. They must plan for both hardware and software attacks, which makes embedded security a hot topic for 2013.  In an essay on embedded security risks, Virginia Tech professor Patrick Schaumont looks at the current state of affairs through several examples. His tips and suggestions will help you evaluate the security needs of your next embedded design.

Schaumont writes:

As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.

First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.

Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.

Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.

The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adapt standards and proven techniques. Learn about the experiences of other designers.

Schaumont then provides lists of helpful embedded security-related resources, such as Flylogic’s Analytics Blog and the Athena website at GMU.

One-Time Passwords from Your Watch

Passwords establish the identity of a user, and they are an essential component of modern information technology. In this article, I describe one-time passwords: passwords that you use once and then never again. Because they’re used only once, you don’t have to remember them. I describe how to implement one-time passwords with a Texas Instruments (TI) eZ430-Chronos wireless development tool in a watch and how to use them to log in to existing web services such as Google Gmail (see Photo 1).

Photo 1—The Texas Instruments eZ430 Chronos watch displays a unique code that enables logging into Google Gmail. The code is derived from the current time and a secret value embedded in the watch.

To help me get around on the Internet, I use a list of about 80 passwords (at the latest count). Almost any online service I use requires a password: reading e-mail, banking, shopping, checking reservations, and so on. Many of these Internet-based services have Draconian password rules. For example, some sites require a password of at least eight characters with at least two capitals or numbers and two punctuation characters. The sheer number of passwords, and their complexity, makes it impossible to remember all of them.

What are the alternatives? There are three different ways of verifying the identity of a remote user. The most prevalent one, the password, tests something that a user knows. A second method tests something that the user has, such as a secure token. Finally, we can make use of biometrics, testing a unique user property, such as a fingerprint or an iris pattern.

Each of these three methods comes with advantages and disadvantages. The first method (passwords) is inexpensive, but it relies on the user’s memory. The second method (secure token) replaces the password with a small amount of embedded hardware. To help the user to log on, the token provides a unique code. Since it’s possible for a secure token to get lost, it must be possible to revoke the token. The third method (biometrics) requires the user to enroll a biometric, such as a fingerprint. Upon login, the user’s fingerprint is measured again and tested against the enrolled fingerprint. The enrollment has potential privacy issues. And, unlike a secure token, it’s not possible to revoke something that is biometric.

The one-time password design in this article belongs to the second category. A compelling motivation for this choice is that a standard, open proposal for one-time passwords is available. The Initiative for Open Authentication (OATH) is an industry consortium that works on a universal authentication mechanism for Internet users. They have developed several proposals for user authentication methods, and they have submitted these to the Internet Engineering Task Force (IETF). I’ll be relying on these proposals to demonstrate one-time passwords using a eZ430-Chronos watch. The eZ430-Chronos watch, which I’ll be using as a secure token, is a wearable embedded development platform with a 16-bit Texas Instruments MSP430 microcontroller.

ONE-TIME PASSWORD LOGON

Figure 1 demonstrates how one-time passwords work. Let’s assume a user—let’s call him Frank—is about to log on to a server. Frank will generate a one-time password using two pieces of information: a secret value unique to Frank and a counter value that increments after each authentication. The secret, as well as the counter, is stored in a secure token. To transform the counter and the secret into a one-time password, a cryptographic hash algorithm is used. Meanwhile, the server will generate the one-time password it is expecting to see from Frank. The server has a user table that keeps track of Frank’s secret and his counter value. When both the server and Frank obtain the same output, the server will authenticate Frank. Because Frank will use each password only once, it’s not a problem if an attacker intercepts the communication between Frank and the server.

Figure 1—A one-time password is formed by passing the value of a personal secret and a counter through a cryptographic hash (1). The server obtains Frank’s secret and counter value from a user table and generates the same one-time password (2). The two passwords must match to authenticate Frank (3). After each authentication, Frank’s counter is incremented, ensuring a different password the next time (4).

After each logon attempt, Frank will update his copy of the counter in the secure token. The server, however, will only update Frank’s counter in the user table when the logon was successful. This guards against false logon attempts. Of course, it is possible that Frank’s counter value in the secure token gets out of sync with Frank’s counter value in the server. To adjust for that possibility, the server will use a synchronization algorithm: it will attempt a window of counter values before rejecting Frank’s logon. The window chosen should be small (e.g., five); it should only cover the occasional failed logon performed by Frank. As an alternative to counter synchronization, Frank could also send the value of his counter directly to the server. This is safe because of the properties of a cryptographic hash: the secret value cannot be computed from the one-time password, even if one knows the counter value.

You see that, similar to the classic password, the one-time password scheme still relies on a shared secret between Frank and the server. However, the shared secret is not communicated directly from the user to the server, it is only tested indirectly through the use of a cryptographic hash. The security of a one-time password therefore stands or falls with the security of the cryptographic hash, so it’s worthwhile to look further into this operation.

CRYPTOGRAPHIC HASH

A cryptographic hash is a one-way function that calculates a fixed-length output, called the digest, from an arbitrary-length input, called the message. The one-way property means that, given the message, it’s easy to calculate the digest. But, given the digest, one cannot recover the message.

The one-way property of a good cryptographic hash implies that no information is leaked from the message into the digest. For example, a small change in the input message may cause a large and seemingly random change in the digest. For the one-time password system, this property is important. It ensures that each one-time password will look very different from one authentication to the next.

The one-time password algorithm makes use of the SHA-1 cryptographic hash algorithm, which produces a digest of 160 bits. By today’s Internet standards, SHA-1 is considered old. It was designed by the NSA and published by NIST as a federal standard in 1995.

Is SHA-1 still adequate to create one-time passwords? Let’s consider the problem that an attacker must solve to break the one-time password system. Assume an attacker knows the SHA-1 digest of Frank’s last logon attempt. The attacker could now try to find a message that matches the observed digest. Indeed, knowing the message implies knowing a value of Frank’s secret and the counter. Such an attack is called a pre-image attack.

Fortunately, for SHA-1, there are no known (published) pre-image attacks that are more efficient than brute force, which tries all possible messages. It’s easy to see that this requires an astronomical number of message values: for a 160-bit digest, the attacker can expect to test on the order of 2^160 messages. Therefore it’s reasonable to conclude that SHA-1 is adequate for the one-time password algorithm. Note, however, that this does not imply that SHA-1 is adequate for every application. In another attack model, cryptographers worry about collisions, the possibility of an attacker finding a pair of messages that generate the same digest. For such attacks on SHA-1, significant progress has been made in recent years.

The one-time password scheme in Figure 1 combines two inputs into a single digest: a secret key and a counter value. To combine a static, secret key with a variable message, cryptographers use a keyed hash. The digest of a keyed hash is called a message authentication code (MAC). It can be used to verify the identity of the message sender.

Figure 2 shows how SHA-1 is used in a hash-based message authentication code (HMAC) construction. SHA-1 is applied twice. The first SHA-1 input is a combination of the secret key and the input message. The resulting digest is combined again with the secret key, and SHA-1 is then used to compute the final MAC. Each time, the secret key is mapped into a block of 512 bits. The first time, it is XORed with a constant array of 64 copies of the value 0x36. The second time, it is XORed with a constant array of 64 copies of the value 0x5C.

Figure 2—The SHA-1 algorithm on the left is a one-way function that transforms an arbitrary-length message into a 160-bit fixed digest. The Hash-based message authentication code (HMAC) on the right uses SHA-1 to combine a secret value with an arbitrary-length message to produce a 160-bit message authentication code (MAC).

THE HOTP ALGORITHM

With the HMAC construction, the one-time password algorithm can now be implemented. In fact, the HMAC can almost be used as is. The problem with using the MAC itself as the one-time password is that it contains too many bits. The secure token used by Frank does not directly communicate with the server. Rather, it shows a one-time password Frank needs to type in. A 160-bit number requires 48 decimal digits, which is far too long for a human.

OATH has proposed the Hash-based one-time password (HOTP) algorithm. HOTP uses a key (K) and a counter (C). The output of HOTP is a six-digit, one-time password called the HOTP value. It is obtained as follows. First, compute a 160-bit HMAC value using K and C. Store this result in an array of 20 bytes, hmac, such that hmac[0] contains the 8 leftmost bits of the 160-bit HMAC string and hmac[19] contains the 8 rightmost bits. The HOTP value is then computed with a snippet of C code (see Listing 1).

Listing 1—C code used to compute the HOTP value

There is now an algorithm that will compute a six-digit code starting from a K value and a C value. HOTP is described in IETF RFC 4226. A typical HOTP implementation would use a 32-bit C and an 80-bit K.

An interesting variant of HOTP, which I will be using in my implementation, is the time-based one-time password (TOTP) algorithm. The TOTP value is computed in the same way as the HOTP value; however, the counter C is replaced with a timestamp value. Rather than synchronizing a counter between the secure token and the server, TOTP simply relies on the time, which is the same for the server and the token. Of course, this requires the secure token to have access to a stable and synchronized time source, but for a watch, this is a requirement that is easily met.

The timestamp value chosen for TOTP is the current Unix time, divided by a factor d. The current Unix time is the number of seconds that have elapsed since midnight January 1, 1970, Coordinated Universal Time. The factor d compensates for small synchronization differences between the server and the token. For example, a value of 30 will enable a 30-s window for each one-time password. The 30-s window also gives a user sufficient time to type in the one-time password before it expires.

IMPLEMENTATION IN THE eZ430-CHRONOS WATCH

I implemented the TOTP algorithm on the eZ430-Chronos watch. This watch contains a CC430F6137 microcontroller, which has 32 KB of flash memory for programs and 4,096 bytes of RAM for data. The watch comes with a set of software applications to demonstrate its capabilities. Software for the watch can be written in C using TI’s Code Composer Studio (CCStudio) or in IAR Systems’s IAR Embedded Workbench.

The software for the eZ430-Chronos watch is structured as an event-driven system that ties activities performed by software to events such as alarms and button presses. In addition, the overall operation of the watch is driven through several modes, corresponding to a particular function executed on the watch. These modes are driven through a menu system.

Photo 2 shows the watch with its 96-segment liquid crystal display (LCD) and four buttons to control its operation. The left buttons select the mode. The watch has two independent menu systems, one to control the top line of the display and one to control the bottom line. Hence, the overall mode of the watch is determined by a combination of a menu-1 entry and a menu-2 entry.

Photo 2—With the watch in TOTP mode, one-time passwords are shown on the second line of the display. In this photo, I am using the one-time password 854410. The watch display cycles through the strings “totP,” “854,” and “410.”

Listing 2 illustrates the code relevant to the TOTP implementation. When the watch is in TOTP mode, the sx button is tied to the function set_totp(). This function initializes the TOTP timestamp value.

Listing 2—Code relevant to the TOTP implementation

The function retrieves the current time from the watch and converts it into elapsed seconds using the standard library function mktime. Two adjustments are made to the output of mktime, on line 11 and line 12. The first, 2208988800, takes into account that the mktime in the TI library returns the number of seconds since January 1, 1900, while the TOTP standard sets zero time at January 1, 1970. The second, 18000, takes into account that my watch is set to Eastern Standard Time (EST), while the TOTP standard assumes the UTC time zone (five hours ahead of EST). Finally, on line 14, the number of seconds is divided by 30 to obtain the standard TOTP timestamp. The TOTP timestamp is further updated every 30 s, through the function tick_totp().

The one-time password is calculated by compute_totp on line 33. Rather than writing a SHA1-HMAC from scratch, I ported the open-source implementation from Google Authenticator to the TI MSP 430. Lines 39 through 50 show how a six-digit TOTP code is calculated from the 160-bit digest output of the SHA1-HMAC.

The display menu function is display_totp on line 52. The function is called when the watch first enters TOTP mode and every second after that. First, the watch will recompute the one-time password code at the start of each 30-s interval. Next, the TOTP code is displayed. The six digits of the TOTP code are more than can be shown on the bottom line of the watch. Therefore, the watch will cycle between showing “totP,” the first three digits of the one-time password, and the next three digits of the one-time password. The transitions each take 1 s, which is sufficient for a user to read all digits.

There is one element missing to display TOTP codes: I did not explain how the unique secret value is loaded into the watch. I use Google Authenticator to generate this secret value and to maintain a copy of it on Google’s servers so that I can use it to log on with TOTP.

LOGGING ONTO GMAIL

Google Authenticator is an implementation of TOTP developed by Google. It provides implementations for Android, BlackBerry, and iOS, so you can use a smartphone as a secure token. In addition, it also enables you to extend your login procedure with a one-time password. You cannot replace your standard password with a one-time password, but you can enable both at the same time. Such a solution is called a two-factor authentication procedure: you need to provide both a password and a one-time password to complete the login.

As part of setting up the two-factor authentication with Google (through Account Settings – Using Two-Step Verification), you will receive a secret key. The secret key is presented as a 16-character string made up of a 32-character alphabet. The alphabet consists of the letters A through Z and the digits 2, 3, 4, 5, 6, and 7. This clever choice avoids digits that can be confused with letters (8 and B, for example). The 16-character string thus represents an 80-bit key.

I program this string in the TOTP design for the eZ430-Chronos watch to initialize the secret. In the current implementation, the key is loaded in the function reset_totp().

base32_decode((const u8 *)
      "4RGXVQI7YVY4LBPC", stotp.key, 16);

Of course, entering the key as a constant string in the firmware is an obvious vulnerability. An attacker who has access to a copy of the firmware also has the secret key used by the TOTP implementation! It’s possible to protect or obfuscate the key from the watch firmware, but these techniques are beyond the scope of this article. Once the key is programmed into the watch and the time is correctly set, you can display TOTP codes that help you complete the logon process of Google. Photo 1 shows a demonstration of logging onto Google’s two-step verification with a one-time password.

OTHER USES OF TOTP

There are other possibilities for one-time passwords. If you are using Linux as your host PC, you can install the OATH Toolkit, which implements the HOTP and TOTP mechanisms for logon. This toolkit enables you to install authentication modules on your PC that can replace the normal login passwords. This enables you to effectively replace the password you need to remember with a password generated from your watch.

Incidentally, several recent articles—which I have included in the resources section of this article—point to the limits of conventional passwords. New technologies, including one-time passwords and biometrics, provide an interesting alternative. With standards such as those from OATH around the corner, the future may become more secure and user-friendly at the same time.

[Editor’s note: This article originally appeared in Circuit Cellar 262, May 2012.]

Patrick Schaumont writes the Embedded Security column for Circuit Cellar magazine. He is an Associate Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. Patrick works with his students on research projects in embedded security, covering hardware, firmware, and software.

PROJECT FILES

To download the code, go to ftp://ftp.circuitcellar.com/pub/Circuit_Cellar/2012/262.

RESOURCES

Google Authenticator, http://code.google.com/p/google-authenticator.

Initiative for Open Authentication (OATH), www.openauthentication.org.

Internet Engineering Task Force (IETF), www.ietf.org.

D. M’Raihi, et al, “TOTP: Time-Based One-Time Password Algorithm,” IETF RFC 6238, 2011.

—, “HOTP: An HMAC-Based One-Time Password Algorithm,” IETF RFC 4226, 2005.

OATH Toolkit, www.nongnu.org/oath-toolkit.

K. Schaffer, “Are Password Requirements Too Difficult?,” IEEE Computer Magazine, 2011.

S. Sengupta, “Logging in With a Touch or a Phrase (Anything but a Password),” New York Times, 2011.

SOURCES

IAR Embedded Workbench – IAR Systems

eZ430-Chronos Wireless development system and Code Composer Studio (CCStudio) IDE – Texas Instruments, Inc.

 

CC264: Plan, Construct, and Secure

Circuit Cellar July 2012 features innovative ideas for embedded design projects, handy design tips with real-world examples, and essential information on embedded design planning and security. A particularly interesting topic covered in this issue is microcontroller-based home control systems (HCSes). Interest in building HCSes never wanes. In fact, articles about such projects have appeared in this magazine since 1988.

Circuit Cellar 264 (July 2012) is now available.

Turn to page 18 for the first HCS-related article. John Breitenbach details how he built an Internet-enabled, cloud-based attic monitoring system. Turn to page 36 for another HCS article. Tommy Tyler explains how to build a handy MCU-based digital thermometer. You can construct a similar system for your home, or you can apply what you learn to a variety of other temperature-sensing applications. Are you currently working on a home automation design or industrial control system? Check out Richard Wotiz’s “EtherCAT Orchestra” (p. 52). He describes an innovative industrial control network built around seven embedded controllers.

John Breitenbach's DIY leak-monitoring system

The wiring diagram for Tommy Tyler's MCU-based digital thermometer

The rest of the articles in the issue cover essential electrical engineering concepts and design techniques. Engineers of every skill level will find the information immediately applicable to the projects on their workbenches.

Tom Struzik’s article on USB is a good introduction to the technology, and it details how to effectively customize an I/O and data transfer solution (p. 28). On page 44, Patrick Schaumont introduces the topic of electronic signatures and then details how to use them to sign firmware updates. George Novacek provides a project development road map for professionals and novices alike (p. 58). Flip to page 62 for George Martin’s insight on switch debouncing and interfacing to a simple device. On page 68, Jeff Bachiochi tackles the concepts of wireless data delivery and time stamping.

Jeff Bachiochi's hand-wired modules

I encourage you to read the interview with Boston University professor Ayse Kivilcim Coskun on page 26. Her research on 3-D stacked systems has gained recognition in academia, and it could change the way electrical engineers and chip manufacturers think about energy efficiency for years to come. If you’re an engineer fascinated by “green computing,” you’ll find Coskun’s work particularly intriguing.

Special note: The Circuit Cellar staff dedicates this issue to Richard Alan Wotiz who passed away on May 30, 2012. We appreciate having had the opportunity to publish articles about his inventive projects and innovative engineering ideas and solutions. We extend our condolences to his family and friends.

Circuit Cellar Issue 264 (July 2012) is now available on newsstands. Go to Circuit Cellar Digital and then select “Free Preview” to take a look at the first several pages.