Every few days we give you a sneak peek at some of the exciting content that will run in Circuit Cellar's 25th Anniversary issue, which is scheduled to be available in early 2013. You’ve read about Ed Nisley’s essay on his most memorable designs—from a hand-held scanner project to an Arduino-based NiMH cell tester—and Robert Lacoste’s tips for preventing embedded design errors. Now it’s time for another preview.
Many engineers know they are building electronic systems for use in dangerous times. They must plan for both hardware and software attacks, which makes embedded security a hot topic for 2013. In an essay on embedded security risks, Virginia Tech professor Patrick Schaumont looks at the current state of affairs through several examples. His tips and suggestions will help you evaluate the security needs of your next embedded design.
Schaumont writes:
As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.
First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.
Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in an embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.
Third, security has a cost; you cannot get it for free. It consumes resources and energy. In a resource-limited embedded system, this means security will always compete with other system features for those resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.
The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adapt standards and proven techniques. Learn about the experiences of other designers.
Schaumont then provides lists of helpful embedded security-related resources, such as Flylogic’s Analytics Blog and the Athena website at GMU.
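Schaumont’s second observation, the trust boundary, is the one that maps most directly onto code. The minimal sketch below is our illustration, not code from the essay. It assumes a threat model in which the attacker can observe and modify the off-chip wiring but cannot probe inside the chip package, so a key stored on-chip stays inside the trust boundary. The toy_tag() routine is only a placeholder for a standard MAC such as AES-CMAC or HMAC-SHA-256, in keeping with the advice above not to invent your own security; the point is the structure, not the arithmetic.

/* A minimal, hypothetical sketch: authenticating a frame that arrives over
 * an off-chip link (SPI, UART, I2C). Assumed threat model: the attacker can
 * observe and modify the off-chip wiring, but cannot probe inside the chip
 * package, so a key stored on-chip stays inside the trust boundary. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define TAG_LEN 8

/* Placeholder keyed checksum. This is NOT a secure MAC; a real design would
 * use a standard construction such as AES-CMAC or HMAC-SHA-256. */
static void toy_tag(const uint8_t *key, size_t key_len,
                    const uint8_t *msg, size_t msg_len,
                    uint8_t tag[TAG_LEN])
{
    memset(tag, 0, TAG_LEN);
    for (size_t i = 0; i < msg_len; i++)
        tag[i % TAG_LEN] ^= (uint8_t)(msg[i] + key[i % key_len] + (uint8_t)i);
}

/* Compare tags in constant time so timing does not reveal how many bytes matched. */
static int tags_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* A frame is payload bytes followed by TAG_LEN tag bytes.
 * Returns 1 if the frame is authentic, 0 otherwise. */
static int frame_is_authentic(const uint8_t *key, size_t key_len,
                              const uint8_t *frame, size_t frame_len)
{
    uint8_t expected[TAG_LEN];

    if (frame_len < TAG_LEN)
        return 0;
    toy_tag(key, key_len, frame, frame_len - TAG_LEN, expected);
    return tags_equal(expected, frame + (frame_len - TAG_LEN), TAG_LEN);
}

int main(void)
{
    /* The key never crosses the trust boundary; it stays on-chip. */
    static const uint8_t key[16] = { 0x13, 0x37, 0x42, 0x99 }; /* rest zero for the demo */

    uint8_t frame[4 + TAG_LEN] = { 'O', 'P', 'E', 'N' };
    toy_tag(key, sizeof key, frame, 4, frame + 4);         /* sender appends the tag */
    printf("genuine frame accepted:  %d\n",
           frame_is_authentic(key, sizeof key, frame, sizeof frame));

    frame[0] ^= 0xFF;                                      /* tampering on the wire */
    printf("tampered frame accepted: %d\n",
           frame_is_authentic(key, sizeof key, frame, sizeof frame));
    return 0;
}

The particular tag algorithm matters far less than the structure: every byte that crosses the untrusted link is checked, before the firmware acts on it, against a key that never leaves the chip.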
Circuit Cellar’s 25th Anniversary Issue will be available in early 2013. Stay tuned for more updates on the issue’s content.

Circuit Cellar's editorial team comprises professional engineers, technical editors, and digital media specialists. You can reach the Editorial Department at editorial@circuitcellar.com, @circuitcellar, and facebook.com/circuitcellar.