Editors’ Pick: A Review of Current Embedded Security Risks

In recent years, security in embedded systems design has become a major concern. Patrick Schaumont’s CC25 article looks at the current state of affairs through several examples. The included tips and suggestions will help you evaluate the security needs of your next embedded design.

When you’re secure, you’re protected from loss or danger. Electronic security—the state of security for electronic systems—is essential for us because we rely so much on electronic embedded systems in everyday life. Embedded control units, RFID payment systems, wireless keys, cell phones, and intellectual-property firmware are just a few examples where embedded security matters to us. System malfunctions or the malicious uses of such devices are guaranteed to harm us. Security requires stronger guarantees than reliability. When we implement a secure system, we’re assuming an adversary with bad intentions. We’re dealing with someone who’s intentionally trying to cause harm. This article emphasizes attacks rather than solutions. The objective is to give you a sense of the issues.

Defining Embedded Security

As design engineers, we want to know how to create secure designs. Unfortunately, it’s hard to define the properties that make a design secure. Indeed, being “secure” often means being able to guarantee what is not going to happen. For example: “The wireless door opener on my house cannot be duplicated without my explicit authorization” or “The remote update of this wireless modem will not brick it.” Designing a secure system means being able to tell what will be prevented rather than enabled. This makes the design problem unique.

There is, of course, a good amount of science to help us. Cryptologists have long analyzed the desirable features of secure systems, and they have defined security objectives such as confidentiality, privacy, authentication, signatures, and availability. They have defined cryptographic algorithms such as encryption and decryption, public-key and symmetric-key operations, one-way functions, and random-number generation. They have also created cryptographic protocols, which show how to use those cryptographic algorithms in order to meet the intended security objectives.

Cryptography is a good starting point for secure embedded design. But it is not enough. Secure embedded designs face two specific challenges that are unique to embedded implementation. The first is that, by definition, embedded systems are resource-constrained. For example, they may use an 8-bit microcontroller and 32 KB of flash memory. Or they may even have no microcontroller at all and simply consist of a passively powered RFID device. Such severe resource constraints imply that there are hardly any compute cycles available for security functions. The second challenge is that embedded systems have simple, accessible form factors. Once deployed in the field, they become easy to tamper with, and they are subject to attacks that cryptologists never thought of. Indeed, classic cryptography assumes a “black-box” principle: it assumes that crypto-devices are free from tampering. Clearly, when an attacker can desolder components or probe microcontroller pins, the black-box principle breaks down.

Embedded Security Attacks

Embedded security attacks come in all forms and types. Here I’ll detail a few examples of recent, successful cases. In each of them, the attackers used a different approach. Refer to the documents listed in the Resources section at the end of this essay for pointers to in-depth discussions.

Let’s begin with a classic case of cryptanalysis. Keeloq is a proprietary encryption algorithm used in the remote keyless entry systems of many car manufacturers, including Chrysler, General Motors, and Toyota, to name a few. It has a 64-bit key, which means that randomly trying keys will lead to a key search space of 2^64 possibilities. That is at the edge of what is practical for an attacker. Even when trying 10 million keys per second, you’d still need thousands of years to try all the keys of a 64-bit cipher. However, in 2008, researchers in Leuven, Belgium, found a way to reduce the search space to 44 bits. Essentially, they found a mathematical weakness in the algorithm and a way to exploit it. A 44-bit search space is much smaller. At 10 million keys per second, it only takes about 20 days to cover the search space—a lot more practical. Clearly, deciding the key length of a secure embedded system is a critical design decision! Too short, and any progress in cryptanalysis may compromise your system. Too long, and the design may be too slow and too big for embedded implementation.
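
If you want to sanity-check the arithmetic, the short C program below (my own sketch, not part of the original article) computes the exhaustive-search time for a 64-bit and a 44-bit key at the 10-million-keys-per-second rate assumed above.

```c
/* Brute-force timing estimate for the key lengths discussed in the text.
 * The search rate of 10 million keys per second is the figure assumed
 * above; it is an illustration, not a measured attack speed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double rate = 10e6;                /* keys tried per second */
    const int key_bits[] = { 64, 44 };

    for (size_t i = 0; i < sizeof key_bits / sizeof key_bits[0]; i++) {
        double keys    = pow(2.0, key_bits[i]);   /* size of the key space */
        double seconds = keys / rate;
        printf("%d-bit key: %.3g keys -> %.3g days (%.3g years)\n",
               key_bits[i], keys, seconds / 86400.0,
               seconds / (86400.0 * 365.0));
    }
    return 0;
}
```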

Attackers go further, as well, and tamper with the security protocol. In 2010, researchers from Cambridge, UK, demonstrated a hack on “Chip and PIN,” an embedded system for electronic payments. Chip and PIN is similar to a debit card system, but it is based on a chip-card (a credit card with a built-in microprocessor). To make a purchase, the user inserts the chip-card in a merchant terminal and enters a PIN code. A correct PIN code authorizes the purchase. The researchers found a flaw in the communication protocol between the merchant terminal and the chip-card. The terminal will authorize purchases if two conditions are met: it has identified the chip-card, and it has received a “PIN-is-correct” message from that card. The researchers intercepted the messages between the terminal and the chip-card. They were then able to generate a “PIN-is-correct” message without an actual PIN verification taking place. The terminal—having identified the chip-card and received a “PIN-is-correct” message—will now authorize purchases to the chip-card issuer (a bank). This type of attack, called a man-in-the-middle attack, was carried out with a hacked chip-card, an FPGA board, and a laptop. Equally important, it was demonstrated on a deployed, commercial system. In the Resources section of this article I list a nice demonstration video that appeared on the BBC’s Newsnight program.
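
The decision logic at the heart of this flaw can be sketched in a few lines of C. This is my own highly simplified illustration; the function names are invented and bear no relation to the real Chip and PIN (EMV) implementation.

```c
/* Simplified sketch of the flawed terminal logic described above.
 * card_identified() and receive_card_message() are invented placeholders. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool card_identified(void)
{
    return true;    /* the genuine card still identifies itself normally */
}

static const char *receive_card_message(void)
{
    /* In the attack, a man-in-the-middle device wedged between the card
     * and the terminal injects this message even though no PIN was ever
     * verified by the card. */
    return "PIN-is-correct";
}

int main(void)
{
    /* The terminal authorizes when both conditions hold, but it never
     * checks that the "PIN-is-correct" message is authentic. */
    if (card_identified() &&
        strcmp(receive_card_message(), "PIN-is-correct") == 0) {
        printf("Purchase authorized\n");
    } else {
        printf("Purchase declined\n");
    }
    return 0;
}
```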

One step beyond the man-in-the-middle attack, the attacker will actively analyze the implementation, typically starting with the cryptographic components of the design. A recent and important threat in this category is side-channel analysis (SCA). In SCA, an attacker observes the characteristics of a cryptographic implementation: its execution time, its power dissipation, and its electromagnetic patterns. By sampling these characteristics at high speed, the attacker is able to observe data-dependent variations. These variations are called side-channel leakage. SCA is the systematic analysis of side-channel leakage. Given sufficient measurements—say, a few hundred to a few thousand—SCA is able to extract cryptographic keys from a device. SCA is practical and efficient. For example, in the past two years, SCA has been used successfully to break FPGA bitstream encryption and Atmel CryptoMemory. Links to detailed information are in the Resources section of this essay.
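
To see how an implementation can leak through its timing, consider the toy comparison routines below. This is my own generic example, not code from the attacks cited here: the early-exit version runs longer the more leading bytes of a guess are correct, which is exactly the kind of data-dependent variation SCA exploits, while the constant-time version removes that particular leak.

```c
/* Toy illustration of a timing side channel and its fix. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Leaky: returns as soon as a byte mismatches, so execution time reveals
 * how many leading bytes of the guess were correct. */
static int compare_leaky(const uint8_t *secret, const uint8_t *guess, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (secret[i] != guess[i])
            return 0;
    }
    return 1;
}

/* Constant-time: always touches every byte and accumulates differences. */
static int compare_const_time(const uint8_t *secret, const uint8_t *guess, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(secret[i] ^ guess[i]);
    return diff == 0;
}

int main(void)
{
    const uint8_t secret[4] = { 0x12, 0x34, 0x56, 0x78 };
    const uint8_t guess[4]  = { 0x12, 0x34, 0x00, 0x00 };   /* half right */

    printf("leaky: %d, constant-time: %d\n",
           compare_leaky(secret, guess, sizeof secret),
           compare_const_time(secret, guess, sizeof secret));
    return 0;
}
```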

If there’s one thing obvious from these examples, it is that perfect embedded security cannot exist. Attackers have a wide variety of techniques at their disposal, ranging from analysis to reverse engineering. When attackers get their hands on your embedded system, it is only a matter of time and sufficient eyeballs before someone finds a flaw and exploits it.

What Can You Do?

The examples above are just the tip of the iceberg, and may leave the impression of a cumbersome situation. As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.

First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.

Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in an embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.
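
As a rough illustration of what such a boundary means in code, here is a sketch of my own. The "bus" is simulated with a RAM buffer, and toy_tag() is a deliberately trivial stand-in for a real MAC such as AES-CMAC; both are assumptions for illustration only, not a real driver or crypto API.

```c
/* Trust boundary at the chip package: the key stays on-chip, and anything
 * that crosses the pins carries an authentication tag. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define TAG_LEN 4

static const uint8_t device_key[4] = { 0xDE, 0xAD, 0xBE, 0xEF };  /* on-chip secret */

/* Toy keyed checksum. NOT cryptographically secure, illustration only. */
static void toy_tag(const uint8_t *msg, size_t len, uint8_t tag[TAG_LEN])
{
    memset(tag, 0, TAG_LEN);
    for (size_t i = 0; i < len; i++)
        tag[i % TAG_LEN] ^= msg[i] ^ device_key[i % sizeof device_key];
}

/* Simulated off-chip channel: within this threat model an attacker may read
 * or modify this buffer, but not the key above. */
static uint8_t wire[64];

int main(void)
{
    const uint8_t payload[] = "unlock";
    uint8_t tag[TAG_LEN];

    /* Sender: tag the payload before it crosses the trust boundary. */
    toy_tag(payload, sizeof payload, tag);
    memcpy(wire, payload, sizeof payload);
    memcpy(wire + sizeof payload, tag, TAG_LEN);

    /* wire[0] ^= 0x01;   <- uncomment to simulate off-chip tampering */

    /* Receiver: recompute and verify the tag before acting on the data. */
    uint8_t expected[TAG_LEN];
    toy_tag(wire, sizeof payload, expected);
    bool ok = memcmp(expected, wire + sizeof payload, TAG_LEN) == 0;
    printf("Off-chip message %s\n", ok ? "accepted" : "rejected (possible tampering)");
    return 0;
}
```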

Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.

The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adapt standards and proven techniques. Learn about the experiences of other designers. The following examples are good starting points for learning about current concerns and issues in embedded security.

Three Books for Your Desk

Security is a complex field with many different dimensions. I find it very helpful to have several reference works close by to help me navigate the steps of building any type of security service. The following three books describe the basics of information security and systems security. While not targeted specifically at the embedded context, the concepts they explain apply there just as well.

Christof Paar and Jan Pelzl’s Understanding Cryptography: A Textbook for Students and Practitioners gives an overview of basic cryptographic algorithms. The authors explain the different types of encryption algorithms (stream and block ciphers, as well as various standards). They describe the use of public-key cryptography, covering RSA and elliptic curve cryptography (ECC), and their use for digital signatures. And they discuss hash algorithms and message authentication codes. The book does not cover cryptographic protocols, apart from key agreement. A nice thing about the book is that you can find online lectures for each chapter.

Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno’s Cryptography Engineering: Design Principles and Practical Applications covers basic cryptography as well, but with a slightly different emphasis than the first. It takes a more practical approach and frequently refers to existing practice in cryptography. It has sections on (software-oriented) implementation issues and practical implementation of key agreement protocols. This book would give immediate value to the practicing engineer—although it does not connect to the embedded context as well as the previous book. For example, it does not mention ECC.

Ross Anderson’s Security Engineering is a bible on secure systems design. It’s very broad. It builds up from basic cryptography over protocols up to secure systems design for telecoms, networking, copyright control, and more. It’s an excellent book on the systems perspective of secure design. The first edition of this book can be downloaded for free from the author’s website, though it’s well worth the investment to have the latest edition on your desk.

Four Sites

Many websites cover product teardowns and the specific security features of these implementations. Flylogic’s Analytics Blog (www.flylogic.net/blog/) describes the analysis of various chipcards. It contains chip micrographs and discusses various techniques to reverse-engineer chip security features. The website is an excellent showcase of what’s possible for a knowledgeable individual; it also clearly illustrates the point that perfect security cannot exist.

If you would like to venture into the analysis of secure embedded designs yourself, then the Embedded Analysis wiki by Nathan Fain and Vadik is a must-read (http://events.ccc.de/congress/2010/wiki/Embedded_Analysis). They discuss various reverse-engineering tools to help you monitor a serial line, extract the image of a flash memory, and analyze the JTAG interface of an unknown component. They also cover reverse-engineering practice in an online talk, which I’ll mention below.

Earlier I noted that cost is an important element in the security design. If you’re using cryptography, then this will cost you compute cycles, digital gates, and memory footprint. There are a few websites that give an excellent overview of these implementation costs for various algorithms.

The EBACS website contains a benchmark for cryptographic software, covering hash functions, various block and stream ciphers, and public-key implementations (http://bench.cr.yp.to/supercop.html). Originally designed for benchmarking on personal computers, it now also includes benchmarks for ARM-based embedded platforms. You can also download the benchmarks for a wealth of reference implementations of cryptographic algorithms. The Athena website at GMU presents a similar benchmark, but it’s aimed at cryptographic hardware (http://cryptography.gmu.edu/athena/). It currently concentrates on hash algorithms (in part due to its development for the SHA-3 competition). You can apply the toolkit to other types of cryptographic benchmarking as well. The website provides a host of hardware reference implementations for hash algorithms. It also distributes the benchmarking software, which is fully automated on top of existing FPGA design flows from Altera and Xilinx.

Three Newsletters

Security is a fast-evolving field. You can remain up to date on the latest developments by subscribing to a few newsletters. Here are three newsletters that have never failed to make a few interesting points to me. They do not exclusively focus on secure embedded implementations, but frequently mention the use of embedded technology in the context of a larger security issue.

The ACM RISKS list (http://catless.ncl.ac.uk/Risks) enumerates cases of typical security failures, many of them related to embedded systems. Some of the stories point out what can happen if we trust our embedded computers too blindly, such as GPS systems that lead people astray and leave them stranded. Other stories discuss the security implications of embedded computers, such as the recent news that 24% of medical device recalls are related to software failures.

Bruce Schneier’s “Schneier on Security” blog and Crypto-Gram newsletter (www.schneier.com/crypto-gram.html) focus on recent ongoing security issues. He covers everything from the issues with using airport scanners to the latest hack on BMW’s remote keyless entry system.

The Technicolor Security Newsletter (www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20) discusses contemporary security issues related to computer graphics, content protection, rights management, and more. The newsletter gives succinct, clear descriptions of content protection (and attacks on it) for mobile platforms, game machines, set-top boxes, and more.

Three Web Presentations

You can also learn from watching presentations by security professionals. Here are three interesting ones that specifically address security in embedded devices.

In a talk titled “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Christof Paar covers the use of side-channel analysis (SCA) to break the security of various embedded devices, including wireless keys, encrypted FPGA bitstreams, and RFID smartcards. The talk is an excellent illustration of what can be achieved with SCA.

Nathan Fain gave a talk called “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique” at a recent conference. The author discusses various tools for analyzing embedded systems. It’s the live version of the wiki page listed earlier. Go to his website (www.deadhacker.com) to download the tools he discusses.

Finally, in a talk titled “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” Stephen Checkoway discusses the embedded security analysis of cars. The author demonstrates how an attacker is able to access a car’s internal network, a concept called “the attack surface.” He points out several known issues, such as the risks posed by the on-board diagnostics (OBD) port. But he also demonstrates a wide variety of additional access points, from CD to long-range wireless links. Each of these access points comes with specific risks, such as remote unlocking of doors and unauthorized tracking. It’s a fascinating discussion that demonstrates how the ubiquitous microcontroller has brought safety as well as risk to our cars.

Looking Forward

Security in embedded systems design requires a designer to think about ways in which bad things are prevented from happening. We have seen a great deal of progress in our understanding of the threats to embedded systems. However, it’s clear that there is no silver bullet. The threats are extremely diverse, and eventually it’s up to the designer to decide what to protect. In this article, I provided a collection of pointers that should help you learn more about these threats.—By Patrick Schaumont (Patrick is an associate professor at Virginia Tech, where he works with students on research projects relating to embedded security. Patrick has covered a variety of embedded security-related topics for Circuit Cellar: one-time passwords, electronic signatures for firmware updates, and hardware-accelerated encryption.)

RESOURCES

R. Anderson, Security Engineering, Second Edition, Wiley Publishing, Indianapolis, IN, 2008.

J. Balasch, B. Gierlichs, R. Verdult, L. Batina, and I. Verbauwhede, “Power Analysis of Atmel CryptoMemory — Recovering Keys from Secure EEPROMs,” in O. Dunkelman (ed.), Topics in Cryptology — CT-RSA 2012, The Cryptographer’s Track at the RSA Conference, Lecture Notes in Computer Science 7178, Springer-Verlag, 2012.

BBC Newsnight, “Chip and PIN is Broken,” www.youtube.com/watch?v=1pMuV2o4Lrw.

D. Bernstein and T. Lange, “EBACS: ECRYPT Benchmarking of Cryptographic Systems,” http://bench.cr.yp.to/supercop.html.

E. Biham, O. Dunkelman, S. Indesteege, N. Keller, and B. Preneel, “How to Steal Cars—A Practical Attack on Keeloq,” COSIC, www.cosic.esat.kuleuven.be/keeloq/.

S. Checkoway, “Comprehensive Experimental Analyses of Automotive Attack Surfaces,” www.youtube.com/watch?v=bHfOziIwXic.

E. Diels, “Technicolor Security Newsletter,” www.technicolor.com/en/hi/technology/research-publications/security-newsletters/security-newsletter-20.

N. Fain and Vadik, “Embedded Analysis,” http://events.ccc.de/congress/2010/wiki/Embedded_Analysis.

———, “JTAG/Serial/Flash/PCB Embedded Reverse Engineering Tools and Technique,” www.youtube.com/watch?v=8Unisnu-cNo.

N. Ferguson, B. Schneier, and T. Kohno, Cryptography Engineering, Wiley Publishing, Indianapolis, IN, 2010.

Flylogic’s Analytics Blog, www.flylogic.net/blog/.

K. Gaj and J. Kaps, “ATHENa: Automated Tool for Hardware Evaluation,” Cryptographic Engineering Research Group, George Mason University, Fairfax, VA, http://cryptography.gmu.edu/athena/.

A. Moradi, A. Barenghi, T. Kasper, and C. Paar, “On the Vulnerability of FPGA Bitstream Encryption Against Power Analysis Attacks,” IACR ePrint Archive, 2011, http://eprint.iacr.org/2011/390.

S. Murdoch, S. Drimer, R. Anderson, and M. Bond, “Chip and PIN is Broken,” 2010 IEEE Symposium on Security and Privacy, www.cl.cam.ac.uk/~sjm217/papers/oakland10chipbroken.pdf.

P. Neumann (moderator), “The Risks Digest: Forum on Risks to the Public in Computers and Related Systems,” ACM Committee on Computers and Public Policy, http://catless.ncl.ac.uk/Risks.

C. Paar, “Lessons Learned from Four Years of Implementation Attacks Against Real-World Targets,” Seminar at the Isaac Newton Institute for Mathematical Sciences, 2012.

C. Paar and J. Pelzl, Understanding Cryptography, Springer-Verlag, 2010, www.crypto-textbook.com.

B. Schneier, “Crypto-gram Newsletter,” www.schneier.com/crypto-gram.html.

This article first appeared in CC25.

How to Improve Software Development Predictability

The analytical methods of failure modes effects and criticality analysis (FMECA) and failure modes effects analysis (FMEA) have been around since the 1940s. In recent years, much effort has been spent on bringing hardware-related analyses such as FMECA into the realm of software engineering. In “Software FMEA/FMECA,” George Novacek takes a close look at software FMECA (SWFMECA) and its potential for making software development more predictable.

The roots of failure modes effects and criticality analysis (FMECA) and failure modes effects analysis (FMEA) date back to World War II. FMEA is a subset of FMECA in which the criticality assessment has been omitted. Therefore, for simplicity, I’ll be using the terms FMECA and SWFMECA only in this article. FMECA was developed for identification of potential hardware failures and their mitigation to ensure mission success. During the 1950s, FMECA became indispensable for analyses of equipment in critical applications, such as those occurring in military, aerospace, nuclear, medical, automotive, and other industries.

FMECA is a structured, bottom-up approach considering a failure of each and every component, its impact on the system and how to prevent or mitigate such a failure. FMECA is often combined with fault tree analysis (FTA) or event tree analyses (ETA). The FTA differs from the ETA only in that the former is focused on failures as the top event, the latter on some specific events. Those analyses start with an event and then drill down through the system to their root cause.

In recent years, much effort has been spent on bringing hardware-related analyses, such as reliability prediction, FTA, and FMECA, into the realm of software engineering. Software failure modes and effects analysis (SWFMEA) and software failure modes, effects, and criticality analysis (SWFMECA) are intended to be software analyses analogous to the hardware ones. In this article I’ll cover SWFMECA as it specifically relates to embedded controllers.

Unlike the classic hardware FMECA based on statistically determined failure rates of hardware components, software analyses assume that the software design is never perfect because it contains faults introduced unintentionally by software developers. It is further assumed that in any complicated software there will always be latent faults, regardless of development techniques, languages, and quality procedures used. This is likely true, but can it be quantified?

SOFTWARE ANALYSIS

SWFMECA should consider the likelihood of latent faults in a product and/or system, which may become patent during operational use and cause the product or the system to fail. The goal is to assess the severity of the potential faults, their likelihood of occurrence, and the likelihood of their escaping to the customer. SWFMECA should assess the probability of mistakes being made during the development process, including integration and verification and validation (V&V), and the severity of the failures resulting from these faults. SWFMECA is also intended to determine the faults’ criticality by combining fault likelihood with the consequent failure severity. This should help to determine the risk arising from software in a system. SWFMECA should examine the development process and the product behavior in two separate analyses.

First, Development SWFMECA should address the development, testing, and V&V process. This requires understanding of the software development process, the V&V techniques, and the quality control during that process. It should establish what types of faults may occur when using a particular design technique or programming language, and what fault coverage the verification and validation techniques provide. Second, Product SWFMECA should analyze the design and its implementation and establish the probability of the failure modes. It must also be based on a thorough understanding of the processes as well as the product and its use.

In my opinion, SWFMECA is a bit of a misnomer, with little resemblance to the hardware FMECA. Speculating about what faults might be hidden in every line of code or in every activity during software development is hardly realistic. However, it does resemble functional-level FMECA. There, system-level effects of failures of functions can be established and addressed accordingly. Establishing the probability of those failures is another matter.

The data needed for such considerations are mostly subjective, their sources esoteric and their reliability debatable. The data are developed statistically, based on history, experience and long term fault data collection. Some data may be available from polling numerous industries, but how applicable they are to a specific developer is difficult to determine. Plausible data may perhaps be developed by long established software developers producing a specific type of software (e.g., Windows applications), but development of embedded controllers with their high mix of hardware/software architectures and relatively low-volume production doesn’t seem to fit the mold.

Engineers understand that hardware has limited life and customers have no problem accepting mean time between failures (MTBF) as a reality. But software does not fail due to age or fatigue. It’s all in the workmanship. I have never seen an embedded software specification requiring software to have some minimum probability of faults. Zero seems always implied.

SCORING & ANALYSIS

In the course of SWFMECA preparation, scores for potential faults should be determined: severity, likelihood of occurrence, and potential for escaping to the finished product. Each score is between 1 and 10; the three scores are multiplied to obtain the risk priority number (RPN). An RPN larger than 200 should warrant prevention and mitigation planning. Yet the scores are very much subjective—that is, they depend on the software complexity, the people, and other factors that are impossible to predict accurately. For embedded controllers, the determination of the RPN appears to be just an analysis for the sake of analysis.
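
The bookkeeping itself is trivial; the sketch below (my own illustration, with invented fault descriptions and scores) computes the RPN for a few hypothetical faults and flags anything above the 200 threshold mentioned above.

```c
/* RPN = severity x occurrence x escape, each scored from 1 to 10. */
#include <stdio.h>

struct fault {
    const char *description;
    int severity;     /* 1 (negligible) .. 10 (catastrophic)        */
    int occurrence;   /* 1 (unlikely)   .. 10 (almost certain)      */
    int escape;       /* 1 (caught)     .. 10 (escapes to customer) */
};

int main(void)
{
    /* Example faults and scores are invented for illustration only. */
    const struct fault faults[] = {
        { "Stack overflow in comms task", 9, 3, 6 },
        { "Off-by-one in log rotation",   3, 5, 4 },
        { "Unhandled sensor timeout",     7, 4, 8 },
    };

    for (size_t i = 0; i < sizeof faults / sizeof faults[0]; i++) {
        int rpn = faults[i].severity * faults[i].occurrence * faults[i].escape;
        printf("%-30s RPN = %3d%s\n", faults[i].description, rpn,
               rpn > 200 ? "  -> plan prevention/mitigation" : "");
    }
    return 0;
}
```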

Statistical analyses are used every day from science to business management. Their usefulness depends on the number of samples and even with an abundance of samples there are no guarantees. SWFMECA can be instrumental for fine-tuning the software development process. In embedded controllers, however, software related failures are addressed by FMECA. SWFMECA alone cannot justify the release of a product.

EMBEDDED SOFTWARE

In embedded controllers, causes of software failures are often hardware related and exact outcomes are difficult to predict. Software faults need to be addressed by testing and code analyses and, most important, mitigated by the architecture. Redundancy, hardware monitors, and similar measures are time-proven methods.
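
What "mitigated by the architecture" can look like in code is sketched below. This is my own illustration with a made-up control law, not an example from the article: two independently written routines compute the same output, and any disagreement or implausible value drives the system to a safe state.

```c
/* Architectural mitigation: redundant computation plus a plausibility check. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Two independently written implementations of the same (made-up) control law. */
static int duty_cycle_primary(int temperature)   { return 100 - temperature; }
static int duty_cycle_secondary(int temperature) { return -(temperature - 100); }

static bool plausible(int duty) { return duty >= 0 && duty <= 100; }

int main(void)
{
    int temperature = 35;                 /* e.g., a sensor reading in degrees C */
    int a = duty_cycle_primary(temperature);
    int b = duty_cycle_secondary(temperature);

    if (a == b && plausible(a)) {
        printf("Commanding duty cycle %d%%\n", a);
        return EXIT_SUCCESS;
    }

    /* Disagreement or implausible output: fall back to a safe state instead
     * of trusting either result. */
    printf("Fault detected, output disabled\n");
    return EXIT_FAILURE;
}
```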

Software begins as an idea expressed in requirements. Design of the system architecture, including hardware/software partitioning, is next, followed by software requirements, usually presented as flow charts, state diagrams, pseudocode, and so forth. High and low levels of design follow, until the code is compiled. Integration and testing come next. This is shown in the ubiquitous chart in Figure 1.

Figure 1: Software development “V” model

During an embedded controller design, I would not consider performing the RPN calculation, just as I would not try to calculate software reliability. I consider those purely statistical calculations to be of little practical use. However, SWFMECA activity with software ETA and FTA based on functions should be performed as a part of the system FMECA. The software review can be to a large degree automated by tools, such as Software Call Tree and many others. Automation notwithstanding, one should always check the results for plausibility.

TOOLS

A Software Call Tree tells us how different modules interface and how a fault or an event would propagate through the system. Similarly, an Object Relational Diagram shows how objects’ internal states affect each other. And then there are the Control Flow Diagram, Entity Relationship Diagram, Data Flow Diagram, McCabe Logical Path, State Transition Diagram, and others. Those tools are not inexpensive, but they do generate data that make it possible to produce high-quality software. However, it is important to plan all the tests and analyses ahead of time. It is easy to get mired in so many evaluations that the project’s cost and schedule suffer with little benefit to software quality.

The assumed probability of a software fault becomes a moot point. We should never plunge ahead releasing code just because we’re satisfied that our statistical development model renders what we think is an acceptable probability of failure. Instead, we must assume that every function may fail for whatever reason and take steps to ensure those failures are mitigated by the system architecture.

System architecture and software analyses can only be started upon determination that the requirements for the system are sufficiently robust. It is not unusual for a customer to insist on beginning development before signing the specification, which is often full of TBDs (i.e., “to be defined”). This may leave so many open issues that the design cannot and should not be started in earnest. Besides, development at such a stage is a violation of certification rules and will likely result in exceeding the budget and the schedule. Unfortunately, customers can’t or don’t always want to understand this, and their pressure often prevails.

The ongoing desire to introduce software into the hardware paradigm is understandable. It could bring software development into a fully predictable scientific realm. So far it has been resisting those attempts, remaining to a large degree an art. Whether it can ever become a fully deterministic process, in my view, is doubtful. After all, every creative process is an art. But great strides have been made in development of tools, especially those for analyses, helping to make the process increasingly more predictable.

This article appears in Circuit Cellar 297, April 2015.

Seven Engineers on the Future of Electrical Engineering

The Circuit Cellar staff thought it would be interesting to kick off 2015 by asking several long-time contributors about the future of electrical engineering and embedded systems. Here we present the responses we received to the following questions: What are your thoughts on the future of electrical engineering? What excites you? Is there something in your particular field of interest that you think will be a “game changer”?

STEVE CIARCIA: Frankly speaking, if I was smart enough to accurately predict the future, I wouldn’t be doing all this again. Seriously, “What excites me in the future?” shouldn’t be the question I’m answering here. Instead, it should be how much all this embedded stuff we’re seeing and talking about today looks like a classic case of déjà vu to me. Circuit Cellar started 40 years ago in BYTE to promote my enthusiasm for professional-level DIY computer applications (albeit mostly embedded). The names have changed to Maker this and that and Raspberry Pi whatever, but what once was, still is. Solder fumes aside, Circuit Cellar has always been about nurturing the talented engineer who designs the game changer. (Steve is an electrical engineer who founded Circuit Cellar in 1988.)

DAVID TWEED: Embedded technology is becoming more pervasive, appearing in more and more places in our lives. Embedded processors have become as powerful as desktop machines were just a few short years ago, and their ability to connect to the world at large through high-bandwidth wireless communications has grown to match this. This is both exciting and scary, because it becomes a powerful enabler for both positive and negative changes in how we live our lives. Take the ubiquitous “smart phone” as an example. It can process two-way audio, video, GPS data, and an Internet connection simultaneously in real time. This enables powerful applications such as GPS-based route finding that can give you verbal and pictorial directions to get you where you want to go. But, as anyone who watches the popular crime drama N.C.I.S. knows, that same technology can be used to track your phone’s location, along with everything it can “see” and “hear,” including the phone calls you have made. While that kind of surveillance can be used in positive ways, such as to aid you in an emergency, it can also be used to invade your privacy. Can you really be sure that everyone in law enforcement and other areas of government has only your best interests in mind when accessing your data? The increased power of embedded systems means that autonomous mechanisms gain capabilities they didn’t have before. Fully autonomous vehicles—cars, trucks, trains, and aircraft—will be able to carry people and goods long distances over arbitrary routes. Factory automation will become more generic, because complex general-purpose mechanisms will be as easy to use as purpose-built mechanisms that only do one thing; the software will manage all of the low-level details of “training” the system. Machine vision will be an important part of this, giving the system the feedback it needs to interact with objects and people. “With great power comes great responsibility.” This has never been more true. I’m excited by the possibilities that increasingly powerful embedded technology will open up for us, but let’s make sure that it is used responsibly! (David is a professional electrical engineer and long-time Circuit Cellar author and technical editor.)

ROBERT LACOSTE: I think the most significant change in embedded systems in recent years is the nearly mandatory inclusion of Internet connectivity. It’s called the Internet of Things (IoT). Just enter those three words in Google and the 752 million results you get will show it’s quite a hot topic. When a customer meets with us to discuss a potential new product (whatever it is), the question is no longer: “Should it be connected?” The question is: “How should it be connected?” Having said that, the key difficulty is the long list of wireless protocols trying to become the ubiquitous solution for IoT: Wi-Fi, Bluetooth, Bluetooth Low Energy, ZigBee, Z-Wave, 6LoWPAN, and a hundred others. Bluetooth seems the clear winner for smartphone-based products, but what about the other applications like home automation, logistics, smart metering, or dog tracking? Which protocol(s) will be the winner(s)? Which one will be natively supported on our Internet access gateways or even rolled out worldwide? Will it be Thread, sponsored by Google itself? Or will it be another derivative of Bluetooth, due to its huge predominance? (The overall sales of Bluetooth-capable chips already exceed four times the human population on earth.) Or could it be one of the machine-to-machine variants of 3G/4G cellular standards being studied? Or perhaps it will be one of the solutions proposed by one of the many startups working on the technology? Or maybe it will be a completely new protocol that we’ll invent? I don’t know the answer, but the result will be the next game changer! (Robert is an electrical engineer and Circuit Cellar columnist. In 2003, he founded ALCIOM, an electrical engineering firm near Paris, France.)

CHRIS COULSTON: While tech companies will continue to evolve existing technologies to offer more features, with lower power and at a lower cost, I think that the most exciting and revolutionary technology is to be found in the Internet of Everything (IOE) concept. Hardware supporting the IOE offers up the tantalizing potential to free our designs from physical interconnects, giving our designs worldwide access, allowing us to interact with our designs in real time, and allowing our designs to access the almost unlimited diversity of services available on the Internet. I am excited to explore a design space that enables me to connect something trivial like my key-ring to the Internet. The Raspberry Pi was the first breakthrough, with companies like Intel redefining the cutting edge with their Edison module. There are several limiters to the IOE concept, including power consumption and standardization. As these issues are addressed, the potential of the IOE concept will be limited only by the creativity of engineers and makers everywhere. (Chris is a professor of electrical and computer engineering at Penn State, Behrend. He’s also a technical reviewer for Circuit Cellar.)

GEORGE NOVACEK: Embedded controllers are essential components of automatic systems. Without automation, many products could not even be manufactured. Machines, such as aircraft, medical equipment, power generators, etc., could not be operated without the assistance of smart control systems. Until some not-yet-invented technology makes electronics obsolete, the future of embedded controllers will remain bright. In the coming years, more and more engineers will be focusing on system design, while only the brightest ones will be developing microelectronic components for those systems—more sophisticated, more integrated, faster, smaller, hardened to the environment, consuming less power. There continues to be a trend towards universal embedded controllers. These, equipped with the appropriate sensors and actuators and loaded with particular application software, could be used for fly-by-wire, for control of industrial machinery, or just about everything else. Design engineers need to be cautious not to put powerful, yet inexpensive, controllers into new products just because it can be done. There is already a proliferation of simple consumer products equipped, without any sensible need, with microcontrollers. This often leads to lower reliability, shorter life and, because these products are usually not repairable, to greater cost of ownership and waste. (George is a professional engineer and Circuit Cellar columnist who served as president of a multinational manufacturer of embedded control systems for aerospace applications.)

ED NISLEY: The rise of the Maker Movement changes everything in the embedded systems field: Makers take control over the devices in their lives, generally by repurposing embedded hardware in ways its designers never intended. The trend becomes clear when dirt-cheap USB TV tuners become software defined radios. Embedded systems must eventually sprout exposed (and documented!) interfaces, debugging hooks, and protocols, because collaboration with Makers who want to turn the box inside-out and build something better can enrich our world beyond measure. Excluding those people won’t work over the long term: just as DRM-encumbered music became unacceptable, welded-shut embedded systems will become historic curiosities. You can make it so! (Ed is an electrical engineer and long-time Circuit Cellar columnist and contributor.)

KEN DAVIDSON: Twenty-five years ago, while developing the Circuit Cellar Home Control System (HCS) II, our group created a series of interface boards that could be placed around the house and communicate using RS-485. Tons of discrete wire running throughout buildings was the norm at the time, and the idea of running just a single twisted pair between units was novel and exciting. This all predated inexpensive Ethernet and public Internet. Today, such distributed intelligence has only gotten better, smaller, and cheaper. With the Internet of Things (IoT) everybody is talking about, it’s not unusual to find a wireless interface and embedded intelligence right down to the level of a light bulb. There was an episode of The Big Bang Theory where the guys set up the apartment lights so they could be controlled from anywhere in the world. Everyone got a laugh when the “geeks” were excited when someone from Japan was blinking their lights. But the idea of such embedded intelligence and remote access continuing to evolve and improve truly is exciting. I look forward to the day in the not-too-distant future when such control is commonplace to most people and not just a geeky novelty. (Ken is an embedded software engineer who has been contributing to Circuit Cellar for years as an author and editor.)

These responses appear in Circuit Cellar 294 (January 2015).

System Engineer’s Space for Designing & Testing

Many complicated motion control and power electronics systems comprise thousands of parts and dozens of embedded systems. Thus, it makes sense that a systems engineer like New Jersey-based John Roselle would have more than one workspace for simultaneously planning, designing, and testing multiple systems.

John Roselle’s space for designing circuits and electronic systems (Source: John Roselle)

Roselle recently submitted images of his space and provided some interesting feedback when we asked him about it.

My main work space for testing and debugging of circuits consists of nothing more than a kitchen table with two shelves attached to the wall. Shown in the picture (see above) is a 265-V digital motor drive for a fin control system for an underwater application. In a second room I have a computer design center.

I design and test mostly motor drives for motion control products for various applications, such as underwater vehicles, missile hatch door motor drives, and test equipment for testing the products I design.

A second room serves as a “computer design center” (Source: John Roselle)

John’s third workspace is used mainly for testing and assembling. At times there might be two or three different projects going on at once, he added.

The third space is used to test and assemble systems (Source: John Roselle)

Do you want to share images of your workspace, hackspace, or “circuit cellar”? Send us your images and info about your space.

Professor’s Convertible Electronics Workspace

In addition to serving as a contributor and technical reviewer for Circuit Cellar, Chris Coulston heads the Computer Science and Software Engineering department at Penn State Erie, The Behrend College. He has a broad range of technical interests, including embedded systems, computer graphics algorithms, and sensor design.

Since 2005, he has submitted five articles for publication in Circuit Cellar, on projects and topics ranging from DIY motion-controlled gaming to a design for a “smart” jewelry pendant utilizing RGB LEDs.

We asked him to share photos and a description of the workspace in his Erie, PA, home. His office desk (see Photo 1) has something of an alter ego. When need and invention arise, he reconfigures it into an “embedded workstation.”

Photo 1: Coulston’s workspace configured as an office desk

When working on my projects, my embedded workstation contains only the essential equipment that I need to complete my project (see Photo 2). What it lacks in quantity I’ve tried to make up for in quality instrumentation: a Tektronix TDS 3012B oscilloscope, a Fluke 87-V digital multimeter, and a Weller WS40 soldering iron. While my workstation lacks a function generator and power supply, most of my projects are digital and have modest power requirements.

Photo 2: Coulston can reconfigure his desk into the embedded workstation pictured here.

Coulston says his workspace must function as a “typical office desk” 80 percent of the time and an electronics station 20 percent of the time.

It must do this while maintaining some semblance of being presentable—my wife shares a desk in the same space. The foundation of my workstation is a recycled desk with a heavy plywood backing on which I attached shelving. Being a bit clumsy, I’ve tried to screw down anything that could be knocked over—speakers, lights, bulletin board, power strip, cable modem, and routers.

The head of a university department has different needs in a workspace than does an electronics designer. So how does Coulston make his single office desk suffice for both his professional and personal interests? It’s definitely not a messy solution.

My role as department chair and professor means that I spend a lot time grading, writing, and planning. For this work, there is no substitute for uncluttered square footage—getting all the equipment off the working surface. However, when it’s time to play with the circuits, I need to easily reconfigure this space.

I have found organization to be key to successfully realizing this goal. Common parts are organized in a parts case, parts for each project are put in their own bag, the active project is stored in the top drawer, and frequently used tools, jumper wires, and a DMM are stored in the next drawer. All other equipment is stored in a nearby closet.

I’ve looked at some of the professional-looking workspaces in Circuit Cellar and must admit that I am a bit jealous. However, when it comes to operating under the constraints of a busy professional life, I have found that my reconfigurable space is a practical compromise.

To learn more about Coulston and his technical interests, check out his Member Profile.

Chris Coulston

Q&A: Embedded Systems Training Expert

Professional engineer Jason Long worked as an embedded systems designer for more than a decade. In 2010 he founded Engenuics Technologies. Jason lives in Victoria, BC, where he continues growing his company alongside the MicroProcessor Group (MPG) embedded systems hardware teaching program he developed in 2000.

 

CIRCUIT CELLAR: In 2010 you founded your company Engenuics Technologies (www.engenuics.com) based on the success of the MicroProcessor Group (MPG) program. Give us a little background. How did the MPG begin?

JASON: MPG started way back in 2000 at the University of Calgary when I was doing my undergraduate studies. I figured out that embedded systems was exactly what I wanted to do, but struggled to find enough hands-on learning in the core curriculum programs to satisfy this new appetite. I was involved in the university’s Institute of Electrical and Electronics Engineers (IEEE) student branch, where someone handed me my first Microchip Technology PIC microcontroller and ran a few lunchtime tutorials about getting it up and running. I wanted more, and so did other people.

Jason Long

I was also very aware that I needed to drastically improve my personal confidence and my ability to speak in public if I was going to have any luck with a career outside of a cubicle, let alone survive an interview to get a job in the first place. The combination of these two things was the perfect excuse/opportunity to start up the MPG to ensure I kept learning by being accountable to teach people new stuff each week, but also to gain the experience of delivering those presentations.

I was blown away when there were almost 30 people at the first MPG meeting, but I was ready. Two things became very clear very quickly. The first was that, to be able to teach, you must achieve a whole new level of mastery about your subject, but it was also okay to say, “I don’t know” and find out for next week. The second was that I could, in fact, get my nerves under control as long as I was prepared and didn’t try to do too much. I’m still nervous every time I start a lecture, even 14 years later, but now I know how to use those nerves! The best part was that people really appreciated what I was doing and perhaps were a bit more tolerant since MPG is free. I found a love for teaching that I didn’t expect, nor did I get how rewarding the endeavor would be.

When I was wrapping up the ninth year of the program, I considered giving it one more year and then calling it quits. I took a moment to look back at what the program was when I started and where it had come to—it had indeed evolved a lot, and I figured I had put in about 2,000 h by this point. It seemed like a waste to throw in the towel. I also looked at the relationships that had come from the program, both personally and professionally, and realized that the majority of my career and who I had become professionally had really been defined by my work with MPG. But the program—even though it was still just in Calgary—was too big to keep as a side project. I had $10,000 in inventory to support the development boards, and although all monies stayed in the program, there were thousands of dollars exchanging hands. This was a business waiting to happen, though I had never thought of myself as an entrepreneur. I was just doing stuff I loved.

This ARM-based development board is made by Jason’s company, Engenuics Technologies.

Around the same time I discovered SparkFun Electronics, and more importantly, I discovered the story of how the company got started by Nathan Seidle. That story begins almost exactly the way MPG began, but clearly Nathan is a lot smarter than I am and has built an amazing company in the same time it took me to get to this point. I feel quite disappointed when I think about it that way, but thankfully I don’t think it’s too late to do what I should have done a long time ago. I hope to meet Nathan one day, but even if I don’t, I consider him a mentor and his story provides validation that the MPG platform and community may be able to grow and be sustainable.

I think MPG/Engenuics Technologies can find success similar to SparkFun’s. We can do that without ever having to compete against SparkFun because what we do is unique enough. There might be a bit of overlap, but I’m always going to try to complement what SparkFun does rather than compete against it. We simply become another resource to feed the voracious and infinite appetite for information from students, hobbyists, and engineers. Win-win is always the way to go.

I decided I should grow the program instead of ending it, so I started Engenuics Technologies, which would be built on the decade of MPG experience plus the decade of embedded design experience I had from the industry. It seemed like a pretty solid foundation on which to start a company! Surely I could promote all of the content and find students of the same mindset I was in when I started MPG? They could lead the program at different universities and develop those infinitely valuable communication and leadership skills that MPG fosters, except they’d have the advantage of not having to put in hundreds of hours to write all of the material. Even if groups of people weren’t playing with MPG, individuals could make use of the technical resource on their own and we could have a solid online community. I also wanted to keep students engaged beyond the single year of their engineer degrees in which MPG existed.

CIRCUIT CELLAR: What other products/services does Engenuics Technologies provide?

JASON: I describe Engenuics Technologies as a four-tier company as there are three significant aspects of the business in addition to MPG. The main purpose of the company is to fill a gap in the industry for specific training in embedded systems. There is very little formal training to be found for low-to-mid-level embedded hardware and firmware development and quality/value is often hit or miss. From teaching for 10 years while being an embedded designer for the same amount of time, I felt like I had the right skills to create great training. I had already created a LabVIEW course that I delivered internally for a company while I worked there, and people were blown away by the quality and content. I saw a huge need to develop embedded-specific training to help new graduates transition to the industry as well as junior engineers who were lacking in some fundamental engineering knowledge.

We have an embedded boot camp course that is about 20% hardware and 80% firmware focused, which I think is essential for new engineering graduates getting into embedded design. Though the course is based specifically on a Cortex-M3 development board, we ensure that we focus on how to learn a processor so the knowledge can be applied to any platform.

Engenuics Technologies has several courses now and we continue to offer those periodically though never as often as we would like, as we’ve become too busy with the other parts of the company. We finally got an office last August with an onsite training room, which makes the logistics much easier, and we’re ramping up the frequency of the programs we offer.

CIRCUIT CELLAR: You earned your BSEE from the University of Calgary in 2002. Can you describe any of the projects you’ve worked while you were there?

JASON: The professors at the U of C were a phenomenal bunch and it was a privilege to get to know them and work with them during my undergraduate studies. I remain in contact with many of them, and several are very good friends. Aside from blinking some LEDs on breadboards, the first complicated device I built was an attempt at the IEEE Micromouse competition. That proved to be a little much and my robot never did do anything beyond go forward, sense a wall, and then back up.

While studying at the University of Calgary, some of Jason’s first embedded designs included a programmable phase-locked loop project, a robot built for an IEEE Micromouse competition, an MPG dev board, and a binary clock.

I originally thought I would base MPG around building robots, but that proved impossible due to cost. Building a robot is still on my bucket list. I’ll likely get there once my two boys are old enough to want to build robots. I continue to fantasize about building an autonomous quadcopter that can deliver beer. I better get busy on that before it’s commonplace!

Our IEEE student branch had a Protel 99 SE license and somehow I learned how to design PCBs. The first board I designed was a binary clock that I still use. I then did a PIC programmer and later I built a combined development board and programmer for MPG.

I also designed the PCB for our fourth-year Capstone design project, which initially was a very boring implementation of a phase-locked loop, but became a lot more fun when I decided to make it programmable with a keypad and an LCD. I brought all these things to my BW Technologies job interview and proudly showed them off. For any students reading this, by the way, landing your first engineering job is probably 5% technical, 10% GPA, and 85% enthusiasm and demonstrated interest and achievement. It’s really boring to interview someone who has done nothing extracurricular.

CIRCUIT CELLAR: How long have you been designing embedded systems? When did you become interested?

JASON: My dad was a high school science teacher and my mom was a nurse, so I didn’t have a lot of technical influence growing up. I loved talking physics with my dad, and I’m one of the few engineers who can cook (thank you, mom).

Aside from really liking LEGO and dismantling anything electronic (without ever a hope of putting it back together but always wondering what all those funny looking components did), I barely demonstrated any interest in EE when I was young. But somehow I figured out in grade 12 that EE was probably what I should study at university.

I’m sure I still had visions of being a video game designer, but that nagging interest in learning what those funny components did steered me to EE instead of computer science. It wasn’t until my second year at university when someone gave me my first PIC microcontroller that I really knew that embedded was where I needed to be. That someone was a student named Sean Hum, a brilliant guy who is now an associate professor at the University of Toronto.

CIRCUIT CELLAR: Which new technologies excite you?

JASON: I particularly like the 2.4-GHz radio technologies that hold the potential to really make our environment interactive and intelligent. I think the world needs more intelligence to address the wasteful nature of what we have become, whether it is by actively doing something like turning the lights or heat off when we’re not around, or by simply making us more aware of our surroundings. I love ANT+ and am just getting into BLE—obviously, smartphone integration will be critical.

I think technology will drive change in education, and I hope to see (and perhaps be a driving force behind) a more cohesive existence between academia and industry. I hope MPG becomes a model for the industry of what can be achieved without a lot of financial resources: immense payback for employees, who become mentors, and for students, who can connect with the industry much earlier, get more from their degree programs, and graduate with substantially higher capabilities.

You can read the entire interview in Circuit Cellar 289 (August 2014).

Q&A: Embedded Systems Consultant

Elecia White is an embedded systems engineer, consultant, author, and innovator. She has worked on a variety of projects: DNA scanners, health-care monitors, learning toys, and fingerprint recognition.—Nan Price, Associate Editor

 

NAN: Tell us about your company Logical Elegance. When and why did you start the company? What types of services do you provide?

ELECIA: Logical Elegance is a small San Jose, CA-based consulting firm specializing in embedded systems. We do system analysis, architecture, and software implementation for a variety of devices.

Elecia White

I started the company in 2004, after leaving a job I liked for a job that turned out to be horrible. Afterward, I wasn’t ready to commit to another full-time job; I wanted to dip my toe in before becoming permanent again.

I did eventually take another full-time job at ShotSpotter, where I made a gunshot location system. Logical Elegance continued when my husband, Chris, took it over. After ShotSpotter, I returned to join him. While we have incorporated and may take on a summer intern, for the most part Logical Elegance is only my husband and me.

I like consulting; it lets me balance my life better with my career. It also gives me time to work on my own projects: writing a book and articles, playing with new devices, learning new technologies. On the other hand, I could not have started consulting without spending some time at traditional companies. Almost all of our work comes from people we’ve worked with in the past, either people we met at companies where we worked full time or people who worked for past clients.

Here is Elecia’s home lab bench. She conveniently provided notes.

NAN: Logical Elegance has a diverse portfolio. Your clients have ranged from Cisco Systems to LeapFrog Enterprises. Tell us about some of your more interesting projects.

ELECIA: We are incredibly fortunate that embedded systems are diverse, yet based on similar bedrock. Once you can work with control loops and signal processing, the applications are endless. Understanding methodologies for concepts such as state machines, interrupts, circular buffers, and working with peripherals allows us to put the building blocks together a different way to suit a particular product’s need.

For example, for a while there, it seemed like some of my early work learning how to optimize systems to make big algorithms work on little processors would fall to the depths of unnecessary knowledge. Processors kept getting more and more powerful. However, as I work on wearables, with their need to optimize cycles to extend their battery life, it all is relevant again.

We’ve had many interesting projects. Chris is an expert in optical coherence tomography (OCT). Imagine a camera that can go on the end of a catheter to help a doctor remove plaque from a clogged artery or to aid in eye surgery. Chris is also the networking expert. He works on networking protocols such as Locator/ID Separation Protocol (LISP) and multicast.

I’m currently working for a tiny company that hopes to build an exoskeleton to help stroke patients relearn how to walk. I am incredibly enthusiastic about both the application and the technology.

That has been a theme in my career, which is how I’ve got this list of awesome things I’ve worked on: DNA scanners, race cars and airplanes, children’s toys, and a gunshot location system. The things I leave off the list are more difficult to describe but no less interesting to have worked on: a chemical database that used hydrophobicity to model uptake rates, a medical device for the operating room and ICU, and methods for deterring fraud using fingerprint recognition on a credit card.

Elecia says one of the great things about the explosion of boards and kits available is being able to quickly build a system. However, she explains, once the components work together, it is time to spin a board. (This system may be past that point.)

In the last few years, Chris and I have both worked for Fitbit on different projects. If you have a One pedometer, you have some of my bits in your pocket.

The feeling of people using my code is wonderful. I get a big kick seeing my products on store shelves. I enjoyed working with Fitbit. When I started, it was a small company expanding its market; definitely the underdog. Now it is a success story for the entire microelectromechanical systems (MEMS) industry.

Not everything is rosy all the time though. For one start-up, the algorithms were neat, the people were great, and the technology was a little clunky but still interesting. However, the client failed and didn’t pay me (and a bunch of other people).

When I started consulting, I asked a more experienced friend about the most important part. I expected to hear that I’d have to make myself more extroverted, that I’d have to be able to find more contracts and do marketing, and that I’d be involved in the drudgery of accounting. The answer I got was the truth: the most important part of consulting is accounts receivable. Working for myself—especially with small companies—is great fun, but there is a risk.

NAN: How did you get from “Point A” to Logical Elegance?

ELECIA: “Point A” was Harvey Mudd College in Claremont, CA. While there, I worked as a UNIX system administrator, then later worked with a chemistry professor on his computational software. After graduation, I went to Hewlett Packard (HP), doing standard software, then a little management. I was lured to another division to do embedded software (though we called it firmware).

Next, a start-up let me learn how to be a tech lead and architect in the standard start-up sink-or-swim methodology. A mid-size company gave me exposure to consumer products and a taste for seeing my devices on retailers’ shelves.

From there, I tried out consulting, learned to run a small business, and wrote a Circuit Cellar Ink article “Open Source Code Guide” (Issue 175, 2005). I joined another tiny start-up where I did embedded software, architecture, management, and even directorship before burning out. Now, I’m happy to be an embedded software consultant, author, and podcast host.

NAN: You wrote Making Embedded Systems: Design Patterns for Great Software (O’Reilly Media, 2011). What can readers expect to learn from the book?

ELECIA: While having some industry experience in hardware or software will make my book easier to understand, it is also suitable for a computer science or electrical engineering college student.

It is a technical book for software engineers who want to get closer to the hardware or electrical engineers who want to write good software. It covers many types of embedded information: hardware, software design patterns, interview questions, and a lot of real-world wisdom about shipping products.

Elecia White’s book, Making Embedded Systems, is intended for engineers who are in transition: the hardware engineer who ends up writing software or the software engineer who suddenly needs to understand how the embedded world is different from pure software.

Unfortunately, most college degrees are either computer science or electrical engineering. Neither truly prepares you for the half-and-half world of an embedded software engineer. Computer science teaches algorithms and software design methodology. Electrical engineering misses both of those topics but provides a practical tool kit for doing low-level development on small processors. Whichever collegiate (or early career) path you take, as an embedded software engineer you need familiarity with both.

I did a non-traditional major that was a combination of computer science and engineering systems. I was prepared for all sorts of math (e.g., control systems and signal processing) and plenty of programming. All in all, I learned about half of the skills I needed to do firmware. I was never quite sure what was correct and what I was making up as I went along.

As a manager, I found most everyone was in the same boat: solid foundations on one side and shaky stilts on the other. The goal of the book is to take whichever foundation you have and cantilever a good groundwork to the other half. It shouldn’t be 100% new information. In addition to the information presented, I’m hoping most people walk away with more confidence about what they know (and what they don’t know).

Elecia was a judge at the MEMS Elevator Pitch Session at the 2013 MEMS Executive Congress in Napa, CA.

NAN: How long have you been designing embedded systems? When did you become interested?

ELECIA: I was a software engineer at the NetServer division at HP. I kept doing lower-level software, drivers mostly, but for big OSes: WinNT, OS/2, Novell NetWare, and SCO UNIX (a list that dates my time there).

HP kept trying to put me in management but I wasn’t ready for that path, so I went to HP Labs’s newly spun-out HP BioScience to make DNA scanners, figuring the application would be more interesting. I had no idea.

I lit a board on fire on my very first day as an embedded software engineer. Soon after, a motor moved because my code told it to. I was hooked. That edge of software, where the software touches the physical, captured my imagination and I’ve never looked back.

NAN: Tell us about the first embedded system you designed. Where were you at the time? What did you learn from the project?

ELECIA: Wow, this one is hard. The first embedded system I designed depends on your definition of “designed.” Going from designing subsystems to the whole system to the whole product was a very gradual shift, coinciding with going to smaller and smaller companies until suddenly I was part of the team not only choosing processors but choosing users as well.

After I left the cushy world of HP Labs with a team of firmware engineers, several electrical engineers, and a large team of software engineers who were willing to help design and debug, I went to a start-up with fewer than 50 people. There was no electrical engineer (except for the EE who followed from HP). There was a brilliant algorithms guy but his software skills were more MATLAB-based than embedded C. I was the only software/firmware engineer. This was the sort of company that didn’t have source version control (until after my first day). It was terrifying being on my own and working without a net.

I recently did a podcast about how to deal with code problems that feel insurmountable. While the examples were all from recent work, the memories of how to push through when there is no one else who can help came from this job.

Elecia is shown recording a Making Embedded Systems episode with the founders of electronics educational start-up Light Up. From left to right: Elecia’s husband and producer Christopher White, host Elecia White, and guests Josh Chan and Tarun Pondicherry.

NAN: Are you currently working on or planning any projects?

ELECIA: I have a few personal projects I’m working on: a T-shirt that monitors my posture and a stuffed animal that sends me a “check on Lois” text if an elderly neighbor doesn’t pat it every day. These don’t get nearly enough of my attention these days as I’ve been very focused on my podcast: Making Embedded Systems on iTunes, Instacast, Stitcher, or direct from http://embedded.fm.

The podcast started as a way to learn something new. I was going to do a half-dozen shows so I could understand how recording worked. It was a replacement for my normal community center classes on stained glass, soldering, clay, hula hooping, laser cutting, woodshop, bookbinding, and so forth.

However, we’re way beyond six shows and I find I quite enjoy it. I like engineering and building things. I want other people to come and play in this lovely sandbox. I do the show because people continue to share their passion, enthusiasm, amusement, happiness, spark of ingenuity, whatever it is, with me.

To sum up why I do a podcast, in order of importance: to talk to people who love their jobs, to share my passion for engineering, to promote the visibility of women in engineering, and to advertise for Logical Elegance (this reason is just in case our accountant reads this since we keep writing off expenses).

NAN: What are your go-to embedded platforms? Do you have favorites, or do you use a variety of different products?

ELECIA: I suppose I do have favorites but I have a lot of favorites. At any given time, my current favorite is the one that is sitting on my desk. (Hint!)

I love Arduino although I don’t use it much except to get other people excited. I appreciate that at the heart of this beginner’s board (and development system) is a wonderful, useful processor that I’m happy to work on.

I like having a few Arduino boards around, figuring that I can always get rid of the bootloader and use the Atmel ATmega328 on its own. In the meantime, I can give them to people who have an idea they want to try out.

For beginners, I think mbed’s boards are the next step after Arduino. I like them but they still have training wheels: nice, whizzy training wheels but still training wheels. I have a few of those around for when friends’ projects grow out of Arduinos. While I’ve used them for my own projects, their price precludes the small-scale production I usually want to do.

Professionally, I spend a lot of time with Cortex-M3s, especially those from STMicroelectronics and NXP Semiconductors. They seem ubiquitous right now. These are processors that are definitely big enough to run an RTOS but small enough that you don’t have to. I keep hearing that Cortex-M0s are coming but the price-to-performance-to-power ratio means my clients keep going to the M3s.

Finally, I suppose I’ll always have a soft spot for Texas Instruments’ C2000 line, which is currently in the Piccolo and Delfino incarnations. The 16-bit byte is horrible (especially if you need to port code to another processor), but somehow everything else about the DSP does just what I want. Although, it may not be about the processor itself: if I’m using a DSP, I must be doing something mathy and I like math.
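
A quick aside for readers who have not met this quirk: on the C2000, a char is 16 bits wide, so sizeof() counts 16-bit words rather than octets, and an exact 8-bit integer type is typically unavailable. A minimal, compiler-agnostic illustration (our sketch, not Elecia's code):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* On most MCUs CHAR_BIT is 8; on TI's C2000 DSPs it is 16, so every
     * "byte" is a 16-bit word and sizeof() counts those words. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("sizeof(short) = %u\n", (unsigned)sizeof(short)); /* 2 on an 8-bit AVR, 1 on a C2000 */

    /* Code that packs or parses octet streams through char pointers
     * therefore changes meaning when ported between the two families. */
    return 0;
}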

NAN: Do you have any predictions for upcoming “hot topics?”

ELECIA: I’m most excited about health monitoring. I’m surprised that Star Trek and other science fiction sources got tricorders right but missed the constant health monitoring we are heading toward with the rise of wearables and the interest in quantified self.

I’m most concerned about connectivity. The Internet of Things (IoT) is definitely coming, but many of these devices seem to be more about applying technology to any device that can stand the price hit, whether it makes sense or not.

Worse, the methods for getting devices connected keep fracturing as the drive toward low cost and high functionality leads the industry in different directions. And even worse, the ongoing battle between security and ease of use manages to give us things that are neither usable nor secure. There isn’t a good solution (yet). To make progress, we need to consider the application, the user, and what they need instead of applying what we have and hoping for the best.

A Trace Tool for Embedded Systems

Tracing tools monitor what is going on in a program’s execution by logging low-level and frequent events. Tracing can thus detect and help debug performance issues in embedded system applications.

In Circuit Cellar’s April issue, Thiadmer Riemersma describes his DIY tracing setup for small embedded systems. His system comprises three parts: a set of macros to include in the source files of a device under test (DUT), a PC workstation viewer that displays retrieved trace data, and a USB dongle that interfaces the DUT with the workstation.

Small embedded devices typically have limited-performance microcontrollers and scarce interfaces, so Riemersma’s tracing system uses only a single I/O pin on the microcontroller.

Designing a serial protocol that keeps data compact is also important. Riemersma, who develops embedded software for the products of his Netherlands-based company, CompuPhase, explains why:

Compactness of the information transferred from the embedded system to the workstation [which decodes and formats the trace information] is important because the I/O interface that is used for the transfer will probably be the bottleneck. Assuming you are transmitting trace messages bit by bit over a single pin using standard wire and 5- or 3.3-V logic levels, the transfer rate may be limited to roughly 100 Kbps.

My proposed trace protocol achieves compactness by sending numbers in binary, rather than as human-readable text. Common phrases can be sent as numeric message IDs. The trace viewer (or trace ‘listener’) can translate these message IDs back to the human-readable strings.
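
To make the idea concrete, here is a hypothetical sketch (not Riemersma's actual protocol or identifiers): a one-byte message ID plus a raw binary value replaces a formatted text string, and the viewer's lookup table turns the ID back into something readable.

#include <stdint.h>

/* Assumed to exist on the DUT: transmits one byte over the single trace
 * pin (for example, with the biphase routine discussed below). */
void trace_byte(uint8_t b);

/* IDs the PC-side trace viewer would translate back into strings. */
enum { MSG_ADC_SAMPLE = 0x01, MSG_TASK_SWITCH = 0x02 };

/* Three bytes on the wire instead of a whole sentence of ASCII text. */
static void trace_value(uint8_t msg_id, uint16_t value)
{
    trace_byte(msg_id);
    trace_byte((uint8_t)(value & 0xFFu));
    trace_byte((uint8_t)(value >> 8));
}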

One important part of the system is the hardware interface—the trace dongle. Since many microcontrollers are designed with only those interfaces needed for the specific application, Riemersma says, typically the first step is finding a spare I/O pin that can be used to implement the trace protocol.

In the following article excerpt, Riemersma describes his trace dongle and implementation requiring a single I/O pin on the microcontroller:

Photo 1: This is the trace dongle.

Photo 1 shows the trace dongle. To transmit serial data over a single pin, you need to use an asynchronous protocol. Instead of using a kind of (bit-banged) RS-232, I chose biphase encoding. Biphase encoding has the advantage of being a self-clocking and self-synchronizing protocol. This means that biphase encoding is simple to implement because timing is not critical. The RS-232 protocol requires timing to stay within a 3% error margin; however, biphase encoding is tolerant to timing errors of up to 20% per bit. And, since the protocol resynchronizes on each bit, error accumulation is not an issue.

Figure 1 shows the transitions to transmit an arbitrary binary value in biphase encoding—to be more specific, this variant is biphase mark coding. In the biphase encoding scheme, there is a transition at the start of each bit.

Figure 1: This is an example of a binary value transferred in biphase mark coding.

For a 1 bit there is also a transition halfway through the clock period. With a 0 bit, there is no extra transition. The absolute levels in biphase encoding are irrelevant, only the changes in the output line are important. In the previous example, the transmission starts with the idle state at a high logic level but ends in an idle state at a low logic level.

Listing 1 shows an example implementation to transmit a byte in biphase encoding over a single I/O pin. The listing refers to the trace_delay() and toggle_pin() functions (or macros). These system-dependent functions must be implemented on the DUT. The trace_delay() function should create a short delay, but not shorter than 5 µs (and not longer than 50 ms). The toggle_pin() function must toggle the output pin from low to high or from high to low.

For each bit, the function in Listing 1 inverts the pin and invokes trace_delay() twice. However, if the bit is a 1, it inverts the pin again between the two delay periods. Therefore, a bit’s clock cycle takes two basic “delay periods.”

Listing 1: Transmitting a byte in biphase encoding, based on a function to toggle an I/O pin, is shown.

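The listing itself appears as an image in the magazine; the following is a reconstruction sketched purely from the description above (the MSB-first bit order is our assumption), using the system-dependent toggle_pin() and trace_delay() routines the DUT must provide.

#include <stdint.h>

void toggle_pin(void);   /* invert the level on the trace output pin */
void trace_delay(void);  /* fixed delay between 5 us and 50 ms */

void trace_byte(uint8_t value)
{
    for (uint8_t mask = 0x80; mask != 0; mask >>= 1) {
        toggle_pin();        /* a transition starts every bit cell */
        trace_delay();
        if (value & mask)
            toggle_pin();    /* an extra mid-bit transition encodes a 1 */
        trace_delay();       /* each bit cell spans two delay periods */
    }
}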

The biphase encoding signal goes from the DUT to a trace dongle. The dongle decodes the signal and forwards it as serial data from a virtual RS-232 port to the workstation (see Photo 2 and the circuit in Figure 2).

Photo 2: The trace dongle is inserted into a laptop and connected to the DUT.

Figure 2: This trace dongle interprets biphase encoding.

The buffer is there to protect the microcontroller’s input pin from spikes and to translate the DUT’s logic levels to 5-V TTL levels. I wanted the trace dongle to work whether the DUT used 3-, 3.3-, or 5-V logic. I used a buffer with a Schmitt trigger so that the “output high” level of a DUT running at 3-V logic, plus any noise picked up by the test cable, would not fall into the undefined region of a 5-V TTL input.

Regarding the inductor, the USB interface provides 5 V and the electronics run at 5 V. There isn’t room for a voltage regulator. Since the USB power comes from a PC, I assumed it might not be a “clean” voltage. I used the LC filter to reduce noise on the power line.

The trace dongle uses a Future Technology Devices International (FTDI) FT232RL USB-to-RS-232 converter and a Microchip Technology PIC16F1824 microcontroller. The main reason I chose the FT232RL converter is FTDI’s excellent drivers for multiple OSes. True, your favorite OS already comes with a driver for virtual serial ports, but it is adequate at best. The FTDI drivers offer lower latency and a flexible API. With these drivers, the timestamps displayed in the trace viewers are as accurate as what can be achieved with the USB protocol, typically within 2 ms.

I chose the PIC microcontroller for its low cost and low pin count. I selected the PIC16F1824 because I had used it in an earlier project and I had several on hand. The microcontroller runs on a 12-MHz clock that is provided by the FTDI chip.

The pins to connect to the DUT are a ground and a data line. The data line is terminated at 120 Ω to match the impedance of the wire between the dongle and the DUT.

The cable between the DUT and the trace dongle may be fairly long; therefore signal reflections in the cable should be considered even for relatively low transmission speeds of roughly 250 kHz. That cable is typically just loose wire. The impedance of loose wire varies, but 120 Ω is a good approximate value.

The data line can handle 3-to-5-V logic voltages. Voltages up to 9 V do not harm the dongle because it is protected with a Zener diode (the 9-V limit is due to the selected Zener diode’s maximum power dissipation). The data line has a 10-kΩ pull-up to 5 V, so you can use it on an open-collector output.

The last item of interest in the circuit is a bicolor LED that is simply an indicator for the trace dongle’s status and activity. The LED illuminates red when the dongle is “idle” (i.e., it has been enumerated by the OS). It illuminates green when biphase encoded data is being received.

After the dongle is built, it must be programmed. First, the FT232RL must be programmed (with FTDI’s “FT Prog” program) to provide a 12-MHz clock on Pin C0. The “Product Description” in the USB string descriptors should be set to “tracedongle” so the trace viewers can find the dongle among other FTDI devices (if any). To avoid the dongle being listed as a serial port, I also set the option to only load the FTDI D2XX drivers.
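
As an aside, this is roughly how a PC-side tool could pick the dongle out by that description string. The function names below come from FTDI's published D2XX API; the error handling is our own minimal sketch, not code from the article.

#include <stdio.h>
#include "ftd2xx.h"   /* FTDI's D2XX header */

int open_trace_dongle(FT_HANDLE *handle)
{
    /* Open the first FTDI device whose product description is "tracedongle". */
    FT_STATUS status = FT_OpenEx("tracedongle", FT_OPEN_BY_DESCRIPTION, handle);
    if (status != FT_OK) {
        fprintf(stderr, "trace dongle not found (status %d)\n", (int)status);
        return -1;
    }
    /* Ask for a short latency timer so received trace data reaches the
     * viewer quickly and timestamps stay within a few milliseconds. */
    FT_SetLatencyTimer(*handle, 2);
    return 0;
}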

To upload the firmware in the PIC microcontroller, you need a programmer (e.g., Microchip Technology’s PICkit) and a Tag-Connect cable, which eliminates the need for a six-pin header on the PCB, so it saves board space and cost.

The rest of the article provides details of how to create the dongle firmware, how to add trace statements to the DUT software being monitored, and how to use the GUI version of the trace viewer.

The tracing system is complete, but it can be enhanced, Riemersma says. “Future improvements to the tracing system would include the ability to draw graphs (e.g., for task switches or queue capacity) or a way to get higher precision timestamps of received trace packets,” he says.

For Riemersma’s full article, refer to our April issue now available for membership download or single-issue purchase.

Serial Carrier Card with Flexible I/O and Serial Technology

The G204 is a 3U CompactPCI Serial carrier card with an M-Module slot that combines fast CompactPCI Serial technology with flexible I/O options. The card serves as the basis for powerful 19″-based system solutions for transportation and industrial applications (e.g., data acquisition, process control, automation and vehicle control, robotics, or instrumentation).

M-Modules are modular I/O extensions for industrial computers (e.g., embedded systems and high-end workstations). The M-Module slot enables users to interchange more than 30 I/O functions within a system. The M-Module, which needs only one CompactPCI Serial slot, is screwed tightly onto the G204 and does not require a separately mounted transition panel.

The G204 modular mezzanine card operates in a –40°C to 85°C extended temperature range for harsh environments and costs $483.

MEN Micro Inc.
www.menmicro.com

Open-Source Guide for Embedded Systems Developers (EE Tip #114)

What comes to mind when you hear the term “open source”? Hopefully, it means more to you than just a software application running on a PC.

As an embedded systems developer, you should familiarize yourself with the wide range of open-source programs, programming tools, and hardware platforms currently available. In addition to saving yourself the cost of pricey user licenses, you’ll find open-source community forums helpful, informative, and engaging.

Open-source software offers a number of advantages. The product is independent of a particular manufacturer, and there are no license costs. Plus, the product is usually high quality because it is often supported by a large, active community of users. When a program’s source code is available, you have the chance to fix errors, change its behavior, and even add new features.

The aforementioned advantages should be good enough reasons for any designer of microcontroller applications to work with open-source software. PC tools such as editors, documentation programs, toolchains (for the vast majority of microcontrollers), operating systems, and libraries are widely available with open-source code.

On the hardware side, open-source microcontroller boards are gaining popularity among serious engineers. The circuits, PCBs, and CAD files are available so you can modify them, improve them, and add more features to meet the demands of your applications. It’s an added benefit that open-source hardware is always supported by software code and libraries that enable you to get up and running fairly quickly.

Since we couldn’t include all the currently available open-source resources in the space provided, we simply list several open-source projects that Elektor and Circuit Cellar engineers and editors recommend.

Below we provide the following lists: hardware; libraries and run-time tools; PC tools; and GNU toolchains. By no means are the lists complete. Still, they’re helpful starting points.


Arduino—This popular platform offers a range of simple microcontroller and development boards that you can purchase from several suppliers. The Arduino website has an active forum and the wide range of software examples will ensure that you are up and running in minimum time.

Openmoko—A complete software stack for a smartphone. The Neo FreeRunner mobile phone is the target hardware platform. Development and debug boards are also available.

GNU Radio & Universal Software Radio Peripheral—The GNU Radio project is a software toolkit for building software-defined radios. The open-source hardware for this project is the Universal Software Radio Peripheral (USRP) board, which is based on an FPGA.

KiCAD—One of the best-known suites of CAD programs for hardware production, KiCAD includes tools for generating circuit diagrams and PCBs. You can view 3-D representations of the finished board.

Fab Lab—This interesting project offers 3-D laser cutters, 3-D printers, and other machines for use by the general public. It’s a handy resource for making robot parts and art objects.

uIP/lwIP—Two outstanding network stacks. The first, uIP, targets 8-bit microcontrollers; lwIP is a further development of it, better suited to medium-sized controllers. The uIP license is less restrictive, allowing the stack to be used in commercial products.

LUFA (formerly MyUSB)—A large library of USB applications for USB-enabled AVR controllers, covering both host and device roles. The demonstration applications allow an AVR controller to emulate, for example, a keyboard or many other devices (mass storage, audio I/O, etc.).

Crypto-avr-lib—A library of optimized cryptographic routines for the Atmel ATmega controller, issued under the GPL Version 3 license. Contact the author for other licensing options.

FreeRTOS—A lightweight real-time kernel that runs on many controller families. It can be used in commercial applications and allows the use of closed-source software.

U-Boot—A universal bootloader with a large range of routines for memory, UART, SD card, network, USB, and more. Originally conceived purely as a bootloader, its comprehensive hardware support means it can also serve as the basis for your own C code.

Embedded Filesystems Library—A useful FAT file-system library for when you are short of memory. The GPL license includes a clause allowing static linking to the library without public disclosure of your code.

.NET Micro Framework—This very compact, trimmed-down .NET Framework, now open source, runs on diverse ARM platforms. It is programmed in the object-oriented language C#, and plenty of resources, including support for I2C, Ethernet, and much more, help reduce development time.

Eclipse—A good development environment with a modular structure that makes it very easy to configure. There are around 1,000 plug-in modules (both open source and commercial) for a range of programming languages and target systems.

KDevelop—An integrated development environment that should satisfy most power-user needs. It runs on MS Windows, Mac OS X, Linux, Solaris, and FreeBSD and can be expanded with plug-ins.

Programmer’s Notepad—A lightweight but efficient editor for writing source code. Allows fast, simple and comfortable program production. Can be expanded with plug-ins.

Doxygen—An intelligent tool that can automatically generate code documentation (C, C++, Java, etc.). The programmer provides tags in the source file; Doxygen generates comprehensive documentation in PDF or HTML format. It can also extract the code structure from undocumented source files. (A short example of Doxygen tags appears after these lists.)

WinMerge—A good tool for code comparison and code synchronization. The program can also compare the contents of folders/files and display the results in a visual text format that makes it easy to understand.

Tera Term—A terminal program for accessing COM ports that also supports the Telnet protocol. It is a handy debugging tool for eavesdropping on serial communications.

Note: GNU toolchains are available for most processor architectures, including AVR, ColdFire, ARM, MIPS, PowerPC, and Intel x86. A GNU toolchain includes not only compilers for C, C++, and in most cases Java (GCC = GNU Compiler Collection), but also linkers, assemblers, and debuggers, together with C libraries (libc). The tools are also used within other open-source projects, such as WinAVR, which provides a familiar user interface to speed up program development.
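
As a small illustration of the Doxygen tags mentioned above (a generic example, not tied to any project in these lists), a commented C prototype like the following is all the tool needs to generate an HTML or PDF entry:

#include <stdint.h>

/**
 * @brief   Compute a CRC-8 checksum over a buffer.
 * @param   data  Pointer to the input bytes.
 * @param   len   Number of bytes to process.
 * @return  The 8-bit checksum.
 */
uint8_t crc8(const uint8_t *data, unsigned len);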

One Desk Serves Two Roles for Professor and Designer

Chris Coulston, head of the Computer Science and Software Engineering department at Penn State Erie, The Behrend College, has a broad range of technical interests, including embedded systems, computer graphics algorithms, and sensor design.

Since 2005, he has submitted five articles for publication in Circuit Cellar, on projects and topics ranging from DIY motion-controlled gaming to a design for a “smart” jewelry pendant utilizing RGB LEDs.

We asked him to share photos and a description of the workspace in his Erie, PA, home. His office desk (see Photo 1) has something of an alter ego. When need and invention arise, he reconfigures it into an “embedded workstation.”

Photo 1: Coulston’s workspace configured as an office desk

When working on my projects, my embedded workstation contains only the essential equipment I need (see Photo 2). What it lacks in quantity I’ve tried to make up for in quality instrumentation: a Tektronix TDS 3012B oscilloscope, a Fluke 87-V digital multimeter, and a Weller WS40 soldering iron. While my workstation lacks a function generator and power supply, most of my projects are digital and have modest power requirements.

Photo 2: Coulston can reconfigure his desk into the embedded workstation pictured here.

Coulston says his workspace must function as a “typical office desk” 80 percent of the time and electronics station 20 percent of the time.

It must do this while maintaining some semblance of being presentable—my wife shares a desk in the same space. The foundation of my workstation is a recycled desk with a heavy plywood backing on which I attached shelving. Being a bit clumsy, I’ve tried to screw down anything that could be knocked over—speakers, lights, bulletin board, power strip, cable modem, and routers.

The head of a university department has different needs in a workspace than does an electronics designer. So how does Coulston make his single office desk suffice for both his professional and personal interests? It’s definitely not a messy solution.

My role as department chair and professor means that I spend a lot of time grading, writing, and planning. For this work, there is no substitute for uncluttered square footage—getting all the equipment off the working surface. However, when it’s time to play with the circuits, I need to easily reconfigure this space.

I have found organization to be key to successfully realizing this goal. Common parts are organized in a parts case, parts for each project are put in their own bag, the active project is stored in the top drawer, and frequently used tools, jumper wires, and a DMM are stored in the next drawer. All other equipment is stored in a nearby closet.

I’ve looked at some of the professional-looking workspaces in Circuit Cellar and must admit that I am a bit jealous. However, when it comes to operating under the constraints of a busy professional life, I have found that my reconfigurable space is a practical compromise.

To learn more about Coulston and his technical interests, check out our Member Profile posted earlier this year.

 

Chris Coulston

CC 277: Using Files in Concurrent Linux Designs

In the August issue of Circuit Cellar, columnist Robert Japenga, who has been designing embedded systems since 1973, wraps up his eight-part series on the benefits and challenges of designing concurrency into your systems and on some of the specific tools Linux provides for interprocess communication (IPC).

His final installment discusses file usage. It also recounts how the development of read/write nonvolatile memory (i.e., flash technology) enabled embedded systems to contain cost-competitive file systems.

“Disk drives in the early days were too big and weren’t reliable enough for embedded systems. The first real disk drive I used in 1975 was a Digital Equipment RK-05 for a PDP-11 that held an amazing 2.5 MB of data,” Japenga says in his column. “The RK-05 was released in 1972. It initially weighed 100 lbs. The $74 monthly maintenance cost would buy a 1-TB drive today, or 12 of them per year.

In 1972, a Digital Equipment RK-05 disk drive held only 2.5 MB of data. (Photo courtesy of Mark Csele)

“In 1977, a friend from Bell Labs carried an RK-05 with a copy of Unix onto a plane. At the gate, the inspector opened the lid and put his finger on the magnetic platter. Whoops. The disk gloriously crashed when inserted into my disk drive. It seemed I would have to wait for my first copy of Unix.

“For a time, companies produced hardened disk drives. The cost was very prohibitive and the reliability was questionable. Then in 2001, the iPod changed all that when Apple used Toshiba’s 1.8” hard drive, which is only 0.2” thick. As a consumer product, it had to be extremely rugged. Very small embedded systems now had hard drives.

“But not all of us built millions of systems, nor could we afford to put a hard disk in our temperature controllers, motion-control devices, or avionics boxes. However, with the advent of read/write nonvolatile memory (i.e., flash technology), embedded systems now had a way to contain cost-competitive file systems. This paved the way for putting real OSes into embedded systems. In the late 1990s and later, we were putting DOS on a flash card. Well, not everything was a real OS! And that is where Linux comes into the picture.”

Japenga’s column goes on to discuss file systems and the mechanisms to create concurrent systems, including nonvolatile flag files, volatile flag files, data sharing, and event triggering. It concludes with a thorough discussion of some of the risks of using a file system in a concurrent system.
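
To give a flavor of what one of those mechanisms looks like in practice, here is a minimal sketch (ours, not Japenga's code) of a nonvolatile flag file used as a simple interprocess signal on a POSIX/Linux system; the path is hypothetical.

#include <fcntl.h>
#include <unistd.h>

#define FLAG_PATH "/var/run/myapp.event"   /* hypothetical flag-file path */

/* Raise the event. O_CREAT | O_EXCL makes creation atomic, so two
 * writers cannot both believe they raised the flag. */
int raise_flag(void)
{
    int fd = open(FLAG_PATH, O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd < 0)
        return -1;      /* flag already set, or a filesystem error */
    close(fd);
    return 0;
}

/* Check for the event and consume it. */
int consume_flag(void)
{
    if (access(FLAG_PATH, F_OK) != 0)
        return 0;       /* no event pending */
    unlink(FLAG_PATH);  /* clear the flag */
    return 1;           /* event was pending and is now handled */
}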

“Modern embedded systems are doing more than I ever imagined when I started out,” Japenga says. “Adding a file system to your design can provide significant advantages to improve your product. As with all OS functions, we need to understand how our file system works if we are going to use it properly—especially in systems with concurrency.”

For more, check out Japenga’s column, Embedded in Thin Slices, in Circuit Cellar’s August issue.