NPU Blade for High-Throughput Packet Processing

ADLINK

The aTCA-N700 NPU blade

The aTCA-N700 Advanced Telecommunications Computing Architecture (ATCA) packet processing blade features dual 32-core network processing units (NPUs) for parallel processing and up to 320G switching capability. The blade delivers advanced packet and security processing capabilities for high-performance, high-throughput, low-latency applications in broadband infrastructure elements (e.g., wireless access point controllers, network security platforms, deep packet inspection, IPTV, LTE gateways, and media servers).

The ADLINK aTCA-N700 complies with the ATCA Base Specification (PICMG 3.0 R.3.0) and the ATCA Ethernet Specification (PICMG 3.1 R2.0). It is powered by dual Cavium OCTEON II CN6880 processors, which each have 32 cnMIPS64 V2 cores and a highly optimized architecture for deep packet inspection, network security, and traffic-shaping applications.

Eight memory sockets support VLP DDR3-1333 REG/ECC memory up to 128 GB with data transfer bandwidth up to 320 Gbps. The aTCA-N700 also supports TCAM for fast route lookup. The blade features a powerful local management processor (LMP), a quad-core Freescale Semiconductor QorIQ P2041, which makes local management more flexible and convenient and frees the Cavium processors to focus on packet processing.

Each NPU has its own NOR boot flash memory and NAND OS flash memory in a redundant configuration. The LMP has two EEPROMs for U-Boot image storage and two SSDs for operating system and application image storage.

For Ethernet connectivity, the aTCA-N700 uses a high-performance Broadcom BCM56842 Ethernet switch to connect the CN6880 packet processors, backplane, and I/O ports, with the switch fabric providing up to 320 Gbps of bandwidth. The aTCA-N700 uses dual fabric interface channels and two base interfaces for data transfer.

Contact ADLINK for pricing.

ADLINK Technology, Inc.
www.adlinktech.com

Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids.—Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive.
We have started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics software. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells, but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go, and whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You wrote several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and SystemC Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grivalga of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukopodhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security. Internet-enabled devices enable smart grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentation is a very warm experience, and everyone is happy. These occasions are a time to celebrate the field and the many friends I’ve made through my work.

ZigBee PRO Communication Controller Chip

GreenPeak

The GP490 controller chip

The GP490 is an ultra-low-power ZigBee PRO communication controller. It supports ZigBee PRO features, including bidirectional commissioning, bidirectional communication, and full security mode.

The controller is specifically designed for low-power smart home applications including door, window, and temperature sensors and light switches. Equipped with the energy-optimized GP490, these low-power, low-data rate smart home applications can operate on a coin cell battery for more than 10 years.

The GP490 development kit with reference design and software can help device manufacturers build low-cost, low-power sensor nodes, optimized for standard battery and coin-cell battery operation. Guidelines and tools are provided to ensure an efficient ZigBee certification for a cost-optimized feature set of the ZigBee Home Automation (HA 1.2) Application Profile.

The low-cost GP490 enables developers and manufacturers to supply ZigBee Smart Home sensor solutions starting at $5 each. Contact GreenPeak Technologies for specific pricing.

GreenPeak Technologies
www.greenpeak.com

Embedded Security Tips (CC 25th Anniversary Preview)

Every few days we give you a sneak peek at some of the exciting content that will run in Circuit Cellar’s anniversary issue, which is scheduled to be available in early 2013. You’ve read about Ed Nisley’s essay on his most memorable designs—from a hand-held scanner project to an Arduino-based NiMH cell tester—and Robert Lacoste’s tips for preventing embedded design errors. Now it’s time for another preview.

Many engineers know they are building electronic systems for use in dangerous times. They must plan for both hardware and software attacks, which makes embedded security a hot topic for 2013.  In an essay on embedded security risks, Virginia Tech professor Patrick Schaumont looks at the current state of affairs through several examples. His tips and suggestions will help you evaluate the security needs of your next embedded design.

Schaumont writes:

As design engineers, we should understand what can and what cannot be done. If we understand the risks, we can create designs that give the best possible protection at a given level of complexity. Think about the following four observations before you start designing an embedded security implementation.

First, you have to understand the threats that you are facing. If you don’t have a threat model, it makes no sense to design a protection—there’s no threat! A threat model for an embedded system will specify what an attacker can and cannot do. Can she probe components? Control the power supply? Control the inputs of the design? The more precisely you specify the threats, the more robust your defenses will be. Realize that perfect security does not exist, so it doesn’t make sense to try to achieve it. Instead, focus on the threats you are willing to deal with.

Second, make a distinction between what you trust and what you cannot trust. In terms of building protections, you only need to worry about what you don’t trust. The boundary between what you trust and what you don’t trust is suitably called the trust boundary. While trust boundaries were originally logical boundaries in software systems, they also have a physical meaning in an embedded context. For example, let’s say that you define the trust boundary to be at the chip-package level of a microcontroller. This implies that you’re assuming an attacker will get as close to the chip as the package pins, but not closer. With such a trust boundary, your defenses should focus on off-chip communication. If there’s nothing or no one to trust, then you’re in trouble. It’s not possible to build a secure solution without trust.

Third, security has a cost. You cannot get it for free. Security has a cost in resources and energy. In a resource-limited embedded system, this means that security will always be in competition with other system features in terms of resources. And because security is typically designed to prevent bad things from happening rather than to enable good things, it may be a difficult trade-off. In feature-rich consumer devices, security may not be a feature for which a customer is willing to pay extra.

The fourth observation, and maybe the most important one, is to realize that you’re not alone. There are many things to learn from conferences, books, and magazines. Don’t invent your own security. Adapt standards and proven techniques. Learn about the experiences of other designers.

Schaumont then provides lists of helpful embedded security-related resources, such as Flylogic’s Analytics Blog and the Athena website at GMU.

Circuit Cellar’s 25th Anniversary Issue will be available in early 2013. Stay tuned for more updates on the issue’s content.

One-Time Passwords from Your Watch

Passwords establish the identity of a user, and they are an essential component of modern information technology. In this article, I describe one-time passwords: passwords that you use once and then never again. Because they’re used only once, you don’t have to remember them. I describe how to implement one-time passwords with a Texas Instruments (TI) eZ430-Chronos wireless development tool in a watch and how to use them to log in to existing web services such as Google Gmail (see Photo 1).

Photo 1—The Texas Instruments eZ430 Chronos watch displays a unique code that enables logging into Google Gmail. The code is derived from the current time and a secret value embedded in the watch.

To help me get around on the Internet, I use a list of about 80 passwords (at the latest count). Almost any online service I use requires a password: reading e-mail, banking, shopping, checking reservations, and so on. Many of these Internet-based services have Draconian password rules. For example, some sites require a password of at least eight characters with at least two capitals or numbers and two punctuation characters. The sheer number of passwords, and their complexity, makes it impossible to remember all of them.

What are the alternatives? There are three different ways of verifying the identity of a remote user. The most prevalent one, the password, tests something that a user knows. A second method tests something that the user has, such as a secure token. Finally, we can make use of biometrics, testing a unique user property, such as a fingerprint or an iris pattern.

Each of these three methods comes with advantages and disadvantages. The first method (passwords) is inexpensive, but it relies on the user’s memory. The second method (secure token) replaces the password with a small amount of embedded hardware. To help the user log on, the token provides a unique code. Since it’s possible for a secure token to get lost, it must be possible to revoke the token. The third method (biometrics) requires the user to enroll a biometric, such as a fingerprint. Upon login, the user’s fingerprint is measured again and tested against the enrolled fingerprint. The enrollment has potential privacy issues. And, unlike a secure token, a biometric cannot be revoked.

The one-time password design in this article belongs to the second category. A compelling motivation for this choice is that a standard, open proposal for one-time passwords is available. The Initiative for Open Authentication (OATH) is an industry consortium that works on a universal authentication mechanism for Internet users. It has developed several proposals for user authentication methods and submitted them to the Internet Engineering Task Force (IETF). I’ll be relying on these proposals to demonstrate one-time passwords using an eZ430-Chronos watch. The eZ430-Chronos, which I’ll be using as a secure token, is a wearable embedded development platform with a 16-bit Texas Instruments MSP430 microcontroller.

ONE-TIME PASSWORD LOGON

Figure 1 demonstrates how one-time passwords work. Let’s assume a user—let’s call him Frank—is about to log on to a server. Frank will generate a one-time password using two pieces of information: a secret value unique to Frank and a counter value that increments after each authentication. The secret, as well as the counter, is stored in a secure token. To transform the counter and the secret into a one-time password, a cryptographic hash algorithm is used. Meanwhile, the server will generate the one-time password it is expecting to see from Frank. The server has a user table that keeps track of Frank’s secret and his counter value. When both the server and Frank obtain the same output, the server will authenticate Frank. Because Frank will use each password only once, it’s not a problem if an attacker intercepts the communication between Frank and the server.

Figure 1—A one-time password is formed by passing the value of a personal secret and a counter through a cryptographic hash (1). The server obtains Frank’s secret and counter value from a user table and generates the same one-time password (2). The two passwords must match to authenticate Frank (3). After each authentication, Frank’s counter is incremented, ensuring a different password the next time (4).

After each logon attempt, Frank will update his copy of the counter in the secure token. The server, however, will only update Frank’s counter in the user table when the logon was successful. This guards against false logon attempts. Of course, it is possible that Frank’s counter value in the secure token gets out of sync with Frank’s counter value in the server. To adjust for that possibility, the server will use a synchronization algorithm: it will attempt a window of counter values before rejecting Frank’s logon. The window should be small (e.g., five values), just enough to cover the occasional failed logon by Frank. As an alternative to counter synchronization, Frank could also send the value of his counter directly to the server. This is safe because of the properties of a cryptographic hash: the secret value cannot be computed from the one-time password, even if one knows the counter value.
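
To make the window-based check concrete, here is a minimal sketch of the server-side verification. It assumes a hypothetical hotp() function that maps a secret and a counter to a six-digit code; the HOTP computation itself is described later in this article.

#include <stdint.h>

#define WINDOW 5  /* how many counter values the server will try */

/* Hypothetical: returns the six-digit one-time password for (secret, counter). */
uint32_t hotp(const uint8_t *secret, uint32_t counter);

/* Returns 1 and resynchronizes the stored counter on success, 0 on failure. */
int verify_otp(const uint8_t *secret, uint32_t *server_counter, uint32_t submitted)
{
    for (uint32_t i = 0; i < WINDOW; i++) {
        if (hotp(secret, *server_counter + i) == submitted) {
            *server_counter += i + 1;  /* skip past the counter value just used */
            return 1;
        }
    }
    return 0;  /* no match within the window: reject the logon */
}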

You see that, similar to the classic password, the one-time password scheme still relies on a shared secret between Frank and the server. However, the shared secret is not communicated directly from the user to the server; it is only tested indirectly through the use of a cryptographic hash. The security of a one-time password therefore stands or falls with the security of the cryptographic hash, so it’s worthwhile to look further into this operation.

CRYPTOGRAPHIC HASH

A cryptographic hash is a one-way function that calculates a fixed-length output, called the digest, from an arbitrary-length input, called the message. The one-way property means that, given the message, it’s easy to calculate the digest. But, given the digest, one cannot recover the message.

The one-way property of a good cryptographic hash implies that no information is leaked from the message into the digest. For example, a small change in the input message may cause a large and seemingly random change in the digest. For the one-time password system, this property is important. It ensures that each one-time password will look very different from one authentication to the next.

The one-time password algorithm makes use of the SHA-1 cryptographic hash algorithm, which produces a digest of 160 bits. By today’s Internet standards, SHA-1 is considered old. It was designed by the National Security Agency and published by the National Institute of Standards and Technology as a standard in 1995.

Is SHA-1 still adequate to create one-time passwords? Let’s consider the problem that an attacker must solve to break the one-time password system. Assume an attacker knows the SHA-1 digest of Frank’s last logon attempt. The attacker could now try to find a message that matches the observed digest. Indeed, knowing the message implies knowing a value of Frank’s secret and the counter. Such an attack is called a pre-image attack.

Fortunately, for SHA-1, there are no known (published) pre-image attacks that are more efficient than brute force, which means trying all possible messages. It’s easy to see that this requires an astronomical number of message values: for a 160-bit digest, the attacker can expect to test on the order of 2^160 (roughly 10^48) messages. Therefore it’s reasonable to conclude that SHA-1 is adequate for the one-time password algorithm. Note, however, that this does not imply that SHA-1 is adequate for any application. In another attack model, cryptographers worry about collisions, the possibility of an attacker finding a pair of messages that generate the same digest. For such attacks on SHA-1, significant progress has been made in recent years.

The one-time password scheme in Figure 1 combines two inputs into a single digest: a secret key and a counter value. To combine a static, secret key with a variable message, cryptographers use a keyed hash. The digest of a keyed hash is called a message authentication code (MAC). It can be used to verify the identity of the message sender.

Figure 2 shows how SHA-1 is used in a hash-based message authentication code (HMAC) construction. SHA-1 is applied twice. The first SHA-1 input is a combination of the secret key and the input message. The resulting digest is combined again with the secret key, and SHA-1 is then used to compute the final MAC. Each time, the secret key is mapped into a block of 512 bits. The first time, it is XORed with a constant array of 64 copies of the value 0x36. The second time, it is XORed with a constant array of 64 copies of the value 0x5C.

Figure 2—The SHA-1 algorithm on the left is a one-way function that transforms an arbitrary-length message into a 160-bit fixed digest. The Hash-based message authentication code (HMAC) on the right uses SHA-1 to combine a secret value with an arbitrary-length message to produce a 160-bit message authentication code (MAC).
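
In C, the construction in Figure 2 can be sketched as follows. This is a minimal illustration, not the article’s implementation; it assumes a one-shot sha1() routine with a (message, length, 20-byte digest) interface, a key of at most 64 bytes, and a message of at most 256 bytes.

#include <stdint.h>
#include <string.h>

/* Assumed interface: writes the 20-byte SHA-1 digest of msg into out. */
void sha1(const uint8_t *msg, size_t len, uint8_t out[20]);

/* HMAC-SHA1 for keys up to 64 bytes and messages up to 256 bytes. */
void hmac_sha1(const uint8_t *key, size_t key_len,
               const uint8_t *msg, size_t msg_len,
               uint8_t mac[20])
{
    uint8_t block[64];        /* the key mapped into a 512-bit block */
    uint8_t inner[64 + 256];  /* (key XOR 0x36...) followed by the message */
    uint8_t outer[64 + 20];   /* (key XOR 0x5C...) followed by the inner digest */
    uint8_t digest[20];
    int i;

    /* Map the key into a 512-bit block, padding with zeros. */
    memset(block, 0, sizeof(block));
    memcpy(block, key, key_len);

    /* Inner hash: key block XORed with 0x36, followed by the message. */
    for (i = 0; i < 64; i++) inner[i] = block[i] ^ 0x36;
    memcpy(inner + 64, msg, msg_len);
    sha1(inner, 64 + msg_len, digest);

    /* Outer hash: key block XORed with 0x5C, followed by the inner digest. */
    for (i = 0; i < 64; i++) outer[i] = block[i] ^ 0x5C;
    memcpy(outer + 64, digest, 20);
    sha1(outer, 64 + 20, mac);
}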

THE HOTP ALGORITHM

With the HMAC construction, the one-time password algorithm can now be implemented. In fact, the HMAC can almost be used as is. The problem with using the MAC itself as the one-time password is that it contains too many bits. The secure token used by Frank does not directly communicate with the server. Rather, it shows a one-time password Frank needs to type in. A 160-bit number requires 48 decimal digits, which is far too long for a human.

OATH has proposed the Hash-based one-time password (HOTP) algorithm. HOTP uses a key (K) and a counter (C). The output of HOTP is a six-digit, one-time password called the HOTP value. It is obtained as follows. First, compute a 160-bit HMAC value using K and C. Store this result in an array of 20 bytes, hmac, such that hmac[0] contains the 8 leftmost bits of the 160-bit HMAC string and hmac[19] contains the 8 rightmost bits. The HOTP value is then computed with a snippet of C code (see Listing 1).

Listing 1—C code used to compute the HOTP value
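
The heart of that snippet is the dynamic truncation step defined in RFC 4226, sketched below. This is a sketch of the standard procedure, not necessarily the exact code of Listing 1.

#include <stdint.h>

/* Dynamic truncation (RFC 4226): reduce a 20-byte HMAC to a six-digit code. */
uint32_t hotp_truncate(const uint8_t hmac[20])
{
    /* The low 4 bits of the last byte select a 4-byte window in the HMAC. */
    int offset = hmac[19] & 0x0F;

    /* Take 31 bits from that window (the sign bit is masked off)... */
    uint32_t bin_code = ((uint32_t)(hmac[offset]     & 0x7F) << 24)
                      | ((uint32_t)(hmac[offset + 1] & 0xFF) << 16)
                      | ((uint32_t)(hmac[offset + 2] & 0xFF) << 8)
                      |  (uint32_t)(hmac[offset + 3] & 0xFF);

    /* ...and keep its six least significant decimal digits. */
    return bin_code % 1000000;
}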

There is now an algorithm that will compute a six-digit code starting from a K value and a C value. HOTP is described in IETF RFC 4226. A typical HOTP implementation would use a 32-bit counter C and an 80-bit key K.

An interesting variant of HOTP, which I will be using in my implementation, is the time-based one-time password (TOTP) algorithm. The TOTP value is computed in the same way as the HOTP value; however, the counter C is replaced with a timestamp value. Rather than synchronizing a counter between the secure token and the server, TOTP simply relies on the time, which is the same for the server and the token. Of course, this requires the secure token to have access to a stable and synchronized time source, but for a watch, this is a requirement that is easily met.

The timestamp value chosen for TOTP is the current Unix time, divided by a factor d. The current Unix time is the number of seconds that have elapsed since midnight January 1, 1970, Coordinated Universal Time. The factor d compensates for small synchronization differences between the server and the token. For example, a value of 30 will enable a 30-s window for each one-time password. The 30-s window also gives a user sufficient time to type in the one-time password before it expires.
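
In code, the standard TOTP timestamp therefore boils down to a single division. The sketch below assumes the platform’s time() already returns Unix time, which, as described in the next section, is not the case on the watch.

#include <stdint.h>
#include <time.h>

#define TOTP_STEP 30  /* seconds per one-time password */

/* Standard TOTP timestamp: Unix seconds divided by the 30-s step. */
uint64_t totp_timestamp_now(void)
{
    return (uint64_t)time(NULL) / TOTP_STEP;
}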

IMPLEMENTATION IN THE eZ430-CHRONOS WATCH

I implemented the TOTP algorithm on the eZ430-Chronos watch. This watch contains a CC430F6137 microcontroller, which has 32 KB of flash memory for programs and 4,096 bytes of RAM for data. The watch comes with a set of software applications to demonstrate its capabilities. Software for the watch can be written in C using TI’s Code Composer Studio (CCStudio) or IAR Systems’ IAR Embedded Workbench.

The software for the eZ430-Chronos watch is structured as an event-driven system that ties activities performed by software to events such as alarms and button presses. In addition, the overall operation of the watch is driven through several modes, corresponding to a particular function executed on the watch. These modes are driven through a menu system.

Photo 2 shows the watch with its 96-segment liquid crystal display (LCD) and four buttons to control its operation. The left buttons select the mode. The watch has two independent menu systems, one to control the top line of the display and one to control the bottom line. Hence, the overall mode of the watch is determined by a combination of a menu-1 entry and a menu-2 entry.

Photo 2—With the watch in TOTP mode, one-time passwords are shown on the second line of the display. In this photo, I am using the one-time password 854410. The watch display cycles through the strings “totP,” “854,” and “410.”

Listing 2 illustrates the code relevant to the TOTP implementation. When the watch is in TOTP mode, the sx button is tied to the function set_totp(). This function initializes the TOTP timestamp value.

Listing 2—Code relevant to the TOTP implementation

The function retrieves the current time from the watch and converts it into elapsed seconds using the standard library function mktime. Two adjustments are made to the output of mktime, on line 11 and line 12. The first factor, 2208988800, takes into account that the mktime in the TI library returns the number of seconds since January 1, 1900, while the TOTP standard sets zero time at January 1, 1970. The second factor, 18000, takes into account that my watch is set to Eastern Standard Time (EST), while the TOTP standard assumes the UTC time zone—five hours ahead of EST. Finally, on line 14, the number of seconds is divided by 30 to obtain the standard TOTP timestamp. The TOTP timestamp is further updated every 30 s, through the function tick_totp().
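
In outline, the conversion described above looks like the following sketch. It is a reconstruction based on the description, not the original Listing 2; the function signatures, variable names, and exact placement of the corrections are assumptions.

#include <time.h>
#include <stdint.h>

static uint32_t totp_timestamp;  /* current TOTP counter value */

/* Sketch of set_totp(): convert the watch's local time to a TOTP timestamp. */
void set_totp(struct tm *watch_time)
{
    uint32_t seconds = (uint32_t)mktime(watch_time);

    seconds -= 2208988800u;  /* TI's mktime counts from 1900; shift to the 1970 Unix epoch */
    seconds += 18000u;       /* the watch runs on EST; UTC is five hours (18,000 s) ahead */

    totp_timestamp = seconds / 30;  /* one TOTP step every 30 s */
}

/* Called once per 30-s interval to keep the timestamp current. */
void tick_totp(void)
{
    totp_timestamp++;
}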

The one-time password is calculated by compute_totp on line 33. Rather than writing a SHA1-HMAC from scratch, I ported the open-source implementation from Google Authenticator to the TI MSP430. Lines 39 through 50 show how a six-digit TOTP code is calculated from the 160-bit digest output of the SHA1-HMAC.

The display menu function is display_totp on line 52. The function is called when the watch first enters TOTP mode and every second after that. First, the watch will recompute the one-time password code at the start of each 30-s interval. Next, the TOTP code is displayed. The six digits of the TOTP code are more than can be shown on the bottom line of the watch. Therefore, the watch will cycle between showing “totP,” the first three digits of the one-time password, and the next three digits of the one-time password. The transitions each take 1 s, which is sufficient for a user to read all digits.
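
The cycling can be captured with a simple phase counter, sketched below. The display helper and variable names are placeholders, not the actual watch firmware API.

#include <string.h>

void display_bottom_line(const char *text);  /* placeholder for the LCD routine */

static char totp_digits[7];  /* six-digit code as text, e.g., "854410" */
static int  display_phase;   /* 0: "totP", 1: digits 1-3, 2: digits 4-6 */

/* Called once per second while the watch is in TOTP mode. */
void display_totp_cycle(void)
{
    char part[4];

    switch (display_phase) {
    case 0:
        display_bottom_line("totP");
        break;
    case 1:
        memcpy(part, totp_digits, 3); part[3] = '\0';
        display_bottom_line(part);    /* first three digits */
        break;
    case 2:
        memcpy(part, totp_digits + 3, 3); part[3] = '\0';
        display_bottom_line(part);    /* last three digits */
        break;
    }
    display_phase = (display_phase + 1) % 3;
}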

There is one element missing to display TOTP codes: I did not explain how the unique secret value is loaded into the watch. I use Google Authenticator to generate this secret value and to maintain a copy of it on Google’s servers so that I can use it to log on with TOTP.

LOGGING ONTO GMAIL

Google Authenticator is an implementation of TOTP developed by Google. It provides implementations for Android, BlackBerry, and iOS, so you can use a smartphone as a secure token. It also enables you to extend your login procedure with a one-time password. You cannot replace your standard password with a one-time password, but you can enable both at the same time. Such a solution is called a two-factor authentication procedure: you need to provide a password and a one-time password to complete the login.

As part of setting up two-factor authentication with Google (through Account Settings – Using Two-Step Verification), you will receive a secret key. The secret key is presented as a 16-character string made up of a 32-character alphabet. The alphabet consists of the letters A through Z and the digits 2, 3, 4, 5, 6, and 7. This clever choice avoids digits that can be confused with letters (8 and B, for example). The 16-character string thus represents an 80-bit key.
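
Decoding such a string is straightforward: each character contributes 5 bits, so 16 characters yield 80 bits (10 bytes). The following is a minimal sketch of a decoder for this alphabet, not the base32_decode() routine used in the firmware.

#include <stdint.h>
#include <string.h>

/* Decode a base32 string (alphabet A-Z, 2-7) into bytes.
   Returns the number of bytes written, or -1 on an invalid character. */
int base32_to_bytes(const char *in, uint8_t *out)
{
    const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    uint32_t buffer = 0;
    int bits = 0, count = 0;

    for (; *in != '\0'; in++) {
        const char *p = strchr(alphabet, *in);
        if (p == NULL)
            return -1;
        buffer = (buffer << 5) | (uint32_t)(p - alphabet);  /* add 5 bits */
        bits += 5;
        if (bits >= 8) {                                    /* a full byte is ready */
            out[count++] = (uint8_t)(buffer >> (bits - 8));
            bits -= 8;
        }
    }
    return count;  /* a 16-character key yields 10 bytes (80 bits) */
}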

I programmed this string into the TOTP design for the eZ430-Chronos watch to initialize the secret. In the current implementation, the key is loaded in the function reset_totp():

base32_decode((const u8 *)"4RGXVQI7YVY4LBPC", stotp.key, 16);

Of course, entering the key as a constant string in the firmware is an obvious vulnerability. An attacker who has access to a copy of the firmware also has the secret key used by the TOTP implementation! It’s possible to protect or obfuscate the key from the watch firmware, but these techniques are beyond the scope of this article. Once the key is programmed into the watch and the time is correctly set, you can display TOTP codes that help you complete the logon process of Google. Photo 1 shows a demonstration of logging onto Google’s two-step verification with a one-time password.

OTHER USES OF TOTP

There are other possibilities for one-time passwords. If you are using Linux as your host PC, you can install the OATH Toolkit, which implements the HOTP and TOTP mechanisms for logon. This toolkit enables you to install authentication modules on your PC that can replace the normal login passwords. This enables you to effectively replace the password you need to remember with a password generated from your watch.

Incidentally, several recent articles—which I have included in the resources section of this article—point to the limits of conventional passwords. New technologies, including one-time passwords and biometrics, provide an interesting alternative. With standards such as those from OATH around the corner, the future may become more secure and user-friendly at the same time.

[Editor's note: This article originally appeared in Circuit Cellar 262, May 2012.]

Patrick Schaumont writes the Embedded Security column for Circuit Cellar magazine. He is an Associate Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. Patrick works with his students on research projects in embedded security, covering hardware, firmware, and software.

PROJECT FILES

To download the code, go to ftp://ftp.circuitcellar.com/pub/Circuit_Cellar/2012/262.

RESOURCES

Google Authenticator, http://code.google.com/p/google-authenticator.

Initiative for Open Authentication (OATH), www.openauthentication.org.

Internet Engineering Task Force (IETF), www.ietf.org.

D. M’Raihi, et al, “TOTP: Time-Based One-Time Password Algorithm,” IETF RFC 6238, 2011.

—, “HOTP: An HMAC-Based One-Time Password Algorithm,” IETF RFC 4226, 2005.

OATH Toolkit, www.nongnu.org/oath-toolkit.

K. Schaffer, “Are Password Requirements Too Difficult?,” IEEE Computer Magazine, 2011.

S. Sengupta, “Logging in With a Touch or a Phrase (Anything but a Password),” New York Times, 2011.

SOURCES

IAR Embedded Workbench – IAR Systems

eZ430-Chronos Wireless development system and Code Composer Studio (CCStudio) IDE – Texas Instruments, Inc.