Design Solutions Research & Design Hub

Embedded System Security Live

Written by Colin O'Flynn

Coverage of Two Security Events

Colin summarizes some interesting presentations from the Black Hat conference in Las Vegas, along with an extra bonus event. This will help you keep up to date with some of the latest embedded attacks, including attacks on execute-only memory, fault injection on embedded devices, 4G cellular modems and FPGA bitstream hacking.

I know it’s not easy to stay current with all the latest embedded security news and attacks. With that in mind, this month I want to bring together a few pieces of research I thought would be of interest to you. Normally I’d focus only on topics from the Black Hat “hacker” conference in Las Vegas, NV. But this time, I’m also including some additional research from the USENIX WOOT (Workshop on Offensive Technologies) conference. WOOT took place in Santa Clara, CA shortly after Black Hat.

Execute Only Memory (XOM)
I’m going to start with a presentation from WOOT, because it’s probably the most important one for embedded developers. The paper in question is entitled “Taking a Look into Execute-Only Memory” by Marc Schink and Johannes Obermaier. The paper attacked the idea of Execute-Only Memory (XOM), which is present in different forms on many devices. Generally, this means a memory space (typically flash or ROM) from which we can execute code, but from which we can’t read data. This is almost always sold as a way to protect your sensitive code from being read out by an attacker.

To be effective, this must be enforced in hardware. The enforcement works because reads from the execute-only memory space must come from the instruction bus and not from the data bus. See Figure 1 for an example of these different memory buses. The bottom of the example processor core in Figure 1 has several buses: “ICode” is the instruction bus interface and “DCode” is the data bus interface. In theory, that means the protected memory sections should only ever return data over the ICode bus. All that said, the authors found several implementation errors in how devices decide what counts as an instruction fetch versus a data access. Some devices (the STMicroelectronics STM32F7 in particular is called out in the paper) incorrectly classify certain accesses from the debug access port logic as instruction bus accesses, and so allow reading out the protected memory.

Figure 1 – An example processor core. Note the bottom side shows different buses for Instruction (ICode) and Data (DCode) access.

More fundamentally, the authors also attacked the very idea of XOM. It’s instructive to observe the side effects of instruction execution. For example, Figure 2 shows a single unknown instruction being executed. We assume we know the input state (“1” in Figure 2), which is the values in all registers, processor flags and similar. We can also observe the state (“3” in Figure 2) after the instruction executes. You can observe this in two possible ways. The easiest is when devices still allow debugger connectivity during XOM execution. The debugger cannot see any of the instructions being executed, but it can observe the registers and SRAM. Therefore, as you single-step through the instructions, you learn a little more about each one. If debug access is not possible, the authors also demonstrated a second way, which is to fire an interrupt after each instruction executes. The interrupt handler can then observe or download the system state in a similar manner to the debugger.

Figure 2 – Observing the side-effect of instruction execution can reveal both the instruction and the arguments.

Certain instructions have certain side effects. A memory load, for example, would see the value in a register overwritten. But many other instructions would also change a register value: an addition or subtraction, for example, would also overwrite a register. However, if we had a known pattern loaded into memory and the registers, the load would be distinguishable from an addition or subtraction. Therefore, by observing side effects in a more controlled environment, it becomes possible to discover both the instruction and its arguments. This process is repeated iteratively to narrow down similar instructions that might require different starting states to tell apart. For example, there are several conditional branch instructions, and you would need to distinguish between a “branch if not equal” and a “branch if less than”.
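To make the idea concrete, here is a minimal sketch of the debugger-based variant using pyOCD. It is not the authors' tooling, just the core loop of the technique: seed the registers with distinctive values, single-step one hidden instruction, and diff the visible state. The register list and seed values are my own assumptions for illustration.

```python
# Sketch: infer an unknown XOM instruction by diffing CPU state across a single step.
# Assumes the target still permits debug access while executing from XOM, and that
# the PC currently points into the protected region.
from pyocd.core.helpers import ConnectHelper

REGS = ["r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "sp", "lr", "pc", "xpsr"]

session = ConnectHelper.session_with_chosen_probe()
with session:
    target = session.target
    target.halt()

    # Seed low registers with distinctive values so side effects are recognizable.
    for i, reg in enumerate(REGS[:8]):
        target.write_core_register(reg, 0xA0000000 + i)

    before = {r: target.read_core_register(r) for r in REGS}
    target.step()                      # execute one (invisible) XOM instruction
    after = {r: target.read_core_register(r) for r in REGS}

    # Any register that changed is a side effect of the hidden instruction.
    diff = {r: (hex(before[r]), hex(after[r])) for r in REGS if before[r] != after[r]}
    print("State changes:", diff)
```

Repeating this with different starting states (and comparing against the known behavior of candidate opcodes) is what lets the attacker narrow down which instruction was actually executed.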

The authors of the paper extended this idea to demonstrate a full read-out attack on a device, and also showed how it works against devices that disable debug access during XOM execution. The result is a demonstration of how XOM can be “reversed” by a dedicated attacker.

Fault Injection Attacks
Meanwhile, the Black Hat event had (at least) two talks on fault injection. One was my own, entitled “MINimum Failure.” I presented the work that I wrote about in the May 2019 issue (Circuit Cellar 346). If you don’t recall that article, the summary is that you can use fault injection to corrupt the processing of the wLength value of a USB control request. This allows an attacker to read back up to 65 KB of memory, which I demonstrated by recovering the private key from a Bitcoin wallet.
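As a rough illustration of the host side of that attack, the sketch below uses pyusb to issue a GET_DESCRIPTOR request with the maximum possible wLength. On an unglitched device you simply get the normal 18-byte descriptor back; the fault injection that corrupts the length handling (and its timing) is the hard part and is not shown. The vendor and product IDs are placeholders.

```python
# Sketch: ask the device for far more data than the descriptor actually contains.
# Normally the stack clamps the response to the real descriptor length; the glitch
# described in the talk corrupts that bound so additional memory leaks out.
import usb.core

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # placeholder IDs
if dev is None:
    raise SystemExit("target device not found")

# Standard IN control request: GET_DESCRIPTOR (device descriptor), wLength = 0xFFFF
data = dev.ctrl_transfer(bmRequestType=0x80, bRequest=0x06,
                         wValue=0x0100, wIndex=0x0000,
                         data_or_wLength=0xFFFF, timeout=1000)

print(f"received {len(data)} bytes")   # 18 bytes normally, far more after a successful fault
```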

Since that article, I extended that work in a few ways. First, I realized that similar processing is present in almost all USB stacks, including the example stacks that many vendors provide. I also released my open-source hardware tool that I call PhyWhisperer-USB. It enables you to easily trigger on USB data, along with some basic sniffing of USB 2.0 LS/FS/HS traffic. You can see a photo of it in Figure 3. You can learn more about PhyWhisperer-USB from its GitHub page, which includes all the hardware documentation along with example Python scripts.

Figure 3 – PhyWhisperer-USB is an open-source tool for USB 2.0 triggering and sniffing.

The second presentation was entitled “Chip.Fail” by Thomas Roth, Josh Datko and Dmitry Nedospasov. It demonstrated basic voltage fault injection attacks on various devices, which is a good reminder that you should consider fault-resistant coding techniques to prevent some of these attacks. Those techniques are something I plan to talk about in my next article, so keep an eye out for the January 2020 issue, where you can start your new year by working with fault-resistant designs.

4G Module Attacks
Another interesting presentation came from the Baidu Security Lab, entitled “All the 4G Modules Could be Hacked.” This talk is of interest to anyone who uses cellular modules in their products, because these modules are a common way of adding remote connectivity for data logging and remote control. They are often found as part of a mini-PCIe card in an embedded Linux system, but they take other forms as well. The talk covered two classes of problems: common issues with how users configure the modules, and fundamental problems in the baseband device itself. I’ll give you a few examples of each.

One of the most common problems is that the security implications of adding such a module may not be understood. Some carriers, for example, add multiple 4G devices to a network without isolating them from each other, allowing someone to scan (and connect to) other devices. As a user, you don’t necessarily know the configuration of the access point you are connecting to, so you need to assume someone else on the network can find your device. A screenshot of the port scan test the presenters performed is given in Figure 4.

Figure 4 – Port scan results can reveal open ports on 4G networks, where the carrier has not isolated clients.
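If you want a quick check of whether your own carrier isolates clients, a simple test in the spirit of what the Baidu team showed is to probe a neighboring address on the module’s subnet for common management ports. The address and port list below are placeholders; this is a minimal sketch using only the Python standard library.

```python
# Sketch: probe a neighboring address on the cellular subnet for exposed services.
# If any of these connect, the carrier is not isolating clients from each other.
import socket

NEIGHBOR = "10.64.0.23"            # placeholder: another device on the same APN
PORTS = {22: "ssh", 23: "telnet", 80: "http", 5555: "adb"}

for port, name in PORTS.items():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((NEIGHBOR, port))
        print(f"{NEIGHBOR}:{port} ({name}) is reachable")
    except OSError:
        pass
    finally:
        s.close()
```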

Assuming someone does find the device, what can they do? Part of this presentation showed how services (such as TELNET or SSH) were running with hard-coded usernames and passwords. It was possible to recover these hard-coded passwords, giving someone remote access to the system. And, because the passwords appeared to be reused across the deployment, the effort invested in breaking one of them could be leveraged into a much more widespread attack.

In addition, there are some more fundamental security implications of adding these modules. Most of them will downgrade to older (GSM or “2G”) protocols if they cannot reach a 3G/4G base-station. Unfortunately, these older standards can be easily abused to force the device to connect to a malicious cellular base station, which was also demonstrated in this talk. Such attacks are well-known, but again someone looking to simply add easy cellular connectivity to their product may not consider that an attacker could easily observe (or control) any network traffic.

Forcing the use of higher-layer encryption is a requirement for surviving in such an environment, for example by only allowing traffic carried over HTTPS or SSH. In addition, turning off 2G support may be wise to prevent these older standards from being used at all. Many carriers around the world have stopped supporting 2G to free up bandwidth for 4G, with most major carriers having already turned off their 2G networks or announced dates in the next few years to do so.
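One way to enforce higher-layer encryption regardless of what the radio link does is to pin the backend server’s certificate in the device firmware and refuse to send anything if it doesn’t match. The sketch below shows that idea using Python’s standard ssl module; the host name and fingerprint are placeholders, and a real deployment would also need a pin-rotation plan.

```python
# Sketch: only talk to the backend over TLS, and only if the server certificate
# matches a fingerprint baked into the device. A rogue base station intercepting
# the cellular link then cannot read or spoof the application data.
import hashlib
import socket
import ssl

HOST = "telemetry.example.com"                      # placeholder backend
PINNED_SHA256 = "replace-with-known-cert-fingerprint"

ctx = ssl.create_default_context()                  # still verifies the CA chain
with socket.create_connection((HOST, 443), timeout=10) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(cert).hexdigest()
        if fingerprint != PINNED_SHA256:
            raise ssl.SSLError("certificate fingerprint mismatch; refusing to send")
        tls.sendall(b"POST /log HTTP/1.1\r\nHost: telemetry.example.com\r\n\r\n")
```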

Rewriting FPGA Bitstreams
A final attack of interest was one with a very odd name. The name is depicted by three angry cat emojis, which we can’t represent on these magazine pages. So instead you can go by the pronounceable version of the name that the authors suggest: “Thrangrycat.” I’ll get to the title of the paper in a moment; you’ll see why. This attack invalidated a huge number of security assumptions in Cisco gear, which used an FPGA as a hardware root of trust. A block diagram of the Cisco setup is shown in Figure 5.

Figure 5 – Cisco’s root of trust relies on an FPGA to perform the initial device boot, along with resetting the processor during security violations.

In the Cisco setup, the FPGA bitstream is loaded from a SPI flash. The FPGA forms the “root of trust” because it then loads the basic bootloader into the application processor (the main processor). The FPGA then observes the loading of the Stage 1 bootloader and, as it loads, validates that the correct code is being loaded; in other words, that no attacker has modified the system.

If any security violation is found, the FPGA uses the reset line to kill the system. The idea here is that the FPGA is performing a hardware action to prevent a clever attacker from making low-level modifications. The FPGA has a 100-second timeout before it resets the system, and that is the origin of the title of this talk presented at Black Hat by Jatin Kataria, Richard Housley and Ang Cui: “100 Seconds of Solitude: Defeating Cisco Trust Anchor with FPGA Bitstream Shenanigans.”

But, as you know, the FPGA itself is not fixed hardware. It loads a bitstream from a SPI flash, and modifying the SPI flash allows you to modify the FPGA bitstream. Because the bitstream performs other tasks (including loading the Stage 0 bootloader), the original bitstream still needs to be present. Cisco’s assumption was that reverse-engineering the full bitstream in order to modify the design would be very difficult (which is true). Luckily, a full reverse-engineering isn’t actually needed. In this case, the attackers only need to change the reset pin output so that it is no longer asserted. This minor modification can be reverse-engineered, since it only touches the output drive configuration of that FPGA pin.

To assist with this work, they built an FPGA bitstream visualizer tool. Back in June 2014, I talked about Spartan-6 partial reconfiguration and discussed some of the bitstream format, since it is partially documented. My article was called “Partial FPGA Configuration” (Circuit Cellar 287, June 2014). But back then I didn’t have a visualization tool as nice as the one you can see in Figure 6!

Figure 6 – This tool is part of the open-source Spartan 6 bitstream reverse engineering efforts.
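The core of such a visualizer is surprisingly simple: treat the raw bitstream as a grid of bytes and render it as an image, so that headers, configuration frames, BRAM contents and padding show up as visually distinct bands. The sketch below is my own simplification and knows nothing about the real Spartan-6 frame format; the file name and row width are placeholders.

```python
# Sketch: render a raw FPGA bitstream as a grayscale image so structural regions stand out.
import numpy as np
from PIL import Image

ROW_BYTES = 256                                  # placeholder row width

with open("top.bit", "rb") as f:                 # placeholder bitstream file
    data = np.frombuffer(f.read(), dtype=np.uint8)

rows = len(data) // ROW_BYTES
img = Image.fromarray(data[: rows * ROW_BYTES].reshape(rows, ROW_BYTES), mode="L")
img.save("bitstream_map.png")
print(f"wrote bitstream_map.png ({rows} x {ROW_BYTES})")
```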

To round out the attack, they demonstrated how someone can remotely reload the SPI flash that holds the FPGA bitstream. This means a remote attacker could also disable the ability to recover the SPI flash, because reprogramming the SPI flash is done through an FPGA feature. The attacker can build a new bitstream that simply disables those I/O pins once the bitstream is booted. Recovery then requires a technician to physically reprogram the SPI flash on the affected Cisco product.

Keeping Up on More Attacks
Hopefully, this summary has given you a look at a few new attacks from 2019. It’s hard to keep up with everything that comes out (even for me), and for space reasons I can’t hope to fit every important attack into this article. But having an awareness of these attacks is useful as you design your own products.

Additional materials from the author are available at:
www.circuitcellar.com/article-materials

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • NOVEMBER 2019 #352


Colin O’Flynn has been building and breaking electronic devices for many years. He is an assistant professor at Dalhousie University and CTO of NewAE Technology, both based in Halifax, NS, Canada. Some of his work is posted on his website.

