Spotlight on the MAX32520
The new MAX32520 MCU from Maxim provides a number of security features that are normally found only in devices without public information available. Here, Colin explores a few of those features, and performs some hands-on testing to see how they hold up to the types of attacks they claim to defeat.
Previously in this column, I’ve covered a few new interesting security products. This article will be another along those lines. It follows my experience with a new microcontroller (MCU) from Maxim Integrated Products. This product has only recently become available, and seems to come with rather loaded press releases extolling how it will create the secure device of your dreams. But, behind all that is a pretty interesting product that is worth finding out more about—with the usual caveats that things might not be as rosy as the marketing department wants you to believe.
To understand what makes this device interesting, it’s also useful to understand how secure products were typically distributed in the past. There have been many secure MCUs before this device, and there will continue to be more afterwards.
These secure devices had datasheets available only under non-disclosure agreement (NDA). Part of the reason for this is that the method used to assign security levels to devices awards a certain number of points for keeping information out of public reach. The idea is that the less an attacker knows about the target, the more difficult it will be for the attacker to exploit it.
Getting access to the parts under NDA required a certain amount of interest from the MCU manufacturer in your application. This could mean minimum volumes of millions of units or more before access to the NDA parts was even considered.
This also meant that many companies had secure devices and peripherals that were very well tested. So, it’s clear many of these MCU manufacturers have the expertise and know-how to produce a secure device. But the design teams producing non-NDA devices available in the open market—or “broad market”—did not get to re-use those secure MCU blocks or peripherals.
This has left a soft spot in embedded security: Until relatively recently, the security of open-market devices you can simply buy from distributors has been rather poor. This has been shifting in recent years with the slow introduction of power-analysis resistance and other types of countermeasures. If you are developing a product and not building millions of units, you should be interested in the most security you can get from non-NDA parts.
The Maxim MAX32520 appears to be an interesting experiment by Maxim to step further in that direction. The device includes many features that are typically found in their secure (under NDA) parts such as active tamper detection, internal shields, and fault injection monitors. There is limited talk of DPA resistance, but it’s something we can explore a little bit with their development board.
Let’s take a look at the parts and then see how we can evaluate a few of the claims! I need to warn you: At the time of writing this article, the SDK and documentation had only just become available. A full reference manual is not available, for example. Hopefully, more details will become available so you can use all the features of the device.
On the security front, there are a few interesting features. The first is what they call “ChipDNA”—which is Maxim’s take on a Physically Unclonable Function (PUF). The idea of a PUF is that you have some unique “key” generated as a physical artifact of the device. The PUF can be designed such that the characteristics of the circuit will become unstable if certain invasive attacks are performed on the device. This can be used to record which devices are valid at manufacture time, or form part of a secure cryptographic attestation protocol. This is a good step to prevent someone who clones your product from having access to online resources such as firmware updates. A remote server can ensure only known valid devices are given access to such services.
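In protocol terms, that enrollment-and-attestation flow is simple: the server records key material derived from each device at manufacture time, then later challenges the device with a fresh nonce and checks the response. The sketch below illustrates the shape of the exchange on a host PC. The `toy_mac` function (FNV-1a over key and nonce) is my own stand-in for a real MAC such as AES-CMAC or an ECDSA signature; none of these names come from Maxim's SDK.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy keyed hash (FNV-1a over key || nonce). Illustration only: a real
   protocol would use AES-CMAC or an ECDSA signature over the nonce,
   with the key material coming from ChipDNA. */
uint32_t toy_mac(uint32_t device_key, uint32_t nonce) {
    uint8_t msg[8];
    for (int k = 0; k < 4; k++) {
        msg[k]     = (uint8_t)(device_key >> (8 * k));
        msg[4 + k] = (uint8_t)(nonce >> (8 * k));
    }
    uint32_t h = 2166136261u;
    for (size_t k = 0; k < sizeof(msg); k++) {
        h ^= msg[k];
        h *= 16777619u;
    }
    return h;
}

/* Server side: the key material was enrolled at manufacture time, and
   the device is accepted only if its response matches a fresh nonce. */
int server_accepts(uint32_t enrolled_key, uint32_t nonce, uint32_t response) {
    return response == toy_mac(enrolled_key, nonce);
}
```

A real deployment would never reuse nonces (to block replay) and would keep the enrolled key material server-side only; a cloned device without the PUF-derived key cannot produce valid responses.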
Another use of the PUF is to encrypt firmware on the device. If an attacker is able to read the firmware out of the MCU, it will be encrypted with a key that is unique per device. This means the internal flash can be encrypted/decrypted on-the-fly with AES-256 using the PUF key. In fact, the datasheet claims this gives you the “ultimate resistance against reverse-engineered based attacks.”
This sounds great, but the only guaranteed protection is against an attacker who physically opens the device and reads the flash memory directly. This is shown in Figure 1 at location A. In reality, probing the die is a fairly complicated attack vector. But if an attacker finds a vulnerability that allows them to read from the flash memory-space, the MCU will already have loaded and be using that magic PUF key. (An example of such a fault attack reading unintended memory is described in my article “Attacking USB Gear with EMFI” in Circuit Cellar 346, May 2019.) This means the decryption block will happily pass the decrypted data to the MCU, which then passes it to the attacker, as at location B in Figure 1.
THERE’S NO MAGIC
The ChipDNA PUF feature alone does not magically protect against reverse engineering attacks. While it’s a useful additional defense against certain attacks, don’t think it will somehow stop an attacker from easily stealing your firmware if you accidentally leave a door open! Carefully enabling the decryption only when needed and ensuring you clear the decryption key when under attack can be helpful in making the most of the feature.
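That discipline can be as simple as loading the key only for the moment it's needed and zeroizing it the instant a tamper event fires. A minimal host-runnable sketch of the idea follows; the function names are hypothetical, not the Maxim ChipDNA API.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical key buffer -- filled from the PUF only when a
   decryption is actually needed. */
static uint8_t session_key[32];

void load_key_from_puf(void) {
    /* Real firmware would call the ChipDNA key-retrieval routine;
       a fixed pattern stands in for it here. */
    memset(session_key, 0xA5, sizeof(session_key));
}

/* Wipe through a volatile pointer so the compiler cannot optimize the
   stores away -- suitable for calling from a tamper-interrupt handler. */
void zeroize_key(void) {
    volatile uint8_t *p = session_key;
    for (size_t i = 0; i < sizeof(session_key); i++) p[i] = 0;
}

/* Returns 1 if every key byte reads back as zero. */
int key_is_wiped(void) {
    for (size_t i = 0; i < sizeof(session_key); i++)
        if (session_key[i] != 0) return 0;
    return 1;
}
```

The `volatile` wipe matters: a plain `memset` of a buffer the compiler thinks is dead can legally be optimized out, leaving the key in RAM.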
ChipDNA does provide a useful method of ensuring only valid devices have access to online or value-added services, such that cloned devices are not competitive in the market. This requires more effort in your back-end services—assuming your device connects to them—to lock out counterfeit devices.
Another feature highlighted in the datasheet is serial flash emulation. Many MCUs will boot from SPI flash, and the MAX32520 can emulate those SPI flashes. This feature is interesting because it allows you to retrofit an existing unsecure MCU with a secure boot platform. So, if you already have a design booting from SPI flash, the MAX32520 can replace that SPI flash and first verify that a given firmware image should be booted.
Of course, this feature still means you’ll likely be sending unencrypted firmware to your “dumb” host MCU that doesn’t support secure boot. So, keep in mind this feature doesn’t immediately prevent someone from copying your design! Could they simply write that firmware into an old dumb SPI flash in their cloned product?
A much smarter design is to implement some functionality in the MAX32520, such that your system would only function if both the “dumb” MCU alongside the MAX32520 are copied. It should be much more difficult to copy the MAX32520 compared to observing the unencrypted SPI flash data transfers.
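The binding can be a simple challenge-response: the host MCU refuses to enable full functionality unless the MAX32520 answers correctly. A toy sketch follows; the keyed response is a xorshift mix of my own invention standing in for a real MAC, and in a real product the two sides would run on separate chips rather than in one file.

```c
#include <stdint.h>

/* Illustrative shared secret; a real design would derive it from the
   MAX32520's PUF so it can't simply be read out of flash. */
#define SHARED_SECRET 0x5EC0DE5Au

/* Side that would run on the MAX32520: a toy keyed response
   (xorshift32 mixing, standing in for something like AES-CMAC). */
uint32_t secure_mcu_respond(uint32_t challenge) {
    uint32_t x = challenge ^ SHARED_SECRET;
    x ^= x << 13; x ^= x >> 17; x ^= x << 5;
    return x;
}

/* Side that would run on the "dumb" host MCU: refuse to enable full
   functionality without a valid response. (Colocated here only so the
   sketch is runnable; on real hardware the secret lives in one chip.) */
int host_verify(uint32_t challenge, uint32_t response) {
    return response == secure_mcu_respond(challenge);
}
```

A cloner who copied only the SPI flash contents would fail this check, because the response depends on a secret that never appears on the flash bus.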
The MAX32520 uses entirely internal oscillators to avoid allowing an attacker the ability to control an external oscillator. This makes it impossible to perform attacks such as clock glitching, and may make power analysis attacks more difficult, since the device could internally be adjusting its clock frequency. We’ll investigate that a little bit later. The MAX32520 also supports several tamper sensors. Let’s take a look at them and see how well they work in practice.
External Tamper: The external tamper sensor is fairly straightforward. A random pattern is sent out on one pin, and must be observed on the second pin in order for the device to function. I captured a portion of that waveform in Figure 2; you can see it looks like a random binary pattern at a fundamental rate of about 3kHz with the default SDK example. You can change the frequency via the register setting.
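Conceptually the monitor is just a pattern generator on one pin and a comparator on the other. Here's a host-runnable sketch using a 16-bit LFSR as a stand-in for the device's (undocumented) pattern source:

```c
#include <stdint.h>

/* 16-bit Galois LFSR standing in for the device's random pattern
   generator -- the MAX32520's actual source is undocumented. */
static uint16_t lfsr = 0xACE1u;

static int next_pattern_bit(void) {
    int bit = lfsr & 1;
    lfsr >>= 1;
    if (bit) lfsr ^= 0xB400u;
    return bit;
}

/* Drive 'nbits' of the pattern out and compare what comes back on the
   sense pin. If 'stuck' is 0 or 1 the return line is stuck at that
   level (e.g. shorted by an attacker); if it is -1 the loop is intact.
   Returns 1 if a tamper event was flagged. */
int simulate_monitor(int nbits, int stuck) {
    int tripped = 0;
    for (int i = 0; i < nbits; i++) {
        int driven = next_pattern_bit();
        int sensed = (stuck >= 0) ? stuck : driven;
        if (sensed != driven) tripped = 1;  /* mismatch = tamper */
    }
    return tripped;
}
```

A line that has been cut or shorted to a fixed level can't reproduce the random pattern, so any stuck level trips the monitor within a few bits.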
I didn’t evaluate the random pattern—let’s assume that is OK! The main limitation of the external tamper sensor will depend on your use case: it is only active while the device is powered. If an attacker can fully remove the power supply, open the case and short the two tamper pins to each other—your device will happily boot. If you plan on using the external tamper wire, ensure you have sufficient battery power to keep the MCU and the tamper system running. Of course, since this is a low-power MCU, running off a small battery for a long period is reasonable. But that does mean you need to perform a careful low-power design for the external tamper feature to be useful.
Internal Tamper: The device also claims multiple internal tamper sensors. Some of them are pretty straightforward, such as checking the device is running inside valid temperature and voltage ranges. It also adds more advanced features such as an active die shield, as well as various fault detectors. The provided API lists only the “digital fault detector,” but the security monitor (SMON) header files and SVD file (SVD provides register bits for debuggers) tease a lot more interesting information.
The “Digital Fault Detector (DFD)” has an API call in the provided peripheral library with the SDK, and it becomes enabled with the MXC_F_SMON_INTSCN_DFD_EN bit. In addition, there is a low and high voltage detector for the VDD core, and a voltage glitch detector.
First, I’ll use the provided SDK to enable the DFD. I’ve implemented a simple double-loop that will help me see if any faults have affected the program flow, as in Listing 1. I’ve previously discussed this type of fault attack loop in this column. I’ve also printed the status of the security monitor register to see if any changes occurred.
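The double-loop canary boils down to nested loops whose final count should be exactly 1,000,000. A host-runnable sketch of that idea (not the exact Listing 1 code) looks like this:

```c
#include <stdint.h>

/* Fault-detection canary: nested loops with volatile counters so the
   compiler can't collapse them into a single assignment. Any glitch
   that skips an increment or corrupts a loop variable leaves 'total'
   different from 1,000,000. */
uint32_t glitch_loop(void) {
    volatile uint32_t i, j, total = 0;
    for (i = 0; i < 1000; i++)
        for (j = 0; j < 1000; j++)
            total++;
    return total;
}
```

On an unglitched run the function returns exactly 1,000,000; the firmware version prints this total alongside the run number and the SMON status each pass, which is what the columns in Listing 2 show.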
To start with, I used electromagnetic fault injection (EMFI) using the setup of Figure 3 to see if I could affect the loop. This has the development board mounted onto my UFO target board. See the ChipWhisperer project (www.newae.com/chipwhisperer) for more details of that board. The example output in Listing 2 shows that a few loops appeared to have invalid counts without the security monitor tripping. Unfortunately, I can’t give you a more definitive result. That’s because there are few details on how this is implemented, and the documentation shows only functions to enable the DFD feature and read a status. While I experimented with all settings—as well as some hidden flags such as a “voltage glitch enable”—it never fully stopped the glitches from working.
1000000 1 0
1000000 2 0
993687 3 0
960420 4 0
LISTING 2 – If the loops of Listing 1 execute correctly, “t” will have been incremented 1,000,000 times. Note: Two outputs with incorrect values showing faults were injected into the flow.
The potential for an effective glitch detector is a good reason to keep the MAX32520 in mind. But be sure to validate how it works in practice in your product. This is important not only to test the effectiveness of the MAX32520, but also to confirm you have configured it correctly. As you can see from my experiments, the bare minimum configuration may not be enough.
What would a secure device be without a series of crypto accelerators? The MAX32520 is no different of course! It lists features such as AES, SHA, DES, ECC and RSA accelerators. You should note that the RSA “accelerator” is actually software-based (presumably ROM code), but as of yet there is no documentation for using it.
With that in mind, let’s see how we can measure the power consumption of their AES implementation. This should help us see if complicated counter-measures are implemented in the device, and if we can see an interesting power signature that is worth exploring in more detail.
A close-up of their development board mounted on my CW308 UFO Board is shown in Figure 4. To help reduce noise, I’m feeding 1.2V into the VCore supply pin, and have removed the “regulator output” capacitors. Because I’m feeding in slightly more than the specified core voltage, the internal regulator should turn off, resulting in a cleaner power trace.
You can see the power traces of an AES encryption in Figure 5. Unlike many classic MCUs, there is no strong or obvious signature of the accelerator block. This is a good sign that the accelerator may have a harder power signature to detect, but more effort would be needed to see if it’s possible to detect the operation of it, and also to break the accelerator itself. We can hope this device isn’t vulnerable to normal attacks such as a classic differential power analysis attack.
TOWARDS BROAD MARKET MCUS
The MAX32520 appears to be a shift toward more open secure MCUs. As you can see from this article, there are still many features with too little documentation. This hampers the ability to perform critical external security evaluations.
In addition, devices like this will always be vulnerable to the classic problem of user misconfiguration. For example, while the device has internal flash memory encryption, read requests issued by a software (logic) bug will be transparently decrypted by the security system. To correctly use these devices, you need to understand what the security features are buying you.
All that said, the inclusion of many more advanced features, without requiring an NDA to see some details of them, is a strong step in the direction we need to help design secure products. I know many people (myself included) are not designing products in industries that involve the volumes needed to get classic secure MCUs under NDA. So, keep an eye out for more details of the MAX32520 as they become available. You might find that you can finally get some real security features in a broad market MCU. And we can hope that other vendors will be in hot pursuit of this more open approach to secure devices, giving us the ability to evaluate and choose between multiple such devices.
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MAY 2020 #358
Colin O’Flynn has been building and breaking electronic devices for many years. He is an assistant professor at Dalhousie University, and also CTO of NewAE Technology, both based in Halifax, NS, Canada. Some of his work is posted on his website (see link above).