Hacker vs. Tracker
In this article, Colin takes a look at a recently demonstrated fault injection attack on the AirTag device. While the AirTag alone has a limited attack surface, it opens the door to a variety of additional analysis. He looks at the process of going from consumer device, to identified vulnerability, to full bypass of the hardware security measures.
The Apple AirTag is a small tracking device, designed to help you find your lost keys or wallets. But to extend the reach beyond just your phone’s wireless range, the AirTag can communicate with a network of Apple products to help get tags (which are presumably attached to something more valuable) back to their owners.
The AirTag is also an interesting consumer device, and makes a great “target” for this column. In this case, I’m going to discuss some analysis of the physical device itself, along with showing you how quickly researchers were able to unlock the device to dive deeper inside it. I’ll also explain some suggested countermeasures that could be applied with this same hardware.
Because this device is new, analysis and research around it is rapidly expanding. As a result, there may be even more news released by the time this article goes to press. All that said, it’s still valuable to understand how these types of attacks start by gaining code access through physical means, even if attacks discovered later don’t require physical access. If you’re developing products yourself, it’s important to consider this threat model as part of your evaluation of a device.
Because the AirTags are relatively low cost, they’re also a good way to “play along” yourself. An attack on an automotive ECU, as I’ve covered in previous articles, is a lot more expensive to do “for fun.” (An example automotive ECU attack article is “Low-Level Automotive ECU Security,” Circuit Cellar 364, November 2020 [1].) In contrast, you can pick up some AirTags quite cheaply if you want to follow along here.
AIRTAG TEARDOWN
The AirTag is built around a thin (0.3mm) PCB supported by a plastic holder, which also serves to hold the various antennas in position. Luckily, if you remove the battery from the AirTag and remove the rear plastic cover (it can just be pried out), you’ll be rewarded with a series of test points, as shown in Figure 1. I’ve already annotated the test point numbers so I can refer to them later. But besides that, there isn’t anything too interesting on this side.
These test points will be enough to attack the device physically, but for our own curiosity, let’s go further. The rear enclosure can be bent back as shown in Figure 2, which allows you to remove the PCB without flexing it. You’ll need to remove the plastic carrier (which will break off the antennas), and you’ll finally be rewarded with the rear-side view shown in Figure 3.
There have already been several teardowns of the AirTag, so I can refer you to various online resources, such as the iFixIt teardown [2] or Adam Catley’s page, where he keeps detailed notes on the device’s construction [3]. I even did an AirTag teardown of my own, posted on my website [4]. These links are all included in RESOURCES at the end of this article.
From the view of the PCB in Figure 3, we can see that one of the main devices is an nRF52832CIAAE, a Bluetooth-enabled microcontroller (MCU) from Nordic Semiconductor in a wafer-level chip-scale package (WLCSP). This type of packaging could be vulnerable to Body Biasing Injection (BBI), but it turns out we don’t even need to go to such effort for a successful fault on this device. In this case, the prior work on the nRF52832 had already been done.
nRF READ PROTECTION FAULT
The nRF52832 was known to be vulnerable to a fault injection attack, discovered and detailed by the security researcher “LimitedResults” in the blog post “nRF52 Debug Resurrection (APPROTECT Bypass)” [5]. This post details how the debug protection of this device works, along with a successful voltage fault injection (glitch) attack. It was published in June 2020, so while the attack was relatively new, any serious user of this chip should have considered it part of their threat model.
This means that, without any additional effort, we know the device may be vulnerable to attacks that use voltage fault injection to unlock it. These types of faults are the classic example where a huge amount of expertise and effort is required to figure out where to place the “X” mark, but once that’s done, the work is easily recreated by others. In this case, LimitedResults carefully tracked the loading of various option settings, using power analysis to understand the likely boot process. From there, the specific timing locations were identified and confirmed to be effective in bypassing the code protection.
While I didn’t have an nRF52832 target already built, I did have the similar Nordic Semiconductor nRF52840 as part of my NewAE Technology CW308T-NRF52840 target, so I could easily generate the trace of the boot process shown in Figure 4. I’ve discussed monitoring the boot trace for this purpose in previous articles, such as “Recreating Code Protection Bypass” (Circuit Cellar 338, September 2018) [6]. The idea here is that you can actually see various operations occurring, and you can also watch what effect your glitch has on the power trace. Taken alone, Figure 4 just shows an example of the boot, not the actual detail of each operation. That’s the trickier part, and I’ll again refer you to LimitedResults’ blog post [5], which details the specifics of the operations.
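If you want to try capturing a similar boot trace yourself, a minimal sketch with the ChipWhisperer Python API looks something like the following. The trigger wiring and the sample settings here are my assumptions; you would adjust them for your own target and probe point.

import chipwhisperer as cw

# Minimal boot-trace capture sketch (assumes a ChipWhisperer-Lite with
# its measurement probe on the target's core power rail).
scope = cw.scope()
scope.default_setup()

scope.adc.samples = 24400        # CW-Lite maximum sample depth
scope.adc.decimate = 10          # boot is slow, so stretch the window
scope.trigger.triggers = "tio4"  # trigger wired so it fires at power-up
                                 # (an assumption; adjust for your wiring)

scope.io.target_pwr = False      # hold the target off...
scope.arm()
scope.io.target_pwr = True       # ...then power it up to start the boot
scope.capture()                  # returns True if the trigger timed out

trace = scope.get_last_trace()   # the boot power trace, ready to plot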
FROM THEORY TO PRACTICE
Applying this glitch requires you to understand the test points on the board, which I teased in Figure 1. In this case, I mapped the test points after removing the WLCSP device, which made tracing the connections much easier. This step isn’t strictly required, but the low cost of the devices made it easiest to just destructively map them!
From Figure 1, the first thing your board will need is power. You need to apply +3.3V on both VDD1 and VDD2; these pads are not connected together. From there, the minimum signals you will require to talk to the device are SWDIO/SWCLK on test pads 36/35, respectively. Because these signals are 1.8V, test point 34 serves as a 1.8V reference for your debug tool, if required.
But as we already know, simply using a debug tool on the SWD connection won’t be successful. So, the final piece of information is that test points 26/27/28 connect to some of the nRF’s power rails. In particular, test point 28 goes to the interesting power rail discussed by LimitedResults [5], as confirmed on this device by Thomas Roth (also known as “stacksmashing” on Twitter and YouTube).
Thomas Roth was the first to apply this to the AirTag, on May 11, 2021, only very shortly after the product’s release. Thomas made a detailed YouTube video of the process, which was amazingly demonstrated using a Raspberry Pi Pico board to drive the glitch [7], making for a very low-cost fault injection tool. I’d highly recommend watching it for a complete demonstration.
Thomas used a crowbar glitch circuit to pull this test point to ground at an appropriate moment. (You can see my paper “Fault Injection using Crowbars on Embedded Systems” [8] for other examples of crowbar glitches.) Once Thomas showed how well this worked “in circuit,” it helped open the floodgates to other researchers recreating it.
In short order, several other demonstrations popped up on the AirTag: Lennert Wouters [9] used a NewAE Technology ChipWhisperer-Lite (which I’ll use here) to recreate it, and Willem Melching [10] released an example of driving the glitch from the very low-cost STM32F1 “Blue Pill” development boards from STMicroelectronics.
An example of the ChipWhisperer attached to the AirTag is shown in Figure 5. Here, I’m controlling the AirTag’s power from the ChipWhisperer. This arrangement was based on Thomas’ example setup, which Lennert modified to work with the crowbar circuit built into the ChipWhisperer-Lite (and which I recreated similarly here). The trigger is based on the 1.8V power signal at test point 34 coming from the nRF chip, which may require voltage translation for the ChipWhisperer-Lite (which uses 3.3V logic, requiring at least 2.0V to register a “high”). In Figure 5, I’ve connected it directly, and found it seemed to still trigger OK. You “should” use a true voltage translator, or even just a simple MOSFET-based one, instead of my questionable hack!
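For reference, the setup just described can be sketched roughly as below with the ChipWhisperer API. The search range and glitch width are placeholder assumptions only: working values depend on your exact hardware, and I’m deliberately not publishing a turn-key set. The debug_port_open() check is defined in the OpenOCD sketch that follows.

import chipwhisperer as cw

scope = cw.scope()
scope.default_setup()

# Crowbar glitch: the CW-Lite's low-power MOSFET is wired to the nRF52
# rail at test point 28, and tio4 to the 1.8V rail at test point 34
# as the trigger (matching the wiring described above).
scope.io.glitch_lp = True
scope.glitch.clk_src = "clkgen"
scope.glitch.output = "enable_only"      # glitch width is set via 'repeat'
scope.glitch.trigger_src = "ext_single"  # one glitch per arm, on trigger

# Placeholder sweep -- the real offsets must be found experimentally
for ext_offset in range(0, 200_000, 25):
    scope.glitch.ext_offset = ext_offset  # cycles from trigger to glitch
    scope.glitch.repeat = 7               # glitch length in clock cycles

    scope.io.target_pwr = False           # power-cycle the AirTag
    scope.arm()
    scope.io.target_pwr = True            # boot starts, trigger fires
    scope.capture()

    if debug_port_open():                 # SWD check: see the next listing
        print("Possible unlock at ext_offset =", ext_offset)
        break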
Assuming we have the correct timing for the glitch, we can use the SWD interface with a debug tool (such as OpenOCD) to confirm the status. This interface isn’t shown in Figure 5, but it connects to the brown, red, and black wires, which carry the SWD signals and ground. Once the protection is bypassed, we could either reprogram the device to unlock the debug interface, or even potentially single-step through the code. (Note that I’m not including the code for a turn-key attack here, because it’s not required to push this article forward.) This might beg the question: what is the real threat model here?
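Before getting to that question, here is roughly what an automated status check can look like. This sketch is illustrative only: the interface .cfg file depends on your debug probe, and exactly what a locked part reports varies between OpenOCD versions, so treat the output matching as an assumption to verify on your own setup.

import subprocess

def debug_port_open(timeout_s=5):
    """Return True if OpenOCD can examine the nRF52 core over SWD."""
    cmd = [
        "openocd",
        "-f", "interface/cmsis-dap.cfg",  # substitute your SWD probe here
        "-f", "target/nrf52.cfg",
        "-c", "init; targets; exit",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return False
    # On a still-locked part, OpenOCD fails to examine the core and
    # exits with an error; on success it reports the Cortex-M4 target.
    output = result.stdout + result.stderr
    return result.returncode == 0 and "Cortex-M4" in output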
THREAT MODEL
What threat model is Apple worried about? It’s unlikely that Apple is worried about someone “counterfeiting” the AirTag. The big reason is that part of the AirTag is a proprietary ASIC, the so-called “U1” chip. The U1 chip is the actual precision positioning solution, so a counterfeit version simply cannot be built from off-the-shelf parts. It’s also worth noting that the AirTag “ecosystem” is open to third-party tags. So, counterfeiters have little incentive to copy the AirTag design verbatim, since they could freely create their own “equivalent” tag and likely shave off more features (and save more money).
Instead, the major security impact is that the unlock allows deeper inspection of the code. It appears the default code does not prevent modified versions from booting. Even if attackers have total control of a device, you can make it more difficult for them to reverse engineer your code and program flow. Ideally, the part you select supports some form of “secure boot,” which, for example, validates that a user-level bootloader has not been modified before booting it.
The nRF52832 device used by Apple didn’t support any sort of “secure boot” feature, meaning countermeasures would always have been possible to bypass with additional effort. If, for example, a signature were checked early in the application code, it would have been possible to modify the flash contents ahead of the signature check.
The other design choice I haven’t yet discussed is that one of the parts in Figure 3 is a standard SPI flash chip. Looking at its contents showed that the data and code present did not appear to be encrypted. The SPI flash did not store the “user code” the nRF52832 ran; that code was stored in the nRF52832’s internal flash instead.
Unencrypted SPI flash is often seen where an older device boots directly from SPI flash but doesn’t support encryption. Because the nRF52832 was not booting from the SPI flash, any sort of encryption layer could have been added.
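A quick way to judge whether a dump like this is encrypted is to look at its byte-level entropy (alongside simply searching for readable strings). Here is a minimal sketch; the dump filename is a placeholder, and you’d obtain the image with whatever SPI flash reader you prefer.

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    # Bits per byte: close to 8.0 for ciphertext, noticeably lower
    # for plain code/data containing strings, tables, and padding
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

with open("airtag_spi_dump.bin", "rb") as f:  # placeholder filename
    dump = f.read()

print(f"Entropy: {shannon_entropy(dump):.2f} bits/byte")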
PREVENTION
Ultimately, there is no perfect prevention with the chosen device. Given that this debug unlock was a well-known attack on the nRF52832, I assume Apple was aware of the threat to the MCU, but did not consider it an actual threat to the user. This makes sense when you consider that AirTags primarily require a connection to the backend to be useful, and, as mentioned, there was little commercial risk of them being cloned.
All that said, it’s also somewhat surprising that some low-effort blocks were not put in place. Even when you know the end system is attackable with enough effort, having some minor defense may make your product less appealing for the type of “hobby hacker” that is mostly driven by getting interesting results.
The SPI flash I mentioned being totally in the clear was surprising to me. Even basic encryption of the data accesses should have been possible on the nRF52832 without substantial overhead. This would have prevented easy detection of useful strings in the code. It could also “key” the SPI flash contents to a unique ID stored in flash, which would make it more difficult for an attacker to swap SPI flash or firmware images to see various effects.
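As a sketch of what that could look like, the following build-side script encrypts an SPI flash image with AES-CTR under a key derived from a master secret and a per-device ID. This is purely illustrative, not any scheme Apple uses; the master-key handling and the UID source are my assumptions.

import hashlib
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

MASTER_KEY = os.urandom(32)  # in practice, a secret held in internal flash
DEVICE_UID = bytes.fromhex("1122334455667788")  # hypothetical unique ID

def encrypt_flash_image(image: bytes) -> bytes:
    # Tie the image to this device: key = H(master_key || unique_id),
    # so a dump from one unit won't decrypt (or run) on another
    key = hashlib.sha256(MASTER_KEY + DEVICE_UID).digest()
    nonce = os.urandom(16)  # stored in the clear next to the ciphertext
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(image) + enc.finalize()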
The other surprising omission was some sort of internal signature or integrity check on boot. As mentioned, this would be fundamentally possible to bypass with debug access on this device, but it is still valuable for preventing remote attacks, where an attacker may only have the ability to modify certain segments.
A typical method is to store a known hash of the flash section; upon booting, the hash of flash is calculated and compared with the known-good value. The more complete solution is to use signatures to validate that the “known good” value comes from a trusted source, which would be part of the secure boot process. If the device lacks hardware support for securely storing the hash or key, or for accelerating these algorithms, you might be tempted to simply skip this step. But even a very lightweight hash algorithm provides better security than doing nothing. Managing updates to the trusted hash value can be handled as part of your “regular” firmware update mechanism. The firmware update mechanism then becomes the target for an attacker, of course, but you’ve helped close a few potential open windows by implementing any sort of boot validation.
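As a minimal sketch of the idea (the image layout is hypothetical, not the AirTag’s actual flash map), the build tool appends the hash and the boot code recomputes it:

import hashlib

def append_integrity_hash(image: bytes) -> bytes:
    # Build side: append a SHA-256 digest to the application image
    return image + hashlib.sha256(image).digest()

def verify_image(blob: bytes) -> bool:
    # Boot side (shown in Python; on the device this would be C):
    # recompute the hash over the image and compare to the stored value
    image, stored = blob[:-32], blob[-32:]
    return hashlib.sha256(image).digest() == stored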
Another surprising miss was the lack of a “relocker” on the code read protection. Many devices I’ve seen will actually check that the firmware protection is set correctly on boot, and either refuse to boot or relock the device. The Trezor bitcoin wallet I looked at in my article “Attacking USB Gear with EMFI: Pitching a Glitch” (Circuit Cellar 346, May 2019) [11] did this relocking procedure, for example. While this is again code that could be “patched out,” it increases the effort once more.
From a nihilist perspective, these additional countermeasures would be irrelevant: they could be bypassed, so why bother with the effort of adding them? If they required substantial effort I would agree, but again, these should be relatively lightweight features to add. The only major unknown is that there could have been power consumption or code size trade-offs that made adding these features impossible in practice; I can only guess.
THE NEXT AIRTAGGER
I doubt that security researchers are even close to done with the AirTags. In fact, by the time you read this, I know of at least one interesting presentation that will have occurred. Jiska Classen and Alexander Heinrich will have given the talk “Wibbly Wobbly, Timey Wimey—What’s Really Inside Apple’s U1 Chip” [12] at Black Hat USA 2021, which covers details of the Apple U1 chip I mentioned earlier, also present on the AirTag. Because this chip is proprietary, all information on the U1 chip comes via reverse engineering efforts.
In addition, watch out for more interesting work building on Thomas Roth’s initial effort. The really interesting security work on these devices has only just begun, in large part due to the ability to perform firmware analysis of the actual running code. The most threatening attacks for AirTag users will be ones that could happen purely “over the air.” Because the AirTag uses a Bluetooth connection, I expect there may be a few loose ends in the code or protocol, but time will tell. Hopefully, such over-the-air bugs are ones that can be easily patched by Apple, meaning that this third-party security evaluation ultimately results in a more secure product going to consumers.
As information on the U1 chip is published, we may even see users being able to work with the AirTags in more detail. The very low cost of the AirTags means they could be a valuable precision wireless positioning system. If you’re interested in experimenting with a relatively new consumer IoT device, I think you’ll find the AirTag is a fun experiment. The links I’ve provided on Circuit Cellar’s article materials webpage should get you started by enabling you to review the demonstrations by those who started the process.
And, if you’re interested in learning more about fault injection methods, I’ve recently released a book called The Hardware Hacking Handbook with No Starch Press [13], in which I describe in more detail how these types of voltage fault injection attacks can be used. The current release schedule means it should be available as this article is in your hands, and I hope you find it a valuable addition.
RESOURCES
References:
[1] Colin O’Flynn. “Low-Level Automotive ECU Security.” Circuit Cellar 364, November 2020.
[2] Sam Goldheart (iFixIt). “AirTag Teardown: Yeah, This Tracks.” 2021.
https://www.ifixit.com/News/50145/airtag-teardown-part-one-yeah-this-tracks
[3] Adam Catley. “Apple AirTag Reverse Engineering.” 2021.
https://adamcatley.com/AirTag.html
[4] Colin O’Flynn. “Apple AirTag Teardown & Test Point Mapping.” 2021.
https://colinoflynn.com/2021/05/apple-airtag-teardown-test-point-mapping
[5] LimitedResults. “nRF52 Debug Resurrection (APPROTECT Bypass).” 2020.
https://limitedresults.com/2020/06/nrf52-debug-resurrection-approtect-bypass
[6] Colin O’Flynn. “Recreating Code Protection Bypass.” Circuit Cellar 338, September 2018.
[7] Thomas Roth (‘stacksmashing’). “How the Apple AirTags were hacked.” 2021.
https://www.youtube.com/watch?v=_E0PWQvW-14
[8] Colin O’Flynn. “Fault Injection using Crowbars on Embedded Systems.” 2016.
https://eprint.iacr.org/2016/810.pdf
[9] Lennert Wouters. Twitter demonstration of the AirTag glitch. 2021.
https://twitter.com/LennertWo/status/1392932635333255170
[10] Willem Melching. “AirTag dumper.” 2021. https://github.com/pd0wm/airtag-dump
[11] Colin O’Flynn. “Attacking USB Gear with EMFI: Pitching a Glitch.” Circuit Cellar 346, May 2019.
[12] Jiska Classen and Alexander Heinrich. “Wibbly Wobbly, Timey Wimey—What’s Really Inside Apple’s U1 Chip.” Black Hat USA 2021.
https://www.blackhat.com/us-21/briefings/schedule/index.html#wibbly-wobbly-timey-wimey–whats-really-inside-apples-u-chip-23328
[13] Colin O’Flynn. The Hardware Hacking Handbook. No Starch Press.
NewAE Technology | www.newae.com
Nordic Semiconductor | www.nordicsemi.com
STMicroelectronics | www.st.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • September 2021 #374
Colin O’Flynn has been building and breaking electronic devices for many years. He is an assistant professor at Dalhousie University, and also CTO of NewAE Technology, both based in Halifax, NS, Canada. Some of his work is posted on his website (see link above).