
Testing Timing Attacks on Access Control

Written by Colin O'Flynn

Premises Protection

How do you apply attacks to complete systems, and how do you test systems you might use in your day-to-day life? In this article, Colin discusses how he confirmed that an access control system he was considering using doesn’t have an exploitable timing attack.

  • How to test for a timing attack on an access control system

  • How do access control systems work?

  • What is the “Wiegand interface”?

  • Why timing is important in this attack

  • What are the shortcuts we can use?

  • How to test the system

  • How to examine the statistical results

  • Zilog Z80

  • Kantech KT-300

  • Arduino board

  • LEDs

In the summer of 2021, I moved into a new office with my company. Our new location had a full access control installation—card readers, backend controllers and everything else needed to make it work. Of course, this piqued my interest and led to a few different experiments. I want to share one of them with you here.

In this article I’m going to look at how you can (try) to perform a timing attack on an access control system. In this case my system does not appear to have any exploitable timing leakage. That’s a failure (from an attacker perspective), but a success (from the designer perspective). Whether you think this is a failure or success, the process of how you actually perform this test is something that will be of interest for everyone.


Before I jump into the attack, I’ll give a bit of a crash course about access control systems. The following will be very bare-bones because we don’t need to get too detailed for our purpose. The system that I’m evaluating is a relatively old one, which uses readers that communicate using the so-called “Wiegand interface.” This was the format used with some of the (now older) proximity cards, which simply transmit a fixed ID to the card reader. This fixed ID is compared to a backend database of valid IDs, and if it matches, the door unlocks.

These cards are not secure. Anyone can read the ID from the card, and most of them even print the ID right onto the physical card itself. This means that if someone even looks at the ID badge (let alone uses a sniffer or card reader), they have all the information required to “clone” these old types of badges. But what if an attacker doesn’t have that information? I wanted to explore whether we could use a timing attack to recover some of the codes, as this would mean that an attacker wouldn’t even need to see a valid card in order to be successful with their attack.

The cards I started with are the most basic types of cards, which use the “26-bit” format. In this format there are 8 bits of “facility code” followed by 16 bits of “card code.” The final two bits (to make a 26-bit format) are parity bits and not related to security. The printed code is often split into the facility and card code. For example, one card that I have is marked “12:28732,” which means a facility code of 12 and a card code of 28732. The facility code may also be written in hex (so you would see a card marked “0C:28732”).
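As a concrete sketch, here is how a 26-bit frame could be assembled from the printed codes. The parity rules used (a leading even-parity bit covering the first 12 data bits, and a trailing odd-parity bit covering the last 12) follow the commonly documented 26-bit layout; the function names are my own.

```c
#include <stdint.h>

/* Returns 1 if the low 12 bits of 'bits' contain an odd number of 1s. */
static int parity12(uint32_t bits)
{
    int count = 0;
    for (int i = 0; i < 12; i++)
        count += (bits >> i) & 1;
    return count & 1;
}

/* Pack a facility code and card code into a 26-bit Wiegand frame:
   [even parity][8-bit facility][16-bit card][odd parity], MSB first. */
uint32_t make_wiegand26(uint8_t facility, uint16_t card)
{
    uint32_t data = ((uint32_t)facility << 16) | card;  /* 24 data bits */
    uint32_t even = parity12(data >> 12);  /* even parity over first 12 data bits */
    uint32_t odd  = !parity12(data);       /* odd parity over last 12 data bits */
    return (even << 25) | (data << 1) | odd;
}
```

For the example card marked “12:28732,” this packs facility 12 and card 28732 into a single 26-bit value ready to clock out over the wire.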

Many variations of this code exist. For example, the Kantech system I’ll be demonstrating on normally uses their proprietary “Extended Secure Format” (XSF) instead of the standard Wiegand 26-bit format. The XSF format has a different number of bits and a different “layout,” but it can be cloned just as easily. And again, the “secret information” is printed right onto the badge or tag by default! For simplicity, I’ll only be using the Wiegand 26-bit format in the following examples, but the same tests could be done on other formats.

The physical setup of a reader is shown in Figure 1. The exact setup will vary, but you can see there are two communication wires (D0 and D1), along with wires to trigger the LED and buzzer on the reader. When a card is presented to the reader, it sends back the card information on the D0/D1 lines (D0 used to send 0s, and D1 used to send 1s). More advanced (newer) card readers use RS-485 or other protocols, but almost all older readers use this simple system.
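To make the D0/D1 scheme concrete, here is a minimal, hardware-free sketch of the transmit side. On real hardware each 1 in the output becomes a short low pulse on D1 and each 0 a pulse on D0 (typical timings are on the order of a 50µs pulse with roughly 1ms between bits, though these vary by reader; those numbers are an assumption, not from a specific datasheet). Here the bit sequence is just recorded into an array so the logic can be checked off-target.

```c
#include <stdint.h>

/* Expand a 26-bit Wiegand frame into the sequence of pulses a reader would
   emit, MSB first. lines[i] == 1 means "pulse D1" (send a 1), and
   lines[i] == 0 means "pulse D0" (send a 0). */
void wiegand26_pulse_train(uint32_t frame, int lines[26])
{
    for (int i = 0; i < 26; i++)
        lines[i] = (frame >> (25 - i)) & 1;
}
```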

Figure 1 The card reader is connected to the backend controller using five or six wires.

The main controller will receive the card number, and if it is an allowed (valid) card number, it will unlock the door. The LED may also flash a certain pattern or change color. Some readers encode different LED colors on the single line, some readers use separate red and green LED wires, and some readers have just a single color.

This information is all that we need to know in order to consider our classic timing attack. We know how to send data to the controller (the D1/D0 wires), along with how we receive the status back (the LED). With that in mind, let’s review how a timing attack might work and get to it!


The basic idea of a timing attack is simple: The execution time of a piece of code depends on some sensitive information. In this case, the sensitive information is fairly simple: the facility code and card code. As mentioned, both of them may be easily recovered if an attacker has access to a card, but I still wanted to understand if a timing attack was also a threat to this system.

Consider how you might implement the backend controller—one such example is shown in Listing 1. Here I’m assuming there are some data structures holding lists of valid cards, and we simply search through them. You can see the main loop on line 8 searching up to max_users. In this implementation the possible timing attack comes from the comparison on line 9. This comparison first checks if the card facility code (stored in the facility variable) matches a known facility code, and after that the code continues to compare the card number. The code path is shorter (faster) when the facility code is wrong, meaning we can “brute force” the facility code (which only has 256 options) by detecting if there is a facility code that takes slightly longer to reject the card number.

Listing 1
Pseudo code showing an implementation of a card check

extern int max_users;                 //Registered users
extern uint8_t known_users_fac[];     //Known user DB, facility code
extern uint16_t known_users_card[];   //Known user DB, card number

int is_card_valid(uint8_t facility, uint16_t card)
{
       //Check if facility ID matches a known user
       for (int i = 0; i < max_users; i++) {
              if (facility == known_users_fac[i]) {
                     if (card == known_users_card[i]) {
                            return 1;
                     }
              }
       }
       return 0;
}

Once we know what a valid facility is, we can then brute-force the card number. This card number brute-forcing would take about 54 hours if we had to search the entire 16-bit search space and could only test one code every 3 seconds. On average, it will take half that time (27 hours), and if the system has many cards programmed, we are going to reduce the time further.
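The arithmetic behind those numbers is straightforward; a quick sanity check:

```c
/* Hours needed to try 'codes' candidate values at 'seconds_per_try' each.
   The full 16-bit card space (65,536 codes) at one try every 3 seconds
   works out to roughly 54.6 hours, or about 27 hours on average. */
double hours_to_search(unsigned long codes, double seconds_per_try)
{
    return codes * seconds_per_try / 3600.0;
}
```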

All of these tests assume the system does not trigger an alarm after too many invalid card reads. Triggering an alarm is a common countermeasure against brute-forcing codes. If these best practices are (hopefully) being implemented by the access control technician and monitoring system, my timing attack would be stopped.

My timing attack based on the code of Listing 1 is entirely a guess about a possible implementation. We may need to continue to check for timing attacks in case the lower or upper bits of the card number are compared first, for example.

The actual device I’ll be testing against is the Kantech KT-300, shown connected to an Arduino in Figure 2. This device uses a Zilog Z80 as the main CPU (I told you it was an old system!), meaning any comparison is guaranteed to happen 8 bits at a time. On newer systems (using Arm controllers) the comparison could be done on all 32 bits at once, but even then timing attacks could still exist, as the facility code and card code may be compared separately.

Figure 2 The KT-300 (left) uses a Z80 microcontroller, and the Arduino (right) replaces the card reader.

While the underlying code will have some timing that varies with the comparison result (pass or fail), the system designers may have purposely made the overall comparison constant-time. In addition, the system might use features such as tasks running on periodic interrupts that will mask the time-dependent leakage. Our goal is now to evaluate this system to see if we can find any timing leakage.
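To show what such a countermeasure might look like, here is a hypothetical constant-time rewrite of Listing 1. This is not the KT-300’s actual code: it simply scans every database entry on every call and folds the comparison into branch-free arithmetic, so execution time no longer depends on whether (or where) a match occurs.

```c
#include <stdint.h>

/* Hypothetical constant-time variant of Listing 1 (not the KT-300's actual
   code). Every database entry is examined on every call, and the match is
   computed without early-exit branches. */
int is_card_valid_ct(uint8_t facility, uint16_t card,
                     const uint8_t *fac_db, const uint16_t *card_db,
                     unsigned int n_users)
{
    uint32_t match_found = 0;
    for (unsigned int i = 0; i < n_users; i++) {
        /* diff is zero only when both facility and card match exactly */
        uint32_t diff = ((uint32_t)(facility ^ fac_db[i]) << 16)
                      | (uint32_t)(card ^ card_db[i]);
        /* branch-free "is zero" test: top bit of (diff | -diff) is 1 unless diff == 0 */
        uint32_t is_match = ((diff | (0u - diff)) >> 31) ^ 1u;
        match_found |= is_match;
    }
    return (int)match_found;
}
```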


The physical setup is shown in Figure 2, which uses an Arduino to drive the Wiegand interface. The same Arduino monitors the LED pin that the controller is using to drive the LED on the reader. In evaluating the system, I want to make the attacker as powerful as possible. In this case the attacker doesn’t need to deal with delay or time jitter added by the reader as they are talking directly to the backend controller. And because we have complete control over the system, we can take several shortcuts to help with our evaluation.

The first shortcut is that we know when we are sending valid or invalid cards to the reader. This is important because the single LED line can be used to communicate both a successful read and a failed read depending on the configuration. The second shortcut is that we can modify the configuration of the controller to help speed up our analysis. The default configuration makes the LED blink for three or more seconds on a failed card read, and the system will not accept a new card read until the blinking stops. I modified the configuration to set the controller’s LED blinking time to the lowest allowable setting (1 second). This shorter time setting means that we can retry card reads much faster.

The actual amount of time it takes from sending the card code over the Wiegand interface to the LED turning on (“access denied” indicator) is measured by the Arduino and printed to a serial port. An attached computer simply logs the delay and card ID number, and we can perform analysis of the time based on that result. Before we look at the results there is one more critical caveat I want to bring to your attention.

When designing the experiment, you might think to simply iterate through IDs presented to the system. Let’s say the valid facility code is 5, and the system has a timing leak like the one in Listing 1. We could iterate through sending facility codes 1…2…3… all the way up to 255, looking for the one facility code that takes longer to process; since we know the valid code is 5, we know which result to expect.

An example of the results I got when iterating through facility codes linearly is presented in Figure 3. Note that several codes seem to have a noticeably different timing: the dots to the left of the 190ms mark on the time-delay axis show shorter response times for several codes.

Figure 3 The shorter response times (data points left of 190ms) occur because of a repetitive task running on the main control unit, and are not a true data leakage.

But what you’re seeing is an artifact of my test setup. I’ve unintentionally added a dependence on system time. Because I iterated through the code space linearly, the time between presentations of any given facility code is constant. Any periodic task—such as an interrupt firing or other regular task—will thus always affect the same (or nearby) facility IDs! This is exactly what you see in Figure 3: not a true dependency between input data and execution time, but simply a mistake in how I collected the data.

It’s very easy to introduce such dependencies by accident, so for that reason you must always ensure you are presenting a random test to the target device. Any pattern in the input data means you will add unintentional correlation between external events and the measurements you are taking.

This is the case whether you have many input values you are testing (such as the 256 different facility codes I tested) or whether you are testing between two different values. When testing two values you cannot simply swap between the two values, but you need to randomly select which test value is presented to the target.
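A sketch of both selection rules, using C’s rand() for illustration (any decent pseudorandom source would do):

```c
#include <stdint.h>
#include <stdlib.h>

/* Pick the next facility code to present uniformly at random, rather than
   sweeping 0..255 in order, so a periodic task on the target cannot line
   up with particular code values. */
uint8_t next_facility_random(void)
{
    return (uint8_t)(rand() & 0xff);
}

/* For a two-case experiment, randomly decide whether this trial sends the
   known valid code or a freshly chosen random invalid one. */
uint8_t next_test_code(uint8_t valid)
{
    if (rand() & 1)
        return valid;                      /* case A: the valid code */
    uint8_t c;
    do {
        c = (uint8_t)(rand() & 0xff);      /* case B: random invalid code */
    } while (c == valid);
    return c;
}
```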

Note that the target I’ve used to create Figure 3 is a different access control unit, because it happened to have a more pronounced “regular task.” With the knowledge about how to correctly collect the data, let’s see what the actual controller does in response to different facility codes.


Finally, we can send random facility codes to the device and see if there is a noticeable difference in time before the device responds with “access denied.” Listing 2 shows a snippet of the Arduino code to do this. The full version is available on the Circuit Cellar article code and files webpage. To make any such difference even more obvious, I also tried loading many (1,000) valid cards into the system, all with the same facility code. This means that if the facility code matches first, and the controller then checks for a valid card number, it may take a substantially longer time to perform the validation.

Listing 2
Arduino code for sending random facility codes. It uses Wiegand interface code from CPallini in the “Wieganduino” article on

uint32_t facility = 0x01;
uint32_t cid = 0x4001; //invalid card number

#define make_card(facility, cid) (((uint32_t)facility << 16) | cid)

int32_t start;

int samples = 0;

void loop() {
  Serial.print("TESTING CARD: ");
  Serial.println(make_card(facility, cid), HEX);
  digitalWrite(4, 0); //Trigger LOW
  send_wiegand26(make_card(facility, cid));
  start = micros();
  digitalWrite(4, 1); //Trigger HIGH

  digitalWrite(LED_BUILTIN, 0);

  //wait for LED line to go low (active-low 'failed' indicator)
  while(digitalRead(8) == 1);

  int32_t del = micros() - start;

  digitalWrite(LED_BUILTIN, 1);
  delay(2000); //Access control needs ~1 second to reset
  digitalWrite(4, 0);

  facility = rand() & 0xff;
}

The final results are shown in Figure 4. Each facility code is tested many times, with the correct code being 47 (shown with red Xs). If there was an obvious timing attack, we’d expect to see the red Xs far to the right of the rest of the green points. But as you can see there is no obvious difference here, suggesting that in fact there is no timing attack.

Figure 4 Timing information when sending random facility codes (IDs) to the KT-300, with the correct one marked with red Xs.

What you don’t see is that I have repeated this test across several facility codes, as well as compared various bytes of the card number (in case my pseudo code from Listing 1 was wrong, and it didn’t match the facility code first), along with testing different numbers of loaded cards. All results look similar to Figure 4.

You might notice a few interesting data points that still came out of this test. In particular, the small facility code (ID) values show a noticeable bias toward longer execution times; you can see this in the lower few “lines” of Figure 4. This appears to be unrelated to the actual valid code, so it may simply be an artifact of the design of the system.

The lack of exploitable timing information may be a deliberate design goal (this is a security product after all), or it may be a happy artifact of the system design. An example of why it could be a happy artifact would be if the comparison logic simply communicates back to a task scheduler, and that task scheduler will only turn on the external LED at the next periodic scheduled time. This would completely hide the comparison time. The only way to see leakage would be if the comparison logic was slow enough to throw the scheduler into a later than normal time slot. I tried to cause that to happen by loading a large number of valid cards that would be compared.

With all of this, I have left the practical matters a little vague. How can you speed up the test so you don’t need to produce a graph such as the one in Figure 4? We can try to answer that by exploring some statistical questions about our timing data.


We don’t need to turn to very advanced statistics to get a reasonable answer to the question of whether a device has a timing leak. First, however, we’ll simplify the experiment. Rather than collecting a large dataset over all possible facility codes, we’ll focus on two cases and ask a simple question: Is there a difference in the timing between the two cases?

This would mean collecting a dataset where the first case sends the known valid (correct) facility code and the second case sends a random invalid code. Note that I say to send a random invalid code (instead of a fixed invalid code) to try and reduce any bias. The choice of sending a random invalid code or the correct code should itself be random—not simply alternating between valid and invalid.

This leaves us with two lists of response times. The end goal is to check if the two lists (each a sample set) have different means (average response times). The problem is that we don’t know if we’ve collected enough data: the measurements could be very noisy, so taking just three of them, for example, may not be reliable. You can see this spread of response times in Figure 4.

Such questions can be effectively answered with a test such as Welch’s t-test. This test can even be performed in a spreadsheet (Google Sheets has it built in), and it produces a “p-value”: a number indicating how confident we can be that there is a statistically significant difference between the case where we sent a valid facility code and the case where we sent a random invalid facility code.

Of course, if you look back to Figure 4, you’ll see that my choice of the fixed “valid” code may still have some unintentional bias! If I had used a valid code that was naturally slower to process, it may incorrectly appear as if the system has a timing leak. For this reason, we would proceed to confirm the “leak” appears across multiple valid codes, and not just a single code. For my system, no such leak appeared, so I did not need to go to that effort.


This article might seem like a bit of a letdown. After all, there was no attack! But it walks you through the process of validating your own devices, whether one you bought or one you designed, against common attacks, and shows that this is something you can perform with very little hardware.

This article can be used to help you understand the tools needed to validate other devices against timing attacks. Any devices that process sensitive data may potentially have a timing leak, so performing a simple validation is a good practice to ensure attackers can’t easily recover it. 



Wiegand interface:

“Example: Two Sample t-Test” on

Arduino |
Zilog |




Colin O’Flynn has been building and breaking electronic devices for many years. He is an assistant professor at Dalhousie University, and also CTO of NewAE Technology both based in Halifax, NS, Canada. Some of his work is posted on his website (see link above).

Copyright © KCK Media Corp.
All Rights Reserved
