Complex Benchmark Development Journey Yields Success
The explosive growth of the IoT market looks much different today than the emergence of the embedded market in the late 1990s. That said, it does resemble that early period in its lack of meaningful benchmarks. Developing an IoT benchmark is a non-trivial task because it must represent a real-world scenario, using wired and wireless interfaces. And since the devices are largely battery-powered, one of the key judgment criteria for an IoT device is its power consumption. I’ll explain here EEMBC’s process for creating a standard power benchmark for IoT edge nodes.
In 2014, the EEMBC IoT Working Group began evaluating market research from several of our members, analyzing commonalities and creating an overall summary of IoT device traits. We examined a broad range of applications from the following categories: medical devices, personal fitness trackers, industrial and agricultural environmental sensors, and home automation. This analysis exposed considerable overlap, summarized in these four traits: battery-powered with multi-year longevity expectations; at least one sensor; a low-bandwidth bidirectional radio with under-50 m range; and modest, low-intensity MCU compute requirements.
Unlike microcontroller (MCU) performance benchmarks, which typically only require the user to compile and run, an IoT benchmark has several external hardware requirements. To address this, we developed the IoTConnect benchmarking framework consisting of a host system running the user interface and multiple USB subsystems. The four subsystems are: the energy monitor (EMON), which supplies power and measures energy; the radio manager, which acts as a gateway; the I/O manager, which emulates a sensor over a variety of wired interfaces (I2C, SPI and so on); and the device under test (DUT). The host system software configures the hardware and coordinates the flow of the benchmark by issuing instructions to each of the subsystems in a pre-defined set of sequences (Figure 1).
The framework’s energy monitor (EMON) was developed by STMicroelectronics. Called the STM32 PowerShield (X-NUCLEO-LPM01A), it is available on the company’s website for about $70. It uses high-speed ADC sampling, supplies up to 50 mA of current at an adjustable voltage (1.8 V to 3.3 V), resolves energy down to just under 100 nJ and is programmed via a simple UART CLI.
Early prototypes of the benchmark experimented with different radios, but it soon became clear that most of the members already had BLE and/or Wi-Fi offerings. BLE was selected by vote since it had the broadest support among members. Regarding the sensor peripheral, the debate came down to I2C versus SPI; I2C was chosen because we already had an SPI peripheral benchmark (ULPMark-PeripheralProfile). We opted for a single sensor to reduce complexity.
The profile defines the operation of the device under test (DUT) during the benchmark. This differs from a synthetic benchmark that simply iterates over some reductive algorithm, like matrix multiply. A profile implies a behavioral model that attempts to capture an intersection of real-life scenarios.
As we iterated on radio choice, the profile also evolved. For example, we started with a read-only protocol, but the process of provisioning, upgrading and decommissioning deployed products matters too, which led to the addition of bidirectional transactions. Since we chose BLE, the profile contains two key modes, advertise and connected. Given our definition of an IoT edge node as comprising a sensor, an MCU and a radio, it also made sense that the profile should exercise each.
Advertise mode is fairly simple: the device must broadcast a connectable advertising packet on all three BLE advertising channels, with a minimum three-byte payload, at 100 ms intervals and with a transmit power no lower than 0 dBm. Connected mode is much more complex, since it has to reflect the variety of scenarios the working group researched at the outset (Figure 2).
Connected mode performs a looping set of interactions between the DUT, the gateway (Radio Manager) and the sensor (I/O Manager). The BLE client (Radio Manager) and the BLE server (DUT) are synchronized by the BLE physical link’s connection interval. Both devices periodically wake up on their own asynchronous schedules to perform work. On the client’s timeline, it wakes up and sends a BLE command-write packet, emulating a cloud-configuration operation. This packet is queued until the next BLE connection interval, and the client goes back to sleep. When the server wakes up, it services the packet, performing a CRC8 check on the data and storing the results. On the server’s timeline, it wakes up, reads a quantity of bytes from the I2C sensor, runs a simple low-pass filter (LPF), and then sends the filtered data to the client via a BLE subscription notify event. This packet is serviced when the client wakes up, and is stored for verification.
SCORING AND VERIFICATION
IoTMark-BLE is a power benchmark and lower power means a higher score. The score is a weighted combination of power used during the advertise and connect components. During final balancing, the working group adjusted the weight factors of advertise and connect power in the scoring equation, the connection and wakeup intervals, and the number of bytes read and transmitted via I2C and BLE. This was done with feedback across multiple devices to find a balance that did not deliberately over-emphasize any one part of the benchmark.
A benchmark that can be gamed is worthless. To mitigate this, the DUT has to show proof of work: all of the data sent by the Radio Manager, or by the I/O Manager acting as the sensor, is randomly generated by the host system and checked afterwards, thereby defeating attempts to circumvent functionality. To normalize transmit power, we set a minimum of 0 dBm, which the EEMBC lab verifies during certification.
It took several years of negotiation, research, engineering development and testing (and more testing), but in Q4 2018 the tireless members of the IoT Working Group introduced IoTMark-BLE, a benchmark that provides a solid standard for characterizing the power of BLE edge-node devices. The working group will continue to advance this benchmark as new low-power radio standards and SoCs appear.
EEMBC | www.eembc.org
STMicroelectronics | www.st.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • JANUARY 2019 #342
Peter Torelli was named as the President and CTO of EEMBC in November 2017. In this role, he has focused on maintaining the alliance’s position as the industry’s authority in performance benchmarking. Under his leadership, EEMBC has launched working groups to bolster the capabilities of existing benchmarks, while researching and developing new performance standards for today’s disruptive technologies, including security, IoT, ADAS, machine learning and more.