Design Solutions Research & Design Hub

Getting Started with AI at the Edge


Using QuickFeather and SensiML

The QuickFeather development board and SensiML toolkit are open-sourced solutions for developing IoT devices with ML and AI. In this article, Raul steps through the QuickFeather board and SensiML toolkit specs. He then explains how to collect sensor data and train an ML model to recognize arm motion patterns with a wristband wearable prototype.


  • How the QuickFeather board and SensiML toolkit work

  • How to understand a SensiML workflow

  • How to capture and label data

  • How to use the metadata

  • How to test the created model

  • SensiML Analytics Toolkit from QuickLogic

  • QuickLogic’s QuickFeather board

  • DPS310 pressure sensor from Infineon

  • IM69A130 PDM digital microphone from Infineon

  • QuickLogic Open Reconfigurable Computing (QORC) SDK

The QuickFeather is a development board by QuickLogic based on its EOS S3 line of low-power multi-core system-on-chips (SoCs). The board is aimed at developing low-power machine learning (ML) capable Internet of Things (IoT) devices and it is based on open-source hardware and software. It comes with an accelerometer, a pressure sensor and a digital microphone to easily get you started with processing sensor data at the edge. The SensiML Analytics Toolkit, also by QuickLogic, automates the process of creating optimized artificial intelligence (AI) models for sensor data recognition for IoT applications.

In this article, I’ll go through the QuickFeather board and SensiML toolkit specifications. Then, I discuss the basics of how to gather sensor data and train an ML model to recognize arm motion patterns with a wristband wearable prototype. The idea behind this example is foremost to introduce you to the use of the QuickFeather board and SensiML toolkit for embedded ML at the edge. It also lets me showcase a fast proof-of-concept wearable device to improve safety for forest firefighters. In the example, I use the QuickFeather’s onboard accelerometer to detect motion patterns and train a model to detect the following actions: walking, using a chainsaw, using an ax and idle (that is, the absence of action, possibly due to being unconscious).


The QuickFeather development board is based on QuickLogic’s EOS S3 SoC, which is described in its datasheet as a multicore, ultra-low power sensor processing system designed for mobile market applications such as smartphones, wearables and IoT devices [1]. The EOS S3 includes an Arm Cortex-M4F core with floating point support, a proprietary µDSP-like flexible fusion engine (FFE) core, a sensor manager core with its own microcontroller (MCU), an eFPGA on-chip programmable logic module with 2,400 effective logic cells and a voice subsystem that can be used for always-on voice capabilities and sound detection. The Arm core has 512KB of RAM, and the other cores/modules have working memory of their own as well. The SoC carries an assortment of commonly used peripherals and modules, such as ADC, SPI, UART, I2S, I2C, GPIO and a 2-pin serial wire debug (SWD) port.

The QuickFeather comes with an onboard GigaDevice Semiconductor GD25Q16CEIGR 16Mb flash memory, an MC3635 accelerometer by mCube, a DPS310 pressure sensor and an IM69A130 PDM digital microphone, both by Infineon Technologies [2]. It can be powered from USB or a Li-Po battery and it has an integrated battery charger. The QuickFeather is a development board intended for low-power ML IoT applications, for instance using the SensiML Analytics Toolkit suite or Google’s TensorFlow Lite, but it can also be used for general-purpose embedded applications. The board is compatible with the “Adafruit Feather” form factor and it has full support from the Zephyr real-time operating system (RTOS) [3]. It is also supported by the SymbiFlow FPGA Tools, a fully open-source toolchain for developing applications with FPGAs of multiple vendors. Figure 1 shows the QuickFeather development board.

Figure 1 QuickFeather development board

The SensiML Analytics Toolkit suite lets you easily build AI sensor algorithms for embedded IoT devices. The suite has a set of software tools to gather sensor data, train and optimize ML algorithms, compile code and deploy an embedded ML application at the edge. Its Automated ML (AutoML) technology automates each step of the process for creating optimized AI recognition code for IoT sensor data, putting in users’ hands a workflow that gives access to a growing library of advanced ML and AI algorithms for real-time event detection [4]. SensiML is a paid toolkit, but there’s a free Community Edition license that gives access to the suite for quick evaluation and prototyping.

The SensiML AutoML analytics engine generates an optimized algorithm that balances hardware resource constraints with desired accuracy. Then, it compiles the code to a binary that can be flashed directly to your device for a quick test. A “knowledge pack” source code library can also be downloaded and integrated into your specific application. AutoML automates the process of constructing ML models by automatically performing the tasks an ML modeler usually does. These tasks include pre-processing the input data and preparing it into a suitable form, selecting the type of ML classifier, optimizing model parameters, tuning hyperparameters and so forth. The difference is that AutoML does all that automatically with many types of models, and it tries hundreds of thousands of permutations in much less time.
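To make the “tasks a modeler usually does” concrete, the sketch below reduces an AutoML pass to a toy grid search in plain Python: extract candidate features from sensor windows, fit a trivial classifier, score it, and keep the winning combination. It is only a mental model, several orders of magnitude simpler than what the SensiML engine actually does; the synthetic data, the nearest-centroid classifier and the two-point grid are all my own simplifications.

```python
# Toy illustration of what an AutoML engine automates: trying combinations of
# feature extraction and model parameters, scoring each, keeping the best.
import random
import statistics

random.seed(0)

def make_window(n, sigma):
    """Fake 3-axis accelerometer window: n samples of noise with spread sigma."""
    return [[random.gauss(0, sigma) for _ in range(3)] for _ in range(n)]

def features(window, use_std):
    cols = list(zip(*window))
    f = [statistics.fmean(c) for c in cols]          # per-axis mean
    if use_std:
        f += [statistics.pstdev(c) for c in cols]    # per-axis std deviation
    return f

def nearest_centroid_accuracy(train, test):
    # Fit: average feature vector per label. Predict: closest centroid.
    groups = {}
    for vec, lab in train:
        groups.setdefault(lab, []).append(vec)
    cents = {lab: [statistics.fmean(d) for d in zip(*vs)] for lab, vs in groups.items()}
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    hits = sum(1 for vec, lab in test
               if min(cents, key=lambda L: dist(vec, cents[L])) == lab)
    return hits / len(test)

def auto_search(windows):
    """Tiny 'AutoML' grid: which feature set classifies best?"""
    best = (-1.0, None)
    for use_std in (False, True):
        data = [(features(w, use_std), lab) for w, lab in windows]
        split = len(data) // 2
        best = max(best, (nearest_centroid_accuracy(data[:split], data[split:]), use_std))
    return best

# "idle" windows are nearly still; "using_chainsaw" windows vibrate strongly.
windows = [(make_window(100, 0.05), "idle") for _ in range(40)]
windows += [(make_window(100, 1.0), "using_chainsaw") for _ in range(40)]
random.shuffle(windows)
acc, use_std = auto_search(windows)
print(acc, use_std)   # the std-deviation features separate the two classes
```

A real engine searches over many classifier families and thousands of parameter permutations; the principle — score each candidate pipeline and rank the results — is the same.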


It’s worth noting that SensiML supports not only QuickLogic’s boards, but other hardware platforms as well. At the time of this writing, the other supported platforms were: Arduino ARM, ARM GCC Generic, Nordic Thingy, NXP i.MX RT10XX, Raspberry Pi, SensorTile, SparkFun ESP32 Thing Plus, TensorFlow Lite for Microcontrollers, Thunderboard Sense 2 and x86 GCC Generic. Once the code is in your hardware of choice, it runs in real time, avoiding additional cloud resources and requiring less network performance. Security is also improved by compartmentalizing data processing.

According to the company, SensiML offers significant benefits for the development process, cutting development time by up to 5x compared to hand-coded algorithms. It eliminates the complexity of the data science aspect of AI with its AutoML tool, which any mainstream developer can easily use. It enables practical AI applications on embedded IoT devices by providing smart compilation and optimization for MCUs, DSPs and FPGAs, while also providing the extensibility and flexibility to add your own algorithms and customized code [5].

QuickLogic also offers the QuickLogic Open Reconfigurable Computing (QORC) software development kit (SDK). QORC SDK provides the basic source code components needed to easily get you started with embedded AI using the QuickFeather and other boards based on the EOS S3 SoC. QORC SDK is based on the FreeRTOS real-time operating system and it contains example application projects for the QuickFeather. It provides a hardware abstraction layer (HAL) to all the features and peripherals of the EOS S3 SoC with corresponding source code libraries. It also provides a set of FPGA designs to get you started with the eFPGA present in the EOS S3 SoC.


The proposed workflow to use the QuickFeather (or any other supported board) and the SensiML suite is shown in Figure 2. First, the QuickFeather board is used with a sensor of choice, along with SensiML’s Data Capture Lab (DCL) application to capture sensor data. Second, the DCL application is used to label the captured data. Third, the Analytics Studio application is used to generate the recognition algorithm and run tests on it. Fourth, a knowledge pack in binary or source code form is generated, also with the Analytics Studio application.

Figure 2 SensiML workflow

Optionally, the Analytics Cloud can be accessed programmatically with a Python client to tune the model. Fifth and last, the generated firmware code is integrated with your application to test and validate it. In general, you may have to repeat all five steps more than once until you get the best results. The official SensiML toolkit documentation [6] provides user guides, general documentation and application examples showcasing the discussed workflow. Later in this article, I will go through the workflow to train a model for recognizing the arm motion patterns I mentioned before.

It’s not hard to imagine how the use of embedded AI could help improve safety in high-risk jobs, such as those performed by search and rescue personnel and firefighters. In the remainder of this article, I will discuss how I went through the SensiML workflow to train and test a model that detects four motion patterns typical of the activities forest firefighters perform while on duty: walking, using a chainsaw, using an ax, and idle. The last one is in fact the complete absence of motion, as when a firefighter loses consciousness. I used the QuickFeather’s onboard accelerometer to detect motion and an offboard Bluetooth transceiver module connected to the QuickFeather’s UART port to wirelessly send the data to a PC or mobile device, both for recording training and test data and for visualizing recognition results. Figure 3 shows the proposed system’s block diagram. Figure 4 shows the circuit diagram and Figure 5 shows the wearable prototype.

Figure 3 System block diagram
Figure 4 Prototype circuit diagram
Figure 5 Prototype of the wearable device

The process of capturing data was very straightforward. I wore the prototype and performed each of the targeted motion patterns to generate and record the corresponding accelerometer data. Needless to say, I didn’t use a real ax or chainsaw. It was just safer and more convenient to mimic the motions indoors, near my laptop and workbench. For the idle motion pattern, I put my arm on my desk and stayed still for a few seconds, alternating various positions and orientations. For the walking pattern, I walked around the room. For using a chainsaw, I put my fist on the desk and shook my arm fast, trying to mimic the vibration a chainsaw motor induces on the operator. For the using an ax pattern, I mimicked holding an ax in my hands and chopping wood.

On the QuickFeather board, the MC3635 accelerometer is connected to the EOS S3 SoC via I2C. The QORC SDK contains the necessary drivers and libraries for accessing data from the accelerometer. When capturing sensor data, the samples are sent out of the QuickFeather via the UART port, and can be captured on a computer through a UART-to-USB converter. Instead, I connected an HC-06 Bluetooth module to the UART port to send the data wirelessly to my laptop (see Figure 4). To record the training and testing data I used the Data Capture Lab application for the Windows 10 operating system, although there’s also a version for Android devices.
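It helps to think of the UART stream as just packed sensor samples. Purely as an illustration (the exact framing qf_ssi_ai_app uses may differ), here is how packed little-endian 16-bit X/Y/Z triples would be decoded on the PC side; with pyserial you would read the same bytes from the COM port assigned to the HC-06.

```python
# Hedged sketch: unpacking a stream of little-endian signed 16-bit X/Y/Z
# samples, the kind of raw payload an accelerometer stream over UART
# typically carries. This only illustrates the decoding step.
import struct

def decode_xyz_stream(buf):
    """Yield (x, y, z) tuples from a bytes buffer of packed int16 triples."""
    n = len(buf) // 6          # 3 axes * 2 bytes per sample
    for x, y, z in struct.iter_unpack("<hhh", buf[:n * 6]):
        yield (x, y, z)

# Example: two fake samples packed the way a sensor might send them.
raw = struct.pack("<hhh", 120, -340, 16000) + struct.pack("<hhh", 125, -330, 15980)
print(list(decode_xyz_stream(raw)))
```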

The QORC SDK already includes an application example called “QuickFeather Simple Streaming Interface AI Application Project” (qf_ssi_ai_app) that performs both data collection and motion recognition with the onboard accelerometer. It uses the simple streaming interface (SSI) protocol to stream the sensor data to the DCL application. The SSI protocol is a JSON-based data interface primarily intended for prototyping with off-the-shelf boards like the Arduino [7]. Figure 6 shows a screen capture of the Minicom terminal emulator on my laptop receiving SSI data. I used Minicom on Windows Subsystem for Linux (WSL), but Tera Term or PuTTY can be used instead. For more complex projects that demand full command and control, there’s also the “MQTT-SN Interface Specification” data interface.
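As an illustration of what “JSON-based” means here, a minimal SSI-style device description could look like the sketch below. The field names are modeled on what such a plug-in describes (sample rate, samples per packet, column layout) and should be treated as assumptions, not the exact SSI schema.

```python
import json

# Illustrative SSI-style device description; the exact schema used by
# qf_ssi_ai_app may differ, this only shows the JSON round trip.
desc = {
    "sample_rate": 100,
    "samples_per_packet": 6,
    "column_location": {"AccelerometerX": 0, "AccelerometerY": 1, "AccelerometerZ": 2},
}
wire = json.dumps(desc)     # what the board would send once at startup
cfg = json.loads(wire)      # what DCL would parse on connect
print(cfg["sample_rate"], sorted(cfg["column_location"]))
```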

Figure 6 Terminal emulator displaying SSI data interface protocol

To begin capturing data, I compiled the qf_ssi_ai_app project and flashed the firmware to the QuickFeather. To compile QORC SDK projects, you have two options: from the command line, or using the Eclipse IDE. The official documentation [8] explains how to set up the development environment. I opted for setting up the Eclipse IDE on Windows 10.

The qf_ssi_ai_app example is based on FreeRTOS and uses the RTOS timer to read the sensor every 10ms (100Hz sampling rate). After building the application, I flashed the firmware to the QuickFeather with the TinyFPGA Programmer Application command line tool. A Segger J-Link debugger can be used as well to flash the firmware directly from the Eclipse IDE. Next, I opened the DCL application and created a new project. In the new project, I imported the dcl_import.ssf device plug-in file from the qf_ssi_ai_app folder and then, I selected the device protocol (SSI) to have the DCL correctly interfaced with the firmware running on the QuickFeather. There are video tutorials available that show the workflow described here step by step [9]. The QORC SDK official documentation is also a great place to get you started.

Next, in the project you have to add a new sensor configuration and select the “QuickFeather SimpleStream” plug-in. A sample rate of 100Hz should be configured as well in the sensor properties. That basically means the data will be processed 100 samples at a time, both for training the ML algorithms and for performing the classification (recognition). It is possible to choose other sample rates (50Hz, 200Hz, 250Hz and 333Hz) for motion capture. The option for voice capture is also available if you want to use the onboard microphone, and examples of voice recognition applications are also available in the QORC SDK.
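The windowing implied by that configuration is simple to picture. A minimal sketch, assuming non-overlapping windows:

```python
# At a 100Hz sample rate the pipeline works on 100-sample windows,
# i.e. one second of motion data per classification.
def windows(samples, size=100, step=100):
    """Split a sample list into fixed-size, non-overlapping windows."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, step)]

stream = list(range(250))   # 2.5s of fake samples at 100Hz
w = windows(stream)
print(len(w), len(w[0]))    # 2 full windows of 100 samples; the remainder is dropped
```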


Next, in the project properties we must add labels for the motion patterns we want to train for. So, I added the labels: walking, using_chainsaw, using_ax and idle. Metadata fields can also be configured to easily manage the captured data files. For instance, I created the metadata fields: Data Set, Device and Subject. For the Data Set field two values are available: Train and Test. The Device field is for the name of the device used to generate the data (for instance: QuickFeather SimpleStream—COM8) and the Subject field is for the name of the person wearing the device when the data was captured.

As said before, the metadata enables us to manage our captured data conveniently. For instance, in the training step, we can choose training data files by selecting the Train metadata value. For evaluating the trained algorithm, we can choose data files tagged with the Test metadata value. If we want to evaluate how the generated algorithm behaves with a specific subject, say John, we can select all data files tagged with the value John in the Subject metadata field, and so on.
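A small sketch of how such metadata-driven selection works in principle — each capture file carries its metadata, and a query is just a filter over those fields. The file names and values below are hypothetical:

```python
# Hypothetical capture list; in DCL each recorded file carries its metadata.
captures = [
    {"file": "ax_01.csv",   "Data Set": "Train", "Subject": "John"},
    {"file": "ax_02.csv",   "Data Set": "Test",  "Subject": "John"},
    {"file": "walk_01.csv", "Data Set": "Train", "Subject": "Raul"},
]

def select(captures, **fields):
    """Return file names whose metadata matches every given field/value pair."""
    return [c["file"] for c in captures
            if all(c.get(k) == v for k, v in fields.items())]

print(select(captures, **{"Data Set": "Train"}))   # files for training
print(select(captures, Subject="John"))            # all of John's files
```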

Before starting the data capture process, a label must be selected for the data you are about to record. Metadata fields can also be populated in this instance, or added later. Next, from the DCL GUI you can connect to the COM port assigned to the QuickFeather by the operating system and immediately begin to receive sensor data, which is visualized on the application’s window. In my case, I connected to the COM port assigned by Windows to the HC-06 Bluetooth module, after I paired it with my laptop. To begin recording data, click the Begin Recording button, and the same button again to stop the recording process. Alternatively, a Max Record Time in seconds can be set in the project settings; after that time the recording process stops automatically.

Figure 7 shows the recorded training data for the using_ax motion pattern (swinging an ax). The figure zooms into two segments (between blue and red vertical lines) that I manually defined after recording the data. These segments showcase the waveforms of the XYZ accelerometer data for when the ax is swung. In the first half of each segment the ax is lifted in the air and in the second half it falls down fast, causing the accelerometer data to saturate in some of the XYZ axes (see the clipping in the data). I recorded 90 seconds of training data for each motion pattern and an additional 15 seconds of testing data to test the trained algorithm’s accuracy.
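As a side note, the clipping visible in the figure is easy to detect programmatically. A hedged sketch, assuming a signed 16-bit full-scale count (the MC3635’s real counts depend on its configured range and resolution):

```python
# Flag saturated (clipped) samples on one axis. FULL_SCALE is an assumption;
# the actual limit depends on the accelerometer's configured range.
FULL_SCALE = 32767

def clipped_fraction(axis_samples, limit=FULL_SCALE):
    """Fraction of samples pinned at the rails."""
    hits = sum(1 for s in axis_samples if abs(s) >= limit)
    return hits / len(axis_samples)

swing = [1200, 30500, 32767, 32767, -32767, 8000]   # fake ax-swing segment
print(round(clipped_fraction(swing), 2))
```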

Figure 7 Captured data for the using_ax motion pattern

After collecting and labeling the training and test data, I then switched to the Analytics Studio Notebook application to build a query. I used the Windows graphical application, but there’s also a Python client that can be used from the command line or from a Jupyter notebook. And there’s a web application that runs directly on a web browser [10].

A query for the Automated ML (AutoML) system contains the training data from the DCL project, properly selected and filtered based on metadata fields and labels. The Analytics Studio application will use AutoML on the cloud to build your model, taking into account target constraints (for instance, optimization metrics and classifier size in bytes) you can set to get the appropriate balance between accuracy and resource usage on your device. After the building process, five models (rank_0 to rank_4) will be presented to you, each with a given accuracy, classifier size (in bytes), number of features, sensitivity and F1-score. For instance, Figure 8 shows the models I obtained. Although all of them show an accuracy of 100%, after testing them in practice, rank_2 was the one that performed best. Coincidentally, it is the one with the largest classifier size (1,318 bytes) and a good number of features (8).
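The selection step itself boils down to a constrained ranking. A sketch with made-up numbers (these are not the actual rank_0 to rank_4 values from Figure 8):

```python
# Illustrative model candidates; the figures are invented for the example.
models = [
    {"name": "rank_0", "accuracy": 100, "size": 420,  "features": 4},
    {"name": "rank_2", "accuracy": 100, "size": 1318, "features": 8},
    {"name": "rank_4", "accuracy": 98,  "size": 690,  "features": 5},
]

def pick(models, max_size):
    """Best model that fits the classifier-size budget."""
    fits = [m for m in models if m["size"] <= max_size]
    # Highest accuracy wins; among ties, prefer more features, which is what
    # ended up generalizing better in my hands-on tests (rank_2).
    return max(fits, key=lambda m: (m["accuracy"], m["features"]))["name"]

print(pick(models, max_size=2048))   # plenty of room: the richer model
print(pick(models, max_size=512))    # tight budget: the small model
```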

Figure 8 Auto Sense results

An Explorer Model tab in the Analytics Studio’s menu shows detailed information about model performance. It includes model visualization, a confusion matrix, a feature summary, a model summary, a pipeline summary and a knowledge pack summary. Using the Test Model menu tab, you can run your model against your recorded test data. To generate test results, SensiML emulates the firmware model classification for an accurate view of the performance. Classification charts of predictions vs. ground truth labels for each test data file are then generated as well. A summary confusion matrix across all files used in the test is also provided, showing the accuracy percentage against the labels created in the Data Capture Lab.
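Analytics Studio computes all of this for you, but a confusion matrix itself is a simple structure: counts of (ground truth, prediction) pairs. A minimal sketch:

```python
# Rows are ground-truth labels, columns are predictions, cells count windows.
from collections import Counter

def confusion(truth, pred):
    return Counter(zip(truth, pred))

truth = ["walking", "walking", "using_ax", "idle", "using_ax"]
pred  = ["walking", "using_ax", "using_ax", "idle", "walking"]
cm = confusion(truth, pred)
print(cm[("walking", "walking")], cm[("using_ax", "walking")])

# Overall accuracy is the diagonal mass over the total.
accuracy = sum(v for (t, p), v in cm.items() if t == p) / len(truth)
print(accuracy)
```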


After choosing the best model, you can generate a SensiML “knowledge pack” from the “Download Model” tab in the Analytics Studio application. This knowledge pack is generated as a binary that can be flashed and run directly on the QuickFeather, or a source code library that can be integrated into a C/C++ project. The qf_ssi_ai_app project already has a knowledgepack folder that can be replaced with the one downloaded. And you can add code for your specific application as well before compiling the project. If you use just the binary (as I did for this example), the QuickFeather will be in recognition mode by default after flashing the firmware and rebooting.

To flash the firmware, I used the Python TinyFPGA Programmer Application command-line tool. To run it, I used an Anaconda Python distribution I had already installed on my Windows laptop. I just had to additionally install the pyserial package, which is a dependency of the TinyFPGA Programmer.

During my hands-on classification (recognition) tests, the model performed remarkably well with good accuracy (empirically, between 90% and 100%), especially for the idle, walking and using_chainsaw motion patterns. For the using_ax motion pattern the accuracy was somewhat lower, around 60% to 80%, which is still remarkable considering I reached those accuracy levels on my first time using the SensiML suite, after just two model retrains with a bit more data.

The model doesn’t properly account for a number of plausible variations of each motion pattern. For this first proof-of-concept, I always performed the motion patterns in a consistent manner, trying not to vary much the specific movements. For that reason, the model overfits the specific motions I performed and can’t generalize well to other motion nuances in each kind of activity. But of course, the model can be retrained with more data that include more motion variations to improve generalization, at the expense of a bigger memory footprint and more computing cycles for the processor.

Figure 9 shows a representation of the using_ax motion pattern being performed with the wristband prototype on the right arm, and Figure 10 shows a screen capture of the terminal emulator on my laptop showing the recognition output. In Figure 10, the classification number 2 is for using_ax and 4 is for walking (a 1 was for idle and a 3 was for using_chainsaw). Note that in the lapse between most of the ax swings (2), the model wrongly recognizes a walking pattern (4). From Circuit Cellar’s article code and files webpage you can download the DCL project for this article, as well as the binary and source knowledge packs generated for this example.
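Post-processing that output on the PC side is straightforward. A sketch that maps the class IDs above to their labels; the JSON field name used by the firmware output is an assumption here, not the documented format:

```python
import json

# Class-ID mapping from the article: 1=idle, 2=using_ax, 3=using_chainsaw,
# 4=walking. The "Classification" field name is an assumption.
LABELS = {1: "idle", 2: "using_ax", 3: "using_chainsaw", 4: "walking"}

def label_of(line):
    """Turn one JSON line of recognition output into a readable label."""
    return LABELS.get(json.loads(line).get("Classification"), "unknown")

print(label_of('{"ModelNumber": 0, "Classification": 2}'))   # using_ax
print(label_of('{"ModelNumber": 0, "Classification": 4}'))   # walking
```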

Figure 9 Performing the using_ax motion pattern
Figure 10 Terminal emulator displaying the recognition output

AI at the edge is one of the most exciting topics currently being discussed in embedded systems and IoT circles. The combination of AI and tiny embedded devices has the potential to heavily impact a great variety of applications, many of which already rely on embedded systems. At $49, the QuickFeather board is a great option to get you started on the subject, and the SensiML Community Edition license provides a zero-cost entry point for creating a proof-of-concept or prototype.

The quick proof-of-concept I made for this article gave me a pretty good grasp of what could be possible in terms of applications and solutions when using a board with the specs of the QuickFeather. And I haven’t even used the onboard FPGA yet, nor the voice recognition module! In the future, I plan on improving this prototype by retraining the system with more data and adding a couple of motion patterns, including some type of S.O.S. gesture to ask for help. I also tested the system with an Android device running the “Bluetooth Terminal HC-05” application to receive data from the wearable prototype. Eventually, I might build my own custom Android application as well.

Going forward, I’m considering building a wearable device for a gesture-based remote-control system for PX4-based multirotors. It would be interesting to see whether it’s possible to reliably control a drone with gestures. For instance, a quick tap from below with the wearable-device hand against the other hand would be the gesture for the drone to take off. A tap from above would be the gesture to land the drone. A vertical whirlwind motion with the wearable-device hand would be the gesture to return the drone to the take-off coordinates and land, and so on. This could make for a great follow-up to this article. Let’s see if I can pull it off!


[1] EOS S3 Ultra Low Power multicore MCU Datasheet,
[2] QuickFeather Development Kit,
[3] QuickFeather Dev Kit with SensiML AI Analytics Toolkit,
[4] SensiML AutoML,
[5] SensiML Analytics Toolkit Suite, Products,
[6] SensiML Toolkit Documentation,
[7] SensiML Simple Streaming Interface,
[8] QuickLogic Open Reconfigurable Computing QORC-SDK documentation,
[9] SensiML Tutorials and Guides,
[10] SensiML Cloud Application,

GigaDevice Semiconductor
Infineon Technologies
mCube
QuickLogic
SensiML



Raul Alvarez Torrico has a B.E. in electronics engineering and is the founder of TecBolivia, a company offering services in physical computing and educational robotics in Bolivia. In his spare time, he likes to experiment with wireless sensor networks, robotics and artificial intelligence. He also publishes articles and video tutorials about embedded systems and programming in his native language (Spanish) at his company’s web site. You may contact him at


Copyright © KCK Media Corp.
All Rights Reserved



by Raul Alvarez Torrico