Projects Research & Design Hub

A Comprehensive Introduction to TinyML (Part 1)

Written by Dhairya A. Parikh

Control RGB LED Using Voice Commands

This series will utilize TinyML for some useful projects. This part aims to provide a comprehensive introduction to TinyML and develop a system to control an RGB LED using voice commands. The other three parts will include Adding Voice Control to existing Smart Home Systems, Building a Home Security System with Face Recognition, and Controlling Lights with a Magic Wand.

Questions answered in this article:

  • How do you control an RGB LED using voice commands?

  • What is TinyML?

  • Why use the Edge Impulse platform for ML on edge applications?

  • Why use the Arduino Nano 33 BLE Sense for AI applications?

  • How do you set up the Arduino IDE and the Edge Impulse platform?

  • How do you collect audio data for model training?

  • How do you create, train, and test the ML model through Edge Impulse?

Tools and technologies used:

  • Arduino Nano 33 BLE Sense

  • nRF52840 microcontroller

  • Arduino IDE

  • Edge Impulse platform

  • RGB LED

  • Keras library

  • Jupyter notebook

  • MobileNetV1 0.1 Keyword Spotting model

Machine Learning (ML) has been one of the fastest-growing technological fields of the last three decades. We use ML models for a large variety of applications. For instance, every social media feed you explore is curated by various models running on the backend, serving you posts that may interest you.

But training conventional ML models is computationally expensive, and even the best personal PCs and laptops cannot train these complex models. All these models run in massive data centers on server clusters packed with CPUs and GPUs (and even TPUs for deep learning applications).

So, what is the alternative? Instead of sending the input to a server or data center for inference, we can perform the inference locally on the device. This not only reduces latency but also eliminates the dependency on an internet connection. This is precisely what TinyML does.

Figure 1
The Edge Impulse platform for TinyML model development, the Arduino BLE Sense development board, and the Sony Spresense board with the camera module for vision applications
TINYML

According to the tinyML Foundation, “Tiny machine learning is broadly defined as a fast-growing field of machine learning technologies and applications including hardware, algorithms, and software capable of performing on-device sensor data analytics at extremely low power, typically in the milliwatt (mW) range and below, and hence enabling a variety of always-on use-cases and targeting battery-operated devices.”

To explain it in simple terms, TinyML is a field of ML dedicated to running inference on-device, where the devices are extremely low-powered microcontrollers. TinyML provides low-latency output (inference in milliseconds), extremely low power consumption (perfect for battery-powered devices), and no internet dependency (computation happens on the edge).

In terms of power consumption, TinyML has a huge impact. Consumer CPUs consume between 65W and 85W, and most consumer GPUs consume anywhere between 200W and 500W, while a typical microcontroller consumes milliwatts or even microwatts, which is fantastic. That said, this advantage comes with some limitations.


At the moment, we can only train fairly simple models, because the model file has to be only kilobytes in size. Most microcontrollers for which these models are intended have less than 1MB of RAM and flash memory.

EDGE IMPULSE

Edge Impulse (Figure 2) is one of the leading development platforms for ML on edge applications. It is free for developers and provides additional services for enterprise-level projects and companies. It is a straightforward platform with excellent documentation for every obstacle you encounter.

Figure 2
The Edge Impulse platform assists you in developing ML models that can run on the edge on low-power devices like microcontrollers.

Moreover, the developer community is highly active and helpful. A typical ML workflow includes realizing the problem, forming an initial solution, collecting the supporting data, training the ML model, validating the model's performance, and deploying the model. Edge Impulse greatly simplifies this workflow, so even people with zero coding experience can create and deploy their own TinyML models in a matter of minutes.

This article will walk you through how to create your first TinyML model using the Edge Impulse Platform and deploy it on the Arduino BLE Sense development board.

ARDUINO BLE SENSE

The Arduino Nano 33 BLE Sense is the smallest AI-enabled Arduino board, with its dimensions being 45mm×18mm. The form factor is the same as the original Arduino Nano, making it easy to add AI capabilities to existing projects.

The significant difference is that the BLE Sense is powered by a 3.3V supply, while the conventional Nano operates on 5V. In addition, it has a lot of embedded sensors (Figure 3), which makes it a perfect board for a variety of AI applications. The onboard sensors are listed below.

  • Nine-axis inertial sensor: makes this board ideal for wearable devices
  • Humidity and temperature sensor: to get highly accurate measurements of the environmental conditions
  • Barometric sensor: so you could make a simple weather station
  • Microphone: to capture and analyze sound in real time
  • Gesture, proximity, light color, and light intensity sensor: to estimate the room's luminosity and whether someone is moving close to the board
Figure 3
A detailed pinout and sensor definition diagram for the Arduino Nano BLE Sense development board

The board is based on the nRF52840 microcontroller, which has a clock speed of 64MHz, 256KB of RAM, and 1MB of flash memory. The board has been specifically designed for ML applications, making it the obvious choice for this article series.
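As a quick hardware check, the short sketch below reads the onboard humidity and temperature sensor and prints the values to the Serial Monitor. It is a minimal sketch, assuming the original board revision and the Arduino_HTS221 library installed through the Library Manager (newer board revisions ship a different sensor and library); it is not required for the rest of the project.

#include <Arduino_HTS221.h>   // Humidity/temperature sensor library (original board revision)

void setup() {
  Serial.begin(9600);
  while (!Serial);            // Wait for the Serial Monitor to open

  if (!HTS.begin()) {         // Initialize the onboard sensor
    Serial.println("Failed to initialize the humidity/temperature sensor!");
    while (1);
  }
}

void loop() {
  // Print one environmental reading per second
  Serial.print("Temperature: ");
  Serial.print(HTS.readTemperature());
  Serial.println(" C");
  Serial.print("Humidity: ");
  Serial.print(HTS.readHumidity());
  Serial.println(" %");
  delay(1000);
}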

METHODOLOGY

The goal of this project is to use simple voice commands to control the RGB LED on the Arduino BLE Sense board. We will cover the project methodology step by step: setting up the Arduino IDE and the Edge Impulse platform, collecting audio data for model training, creating and training the model, testing the model through Edge Impulse, and explaining the code that controls the RGB LED.

SETUP ARDUINO IDE AND EDGE IMPULSE PLATFORM

Setup of the Arduino IDE: The first thing we need to do is set up our Arduino IDE to support the Arduino BLE Sense board. There are a number of great tutorials available on the Internet to complete this process quickly. Just refer to the official guide by Arduino for the Arduino BLE Sense development board [1] to complete the setup process.
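Once the board package is installed, you can confirm the IDE setup by uploading a short test sketch such as the one below. It is a minimal sketch using the onboard RGB LED pin numbers we rely on later in this article (pins 22, 23, and 24, which are active-low); it simply cycles the LED through red, green, and blue.

// Minimal setup check: cycle the onboard RGB LED (active-low pins)
#define RED   22
#define GREEN 23
#define BLUE  24

void blinkPin(int pin) {
  digitalWrite(pin, LOW);    // LOW turns the LED on (active-low)
  delay(500);
  digitalWrite(pin, HIGH);   // HIGH turns it back off
  delay(500);
}

void setup() {
  pinMode(RED, OUTPUT);
  pinMode(GREEN, OUTPUT);
  pinMode(BLUE, OUTPUT);
  digitalWrite(RED, HIGH);   // All LEDs off by default
  digitalWrite(GREEN, HIGH);
  digitalWrite(BLUE, HIGH);
}

void loop() {
  blinkPin(RED);
  blinkPin(GREEN);
  blinkPin(BLUE);
}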


Setup of the Edge Impulse platform: We will now set up the Edge Impulse platform to train our first TinyML model and deploy it on the BLE Sense development board.

First, head to the Edge Impulse website. Once the homepage opens, sign in to your existing account or create a new one, as an account is required. Once this is done, we have to set up the Arduino BLE Sense device in Edge Impulse. There is a great article [2] for this, which is quite easy to follow.

If you have set up everything according to the tutorial, you should see your device under the Devices section of your project.

We will now move on to the next section, where we will collect audio data using our development board’s inbuilt microphone to create our first TinyML model.

AUDIO DATA COLLECTION

In this section, we will collect the audio samples used to train a model that will allow us to change the RGB LED color on the Arduino BLE Sense using simple voice commands. For this article, we will use the color name as the voice command for that color. For instance, we have to speak RED aloud to make the LED red. We will train the model to recognize three such colors: Red, Yellow, and Green.

Any audio classification model requires two additional classes: Noise and Unknown. For these, we will take some data from a publicly available dataset that can be downloaded from [3].

Collecting data in Edge Impulse is very easy and they have a great step-by-step tutorial [4] that helps a lot.

Please note that you will need to change the label for each data sample, but the rest of the process remains the same. We will use simple color names as labels: red, green, and yellow. Keep recording until you have at least 60 one-second samples for each of the three labels. If you have followed the tutorial correctly, you should end up with about a minute of data for each of the five classes. If you want to sanity-check the onboard microphone before recording, you can use a sketch like the one shown below.
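This check is optional and not part of the Edge Impulse workflow. The sketch below is a minimal example based on the standard PDM library example for this board; it streams raw microphone samples to the Serial Plotter so you can confirm that the onboard microphone is picking up your voice.

#include <PDM.h>              // Library for the board's onboard digital (PDM) microphone

short sampleBuffer[256];      // Buffer for incoming 16-bit audio samples
volatile int samplesRead = 0; // Number of samples read in the last callback

// Called by the PDM library whenever new audio data is available
void onPDMdata() {
  int bytesAvailable = PDM.available();
  PDM.read(sampleBuffer, bytesAvailable);
  samplesRead = bytesAvailable / 2;   // Two bytes per 16-bit sample
}

void setup() {
  Serial.begin(9600);
  while (!Serial);

  PDM.onReceive(onPDMdata);
  // One (mono) channel at a 16kHz sample rate
  if (!PDM.begin(1, 16000)) {
    Serial.println("Failed to start PDM!");
    while (1);
  }
}

void loop() {
  if (samplesRead) {
    // Print the samples so the Serial Plotter shows the waveform
    for (int i = 0; i < samplesRead; i++) {
      Serial.println(sampleBuffer[i]);
    }
    samplesRead = 0;
  }
}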

Hence, we have completed the data acquisition part of this project. We will now train our first TinyML model using the data that we just collected.

TRAINING YOUR TINYML MODEL

Now that we have all the required data to train our model, the next step is defining a model, which will help us with audio classification. From the menu on the left side of the screen, go to the Impulse design tab. It will open a new webpage. There are three approaches to creating a model on Edge Impulse:

  • Use the UI provided by the website to create your model. You need to specify three blocks; this option gives the least customization.
  • Define a deep learning model by writing your own code using the Keras library. The language used is Python.
  • The final option the website offers is to open a Jupyter notebook for you, in which you write custom code for everything.

For this article, we will stick to the most straightforward option and use the UI provided by the website. As mentioned previously, we need to configure three blocks.

First is the Input block. The default configuration will already be correct for our case: it is the pink block, indicating that our input is time-series data with a window of 1 second, that is, one voice sample. Keeping the default parameters is enough.

Next, we need to add a processing block. For that, just click the Add a processing block link and select the Audio (MFE) option.

After this, we have to specify a learning block, which is Edge Impulse's terminology for the model itself. We have three options here, which become visible once you click the Add a learning block link. Since this is our first project, we will use a pre-existing MobileNet model for audio keyword classification. For this, select the Transfer Learning option by clicking its Add button. This completes the required configuration for our model; your screen should look as shown in Figure 4. Press the Save Impulse button.

Figure 4
Impulse design, where we use one processing block. Moreover, we use transfer learning with a readily available model and train using it as a reference.

Before training, we need to generate the MFE features for each audio sample. Once you have pressed the Save Impulse button, you will see that an MFE option has become available in the left menu. Click that option to open the MFE feature-generation screen.


You would normally specify some key parameters for the features you generate, but for this project just keep the defaults and click the Generate features tab at the top. This opens a new page, where you press the green Generate features button to start the feature-generation process. It takes a few minutes to complete; once it is done, you can view a plot of all your data samples in a graphical view called the Feature Explorer. This is one of the most powerful features Edge Impulse provides, because it lets you visualize how well the model will be able to differentiate between the samples. If everything has been done as instructed, you should see a graph with a very distinct separation between the five classes, as shown in Figure 5.

Figure 5
The Feature Explorer graphical interface. There should be considerable distance between each sample group, which indicates that the model will be able to distinguish between the samples using the generated features.

Finally, we are now ready to train our model. Just move to the Transfer Learning option from the left side menu, which will open a new page. Here, you will have to specify the necessary parameters for model training. Just select the parameters as follows:

  • Number of training cycles: 50
  • Learning rate: 0.01
  • Validation set size: 30%

Check the auto-balance dataset option. Choose the MobileNetV1 0.1 Keyword Spotting model for the microcontroller.

Now press the Start training button. This starts the model training, which may take several minutes to complete. Once it finishes, a new section appears on the page where you can assess the performance of the trained model on the validation data (a randomly held-out subset of the training data that the model did not see during training), as shown in Figure 6. You will see metrics such as accuracy, loss, and a confusion matrix.

Figure 6
Model training output, including accuracy and loss along with a confusion matrix

A Feature Explorer graph will also be visible, showing the correctly and incorrectly classified samples of the validation dataset. Finally, at the bottom, you will see the on-device performance metrics as well (these are specific to the Arduino BLE Sense board). This completes the training process.

Now, we will assess the model performance using the test dataset (20% of all the collected data samples).

ASSESS MODEL PERFORMANCE

Move to the Model testing tab from the menu on the left side. It opens a new page where you will see your test data samples. Just press the Classify all button, then sit back and relax while Edge Impulse classifies each test data sample using the trained model. Once that is done, you will see performance metrics similar to those on the Transfer Learning screen.

Next, we will deploy this model on our Arduino BLE Sense board, where we will control onboard RGB LED using voice commands.

CODE EXPLANATION TO CONTROL THE RGB LED

We will need a deployable instance from the Edge Impulse platform to deploy this model on our hardware. For that, open the Deployment tab from the menu. This will open a new window that will list the options available for deployment.

The Create library option lets you download the optimized source code for one of the listed target boards.

Next, the Build firmware option lets you build and deploy the firmware from the website itself. We did not choose this option for this article.

Finally, you can even run the impulse on your computer or mobile phone through the Run your Impulse directly section.

For this article, we will create an Arduino library for our board. This allows us to write additional Arduino code to control the Arduino's GPIO pins based on the model's classification output. For that, just press the Arduino library button, then press the Build button to start the build process, and finally download the library ZIP file.

Now, open the Arduino IDE and navigate to Sketch >> Include Library >> Add .ZIP Library. Select the ZIP file you just downloaded from the Edge Impulse platform. Then, from File >> Examples, open the continuous microphone example sketch included with the library you just imported; this is the code we will work with.

Now, if you compile and upload this code to your Arduino board, you will see that the microphone constantly records samples of the surroundings and classifies them into one of the five classes, printing the probability for each class on the Serial Monitor. But we want to add our own custom code into the mix to control the onboard RGB LED on the Arduino.
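For orientation, the core of the example's loop() function looks roughly like the sketch below. This is a simplified view of the generated continuous-inference example, so the exact names may differ in your library; microphone_inference_record(), microphone_audio_signal_get_data(), debug_nn, and print_results are helpers and variables defined elsewhere in that same sketch. The code from Listing 3 goes inside the block that prints the predictions.

void loop() {
  // Record the next slice of audio from the onboard microphone
  if (!microphone_inference_record()) {
    ei_printf("ERR: Failed to record audio\n");
    return;
  }

  // Wrap the audio buffer in a signal_t and run the classifier on it
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
  signal.get_data = &microphone_audio_signal_get_data;
  ei_impulse_result_t result = { 0 };

  EI_IMPULSE_ERROR err = run_classifier_continuous(&signal, &result, debug_nn);
  if (err != EI_IMPULSE_OK) {
    ei_printf("ERR: Failed to run classifier (%d)\n", err);
    return;
  }

  // Once a full model window has been processed, print (and act on) the predictions.
  // This is where the code from Listing 3 is placed.
  if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
    // ... Listing 3 ...
    print_results = 0;
  }
}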

To add that control, note that pins 22, 23, and 24 control the red, green, and blue LEDs, respectively. Moreover, we will change the EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW value from 4 to 1, because we do not want 250ms slices of our 1-second sample. Hence, we define the constants as shown in Listing 1.

The next change comes in the setup() function. Here, we set the mode of the LED pins to output and turn them off by default. Please note that these LEDs are active-low, so we need to write a HIGH signal to turn them off (Listing 2).

The final code change is in the loop() function, where the actual classification code is written. Here, we add some additional conditions that set the LEDs according to the classified output. In the for loop where the prediction probability for each class is printed, just add the code given in Listing 3.

LISTING 1
Defining constants we will be using in the code
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 1
#define RED 22     
#define BLUE 24     
#define GREEN 23
LISTING 2
Configuring the I/O pins that drive the onboard RGB LED as outputs and turning them off by default
pinMode(RED, OUTPUT);
pinMode(BLUE, OUTPUT);
pinMode(GREEN, OUTPUT);
// Turn off all LEDs by default (active-low: HIGH = off)
digitalWrite(RED, HIGH);
digitalWrite(BLUE, HIGH);
digitalWrite(GREEN, HIGH);
LISTING 3
The main logic of our project. It sets the RGB LED state according to the output of our model.
if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
    // Print the predictions
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label,
            result.classification[ix].value);
        // Compare labels with strcmp(); they must match the labels used during data collection
        if (strcmp(result.classification[ix].label, "red") == 0 && result.classification[ix].value > 0.8) {
            // Red only
            digitalWrite(RED, LOW);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, HIGH);
            delay(2000);
            // Turn all LEDs back off (active-low: HIGH = off)
            digitalWrite(RED, HIGH);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, HIGH);
        }
        else if (strcmp(result.classification[ix].label, "yellow") == 0 && result.classification[ix].value > 0.8) {
            // Red and green together appear yellow
            digitalWrite(RED, LOW);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, LOW);
            delay(2000);
            digitalWrite(RED, HIGH);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, HIGH);
        }
        else if (strcmp(result.classification[ix].label, "green") == 0 && result.classification[ix].value > 0.8) {
            // Green only
            digitalWrite(RED, HIGH);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, LOW);
            delay(2000);
            digitalWrite(RED, HIGH);
            digitalWrite(BLUE, HIGH);
            digitalWrite(GREEN, HIGH);
        }
    }
    print_results = 0;
}

The code for my project is in the GitHub repository for this article if you need it for reference. Please note that it may not be 100% accurate, because this is a straightforward model trained with just 60 samples per label and running on the edge. We need a lot more data to create a more complex model, and we will be doing precisely that in Part 2 of this series.

In Part 2, we will train a more complex audio classification model, using a lot more data and even the advanced Keras option to define the model in Edge Impulse. I hope you found this article informative and valuable.

REFERENCES
[1] Official guide by Arduino for the Arduino BLE Sense development board, https://www.arduino.cc/en/Guide/NANO33BLESense
[2] Article, set the Arduino BLE Sense device in Edge Impulse, https://docs.edgeimpulse.com/docs/arduino-nano-33-ble-sense
[3] Audio keywords dataset with Noise and Unknown classes, https://cdn.edgeimpulse.com/datasets/keywords2.zip
[4] Collecting data in Edge Impulse, step-by-step tutorial, https://docs.edgeimpulse.com/docs/tutorials/responding-to-your-voice#2.-collecting-your-first-data

RESOURCE
Edge Impulse Website | www.edgeimpulse.com
GitHub Repo | https://github.com/Dhairya1007/TinyML-Article-1
Arduino Nano BLE Sense documentation | https://docs.arduino.cc/hardware/nano-33-ble-sense
Edge Impulse Documentation | https://docs.edgeimpulse.com/docs
Tiny ML Website | www.tinyml.org

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • JULY 2022 #384

Dhairya Parikh is an Electronics Engineer and an avid project developer. Dhairya makes projects that can bring a positive impact to a person's life by making it easier. He is currently working as an IoT engineer, and his projects are mainly related to IoT and machine learning. Dhairya can be contacted at dhairyaparikh1998@gmail.com.
