
A Comprehensive Introduction of TinyML (Part 2)

Written by Dhairya A. Parikh

Adding Voice Control to Existing Smart Home Systems

This series uses TinyML for useful, hands-on projects. This part adds voice control to an existing smart home system using an ESP32 device and TinyML. The next two parts will cover Building a Home Security System with Face Recognition and Controlling Lights with a Magic Wand.

  • How to add voice control to existing smart home systems?

  • How to control an RGB LED using voice commands?

  • What is TinyML?

  • Why use the Edge Impulse platform for ML-on-edge applications?

  • Why use Arduino Nano 33 BLE Sense for AI applications?

  • How to set up Arduino IDE and Edge Impulse platform?

  • How to collect audio data for model training?

  • How to create, train, and test the ML model through Edge Impulse?

  • Arduino Nano 33 BLE Sense

  • OLED display

  • ESP32 Board

  • nRF52840 microcontroller

  • Arduino IDE

  • Edge Impulse platform

  • RGB LED

  • Keras library

  • Jupyter notebook

  • MobileNetV1 0.1 Keyword Spotting

In the first article of the TinyML series, we covered the basics and created our very first TinyML project, in which we controlled the Arduino’s onboard LED using our voice! In this article, we will be creating a far more complex and fun project. I have an ESP32-powered custom board for Home Automation [1]. We will convert our Arduino Bluetooth Low Energy (BLE) Sense into a voice-controlled remote, allowing us to control the home appliances connected to the ESP32 board via voice interactions (Figure 1). This can be scaled further to work with multiple such devices as well.

Figure 1 The project in action with the ESP32 and Arduino BLE Sense devices.

This article is divided into three parts for easier understanding. In the project hardware section, we get familiar with the hardware used in this project and walk through the flowchart that describes how the project works. In the TinyML model development section, we create the TinyML model for this project, adding the capability to control home appliances using voice commands; the commands are kept generic so that the devices connected to the ESP32 can be swapped later. In the code development and explanation section, we discuss the code for both the ESP32 and the Arduino BLE Sense, and how they communicate with each other over a BLE connection.

PROJECT HARDWARE

Let’s start by learning more about the two devices we will be using for this project. The smart home control system is built around the ESP32 development board. We will also create our own handheld device, powered by the Arduino BLE Sense board, with an OLED display that shows the current state of the switches.

Smart Home System: The system can control four relays through the ESP32, which opens up limitless possibilities since the ESP32 can be integrated with almost every platform. Additionally, you can connect and control your appliances using normal switches (manual feedback). There are also three LEDs and a buzzer on the PCB, which can be used for alerts or notifications, for example, illuminating an LED if the Internet connection is down.

The circuit diagram (Figure 2) shows how to connect your home appliances to your smart home system. The connections are straightforward, but please turn off the mains supply before wiring the appliances to your PCB.

Voice Control System: The main hardware we made from scratch is the handheld device built around the Arduino BLE Sense board. We also included an OLED display to view the present state of our switches, a state we can change using simple voice commands. The schematic diagram (Figure 3) shows how to connect the OLED display to the Arduino BLE Sense.

For this article, we also train a model that allows us to control two switches on our smart home device using voice commands. In other words, we use our voice to control the relays on another device altogether, connected through BLE. Once our device recognizes a particular phrase the model is trained for, it writes a new value to the characteristic created for that particular switch.

The ESP32 then detects this change and alters the state of the corresponding relay pin. The complete project flow is shown in Figure 4.
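
To make the receiving side concrete, below is a minimal sketch of what the ESP32 end of this exchange could look like, written against the ESP32 Arduino core’s BLE library. The device name, relay pin numbers, and the single-character "1"/"0" message convention are illustrative assumptions; the actual ESP32 code is in the project’s GitHub repository.

#include <BLEDevice.h>
#include <BLEServer.h>
#include <BLEUtils.h>

// Same UUIDs as in the Arduino BLE Sense code (see Listing 1)
#define SERVICE_UUID "99dcc956-e6e1-4fd8-800b-8d3ea14b308a"
#define SWITCH1_UUID "5ed88eac-d876-4e57-85c0-1391af164cbe"
#define SWITCH2_UUID "d6b3f3d6-1ddb-4bcd-a50a-d5550d8d542b"

// Relay pin numbers are assumptions; match them to your PCB
const int RELAY1_PIN = 26;
const int RELAY2_PIN = 27;

// Drive a relay pin whenever the central writes "1" or "0"
class SwitchCallback : public BLECharacteristicCallbacks {
public:
  SwitchCallback(int pin) : _pin(pin) {}
  void onWrite(BLECharacteristic *characteristic) {
    // getValue() returns std::string or String depending on
    // the core version; c_str() works for both
    String value = characteristic->getValue().c_str();
    digitalWrite(_pin, value == "1" ? HIGH : LOW);
  }
private:
  int _pin;
};

void setup() {
  pinMode(RELAY1_PIN, OUTPUT);
  pinMode(RELAY2_PIN, OUTPUT);

  BLEDevice::init("ESP32 Smart Home");
  BLEServer *server = BLEDevice::createServer();
  BLEService *service = server->createService(SERVICE_UUID);

  // One writable characteristic per switch
  BLECharacteristic *sw1 = service->createCharacteristic(SWITCH1_UUID,
      BLECharacteristic::PROPERTY_READ | BLECharacteristic::PROPERTY_WRITE);
  sw1->setCallbacks(new SwitchCallback(RELAY1_PIN));
  BLECharacteristic *sw2 = service->createCharacteristic(SWITCH2_UUID,
      BLECharacteristic::PROPERTY_READ | BLECharacteristic::PROPERTY_WRITE);
  sw2->setCallbacks(new SwitchCallback(RELAY2_PIN));

  service->start();
  BLEDevice::getAdvertising()->addServiceUUID(SERVICE_UUID);
  BLEDevice::startAdvertising();
}

void loop() {
  delay(100); // Nothing to do; the BLE callbacks handle the relay updates
}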

Figure 2 Connections between the home appliances and the smart home system PCB, along with the switch connections that enable manual control if required.
Figure 3 Schematic diagram showing how to connect an I2C OLED display to the Arduino BLE Sense. The Arduino pins used are marked in the diagram.
Figure 4 The workflow of the entire project: a visual explanation of how the project works, from the audio input to the relay state changes.

Now that the project hardware is set up, we will train the model for this project.

AUDIO MODEL TRAINING AND DEPLOYMENT

Here, we train a TinyML model for our Arduino BLE Sense that classifies 1-second audio samples into one of these six classes:

  • Lights On
  • Lights Off
  • Fan On
  • Fan Off
  • Noise
  • Unknown

These keywords are customizable, and you can choose any word or phrase you like; I chose these phrases for simplicity. As in the first article of this series, we use the Edge Impulse platform to train the model, but this time we will utilize several features of the platform that we skipped earlier. So, let’s start with the first step, which is to collect the audio data.

Audio Data Collection: Navigate to the Edge Impulse website and log in with your credentials. Next, create a new project in Edge Impulse, give it a name of your choice, and choose the Developer option for the project type. Once you are on the project homepage, navigate to the Data Acquisition tab. The data for the Noise and Unknown labels will be taken from the same keywords dataset we used in the first article.

We only have to collect data for the first four labels. We used the Arduino BLE Sense’s microphone to collect the data in the last project, but that mode has a drawback: because of the microcontroller’s limited memory, we can only collect a 21-second window at a time.

This means we would have to collect many such windows for each label and then split them. Instead, we will use our smartphone to collect long 60-second samples at once, so we can quickly collect the data for all the labels.

On the Data Acquisition tab, there is a “Did you know?” banner at the top. Pressing its “Show Options” link lets you select a mode for data collection; we will use the “Use your mobile phone” option.

Just click on the “Show QR Code” box, and a new window with a QR code will open. Scan the QR code with your mobile device and it will open a link in your browser. As soon as the link opens, the phone becomes visible in the selection list of available devices, and you get a basic user interface for data collection.

Three options ask what type of data you wish to collect. Click the “Collecting Audio” button. You will be redirected to a new page, and a pop-up window will ask you to grant microphone access.

Just press the “Allow” button and the audio data collection interface will open. The UI for this screen is shown in Figure 5. As you can see in Figure 5, you can choose the label for your audio data, the length of the sample, and the category under which you would like to record the sample (training or testing data).

Figure 5 The web interface provided by Edge Impulse to record audio data from a smartphone. We can set the label name, the length of the audio (in seconds), and the category under which the audio is recorded.

The next step is easy. Select a label, set the sample length to 60 seconds, and record all the audio under the training category.

Please note that, unlike the first article, we will not be using a pre-trained model architecture for this project, so you will need to collect considerably more data for each label. I would suggest collecting at least 5 minutes of data per label; I was able to extract 35 to 40 good samples from each 60-second window.

Hence, you will have to collect seven to eight such windows of data for each label. Once that is done, select 300 random samples each for the Noise and Unknown labels from the keywords dataset and upload them using the UPLOAD DATA section.

This completes the data collection for this project. Next, we will build and train our machine learning model.

Model Creation and Training: Once the data is collected, we can create our model, which is referred to as an Impulse. Just navigate to the Impulse Design section and create the basic design for your model as we did in the earlier project.

For this project, we chose the MFCC processing block, which uses Mel Frequency Cepstral Coefficients (MFCCs) as audio features. For the learning block, we chose an artificial neural network (ANN) classifier, and the output features show six possibilities (our labels). The final Impulse design for the project is shown in Figure 6.
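
If you are new to MFCCs, the idea is that they summarize the short-term power spectrum of audio on the mel scale, a perceptual scale on which equal steps sound equally spaced to the human ear. A common conversion from a frequency f in Hz to mels is m = 2595 × log10(1 + f/700), which compresses the higher frequencies and emphasizes the lower range where speech carries most of its information, making MFCCs a natural fit for keyword spotting.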

Figure 6 The project Impulse design: a single MFCC processing block feeds a neural network classifier that we define and train from scratch.

Next, we generate the MFCC features for each data point. To do this, first press the green Save Impulse button on the Impulse Design screen, which gives you access to the MFCC section in the explorer window on the left side of the screen.

Once you open it, the parameters section is shown by default. We kept these parameters at their default values. Clicking the “Save Parameters” button (at the bottom of the screen) redirects you to the Generate Features section.

Once there, click the “Generate Features” button and wait while the system processes the data and generates the MFCC features. This takes a few minutes to complete. Once it is done, you can view a plot of all your data samples in a graphical view, which Edge Impulse calls the Feature Explorer.

In my opinion, it is one of the most powerful features Edge Impulse provides, because you can visualize how well your model will be able to differentiate between the samples. If everything is done as instructed, you should see a graph like the one in Figure 7, with very distinct separation between the six classes (in this case, the noise and unknown samples show poorer separation, as they are non-human sounds).

Figure 7 The Feature Explorer graphical interface. There should be considerable distance between each sample group, as that indicates the model will be able to distinguish between the samples using the generated features.

Finally, we are now ready to train our model. Since we train a classifier from scratch rather than using transfer learning, move to the classifier (learning block) option in the left-side menu, which will open a new page. Here, you need to specify the necessary parameters for model training.

Just select the Neural Network settings as follows:

Number of training cycles: 100
Learning rate: 0.005
Validation set size: 20%
Data augmentation: Yes
Add noise: None
Mask time bands: Low
Mask frequency bands: Low
Warp time axis: No

Please note that the settings of your Impulse are totally up to you. You can experiment with different settings, as I did, to find the best trade-off between accuracy and on-device performance. The latter needs to be taken into account: you may get a very accurate model whose RAM usage and inference time are beyond what your microcontroller can handle.

Next, define the neural network architecture for model training. We created our architecture using the blocks provided instead of using a pre-trained model as we did earlier. We used 2D convolutional layers for this project, so choose the 2D Convolutional option in the architecture presets. The final model architecture is shown in Figure 8. You can use the “Add an extra layer” button at the bottom to add the required layers; the configuration of each layer is visible in the architecture.

Figure 8 The neural network model, which uses 2D convolutional blocks with three convolutional layers.

The setup is now complete, and we can proceed with model training. Click the green “Start Training” button at the bottom of the screen and the process should start; logs are visible in the Training output window at the top-right. Once it is done, you can see the performance statistics in the section below the Training output. Figure 9 shows how the performance stats are presented in Edge Impulse.

Model Testing: We now test our model on the test dataset. Move to the Model testing tab and press the Classify all button. This runs inference on each test data point after generating MFCC features for it, and you should see output similar to what is shown in Figure 9.

Figure 9 The model training output reports accuracy and loss, along with a confusion matrix showing how many correct and incorrect predictions were made.

Model Deployment: This completes the model training section. Next, we deploy the model on the Arduino BLE Sense board. Because we need to add custom code on top of the model output, we create an Arduino library for the model. Go to the Deployment tab, press the Arduino Library option under the Create Library section, select the Quantized (int8) optimization, and press the Build button. This lets you download a ZIP file of your model, which you can import directly as a library in the Arduino IDE. Now that we have the hardware set up and the model trained, it’s time to write some code!
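
The import is done through Sketch > Include Library > Add .ZIP Library in the Arduino IDE. The downloaded library exposes a single header named after your Edge Impulse project; the header name below is a hypothetical example, and yours will match your own project name:

// Hypothetical header name - Edge Impulse generates
// <your_project_name>_inferencing.h inside the ZIP file
#include <Smart_Home_Voice_Control_inferencing.h>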

CODE DEVELOPMENT AND EXPLANATION

The full code for both the Arduino BLE Sense and the ESP32 board can be found in the GitHub repository for this project. The ESP32 code is pretty simple, and I have added detailed comments to it for easier understanding. Here, we will focus on the additions made to the model inferencing code for the Arduino BLE Sense. We have to add the following things:

  • Add BLE capabilities, which will allow the board to write values in a particular BLE characteristic. The ESP32 will read the updated value and change the state of its relays accordingly.
  • Logic to infer the model output label and write a particular value to a BLE characteristic.

For this project, we do this for two relays and create two BLE characteristics, one for each switch. To make it easier to distinguish between the TinyML library code and the code I added, I marked each added section with a Code Addition comment so you do not get confused. So, let’s start from the beginning. First, we change the number of slices per model window to 1 instead of its default value of 3, which means the model will run inference on a full 1-second window. Next, in addition to the existing library imports, we add the ArduinoBLE and SFE_MicroOLED libraries. Finally, we declare all the variables used in the code and initialize the OLED object. We also define UUIDs for the service and the characteristics we would like to use; the same UUIDs must be used in the ESP32 code to establish the BLE connection (Listing 1).

Next, we add some code in the setup function to start the BLE service on our Arduino BLE Sense. The code will wait in this section until the BLE module starts (Listing 2). We also initialize the OLED and print “Circuit Cellar” on it once. Now to the main part, which defines the logic of this project. First, in the loop function, we connect to the ESP32 BLE service using the ESP32uuid defined earlier and then attach to both switch characteristics defined within that service, creating an object for each characteristic that can be used to access its value. The code to achieve this is given in Listing 3.

Now we develop the main logic of this project: writing a value to a characteristic based on the model output label. We chose a confidence threshold of 60% (0.6), so if the model is at least 60% certain that the heard audio matches one of the four command labels, we write the value to that label’s characteristic. Listing 4 shows the code for one label; the same pattern is duplicated for the other labels, changing only the value written or the characteristic it is written to. After that, based on the flag values set in this logic, we compute a choice value and print the state of both switches on the OLED display. We add the control logic for the remaining labels in the same manner, in else-if blocks. That is all for the code changes. The logic for printing the switch status on the OLED display is a simple if/else-if block; once we have the choice value, we call the printoled function. Both the logic and the function code are given in Listing 5.

LISTING 1
Importing the required libraries and declaring the necessary variables, constants, and objects.

// Constants
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 1
// Library Imports
#include <ArduinoBLE.h>
#include <Wire.h>
#include <SFE_MicroOLED.h>
#define DC_JUMPER 1
#define PIN_RESET 9
// BLE constant declarations (use these same UUIDs in the ESP32 code)
const char* ESP32uuid = "99dcc956-e6e1-4fd8-800b-8d3ea14b308a";
const char* Switch1CharacteristicUuid = "5ed88eac-d876-4e57-85c0-1391af164cbe";
const char* Switch2CharacteristicUuid = "d6b3f3d6-1ddb-4bcd-a50a-d5550d8d542b";
String switch1message = "";
String switch2message = "";
MicroOLED oled(PIN_RESET, DC_JUMPER); // I2C declaration
int SCREEN_WIDTH = oled.getLCDWidth();
int SCREEN_HEIGHT = oled.getLCDHeight();
int flag1 = 0;
int flag2 = 0;
int choice = 0;

LISTING 2
Initializing the BLE service and the OLED display, and printing the welcome message on the display.

// Starting the BLE service
if (!BLE.begin()) {
    Serial.println("* Starting Bluetooth® Low Energy module failed!");
    while (1);
}

BLE.setLocalName("Nano 33 BLE (Central)");
BLE.advertise();
// OLED code
oled.begin();
oled.clear(ALL);
oled.display();
oled.setFontType(0);
oled.setCursor(0,0);
oled.print("Circuit\nCellar");
oled.display();
delay(2000);
oled.clear(PAGE);
Serial.println(" OLED Online ");
LISTING 3
Searching for our smart home system and, once connected, defining two characteristics, one for each switch.

// BLE code for this project
// Define an object for the BLE device
BLEDevice peripheral;
Serial.println("- Discovering peripheral device...");
do
{
    BLE.scanForUuid(ESP32uuid);
    peripheral = BLE.available();
} while (!peripheral);

if (peripheral) {
      Serial.println("* Peripheral device found!");
      Serial.print("* Device MAC address: ");
      Serial.println(peripheral.address());
      Serial.print("* Device name: ");
      Serial.println(peripheral.localName());
      Serial.print("* Advertised service UUID: ");
      Serial.println(peripheral.advertisedServiceUuid());
      Serial.println(" ");
      BLE.stopScan();
}
// Connect and discover attributes before accessing any characteristics
if (!peripheral.connect() || !peripheral.discoverAttributes()) {
    Serial.println("* Connecting to peripheral failed!");
    return;
}
// Define the characteristics of each switch to which we will write the values
BLECharacteristic Switch1Characteristics = peripheral.characteristic(Switch1CharacteristicUuid);
BLECharacteristic Switch2Characteristics = peripheral.characteristic(Switch2CharacteristicUuid);
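
One optional safeguard (not shown in the original listing): if attribute discovery fails or a UUID does not match, peripheral.characteristic() returns an invalid object, so a quick check before using the characteristics can save debugging time:

// Bail out if either switch characteristic was not found
if (!Switch1Characteristics || !Switch2Characteristics) {
    Serial.println("* Switch characteristics not found - check the UUIDs!");
    peripheral.disconnect();
    return;
}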
LISTING 4
Code to run inference on the collected data sample from the Arduino BLE Sense board and publish values to the corresponding switch characteristics.

// Main project logic
// Keeping the confidence threshold at 60%
// Note: the label is a C string, so compare it with strcmp() rather than ==
if (strcmp(result.classification[ix].label, "lights on") == 0 && result.classification[ix].value > 0.6)
        {
          if (peripheral.connected()) {
            Serial.println("Turning on the Lights!");
            const char* msg = "1";
            Switch1Characteristics.writeValue(msg);
            flag1 = 1;
          }
        }
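
For reference, the matching block for the “lights off” phrase might look like the following. This is a hypothetical duplicate of Listing 4 in which only the printed message, the written value, and the flag change; the label string must match the exact label used during training.

// Hypothetical else-if block for the "lights off" label
else if (strcmp(result.classification[ix].label, "lights off") == 0 && result.classification[ix].value > 0.6)
        {
          if (peripheral.connected()) {
            Serial.println("Turning off the Lights!");
            const char* msg = "0";
            Switch1Characteristics.writeValue(msg);
            flag1 = 0;
          }
        }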
LISTING 5
Code for OLED display logic, which shows the live state of the smart home system switches.

// Logic
// Get the value of choice for the OLED display
    if ((flag1 == 0) && (flag2 == 0)) {
      choice = 1;
    }
    else if ((flag1 == 1) && (flag2 == 0)) {
      choice = 2;
    }
    else if ((flag1 == 0) && (flag2 == 1)) {
      choice = 3;
    }
    else if ((flag1 == 1) && (flag2 == 1)) {
      choice = 4;
    }
    printoled(choice); // Function call
// Function code
void printoled(int val)
{
  switch (val)
  {
    case 1:
      delay(200);
      oled.setFontType(0);
      oled.setCursor(0,0);
      oled.clear(PAGE);
      oled.print("Switch 1: ");
      oled.print(" OFF");
      oled.print("\nSwitch 2: ");
      oled.print(" OFF");
      oled.display();
      break;
    case 2:
      delay(200);
      oled.setFontType(0);
      oled.setCursor(0,0);
      oled.clear(PAGE);
      oled.print("Switch 1: ");
      oled.print(" ON");
      oled.print("\nSwitch 2: ");
      oled.print(" OFF");
      oled.display();
      break;
    case 3:
      delay(200);
      oled.setFontType(0);
      oled.setCursor(0,0);
      oled.clear(PAGE);
      oled.print("Switch 1: ");
      oled.print(" OFF");
      oled.print("\nSwitch 2: ");
      oled.print(" ON");
      oled.display();
      break;
    case 4:
      delay(200);
      oled.setFontType(0);
      oled.setCursor(0,0);
      oled.clear(PAGE);
      oled.print("Switch 1: ");
      oled.print(" ON");
      oled.print("\nSwitch 2: ");
      oled.print(" ON");
      oled.display();
      break;
    default:
      delay(200);
      oled.setFontType(0);
      oled.setCursor(0,0);
      oled.clear(PAGE);
      oled.print("Circuit\nCellar");
      oled.display();
  }
}

Now upload the code to the Arduino BLE Sense and the ESP32 board through the Arduino IDE. Once that is done, you can control your ESP32 smart home system’s relays using just voice commands.

Congratulations!! You have completed this project.

The next part of this series will cover Building a Home Security System with Face Recognition. We will use yet another development board, the Sony Spresense, along with its camera module, to build a home security system that can recognize faces and perform actions based on the output. I hope you found this article informative and valuable.

RESOURCES
Arduino Nano 33 BLE Sense | https://docs.arduino.cc
Edge Impulse | www.edgeimpulse.com
TinyML | www.tinyml.org

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • AUGUST 2022 #385


Dhairya Parikh is an electronics engineer and an avid project developer. Dhairya makes projects that can bring a positive impact to people’s lives by making them easier. He is currently working as an IoT engineer, and his projects mainly relate to IoT and machine learning. Dhairya can be contacted at dhairyaparikh1998@gmail.com.
