
A Comprehensive Introduction of TinyML (Part 3)

Written by Dhairya A. Parikh

Building a Home Security System with Face Recognition

As part of this series, we have added voice AI capabilities to our existing projects and controlled a custom-built home automation system using only voice commands. Until now, our projects have dealt exclusively with voice data. Let's widen our scope now.

  • What is TinyML?

  • Why use the Edge Impulse platform for ML-on-the-edge applications?

  • How to build a home security system with face recognition?

  • What hardware would you require?

  • How does visual model training and deployment work?

  • What are the important facts about Edge Impulse setup?

  • How to get our model running locally?

  • Sony Spresense

  • Spresense camera board

  • TinyML model

  • Edge Impulse

  • Arduino Library

  • Silicon Labs CP210x USB to UART Bridge VCP

In this article, we will create a simple home security system that recognizes a person's face. We will be using a new piece of hardware for this project, the Sony Spresense, along with its camera add-on board (Figure 1). This hardware is on Edge Impulse's supported microcontrollers list, so we will be able to use the platform to create a model for this board as well. Moreover, we will deploy the project differently this time, by building ready-to-flash firmware in Edge Impulse and flashing it to our development board. This article is divided into three parts for easier understanding.

Figure 1
The actual project hardware. This is the Sony Spresense (with the expansion board) connected with the add-on camera board. This hardware will run a vision model on the edge!

In the project hardware section, we will get familiar with the hardware used in this project and look at the flowchart that describes how the project works.

In the TinyML model development section, we will create the TinyML model for this project, giving our device the capability to recognize faces and perform certain actions based on that. For this article, I am using two classes: one will be my face, and the other will be unknown (essentially no face), for which we provide training images of surrounding objects in the house.

In the project demonstration section, we learn how to get our model running locally on the Sony Spresense and see the project in action.

PROJECT HARDWARE

In this article, we create a simple hardware schematic wherein we connect our Sony Spresense (equipped with its camera) to a relay module. Before we move on to the circuit schematic, let us familiarize ourselves with the Sony Spresense board to know more about its capabilities and features.

The Sony Spresense (Figure 2) is a very powerful microcontroller board with a six-core ARM Cortex-M4F processor, and it supports high-resolution audio and camera input. The main features of this development board include:


  • Six-core ARM Cortex-M4F CPU
  • 1.5 megabytes RAM
  • 8 megabytes Flash memory
  • Digital I/O support: GPIO, SPI, I2C, UART, I2S
  • Two-channel ADC (0.7V range)
  • On-chip GPS
  • Support for external board connections

For this project, we also need the Spresense camera board (Figure 2) (the normal or HDR one), which can directly connect to the main board. The specifications of the normal camera board include:

  • Size: 24 mm x 25 mm
  • Number of pixels: 5.11 megapixels
  • Operating voltage: 3.7V
  • CMOS 8-bit parallel camera interface
  • Supported Output formats: Y/C, RGB, RAW, and JPEG
  • Control Interface: I2C
  • FOV: 78° ± 3°
  • Depth of field: 77.5 cm ~ ∞
  • Fixed-focus camera

The major advantage of using this development board is the detailed documentation and examples provided by Sony. There are several ways in which we can program the board. Sony has developed its very own Spresense SDK for this as well.

As previously discussed, we will show the model inference output for this article by flashing firmware built for us by Edge Impulse. However, this is just a tutorial to get you started, and there are several possible applications, for example, a home security system that sends an alert when it detects a person, or an automatic lighting system that turns on when it recognizes you. We can even change configurations according to the person recognized!

Figure 3 shows the flowchart for how this project works. Now that the project hardware is set up, we will train the model for this project.
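To make the first stage of that flow concrete, below is a minimal sketch of raw image capture using the Spresense Arduino Camera library. This is not part of the firmware we will flash later (that is generated for us by Edge Impulse); it is only an illustration, and the image size and format chosen here are assumptions you would tune for your own setup.

#include <Camera.h>  // Spresense Arduino Camera library

void setup() {
  Serial.begin(115200);

  // Initialize the camera attached to the main board
  if (theCamera.begin() != CAM_ERR_SUCCESS) {
    Serial.println("Camera init failed");
    return;
  }

  // Configure still capture; QVGA JPEG is an arbitrary example choice
  theCamera.setStillPictureImageFormat(
      CAM_IMGSIZE_QVGA_H, CAM_IMGSIZE_QVGA_V, CAM_IMAGE_PIX_FMT_JPG);

  // Take one still picture (the "raw image capture" step of the flowchart)
  CamImage img = theCamera.takePicture();
  if (img.isAvailable()) {
    Serial.print("Captured ");
    Serial.print(img.getImgSize());
    Serial.println(" bytes");
  }
}

void loop() {}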

Figure 2
A high-resolution image of the Sony Spresense main board and Sony's camera add-on board.
Figure 3
The flowchart for the entire project. It gives a visual explanation of how the project works, from raw image capture to the final inference output.
VISUAL MODEL TRAINING AND DEPLOYMENT

In this section, we will train a TinyML model for our Sony Spresense that classifies a captured image into one of two classes:

  • Face (the label can be anything, even your name, which is what I used)
  • Unknown (anything other than a face)

These labels are customizable, and you can choose any word or phrase you like; I chose these for simplicity. Please note that an image contains far more data than the audio samples we have used so far (a 96×96 RGB input, for example, is 96 × 96 × 3 = 27,648 values per sample). Hence, we will not get a high accuracy score unless we train on a fairly large dataset. For this project, you will be collecting:

  • 50 images of yourself (only the face and upper body)
  • 50 images of anything without people (I chose my own home's furniture)

So, let's start with the first setup.

Edge Impulse Setup: We will be creating a new project on Edge Impulse for this article. Just go to the homepage and log in using your existing credentials. Once you have signed in, you will see a project selection page, where you can click the Create new project button and follow the steps from there.

Once your new project is open in the Edge Impulse Studio, the next step is to connect your Spresense to this project. We will have to flash the Edge Impulse firmware onto the Spresense for this. Once the firmware has been flashed to your Spresense, just open a new CMD or terminal window and type the edge-impulse-daemon command.

It will ask you to select the port to which your device is connected. Please note that this is the same port you used while flashing the firmware. You will be asked for your Edge Impulse credentials as well, so please keep them at hand. Once that is complete, you will have to select the project to which you want to connect your device. Just select the new project we created using the arrow keys and then press Enter. Finally, it will ask for the name you want to give your device, for which you can type in any name of your choice (our choice was very obvious: Sony Spresense). After a few seconds, you should see your device listed in the Devices section of your Edge Impulse project!


Now we can use Edge Impulse to train a model for our Spresense. So, let's get started with Data Acquisition, which in this case means uploading our dataset to Edge Impulse.

Data Acquisition: The data collection process will be a little different for this article. Until now, we have been using our microcontroller for data collection, but this time we will change things a bit, as gathering image data from a constrained device such as a microcontroller is difficult and time-consuming.

Instead, just capture all 100 images directly on your smartphone. It is recommended to take the photos from various angles and in different lighting conditions to make the model more robust to such variations. Once you have taken these images, transfer them from the phone to your PC or laptop.

Once this is done, add these images to the Edge Impulse project so that we can train our model on this data. For that, we will use the Upload existing data feature provided by the platform (we have already used this in Part 1 and Part 2). Open the Upload data tab and, from there, upload the 50 images for each class in a single go. Add the label name for the class accordingly, select the "Automatically split between training and testing" option under the Upload in category section, and then press the Begin Upload button (Figure 4). Once the data for both classes has been uploaded, you will be able to see the data in the Training and Testing data tabs. This concludes the data collection process for this article. Next, we will train our very first vision-based TinyML model!

Model Creation and Training: Now that we have the data for both classes, we can use these images to create and train our very first vision-based TinyML model. The process is the same as before. Go to the Impulse design tab from the menu, which will open the Create Impulse page. Here, we need to specify the size of the image we would like as input. This is a very important parameter, as it decides the input dimension for our model. Because we are using transfer learning for this project, we will keep the image size at 96×96 and the Resize mode as Fit longest axis (to learn more about the different modes, just hover over the image beside the selection box). Next, we will add a processing and a learning block to our impulse. Just click on the Add processing block and Add learning block buttons and select Image and Transfer Learning, respectively. The final impulse design for this project is shown in Figure 5. Now just press the Save Impulse button. This will create a new sub-section named Image under the Impulse design tab. Next, we need to generate the features that will be used for model training.
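As a side note, the choices made on this page are not just UI settings; when you later export the impulse as a C++ or Arduino library (we return to this at the end of the article), they surface as compile-time constants in the generated model metadata. Here is a hedged sketch of what that looks like; the header name follows your Edge Impulse project name, so yours will differ:

// Hypothetical header name; Edge Impulse generates <project_name>_inferencing.h
#include <Face_Recognition_inferencing.h>

void printImpulseSettings() {
  // These macros are defined in the generated model metadata
  ei_printf("Model input: %d x %d pixels\n",
            EI_CLASSIFIER_INPUT_WIDTH,    // 96, as chosen on the Create Impulse page
            EI_CLASSIFIER_INPUT_HEIGHT);  // 96
  ei_printf("Number of classes: %d\n",
            EI_CLASSIFIER_LABEL_COUNT);   // 2: face and unknown
}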

Once you move to the Image section, you will first see the Parameters tab open. Here, you can see the RGB processed features for each image. We need to generate these features for every image, and we have just a single configurable parameter, in this case, color depth. For this project, we will keep the color depth as RGB and then move to the Generate features tab. Just press the Generate features button to start the process. It will take two or three minutes, and once it is complete, you will be able to see the Feature explorer visualization (which should show well-differentiated clusters for both classes) along with performance parameters for on-device feature generation. Please refer to Figure 6 to see what this looks like.

Figure 4
The Upload data section in Data Acquisition is used to add existing data (for both labels) directly to the training and testing sets, so that a TinyML model can be created from that data.
Figure 5
The Impulse design for the project. We have an Image processing block that resizes the raw image and generates image features, and we use transfer learning to train on top of a pre-trained MobilenetV2 model.
Figure 6
The Feature explorer visualization after feature generation shows the separation between the feature values of the different labels. Ideally, they should form finely separated clusters (as is the case in the image). In addition, we also have the performance stats, which show the amount of RAM and Flash needed to generate these features on the Sony Spresense.

Finally, we will train our model. Move to the Transfer Learning block and set the following:

  • Number of training cycles: 20
  • Learning rate: 0.0005
  • Validation set size: 20%
  • Data augmentation: Selected
  • Model: MobilenetV1 96×96 0.25

Once all these have been set, just click the Start training button to start the training process. This will take a few minutes. Once training is complete, you will be able to see all the performance statistics in the same window. Figure 7 shows the stats for my model.

As you can see from the figure, we got an accuracy of 81.3%, which is good performance considering that the model has been trained on only 100 images. The main thing to watch is the on-device performance statistics: when it comes to vision models, the RAM and Flash consumption need to be monitored.

Now, we will test our model by taking some images from our Sony Spresense!

Model Testing: We now have a trained model. To test this, we will use our Sony Spresense board. Just connect the board to your PC and run the edge-impulse-daemon command in the terminal window. In a minute or so, you should see your device online on Edge Impulse.

Go to the Live classification section now. Here, you will be able to capture images from your Spresense and run them through our classifier. You will also be able to see the live camera feed, so once you have the desired snap, just press the Start sampling button. This will take the picture and run it through our classifier. It will finally give you an output with the probability for each class. Please refer to Figure 8 for a detailed view of the output.

Next, we will run our model on the test data to get the testing accuracy score. For this, move to the Model testing section and click on the Classify all button. This will start the feature generation and inferencing process on the entire test dataset, and after two or three minutes, you will see output similar to what you saw after model training was complete. Our model got a testing accuracy of 90.48%, which is acceptable considering the size of the dataset the model was trained on.

This concludes the testing section. Finally, we will deploy this model and run it locally on our Sony Spresense board with the camera attached.

Model Deployment: We will be doing something new for this project and building firmware for the TinyML model we just created. Just go to the Deployment tab, select the Sony Spresense board under the Build firmware section (Figure 9), then select the Quantized (int8) optimization (the accuracy will be slightly lower than with the float model, but it is worth the faster inference time and lower RAM and flash usage), and finally press the Build button.


This should start the build process, which will take a few minutes to complete. Once the firmware has been built, a zip file will be downloaded and a new window will pop up with a video that shows you how to use this firmware with the Spresense board.

You just built custom firmware for your Spresense board. Now, we will see how to test it and see the project in action.

Figure 7
Model performance on the training and testing data. The metric used for this evaluation is accuracy, and we also have a confusion matrix to better understand the performance. Finally, we also have the on-device performance statistics.
Figure 8
The Live classification output shows us the raw image and the generated features, along with the inference output, which gives the classified label and the probability for each label.
Figure 9
We will be creating custom firmware for the Sony Spresense, which will let us deploy and test our model running locally on the Sony Spresense without any coding!
PROJECT DEMONSTRATION

Now, we will flash the firmware onto our Spresense board so that we can run the TinyML model we built on the edge. To do this, just extract the zip file you downloaded to a path of your choice and then, depending on your operating system, run the flashing script (the naming convention is flash_<OS>).

You may face some issues while flashing this firmware. There are two main issues we faced, which we will cover below.

The first time I tried to run the flash_windows.bat file, I got the following error:

AttributeError: module 'collections' has no attribute 'Callable'

After a bit of research, we found that the deprecated collections aliases (such as collections.Callable) were removed in Python 3.10, which was causing this issue. The simplest way to resolve it is to remove Python 3.10.x and then download and install any Python 3.9.x version of your choice. After you have done this, you will have to install the inquirer and pyserial packages using pip again (the commands are shown on the output screen when you execute the .bat file).

After resolving this issue, the flashing process does start, but I then got another error:

excessive protocol errors, transfer aborted

The root cause of this issue is the default baud rate of 921600. To resolve it, you will have to run the flash_writer.py file present in your firmware directory (you will have to cd into it) with a lower baud rate. Just type in the following command:

python -u flash_writer.py -s -d -b 115200 -n edge_impulse_firmware.spk

This will successfully flash the firmware to your Spresense. Now, we can finally see our model in action.

To run the model on the device, open a new command prompt or terminal window, connect your Spresense board (with the camera board attached) to the computer (use the USB port on the main board, not the one on the expansion board), and type in the following command:

edge-impulse-run-impulse --debug

It will then ask you to choose the port to which your Spresense is connected. Check the active ports from the device manager and select the one that says Silicon Labs CP210x USB to UART Bridge. Once you select the port, it will take a few minutes to initialize; then the inferencing will start, and an additional line will be printed:

“Want to see a feed of the camera and live classification in your browser? Go to http://192.168.1.9:4915”

You can see the live camera feed and classification if you open this link in a browser in addition to the inference results you see on the terminal. This is great as you will be able to see the actual image on which the classification is running! Figure 10 shows our Spresense running the TinyML model on the edge.

Figure 10
The project demonstration, showing real-time inference along with the live feed from the Sony Spresense camera. You even get timing metrics that show how long classification takes. As you can see, our Spresense can classify an image within 0.38 seconds!

You can easily configure actions that are triggered based on an inference output, like sending an intruder alert notification or turning on the bedroom lights automatically. To do this, you can simply download a C++ or Arduino library from Edge Impulse and write a script according to your application requirements, as sketched below.
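As a starting point, here is a minimal, hedged sketch of what such a script could look like with the Arduino library, using the Edge Impulse SDK's run_classifier() API. The header name, relay pin, label string, and threshold are all assumptions for illustration, and filling the features buffer with camera pixel data is left out (the library ships with a static_buffer example that shows the mechanics).

// Hypothetical header name; Edge Impulse generates <project_name>_inferencing.h
#include <Face_Recognition_inferencing.h>

#define RELAY_PIN 13        // assumption: relay module wired to this GPIO
#define FACE_THRESHOLD 0.8  // assumption: confidence required to trigger

// One frame of model input; a real application fills this with pixel
// data captured from the Spresense camera.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback that hands slices of the feature buffer to the classifier
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
  pinMode(RELAY_PIN, OUTPUT);
}

void loop() {
  // Wrap the buffer in the signal_t structure the SDK expects
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
  signal.get_data = &get_feature_data;

  // Run the TinyML model on the current frame
  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return;
  }

  // Act on the inference output: drive the relay when the trained face
  // label ("dhairya" here stands in for whatever label you used) is
  // detected with enough confidence
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (strcmp(result.classification[ix].label, "dhairya") == 0) {
      bool seen = result.classification[ix].value > FACE_THRESHOLD;
      digitalWrite(RELAY_PIN, seen ? HIGH : LOW);
    }
  }

  delay(1000);  // classify roughly once per second
}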

This marks the end of this article. In the final article of this series, we will build a model on top of accelerometer data on Edge Impulse. We hope you found this article informative and valuable. Please contact me personally if you have any questions or would just like to connect and chat. 


PUBLISHED IN CIRCUIT CELLAR MAGAZINE • SEPTEMBER 2022 #386


Dhairya Parikh is an electronics engineer and an avid project developer. Dhairya makes projects that can bring a positive impact to a person's life by making it easier. He is currently working as an IoT engineer, and his projects are mainly related to IoT and machine learning. Dhairya can be contacted at dhairyaparikh1998@gmail.com.

