Prototype AI Apps with SenseCAP K1100

Part 1: Model Training and Deployment

Microcontrollers (MCUs) today can run deep learning computer vision algorithms at the edge—something that felt far out of reach just a few years ago. In this article, I discuss how to get started with your own artificial intelligence (AI) app prototype, using Seeed Studio’s SenseCAP K1100 Kit.

Artificial Intelligence (AI) at the edge has seen a considerable breakthrough recently, especially regarding the use of microcontrollers (MCUs). Not long ago, a Linux-capable minicomputer was required to run deep learning computer vision algorithms at the edge. Today, it’s possible to run those algorithms on relatively small MCUs. Many (including me) didn’t see that coming a few years ago. This development opens the door to a number of interesting applications.

In this article, I discuss the use of Seeed Studio’s SenseCAP K1100 Kit for prototyping AI-based Internet of Things (IoT) applications. The kit comes with an assortment of devices to let you easily get started prototyping IoT applications using sensors and Wi-Fi/Bluetooth/LoRa wireless communication. Particularly exciting is the possibility of combining all of the above with embedded deep learning computer vision. The kit is advertised by the manufacturer as having a fast Artificial Intelligence of Things (AIoT) application deployment track, great extensibility with more than 400 “Grove” sensors to support many application and customization options, broad integration with cloud services, and an open-source programming platform compatible with beginner-friendly platforms such as Arduino. The kit comes with prototyping-friendly sensors out-of-the-box, but they can be upgraded to industrial-grade “SenseCAP” sensors if needed.

In Part 1 of this series, I will briefly test and review the kit’s deep learning computer vision capabilities, following the product’s documentation for training and deploying a custom object detection model. To understand what will be discussed here, you’ll need to have some familiarity with deep learning models, transfer learning, datasets, model evaluation parameters, and the Arduino development environment.

HARDWARE

Figure 1 shows the devices included with the SenseCAP K1100 Kit. From the top left to the bottom right, it comes with: (a) a Wio Terminal controller; (b) a Wio-E5 LoRa wireless communications module; (c) an SHT40 Temperature and Humidity sensor; (d) a Soil Moisture sensor; (e) a Vision AI sensor; and (f) an SGP30 volatile organic compound (VOC) and carbon dioxide (CO2) sensor, along with an assortment of patch cables. All mentioned modules are compatible with Seeed’s prototyping connector system, Grove. Let’s discuss the main characteristics and specifications of each device.

FIGURE 1
SenseCAP K1100 Kit components. a) Wio Terminal, b) Wio-E5 LoRa module, c) SHT40 Temperature and Humidity sensor, d) Soil Moisture sensor, e) Vision AI sensor, f) SGP30 VOC and CO2 sensor

The Wio Terminal controller is an MCU module based on a Microchip ATSAMD51P19 MCU harnessing an ARM Cortex-M4F core that runs at 120MHz. It includes Bluetooth and Wi-Fi wireless connectivity powered by a Realtek RTL8720DN transceiver. It also integrates a 2.4” LCD Screen, an STMicroelectronics LIS3DHTR Inertial Measurement Unit (IMU), a microSD card slot, microphone, buzzer, light sensor and an infrared emitter. The Wio Terminal comes with two Grove ports and a Raspberry Pi-compatible 40-pin GPIO port. Using its two Grove ports, the Wio Terminal can interface with more than 400 Grove sensor modules available from the manufacturer.

The Grove Wio-E5 LoRaWAN module is based on an STMicroelectronics STM32WLE5JC, an ultra-low-power, 32-bit ARM Cortex-M4 MCU running at 48MHz. It gets its LoRa capabilities from a Semtech SX126x long-range, low-power LoRa radio frequency transceiver that supports the LoRaWAN protocol on the EU868/US915 (868MHz/915MHz) frequency bands, with a range of up to 10km (line of sight).

The Grove SGP30 VOC and eCO2 Gas Sensor is an air quality sensor based on the Sensirion SGP30. It measures total volatile organic compounds (TVOC) and CO2 equivalent (CO2eq) with great long-term stability and low power consumption.

The Grove Soil Moisture Sensor is a low-cost resistive sensor that indirectly measures soil moisture by measuring soil electric resistance. Its sensing element has two probes that, when buried in soil, allow current to pass through the soil to measure a resistance value that roughly corresponds to soil moisture level. This type of sensor is not very accurate, but it’s sufficient to measure soil moisture for plants.

The Grove SHT40 Temperature and Humidity Sensor module is based on the Sensirion SHT40 Temperature and Relative Humidity Sensor. It uses I2C communications to interface with external devices, and has a temperature accuracy of ±0.2°C and a humidity accuracy of ±1.8% relative humidity (RH). The sensor works within ranges of 0% to 100% RH and -40°C to 125°C.

Finally, the Grove Vision AI Module for computer vision at the edge is based on a Himax HX6537-A processor. This is an ultra-low-power, high-performance MCU designed for battery-powered TinyML applications. This processor embeds a powerful 400MHz ARC EM9D digital signal processor (DSP) core, which comes with a floating-point unit and XY local data memory architecture that accelerate convolution operations (typical of neural network algorithms). The module is also equipped with an OmniVision OV2640 CMOS image sensor, which has its own System-on-Chip (SoC) processor that supports auto-exposure and auto-white balance for capturing good quality images.

The Grove Vision AI Module is tiny (40mm x 20mm x 13mm) and can run pre-flashed machine learning (ML) algorithms by itself. It supports “YOLOv5” customized models for image classification, object detection, and image segmentation.

SOFTWARE

The Wio Terminal controller can be programmed using Arduino, CircuitPython, MicroPython and ArduPy. It also comes with a serial port (UART) AT command interface to interact with it. It is supported by Seeed Studio’s SenseCraft open-source software platform, which is promoted by the manufacturer as “a complete out-of-the-box solution to sense the real-world, process data and send the data to the cloud in the easiest and fastest way possible with no coding experience at all” [1].

The Wio Terminal comes factory pre-programmed with SenseCraft firmware, which implements several key functions without the need to write and compile any code. The following are some of the functions available in the SenseCraft firmware that provide out-of-the-box functionality with the SenseCAP K1100 Kit:

  • Basic user interface interaction: It lets you switch pages on the LCD screen between the “Sense,” “Process,” and “Uplink” functions.
  • Real-time data display: It displays real-time sensor data on the homescreen.
  • LoRa communication (SenseCAP): It uploads sensor data to the cloud via LoRa.
  • Automatic sensor recognition: It automatically recognizes any supported sensor connected to the Wio Terminal and displays on the screen its status and data in real time.
  • Grove Vision AI Module logs: It displays in real time training logs from the Grove Vision AI Module.
  • Connect to Ubidots via Wi-Fi: It lets you bind the Wio Terminal to a specified Ubidots account. In a config file, the user fills in the Wi-Fi credentials and the Ubidots binding code. Then, the Wio Terminal automatically binds itself to that account.
  • LoRa communication information panel display: It displays in real time the radio signal strength and the number of successfully transferred packets.
  • Connect to Azure IoT Central via Wi-Fi: It lets users connect to the Azure IoT Central platform via Wi-Fi, and display data.
  • Live sensor line chart: Displays sensor data as line charts in real time on the screen, one sensor at a time. You can scroll left and right through all available sensors.
  • Save sensor data to SD card: Under each sensor type, it displays a switch that controls the data-saving function of that sensor to the SD card.
  • Automatic screen rotation: It automatically rotates the screen orientation up and down (left and right not supported).
  • LoRa and Wi-Fi connection status: It displays a button that, when pressed, shows a pop-up window indicating whether the device is connected or disconnected.
  • Threshold-enabled data filtering: The user can set min/max thresholds to activate or deactivate the data-sending function to the cloud.

Aside from this pre-compiled firmware, you can always craft any required functionality by writing, compiling, and flashing your own programming code. SenseCraft’s GitHub page has steps to set up a development environment based on Microsoft Visual Studio Code and the PlatformIO extension [1].

VISION AI MODULE

One of the SenseCAP K1100 Kit’s more exciting features is the ability to do AI-powered computer vision using the Grove Vision AI Module. The module’s included OV2640 camera sensor has a maximum resolution of 1600×1200 pixels, although a lower resolution of 192×192 pixels is typically used to maximize object detection inference speed. The module also carries an onboard MSM261D3526H1CPM microphone and an STMicroelectronics LSM6DS3TR-C 3D digital accelerometer/gyroscope. These extra onboard sensors expand the possibilities for other ML applications involving sound and inertial measurement data.

The Grove Vision AI Module can also be used with other MCU modules from Seeed Studio, not just the Wio Terminal: for instance, the XIAO SAMD21, XIAO RP2040, and XIAO BLE, or any other MCU for that matter.

Training and deploying object detection models with the Vision AI Module is easy and can be done within minutes. The official Wiki page [2] explains the process in detail. The workflow involves using Roboflow services to annotate dataset images, the PyTorch-based Ultralytics YOLOv5 object detection algorithm for training, and the TensorFlow Lite platform for inferencing. Google’s Colaboratory (Google Colab) online platform is used for model training. If you’re not familiar with these tools, let me briefly explain each of them.

Roboflow is a company that offers services to help customers easily build computer vision applications. One of its services is an online annotation tool that lets you annotate images in a training dataset and export the labeled dataset into different formats, including YOLOv5 PyTorch, which is used with the Vision AI Module. Roboflow also offers free downloadable public image datasets for training deep learning models.

An acronym for “You Only Look Once,” YOLO is an algorithm that detects and recognizes various objects in an image in real time. YOLOv5 is a version of the YOLO algorithm by Ultralytics that’s based on the PyTorch framework. Using this version, Seeed Studio has developed its own pre-trained model that fits well with Seeed Studio’s AIoT hardware and devices.

TensorFlow Lite is an open-source deep learning framework that makes it possible to obtain trained models optimized for speed or storage, suitable for deep learning inference at the edge. These models can be deployed on edge devices like Android or iOS mobile devices, Linux-based embedded devices like Raspberry Pi, or MCUs.
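As an aside, and to make the idea of a TensorFlow Lite conversion more tangible, here is a minimal, generic sketch of converting a saved TensorFlow model to an int8-quantized TensorFlow Lite file. This is not the export path used by the kit (the training notebook shown later in Listing 1 calls YOLOv5’s own export.py script); the model path and calibration images below are placeholders.

import numpy as np
import tensorflow as tf

def representative_images():
    # Yield a few sample inputs so the converter can calibrate int8 ranges
    for _ in range(100):
        yield [np.random.rand(1, 192, 192, 3).astype(np.float32)]

# Placeholder path to a TensorFlow SavedModel
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
with open("model-int8.tflite", "wb") as f:
    f.write(tflite_model)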

There are basically two options to begin prototyping object detection models with the Grove Vision AI Module:

  • Train and deploy a custom AI model with a public dataset.
  • Train and deploy a custom AI model with your own dataset.

The documentation on Seeed Studio’s Wiki [2] outlines the necessary steps for each one of these options.

The first option is the fastest, and it will take you only a few minutes to get your first object detection model up and running. Basically, you just have to download a publicly available dataset from Roboflow into a Google Colab Python notebook, perform the training, download the trained model, and store it in the Vision AI Module. The required Google Colab notebook is also readily available.

The second option will take you more time because you must first collect your custom dataset. That means putting together a set of images (at least 1,500 for decent results) containing the object you want to detect. Then, you must annotate them all using the Roboflow online annotation tool. After you have completed these preliminary steps, you can now follow basically the same procedure described for option 1 to train and deploy your custom model.
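If you go the custom-dataset route, it’s worth sanity-checking the export before training. The short sketch below assumes the dataset was exported in Roboflow’s “YOLOv5 PyTorch” format with the usual train/valid/test layout; the dataset path is a placeholder, and the folder names may differ for your particular export.

from pathlib import Path

dataset_dir = Path("/content/datasets/dogs-1")  # placeholder path

for split in ("train", "valid", "test"):
    images = list((dataset_dir / split / "images").glob("*.jpg"))
    labels = list((dataset_dir / split / "labels").glob("*.txt"))
    print(f"{split}: {len(images)} images, {len(labels)} label files")

# Each label file holds one line per annotated object:
# <class_id> <x_center> <y_center> <width> <height>, all normalized to 0..1
sample = next((dataset_dir / "train" / "labels").glob("*.txt"), None)
if sample:
    print(sample.read_text())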

DOG DETECTION MODEL

Now, I’m going to describe how I trained a custom model to detect dogs. In Part 2 of this series, I’ll use this model to detect my pet dog and give him pre-recorded audio commands. Here I roughly follow SenseCAP K1100 Kit’s documentation page for training and deploying models with public datasets.

First, download a dataset from Roboflow’s “Universe,” a resource where you can find publicly available datasets for training your models. At the time I was working on my prototype, the Roboflow Universe web page had basically two datasets suitable for training a dog detection model: the “united-dogs” dataset with 1,980 images, and the “dogs-avx0k” dataset with 1,417 images. I briefly tested both to train my dog detection model. Interestingly, with the larger “united-dogs” dataset I obtained trained models with maximum precisions of around 60%, while with the smaller “dogs-avx0k” dataset I obtained precisions of approximately 90%. So, I chose to move forward with the second dataset.

There are various metrics to evaluate a trained model: “Confusion Matrix,” “Accuracy,” “Precision,” “Recall,” and “Mean Average Precision” (mAP), among others [3]. For a quick prototype, I chose Precision (instead of, say, mAP), because I was initially most interested in how trustworthy the model is when it classifies a sample as positive. Precision will also help readers who aren’t deeply familiar with model evaluation metrics to more easily follow the model evaluation in this article. Put simply: when a model has high Precision, it’s very reliable when it classifies a sample as “positive”; that is, it will be correct with a high degree of confidence. But it may still completely miss some objects of the same type and leave them undetected. Because I only need the system to detect the dog at least once while it’s in the camera’s field of view, I considered Precision a reasonable metric for evaluating my model for the purposes of this article.
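For readers who want to see those definitions in code, here is a minimal sketch with made-up detection counts, only to illustrate the Precision/Recall trade-off just described:

# Made-up counts, only to illustrate the Precision/Recall trade-off
true_positives = 90    # detections that really were dogs
false_positives = 10   # "dogs" detected that were something else
false_negatives = 40   # real dogs the model missed entirely

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"Precision: {precision:.2%}")  # 90.00%: detections are trustworthy
print(f"Recall: {recall:.2%}")        # 69.23%: but many dogs go undetected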

At the Roboflow Universe web page, you have two choices to download datasets. On the one hand, you can download all images in a ZIP file to your local computer. Normally you’ll choose this option if you plan to train the model on your own machine. To do this, a powerful graphics processing unit (GPU) is ideal.

On the other hand, you can use the Google Colab environment, which runs on servers that include deep learning hardware processors, free of charge. To do this, copy a Python code snippet from Roboflow’s site that downloads the images directly to a Google Colab environment (Figure 2). I chose this method because I don’t have a computer with a sufficiently powerful GPU. Besides, if you want to use your own hardware, you also must set up the required software environment to train models—a task that has its own quirks.

FIGURE 2
Python code to download the “dogs-avx0k” dataset

Training a model on Google Colab is easy. Seeed Studio provides a prepared workspace with a Python notebook containing all code and instructions outlining the necessary steps, which are:

  • Set up the training environment.
  • Download your dataset.
  • Perform the training.
  • Download the trained model and copy it to the Vision AI module.

Listing 1 shows the Python code from the Google Colab notebook to perform each one of these steps.

LISTING 1
Python code to perform the model training

1. """**Step 1.** Choose **GPU** in **Runtime** if not already selected
2. 
3. **Step 2.** Clone repo, install dependencies and check PyTorch and GPU"""
4. !git clone https://github.com/Seeed-Studio/yolov5-swift  # clone
5. %cd yolov5-swift
6. %pip install -qr requirements.txt  # install dependencies
7. 
8. import torch
9. import os
10. from google.colab import files
11. print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
12. 
13. """**Step 3.** Set up environment"""
14. os.environ["DATASET_DIRECTORY"] = "/content/datasets"
15. 
16. """**Step 4.** Copy and paste the displayed code snippet from Roboflow on to the code cell below"""
17. # copy and paste the code here and make sure it follows the same format as below
18. !pip install roboflow
19. from roboflow import Roboflow
20. rf = Roboflow(api_key="YOUR API KEY HERE")
21. project = rf.workspace().project("YOUR PROJECT")
22. dataset = project.version("YOUR VERSION").download("yolov5")
23. 
24. # this is the YAML file Roboflow wrote for us that we're loading into this notebook with our data
25. %cat {dataset.location}/data.yaml
26. 
27. """**Step 5.** Download a pre-trained model suitable for our training"""
28. !wget https://github.com/Seeed-Studio/yolov5-swift/releases/download/v0.1.0-alpha/yolov5n6-xiao.pt
29. 
30. """**Step 6.** Start training"""
31. !python3 train.py --img 192 --batch 64 --epochs 100 --data {dataset.location}/data.yaml --cfg yolov5n6-xiao.yaml --weights yolov5n6-xiao.pt --name yolov5n6_results --cache
32. 
33. """**Step 7.** Export TensorFlow Lite file"""
34. !python3 export.py --data {dataset.location}/data.yaml --weights runs/train/yolov5n6_results/weights/best.pt --imgsz 192 --int8 --include tflite
35. 
36. """**Step 8.** Convert TensorFlow Lite to UF2 file"""
37. 
38. # Place the model at index 1
39. !python3 uf2conv.py -f GROVEAI -t 1 -c runs/train/yolov5n6_results/weights/best-int8.tflite -o model-1.uf2
40. %cp model-1.uf2 ../
41. 
42. """**Step 9.** Download the trained model file"""
43. files.download("/content/model-1.uf2")
44. 
45. """The above is the file that we will load into the SenseCAP A1101 / Grove - Vision AI Module to perform the inference!"""
46. 
47. """Download training results"""
48. files.download("runs/train/yolov5n6_results/results.csv")

Lines 4-11 clone Seeed’s 'yolov5-swift' GitHub repository, install the required dependencies, and check the PyTorch and GPU properties. Line 14 sets up the dataset folder for the environment. Lines 18-22 contain the code snippet copied from Roboflow’s website when choosing the dataset (Figure 2). Line 28 downloads Seeed’s pre-trained YOLOv5 model, to which the transfer learning technique will be applied to re-train and “adapt” it to detect a new object type.

Line 31 runs the training. The first argument, --img, sets the input image size in pixels; the second, --batch, sets the batch size (the number of training samples used in one training iteration); and the third, --epochs, defines the number of training epochs (an epoch is one full cycle through the training dataset). Line 34 exports the resulting TensorFlow Lite model file, and line 39 converts it to a UF2 file. UF2 is a file format developed by Microsoft that the Grove Vision AI Module uses. Line 43 downloads the UF2 file to your local computer. I added line 48 to the original script to download the results.csv file, which contains data about the training results for every epoch. Figure 3 shows an excerpt of the results.csv from one of my training sessions. I used these .csv files to obtain the Precision graphs in Figure 4.
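If you want to reproduce graphs like the ones in Figure 4, a few lines of Python are enough. The sketch below assumes the YOLOv5-style column names in results.csv (for example, "metrics/precision"), which are typically padded with spaces, hence the strip() call; adjust the names if your file differs.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")
df.columns = [c.strip() for c in df.columns]  # YOLOv5 pads column names with spaces

plt.plot(df["epoch"], df["metrics/precision"])
plt.xlabel("Epoch")
plt.ylabel("Precision")
plt.title("Precision vs. epoch")
plt.show()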

FIGURE 3
Excerpt from the ‘results.csv’ from one of the training sessions

In line 31 you can change the --epochs parameter to gradually increase the default of 100 and compare results. For my tests, I trained the model with 100, 200, 300, 400, 500, and 600 epochs. Depending on the dataset, more epochs can improve model precision, but sometimes additional epochs don’t improve the model at all. In fact, sometimes more epochs decrease precision. It’s a matter of trial and error. Figure 4 shows the precision versus epoch graphs from 100 to 600 training epochs. As you can see, precision improves in graphs a) to c) (epochs 100–300), and plateaus in d) and e) (epochs 400–500). In f) (600 epochs) it even starts to decrease around the 450th epoch.

FIGURE 4 
“Precision” vs. epoch for the training sessions

Table 1 shows the “Best Precision,” “Best Fitness,” and “Best Precision for Best Fitness” obtained in each of the six cases. As can be seen, I obtained the best precision with the 200-epoch training session, and the best fitness with 600 epochs. In fact, “Fitness” is the evaluation metric that the training code uses to choose the best model. It is defined as [4]:

fitness = 0.1*[mAP@0.5] + 0.9*[mAP@0.5:0.95]

TABLE 1
“Best precision”, “best fitness” and “best precision for best fitness” for each training session

Here, the mAP@ terms are the Mean Average Precisions over different Intersection-over-Union thresholds [5]. mAP is not too important for the purposes of this article, because we’re using Precision for a more intuitive evaluation of the resulting models. However, because the code uses Fitness to choose the best model, if I pick the best fitness obtained across all training sessions (52.03%, from 600 epochs), the corresponding precision is 84.00% (see the Best Precision for Best Fitness column in Table 1). That’s because Fitness balances Precision against other metrics that trade off against it (such as Recall). So, if we are mainly interested in Precision, and without further configuration of the training environment, a good compromise is to choose the results from 500 epochs (51.61% Fitness and 89.37% Best Precision for Best Fitness). When adding more epochs no longer improves precision and/or fitness, one way to make further gains is to use more training data by adding more annotated pictures to the dataset.
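To make the Fitness discussion concrete, the following sketch computes Fitness per epoch from the mAP columns of results.csv and reports the epoch that the training code would pick as “best.” As before, the YOLOv5-style column names are an assumption; adjust them if your results.csv differs.

import pandas as pd

df = pd.read_csv("results.csv")
df.columns = [c.strip() for c in df.columns]  # YOLOv5 pads column names with spaces

# fitness = 0.1*mAP@0.5 + 0.9*mAP@0.5:0.95, as defined above
fitness = 0.1 * df["metrics/mAP_0.5"] + 0.9 * df["metrics/mAP_0.5:0.95"]
best_row = int(fitness.idxmax())

print(f"Best fitness {fitness.max():.2%} at epoch {df['epoch'][best_row]}")
print(f"Precision at that epoch: {df['metrics/precision'][best_row]:.2%}")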

TESTING THE MODEL

To test the model, first you must flash the UF2 file from the training process to the Grove Vision AI Module. The module enumerates as a USB drive, so this is as easy as copying the file onto it (see the documentation page). Next, the object_detection.ino example from the Seeed_Arduino_GroveAI library must be flashed to the Wio Terminal, which implies, of course, that you must first install that library in your Arduino IDE environment. Listing 2 shows an excerpt from that example, which I modified to add Wi-Fi client capabilities. When the system detects a dog, it sends an HTTP GET request to an Espressif ESP32-based web server to play a given audio file. The server receives the request and plays the audio file through an I2S audio amplifier. This functionality will be discussed in more detail in Part 2 of this series.

LISTING 2
Arduino code for the Wio Terminal

1. void loop() {
2.   if (state == 1) {
3.     uint32_t tick = millis();
4.     if (ai.invoke()) { // begin invoke
5.       while (1) { // Wait until invoke is finished
6.         CMD_STATE_T ret = ai.state();
7.         if (ret == CMD_STATE_IDLE) { break; }
8.         delay(20);
9.       }
10. 
11.       uint8_t len = ai.get_result_len(); // Receive # of detections
12.       if (len) {
13.         digitalWrite(LED, HIGH); // Turn external LED on
14.         Send_Request(); // Send HTTP GET request to web server
15. 
16.         int time1 = millis() - tick;
17.         Serial.print("Time elapsed: "); Serial.println(time1);
18.         Serial.print("Number of dogs: "); Serial.println(len);
19.         object_detection_t data; // Get data
20. 
21.         for (int i = 0; i < len; i++) {
22.           Serial.println("result:detected");
23.           Serial.print("Detecting and calculating: "); Serial.println(i+1);
24.           ai.get_result(i, (uint8_t*)&data, sizeof(object_detection_t)); // Get result
25. 
26.           Serial.print("confidence:"); Serial.println(data.confidence);
27.         }
28.       }
29.       else {
30.         Serial.println("No identification"); digitalWrite(LED, LOW); // Turn external LED off
31.       }
32.     }
33.     else {
34.       delay(1000); Serial.println("Invoke Failed.");
35.     }
36.   }
37.   else { state = 0; }
38. }
39. 
40. void Send_Request() {
41.   Serial.print("Connecting to "); Serial.println(host);
42.   // Use WiFiClient class to create TCP connections
43.   WiFiClient client;
44.   if (!client.connect(host, port)) {
45.       Serial.println("Connection failed.");
46.       Serial.println("Waiting 2 seconds before retrying...");
47.       delay(2000);
48.       return;
49.   }
50.   client.print("GET /H HTTP/1.1\n\n"); // Send HTTP GET request
51. 
52.   // Wait for the server's reply to become available
53.   int maxloops = 0;
54.   while (!client.available() && maxloops < 1000) {
55.       maxloops++; delay(1); // Delay 1 msec
56.   }
57.   if (client.available() > 0) {
58.       String line = client.readString(); // Read the server response
59.       Serial.println(line);
60.   } else {
61.       Serial.println("client.available() timed out");
62.   }
63.   Serial.println("Closing connection.");
64.   client.stop();
65. }

Lines 1-38 comprise the void loop() function from the original object_detection.ino example. I added lines 13-14 to turn on an external LED and send the HTTP GET request to the web server when a dog is detected. Lines 40-65 contain the code that sends the HTTP request, which I adapted from the WiFiClient example in the Seeed_Arduino_rpcWiFi library (installed separately).

In my tests, the trained object detection model performed well most of the time, but there were more false positives and false negatives than I would have liked. That did not surprise me, however, because the dataset I used to train the model is somewhat small. Model performance can be improved by retraining the model with a bigger dataset [6]. There are also other steps that can be taken to improve accuracy and decrease false positives [7].

FIGURE 5
Wio Terminal with the AI Vision module

Figure 5 shows the Wio Terminal with the AI Vision Module and the external LED that turns on every time a dog is detected. Figure 6 shows the web browser and the Arduino serial monitor displaying detection results. When you plug the AI Vision Module into the computer, the operating system pops up a notification prompting you to open the web page that shows the detection image.

FIGURE 6
Web browser and Arduino serial monitor showing detection results

CONCLUSION

The process of training and deploying models with the SenseCAP K1100 Kit is indeed straightforward. The only time-consuming step is the training. For example, the last training session I performed took 1.4 hours for a total of 551 epochs. Adding more images to the dataset will extend the training time.

Despite the less-than-optimal performance of the object detection model I obtained, it was still useful for getting acquainted with the kit and building a first prototype. In Part 2 of this article series, I plan to discuss a couple of options to further improve the dog detection model. I’ll also use this dog detection prototype to detect when my dog tries to mark territory near my backyard door. (Yep, that’s happening and it’s kind of funny—what can I say!) When the system detects my dog, it will trigger an audio warning with my recorded voice to try to dissuade him. Let’s see how that works.

REFERENCES
[1] SenseCraft: https://github.com/Seeed-Studio/SenseCraft
[2] Train and Deploy Your Own AI Model Into Grove – Vision AI: https://wiki.seeedstudio.com/Train-Deploy-AI-Model-Grove-Vision-AI/
[3] Evaluating Deep Learning Models: The Confusion Matrix, Accuracy, Precision, and Recall, https://blog.paperspace.com/deep-learning-metrics-precision-recall-accuracy/
[4] Best Fitness: https://github.com/ultralytics/yolov5/issues/582#issuecomment-667232293
[5] Evaluating Object Detection Models Using Mean Average Precision (mAP): https://blog.paperspace.com/mean-average-precision/
[6] Ultralytics YOLOv5 Tips for Best Training Results: https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results
[7] How to Decrease False Positives: https://github.com/ultralytics/yolov5/issues/666#issuecomment-1047657174

SOURCES
SenseCAP K1100 Kit: https://www.seeedstudio.com/Seeed-Studio-LoRaWAN-Dev-Kit-p-5370.html
Himax HX6537-A processor: https://www.himax.com.tw/product-brief/HX6537.39.40-A_product_brief.pdf
Roboflow Universe: https://roboflow.com/universe

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • June 2023 #395

Raul Alvarez Torrico has a B.E. in electronics engineering and is the founder of TecBolivia, a company offering services in physical computing and educational robotics in Bolivia. In his spare time, he likes to experiment with wireless sensor networks, robotics and artificial intelligence. He also publishes articles and video tutorials about embedded systems and programming in his native language (Spanish), at his company’s web site www.TecBolivia.com. You may contact him at raul@tecbolivia.com
