
Prototype AI Apps with SenseCAP K1100

Part 2: Dog Detection and Deterrence

In Part 1 of this article series, I tested and reviewed Seeed Studio’s SenseCAP K1100 Kit for prototyping IoT apps. This month in Part 2, I discuss the prototype I built using a Wio Terminal and a Vision AI Module from the kit, and an ESP32-based audio player/web server. The system detects when my dog tries to “mark territory” in my yard, then plays an audio warning file of my recorded voice to try to distract the dog and stop the behavior.

  • How can I get started with an artificial intelligence app prototype?
  • What is a fun project with the SenseCAP K1100 kit?
  • How can I deploy machine learning on a microcontroller?
  • Seeed Studio SenseCAP K1100 Kit
  • MAX98357A I2S audio amplifier
  • ESP32

In Part 1 of this article, “Model Training and Deployment” (Circuit Cellar 395, June 2023) [1], I briefly tested and reviewed Seeed Studio’s SenseCAP K1100 Kit for prototyping AI-based Internet of Things (IoT) applications. The kit comes with a microcontroller (MCU) module, Wi-Fi/Bluetooth/LoRa wireless communications capabilities, and a set of sensors to let you easily get started prototyping IoT applications. It also comes with the particularly interesting Grove Vision AI module—an Artificial Intelligence (AI) camera for embedded, deep-learning computer vision.

The kit is advertised by the manufacturer as having a fast Artificial Intelligence IoT (AIoT) application deployment track, great extensibility with more than 400 “Grove” sensors to support many application and customization options, broad integration with cloud services, and an open-source programming platform compatible with beginner-friendly platforms such as Arduino.

I tested the kit’s Vision AI module object detection capabilities, along with the Wio Terminal MCU module, by following the product’s documentation for training and deploying a custom, deep-learning model. This month in Part 2, I will discuss the build of my “dog detection and deterrence” prototype. The system detects my pet dog when he tries to mark territory near my backyard door. After the dog is detected, an audio warning with my recorded voice is triggered to try to dissuade him.

To follow what will be discussed here, you need to have some familiarity with concepts such as deep-learning models, transfer learning, datasets, and model evaluation parameters. You also need to have some basic experience implementing Espressif Systems ESP32-based web servers and clients with Arduino. To get additional background regarding the SenseCAP K1100 Kit, please refer to Part 1 [1].


Figure 1 is the block diagram for the dog detection and deterrence system. The object detection sub-system comprises the Wio Terminal and Vision AI Module from the SenseCAP K1100 Kit. A detailed description of the Wio Terminal specifications was given previously [1]. The Wio Terminal is basically a Microchip SAMD51-based controller that includes a display, some push buttons, Bluetooth and Wi-Fi wireless connectivity, a micro SD card slot, a variety of onboard sensors, and an I/O connection header. The Vision AI Module is composed of a camera and a high-performance MCU designed for battery-powered machine learning (ML) applications. The MCU integrates a powerful Digital Signal Processor (DSP) core that accelerates convolution operations, which are typical of neural network algorithms.

Figure 1
Block diagram for the dog detection and deterrence system.

Every time a dog is detected in the camera’s field of view, the Vision AI module sends detection data in ASCII format to the Wio Terminal through an I2C connection. It sends, for instance, the class of the detected object and the confidence score. Then, the Wio Terminal sends an HTTP request to the ESP32-based audio player/web server to play a warning audio file to dissuade the dog. The ESP32 board receives the HTTP request and plays an MP3 audio file randomly selected from a list. The MP3 files in the list contain my recorded voice with sentences to try to dissuade the dog. There are recorded phrases, such as: “Hey, Rigby, what are you doing?” “Where did you leave your angry bird toy? Go and fetch it.” “Good boy!” The idea behind this is that when the dog hears my voice, he will get distracted and deterred from marking territory. Besides, I know for a fact that he never does his business when people are present. So, it’s a way of fooling him into thinking I’m nearby.

I stored these audio files in a local file server I have at home. When the ESP32 server receives the HTTP request, it randomly selects an MP3 file URL from an array, and downloads it from the local file server. Alternatively, the files could be stored on and retrieved from a micro SD card reader attached to the ESP32. The “Arduino Audio Tools” library used to read and play the audio files supports fetching files from a URL or from an SD card. In my case, I avoided using an additional micro SD card module, and just stored the files in my local file server.



The ESP32-based audio player/web server uses the Analog Devices MAX98357A I2S audio amplifier module to play audio. The MAX98357A integrated circuit (IC) operates with power input voltages from 2.5V to 5.5V and delivers 3.2W output power into 4Ω at 5V, with 92% efficiency. It supports sample rates from 8kHz to 96kHz and left, right, or (left/2 + right/2) output schemes. As a transducer to reproduce the sound waves, I connected a regular passive computer speaker to the MAX98357A module’s output.

The original idea was to use the Wio Terminal also to play the audio files, but I couldn’t successfully compile the “Arduino Audio Tools” library with SenseCAP K1100 Kit’s SenseCraft source code. Apparently, Wio Terminal’s SenseCraft source code hardware configurations clash with the AudioTools library configurations. So, after unsuccessfully trying to compile using many source code configuration variations, I decided to use an extra ESP32 board to play the audio files. This approach added the benefit of not having to place the audio-playback sub-system near the object-detection sub-system. I can install both in different places for optimum camera/audio area coverage.


I followed Seeed’s documentation to train the dog detection model. Details on how I did it are given in Part 1 [1], or you can refer directly to the official documentation [2]. The resulting model precision wasn’t very good back then, so I re-trained the model with more pictures. Here’s what I did this time:

On Roboflow’s “Universe” website [3], I created my own Roboflow project, and then cloned to my project all the images from the “dogs-avx0k” dataset I used to train the model the first time [1]. Roboflow lets you do that with open-source datasets on its site; you just have to create a free account. A tutorial on Roboflow’s blog page explains how to do the cloning. (I added the link to it on Circuit Cellar’s Article Materials and Resources webpage.)

After cloning all pictures from the “dogs-avx0k” dataset, I took 122 additional photos of my two pet dogs (Rigby and Eileen) and uploaded them to my Roboflow project. The idea was to improve the dataset and make the trained model more effective at detecting Rigby in particular (he’s the troublesome one, by the way!). Some of the photos had two instances (that is, two dogs in the same picture), and I had to annotate each picture by hand. This is easy to do using Roboflow’s tools, but it still takes some time and patience.

In the first part of this series [1], I used “precision” as the metric to evaluate the resulting re-trained models, because it was conceptually more intuitive to explain and easy to understand. Besides, for this application, it is more important to correctly detect a dog, even if sometimes we fail to detect some dogs in some photos (that’s “precision”), than to always detect every dog present in a photo (that’s “recall”).

To evaluate the re-trained models this time, I will use another metric called “mAP 0.5:0.95,” which stands for, “Mean Average Precision over different IoU thresholds, from 0.5 to 0.95.” And yes, that sounds very confusing, and not intuitive at all. To understand this metric requires a more in-depth knowledge of object detection model evaluation metrics in general. For example, you must be familiar with concepts such as precision, recall, precision-recall curve, intersection over union, and “plain” average precision. In case you are interested, I included a link to a blog post explaining these concepts on the Circuit Cellar Article Materials and Resources webpage.

Nowadays, “mAP 0.5:0.95” is generally the preferred metric for evaluating object detection models, so this is the metric I used to evaluate my newly trained model. In the end, “mAP 0.5:0.95” is just a score between 0 and 1 that tells us how good our model is at detecting objects.



After enhancing the dataset, I followed the same process described in Part 1 to re-train my model [1]. It is the same process described in Seeed’s official documentation [2]. I trained the model seven times with 100, 200, 300, 400, 500, 600, and 1,000 epochs, similarly to what I did in Part 1. (Note: In machine learning, an “epoch” is one complete pass of the training algorithm through the entire training dataset.) However, after training with 600 epochs, I noticed that mAP 0.5:0.95 was still increasing moderately; so, for the next training session, I went from 600 right up to 1,000 epochs. Nevertheless, the training process automatically stopped at 893 epochs, because there was no further improvement in the model. The training code is configured by default to stop the process if no improvement is observed over the last 100 epochs.

The mAP 0.5:0.95 versus number of training epochs from 200 to 893 epochs is shown in Figure 2. By checking the Comma Separated Values (CSV) file generated in the training process (which I also talked about in Part 1), I learned that the best result was observed at epoch 792. And that’s the state of the model exported as final after the training stopped. Still, as shown in Figure 2, chart “f,” the best mAP I got was less than 0.45, which is still a bit low. Obviously, there’s a lot of room for improvement.

Figure 2
mAP 0.5:0.95 vs. epoch graphs from 200 to 893 training epochs.

Figure 3 is the circuit schematic both for the ESP32-based audio player/web server and the object recognition sub-systems. Beginning with the ESP32-based audio player/web server, you can see that the required hardware to reproduce sound is minimal, and yet yielded good results in my experience. To reproduce the sound waves, I used a passive 4Ω computer speaker with the MAX98357A I2S amplifier. To my surprise, it sounded loud enough for the purpose of this project. The IC is very small and yet provides reasonable output power. The ESP32 audio player/web server prototype is shown in Figure 4. I’m powering the board using a USB cell phone charger.

Figure 3
Dog detection and deterrence circuit schematic.
Figure 4
ESP32 audio player/web server.

To play the audio files, the “AudioTools” Arduino library is used, along with the “CodecMP3Helix” Arduino MP3 codec library. Before attempting to compile the source code, these two libraries must first be installed. (See Circuit Cellar’s Article Materials and Resources webpage for the respective download links.) Testing the ESP32-based audio player in isolation from the rest of the system is easy, and it is recommended to test this sub-system first, before attempting to test the object recognition one. Before compiling and flashing the code, remember to change the following code lines with your own data:

const char* ssid = "MyWiFi";            // Your WiFi name
const char* password = "MyCatKnowsC++"; // Your WiFi password

// Static IP configuration:
IPAddress local_IP(192, 168, 0, 14);    // Static IP to request from the router
IPAddress gateway(192, 168, 0, 1);      // Corresponding gateway
IPAddress subnet(255, 255, 0, 0);       // Corresponding subnet mask
IPAddress primaryDNS(8, 8, 8, 8);       // Optional
IPAddress secondaryDNS(8, 8, 4, 4);     // Optional

The “local_IP” is the static IP address you want to assign to the audio player/web server. You must choose an IP from your router’s address pool that’s not reserved or used by any other device. It is important to have the IP configured as “static,” because this is the IP address that will also be configured in the Wio Terminal’s source code, so that it can successfully reach the audio player/web server. Otherwise, the Wio Terminal will never find the ESP32 web server to request the corresponding audio playback.

Once the circuit has been built and the code flashed to the ESP32, open a web browser and point it to the audio player/web server’s IP address. Using the above example, this address is “”. The webpage served by the ESP32 will look like the one shown in Figure 5. This is a simple HTML page to test the web server functionality. Clicking the first link plays a randomly chosen MP3 audio file through the speaker and turns on the ESP32’s built-in LED. Clicking the second link turns the built-in LED off; if it isn’t turned off manually, the LED turns off automatically once the audio file has finished playing.

Figure 5
Webpage served by the ESP32 web server.

Listing 1 is an excerpt of the Arduino code for the ESP32 audio player/web server. This code is based on a straightforward example of an Arduino web server, plus code to read and play MP3 audio files using the “AudioTools” Arduino library. In the listing, I omitted most of the Wi-Fi configuration and setup, because it is a common procedure for Arduino-based web servers.



Listing 1
Arduino code for the ESP32 audio player/web server.

1. #include <WiFi.h>
2. #include "AudioTools.h"
3. #include "AudioCodecs/CodecMP3Helix.h"
4. // ...
5. const byte url_len = 50; // At least the length of the longest URL, plus 1
6. const char audio_file_array[][url_len] = {"http://<my-file-server-domain-or-ip>/talk1.mp3",
7.                                     "http://<my-file-server-domain-or-ip>/talk2.mp3",
8.                                     "http://<my-file-server-domain-or-ip>/talk3.mp3"};
9. // ...
10. void setup()
11. {
12.   // ...
13.   AudioLogger::instance().begin(Serial, AudioLogger::Info);
14.   auto config = i2s.defaultConfig(TX_MODE);
15.   config.pin_ws = 15;
16.   config.pin_bck = 14;
17.   config.pin_data = 22;
18.   i2s.begin(config);
19.   // Set up I2S based on the sampling rate provided by the decoder
20.   dec.setNotifyAudioChange(i2s);
21.   dec.begin();
22. }
24. void loop(){
25.   WiFiClient client = server.available();   // Listen for incoming clients
27.   if (client) {                             // If you get a client,
28.     String currentLine = "";                // String to hold incoming data from the client
29.     while (client.connected()) {            // loop while the client is connected
30.       if (client.available()) {             // if there are bytes to read from the client,
31.         char c =;             // read a byte, then
32.         Serial.write(c);                    // print it out to the serial monitor
33.         if (c == '\n') {                    // if the byte is a newline character
34.           if (currentLine.length() == 0) {  // Send HTTP header
35.             client.println("HTTP/1.1 200 OK");
36.             client.println("Content-type:text/html"); client.println();
37.             // Simple web page to test the server:
38.             client.print("Click <a href=\"/play\">here</a> to play audio.<br>");
39.             client.println("Click <a href=\"/led_off\">here</a> to turn the LED off.<br>");
40.             break; // break out of the while loop
41.           } else { currentLine = ""; }
42.         } else if (c != '\r') { currentLine += c; }
44.         // Check to see if the client request was "GET /play" or "GET /led_off":
45.         if (currentLine.endsWith("GET /play")) {
46.           digitalWrite(LED_BUILTIN, HIGH);  // GET /play turns the LED on and plays an audio file
47.           if(!is_playing) { // Only play a new audio file if no other is currently playing
48.             is_playing = true;
49.             unsigned int file_index = random(0, num_audio_files); // Randomly get an audio file index
50.             url.begin(audio_file_array[file_index], "audio/mp3");
51.           }
52.         }
53.         if (currentLine.endsWith("GET /led_off")) {
54.           digitalWrite(LED_BUILTIN, LOW); // GET /led_off turns the LED off
55.         }
56.       }
57.     }
58.     client.stop(); // Close connection
59.     Serial.println("Client Disconnected.");
60.   }
62.   copier.copy(); // Get more audio data from the audio file
63.   int avail = copier.available();
64.   if (avail == 0) { // The end of the audio file has been reached
65.     is_playing = false;
66.     digitalWrite(LED_BUILTIN, LOW);
67.   }
68. }

Lines 5-8 show how I set up the list of MP3 files to be played back. I’m using just three audio files with different messages, but more can be added. The audio files contain recorded phrases, such as, “Hey Rigby, what are you doing?” “Go and fetch your angry bird toy,” “Good boy!” and so on. I used Audacity software to record my audio warnings with a microphone, and exported them to MP3 format at 64kbps. At that bit rate the stream takes about 8KB per second, so the largest file, at around 73KB, holds roughly nine seconds of speech; despite the relatively low bit rate, the final reproduced audio quality is quite good. To add more audio files to the playback list, you just have to add their corresponding URLs to the array shown in line 6. Of course, you must add the MP3 files to the local file server as well.

Lines 13-21 show how to configure the operation of the MAX98357A IC with the “AudioTools” library. Lines 15-17 in particular configure the ESP32 pins that will be used to do the I2S interfacing. These are the same pins shown in the circuit schematic in Figure 3.

Lines 24-68 show the program’s main infinite loop. This web server implementation is mostly a copy of the “SimpleWiFiServer” example included in the “WiFi” ESP32 Arduino library, which is installed along with the “Arduino-ESP32” board support package (see Circuit Cellar’s Article Materials and Resources webpage). So, I won’t explain in detail how the HTTP server works; please refer to that example for the basics.

Going forward, lines 35-36 send the response HTTP header to the web client that originated the request. Lines 38-39 send the HTML code for the webpage shown in Figure 5. Lines 45-55 catch the HTTP GET request received from the webpage, each time any of the two links are clicked. If a “GET /play” HTTP request is received (lines 45-52), a random index number (“file_index”) is generated in line 49 to index into the “audio_file_array” defined in lines 6-8.

Then, the AudioTools url.begin() function is called to request the randomly selected audio file from the local file server (see line 50). The first argument to this function is a string containing the audio file’s URL. The second argument is just the file type (“audio/mp3”) to make the appropriate HTTP request to the local file server. The audio file in question will be downloaded by this function from the local file server, and played back. The Wio Terminal will send this exact type of request every time a dog is detected.

In contrast, if the “GET /led_off” HTTP request is received (lines 53-55), the built-in LED is turned off. This part of the implementation is just for debugging purposes, and is not required for the project’s main functionality. Because the MP3 file doesn’t need to be downloaded completely before playback can start, line 62 fetches additional data bytes from the incoming MP3 stream to continue playback. Once there are no more incoming data bytes, playback stops automatically. Lines 63-67 check for this condition and flag that playback has ended.


To test the trained machine learning model, the “UF2” model file resulting from the training process must be flashed to the Vision AI module [1]. To do this, connect the module to the computer with a USB cable. It will get enumerated by the operating system as if it were a USB flash drive. Copy the file to the module the same way you copy files to a USB drive. After the Vision AI is loaded with the model file, disconnect it from the computer and attach it to the Grove port in the Wio Terminal. The circuit schematic for the object detection sub-system is shown in Figure 3, and Figure 6 is a photo of the object detection sub-system with the Vision AI and an external LED module attached to the Wio Terminal controller. Here, I’m using another USB cell phone charger to power up the sub-system.

Figure 6
Object detection sub-system.

Next, connect the Wio Terminal to the PC and flash the “object_detection_wifi_client.ino” Arduino code. A download link to all source files is available on the Circuit Cellar Article Code and Files webpage. Before flashing the code, be sure to change the following lines in the source code with your own information:

const char* ssid = "MyWiFi";            // Your WiFi name
const char* password = "MyCatKnowsC++"; // Your WiFi password

const char* host = "";      // Audio player/web server's IP address

The “host” address above is the static IP address assigned previously to the ESP32-based audio player/web server. After flashing the code, open the serial terminal window in the Arduino IDE to see the debugging messages from the Wio Terminal. It will try to connect to the Wi-Fi router, receive detection data from the Vision AI module, and request that the ESP32 web server play an audio file every time a dog is detected.

For debugging purposes, the Vision AI module can also be connected to a PC with a USB cable, to monitor the detection process in a web browser [1]. Figure 7 is a screenshot of how it looks. The text in the lower section is from the Arduino IDE’s serial terminal, which shows the Wio Terminal’s output.

Figure 7
Vision AI detection in a web browser.

Listing 2 gives the Arduino code for the Wio Terminal, which was discussed previously in Part 1 of this article [1]. Most of the code is an adaptation of the “object_detection” example that comes with the “Seeed_Arduino_GroveAI” library installation. I just added web client capabilities to send HTTP requests to the ESP32 audio player/web server.

Listing 2

Arduino code for the object detection sub-system’s Wio Terminal.

1. void loop() {
2.   if (state == 1) {
3.     uint32_t tick = millis();
4.     if (ai.invoke()) { // Begin invoke
5.       while (1) { // Wait until invoke is finished
6.         CMD_STATE_T ret = ai.state();
7.         if (ret == CMD_STATE_IDLE) { break; }
8.         delay(20);
9.       }
11.       uint8_t len = ai.get_result_len(); // Receive # of detections
12.       if (len) {
13.         digitalWrite(LED, HIGH); // Turn external LED on
14.         Send_Request(); // Send HTTP GET request to web server
16.         int time1 = millis() - tick;
17.         Serial.print("Time elapsed: "); Serial.println(time1);
18.         Serial.print("Number of dogs: "); Serial.println(len);
19.         object_detection_t data; // Get data
21.         for (int i = 0; i < len; i++) {
22.           Serial.println("result:detected");
23.           Serial.print("Detecting and calculating: "); Serial.println(i+1);
24.           ai.get_result(i, (uint8_t*)&data, sizeof(object_detection_t)); // Get result
26.           Serial.print("confidence:"); Serial.println(data.confidence);
27.         }
28.      }
29.      else {
30.        Serial.println("No identification"); digitalWrite(LED, LOW); // Turn external LED off
31.      }
32.     }
33.     else {
34.       delay(1000); Serial.println("Invoke Failed.");
35.     }
36.   }
37.   else { state = 0; }
38. }
40. void Send_Request() {
41.   Serial.print("Connecting to "); Serial.println(host);
42.   // Use the WiFiClient class to create TCP connections
43.   WiFiClient client;
44.   if (!client.connect(host, port)) {
45.       Serial.println("Connection failed.");
46.       Serial.println("Waiting 2 seconds before retrying...");
47.       delay(2000);
48.       return;
49.   }
50.   client.print("GET /play HTTP/1.1\n\n"); // Send HTTP GET request
52.   // Wait for the server's reply to become available
53.   int maxloops = 0;
54.   while (!client.available() && maxloops < 1000) {
55.       maxloops++; delay(1); // Delay 1 msec
56.   }
57.   if (client.available() > 0) {
58.       String line = client.readString(); // Read the server response
59.       Serial.println(line);
60.   } else {
61.       Serial.println("client.available() timed out");
62.   }
63.   Serial.println("Closing connection.");
64.   client.stop();
65. }

When the system detects a dog, it turns on the external LED connected to the Wio Terminal GPIO header. Then, the Wio Terminal sends an HTTP GET request to the ESP32 audio player/web server to play an audio file (see lines 13-14). The ESP32 receives the request, selects an audio file, and plays it through the MAX98357A I2S audio amplifier.

Lines 40-65 contain the Send_Request() function definition that sends the HTTP GET requests. If you have previous experience implementing HTTP servers and clients with the Arduino platform, you will not notice anything unusual or complicated in this function. The function basically sends the request and prints the server’s response to the serial terminal. Note that line 50 is in charge of sending the “GET /play” request. As you may recall from Listing 1, the “/play” Uniform Resource Identifier (URI) is the one that triggers the reproduction of an MP3 audio file on the ESP32 server. You can confirm this by referring again to lines 45-52 from Listing 1.


The main purpose of building this prototype was to see how easily and reliably it would perform object detection using the SenseCAP K1100 Kit—specifically with the Vision AI module and the Wio Terminal controller. Training and deploying models with the Vision AI module is indeed very easy. Moreover, the Wio Terminal is a great all-in-one controller that embeds a relatively powerful MCU with wireless connectivity, an LCD screen, push buttons, and some onboard devices, such as an accelerometer, microphone, speaker, micro SD card slot, and a 40-pin GPIO header, for connecting even more off-board sensors and actuators.

To obtain good object detection performance from the Vision AI module, it is critical to train the machine learning model appropriately. And that’s the most time-consuming and hardest part, as is generally true with most machine learning applications. Normally, obtaining a good model requires crafting a very good dataset and training the model many times with many parameter variations, until good precision is obtained. Apparently, this gets harder when the final model will be integer-quantized to run on an MCU. Getting a good model in this context normally requires reasonable experience and practice in model training.

It is pretty amazing what can be done these days with machine vision, microcontrollers, and wireless communication. A couple of decades ago, I managed to send some UDP packets from a Microchip PIC16F876 MCU to a web server using a dial-up modem. Seeing that work was mesmerizing. If someone had told me then what would be possible two decades later with a microcontroller, a tiny video camera, and wireless communication, I wouldn’t have believed it!

[1] Raul Alvarez Torrico, “Prototype AI Apps with SenseCAP K1100 (Part 1: Model Training and Deployment),” Circuit Cellar 395, June 2023.
[2] “Train and Deploy Your Own AI Model Into Grove – Vision AI,” Seeed Studio documentation.
[3] Roboflow “Universe” website.

Launch: Cloning Images from Open Source Datasets

Dog Detection SenseCAP Dataset

Arduino Audio Tools Library

A MP3 and AAC Decoder Using Helix

Installing the ESP32 Board in Arduino IDE (Windows, Mac OS X, Linux)

Seeed Studio – SenseCraft

What is Mean Average Precision (mAP) in Object Detection?

Best Fitness

Espressif Systems
Seeed Studio

Code and Supporting Files



Raul Alvarez Torrico has a B.E. in electronics engineering and is the founder of TecBolivia, a company offering services in physical computing and educational robotics in Bolivia. In his spare time, he likes to experiment with wireless sensor networks, robotics, and artificial intelligence. He also publishes articles and video tutorials about embedded systems and programming in his native language (Spanish) at his company’s web site.

Copyright © KCK Media Corp.
All Rights Reserved

