The Scouting Owl Project
A rescue robot can play an important role in searching for endangered individuals and studying hazardous conditions. Learn how these Cornell students built a Raspberry Pi 4-based robot that allows rescuers to detect and communicate with endangered individuals in an environment that is not yet suitable for in-person rescue.
In incidents such as fire, earthquake, flood, gas leakage or any environment that has potential threats to the life of a rescuer, a rescue robot can play an important role in searching for endangered individuals and studying hazardous conditions. For increased safety of first responders, an autonomous robot that can detect endangered individuals, avoid obstacles, scout the environment and communicate with endangered individuals in extreme environments would be a desirable safety improvement.
SYSTEM OVERVIEW
Scouting Owl is a robot that allows the rescuers to detect and communicate with endangered individuals in an environment that is not yet suitable for in-person rescue. To perform the critical rescue task, we built a robot based on a Raspberry Pi 4, with the following functions: 1) a reliable half-duplex connection between the robot and the control device; 2) fetching and displaying live IR camera feed along with sensor data on the display device; 3) remote control of the robot through an Android App; and 4) mapping the surrounding area (Figure 1).
HARDWARE DESIGN
The diagram in Figure 2 shows the wiring of our system. Sensors and actuators include two DC motors, a flame sensor, a water level sensor, an infrared camera, an ultrasonic sensor, an electromagnetic pulse sensor, a temperature sensor, a DIY radiation sensor, a speaker and a microphone. We used both 5V and 3.3V to power the sensors, depending on their operating voltage. For sensors powered at 5V from the Pi, we used resistor dividers on their outputs to protect the GPIO pins on the Raspberry Pi. A 9V battery was used to power the motors. The main components used in the project are listed in Table 1.
System block diagram that shows the relationships among different system elements.
To get a live feed of our surroundings, we used the Adafruit AMG8833 IR Thermal Camera. This camera module has 12 pins, of which we used only four. To get data from the camera, we used the I2C communication protocol, and we installed the “adafruit_amg88xx” library to use the module. Once running, the module returns an array of 64 individual infrared temperature readings over I2C, which we later present as an 8×8 grid. The module can measure temperatures ranging from 0°C to 80°C with an accuracy of ±2.5°C, making it ideal for our application, and it can detect a human from a distance of up to 7m. We used the SciPy Python library to interpolate the 64 (8×8) readings into a smoother image. We used Fusion 360 software to CAD an owl-shaped mount that holds the IR camera securely and enhances the aesthetic of the robot (Figure 3).
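The snippet below is a minimal sketch of that read-and-interpolate step, assuming the standard Adafruit CircuitPython driver and an arbitrary 32×32 output grid; the interpolation settings are illustrative, not the project's exact code.

```python
# Sketch: read the AMG8833 over I2C and interpolate the 8x8 grid with SciPy.
import board
import busio
import numpy as np
from scipy.interpolate import griddata
import adafruit_amg88xx

i2c = busio.I2C(board.SCL, board.SDA)
amg = adafruit_amg88xx.AMG88XX(i2c)

# The module returns 64 temperatures as an 8x8 list of lists (degrees C).
raw = np.array(amg.pixels)                       # shape (8, 8)

# Interpolate onto a finer 32x32 grid for display (size chosen arbitrarily).
ys, xs = np.mgrid[0:7:8j, 0:7:8j]
points = np.column_stack((ys.ravel(), xs.ravel()))
fine_y, fine_x = np.mgrid[0:7:32j, 0:7:32j]
smooth = griddata(points, raw.ravel(), (fine_y, fine_x), method="cubic")
print(smooth.shape)                              # (32, 32)
```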
The 3D-printed IR camera mount.
We used the flame sensor to detect any fire nearby; if fire is found, the rescuers are alerted on the display device. By detecting the infrared emitted by a flame, this sensor gives a digital output indicating whether a fire is present. To receive the output, we used an edge-triggered callback function, because it saves precious CPU cycles.
Similarly, we used the water sensor to identify the presence of water. The sensor is a long variable resistor whose resistance decreases as the water level rises. Because our robot is an electronic device, it is prone to short circuits when it comes in contact with water. The water level sensor therefore keeps the robot from driving into a puddle, and helps it identify areas with shallow flooding. Like the flame sensor, this sensor has a digital output indicating whether water is present, and as with the flame sensor, we used an edge-triggered callback function in our program (a minimal sketch of both callbacks follows).
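The sketch below shows edge-triggered callbacks with the RPi.GPIO library; the pin numbers and the active-low assumption are placeholders, not the project's actual wiring.

```python
# Sketch: interrupt-driven reads of the flame and water sensor digital outputs.
import time
import RPi.GPIO as GPIO

FLAME_PIN = 17   # placeholder BCM pin numbers
WATER_PIN = 27

GPIO.setmode(GPIO.BCM)
GPIO.setup(FLAME_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(WATER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

fire_detected = False
water_detected = False

def flame_callback(channel):
    # Many flame modules pull their output low when a flame is detected.
    global fire_detected
    fire_detected = GPIO.input(channel) == GPIO.LOW

def water_callback(channel):
    global water_detected
    water_detected = GPIO.input(channel) == GPIO.LOW

# Edge-triggered: the CPU is free until the sensor output actually changes.
GPIO.add_event_detect(FLAME_PIN, GPIO.BOTH, callback=flame_callback, bouncetime=200)
GPIO.add_event_detect(WATER_PIN, GPIO.BOTH, callback=water_callback, bouncetime=200)

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    GPIO.cleanup()
```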
Initially, while testing the flame sensor, we used a piezoelectric barbecue lighter. We discovered that the lighter triggered both the flame and the water sensors, all for the wrong reasons. After debugging for hours, we found that the lighter generated thousands of volts for a short period, which produced EM waves. These waves were picked up by the wires acting as antennas, which triggered the interrupts. We then added a long wire connected to a free GPIO pin to take advantage of this phenomenon: the wire acts as an antenna and can detect such bursts of radiation. To test our hypothesis, we connected the wire to an oscilloscope and confirmed that it does detect EM discharges. When we detect a pulse on the EMP wire, we simply ignore all other sensors for a few seconds, because their values could all be disrupted; afterward, the values are reset correctly.
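That blanking-window idea might be sketched as follows; the GPIO pin number and the three-second window are assumptions for illustration only.

```python
# Sketch: ignore other sensors for a few seconds after an EMP burst is seen.
import time
import RPi.GPIO as GPIO

EMP_PIN = 22            # placeholder pin for the antenna wire
BLANK_SECONDS = 3       # assumed blanking window
last_emp_time = 0.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(EMP_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def emp_callback(channel):
    global last_emp_time
    last_emp_time = time.time()

GPIO.add_event_detect(EMP_PIN, GPIO.RISING, callback=emp_callback, bouncetime=50)

def sensors_valid():
    # Other sensor readings are discarded until the window has elapsed.
    return (time.time() - last_emp_time) > BLANK_SECONDS
```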
To measure the temperature, we used an LM393-based temperature sensor module. Since the Raspberry Pi has no analog input pins, we used an MCP3008 to do the analog-to-digital conversion. It is an 8-channel, 10-bit analog-to-digital converter: “8-channel” means it can accept up to eight different analog voltages, and the 10-bit figure is the resolution of the ADC, or the precision to which it can measure a voltage. Using SPI, the chip sends the raw ADC value to the Raspberry Pi, which then converts it to the corresponding temperature using a conversion expression.
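Here is a hedged sketch of the SPI read and conversion using the spidev library; because the article's exact conversion expression is not reproduced above, temp_from_volts() below is only a placeholder to be replaced with the calibration for the actual module.

```python
# Sketch: read a raw 10-bit value from the MCP3008 over SPI and convert it.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                     # SPI bus 0, chip-select 0
spi.max_speed_hz = 1_350_000

def read_adc(channel):
    """Return the 10-bit raw reading (0-1023) from one of the 8 channels."""
    assert 0 <= channel <= 7
    # Start bit, then single-ended mode + channel, then a don't-care byte.
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

def temp_from_volts(volts):
    # Placeholder conversion; the project's actual expression is not shown here.
    return volts * 100.0

raw = read_adc(0)
volts = raw * 3.3 / 1023.0
print(f"raw={raw} volts={volts:.3f} temp~{temp_from_volts(volts):.1f} C")
```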
The HC-SR04 ultrasonic sensor was used to sense obstacles around the robot. This sensor has a range of about 2m, beyond which its accuracy decreases rapidly. The purpose of this sensor is to avoid collisions and map the area around the robot. We used a small voltage divider breakout circuit to step the sensor's 5V output down to 3.3V, since 3.3V is the maximum input voltage for the Raspberry Pi's GPIO pins. We also modularized the voltage divider portion of the ultrasonic sensor circuit, using a protoboard to provide both mechanical and electrical connections to the breadboard.
In a rescue event, being able to communicate with the endangered individual is crucial. Hence, we added a Bluetooth speaker that can be controlled directly from the Python script. A tiny USB microphone is also included to capture speech from the endangered individual. To play or record audio, the rescuer simply taps the corresponding button on the screen.
The microphone lets the endangered individual tell the rescuers about any dangerous situation or special needs that the robot cannot capture on its own, whereas the speaker plays pre-recorded messages that allow the rescuers to give necessary instructions, and can also replay the message recorded via the microphone. The pre-recorded messages include an assurance that help is on the way, directions to the nearest exit, and a prompt to speak up so that the robot can start recording the person's voice. All these commands can be accessed from the Android application with the touch of a button. When the rescuer presses a button, either to play a pre-recorded message or to prompt the endangered individual to start speaking, the robot receives the corresponding command and stops in place to play the message or record the voice.
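As a rough illustration, playback and recording could be driven from Python with the standard ALSA command-line tools; the file names and duration below are placeholders, and the project's actual audio code may differ.

```python
# Sketch: play a pre-recorded message and record a reply via aplay/arecord.
import subprocess

def play_message(path="help_is_coming.wav"):
    # Blocks until the clip finishes; the robot stays in place while it plays.
    subprocess.run(["aplay", path], check=True)

def record_reply(path="reply.wav", seconds=10):
    # Record from the default USB microphone for a fixed duration.
    subprocess.run(["arecord", "-d", str(seconds), "-f", "cd", path], check=True)
```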
SOFTWARE DESIGN
The software design (Figure 4) bridges the robot and the rescuer, while also containing programs to interact with all the peripherals. It consists of two main parts: the Python program on the Raspberry Pi, and the Android application on the rescuer device. The Python program collects all the sensor data, compiles it into actual readings, packages it, and transfers it to the Android GUI. Meanwhile, the rescuer can also send instructions to the robot through the GUI.
Flow diagram for the Python program, which bridges the robot and the rescuer. It also contains programs to interact with all the peripherals.
Running on the Raspberry Pi, the Python program creates a TCP/IP server and opens up a socket for communication. After the socket is created, it listens and waits for clients to connect to it. Once a client is connected to the server, the main Python program resumes and starts executing the main function of the code.
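A minimal sketch of that server setup is shown below; the port number is a placeholder, since the article does not specify one.

```python
# Sketch: create a TCP/IP server socket and block until the client connects.
import socket

HOST = ""        # listen on all interfaces
PORT = 8000      # placeholder port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)
print("Waiting for the Android client...")
conn, addr = server.accept()          # blocks until the app connects
print("Client connected from", addr)
```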
During each cycle of execution, the Python script collects data from the IR camera, the water level sensor, the fire sensor, the radiation sensor, the temperature sensor and the ultrasonic sensor, and bundles these values into a single data packet that is sent to the client. Each packet of multiple sensor values is terminated with a semicolon, and within each packet, individual sensor values are comma-delimited. For example, if the robot detects a fire but not water, the data string would be formatted as: “Fire,No Water;”. This ensures that the client can safely separate the various sensor values without having to receive them one at a time. Once the data packet is transmitted, the server waits for the client to issue commands in a blocking manner. While waiting, the server (the Raspberry Pi on the robot) continues its current assignment until the client issues a new command. As soon as a string is received, the server decodes it and issues an appropriate response based on the client's command.
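The packet format can be illustrated with a short sketch; the field order beyond the fire/water example in the text is an assumption.

```python
# Sketch: build a comma-delimited, semicolon-terminated sensor packet on the
# robot, and split it apart again on the client side.
def build_packet(fire, water, temperature_c, distance_cm):
    fields = [
        "Fire" if fire else "No Fire",
        "Water" if water else "No Water",
        f"{temperature_c:.1f}",
        f"{distance_cm:.1f}",
    ]
    return ",".join(fields) + ";"

def parse_packet(packet):
    return packet.rstrip(";").split(",")

pkt = build_packet(True, False, 24.3, 87.5)
print(pkt)                     # Fire,No Water,24.3,87.5;
print(parse_packet(pkt))       # ['Fire', 'No Water', '24.3', '87.5']
# On the robot, the packet would then be sent with conn.sendall(pkt.encode()).
```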
The Python program running on the server is also responsible for converting the sensor data into a human-readable format. This process varies from sensor to sensor, depending on the communication protocol used to fetch the data and on the sensor's properties. For instance, the ultrasonic sensor uses the time of flight of sound to measure distance. The Raspberry Pi sends a 0.01ms pulse to the trigger pin of the ultrasonic sensor, and takes input from the echo pin. The program then uses the time difference between the trigger output and the echo input to calculate the distance traveled by the sound wave out and back after reflecting from the detected object. With speed and time, the script determines, quite accurately, the distance of the sensor from the object.
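A minimal sketch of that time-of-flight measurement follows, with placeholder trigger and echo pins (the echo signal is assumed to arrive through the voltage divider described earlier).

```python
# Sketch: HC-SR04 distance measurement by timing the echo pulse.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # placeholder BCM pins
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def measure_distance_cm():
    # 10 microsecond (0.01 ms) trigger pulse.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Time how long the echo pin stays high.
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    elapsed = end - start
    # Sound travels out and back, so divide by 2; speed of sound ~34300 cm/s.
    return elapsed * 34300 / 2
```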
Mapping was a challenging feature to implement on this robot, due to limited access to high-precision sensor information. Without any localization sensors, such as an IMU or GPS, there is no way for the robot to accurately track its pose. Therefore, we decided to fix the robot at the local origin, so that we have a stable reference for the distance-sensor data. To obtain the robot's pose, we made the robot turn in place through a complete circle with a fixed number of sample points, so that the robot is always at (0,0) and the angle of its pose can be calculated using the linspace function. Figure 5 shows the two coordinate systems. The robot's origin is at its center, and the sensor's coordinates are [10.8, 0], since the robot has a length of 21.6mm. The coordinates of a data point are then [10.8+d, 0], in which d is the measured distance. Finally, these coordinates can be transformed into the global coordinate system and plotted.
The two coordinate systems used during mapping.
With the robot's pose and the coordinates of the measurement point in the robot frame, we are able to convert those coordinates into the global frame using a rotation matrix, and map all the distance measurements into the global coordinate frame. At this point, a rough map is generated, so the rescuer can gain a better understanding of the rescue environment remotely.
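The scan-and-transform step might look roughly like the sketch below; the number of sample points is an assumption, while the 10.8 sensor offset follows the text.

```python
# Sketch: robot fixed at (0, 0) turns in place, takes one ultrasonic reading
# per heading, and rotates each sensor-frame point into the global frame.
import numpy as np

NUM_SAMPLES = 36          # assumed number of headings per full turn
SENSOR_OFFSET = 10.8      # sensor sits 10.8 ahead of the robot's center

def scan_to_global(distances):
    """distances: one reading per heading, taken while turning in place."""
    angles = np.linspace(0, 2 * np.pi, len(distances), endpoint=False)
    points = []
    for theta, d in zip(angles, distances):
        # Point in the robot frame: straight ahead of the sensor.
        local = np.array([SENSOR_OFFSET + d, 0.0])
        # Rotate into the global frame by the robot's heading.
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        points.append(rot @ local)
    return np.array(points)           # N x 2 array of map points to plot
```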
THE ANDROID APPLICATION
In conjunction with the Python script running on the Raspberry Pi server, we also developed an Android application acting as the client, to allow the rescuers to control the robot remotely. When the application starts up, it searches for the server with the predefined credentials on the connected network. Once it finds the server and connects to it, the application waits for the sensor data and the IR camera feed from the robot. As soon as the robot sends data to the application, the application comes alive, and the rescuers can control the robot using various buttons. The process flow of the application is illustrated in Figure 6.
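The real client is the Android application, but the same connect, receive, and command loop can be sketched in Python for clarity; the IP address, port, and command string below are placeholders.

```python
# Sketch: client-side loop mirroring what the Android app does.
import socket

ROBOT_ADDR = ("192.168.1.50", 8000)   # placeholder IP and port

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(ROBOT_ADDR)

buffer = ""
while True:
    data = client.recv(1024)
    if not data:
        break                          # server closed the connection
    buffer += data.decode()
    if ";" in buffer:                  # a complete sensor packet has arrived
        packet, buffer = buffer.split(";", 1)
        print("Sensors:", packet.split(","))
        client.sendall(b"FORWARD")     # placeholder command string
```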
The Android application has a minimalist design (Figure 7). At the center of the screen, it shows the IR feed from the robot, which updates at a 10Hz refresh rate. Adjacent to the IR camera display is the sensor data display panel. This panel is used to view the data received from all the sensors on the rescue robot. The Android application also has 10 control buttons: four direction control buttons to move the robot around, two speed-control buttons to speed up or slow down the robot, three audio control buttons to get audio input and output, and a map button to start the mapping process on the robot.
One of the challenges we faced while implementing the aforementioned scheme was that a typical commercially available home router blocked the TCP/IP packets we wanted to transmit, preventing a connection between the server and the client from being established. To tackle this problem, we simply used an open Wi-Fi network (like RedRover), which doesn't block the TCP/IP packets.
VERIFICATION
To test the robustness and reliability of the robot, we went through a series of dedicated unit tests and then integration tests to weed out potential bugs and sources of errors. We started with really simple tests for all the sensors and modules. For instance, one of the first tests involved checking to see if the robot responded to the commands sent out by the Android application. We established the TCP/IP connection and used the console to check if all the commands were received as intended.
Once that was working fine, we integrated individual components on the robot and tested their functionality. We started with the IR thermal camera display, and tried to get the live feed on the Android application. Our first approach was to capture a camera feed on the Raspberry Pi, convert the image frames into an array of pixels, and send those pixel values to the Android application. However, testing showed that this approach was far too slow to be feasible. The solution was to send only the raw sensor values over the connection, and process those values directly in the application. This worked remarkably well.
We used a flint spark lighter and brought the flame close to the IR camera (Figure 8). The camera was able to pick up the changes in thermal reading immediately, and the display on the Android application updated accordingly. Later on, when conducting integration tests for the flame sensor and the IR thermal camera, they worked perfectly with one another and were able to detect the lighter flame even from a distance of 1m.
Android application interface with control buttons and sensor data.
We also tested each of the remaining sensors, including the water sensor and the ultrasonic sensor, individually, by sending their data values to the Android application. Once all the individual sensors had been tested, we integrated everything and connected the motors to test the robot as a whole, especially while it was moving. The robot behaved exactly as we wanted, apart from a few hiccups such as loose wires, which weren't too difficult to rectify.
CONCLUSION
It was satisfying to demonstrate the integrated features of the robot, and to realize that the user interface worked well. The rescuer can remotely control the robot with a smartphone over a Wi-Fi network. The Scouting Owl keeps streaming its surroundings with the IR camera, and constantly updates sensor information that benefits the rescuer, including temperature, water level, and flame. It can even generate a clear local map with only the ultrasonic sensor, and can establish conversations with the endangered individual. All the features mentioned are well integrated into the concise Android application.
Overall, our project covers a wide range of topics, and requires skillsets from computer science (Application development, TCP/IP communication), electrical engineering (sensors, embedded system programming), and mechanical engineering (CAD, 3D printing).
Along with our accomplishments, we learned a lot from getting stuck. We discovered that communication between the Android device and the Raspberry Pi cannot be established on a private Wi-Fi network whose router blocks the packets. We also encountered many broken libraries when trying to integrate sensors, which suggests that a better solution might be to use a microcontroller for sensor interfacing and establish communication with it. Learning about the limitations and advantages of a Raspberry Pi will help us make better design choices in future projects.
FUTURE SCOPE
Although the prototype Scouting Owl robot runs well, we can still improve this initial design. First, with the 8-channel ADC, we can include more analog sensors that give us more information about the environment; for example, we can add photoresistors, a humidity sensor, and a motion detector. Second, as mentioned above, the Raspberry Pi is not ideal for directly interfacing with sensors, so using a microcontroller as a data acquisition board and communicating its sensor readings to the Pi would be a cleaner, more professional solution. Third, we can enhance mapping by using a GPS or IMU to collect information, and then use an Extended Kalman Filter to perform localization for the robot; in this way, we could create a much larger map. To map with higher precision, we could use an Intel RealSense depth sensor, which provides nine depth measurements from nine different angles at each time interval. Finally, we can also implement a feedback mechanism for each motor, so that they are closed-loop controlled. To implement that, we can connect an encoder to the motor shaft and tune it with a control algorithm such as PID. A closed-loop controlled motor moves more gradually, and would allow the sensors to collect more accurate data. CC
Additional materials from the author are available at:
www.circuitcellar.com/article-materials
RESOURCES
Adafruit | www.adafruit.com
Processing for Android | https://android.processing.org
Raspberry Pi | www.raspberrypi.com
SparkFun | www.sparkfun.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MAY 2022 #382