Testing the Robotic Car
In the third and final part of this article series, Raul discusses how to test the hardware and software of the robotic car he built in Part 2, and how to run the robotic car remotely from the Ubuntu PC he set up in Part 1.
Now it’s time to describe how to test the hardware and software of the robotic car we built in Part 2 (Circuit Cellar 369, April 2021) [1]. I’ll begin by discussing how to test the robotic car by running the ROS nodes directly from the Raspberry Pi’s desktop, to check that the hardware and software are working correctly. Next, I detail how to run the robotic car remotely, headless, from the Ubuntu PC we set up in Part 1 (Circuit Cellar 368, March 2021) [2]. I will also explain how to monitor the computer vision target detection video feed by using the rqt_image_view ROS package, and how to use the rostopic package to remotely monitor topics and messages available in the system for debugging purposes. Finally, I’ll examine launch files, a widely used ROS feature that automates the simultaneous starting of multiple nodes with custom parameters.
TESTING THE ROBOTIC CAR NODES
Once we have the robotic car hardware and software in place, we are ready to do a local test to make sure everything works correctly. Connect a monitor, keyboard and mouse to your robotic car’s Raspberry Pi, and boot up. Listing 1 shows the commands to test the system from the Raspberry Pi’s desktop. If you copied the Python node scripts (command_node.py, drive_node.py and opencv_node.py) from the source code folder provided for this project (available from Circuit Cellar’s article code and files webpage), you may need to make the script files executable by running the chmod commands in Listing 1. After that, in the same terminal window, start the ROS master by executing roscore. Next, open a second terminal window and start the command node (rosrun robotic_car command_node.py).
Listing 1
Bash commands to test the system locally
# Optional: Node files copied from elsewhere sometimes fail
# to run because they are not executable.
# Do the following to make sure your node files are:
# Go to your robotic car's package scripts folder
cd ~/catkin_ws/robotic_car/src/scripts
# Make the scripts executable by running:
chmod +x command_node.py
chmod +x drive_node.py
chmod +x opencv_node.py
# --------------------------------------------------
# Test the nodes by executing the following commands.
# In the first terminal window run the ROS master:
roscore
# In terminal window 2 run:
rosrun robotic_car command_node.py
# In terminal window 3 run:
rosrun robotic_car drive_node.py
# Now go back to terminal window 2, lift the robotic
# car into the air so the wheels can rotate freely, and
# command the car to go forward (type 'i' on the
# keyboard) to check that both wheels turn forward. If
# not, shut down the car and swap the polarity of
# the motor(s) turning backward to correct them.
# In terminal window 4 run:
rosrun robotic_car opencv_node.py
Open a third terminal window and start the drive node as well (rosrun robotic_car drive_node.py). Now, lift the robotic car into the air, so the wheels can rotate freely, and turn on the switch that powers the motor driver from the 18650 Li-ion battery pack. Then, go back to terminal window 2 and hit the i key on the keyboard connected to the Raspberry Pi to make the robotic car drive forward. (Don’t place it on the floor yet!)
Check that both wheels rotate so that, if placed on the floor, the robotic car would move forward. If either wheel is rotating backward, turn off the Li-ion battery pack and then shut down the Raspberry Pi from the desktop menu. With the robotic car unpowered, swap the terminals of any motor rotating backward to reverse its rotation direction, as shown in Figure 1.
After correcting any backward wheel rotation, reboot, repeat the roscore and rosrun steps above, and verify that both wheels now rotate forward when you hit i on the keyboard. Also try the commands ',' (comma key, backward), j (left), l (right) and k (stop) to verify they work correctly. For instance, when you type j (turn left), the left wheel should turn backward and the right wheel forward, making the robotic car turn counter-clockwise on the floor. If you type l (turn right), the left wheel should turn forward and the right wheel backward, making the robotic car turn clockwise on the floor.
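To make the expected behavior concrete, here is a minimal Python sketch of the command-to-wheel mapping just described. It’s an illustration only; the actual drive_node.py from Part 2 implements this mapping with GPIO calls, and the function name here is hypothetical.

# Sketch: keyboard command -> (left, right) wheel directions,
# where +1 = forward, -1 = backward, 0 = stop.
def wheel_directions(cmd):
    mapping = {
        'i': (1, 1),    # forward: both wheels forward
        ',': (-1, -1),  # backward: both wheels backward
        'j': (-1, 1),   # turn left: spin counter-clockwise
        'l': (1, -1),   # turn right: spin clockwise
        'k': (0, 0),    # stop
    }
    return mapping.get(cmd, (0, 0))  # unknown keys stop the car

print(wheel_directions('j'))  # prints (-1, 1)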
Now, put the robotic car in “autonomous” mode by typing a on the keyboard. Open a fourth terminal window and start the computer vision node (rosrun robotic_car opencv_node.py). Once this node starts, a window titled Detect and Track should appear on the desktop showing the camera image.
Put a blue object in front of the camera, and you should see it being detected by the computer vision algorithm. Figure 2 shows the system running and a target being detected. The robotic car wheels should begin to rotate, in an attempt to follow the target. For instance, if the detected target’s centroid is horizontally near the center, both wheels should move forward at the same speed. If you move the object in front of the camera to the left, the robotic car should try to turn to the left, and the other way around if you move the object to the right.
When the detected object’s enclosing radius is greater than image_width/3 (33% of the image width), the robotic car will stop, assuming that means the target is too close to the robotic car. Move the blue object close enough to the camera to make both wheels stop. Move it back, and the wheels should start spinning again.
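Putting the following behavior into Python makes the rules easier to see at a glance. The sketch below is an assumption-laden summary of the logic described above, not the project’s actual code; the dead_band parameter is hypothetical.

# Sketch: choose a maneuver from the target's centroid x-coordinate
# and enclosing radius. Only the radius > image_width/3 ("too close")
# condition comes straight from the article; the rest is illustrative.
def follow_command(cx, radius, image_width, dead_band=0.1):
    if radius == 0:                      # no target detected
        return 'k'                       # stop
    if radius > image_width / 3.0:       # target too close
        return 'k'                       # stop
    center = image_width / 2.0
    if cx < center * (1.0 - dead_band):  # target left of center
        return 'j'                       # turn left
    if cx > center * (1.0 + dead_band):  # target right of center
        return 'l'                       # turn right
    return 'i'                           # roughly centered: forward

print(follow_command(cx=160, radius=40, image_width=320))  # 'i'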
RUNNING THE SYSTEM REMOTELY
Clearly, the robotic car is not of much use if we can’t detach it from the display, keyboard and mouse, so it can move freely over the ground while we control it remotely from another computer. In Part 1 of this article series [2], we started learning ROS by installing it on an Ubuntu 20.04 PC. We even created our first ROS package, called first_example, containing the most basic functionality for the robotic car, including a first bare-bones version of the two core nodes: command_node.py and drive_node.py.
Now, we will use that PC to remotely control the robotic car. From the PC we will access the Raspberry Pi via a Wi-Fi network, start the nodes and control the robotic car by using the first_example ROS package we built for the PC in Part 1 (a slightly improved version, as a matter of fact). We will also monitor the detection video feed from the computer vision node, as well as ROS topics and messages available in the system. Before we can do that, however, we need to add some configurations to our now multi-computer system, composed of the robotic car’s Raspberry Pi and the desktop PC interconnected in a Wi-Fi LAN (Figure 3).
Let’s begin. Boot up your Raspberry Pi, and make sure it is connected to the same Wi-Fi network to which you will be connecting the Ubuntu PC. Now, log in to your router’s control panel and reserve an unused static IP address for your Raspberry Pi. If you have never done this before, in RESOURCES at the end of this article you can find a link [3] to a tutorial on how to configure IP address reservation on TP-Link wireless routers. For other router brands the procedure is similar. Alternatively, if you don’t want to reserve an IP address, you can configure your Raspberry Pi itself with a static IP address, though in my opinion, the first option is preferable for less Linux-savvy users.
If you will be testing the robotic car with just the dynamic IP address assigned by the router to your Raspberry Pi (that is, not using a static IP), get this address by running the command hostname -I in the Raspberry Pi’s command terminal. Make sure, however, that you are connected to your router via Wi-Fi and not Ethernet. Be aware, too, that dynamic IPs can change between boot-ups (hence, it is better to use a static IP).
Moving on, make sure SSH access is enabled on the Raspberry Pi. You should have enabled it already, if you followed the installation and configuration procedure from Part 2 [1]. Otherwise, check the install_run.md file from that part for a list of steps to enable it.
Next, open the file opencv_node.py in the Raspberry Pi, go to line 106 (cv2.imshow("Detect and Track", frame)) and comment it out. We will be monitoring the detection video feed remotely as an ROS topic, so from now on we will not need a GUI window to show it. Once these modifications are done, shut down the Raspberry Pi from the desktop menu, disconnect the display, keyboard and mouse, and reboot it, this time powering it from the power bank, so it can freely wander over the floor.
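For reference, publishing an OpenCV frame as a compressed image topic follows a common rospy pattern. The sketch below shows the general idea only; the actual opencv_node.py from Part 2 may differ in names and structure, so treat it as an illustration rather than the project’s code.

#!/usr/bin/env python
# Sketch: publish a JPEG-compressed OpenCV frame instead of showing
# it with cv2.imshow(). Node and variable names are illustrative.
import cv2
import rospy
from sensor_msgs.msg import CompressedImage

rospy.init_node('detector_sketch')
pub = rospy.Publisher('detect_image/compressed', CompressedImage,
                      queue_size=1)

def publish_frame(frame):
    # JPEG-encode the BGR frame and wrap it in a CompressedImage message
    msg = CompressedImage()
    msg.header.stamp = rospy.Time.now()
    msg.format = 'jpeg'
    ok, buf = cv2.imencode('.jpg', frame)
    if ok:
        msg.data = buf.tobytes()
        pub.publish(msg)

This is the same message format rqt_image_view expects on the subscriber side, which is why the feed shows up there under detect_image/compressed.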
If you will be using a virtual Ubuntu PC with VirtualBox as discussed in Part 1, before starting the virtual machine, make the following configuration:
- Select your virtual machine from the list of available machines in the VirtualBox Manager window.
- Click Settings, and go to the Network submenu.
- Under Adapter 1 in the Attached to: drop-down menu, choose Bridged Adapter (see Figure 4). This configuration will cause your guest Ubuntu OS to work on the same network as your host computer (and the Raspberry Pi, for that matter).
- Now, after booting up your Ubuntu PC, replace the file command_node.py inside your ROS package first_example from Part 1 with the one provided with the source code for Part 3. This file is the same as the one included in Part 2’s robotic_car ROS package for the Raspberry Pi. It adds handling of the a keyboard command, which puts the robotic car in autonomous mode and was not discussed in Part 1.
Now, follow the commands in Listing 2 to run the robotic car remotely.
- First, open a terminal window and start a remote SSH connection to your Raspberry Pi (the ssh command at the top of Listing 2).
- Remember to change the IP address to the one assigned to your Raspberry Pi.
- Once you are logged into the Raspberry Pi, run echo $ROS_IP to check the environment variable ROS_IP, which contains the ROS IP address currently configured for the local computer (the Raspberry Pi). Generally, this environment variable is empty (no assigned value) by default.
- Next, run echo $ROS_MASTER_URI to check the currently configured Uniform Resource Identifier, or URI (let’s just say “address”), of the ROS master on the local computer. In general, this is configured as localhost by default.
Listing 2
Bash commands to test the system remotely from the Ubuntu PC
### ----- Run on your Ubuntu PC's terminal window 1 ------
# Change the IP address below for the one assigned to your RPi
ssh pi@192.168.0.150 # (Enter your RPi password when prompted)
# OPTIONAL: Check your configured ROS addresses in the RPi
echo $ROS_IP # The RPi's current ROS address
echo $ROS_MASTER_URI # The ROS master's current address
# Configure new ROS addresses for the RPi
# Change the IP address below for the one assigned to your RPi:
export ROS_IP=192.168.0.150 # Configure the RPi's ROS address
export ROS_MASTER_URI="http://"$ROS_IP":11311" #RPi as ROS master
# Run the ROS master in the RPi
roscore
### ----- Run on your Ubuntu PC's terminal window 2 ------
# Change the IP address below for the one assigned to your RPi
ssh pi@192.168.0.150 # (Enter your RPi password when prompted)
# Configure new ROS addresses for the RPi
# Change the IP address below for the one assigned to your RPi:
export ROS_IP=192.168.0.150 # Configure the RPi's ROS address
export ROS_MASTER_URI="http://"$ROS_IP":11311" #RPi as ROS master
# Run the driver node in the RPi
rosrun robotic_car drive_node.py
### ----- Run on your Ubuntu PC's terminal window 3 ------
# Change the IP address below for the one assigned to your RPi
ssh pi@192.168.0.150 # (Enter your RPi password when prompted)
# Configure new ROS addresses for the RPi
# Change the IP address below for the one assigned to your RPi:
export ROS_IP=192.168.0.150 # Configure the RPi's ROS address
export ROS_MASTER_URI="http://"$ROS_IP":11311" #RPi as ROS master
# Run the computer vision node in the RPi
rosrun robotic_car opencv_node.py
### ----- Run on your Ubuntu PC's terminal window 4 ------
# Check the IP assigned to your Ubuntu PC and take note of it
hostname -I
# OPTIONAL: Check your configured ROS addresses in your PC
echo $ROS_IP # The PC's current ROS address
echo $ROS_MASTER_URI # The ROS master's current address
# Configure new ROS addresses in the Ubuntu PC
# Change the IP address below for the one assigned to your PC:
export ROS_IP=192.168.0.11 # Configure the PC's ROS address
# Change the IP address below for the one assigned to your RPi:
export ROS_MASTER_URI="http://192.168.0.150:11311" #RPi as ROS master
# Run the command node
rosrun first_example command_node.py
### ----- Run on your Ubuntu PC's terminal window 5 ------
# Configure new ROS addresses in the Ubuntu PC
# Change the IP address below for the one assigned to your PC:
export ROS_IP=192.168.0.11 # Configure the PC's ROS address
# Change the IP address below for the one assigned to your RPi:
export ROS_MASTER_URI="http://192.168.0.150:11311" #RPi as ROS master
# Run the following command to monitor the detection video feed:
rqt_image_view
#...In the opened window, from the first drop-down menu select
# the topic 'detect_image/compressed' and the detection video
# should be visible in the window
### ----- Test the robotic car ------
# Go to terminal window 4, where the command node is running, and
# type 'i', ',', 'j', 'l' and 'k'. The robotic car should
# perform the corresponding maneuvers.
# Type 'a' to enter "autonomous" mode. Put a blue object in
# front of the camera. The robotic car should start following
# the object.
ENVIRONMENT VARIABLES
Before running the system, we must assign these environment variables proper values. To do so, run the two export commands shown for terminal window 1 in Listing 2. Also remember to change the IP address in the ROS_IP export to the one assigned to your Raspberry Pi. Now, you can finally run the ROS master on the Raspberry Pi by executing roscore. Open a second terminal window on your Ubuntu PC and repeat the ssh and export commands shown for terminal window 2, to open a second remote SSH session into the Raspberry Pi, configure addresses for that terminal window and start the motor driving node (drive_node.py). Next, open a third terminal window and, with a similar set of commands, start the computer vision node (opencv_node.py).
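If you want to confirm that the exports took effect and that the ROS master is reachable from a given terminal, a quick check can also be done from Python. The following is a minimal, optional sketch; the rosgraph package it uses ships with ROS.

# Optional sanity check: print the configured ROS addresses and ask
# whether the ROS master at ROS_MASTER_URI is currently reachable.
import os
import rosgraph

print('ROS_IP         =', os.environ.get('ROS_IP', '(not set)'))
print('ROS_MASTER_URI =', os.environ.get('ROS_MASTER_URI', '(not set)'))
print('master online  :', rosgraph.is_master_online())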
Note that these first two nodes (plus the ROS master) are running on the Raspberry Pi, not on the PC. There’s a way to run multiple nodes from just one terminal window by executing a single launch file; we will show how to do that later. For now, running the nodes in individual windows gives us more insight into what’s going on with every node while the whole system is running.
The rest of the nodes will run locally on the Ubuntu PC. Open a fourth terminal window and run hostname -I to verify the IP address assigned by your Wi-Fi router to your Ubuntu PC. Remember, your PC and the Raspberry Pi should be connected to the same Wi-Fi network (Figure 3). Now, execute the two echo commands to check the ROS environment variables ROS_IP and ROS_MASTER_URI currently configured in your Ubuntu PC. Then, execute the two export commands shown for terminal window 4 to configure them with new values.
Remember to set ROS_IP to the IP address assigned to your Ubuntu PC, and ROS_MASTER_URI to the one assigned to your Raspberry Pi (the ROS master is running on the Raspberry Pi). Finally, run rosrun first_example command_node.py to start the command node. As said before, this node runs locally on the Ubuntu PC. This assumes you have already replaced the file command_node.py from Part 1 on your Ubuntu PC with the updated version provided with the source code for this article on Circuit Cellar’s article code and files webpage.
Now, let’s open a fifth terminal window and execute the export commands shown for terminal window 5 to configure addresses as before. Finally, let’s execute rqt_image_view to run the rqt_image_view ROS node. rqt_image_view is a ROS package included with the ROS desktop-full installation that provides a GUI plugin for displaying images. After running it, a GUI window will show up. Go to the drop-down menu at the upper left of that GUI window, and select the topic detect_image/compressed to get the detection video feed displayed in the window (see Figure 5). Remember from Part 2 that the detect_image/compressed topic is published by the opencv_node.py ROS node that runs on the Raspberry Pi.
Now, we are ready to make the robotic car move. Go back to terminal window 4, where the command node is running, and use the keyboard commands to make the robotic car move. Type i, ',' (comma key), j, l and k to control the robotic car manually, and type a to enter autonomous mode. In autonomous mode, put a blue object in front of the camera, and the robotic car should start following it. Figure 5 shows the desktop PC with the full ROS system running, and Figure 6 shows the robotic car running headless.
MONITORING ROS TOPICS
Another ROS package, rostopic, is included with both the desktop-full and bare-bones ROS installations. It contains the rostopic command-line tool for displaying debug information about topics, including publishers, subscribers, publishing rates and messages [4]. This makes rostopic useful for debugging node behavior. Listing 3 shows some examples of the use of this command-line tool. Let’s see how it works.
Listing 3
Bash commands to monitor ROS topics with the rostopic tool
### ----- Run on your Ubuntu PC's terminal window 6 ------
# Configure new ROS addresses for the PC
# Change the IP address below for the one assigned to your PC:
export ROS_IP=192.168.0.11 # Configure the PC's ROS address
# Change the IP address below for the one assigned to your RPi:
export ROS_MASTER_URI="http://192.168.0.150:11311" #RPi as ROS master
## Check topics and messages from your ROS system:
# List all available ROS topics
rostopic list
# Check the messages published to the '/target_coord' topic
rostopic echo /target_coord # Press <ctrl>+c to abort
# Clear the screen before each message is published
rostopic echo -c /target_coord
# Check bandwidth used by the '/detect_image/compressed' topic
rostopic bw /detect_image/compressed
# Print info about the '/target_radius' topic.
rostopic info /target_radius
# Display publishing rate of the '/detect_image/compressed' topic
rostopic hz /detect_image/compressed
# Publish a message to the '/command' topic
rostopic pub /command std_msgs/String forward
With your ROS system still running, open a new terminal window on the Ubuntu PC and configure addresses, as we did before (the export commands at the top of Listing 3). Now, execute rostopic list to get a listing of all ROS topics available in the system. Some are published by the ROS master and rqt_image_view and may look unfamiliar. Others are more easily recognizable, because they are created by our custom nodes. For instance, you should see the topics /command, /detect_image/compressed, /target_coord, /target_radius and /image_width in the list. The less familiar ones we can safely ignore for now.
If you run rostopic echo /target_coord, the messages published to the /target_coord topic will be printed to the screen. The three numbers in each message are the XYZ coordinates of the target detected by the computer vision node. Remember from Part 2 that the Z coordinate will always be zero, because we don’t have depth information from the camera image. If there is no blue object in front of the camera, the coordinates will be zero. Put a blue object in front of the camera and move it around to see the XY coordinates change. Press the <Ctrl>+c key combination to stop the message echoing. If you add the -c flag (rostopic echo -c /target_coord), the same topic is monitored, but now the screen clears before each new message is printed. If you want, you can run rostopic echo on the rest of the topics to check their messages as well. With the /detect_image/compressed topic, you should see the JPEG-compressed image data as a long list of integers.
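A node can consume these messages the same way rostopic echo does. The following minimal subscriber sketch assumes the topic uses the geometry_msgs/Point message type (a natural fit for three coordinates); check opencv_node.py from Part 2 for the exact type before relying on it.

# Sketch: subscribe to '/target_coord' and log the target position.
# Message type assumed to be geometry_msgs/Point (z is unused).
import rospy
from geometry_msgs.msg import Point

def on_target(msg):
    rospy.loginfo('target at x=%.1f y=%.1f (z unused)', msg.x, msg.y)

rospy.init_node('coord_monitor')
rospy.Subscriber('/target_coord', Point, on_target)
rospy.spin()  # process callbacks until <Ctrl>+c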
Now, run rostopic bw /detect_image/compressed to check how much bandwidth (bw stands for bandwidth) the /detect_image/compressed topic is using. You can check the bandwidth of any other topic by changing the topic name in the same command. With rostopic info you can print a topic’s metadata, and with rostopic hz you get a topic’s publishing rate (hz stands for Hertz). Finally, rostopic pub lets us manually publish a message to a given topic. In this case, /command is the topic to which we want to publish, std_msgs/String is the message type and forward is the message. After executing the command, the robotic car should start to move forward. You can publish messages to any of your topics, provided that the type and the message being published are correct. For instance, you cannot publish forward to the /target_radius topic, which is of type std_msgs/UInt16.
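The same thing can be done programmatically. Here is a minimal one-shot sketch equivalent to the rostopic pub command above (the node name is arbitrary):

# Sketch: publish 'forward' to the '/command' topic from Python,
# the programmatic equivalent of:
#   rostopic pub /command std_msgs/String forward
import rospy
from std_msgs.msg import String

rospy.init_node('command_publisher')
pub = rospy.Publisher('/command', String, queue_size=1)
rospy.sleep(1.0)  # give subscribers a moment to connect
pub.publish(String(data='forward'))  # the car should move forward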
LAUNCH FILES
By now, you are probably thinking that having to open five terminal windows to run five nodes by hand is impractical, and you are right. Enter launch files. Launch files let us run nodes in groups, along with the ROS master, with a single command. They add a level of flexibility to ROS, but they also add a layer of complexity, because now you have to learn about launch files.
Launch files are XML documents with the .launch extension. They contain information about the nodes to be started, parameters to pass to them and, optionally, other launch files to include. Launch files can be placed anywhere in the package directory, but it’s common practice to create a launch directory inside the package’s src folder and put them there. Next, we will be testing the system again, but now using launch files, so before starting this new test, stop all ROS nodes still running on both the Raspberry Pi and the Ubuntu PC.
Listing 4 shows the content of the launch file robotic_car_core.launch, which we will use to start the nodes drive_node.py and opencv_node.py at once. This launch file is intended to run on the Raspberry Pi and it will automatically launch the ROS master as well, if it is not running already.
Listing 4
XML code for the robotic_car_core.launch launch file
<!--
This launch file is intended to run in the Raspberry Pi and
starts the nodes 'drive_node.py' and 'opencv_node.py' at once.
It automatically starts the ROS master as well if it isn't
running already. After SSH-ing to your Raspberry Pi, configure
the environment variables before running this script:
# Configure new ROS addresses for the RPi
# Change the IP address below for the one assigned to your RPi:
export ROS_IP=192.168.0.150 # Configure the RPi's ROS address
export ROS_MASTER_URI="http://"$ROS_IP":11311" #RPi as master
-->
<launch>
<!-- NOTE: The [output="screen"] argument enables printed text
output to the screen. Delete the argument to disable it. -->
<!-- From package 'robotic_car' run 'drive_node.py' with
custom name 'driver' -->
<node pkg="robotic_car" type="drive_node.py" name="driver"
output="screen"/>
<!-- From package 'robotic_car' run 'opencv_node.py' with
custom name 'detector' -->
<node pkg="robotic_car" type="opencv_node.py" name="detector"
output="screen"/>
</launch>
All the content in an XML launch file (except for any comments) must be surrounded by a pair of launch tags (the <launch> and </launch> tags in Listing 4). Furthermore, node tags contain the details of each node to run. Take the first node tag as an example. The pkg argument states the ROS package to which the node in question belongs, the type argument refers to the node’s Python script file, the name argument is a custom name the node will take when spawned, and the output argument enables printed text output from the node to the screen (generated by Python print() statements, in our case). If you omit the output argument, no text from print() statements will be shown in the terminal window. The second node tag does the same for the computer vision node.
To run this launch file on the Raspberry Pi, open a remote SSH session into it from the Ubuntu PC, and configure the environment variables as before (read the comment block at the top of the launch file). Then, type roslaunch robotic_car robotic_car_core.launch. The command-line tool that runs launch files is roslaunch, robotic_car is the package where the launch file is located and robotic_car_core.launch is the name of the launch file you are about to run.
The second launch file we will use is robotic_car_remote.launch. This one is intended to run on the Ubuntu PC, and it starts two nodes: command_node.py and rqt_image_view. The first lets us control the robotic car remotely from the PC keyboard. The second lets us monitor the computer vision detection video feed on the PC desktop. If you open this file with a text editor, you will see that the XML code is similar to the code in the previous launch file, except for the type="rqt_image_view" argument, which refers to a binary executable instead of a Python script. That’s because rqt_image_view is written in C++ and, like all C++ nodes, is compiled to a binary file.
To run this second launch file, first copy the launch directory from the source code provided for Part 3, into the first_example package folder in the Ubuntu PC. Then, open a terminal window and configure the environment variables. Here, you should configure ROS_IP with the IP address of the Ubuntu PC and ROS_MASTER_URI with the Raspberry Pi’s (see the comment at the beginning of this file as well). Then, type roslaunch first_example robotic_car_remote.launch. In this case, first_example is the name we gave to the package we created in Part 1, and robotic_car_remote.launch is the name of the launch file you are about to run.
You may recall that the first_example package from Part 1 for the Ubuntu PC and the robotic_car package from Part 2 for the Raspberry Pi are almost the same. The fact that they have different names is just circumstantial; both could take the same name. For instance, we could build the Ubuntu PC package again from scratch with the name robotic_car. They could even contain the exact same files, bearing in mind that drive_node.py would not run properly on the Ubuntu PC, because there are no GPIO pins to access. However, opencv_node.py would run just fine on the Ubuntu PC if a webcam were connected to it. After running both launch files, you should have the complete system running again.
CONCLUSION
My article “Write an Object Tracking Drone Application” (Circuit Cellar 362, September 2020) [5] provides an introduction to object detection with color segmentation and explains how to change the color range for the objects you want to detect. Refer to that article to learn how to change the color range for target detection.
When testing the robotic car, you will notice that it is not very responsive when following targets. This is due to the latency imposed by all the tasks running on the Raspberry Pi’s processor. Moreover, a proper PID controller would be more effective for target following. But the main goal of this article series was to introduce ROS in a way that’s convenient for ROS newcomers, while avoiding complexities not directly related to ROS. Also, we have already warned readers that color segmentation has its downsides for object detection, because it depends heavily on ambient light and the camera’s white-balance setting.
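As a rough illustration of the PID idea (not part of the project code), a controller could steer from the target’s horizontal error instead of issuing fixed turn commands. The gains below are placeholders that would need tuning on the real robot.

# Sketch: PID-based steering from the target's horizontal error.
# Gains (kp, ki, kd) are illustrative placeholders only.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

steering = PID(kp=0.8, ki=0.0, kd=0.1)
# error: centroid offset from image center, normalized to [-1, 1]
correction = steering.update(error=0.25, dt=0.05)
print(correction)  # positive steers right, negative steers left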
A much better approach would be to use Deep Learning inference with a neural network, but that is another complex topic in itself. On that note, a couple of future follow-ups to this article series are meant to discuss more advanced topics related to ROS and object detection with Deep Learning. Stay tuned!
RESOURCES
References:
[1] “Intro to Robot Operating System (ROS), Part 2: Build an ROS-based Robotic Car” (Circuit Cellar 369, April 2021)
[2] “Intro to Robot Operating System (ROS), Part 1: A Framework for Developing Robotics Software” (Circuit Cellar 368, March 2021)
[3] How to configure Address Reservation on TP-Link wireless router
https://www.tp-link.com/us/support/faq/182
[4] “rostopic”, http://wiki.ros.org/rostopic
[5] “Write an Object Tracking Drone Application” (Circuit Cellar 362, September 2020)
ROS Wiki | https://wiki.ros.org
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MAY 2021 #370
Raul Alvarez Torrico has a B.E. in electronics engineering and is the founder of TecBolivia, a company offering services in physical computing and educational robotics in Bolivia. In his spare time, he likes to experiment with wireless sensor networks, robotics and artificial intelligence. He also publishes articles and video tutorials about embedded systems and programming in his native language (Spanish) at his company’s website, www.TecBolivia.com. You may contact him at raul@tecbolivia.com