Build an ROS-based Robotic Car
Raul continues his article series on the Robot Operating System (ROS). Here in Part 2, he shares how he built a Raspberry Pi-based differential drive robotic car as a low-cost platform to let you experiment with ROS middleware.
Building on what I covered in Part 1 of this series (Circuit Cellar 368, March 2021) [1], in Part 2 I’ll discuss how to build a Raspberry Pi-based differential-drive robotic car as a low-cost platform for introductory experimentation with the Robot Operating System (ROS) middleware. In manual mode, the robotic car can be controlled from the keyboard of a remote PC. In autonomous mode, it can follow a target detected by a webcam mounted on the robotic car. Target objects are detected using computer vision color segmentation, and the video feed from the detection process can be monitored from the remote PC. All ROS nodes for the robotic car are written in Python. I’m using ROS (version 1) instead of ROS 2 for the reasons discussed in Part 1.
This article is geared toward beginners, and its main purpose is to show a basic working example of an ROS system that integrates computer vision with the OpenCV library. If you don’t yet have a basic understanding of ROS, please read Part 1 (or have equivalent knowledge of the topics it covers) for a quick introduction to basic ROS concepts, how to set up an Ubuntu machine with the ROS middleware and how to build your first ROS package. You should also have basic experience setting up a Raspberry Pi SBC with an operating system, and basic experience with Python and the OpenCV computer vision library.
FUNCTIONAL DESCRIPTION
Figure 1 shows the main hardware components for the robotic car. I am using a Raspberry Pi 3B SBC as the main controller, but any of the newer versions (3B+ and 4B) should work as well—though 4B is recommended. The Raspberry Pi is powered from a 10,000mAh, 5V/3A cell phone recharging power bank from RAVPower. The robotic car has a differential-drive configuration with two 60RPM DC motors from Pololu driven by an STMicroelectronics (ST) L298N dual full-bridge driver module and powered from a dedicated 18650 rechargeable Li-ion, 7.4V, 2,000mAh battery pack from Sparkfun. A generic Linux-compatible USB Video Class (UVC) webcam from Logitech is used to detect the targets. (Most webcams are UVC enabled.) Figure 2 shows the circuit diagram for the robotic car, and Figure 3 shows the finished hardware build.
The Raspberry Pi runs the Ubuntu MATE 18.04 operating system, on top of which it runs ROS. Other Raspberry Pi operating systems could be used as well. I chose Ubuntu MATE 18.04 because, being Ubuntu, it is directly supported by ROS; it has a desktop (convenient for beginners), and runs well on Raspberry Pi 3B, 3B+ and 4B boards. For this robotic car, we will create a custom ROS package, similar to the one we created in Part 1 of this article. The package will contain three ROS nodes: one for commanding the robotic car from the PC keyboard; one for driving the robotic car’s motors and one for detecting targets with computer vision.
RASPBERRY Pi INSTALLATION
I’m assuming you already know how to prepare a microSD card with an operating system (Raspberry Pi OS, for example) and boot the Raspberry Pi for the first time. Setting up the Raspberry Pi with the Ubuntu MATE operating system is very similar. Check the file install_run.md, included with this article’s source code, for a summary of steps to install and configure Ubuntu MATE 18.04 and ROS Melodic. The procedure is almost the same as the one we followed in Part 1 to install ROS Noetic in an Ubuntu 20.04 PC.
Once you have ROS installed, set up a catkin workspace and build an ROS package with the name robotic_car. This package will be almost the same as the one we built in Part 1, except that we are adding the opencv_node.py ROS node for target detection with computer vision. We are also updating the code in the command_node.py and drive_node.py nodes to add the new “autonomous mode” to the robotic car and to control real DC motors with pulse width modulation (PWM) signals. Follow the steps in the install_run.md file; the procedure is very similar to the one followed in Part 1. Also install the cv_bridge ROS package, which allows converting OpenCV images to ROS image messages.
Once you have your robotic_car package ready, copy the files command_node.py, drive_node.py and opencv_node.py from the catkin_ws/src/robotic_car/scripts folder in the source code provided for this article to the corresponding catkin_ws/src/robotic_car/scripts folder of the package you just created.
COMMAND AND DRIVE NODES
command_node.py: The code for this node is almost the same as the code we used to control the fictitious robotic car in Part 1. Here, it publishes the same command messages, which the drive_node.py ROS node now reads to control the robotic car’s motors with PWM signals. The code is easy to understand; please refer to Part 1 for an explanation of how it works. In Part 1, this node published five commands to a String message type topic called /command. Those commands were forward, backward, left, right and stop, which corresponded to the keys i, “,” (comma), j, l and k on the PC keyboard. Here, we add a sixth command, auto, triggered by the a key, to change the robotic car’s control mode from “manual” (forward, backward, left, right and stop) to “autonomous” (follow the target). In autonomous mode, the robotic car will follow any target detected by the webcam. Instead of string commands (forward, backward, left and so on), we could use a pair of velocities, linear and angular (in the XY plane), to control the robotic car, as is generally done in professionally designed robotic car systems. Here we use string commands for the sake of simplicity.
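The full command_node.py is included with the article’s source code and was explained in Part 1, so it isn’t listed here. For orientation only, a minimal keyboard command publisher along these lines could look like the following sketch; the key-to-command mapping is taken from the description above, while the node name and the line-based key input are illustrative assumptions, not the actual code:

#!/usr/bin/env python
# Minimal sketch of a keyboard command publisher (illustrative only;
# the node name and the line-based key reading are assumptions).
import rospy
from std_msgs.msg import String

KEY_MAP = {'i': 'forward', ',': 'backward', 'j': 'left',
           'l': 'right', 'k': 'stop', 'a': 'auto'}

def commander():
    rospy.init_node('command_sketch', anonymous=True)
    pub = rospy.Publisher('/command', String, queue_size=10)
    while not rospy.is_shutdown():
        # For simplicity, read a key followed by Enter; the real node
        # could read raw keystrokes instead.
        try:
            key = raw_input('key> ')  # use input() under Python 3
        except EOFError:
            break
        command = KEY_MAP.get(key.strip())
        if command:
            pub.publish(command)  # a plain str fills the String message
        else:
            print('Unknown key: ' + key)

if __name__ == '__main__':
    commander()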
drive_node.py: The code for this node is also almost the same as the one we used in Part 1, except for the driving functions. In Part 1 these were left empty, because we were dealing with a fictitious robotic car. Here, we must fill in the six driving functions (forward, backward, left, right, stop and autonomous) with code to control the motors using PWM signals.
This node subscribes to the /command topic to read the command strings published by command_node.py. The first five commands (forward, backward, left, right and stop) control the robotic car manually; however, the sixth command (auto) puts the robotic car in autonomous mode. In this mode, it follows any blue target detected by the webcam—later you can change the color to be detected. This node reads the detected target’s XY coordinates from the /target_coord topic published by the opencv_node.py computer vision node. These coordinates are defined in a Cartesian coordinate system with its origin in the center of the image frame (Figure 4). Listing 1 shows the code for drive_node.py.
LISTING 1 – Code for drive_node.py
#!/usr/bin/env python
import rospy # Python library for ROS
from std_msgs.msg import String, UInt16
from geometry_msgs.msg import Point
import RPi.GPIO as GPIO

# Set the GPIO modes
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
# Set variables for the GPIO motor driver pins
motor_left_fw_pin = 10
motor_left_bw_pin = 9
motor_right_fw_pin = 8
motor_right_bw_pin = 7

# PWM signal frequency in Hz
pwm_freq = 2000
# PWM % duty cycle
fw_bw_duty_cycle = 60
turn_duty_cycle = 40

# Set the GPIO Pin mode to output
GPIO.setup(motor_left_fw_pin, GPIO.OUT)
GPIO.setup(motor_left_bw_pin, GPIO.OUT)
GPIO.setup(motor_right_fw_pin, GPIO.OUT)
GPIO.setup(motor_right_bw_pin, GPIO.OUT)
# Create PWM objects to handle GPIO pins with 'pwm_freq' frequency
motor_left_fw = GPIO.PWM(motor_left_fw_pin, pwm_freq)
motor_left_bw = GPIO.PWM(motor_left_bw_pin, pwm_freq)
motor_right_fw = GPIO.PWM(motor_right_fw_pin, pwm_freq)
motor_right_bw = GPIO.PWM(motor_right_bw_pin, pwm_freq)

# Start PWM with a duty cycle of 0 by default
motor_left_fw.start(0)
motor_left_bw.start(0)
motor_right_fw.start(0)
motor_right_bw.start(0)

# Global variables for storing received ROS messages
received_command = ''
last_received_command = ''
received_coord = Point(0, 0, 0)
target_radius = None
MIN_TGT_RADIUS_PERCENT = 0.05
image_width = 0
CENTER_WIDTH_PERCENT = 0.30

# Motor driving functions
def forward():
    motor_left_fw.ChangeDutyCycle(fw_bw_duty_cycle)
    motor_left_bw.ChangeDutyCycle(0)
    motor_right_fw.ChangeDutyCycle(fw_bw_duty_cycle)
    motor_right_bw.ChangeDutyCycle(0)

def backward():
    motor_left_fw.ChangeDutyCycle(0)
    motor_left_bw.ChangeDutyCycle(fw_bw_duty_cycle)
    motor_right_fw.ChangeDutyCycle(0)
    motor_right_bw.ChangeDutyCycle(fw_bw_duty_cycle)

def left():
    motor_left_fw.ChangeDutyCycle(0)
    motor_left_bw.ChangeDutyCycle(turn_duty_cycle)
    motor_right_fw.ChangeDutyCycle(turn_duty_cycle)
    motor_right_bw.ChangeDutyCycle(0)

def right():
    motor_left_fw.ChangeDutyCycle(turn_duty_cycle)
    motor_left_bw.ChangeDutyCycle(0)
    motor_right_fw.ChangeDutyCycle(0)
    motor_right_bw.ChangeDutyCycle(turn_duty_cycle)

def stopMotors():
    motor_left_fw.ChangeDutyCycle(0)
    motor_left_bw.ChangeDutyCycle(0)
    motor_right_fw.ChangeDutyCycle(0)
    motor_right_bw.ChangeDutyCycle(0)

# The node's main entry function
def listener():
    rospy.init_node('motor_driver', anonymous=True)
    rospy.Subscriber('/command', String, commandCallback)
    rospy.Subscriber('/target_coord', Point, targetCoordCallback)
    rospy.Subscriber('/target_radius', UInt16, targetRadiusCallback)
    rospy.Subscriber('/image_width', UInt16, imageWidthCallback)

    rospy.spin()

# Callback functions to process incoming ROS messages
def commandCallback(message):
    global received_command
    global last_received_command

    received_command = message.data

    if received_command == 'forward':
        forward()
    elif received_command == 'backward':
        backward()
    elif received_command == 'left':
        left()
    elif received_command == 'right':
        right()
    elif received_command == 'stop':
        stopMotors()
    elif received_command == 'auto':
        autonomous()
    else:
        print('Unknown command!')

    if received_command != last_received_command:
        print('Received command: ' + received_command)
        last_received_command = received_command

def imageWidthCallback(message):
    global image_width
    image_width = message.data

def targetRadiusCallback(message):
    global target_radius
    target_radius = message.data
def targetCoordCallback(message):
    global received_coord
    received_coord.x = message.x
    received_coord.y = message.y

# Target following function in autonomous mode
def autonomous():
    global image_width
    global target_radius
    global MIN_TGT_RADIUS_PERCENT
    if target_radius >= image_width*MIN_TGT_RADIUS_PERCENT and target_radius <= image_width/3:
        if abs(received_coord.x) <= image_width*CENTER_WIDTH_PERCENT:
            forward()
        elif received_coord.x > 0:
            right()
        elif received_coord.x < 0:
            left()
    else:
        stopMotors()

if __name__ == '__main__':
    print('Ready to receive commands!')
    listener()
    print('Node is shutting down, stopping motors')
    stopMotors()
In Part 1, I described how the code related to ROS in this node works. Here, I will focus mainly on the code we are adding to control the real car. Let’s see what we have in Listing 1. In line 3 we import the String and UInt16 data types from the std_msgs ROS package that will allow us to subscribe to string and unsigned integer message type topics. With line 4 we import the Point data type from the geometry_msgs ROS package, which provides data representation for points (X, Y, Z) in 3D space, and will allow us to subscribe to Point message type topics. With line 5, we import the Python Raspberry Pi GPIO library to control the Raspberry Pi pins and generate PWM signals to drive the motors. I will not cover in detail how this library works; however, I will succinctly explain the code lines that use this library to generate PWM signals. See RESOURCES at the end of the article for suggested beginner’s tutorials on PWM, and also on the GPIO library.
GPIO LIBRARY
With line 8 we set the pin-numbering scheme for the GPIO library to BCM (there’s also the BOARD option), which allows us to follow the low-level numbering system defined by the Broadcom processor in the Raspberry Pi. Line 9 disables some annoying warnings from the GPIO library. With lines 11-14 we define the Raspberry Pi pins we want to use to generate the PWM signals for driving the left and right motors. For instance, we will output a PWM signal via pin 10 (motor_left_fw_pin) to make the left motor rotate forwards, and via pin 9 (motor_left_bw_pin) to make it rotate backwards. We’ll do the same with pins 8 and 7 for the right motor.
Line 17 defines a frequency of 2,000Hz for the PWM signal. Line 19 defines the PWM duty cycle to use when the car goes forward or backward, and line 20 defines the duty cycle to use when the car turns left or right. With lines 23-26 we configure the four selected pins as outputs, and with lines 29-32 we create four corresponding PWM software objects (we can call them “handles”) to interact with the selected pins to generate the PWM signals. Lines 35-38 start the PWM signal generation process with a duty cycle of zero to have the motors stopped by default. Lines 41-47 define some global constants and variables. Their use will be evident when we explain the code further.
Lines 50-78 define the driving functions. For instance, the forward() function (lines 50-54) drives the robotic car forward. Each motor in the robotic car is driven by an H-bridge, which is a special electronic circuit to drive a DC motor in both directions with two PWM signals. For that reason, we need two pins to control each motor. To make the left motor go forward, we output through the motor_left_fw pin a PWM signal with a duty cycle greater than zero and through the motor_left_bw pin a PWM signal with a duty cycle of 0 (see lines 51-52).
We do the same for the right motor (lines 53-54), and when both motors turn forward, the robotic car goes forward as well. Conversely, to make the robotic car go backward, we set the PWM signals in both backwards pins to a duty cycle greater than zero and both forwards pins to 0 (see lines 57-60). To make the car turn left, the left motor must turn backward and the right one must turn forward (see lines 63-66). To make it turn right, the opposite (lines 69-72). To stop the car, all pins must have a duty cycle of 0 (lines 75-78).
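As a side note, the four direction functions could also be collapsed into a single helper that maps a signed duty cycle onto the two PWM channels of one H-bridge. This is only an optional refactoring sketch (set_motor() is a hypothetical helper, not part of drive_node.py):

# Optional refactoring sketch: drive one motor from a signed duty cycle.
# 'fw_pwm' and 'bw_pwm' are the GPIO.PWM objects created in Listing 1.
def set_motor(fw_pwm, bw_pwm, signed_duty):
    duty = min(abs(signed_duty), 100)   # clamp to the 0-100% range
    if signed_duty >= 0:                # positive value: forward pin active
        fw_pwm.ChangeDutyCycle(duty)
        bw_pwm.ChangeDutyCycle(0)
    else:                               # negative value: backward pin active
        fw_pwm.ChangeDutyCycle(0)
        bw_pwm.ChangeDutyCycle(duty)

# For example, forward() would then reduce to:
# set_motor(motor_left_fw, motor_left_bw, fw_bw_duty_cycle)
# set_motor(motor_right_fw, motor_right_bw, fw_bw_duty_cycle)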
For driving forward and backward, for now we are setting a duty cycle of 60% (see line 19: fw_bw_duty_cycle = 60) and to turn left and right a duty cycle of 40% (see line 20: turn_duty_cycle = 40). Don’t hesitate to change them to make the robotic car maneuver faster or slower, as its final velocity will depend on the type of motors you are using and even the voltage level in your battery pack.
The listener() function defined in lines 81-88 is the main entry function for this node. In line 82 we initialize the node with the name motor_driver, and with lines 83-86 we subscribe it to four topics: /command, published by command_node.py, and /target_coord, /target_radius and /image_width, published by opencv_node.py. Line 88 puts this node in an infinite loop to execute the callback functions when new messages are available in any of the subscribed topics.
AUTONOMOUS MODE
In Part 1, I discussed how the commandCallback(message) callback function (lines 91-114) works. This function executes when a new message is available in the /command topic. What’s new here is the parsing of the auto message (lines 107-108), which triggers the autonomous mode by running the autonomous() function defined in lines 130-143. The imageWidthCallback(message) function defined in lines 116-118 executes when new data are available in the /image_width topic (see line 86). In line 118 this function reads the data field from the received message, which contains the width in pixels of the image frame used in the computer vision detection task.
These data are stored in the image_width global variable (lines 41-47 in Listing 1 define the global constants and variables), and they represent the camera’s horizontal “field of view” in pixels. targetRadiusCallback(message) executes when new data are available in the /target_radius topic and, similarly to the previous callback function, it reads the message data and stores it in the target_radius global variable (line 122). This message contains the radius of a computed enclosing circle around the detected target. Similarly, the function targetCoordCallback(message) reads the target’s (X, Y) coordinates (in pixel units) from the /target_coord topic and stores them in the received_coord global variable. This variable is of type Point, and it has three fields: X, Y and Z, but we are not using the Z value because we can’t extract depth information from the camera image.
Finally, the autonomous() function makes the robotic car autonomously follow the detected target, using the data provided by opencv_node.py. In line 135 the function checks whether the detected target’s enclosing circle radius is greater than or equal to MIN_TGT_RADIUS_PERCENT (0.05, or 5% in this case) of image_width, and less than or equal to image_width/3 (a third of the image frame’s width), as a condition to follow the target. Otherwise, it stops the robotic car (lines 142-143). Thus, the car will only track targets with an enclosing circle radius between 5% and 33.33% of the image frame width. The lower limit stops the car from chasing targets that are too small and could be just noise, and the upper limit ideally makes it stop when it’s too close to the target.
You can change these two parameters to the values that work best for you. If the target has an acceptable size, in lines 136-137 the function commands the car to go forward, as long as the absolute value of the target’s X coordinate is less than or equal to CENTER_WIDTH_PERCENT (0.30, or 30% in this case) of the image width. For instance, if the image width is 320 pixels, the car will go forward as long as the target’s X coordinate is between -96 and 96 pixels. These values are defined in a Cartesian coordinate system with its origin at the image frame’s center (Figure 4). If the target is far to the right, the car turns right (lines 138-139), and if it is far to the left, the car turns left (lines 140-141), until the target is centered in the image frame again.
COMPUTER VISION NODE
In my previously published article “Write an Object Tracking Drone Application” (Circuit Cellar 362, September, 2020) [2], I explained how to use color segmentation-based object detection with OpenCV to detect a target. If you are new to that topic, I strongly suggest you read that article for a succinct explanation of how to detect objects and apply a low-pass moving average filter to the target’s XY coordinates to reduce noise in the data.
In that article, I used a drone in a simulated environment to track and follow a camera-detected target. Here, we will use a similar approach to follow the target with the robotic car, though this time also using the ROS middleware. So, the object detection code here is very similar to the code used in that previous article. With that in mind, I will give just a general explanation of this node’s main entry function, leaving aside the auxiliary functions that were explained in detail in the article in reference [2].
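For readers who don’t have that article at hand, the core idea of color segmentation-based detection can be sketched as follows. This is only an illustrative sketch, not the actual get_target_coordinates() function from opencv_node.py, and the HSV range shown for a blue target is an assumption you would tune for your own object and lighting:

import cv2
import numpy as np

# Assumed HSV range for a 'blue' target; tune for your object and lighting
LOWER_BLUE = np.array([100, 120, 70])
UPPER_BLUE = np.array([130, 255, 255])

def find_target(frame):
    # Return (center, radius, contour) of the largest blue blob, or Nones
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)   # binary mask of in-range pixels
    mask = cv2.erode(mask, None, iterations=2)        # remove small specks
    mask = cv2.dilate(mask, None, iterations=2)
    # findContours returns 2 or 3 values depending on the OpenCV version
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None, None, None
    contour = max(contours, key=cv2.contourArea)      # keep the largest blob
    (x, y), radius = cv2.minEnclosingCircle(contour)
    return (int(x), int(y)), int(radius), contour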
Listing 2 shows the imported libraries, global constants and variables, and the main entry function for opencv_node.py. With line 3, we import the CompressedImage message type from the sensor_msgs package, which will allow us to publish a video stream as an ROS topic message. Line 4 imports the Point message type, which will allow us to publish the (X, Y, Z) coordinates of a point in 3D space (though the Z coordinate will not be used), and line 5 imports the UInt16 data type to publish unsigned integer values as topic messages. Line 6 imports the OpenCV library, and line 7 imports the CvBridge class, which will allow us to convert OpenCV images to ROS images publishable as ROS topic messages.
LISTING 2 – Code for opencv_node.py
#!/usr/bin/env python
import rospy # Python library for ROS
from sensor_msgs.msg import CompressedImage
from geometry_msgs.msg import Point
from std_msgs.msg import UInt16
import cv2
from cv_bridge import CvBridge

# Global constants and variables
NUM_FILT_POINTS = 5
DESIRED_IMAGE_HEIGHT = 240
filt_buffer = {'width':[], 'height':[]}
params = {'image_height':None, 'image_width':None, 'resized_height':None, 'resized_width':None,
          'x_ax_pos':None, 'y_ax_pos':None, 'scaling_factor':None}

# Main entry function
def detect_target():
    detect_image_pub = rospy.Publisher('detect_image/compressed', CompressedImage, queue_size=10)
    target_coord_pub = rospy.Publisher('/target_coord', Point, queue_size=10)
    target_radius_pub = rospy.Publisher('/target_radius', UInt16, queue_size=10)
    image_width_pub = rospy.Publisher('/image_width', UInt16, queue_size=10)

    rospy.init_node('detection_node', anonymous=True)
    rate = rospy.Rate(30) # In Hz

    vid_cam = cv2.VideoCapture(0) # Access the camera with index '0'

    if vid_cam.isOpened() is False:
        print('[ERROR] Couldnt open the camera.')
        return

    get_image_params(vid_cam)

    bridge = CvBridge()

    while not rospy.is_shutdown():
        tgt_cam_coord, frame, contour, radius = get_target_coordinates(vid_cam)

        if tgt_cam_coord['width'] is not None and tgt_cam_coord['height'] is not None:
            tgt_filt_cam_coord = moving_average_filter(tgt_cam_coord)
        else:
            tgt_cam_coord = {'width':params['y_ax_pos'], 'height':params['x_ax_pos']}
            tgt_filt_cam_coord = {'width':params['y_ax_pos'], 'height':params['x_ax_pos']}

        tgt_cart_coord = {'x':(tgt_filt_cam_coord['width'] - params['y_ax_pos']),
                          'y':(params['x_ax_pos'] - tgt_filt_cam_coord['height'])}

        frame = draw_objects(tgt_cam_coord, tgt_filt_cam_coord, frame, contour)

        detect_image_pub.publish(bridge.cv2_to_compressed_imgmsg(frame))

        tgt_coord_msg = Point(tgt_cart_coord['x'], tgt_cart_coord['y'], 0)
        target_coord_pub.publish(tgt_coord_msg)

        target_radius_pub.publish(radius)

        image_width_pub.publish(params['resized_width'])

        cv2.imshow("Detect and Track", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):
            break

        rate.sleep()
Lines 10-14 define some global constants and variables for use in the object detection and coordinate data filtering process. The function detect_target() is the node’s main entry function. In line 18 we instantiate an ROS publisher to publish the detect_image/compressed topic, which carries the video frames from the object detection process. In line 19 we create a publisher for the /target_coord topic, which will carry the (X, Y, Z) coordinates of the detected target (Z will be ignored). In line 20 we create a publisher for the detected target’s enclosing circle radius, and with line 21 we create a publisher for the image frame’s width (both of type UInt16).
HIGH EXECUTION RATE
Line 23 initializes the node with the name detection_node, and line 24 sets its execution rate to 30Hz. Because of the latency in the detection process, the node will execute at a lower frequency, but we are aiming at the highest execution rate possible, assuming a video feed of 30 frames per second. Line 26 creates a VideoCapture object to access the default webcam connected to the Raspberry Pi (which generally will have an index of 0). Lines 28-30 check if the camera has been opened successfully. If not, the node terminates execution.
Line 32 calls the get_image_params(vid_cam) function to compute some useful parameters derived from the image frame’s dimensions. The image frame is reduced to a height of DESIRED_IMAGE_HEIGHT = 240 pixels (see line 11) to avoid overloading the Raspberry Pi’s processor (see the full listing of opencv_node.py for all auxiliary function definitions). These parameters are stored in the params dictionary defined in lines 13-14. For instance, x_ax_pos stores the position, in pixels in the camera image coordinate system, of the X axis of a Cartesian coordinate system whose origin is at the center of the image frame (Figure 4). With line 34 we instantiate a CvBridge object to convert the OpenCV images to ROS images.
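The actual get_image_params() implementation is in the full opencv_node.py source. As a rough sketch of what it computes (a simplified approximation, not the exact function), it reads one frame to get its size and then fills the params dictionary roughly like this:

# Rough sketch of what get_image_params() computes (approximation only;
# see the full opencv_node.py source for the real implementation).
def get_image_params_sketch(vid_cam):
    ok, frame = vid_cam.read()                  # grab one frame to measure it
    if not ok:
        return
    params['image_height'], params['image_width'] = frame.shape[:2]
    # Scale factor to shrink frames to DESIRED_IMAGE_HEIGHT and save CPU
    params['scaling_factor'] = float(DESIRED_IMAGE_HEIGHT) / params['image_height']
    params['resized_height'] = DESIRED_IMAGE_HEIGHT
    params['resized_width'] = int(params['image_width'] * params['scaling_factor'])
    # Cartesian axes cross at the center of the resized frame (Figure 4)
    params['x_ax_pos'] = params['resized_height'] // 2
    params['y_ax_pos'] = params['resized_width'] // 2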
Lines 36-66 define an infinite loop that terminates when the node is shut down. With line 37 we call the get_target_coordinates(vid_cam) function to get four values: the detected target’s coordinates in the camera image coordinate system; the image frame from the detection process; the target’s geometric contour; and the radius of its computed enclosing circle. If the obtained coordinates are valid, we pass them through the moving average filter to reduce the noise in the data (lines 39-40). Otherwise, we make the variables for the target coordinates in the camera image system point to the Cartesian origin (the center of the frame). Figure 4 shows the definition of the “camera image” coordinate system, with its origin at the top left corner of the image frame, and the Cartesian coordinate system, with its origin at the center of the image frame. With lines 46-47 we convert the coordinates from camera image to Cartesian using the equations in Figure 4.
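The moving average filter itself is simple: it keeps the last NUM_FILT_POINTS coordinate pairs in the filt_buffer dictionary and returns their average. A sketch along those lines (again, not necessarily identical to the function in the source code) looks like this:

# Sketch of a moving average filter over the last NUM_FILT_POINTS samples
def moving_average_filter_sketch(coord):
    filt_buffer['width'].append(coord['width'])
    filt_buffer['height'].append(coord['height'])
    # Keep only the most recent NUM_FILT_POINTS samples
    if len(filt_buffer['width']) > NUM_FILT_POINTS:
        filt_buffer['width'].pop(0)
        filt_buffer['height'].pop(0)
    n = len(filt_buffer['width'])
    return {'width': int(sum(filt_buffer['width']) / n),
            'height': int(sum(filt_buffer['height']) / n)}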
The draw_objects() function in line 49 draws some visualization objects on the image frame (see Figure 5): the orange Cartesian coordinate axes, the target’s contour in green, its non-filtered centroid in red and its filtered centroid in yellow. Finally, line 51 converts the image frame from an OpenCV image to an ROS message using the bridge object, and then uses the detect_image_pub publisher to publish it to the detect_image/compressed topic. Similarly, line 54 publishes the target’s coordinates to the /target_coord topic, line 56 publishes the target’s enclosing circle radius to the /target_radius topic and line 58 publishes the image frame width to the /image_width topic. Line 60 creates a desktop window to visualize the image frame (Figure 5), and lines 62-64 detect whether the “q” key has been pressed on the computer keyboard to stop the node’s execution.
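The drawing itself is plain OpenCV. A condensed sketch of the kind of calls draw_objects() makes is shown below (colors are in BGR order; the exact values and line widths in the real function may differ):

# Condensed sketch of the drawing done by draw_objects() (approximate colors)
def draw_objects_sketch(cam_coord, filt_coord, frame, contour):
    # Cartesian axes through the frame center, in orange
    cv2.line(frame, (0, params['x_ax_pos']),
             (params['resized_width'], params['x_ax_pos']), (0, 128, 255), 1)
    cv2.line(frame, (params['y_ax_pos'], 0),
             (params['y_ax_pos'], params['resized_height']), (0, 128, 255), 1)
    if contour is not None:
        cv2.drawContours(frame, [contour], 0, (0, 255, 0), 2)  # contour in green
    cv2.circle(frame, (int(cam_coord['width']), int(cam_coord['height'])),
               3, (0, 0, 255), -1)        # non-filtered centroid in red
    cv2.circle(frame, (int(filt_coord['width']), int(filt_coord['height'])),
               3, (0, 255, 255), -1)      # filtered centroid in yellow
    return frame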
In Part 3, I will explain in detail how to test the system. However, if you don’t want to wait, follow Listing 3 to run the nodes. Figure 6 shows what you should see in the desktop window.
LISTING 3 – Commands to test the robotic car’s nodes
# Optional: Sometimes node files copied from elsewhere
# give errors because they are not marked as executable.
# Do the following to make sure your node files are executable:
# Go to your robotic car's package scripts folder
cd ~/catkin_ws/src/robotic_car/scripts
# Make the scripts executable by running:
chmod +x command_node.py
chmod +x drive_node.py
chmod +x opencv_node.py
# ----------------------------------------------------------
# Test the nodes by executing these commands:
# In the first terminal window run the ROS master:
roscore
# In terminal window 2 run:
rosrun robotic_car command_node.py
# In terminal window 3 run:
rosrun robotic_car drive_node.py
# Now go back to terminal window 2, lift the robotic car in the air
# so the wheels can rotate freely, and command the car to go forwards
# (type 'i' on the keyboard) to check that both wheels turn forwards.
# If not, shut down the car and change the polarity of the motor(s)
# turning backwards to correct the rotation direction.
# In terminal window 4 run:
rosrun robotic_car opencv_node.py

CONCLUSION
Color segmentation is not the best approach to object detection, because it is affected by changes in lighting and by the camera’s white balance settings. However, it is one of the simplest object detection techniques, and it serves well to illustrate how to integrate computer vision object tracking with ROS. The latency introduced by the computer vision task on the Raspberry Pi can degrade the tracking and following performance of the robotic car if the image size is too big. For instance, with an image height of 240 pixels, I got a latency of around 94ms in the computer vision node. After increasing the height to 480 pixels, the latency went up to around 250ms. I also kept the robotic car’s control strategy deliberately simple, to keep the focus on the inner workings of ROS.
In Part 3 of this article series, I will discuss how to command the robotic car remotely from the Ubuntu PC we set up in Part 1. I will also explain how to visualize the camera video feed remotely, how to monitor message data published to topics, and how to change the color range for the targets we want to detect.
RESOURCES
References
[1] “Intro to Robot Operating System (ROS), Part 1: A Framework for Developing Robotics Software” (Circuit Cellar 368, March, 2021)
[2] “Write an Object Tracking Drone Application” (Circuit Cellar 362, September, 2020)
Raspberry GPIO
https://learn.sparkfun.com/tutorials/raspberry-gpio/python-rpigpio-api
H-Bridges – the Basics
https://www.modularcircuits.com/blog/articles/h-bridge-secrets/h-bridges-the-basics
Introduction to Pulse Width Modulation
https://www.embedded.com/introduction-to-pulse-width-modulation
Raspberry Pi L298N Interface Tutorial
https://www.electronicshub.org/raspberry-pi-l298n-interface-tutorial-control-dc-motor-l298n-raspberry-pi
Differential drive robotic car chassis
https://www.amazon.com/Smart-Chassis-Motors-Encoder-Battery/dp/B01LXY7CM3
195:1 Metal Gearmotor 20Dx44L mm 6V CB
https://www.pololu.com/product/3707
Battery Holder – 2×18650 (wire leads)
https://www.sparkfun.com/products/12900
L298N motor driver module
https://www.amazon.com/Controller-Stepper-H-Bridge-Compatible-Mega2560/dp/B08DRGCKRL
10,000 mAh power bank
https://www.amazon.com/RAVPower-10000mAh-Portable-Ultra-Slim-Nintendo/dp/B077CY4M8P
Raspberry Pi 4 Model B (4GB)
https://www.sparkfun.com/products/15447
Logitech C270 webcam
https://www.amazon.com/dp/B01BGBJ8Y0
Lithium Ion Battery – 18650 Cell (2,600mAh)
https://www.sparkfun.com/products/12895
Tenergy T4s Intelligent Universal Charger – 4-Bay
https://www.sparkfun.com/products/14457
Logitech | www.logitech.com
ROS Wiki | https://wiki.ros.org
Sparkfun Electronics | www.sparkfun.com
STMicroelectronics | www.st.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • APRIL 2021 #369
Raul Alvarez Torrico has a B.E. in electronics engineering and is the founder of TecBolivia, a company offering services in physical computing and educational robotics in Bolivia. In his spare time, he likes to experiment with wireless sensor networks, robotics and artificial intelligence. He also publishes articles and video tutorials about embedded systems and programming in his native language (Spanish), at his company’s web site www.TecBolivia.com. You may contact him at raul@tecbolivia.com