
Smartphone-Controlled Omnidirectional Ball-Tracking Robot

This omnidirectional robot was built to detect and track colored objects in real time while communicating with a mobile application. The mobile application enables users to select the color of the object to track.

  • How to build an omnidirectional robot that detects and tracks colored objects in real time

  • How to modify the motor shield

  • How to develop the iOS app

  • How to implement the vision system

  • Arduino Uno microcontroller

  • Raspberry Pi 2 Model B SBC

  • Webcam

  • iPhone 5s smartphone

In recent years, manufacturers of microcontrollers have made significant improvements in their computational power, enabling the development of small mobile platforms with embedded high-level computing. Moreover, the development of the Open Source Computer Vision (OpenCV) library has created an opportunity for engineers to integrate vision-based techniques on small mobile robot platforms. Finally, smartphone applications for controlling robots have also grown in popularity. Specifically, the sensors and interactive features of devices like smartphones can be readily exploited to create natural and intuitive user interfaces with novel interaction modalities (e.g., touch-based, vision-based, sound-based, etc.) for remotely interacting with mobile robots.

This article describes the design and implementation of a ball-tracking omnidirectional robot that is wirelessly commanded by a user, through a mobile application, to track a user-selected colored ball (see Photo 1). A web camera functions as a vision sensor, a Raspberry Pi performs image-processing routines to detect the object, and an Arduino microcontroller handles motor control to track the detected object. The mobile application enables users to select the color of the object to track by sending wireless commands to the robot once the user has made a selection on the touchscreen of a smart device.

Photo 1 The omnidirectional ball-tracking robotic system.

SYSTEM OVERVIEW
The major components that comprise our robotic system include an Arduino Uno microcontroller, a Raspberry Pi 2 Model B single-board computer, a webcam, and an iPhone 5s smartphone. The webcam streams images to the Raspberry Pi through a USB connection and performs all sensing for the mobile robot. The webcam used for this project is a Logitech C270, which can stream images at resolutions as high as 1280 × 720 pixels. The Arduino Uno is equipped with an Atmel ATmega328P microcontroller, and the Raspberry Pi has an embedded quad-core ARM Cortex-A7-based Broadcom CPU. Three omnidirectional wheels with DC motors are fitted at the base of the mobile robot, and one servo motor is attached to the webcam to allow it to tilt in search of colored objects. Each of the motors is controlled via a DK Electronics motor shield attached to the Arduino.

Three different colored balls are used for this project: a red, a yellow, and a blue ball. With the mobile application, a user selects a colored ball for the mobile robot to detect and track. The Raspberry Pi receives messages from the smartphone and modifies the image-processing routine on the fly. The center of the object is determined in image coordinates, and based on its detected location, control commands are computed such that the object is repositioned at the center of the image. Finally, control commands are transmitted from the Raspberry Pi to the Arduino, and the motors are driven as instructed (see Figure 1).

Figure 1 Overview of logic flow
Figure 2 The ball-tracking robotic system’s circuitry

HARDWARE DESIGN
The robot’s ability to move the camera to track the specified ball is important to the functionality of the project. Three omnidirectional wheels are attached to three 6-V DC motors. They are mounted to the base of the robot, allowing it to rotate about its vertical axis. Furthermore, a servo motor connects a webcam to the robot with the servo’s rotational axis in the horizontal plane. The combined motions of the robot and servo motor allow the camera to yaw left and right as well as pitch up and down with the movement of the ball.

In order for the Arduino Uno and Raspberry Pi to communicate with each other, some modifications are made to the motor shield. The shield uses Arduino digital pins 3, 5, and 11 to provide PWM speed control for the three DC motors, while digital pins 4, 7, 8, and 12 set the motor directions through the 74HC595 serial-to-parallel latch. Digital pin 9 controls the servo motor attached to the webcam. This leaves digital pins 2 and 13 unused by the motor shield and available for communicating with the Raspberry Pi over a software serial connection. Access to these pins on the Arduino is gained by removing the corresponding header pins on the motor shield (see Photo 2).

Photo 2 Motor shield indicating the removal of pins 2 and 13

The Arduino's digital pins operate at 5 V, whereas the Raspberry Pi's operate at 3.3 V, so wiring the two boards together directly could damage the Raspberry Pi. Therefore, a BSS138 logic-level converter is used to connect the Arduino's digital pins D2 (RX) and D13 (TX) to the Raspberry Pi's GPIO 14 (TX) and GPIO 15 (RX), respectively (see Figure 2).
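
On the Raspberry Pi side, this link appears as the board's hardware UART. A minimal sketch of sending a command over it is shown below, assuming a Python implementation with the pyserial library; the port name, baud rate, and one-character command format are illustrative choices for the example rather than the robot's exact protocol.

```python
# Minimal sketch of the Raspberry Pi side of the UART link to the Arduino.
# Assumptions for illustration: Python with pyserial, the default /dev/ttyAMA0
# port for GPIO 14/15 on the Raspberry Pi 2, 9600 baud, and one-character commands.
import serial

arduino = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=0.1)

def send_to_arduino(command):
    """Send a one-character command, e.g. 'L'/'R' for base rotation or 'U'/'D' for camera tilt."""
    arduino.write(command.encode("ascii"))

send_to_arduino("L")  # example: rotate the base to the left
```

On the Arduino side, a matching SoftwareSerial instance on pins 2 and 13 reads these bytes and drives the motors accordingly.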

Each of the components on the mobile platform is provided power independently. A 9-V battery pack powers the Arduino Uno, and the onboard voltage regulator drops the voltage to the working range of 5 V. The DC motors are powered by a battery pack containing four 1.5-V alkaline AA batteries, which collectively generate 6 V. This arrangement is needed because the Arduino's power supply is insufficient to operate the three 6-V DC motors. Finally, a 5-V wireless rechargeable battery stick powers the Raspberry Pi.

The entire system of components is constructed to rest on top of a small plexiglass platform, which serves as the top surface of the mobile robot. This design provides the robot with full mobility and wireless communication with the user.

iOS APPLICATION DESIGN
Communication between the smart device and the mobile robot is achieved through a TCP/IP server hosted by the Raspberry Pi. The server uses a non-blocking socket, which allows the Raspberry Pi to run continuously in a loop until data is received from the client running in the mobile application. Messages are transmitted from the iOS device to the Raspberry Pi over a local Wi-Fi network. The user interface includes a button for establishing the connection with the network and displays the connection status on the screen. Once connected, the user chooses which colored ball the robot should track by tapping the picture of the ball on the touchscreen (see Photo 3). Each option displayed in the mobile application sends a predefined flag to the Raspberry Pi, which updates the image-processing routine on the fly with the appropriate color range to search for.
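
A minimal sketch of such a non-blocking server, assuming a Python implementation on the Raspberry Pi, is shown below; the port number and the single-byte color flags are placeholders for the example, not the actual messages defined in the iOS app.

```python
# Sketch of a non-blocking TCP server on the Raspberry Pi that receives
# color-selection flags from the mobile application. The port number and the
# one-byte flags ('R', 'Y', 'B') are assumptions made for illustration.
import socket

FLAGS = {b"R": "red", b"Y": "yellow", b"B": "blue"}

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 5005))          # listen on the local Wi-Fi network
server.listen(1)

conn, _ = server.accept()        # wait once for the iOS app to connect
conn.setblocking(False)          # afterwards, never stall the main loop

def check_for_color_command():
    """Return a newly selected color name, or None if the app has not sent anything."""
    try:
        data = conn.recv(1)
        return FLAGS.get(data)
    except BlockingIOError:      # no data waiting; carry on with the tracking loop
        return None
```

Calling check_for_color_command() once per iteration of the tracking loop lets the Raspberry Pi keep processing frames while remaining responsive to the app.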

COMPUTER VISION
To detect the location of the center of the different colored balls in the image, a color segmentation approach is used. This approach involves thresholding the image in the hue-saturation-value (HSV) space and removing small amounts of noise through morphological open and close operations. The HSV range used to threshold the image is specified for each of the three colors and is updated as the user selects a new color in the mobile application (see Photo 4). The rate at which vision-based measurements are obtained has a significant effect on the response of the system; therefore, images are captured at 30 frames per second. However, capturing images at this frame rate limits the maximum image resolution to 640 × 480 pixels because of the available computation time.

Photo 3 User interface for the mobile application
Photo 4 Each colored ball detected during the image-processing routine (left-red, middle-yellow, and right-blue)
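
For readers who want to experiment with this step, the sketch below shows one way it might be written with OpenCV, assuming a Python implementation; the HSV ranges are rough illustrative values rather than the tuned ranges used on the robot.

```python
# Sketch of HSV color segmentation with morphological open/close noise removal.
# Assumptions for illustration: OpenCV in Python, and rough HSV ranges per color.
import cv2
import numpy as np

HSV_RANGES = {
    "red":    (np.array([0, 120, 70]),   np.array([10, 255, 255])),
    "yellow": (np.array([20, 100, 100]), np.array([35, 255, 255])),
    "blue":   (np.array([100, 120, 70]), np.array([130, 255, 255])),
}

def find_ball_center(frame_bgr, color):
    """Return the (x, y) image coordinates of the selected ball, or None if it is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = HSV_RANGES[color]
    mask = cv2.inRange(hsv, lower, upper)

    # Open removes small speckles; close fills small holes in the detected blob
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # The blob's centroid comes from the image moments of the mask
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```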

A bang-bang controller is designed to drive the motors such that the center of the detected object is regulated to the center of the image. This type of control is also referred to as on-off control, in which the feedback controller switches between two states. Two independent control commands are determined from the vision-based measurements: one vertical and one horizontal. A dead zone located at the center of the image indicates the acceptable location of the object in the image (see Figure 3). The dead zone lies between image coordinates (200, 120) and (400, 360). If the object is detected outside of the allowable horizontal range, the three DC motors at the base of the mobile robot are commanded to rotate in the appropriate direction at full power for 200 ms. Alternatively, if the object is located outside of the acceptable vertical range, the servo motor attached to the camera is rotated one degree in the necessary direction.

Figure 3 Schematic of a dead zone in the image plane
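
Under the same assumptions, the dead-zone logic can be sketched as follows; the returned command characters are placeholders for whatever messages the Raspberry Pi actually sends to the Arduino.

```python
# Sketch of the bang-bang (on-off) decision logic around the dead zone between
# image coordinates (200, 120) and (400, 360). The command characters returned
# here are placeholders, not the robot's actual message format.
DEAD_ZONE_X = (200, 400)   # acceptable horizontal range of the ball's center
DEAD_ZONE_Y = (120, 360)   # acceptable vertical range of the ball's center

def compute_commands(center):
    """Map a detected (x, y) ball center to base-rotation and camera-tilt commands."""
    x, y = center

    # Horizontal error: rotate the base at full power for 200 ms per correction
    if x < DEAD_ZONE_X[0]:
        base_cmd = "L"       # ball is to the left of the dead zone
    elif x > DEAD_ZONE_X[1]:
        base_cmd = "R"       # ball is to the right of the dead zone
    else:
        base_cmd = None      # inside the dead zone: leave the base alone

    # Vertical error: tilt the camera servo one degree per correction
    # (image y grows downward, so a small y means the ball is above the dead zone)
    if y < DEAD_ZONE_Y[0]:
        tilt_cmd = "U"
    elif y > DEAD_ZONE_Y[1]:
        tilt_cmd = "D"
    else:
        tilt_cmd = None

    return base_cmd, tilt_cmd
```

The dead zone keeps the controller from chattering when the ball is already close to the center of the image.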

RESULTS
The ball-tracking robotic system demonstrated desirable performance in a controlled environment. However, the constraints imposed by the color segmentation approach require ideal lighting conditions and minimal background colors that could interfere with the image-processing routine. The bang-bang controller successfully tracked each colored ball with slightly oscillatory behavior, caused by the time delays introduced by image processing. A video of the project is available online: http://bit.ly/1lXsQQe.

FUTURE WORK
In future work, we would like to stream the video captured by the webcam to the smartphone in real time and allow the user to interact with the live video using gestures on the touchscreen. This would include allowing the user to select which colored object the mobile robot should track by simply tapping the image of the object on the screen, without the need for predefined HSV values, since these values can be extracted directly from the image data. Furthermore, users could segment arbitrary objects to track from the video, which the robot could learn to recognize using more advanced learning algorithms rather than by color alone. Additionally, we would like to make use of the robot's additional degrees of freedom by including depth tracking.

Authors’ Note: This work is supported in part by the National Science Foundation grants DRK-12 DRL: 1417769, GK-12 Fellows DGE: 0741714, and RET Site EEC-1132482, and NY Space Grant Consortium grant 48240-7887. In addition, it is supported in part by the Central Brooklyn STEM Initiative (CBSI), which is funded by the Black Male Donor Collaborative, Brooklyn Community Foundation, J.P. Morgan Chase Foundation, Motorola Innovation Generation Grant, NY Space Grant Consortium, Xerox Foundation, and White Cedar Fund. Finally, this work was performed as part of a term project for the Advanced Mechatronics course taught by Professor Vikram Kapila.


SOURCES
Arduino Uno
Arduino | www.arduino.cc
Raspberry Pi
Raspberry Pi Foundation | www.raspberrypi.org

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • MARCH 2016 #308

Anthony Brill received his BS degree in Mechanical Engineering from the University of Nevada, Reno, in 2014. He is currently a MS student at the NYU Tandon School of Engineering, studying Mechanical Engineering. He is serving as a GK-12 Fellow, promoting K-12 STEM education. He conducts research in the Mechatronics and Controls Laboratory, where his interests include using smart mobile devices in closed loop feedback control.

Matthew Moorhead received his BS degree in Mechanical Engineering from the University of Nevada, Reno, in 2014. He is currently pursuing an MS degree in Mechanical Engineering at NYU Tandon School of Engineering, Brooklyn, NY, where he is serving as a GK-12 Fellow. Matthew also conducts research in the Mechatronics and Controls Laboratory with an interest in robotics and controls.

Jonghyun Bae is currently an MS candidate in Electrical Engineering at NYU Tandon School of Engineering with an emphasis in embedded systems and control.


by Anthony Brill, Matthew Moorhead, and Jonghyun Bae