The NanoBot Project
There are many situations where an autonomous vehicle or device is needed for mapping unknown environments without any human assistance. In this article, Dhairya details his project using Nvidia’s Jetson Nano to build a four-wheel robot that can be trained to autonomously survey a known environment or can be controlled via a laptop or a phone.
In this project, we will build a four-wheel vehicular robot prototype around the Nvidia Jetson Nano development board. There are several alternatives to this board currently available on the market. So, why choose this particular board? The two most popular alternatives are the Raspberry Pi 4 SBC (Pi) and the Google Coral Dev Board.
Each has drawbacks that make the Jetson Nano the clear choice for this project. The Pi is a lot cheaper and has a very active community, but for machine learning and deep learning applications, the Pi’s CPU and integrated GPU become a processing-power bottleneck.
The Google Coral board is a costly alternative (around $200). Moreover, its community is not as developed as the Jetson Nano’s, and the Nano is built specifically for machine learning and deep learning projects like the one we are building here.
These are some of the obvious advantages of using Jetson Nano for this project:
- Pi-compatible GPIO, so it’s very easy to connect external peripherals
- Dedicated GPU unit
- A massive 4GB of RAM
- Raspberry Pi Camera compatibility
- Runs on a full-blown Linux operating system (Ubuntu)
- CUDA support too, which is the cherry on top
BUILDING THE NANOBOT
Now, let’s start building our project. Figure 1 shows the components needed for this project, and Table 1 provides a list. We will use the Donkey Car kit and make some customizations in it to add the Lidar sensor. First, we will unbox our RC Car. Then remove the top cover, which is held by steel pins. Please preserve the pins because later they are used to fix our custom base plate. Now, after the top cover of the car is off, first cut off the plastic holders of the wires that go into the ESC (electronic speed controller) of the car so that we can get them out and connect them to the servo shield.
Next, take a wooden plate and get to laser cutting! Please note that you can even 3D print this plate. The final wooden plate should look something like Figure 2. Next, affix the wooden plate to the car with the help of the steel pins you took out earlier. Next, you can 3D print a camera holder for the car. Due to a shortage of time, I didn’t do that myself. Now that this is done, simply connect all the components on the car. Attach a power bank to the car using double-sided tape, and then make all the necessary connections. The robot should now look something similar to what is shown in Figure 3. This wraps up the hardware setup.
Now comes the interesting but complex part: the software side of the project! First things first: when we get the Nano out of the box, we need to install an operating system. To do this, follow the tutorial at . The tutorial will help you with the following:
- Set up an SD Card for your Jetson Nano (includes OS installation onto the card)
- First boot and setup for the Nano
After you follow the tutorial step by step, you will have a Jetson Nano with the latest software running on Ubuntu Linux OS. Next up is the installation of ROS (robot operating system) on our Jetson Nano.
There are a number of tutorials on ROS installation in Linux, but most of them are outdated, so I had to spend quite a lot of time figuring this one out. The problem is that most tutorials cover the installation of ROS Kinetic, which is outdated and cannot be installed directly using pip install.
So, instead, we will go with the ROS Melodic Full Desktop version. Go to the link at reference  to find a very well-documented tutorial on how to install ROS Melodic for Ubuntu systems.
If the installation is successful, you will see that the catkin_ws folder has three subfolders, and you can compile successfully using the catkin_make command there. Moreover, you can run the roscore command in the terminal to start the ROS service. This completes the ROS installation. As our next step, we will install all the other components required to control our robot wirelessly. We will use the software developed for the Donkey Car project.
SOFTWARE ON RC CAR
Our next step is to install all the dependencies required to run the car. For this, we will use a popular open-source project called the Donkey Car project, which we will port to the Jetson Nano. So, just follow the next three steps to set it up.
Step 1: Install dependencies: Access your Nano through a display, or simply SSH into your vehicle. Use the terminal on Ubuntu or Mac. On Windows, you’ll need PuTTY or a command prompt with SSH installed. Run the following commands, which update the Ubuntu packages to the latest versions and install the other packages necessary for this project:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential python3 python3-dev python3-pip libhdf5-serial-dev hdf5-tools nano ntp
Optionally, you can install the RPi.GPIO clone for the Jetson Nano from the link at . This is not required for a default setup, but it can be useful if you are using an LED or other GPIO-driven devices.
Step 2: Set up a virtual environment using the virtualenv package: Use these commands to create a new virtual environment for our project and then activate it:
pip3 install virtualenv
python3 -m virtualenv -p python3 env --system-site-packages
echo "source env/bin/activate" >> ~/.bashrc
source ~/.bashrc
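If you want to confirm that the virtual environment is actually active before installing packages into it, a quick check like the following can help. This is a small sketch of my own, not part of the Donkey Car setup:

```python
import sys

def in_virtualenv():
    """Return True when the running interpreter belongs to a virtual environment.

    Old-style virtualenv sets sys.real_prefix; venv and newer virtualenv
    point sys.prefix at the environment while base_prefix keeps the
    original interpreter location.
    """
    if hasattr(sys, "real_prefix"):
        return True
    return getattr(sys, "base_prefix", sys.prefix) != sys.prefix

if __name__ == "__main__":
    print("virtualenv active:", in_virtualenv())
```

Running this inside the activated env should print True; from a fresh shell it should print False.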
Step 3: Install OpenCV: To install OpenCV on the Jetson Nano, you need to build it from source. Building OpenCV from source is going to take some time, so buckle up. If you get stuck, the great resource at  will help you compile OpenCV. Note: In some cases, Python OpenCV may already be installed in your disk image. If the file exists, you can optionally copy it to your environment rather than build it from source. Nvidia has said it will drop support for this, so in the long term we will probably need to build it. If this works…
mkdir ~/mycar
cp /usr/lib/python3.6/dist-packages/cv2.cpython-36m-aarch64-linux-gnu.so ~/mycar/
cd ~/mycar
python -c "import cv2"
…then you have a working version and can skip the build portion of this guide. However, following the swap file portion of this guide has made performance more predictable and prevents memory thrashing. The first step in building OpenCV is to define swap space on the Jetson Nano. The Jetson Nano has 4GB of RAM, which is not sufficient to build OpenCV from source. Therefore, we need to define additional swap space on the Nano to prevent memory thrashing. That can be done by simply executing the commands in Listing 1.
Now you should have enough swap space to build OpenCV. Let’s set up the Jetson Nano with the prerequisites to build OpenCV. Just execute the commands in Listing 2 to update the system to its latest version and then install all the prerequisites. Now you should have all the prerequisites you need. So, let’s go ahead and download the source code for OpenCV into a local directory. Just type in and execute the commands in Listing 3 in the given order.
Listing 1
Execute this code to define additional swap space on the Nano to prevent memory thrashing.

# Allocates 4GB of additional swap space at /var/swapfile
sudo fallocate -l 4G /var/swapfile
# Permissions
sudo chmod 600 /var/swapfile
# Make swap space
sudo mkswap /var/swapfile
# Turn on swap
sudo swapon /var/swapfile
# Automount swap space on reboot
sudo bash -c 'echo "/var/swapfile swap swap defaults 0 0" >> /etc/fstab'
# Reboot
sudo reboot
Listing 2
Execute these commands to update the system to its latest version and then install all the prerequisites.

# Update
sudo apt-get update
sudo apt-get upgrade
# Prerequisites
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
Listing 3
Use these commands to download the source code for OpenCV into a local directory.

# Create a directory for opencv
mkdir -p projects/cv2
cd projects/cv2
# Download sources
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip
# Unzip
unzip opencv.zip
unzip opencv_contrib.zip
# Rename
mv opencv-4.1.0 opencv
mv opencv_contrib-4.1.0 opencv_contrib
Next, let’s get our virtual environment (virtualenv) ready for OpenCV. For this, just install the packages required to run OpenCV, starting with Numpy. Please install only the version specified in the command below, because it has been tested and verified to work. Here are the commands:
# Install Numpy pip install numpy==1.16.4
Now let’s set up CMake correctly so it generates the correct OpenCV bindings for our virtual environment. This is shown in Listing 4. The cmake command should show a summary of the configuration. Make sure that the Interpreter is set to the Python executable associated with your virtualenv. Note: There are several paths in the CMake setup; make sure they match where you downloaded and saved the OpenCV source.
Listing 4
Set up CMake correctly so it generates the correct OpenCV bindings for our virtual environment. OPENCV_EXTRA_MODULES_PATH must point to the opencv_contrib modules you downloaded, and PYTHON_EXECUTABLE must be your virtual environment's Python executable (the result of echo $(which python)). Comments cannot appear inside the backslash-continued command, so they are listed here instead.

# Create a build directory
cd projects/cv2/opencv
mkdir build
cd build
# Setup CMake
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/projects/cv2/opencv_contrib/modules \
-D PYTHON_EXECUTABLE=~/env/bin/python \
-D BUILD_EXAMPLES=ON ../opencv
To compile the code from the build folder, execute the following command (using all four of the Nano's cores):

make -j4
This will take a while. Go grab a coffee or watch a movie. Once the compilation is complete, you are almost done. Only a few more steps to go. Type in the following commands to install the compiled library:

sudo make install
sudo ldconfig
The final step is to correctly link the built OpenCV native library to your virtualenv.
The native library should now be installed in a location that looks like:

/usr/local/lib/python3.6/site-packages/cv2/python-3.6/
Type in the final commands shown in Listing 5 to complete the installation procedure. To test the OpenCV installation, run python and type in the following commands:
import cv2
# Should print 4.1.0
print(cv2.__version__)
Now that we have all the prerequisite software installed, we will proceed to install the Donkey Car Python source code, which will allow us to control our bot through a web interface, mobile phone, or joystick.
First, change to a directory you would like to use as the head of your projects:
Next, get the latest Donkey Car source code from GitHub  and execute the commands in Listing 6.
Listing 5
The final commands shown here complete the installation procedure. Also shown is what the output should look like when you check it.

Type in these final commands to complete the installation procedure:

# Go to the folder where OpenCV's native library is built
cd /usr/local/lib/python3.6/site-packages/cv2/python-3.6
# Rename
mv cv2.cpython-36m-xxx-linux-gnu.so cv2.so
# Go to your virtual environment's site-packages folder
cd ~/env/lib/python3.6/site-packages/
# Symlink the native library
ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so cv2.so

Congratulations! You are now done compiling OpenCV from source. A quick check to see if you did everything correctly is:

ls -al

You should see something that looks like:

total 48
drwxr-xr-x 10 user user 4096 Jun 16 13:03 .
drwxr-xr-x  5 user user 4096 Jun 16 07:46 ..
lrwxrwxrwx  1 user user   60 Jun 16 13:03 cv2.so -> /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.so
-rw-r--r--  1 user user  126 Jun 16 07:46 easy_install.py
drwxr-xr-x  5 user user 4096 Jun 16 07:47 pip
drwxr-xr-x  2 user user 4096 Jun 16 07:47 pip-19.1.1.dist-info
drwxr-xr-x  5 user user 4096 Jun 16 07:46 pkg_resources
drwxr-xr-x  2 user user 4096 Jun 16 07:46 __pycache__
drwxr-xr-x  6 user user 4096 Jun 16 07:46 setuptools
drwxr-xr-x  2 user user 4096 Jun 16 07:46 setuptools-41.0.1.dist-info
drwxr-xr-x  4 user user 4096 Jun 16 07:47 wheel
drwxr-xr-x  2 user user 4096 Jun 16 07:47 wheel-0.33.4.dist-info
Listing 6
Get the latest Donkey Car source code from GitHub and then execute these commands.

git clone https://github.com/autorope/donkeycar
cd donkeycar
git checkout master
pip install -e .[nano]
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.3
The last command will install tensorflow-gpu on our Jetson Nano.
That completes the setup of the car-based platform. Now, the only thing left is to create a new application to run our NanoBot. Just follow the tutorial at  and you’re good to go. Next, add the components that you’ll be using in ROS. That includes the RPLIDAR A1, which will be used for simultaneous localization and mapping (SLAM).
SETTING UP RPLIDAR A1
With everything else done, the only setup that is left is for our Lidar sensor. First, let’s learn a little more about the device we are using: The RPLIDAR is a low-cost 2D Lidar solution developed by SLAMTEC. It can scan a 360-degree environment within a 6m radius. The output of the RPLIDAR is very suitable for building maps, doing SLAM, or building 3D models (Figure 4).
To install the RPLIDAR ROS package, clone the project from GitHub at  into your catkin workspace’s src folder. Once this is done, just run catkin_make to build rplidarNode and rplidarNodeClient, which are required.
This completes the installation part. Next, we will see how to test the RPLIDAR ROS package. First, check the permissions of the RPLIDAR’s serial port:
ls -l /dev |grep ttyUSB
Then add write permission (for example, for /dev/ttyUSB0):
sudo chmod 666 /dev/ttyUSB0
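As a quick sanity check before launching the node, a few lines of Python can verify that the device node exists and is readable and writable. This is a helper of my own, with /dev/ttyUSB0 as an assumed default path:

```python
import os

def port_ready(path):
    """Return True if the serial device node exists with read/write access."""
    return os.path.exists(path) and os.access(path, os.R_OK | os.W_OK)

if __name__ == "__main__":
    # /dev/ttyUSB0 is the typical node for the RPLIDAR's USB adapter,
    # but check ls -l /dev | grep ttyUSB for your actual port
    print(port_ready("/dev/ttyUSB0"))
```

If this prints False, repeat the chmod step above before launching the ROS node.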
There are two ways to run the RPLIDAR ROS package. The first is to run rplidarNode and view the result in RVIZ:
roslaunch rplidar_ros view_rplidar.launch
You should see the RPLIDAR’s scan result in the RVIZ application. The second is to run rplidarNode and view the result using the test application:

roslaunch rplidar_ros rplidar.launch
rosrun rplidar_ros rplidarNodeClient
You should see RPLIDAR’s scan result in the console.
For a more permanent solution, you should change the USB device port mode to grant read and write permission, and remap the port to a fixed name. Install the udev rules as follows (from the rplidar_ros package folder):

./scripts/create_udev_rules.sh
Check the remap using the following command:
ls -l /dev | grep ttyUSB
Once you have remapped the USB port, you can change the serial_port value in the launch file accordingly:

<param name="serial_port" type="string" value="/dev/rplidar"/>
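For reference, a udev rule that performs this remap looks roughly like the following. The vendor/product IDs shown are those of the CP2102 USB-UART bridge commonly used on the RPLIDAR A1 adapter; this is an assumption on my part, so check yours with lsusb before copying it:

```
# /etc/udev/rules.d/rplidar.rules (sketch)
# Match the RPLIDAR's USB-serial adapter, grant read/write access,
# and create a stable /dev/rplidar symlink
KERNEL=="ttyUSB*", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE:="0666", SYMLINK+="rplidar"
```

After adding a rule, replug the sensor or run sudo udevadm control --reload-rules followed by sudo udevadm trigger.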
Next, attach the RPLIDAR sensor to your robot. Keep in mind the orientation of the Lidar sensor while installing it on your robot. Figure 5 shows what the axes look like. Now that this is done, try running the sample launch file view_rplidar and viewing the scan results of the topic /scan in RVIZ. It will look something like what is shown in Figure 6. To add SLAM to our application, there are several algorithms available, but I chose Hector SLAM for this project. So, let us move on to the next section.
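To get an intuition for what the /scan data contains: each Lidar reading is essentially an (angle, distance) pair, which mapping algorithms convert into Cartesian points in the sensor frame. The sketch below is my own plain-Python illustration of that conversion, not code from rplidar_ros:

```python
import math

def scan_to_points(readings, max_range=6.0):
    """Convert (angle_deg, distance_m) Lidar readings to (x, y) points.

    Readings beyond the sensor's range (6m for the RPLIDAR A1) and
    zero-distance 'no return' readings are dropped.
    """
    points = []
    for angle_deg, dist in readings:
        if 0.0 < dist <= max_range:
            a = math.radians(angle_deg)
            points.append((dist * math.cos(a), dist * math.sin(a)))
    return points

# A tiny fake scan: obstacles straight ahead (0 deg) and to the left (90 deg);
# the 7.5m reading is out of range and gets filtered out
demo = [(0.0, 1.0), (90.0, 2.0), (180.0, 7.5)]
print(scan_to_points(demo))
```

Hector SLAM performs this same projection internally (plus pose estimation) for every scan it folds into the map.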
HECTOR SLAM SETUP
The GitHub repo for Hector SLAM is at . We cannot use the native GitHub repo for Hector SLAM on the Nano as we did for the RPLIDAR; we need to make some changes to the Hector SLAM files to get it up and running. Listing 7 shows the commands you need to run and the changes to the Hector SLAM files you need to make. Once these changes are done, we can go ahead and run the following:
cd ..
catkin_make
Listing 7
To get Hector SLAM up and running, shown here are the commands you need to run and the changes to the Hector SLAM files you need to make.

First, here are the commands you need to run:

cd catkin_ws/src/
git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam

Now, after it has been cloned, we will need to make the following changes:

In catkin_ws/src/hector_slam/hector_mapping/launch/mapping_default.launch, replace the second-from-last line with:

<node pkg="tf" type="static_transform_publisher" name="base_to_laser_broadcaster" args="0 0 0 0 0 0 base_link laser 100" />

Replace the third line with:

<arg name="base_frame" default="base_link"/>

And the fourth line with:

<arg name="odom_frame" default="base_link"/>

In catkin_ws/src/hector_slam/hector_slam_launch/launch/tutorial.launch, replace the third line with:

<param name="/use_sim_time" value="false"/>
If everything goes as planned, this will be compiled successfully. This means we are ready to run Hector SLAM on our Nano. Follow these steps in order to do so:
- In your catkin workspace, run source devel/setup.bash
- Run chmod 666 /dev/ttyUSB0 (or on the serial path to your Lidar)
- Run roslaunch rplidar_ros rplidar.launch
- Run roslaunch hector_slam_launch tutorial.launch
- RVIZ should open up with SLAM data
The RVIZ SLAM data sample is shown in Figure 7. In addition, we can also leverage the capabilities of the Donkey Car code running on the Nano to connect and get a camera feed on the web-based controller. But before that, let’s examine some details about the cameras supported by the Jetson Nano.
The bad news is that no OmniVision (OV) sensor cameras are supported by the Nano, because there are no drivers for those sensors. So, we need a camera module based on the Sony IMX219 sensor or a good-quality webcam to run on the Nano. Please note that the frames-per-second (FPS) readings will vary for each type. The supported cameras are:
- PiCam v2.1
- Any USB webcam from a reputable company
- Any CSI-based camera
DONKEY CAR APPLICATION
Next, we will cover how to start the donkey car application on our Jetson Nano so that we can control our bot through a web interface. The controller will be available for local networks at the following address:
<The IP address of your Nano here>:8887
This will start once you input the following into your Nano terminal:
# First get into the donkey car project you created
cd ~/mycar
python manage.py drive
If you take a look at the manage.py file, you’ll notice it is linked to the various Donkey Car project components that can be used in our project. Different cameras are supported by the Nano, but there is no PiCam library for the Nano, so just comment out the PiCam part. For some reason, I was still getting errors while running the code, so I had to debug it further.
It turns out that the camera I was using (a Quantum webcam) was not detected by the code directly, even though the code supports V4L cameras. So, I had to remove all the other options and keep only the one that supports V4L cameras. But there are still some additional steps that need to be performed.
First, we need to install the v4l2capture Python library. Unfortunately, you cannot just install it via pip install or apt. So, type each of these commands in order into your terminal:
git clone https://github.com/atareao/python3-v4l2capture
sudo apt-get install libv4l-dev
cd python3-v4l2capture
python setup.py build
pip install -e .
If everything goes fine, you can run any of the example programs given in the v4l2capture folder. Just type the following while in the python3-v4l2capture folder:
If everything goes right, the manage.py file will run successfully, and you can access the web controller on any device connected to the local network. For some reason, my Quantum webcam was not detected by the application, so I couldn’t get my camera frame on the Web Control screen. But the code I am uploading has been tested with a CSI camera, the PiCam v2.1, and a Logitech C920 webcam. So, if you have any of these, your code will work just fine.
As a workaround, I took the camera frame directly from a Python program and just displayed it on the laptop via SSH. In this way, I can control my robot car from a distance without any difficulties. You can see this in the demo video posted on YouTube . The demonstration will show you how I mapped my own house with the NanoBot!
Our final step is to run everything we set up together and see exactly how the demonstration works. After everything is set up, you will have four terminals open on the Nano to run all the required files. Listing 8 shows the commands you need to run to get the NanoBot up and running. Figure 8 shows the final robot I built.
Listing 8
Commands you need to run to get the NanoBot up and running.

# ----------- Terminal 1 --------------
cd ~/mycar  # mycar is the name of the application you created
python manage.py drive  # do not forget to download and use the modified code
# --------------------------------------
# ----------- Terminal 2 --------------
roscore  # start the ROS Melodic service on the Nano
# --------------------------------------
# ----------- Terminal 3 --------------
sudo chmod 666 /dev/ttyUSBx  # give read-write permissions to the Lidar's USB port
roslaunch rplidar_ros rplidar.launch  # run the rplidar launch file
# --------------------------------------
# ----------- Terminal 4 --------------
roslaunch hector_slam_launch tutorial.launch  # launch the hector slam application
# --------------------------------------
In this final section, I’ll discuss my future plans to make this project even better. Here are some of the additions I would love to make:
Advanced ROS implementation: Instead of using the Donkey Car project for driving the car, I would like the whole project to run on ROS. Currently, I have developed a package that lets you control the car using the keyboard of your host machine, which will also be running a ROS instance. I will add further parts as I develop them.
Obstacle avoidance using the onboard Lidar sensor: There is no mature Python development for the Lidar sensor; only the C++ development is at an advanced stage. So, I would like to develop code that allows me to detect and avoid any obstacles in the path. Currently, obstacles are detected by the camera.
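As a starting point for that Python development, obstacle detection from raw Lidar ranges can be as simple as checking a forward sector against a distance threshold. This is a hypothetical sketch of my own, not existing RPLIDAR code:

```python
def obstacle_ahead(ranges_by_deg, sector=30, threshold=0.5):
    """Return True if any valid reading within +/-sector degrees of
    straight ahead (0 deg) is closer than threshold meters.

    ranges_by_deg maps an integer angle in [0, 360) to a distance in
    meters; a reading of 0 means 'no return' and is ignored.
    """
    for offset in range(-sector, sector + 1):
        dist = ranges_by_deg.get(offset % 360, 0.0)
        if 0.0 < dist < threshold:
            return True
    return False

# Obstacle 0.3m away at 10 degrees left of center triggers the check;
# the reading at 90 degrees is outside the forward sector
scan = {350: 0.3, 0: 2.0, 90: 0.2}
print(obstacle_ahead(scan))
```

A real avoidance node would feed this decision into the steering and throttle commands instead of just printing it.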
Addition of an IMU and GPS via ROS: I am also working on application-specific ROS packages for the MPU6050 IMU and the u-blox Neo-6M GPS module. This will allow me to add a "GPS-guided" functionality to my robot. I’ve already implemented this on a rover I developed using the Arduino Mega board. Now, I want to add the same to ROS.
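A basic building block for any GPS-guided mode is computing the distance and initial bearing from the robot's current fix to a waypoint. Here is a sketch of my own using the standard haversine formula; it is not taken from any existing ROS package:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) between two fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing, normalized to [0, 360) with 0 = north, 90 = east
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

A GPS-guided node would compare this bearing against the IMU's heading to produce a steering correction.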
I would like to give a special mention to the Donkey Car project creators! The project helped me a lot and made my life much easier. That is all for this article. I hope you found it easy to follow. Feel free to contact me about any issues you face via email at email@example.com.
 RPi.GPIO Clone for Nano: https://github.com/NVIDIA/jetson-gpio
 Nanobot donkey car application: https://github.com/Dhairya1007/NanoBot
 Rplidar ROS Package repository: https://github.com/robopeak/rplidar_ros
 Hector Slam GitHub Repo: https://github.com/tu-darmstadt-ros-pkg/hector_slam
 YouTube Video demonstration link:
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • DECEMBER 2021 #377
Dhairya Parikh is an electronics engineer and an avid project developer. Dhairya makes projects that can bring a positive impact to a person’s life by making it easier. He is currently working as an IoT engineer. His projects are mainly related to IoT and machine learning. Dhairya can be contacted at firstname.lastname@example.org.