A Robotics Example
While some embedded systems do just fine with a single microcontroller, there are circumstances when offloading some processing into a second processing unit, such as a second MCU, offers a lot of advantages. In this article, Jeff explores this situation in the context of a robotic system project that uses Arduino and an external motor driver.
When your tasks begin to slip, it might be time to get help. At home, when the job jar begins to overflow, it’s often time to call in a professional to fix the leak, repaint, change the oil and so forth. At work, your project might require additional help from a programmer, purchaser, designer, or other specialist. I believe a good manager is one who is able to handle any facet of the project, but can also step back and let those associates handle their areas of expertise without micromanaging.
And so it is with programming an MCU. You can write the whole project yourself or—with confidence in your function library—you can make calls to complete a task without having to code those library functions yourself. You can do more by having to do less. All that said, computationally intensive routines can eat up all your computing power. This might suggest moving up to a more capable processor, or dividing and conquering the project by using multiple MCUs.
CASE IN POINT
I have a robot wheel base that uses an Arduino and an external motor driver board. The motors required more than the typical 2 A most Arduino motor shields can provide, so I went to an external motor driver for about $20. This requires direction and speed control outputs for each of the two motors. The wheel encoders require phase A and phase B inputs for each wheel.
It wasn’t long until the basic movement routines were written. Then I added encoder routines to handle measuring wheel movements. Finally, I made routines for adding acceleration/deceleration, positional and speed cooperation between the wheels. At that point, it was becoming clear that I was going to run out of processing power and application space. And I had not yet added a single sensor!
I considered using this Arduino as a separate processor just for wheel base movement. Certainly, someone must have integrated an MCU with motor drivers. Enter the motor controller. I chose a Basicmicro RoboClaw 2x7A motor controller (Figure 1). This is the smallest in a line of compatible controllers. At $70, it cost more than my motor driver. However, it incorporates the wheel encoders, so it has some pretty good intelligence. It can handle two motors at 7 A of continuous current each. I like the fact that I can substitute other models should I need more current—up to 2 x 160 A!
While I plan to connect this via serial to the Arduino, it can be used stand-alone with an RC receiver, or with analog inputs from potentiometers. The serial link can be “simple” TX only or “packet” TX/RX to provide feedback.
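As a rough illustration of what “packet” mode involves, here is a sketch of building a command frame in C++. The frame layout (address byte, command byte, data bytes, 16-bit checksum) and the CRC-16/XMODEM checksum used here are my reading of recent RoboClaw packet-serial documentation; treat both as assumptions and verify them against the Basicmicro manual for your firmware revision.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// CRC-16/XMODEM (poly 0x1021, init 0x0000), the checksum style
// documented for RoboClaw packet-serial mode (verify against the
// Basicmicro manual for your firmware).
uint16_t crc16(const uint8_t *data, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : (crc << 1);
    }
    return crc;
}

// Builds an [address, command, data..., crc_hi, crc_lo] frame,
// ready to write out the UART to the controller.
std::vector<uint8_t> buildFrame(uint8_t address, uint8_t command,
                                const std::vector<uint8_t> &payload) {
    std::vector<uint8_t> frame{address, command};
    frame.insert(frame.end(), payload.begin(), payload.end());
    uint16_t crc = crc16(frame.data(), frame.size());
    frame.push_back(static_cast<uint8_t>(crc >> 8));
    frame.push_back(static_cast<uint8_t>(crc & 0xFF));
    return frame;
}
```

The payoff of packet mode is the return path: the controller can echo back encoder counts, currents and status over the same link.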
Your motor (and gearing system) can produce some maximum torque or rotational force on a wheel to overcome the load or weight of the robot. Larger loads require more torque. Motors are rated by the maximum torque they can produce. The load or resistance to move must be less than the motor torque, or the motor will not be able to move the load. This is the stalled state of the motor, and will draw maximum current, the “stall state current.” Motor drivers must be able to withstand this current, or the driver will be destroyed trying to dissipate excess heat—not to mention overheating the motor.
Starting from a standstill will most likely require this current until movement has started. High currents while starting are typical but temporary. As the speed increases, the torque required goes down, and so does the current. With the load completely removed, the motor rotates at its maximum speed, requiring minimum current. You will typically see this “no-load” rating (no-load current vs. speed) for a motor. You may also see a continuous current rating. This will be much less than the stall current, and your motor selection should be based on the ability to provide the required torque to run continuously without exceeding the continuous current rating. This is assuming you will need to run continuously.
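Detecting a stall from current alone has to tolerate the high but temporary starting current just described. One approach is to flag a stall only when over-current persists past a grace period. A minimal sketch, with thresholds that are placeholders to be tuned from your own measurements:

```cpp
#include <cstdint>

// Flags a stall when motor current stays above a threshold for
// longer than a startup grace period. The threshold and grace
// values are assumptions -- tune them to your own motors.
class StallDetector {
public:
    StallDetector(float threshold_amps, uint32_t grace_ms)
        : threshold_(threshold_amps), grace_ms_(grace_ms) {}

    // Call periodically with the latest current reading and a
    // millisecond timestamp (e.g., Arduino millis()).
    bool update(float amps, uint32_t now_ms) {
        if (amps < threshold_) {
            over_ = false;                    // current dropped: clear
            return false;
        }
        if (!over_) { over_ = true; since_ms_ = now_ms; }
        return (now_ms - since_ms_) >= grace_ms_;  // stalled if sustained
    }

private:
    float threshold_;
    uint32_t grace_ms_;
    bool over_ = false;
    uint32_t since_ms_ = 0;
};
```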
I’ve measured the stall current of my wheel assembly and found it to be around 5 A. The no-load current is 1 A. The calculated stall torque at the wheel is about 44 foot-pounds (ft-lb) after all the gearing. That may sound like a lot, but this robot carries three gel-cell batteries, and the batteries themselves weigh over 22 lbs. Besides the six motor/power connections, there are seven other control inputs. Two of these are relegated to the wheel encoders, and two are for the motor control mode. The last three are up for grabs, but have specific functions that you might need, depending on the control mode chosen. For instance, if you are using the controller for a remote control (RC) vehicle, you might want to use S3 as a flip input. This input reverses the direction controls when it detects that the vehicle has been flipped upside down, like on the Robot Wars TV show.
Most people have heard of Isaac Asimov’s three laws of robotics.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws were meant as restraints, put in place by designers to keep robots from running amok. Although I’m not concerned with a robotic rebellion, I’ll be using at least one of these extra controller inputs as an emergency stop input that might come from either a manual stop (safety switch) or a computer emergency stop (self-destructive situation). The E-Stop inputs (active on logic low) can be defined as momentary (clears when the input goes high) or latching (clears only on a power cycle).
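On the RoboClaw, that momentary/latching choice is configured in the controller itself, but the distinction is easy to model in firmware if you roll your own. A sketch of the two behaviors:

```cpp
// Models the two E-Stop styles: a momentary stop clears as soon as
// the (active-low) input goes high again; a latching stop clears
// only on a power cycle, which reset() stands in for here.
class EStop {
public:
    explicit EStop(bool latching) : latching_(latching) {}

    // Feed the raw input level each pass through the control loop.
    void update(bool input_high) {
        if (!input_high)      stopped_ = true;   // low = stop
        else if (!latching_)  stopped_ = false;  // momentary clears on high
    }
    bool stopped() const { return stopped_; }
    void reset() { stopped_ = false; }           // "power cycle"

private:
    bool latching_;
    bool stopped_ = false;
};
```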
Small robots normally won’t have such redundant safety systems. You can just pick one up and turn it off. However, as the size and weight increase, this becomes impractical, so you will want to include ways to immobilize the mass if it gets into a destructive situation. “Situation” is a perfect word in this case, and doesn’t pertain just to emergencies. All programming of your robot is in response to a situation. It begins with the moment in time when the power is applied. And it continues until power is removed or is reduced to the point where operation cannot continue logically.
In small robots with a single power supply, battery voltage can be affected by the motors’ current requirements. When the batteries are unable to supply the required current, the resulting voltage sag can play havoc with the computer, unless proper brownout circuitry and coding are implemented. When separate supplies are provided for the motors and the logic circuitry, this cause for concern is eliminated. However, good brownout protection should still be implemented in the logic circuitry, to halt the motor controller if the logic becomes unstable.
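The coding half of that brownout protection usually wants hysteresis: halt the motors when the supply sags below one threshold and don’t resume until it recovers past a higher one, so the robot doesn’t chatter on and off around a single trip point. A sketch, with voltage figures that are placeholders for a nominal 12 V gel-cell pack rather than anything from the article:

```cpp
// Brownout guard with hysteresis: halt below v_low, and don't
// resume until the supply recovers above v_high.
class BrownoutGuard {
public:
    BrownoutGuard(float v_low, float v_high)
        : v_low_(v_low), v_high_(v_high) {}

    // Returns true while the motors should be halted.
    bool update(float volts) {
        if (volts < v_low_)       halted_ = true;
        else if (volts > v_high_) halted_ = false;
        return halted_;
    }

private:
    float v_low_, v_high_;
    bool halted_ = false;
};
```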
Hopefully, the emergency situation will never arise. Your programming should take advantage of all the sensors available, and make suitable decisions based on the available information. Since we’ve focused so far on the motor controller, let’s see what information is available from this device. The controller status will report on 16 anomalies across several motor controller functions. The user has control over many of these, such as maximum and minimum battery voltage levels, while others are preconfigured, including motor driver fault—that is, the motor running with no encoder change. Although not all of these are considered emergency situations, they can be an indication of general health.
I CAN SEE CLEARLY
In relation to movement, the situation we are most interested in is having a clear path. With no other sensors, the only feedback we have is the stall state (or current level) of a motor. Since we can’t “see” obstructions, we must “feel” them by determining how “hard” our unstoppable object is working. A sudden increase in motor current (or exceeding a certain level of current) could indicate contact with an immovable object.
Obviously, a better approach would be to add sensors to help determine a clear path. Contact sensors are an inexpensive way of detecting an obstructed path, but it would be preferable to detect obstructions by means other than contact. A variety of presence sensors can detect relative distance by sending out signals and looking for echoes. Light, IR and ultrasonic sensors are three common types that react differently, depending on the environment. Detecting an obstruction prior to contact eliminates potential damage to the parties involved.
In many cases, sensors on the front of a robot can be used to detect obstructions anywhere around the robot, if you are willing to rotate the robot toward the area of interest. Alternatively, multiple sensors can be located around the robot, which allows this information to be gathered without rotating. With a minimum robot diameter of about 16”, there is plenty of room around my robot to locate separate sensors. I determined that the beam width of many common ultrasonic sensors is at most about 60 degrees. Dividing the 360-degree coverage into eight 45-degree sections would give a bit of overlap to each sensor’s field of view.
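With eight modules on 45-degree centers, figuring out which sensor covers a given bearing is simple modular arithmetic. A sketch, assuming sensor 0 is centered straight ahead and bearings increase clockwise (a convention I’ve chosen for illustration):

```cpp
#include <cmath>

// Maps a bearing (degrees, 0 = straight ahead, increasing clockwise)
// to one of eight sensors spaced 45 degrees apart, with sensor 0
// centered on the front. Sector boundaries fall at +/-22.5 degrees
// around each sensor's centerline.
int sensorForBearing(float bearing_deg) {
    float b = std::fmod(bearing_deg + 22.5f, 360.0f);
    if (b < 0) b += 360.0f;          // fmod can return negatives
    return static_cast<int>(b / 45.0f);  // 0..7
}
```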
In keeping with the theme of “offloading intelligence,” a sensor module was developed to handle local control over a contact switch, an IR sensor, a short-range ultrasonic sensor and a long-range (Polaroid) ultrasonic sensor. The module has five visible LEDs that give local status for each type of sensor connected (Figure 2). The IR and long-range sensors are optional, which gives the module some flexibility in how it is configured. Communication with the module is via I2C, and each module has 16 address options. The user can therefore choose the number and configuration of the modules used on their robotic platform.
DIVIDE AND CONQUER
With a basic tri-wheeled vehicle run by a single processor, as in Figure 3, we have simple wheel control and ultrasonic sensor input. The behavior programmed into the MCU can handle moving the ultrasonic distance sensor to take readings to the left, right and front of the vehicle; choosing a direction and turning based on those readings; and then moving forward until it approaches an object and stops. You can manually drive the vehicle wirelessly from a Bluetooth device, but memory limitations soon grind programming advancements to a halt.
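The scan-and-decide part of that behavior can be sketched in a few lines: keep going while the front is clear, otherwise turn toward the more open side. The 40 cm clearance figure is an arbitrary placeholder, not a value from the article:

```cpp
// Picks a heading from three range readings (in cm): go straight
// while the front is clear, otherwise turn toward the longer of
// the two side readings.
enum class Heading { Straight, Left, Right };

Heading chooseHeading(float left_cm, float front_cm, float right_cm,
                      float clearance_cm = 40.0f) {
    if (front_cm > clearance_cm) return Heading::Straight;
    return (left_cm >= right_cm) ? Heading::Left : Heading::Right;
}
```

Even this trivial decision loop, once you add acceleration ramps, encoder bookkeeping and a Bluetooth command parser around it, is what eats a small MCU alive.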
Run times of hours require larger batteries. Bigger batteries are heavier and require larger motors, and so you design a larger robot. So, what’s the end game? If you’re interested in building a battle ’bot, an increase in size allows for weaponry and armor, but smarts aren’t much in demand; you’ll be driving it and making all the decisions. If you’re thinking more along the lines of autonomy, then you will most likely need an array of sensors beyond what I’ve just discussed.
I’ve shown a path that suggests a need for specialists. The first specialist enlisted offloads the motor operations to a controller capable of handling higher-level instructions. The second is really a group of specialists, each monitoring a single defined area: floor topography, short-range, long-range and contact sensing. Together, they can provide continuous coverage of the area surrounding the robot. These are the basic requirements to give an upper-echelon MCU—I’ll call it the “Movement” subsystem—the ability to move safely around an environment.
If we go back to Asimov’s three laws of robotics, we can get an idea of how we might proceed from here. The first law is covered by the movement and sensor subsystems—avoid harming humans (in our case, any obstacle). The second law requires the robot to act on requests from a human. These might be direct commands from a teleoperator (remote control), for instance. In that case, the sensors can prevent the robot from carrying out a request from the teleoperator that would break the first law. The third law requires the robot to protect itself. While it sits at the bottom of the list, here’s where you really get to call the shots. Beyond the task or behavior you have planned for your robot, the third law may be the most important to the robot’s staying alive. Just like us humans without food and drink, a robot won’t last long without fuel. Once you are past the “it’s a fun toy” phase, unless your plan is to drag around an extension cord, you’ll need to provide a way for your robot to recharge.
This means providing a place and method to accomplish recharging. The simplest is a docking station that contains conductive contacts. Just roll up to the contacts to receive a recharge. While the charging mechanism isn’t complicated in itself, it might make sense to create a “Power” subsystem. This is a subsystem that not only can handle recharging, but also can monitor the system batteries and power usage. Raising the “hunger” flag is a simple way to affect the task flow of the robot.
Being hungry won’t necessarily get you fed. Your robot will need to be able to find the food. Enter another subsystem: “Foraging.” We need to search for the location of the food. IR beacons are often used as lighthouses: with the proper infrared circuitry, they can be “seen” from a distance and indicate a known point. To forage, the robot must search (movement) and locate the beacon. Alternatively, a “Mapping” subsystem could be used. Having (or building) a map of the environment is a challenge in itself. By now I’m sure you’re beginning to see why separating certain tasks makes sense.
A very popular subject today is AI, or Artificial Intelligence. At one extreme, as we roam the web, we are constantly bombarded with reminders of things that we were looking at but did not purchase. At the other extreme, autonomous autos may soon be eliminating taxi cab and Uber drivers in many cities. IBM’s Deep Blue “learned” how to succeed at chess by playing repeatedly and honing its game based on previous outcomes. Many consider intelligence to be the ability to make decisions based on the environment and previous experience. As humans, we consider our past events when attempting to make a decision, but our emotional state has an impact on that choice as well. I’m not sure that this elevates our level of intelligence, but it is what defines us as human.
Thinking along those lines, my plan is to try to make my robot a little more human, by adding physical/emotional (if you will) states to its repertoire of actions, and thus make decision-making less predictable. This might seem counterintuitive, but I’m looking for something a little less…eh…“robotic.” The robot’s actions might include, for example, explore, hide, follow, find, rest and converse. Physical and emotional states might include hungry, lonely, sick, sleepy and content.
To most of us, the weather has something to do with the way we feel. When the sun goes down, we get sleepy. A sunny day makes us want to get outside and enjoy our surroundings. If some part of our body is aching, we don’t feel like doing much. When we get hungry, we hit the kitchen looking for a snack. Likewise, a robot’s sensors can measure external stimuli, and their levels can impact the decision-making process.
This is accomplished by giving each input a weighted value. The stronger the weight, the more influence it has on the decision’s outcome. We might find that one combination of time of day, amount of light and remaining power level calls for recharging, whereas another combination of the same factors calls for sleep. John Blankenship and Samuel Mishal turned me on to this line of thought in their book, Robot Programmer’s Bonanza.
What started as a small robot with a local controller was easily overwhelmed as its required operations became more sophisticated. If telerobotics or wall following is your aim, then this level of control could be sufficient for the task at hand. But if you dream of other things, you will find that just using a bigger, faster, more capable processor might not be the best solution. Today, we need ways of taking in complex input and performing local computations to distill the information down to what’s needed to act on it. We don’t need to spend precious time monitoring sensors when all we really need is a single, binary output or value. It takes a camera and a whole lot of number crunching to determine if a face is in view. We don’t care how that happens; we only need to know when one is found. It makes sense to let an external module handle that. That’s “offloading.”
Before MCUs, the processor was separate from its associated memory, which was attached in a parallel fashion using either separate (Harvard architecture) or combined (von Neumann architecture) data and code spaces. Either way, any I/O could be connected in parallel at a specific address, which allowed the transfer of data in parallel chunks—very efficiently. Most peripheral interfacing with MCUs today is done serially. Of the three most popular serial interfaces, UART, SPI and I2C, I’ve already used two. Most off-chip peripherals are connected using one of those. All three are hardwired connections, but any might, in fact, lead to communication in many other forms—Wi-Fi, Bluetooth, IR, fiber-optic, etc. While I prefer having control over what data are passed and how, you might like to take advantage of a higher-level cooperative robotic OS.
OPEN SOURCE ALTERNATIVES
The open-source community offers a variety of options for working at a high level of integration. Although these are not really operating systems, they provide services designed to combine a mixture of smart modules by implementing messaging and package management. Here are a few popular ones: the Player Project, OpenRTM-aist, Orca, RoCK, ROS, YARP and Microsoft Robotics Developer Studio 4. Just Google them to find more information. By far, the most popular and current package is ROS.
ROS is written for the Linux operating system, but within the last year, Microsoft has announced support for it (presently requiring 64-bit Windows 10). The latest ROS release was “Melodic Morenia” on May 23, 2018. I’m not ready to jump to such heights at this point. I’m still looking to keep things on the down low. I mentioned the Blankenship/Mishal book earlier. You can find it at robotbasic.org. There you’ll find many wonderful things, including RobotBASIC. Not only does it offer great, free simulation tools (not just robotics), but it can redirect robot simulation commands to an external robot that uses the RROS interfacing chip (module). This module, which includes a motor driver, can control a small robot with little additional hardware. The module and/or the robot are available for those who want immediate gratification.
I provided some code I wrote to mimic some of the RROS functionality back in my October 2013 and November 2013 articles (Circuit Cellar 279 and 280), to allow a RobotBASIC program to be used with an iRobot Create. It might be time to rethink this a bit more seriously.
Like it or not, you may find you need to offload intelligence to help simplify your project. It doesn’t have to be a robotics project to make use of offloading tasks to additional MCUs. It also can be useful to request sensor information. But you might find you really need to know only the status, if you have confidence in those making decisions for you. Don’t be afraid to pass on some responsibility. Trust is a two-way street. So much to do, so little time.
2-channel, 15 A peak, 7.5 A continuous per channel, 34 VDC, dual quadrature decoding motor controller
Basic Micro www.basicmicro.com/motor-controller
Robot Programmer’s Bonanza, by John Blankenship and Samuel Mishal, published by McGraw-Hill, ISBN 978-0-07-154797-0
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • SEPTEMBER 2019 #350