Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from the mouse and keyboard. The widespread adoption of smartphones and tablets with capacitive touchscreens reflects people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

Hand tracking results from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus).

A natural interface reduces the learning curve, that is, the time and energy a person needs to learn to complete a particular task. Instead of the user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.
If sensors are like eyes and ears, software is analogous to the brain.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a high level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation, because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries give only the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely region of the gesturing hand. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration as well as the hand shape descriptor. We also apply principal component analysis (PCA) to reduce the feature vector’s final dimension.
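To make the dimensionality-reduction step concrete, here is a minimal C sketch of how a learned PCA projection is applied to each frame’s feature vector. The dimensions and the projection matrix W (trained offline) are illustrative placeholders, not the values used in our system.

#define D_IN  138   /* raw per-frame feature length (illustrative) */
#define D_OUT 25    /* reduced feature length (illustrative)       */

/* Project one frame: y = W^T (x - mean).
 * W (D_IN x D_OUT) and mean are learned offline from training data. */
void pca_project(const float x[D_IN], const float W[D_IN][D_OUT],
                 const float mean[D_IN], float y[D_OUT])
{
    for (int j = 0; j < D_OUT; j++) {
        y[j] = 0.0f;
        for (int i = 0; i < D_IN; i++)
            y[j] += W[i][j] * (x[i] - mean[i]);
    }
}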

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was applied earlier to speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the model most likely to have produced the given observed feature vectors. New developments in the area involve variations on the HMM, such as using hierarchical HMMs for real-time inference or using discriminative training to increase recognition accuracy.
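To illustrate the recognition step, the following C sketch scores an observation sequence against each gesture’s HMM using the Viterbi best-path log-likelihood and picks the winner. The model sizes and the simple unit-variance Gaussian emission are assumptions for illustration, not our production models.

#include <float.h>

#define DIM      25   /* feature length after PCA (illustrative) */
#define N_STATES 6    /* states per gesture HMM (illustrative)   */
#define T_MAX    128  /* maximum sequence length                 */

typedef struct {
    float log_pi[N_STATES];          /* initial state log-probabilities */
    float log_a[N_STATES][N_STATES]; /* transition log-probabilities    */
    float mean[N_STATES][DIM];       /* per-state emission means        */
} hmm_t;

/* Placeholder emission model: unit-variance Gaussian log-density
 * (up to an additive constant); real systems fit full covariances. */
static float log_emission(const hmm_t *m, int j, const float *o)
{
    float s = 0.0f;
    for (int i = 0; i < DIM; i++) {
        float d = o[i] - m->mean[j][i];
        s += d * d;
    }
    return -0.5f * s;
}

/* Viterbi best-path log-likelihood of obs[0..T-1] under one model */
static float viterbi_loglik(const hmm_t *m, const float obs[][DIM], int T)
{
    static float d[T_MAX][N_STATES];
    if (T < 1) return -FLT_MAX;
    if (T > T_MAX) T = T_MAX;
    for (int j = 0; j < N_STATES; j++)
        d[0][j] = m->log_pi[j] + log_emission(m, j, obs[0]);
    for (int t = 1; t < T; t++)
        for (int j = 0; j < N_STATES; j++) {
            float best = -FLT_MAX;
            for (int i = 0; i < N_STATES; i++) {
                float v = d[t - 1][i] + m->log_a[i][j];
                if (v > best) best = v;
            }
            d[t][j] = best + log_emission(m, j, obs[t]);
        }
    float best = -FLT_MAX;
    for (int j = 0; j < N_STATES; j++)
        if (d[T - 1][j] > best) best = d[T - 1][j];
    return best;
}

/* Recognition: return the index of the most likely gesture model */
int classify_gesture(const hmm_t *models, int n_models,
                     const float obs[][DIM], int T)
{
    int arg = 0;
    float best = -FLT_MAX;
    for (int k = 0; k < n_models; k++) {
        float ll = viterbi_loglik(&models[k], obs, T);
        if (ll > best) { best = ll; arg = k; }
    }
    return arg;
}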

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

The newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods required careful selection of feature vectors; the goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and applying it to gesture recognition may further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Web-Based Remote I/O Control

The RIO-2010 is a web-based remote I/O control module. The Ethernet-ready module is equipped with eight relays, 16 photo-isolated digital inputs, and a 1-Wire interface for digital temperature sensor connection. The RIO-2010’s built-in web server enables you to access the I/O and use a standard web browser to remotely control the RIO-2010’s relays.

The RIO-2010 can be easily integrated into supervisory control and data acquisition (SCADA) and industrial automation systems using the standard Modbus TCP protocol. The I/O module also comes with an RS-485 serial interface for applications requiring Modbus RTU/ASCII. Its built-in web server enables you to use standard web-editing tools and Ajax dynamic page technology to customize your webpage.
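As a concrete illustration of Modbus TCP integration, the following C sketch builds the standard 12-byte “Write Single Coil” (function code 0x05) request that a SCADA client would send over TCP port 502 to switch one relay. The mapping of the RIO-2010’s relays to coil addresses is an assumption here; consult Artila’s documentation for the actual map.

#include <stddef.h>
#include <stdint.h>

/* Build a Modbus TCP "Write Single Coil" (function 0x05) request.
 * A 7-byte MBAP header is followed by a 5-byte PDU; coil_on writes
 * 0xFF00 (on) or 0x0000 (off). Returns the frame length (12 bytes). */
size_t modbus_write_coil(uint8_t buf[12], uint16_t tid,
                         uint8_t unit, uint16_t coil, int coil_on)
{
    uint16_t value = coil_on ? 0xFF00 : 0x0000;
    buf[0] = tid >> 8;    buf[1] = tid & 0xFF;    /* transaction ID  */
    buf[2] = 0;           buf[3] = 0;             /* protocol ID = 0 */
    buf[4] = 0;           buf[5] = 6;             /* bytes following */
    buf[6] = unit;                                /* unit identifier */
    buf[7] = 0x05;                                /* function code   */
    buf[8] = coil >> 8;   buf[9] = coil & 0xFF;   /* coil address    */
    buf[10] = value >> 8; buf[11] = value & 0xFF; /* 0xFF00 or 0     */
    return 12;
}
/* Sending this frame to the module's IP at port 502 (here assuming
 * relay 0 maps to coil address 0) switches the relay; the module
 * echoes the request back as confirmation. */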

Contact Artila for pricing.

Artila Electronics Co., Ltd.

DC Motor for Fine Rotary Motions

The RE 30 EB precious metal brushed motor features a low start-up voltage, even after a long period at standstill. With a 53-mNm rated torque, the powerful motor provides twice the power of a Maxon RE 25 EB. In addition, the RE 30 EB features minimal high-frequency interference.

The RE 30 EB motor is specifically designed for haptic applications (e.g., surgical robots). Therefore, the motor can also be used as a highly sensitive sensor, acting as the sense of touch to register mechanical resistance.

Contact Maxon for pricing.

Maxon Precision Motors

Dual-Display Digital Multimeter

The DM3058E digital multimeter (DMM) is designed with 5.5-digit resolution and dual display. The DMM can enable system integration and is suitable for high-precision, multifunction, and automatic measurement applications.

The DM3058E is capable of measuring up to 123 readings per second. It can quickly save or recall up to 10 preset configurations, including built-in cold-junction compensation for thermocouples.

The DMM provides a convenient and flexible platform with an easy-to-use design and a built-in help system for information acquisition. In addition, it supports 10 different measurement types, including DC voltage (200 mV to 1,000 V), AC voltage (200 mV to 750 V), DC current (200 µA to 10 A), AC current (20 mA to 10 A), frequency (20 Hz to 1 MHz), two- and four-wire resistance (200 Ω to 100 MΩ), and diode, continuity, and capacitance measurements.

The DM3058E is ideal for research and development labs and educational applications, as well as for low-end detection, maintenance, and quality tests where automation combined with capability and value is needed.

The DM3058E digital multimeter costs $449.

Rigol Technologies, Inc.

Two-Channel CW Laser Diode Driver with an MCU Interface

The iC-HT laser diode driver enables microcontroller-based operation of laser diodes in continuous-wave (CW) mode. With this device, laser diodes can be driven by optical output power (using automatic power control, APC), by laser diode current (using automatic current control, ACC), or by a fully controller-based power control unit.

The maximum laser diode current per channel is 750 mA. Both channels can be switched in parallel for high laser diode currents of up to 1.5 A. A current limit can also be configured for each channel.

Internal operating points and voltages can be output through ADCs. The integrated temperature sensor enables the system temperature to be monitored and can also be used to analyze control circuit feedback. Logarithmic DACs enable optimum power regulation across a large dynamic range. Therefore, a variety of laser diodes can be used.

The relevant configuration is stored in two equivalent memory areas. Internal current limits, a supply-voltage monitor, channel-specific interrupt-switching inputs, and a watchdog safeguard laser diode operation through the iC-HT.

The device can also be operated by pin configuration in place of the SPI or I2C interface, where external resistors define the APC performance targets. An external supply voltage can be controlled through the current output device configuration overlay (DCO) to reduce system power dissipation (e.g., in battery-operated devices or systems).

The iC-HT operates on 2.8 to 8 V and can drive both blue and green laser diodes. The diode driver has a –40°C-to-125°C operating temperature range and is housed in a 5-mm × 5-mm, 28-pin QFN package.

The iC-HT costs $13.20 in 1,000-unit quantities.

iC-Haus GmbH

Accurate Measurement Power Analyzer

The PA4000 power analyzer provides accurate power measurements. It offers one to four input modules, built-in test modes, and standard PC interfaces.

The analyzer features innovative Spiral Shunt technology that enables you to lock onto complex signals. The Spiral Shunt design ensures stable, linear response over a range of input current levels, ambient temperatures, crest factors, and other variables. The spiral construction minimizes stray inductance (for optimum high-frequency performance) and provides high overload capability and improved thermal stability.

The PA4000’s additional features include 0.04% basic voltage and current accuracy, dual internal current shunts for optimal resolution, frequency-detection algorithms for noisy waveform tracking, and application-specific test modes to simplify setup. The analyzer easily exports data to a USB flash drive or PC software. Harmonic analysis and communications ports are included as standard features.

Contact Tektronix for pricing.

Tektronix, Inc.

AAR Arduino Autonomous Mobile Robot

The AAR Arduino Robot is a small autonomous mobile robot designed for those new to robotics and for experienced Arduino designers. The robot is well suited for hobbyists and school projects. Based on the Arduino open-source prototyping platform, the robot is easy to program and run.

The AAR, which is delivered fully assembled, comes with a comprehensive CD that includes all the software needed to write, compile, and upload programs to your robot. It also includes a firmware and hardware self-test. For wireless control, the robot features optional Bluetooth technology and a 433-MHz RF link.

The AAR robot’s features include an Atmel ATmega328P 8-bit AVR RISC processor with a 16-MHz clock, Arduino open-source software, two independently controlled 3-VDC motors, an I2C bus, 14 digital I/Os on the processor, eight analog input lines, a USB programming interface, odometer sensors on both wheels, a line-tracker sensor, and an ISP connector for bootloader programming.

The AAR’s many example programs help you get your robot up and running. With many expansion kits available, your creativity is unlimited.

Contact Global Specialties for pricing.

Global Specialties

Embedded Sensor Innovation at MIT

During his June 5 keynote address at the 2013 Sensors Expo in Chicago, Joseph Paradiso presented details about some of the innovative embedded sensor-related projects at the MIT Media Lab, where he is Director of the Responsive Environments Group. The projects he described ranged from innovative ubiquitous-computing installations for monitoring building utilities to a small sensor network that transmits real-time data from a peat bog in rural Massachusetts. Below I detail a few of the projects Paradiso covered in his speech.


Managed by the Responsive Environments Group, DoppelLab is a virtual environment that uses Unity 3D to present real-time data from numerous sensors in the MIT Media Lab complex.

The MIT Responsive Environments Group’s DoppelLab

Paradiso explained that the system gathers real-time information and presents it via an interactive browser. Users can monitor room temperature, humidity data, RFID badge movement, and even someone’s tweets as he moves throughout the complex.

Living Observatory

Paradiso demoed the Living Observatory project, which comprises numerous sensor nodes installed in a peat bog near Plymouth, MA. In addition to transmitting audio from the bog, the installation also logs data such as temperature, humidity, light, barometric pressure, and radio signal strength. The data logs are posted on the project site, where you can also listen to the audio transmission.

The Living Observatory


The GesturesEverywhere project provides a real-time data stream about human activity levels within the MIT Media Lab. It provides the following data and more:

  • Activity Level: You can see the Media Lab’s activity level over a seven-day period.
  • Presence Data: You can see the location of ID tags as people move through the building.

The following video is a tracking demo posted on the project site.

The aforementioned projects are just a few of the many cutting-edge developments at the MIT Media Lab. Paradiso said the projects show how far ubiquitous computing technology has come. And they provide a glimpse into the future. For instance, these technologies lend themselves to a variety of building-, environment-, and comfort-related applications.

“In the early days of ubiquitous computing, it was all healthcare,” Paradiso said. “The next frontier is obviously energy.”

Embedded Wireless Made Simple

Last week at the 2013 Sensors Expo in Chicago, Anaren had interesting wireless embedded control systems on display. The message was straightforward: add an Anaren Integrated Radio (AIR) module to an embedded system and you’re ready to go wireless.

Bob Frankel demos embedded mobile control

Bob Frankel of Emmoco provided an embedded mobile control demonstration. By adding an AIR module to a light control system, he was able to use a tablet as a user interface.

The Anaren 2530 module in a light control system (Source: Anaren)

In a separate demonstration, Anaren electrical engineer Mihir Dani showed me how to achieve effective light control with an Anaren 2530 module and TI technology. The module is embedded within the light, and a compact remote enables him to manipulate variables such as light color and saturation.

Visit Anaren’s website for more information.

Infrared Communications for Atmel Microcontrollers

Are you planning an IR communications project? Do you need to choose a microcontroller? Check out the information Cornell University Senior Lecturer Bruce Land sent us about inexpensive IR communication with Atmel ATmega microcontrollers. It’s another example of the sort of indispensable information covered in Cornell’s excellent ECE4760 course.

Land informed us:

I designed a basic packet communication scheme using cheap remote control IR receivers and LED transmitters. The scheme supports 4800-baud transmission, with transmitter ID and checksum. Throughput is about twenty 20-character packets/sec. The range is at least 3 meters with 99.9% packet reception and moderate (<30 mA) IR LED drive current.

On the ECE4760 project page, Land writes:

I improved Remin’s protocol by setting up the link software so that timing constraints on the IR receiver AGC were guaranteed to be met. It turns out that there are several types of IR receiver, some of which are better at short data bursts, while others are better for sustained data. I chose a Vishay TSOP34156 for its good sustained-data characteristics, minimal burst timing requirements, and reasonable data rate. The system I built works solidly at 4800 baud over IR with 5 characters of overhead/packet (start token, transmitter number, 2-char checksum, end token). It works with increasing packet loss up to 9000 baud.

Here is the receiver circuit.

The receiver circuit (Source: B. Land, Cornell University ECE4760 Infrared Communications for Atmel Mega644/1284 Microcontrollers)

Land explains:

The RC circuit acts as a low-pass filter on the power to suppress spike noise and improve receiver performance. The RC circuit should be close to the receiver. The range with a 100-ohm resistor is at least 3 meters with the transmitter roughly pointing at the receiver, and a packet loss of less than 0.1 percent. To manage burst-length limitations there is a short pause between characters, and only 7-bit characters are sent, with two stop bits. The 7-bit limit means that you can send all of the printing characters on the US keyboard, but no extended ASCII. All data is therefore sent as printable strings, NOT as raw hexadecimal.
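For a sense of scale (the capacitor value is an assumption here, since Land specifies only the 100-ohm resistor): with R = 100 Ω and a typical C = 10 µF bypass capacitor, the filter corner is f_c = 1/(2πRC) ≈ 160 Hz, low enough to knock down supply spikes while the receiver’s small supply current passes with negligible voltage drop.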

Land’s writeup also includes a list of programs and packet format information.
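To make the framing concrete, here is a C sketch of a packet builder in the spirit of Land’s description: start token, transmitter ID, a payload of 7-bit printable characters, a two-character hex checksum, and an end token. The token characters and the checksum definition are illustrative assumptions, not Land’s actual protocol.

#include <stdio.h>
#include <string.h>

#define START_TOKEN '#'   /* illustrative; not necessarily Land's tokens */
#define END_TOKEN   '$'

/* Build one packet: start token, transmitter ID, payload (printable
 * 7-bit characters only), 2-character hex checksum, end token.
 * Returns the packet length, or -1 on error. */
int build_packet(char *out, size_t outsz, char tx_id, const char *payload)
{
    size_t n = strlen(payload);
    if (outsz < n + 6) return -1;            /* 5 overhead chars + NUL */
    unsigned sum = (unsigned char)tx_id;     /* checksum over ID + payload */
    for (size_t i = 0; i < n; i++) {
        if (payload[i] < 0x20 || payload[i] > 0x7E) return -1;
        sum += (unsigned char)payload[i];
    }
    /* checksum rendered as two printable hex digits */
    return snprintf(out, outsz, "%c%c%s%02X%c",
                    START_TOKEN, tx_id, payload, sum & 0xFF, END_TOKEN);
}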

CC269: Break Through Designer’s Block

Are you experiencing designer’s block? Having a hard time starting a new project? You aren’t alone. After more than 11 months of designing and programming (which invariably involved numerous successes and failures), many engineers are simply spent. But don’t worry. Just like every other year, new projects are just around the corner. Sooner or later you’ll regain your energy and find yourself back in action. Plus, we’re here to give you a boost. The December issue (Circuit Cellar 269) is packed with projects that are sure to inspire your next flurry of innovation.

Turn to page 16 to learn how Dan Karmann built the “EBikeMeter” Atmel ATmega328-P-based bicycle computer. He details the hardware and firmware, as well as the assembly process. The monitoring/logging system can acquire and display data such as speed/distance, power, and recent log files.

The Atmel ATmega328-P-based “EBikeMeter” is mounted on the bike’s handlebar.

Another interesting project is Joe Pfeiffer’s bell ringer system (p. 26). Although the design is intended for generating sound effects in a theater, you can build a similar system for any number of other uses.

You probably don’t have to be coerced into getting excited about a home control project. Most engineers love them. Check out Scott Weber’s garage door control system (p. 34), which features a MikroElektronika RFid Reader. He built it around a Microchip Technology PIC18F2221.

The reader is connected to a breadboard that reads the data and clock signals. It’s built with two chips—the Microchip 28-pin PIC and the eight-pin DS1487 driver shown above it—to connect it to the network for testing. (Source: S. Weber, CC269)

Once considered a hobby part, Arduino is now implemented in countless innovative ways by professional engineers like Ed Nisley. Read Ed’s article before you start your next Arduino-related project (p. 44). He covers the essential, but often overlooked, topic of the Arduino’s built-in power supply.

A heatsink epoxied atop the linear regulator on this Arduino MEGA board helped reduce the operating temperature to a comfortable level. This is certainly not recommended engineering practice, but it’s an acceptable hack. (Source: E. Nisley, CC269)

Need to extract a signal in a noisy environment? Consider a lock-in amplifier. On page 50, Robert Lacoste describes synchronous detection, which is a useful way to extract a signal.

This month, Bob Japenga continues his series, “Concurrency in Embedded Systems” (p. 58). He covers “the mechanisms to create concurrency in your software through processes and threads.”

On page 64, George Novacek presents the second article in his series, “Product Reliability.” He explains the importance of failure rate data and how to use the information.

Jeff Bachiochi wraps up the issue with an article about using heat to power up electronic devices (p. 68). Fire and a Peltier device can save the day when you need to charge a cell phone!

Set aside time to carefully study the prize-winning projects from the Renesas RL78 Green Energy Challenge (p. 30). Among the noteworthy designs are an electrostatic cleaning robot and a solar energy-harvesting system.

Lastly, I want to take the opportunity to thank Steve Ciarcia for bringing the electrical engineering community 25 years of innovative projects, essential content, and industry insight. Since 1988, he’s devoted himself to the pursuit of EE innovation and publishing excellence, and we’re all better off for it. I encourage you to read Steve’s final “Priority Interrupt” editorial on page 80. I’m sure you’ll agree that there’s no better way to begin the next 25 years of innovation than by taking a moment to understand and celebrate our past. Thanks, Steve.

Pi-Face: A New Raspberry Pi Accessory

Ready for the Pi-Face Digital? What’s that, you ask?

Pi-Face at Electronica 2012

Pi Interface Digital, or Pi-Face Digital, is a Raspberry Pi accessory board Premier Farnell will begin distributing in early 2013. You can plug it into a Raspberry Pi and start designing immediately. Plus, you can connect sensors to Pi-Face Digital for a variety of purposes, such as temperature- or pressure-monitoring applications.

The following useful information is posted at the University of Manchester’s School of Computer Science site.

Pi-Face Digital is the first of a range of interfaces to allow the Raspberry Pi to control and manipulate the real world. It allows the Raspberry Pi to read switches connected to it – a door sensor or pressure pad perhaps, a microswitch or reed switch, or a handheld button. With appropriate easy-to-write code, the Raspberry Pi then drives outputs, powering motors, actuators, LEDs, light bulbs or anything you can imagine to respond to the inputs… The hardware provides an easy and consistent programming interface, in Scratch (as shown running on a Raspberry Pi in the photograph) and Python with good observability to promote easy development, and reduce technology barriers.

It will cost approximately €20 to €30. You can register at element14.

Want to see Pi-Face in action? Check it out online.

DIY Green Energy Design Projects

Ready to start a low-power or energy-monitoring microcontroller-based design project? You’re in luck. We’re featuring eight award-winning, green energy-related designs that will help get your creative juices flowing.

The projects listed below placed at the top of Renesas’s RL78 Green Energy Challenge.

Electrostatic Cleaning Robot: Solar-tracking mirrors, called heliostats, are an integral part of concentrating solar power (CSP) plants. They must be kept clean to help maximize the production of steam, which generates power. Using an RL78, the innovative Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high-voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Cloud Electrofusion Machine: Using approximately 400 times less energy than commercial electrofusion machines, the Cloud Electrofusion Machine is designed for welding 0.5″ to 2″ polyethylene fittings. The RL78-controlled machine is designed to read a barcode on the fitting, which determines fusion parameters and traceability. Along with the barcode data, the system logs the GPS location to an SD card, if present, and transmits the data for each fusion to a cloud database for tracking purposes and quality control.

Inside the electrofusion machine (Source: M. Hamilton)

The Sun Chaser: A GPS Reference Station: The Sun Chaser is a well-designed, solar-based energy-harvesting system that automatically recalculates the direction of a solar panel to ensure it is always facing the sun. The solar panel is mounted on a rotating disc, and its orientation is calculated using the registered GPS position. With an external compass, the internal accelerometer, a DC motor, and a stepper motor, the system can determine the solar panel’s exact position. The system uses the Renesas RDKRL78G13 evaluation board running the Micrium µC/OS-III real-time kernel.


Water Heater by Solar Concentration: This solar water heater is powered by the RL78 evaluation board and designed to deflect concentrated amounts of sunlight onto a water pipe for continual heating. The deflector, armed with a counterweight for easy tilting, automatically adjusts the angle of reflection for maximum solar energy using the lowest power consumption possible.

RL78-based solar water heater (Source: P. Berquin)

Air Quality Mapper: Want to make sure the air along your daily walking path is clean? The Air Quality Mapper is a portable device designed to track levels of CO2 and CO gases to construct “Smog Maps” that determine the healthiest routes. Constructed with an RDKRL78G13, the Mapper receives location data from its GPS module, takes readings of the CO2 and CO concentrations along a specific route, and stores the data on an SD card. Using a PC, you can parse the SD card data, plot it, and upload it automatically to an online MySQL database that presents the data on a Google map.

Air quality mapper design (Source: R. Alvarez Torrico)

Wireless Remote Solar-Powered “Meteo Sensor”: You can easily measure meteorological parameters with the “Meteo Sensor.” The RL78 MCU-based design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. Receivers are configured to listen for incoming data on the same radio channel. The design simplifies the way weather data is gathered and eases construction of local measurement networks, while being optimized for low energy usage and long battery life.

The design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. (Source: G. Kaczmarek)

Portable Power Quality Meter: Monitoring electrical usage is becoming increasingly popular in modern homes. The Portable Power Quality Meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis.

The portable power quality meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis. (Source: A. Barbosa)

High-Altitude, Low-Cost Experimental Glider (HALO): The “HALO” experimental glider project consists of three main parts: a weather balloon (the carrier section), a glider (the balloon’s payload and the return section), and a ground base section used for communication and displaying telemetry data (not part of the contest project). Tested with the REFLEX flight simulator, the glider has its own micro-GPS receiver, sensors, and a low-power MCU unit. It can take off, climb to a pre-programmed altitude, and return to a given coordinate.

High-altitude low-cost experimental glider (Source: J. Altenburg)

Autonomous Mobile Robot (Part 1): Overview & Hardware

Welcome to “Robot Boot Camp.” In this two-part article series, I’ll explain what you can do with a basic mobile machine, a few sensors, and behavioral programming techniques. Behavioral programming provides distinct advantages over other programming techniques: it is independent of any environmental model, it is more robust in the face of sensor error, and its behaviors can be stacked and run concurrently.

My objectives for my recent robot design were fairly modest. I wanted to build a robot that could cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. I knew that if I could meet such objectives, other more complex behaviors would be possible (e.g., self-docking on low power). There are certainly many commercial robots on the market that could have met my requirements, but I decided that my best bet would be to roll my own. I wanted to keep things simple, and I wanted to fully understand the sensors and controls required for behavioral autonomous operation. The TOMBOT is the fruit of that labor (see Photo 1a). A colleague came up with the name TOMBOT in honor of its inventor, and the name kind of stuck.

Photo 1a—The complete TOMBOT design. b—The graphics display is a nice feature.

In this series of articles, I’ll present lessons learned and describe the hardware/software design process. The series will detail TOMBOT-style robot hardware and assembly, as well as behavior programming techniques using C code. By the end of the series, I’ll have covered a complete behavior programming library and API, which will be available for experimentation.


The TOMBOT robot is certainly minimal, no frills: two continuous-rotation, variable-speed control servos; two IR (850-nm) analog distance-measurement sensors (4- to 30-cm range); two CdS photoconductive cells with good lux response in the visible spectrum; and, finally, a front bumper (switch-activated) for collision detection. The platform is simple: servos and sensors on the left and right sides of two level platforms. The bottom platform houses the bumper, batteries, and servos. The top platform houses the sensors and microcontroller electronics. The back part of the bottom platform uses a central skid for balance between the two servos (see Photo 1).

Given my background as a Microchip Developer and Academic Partner, I used a Microchip Technology PIC32 microcontroller, a PICkit 3 programmer/debugger, and a free Microchip IDE and 32-bit compiler for TOMBOT. (Refer to the TOMBOT components list at the end of this article.)

It was a real thrill to design and build a minimal-capability robot that can, by stacking programming behaviors, emulate some “intelligence.” TOMBOT is still a work in progress, but I recently had the privilege of demoing it to a first-grade class in El Segundo, CA, as part of a Science, Technology, Engineering, and Mathematics (STEM) initiative. The results were very rewarding, but more on that later.


A control system for a completely autonomous mobile robot must perform many complex information-processing tasks in real time, even for simple applications. The traditional method of building control systems for such robots is to separate the problem into a series of sequential functional components. An alternative approach is to use behavioral programming. The technique was introduced by Rodney Brooks of the MIT Artificial Intelligence Lab, and it has been very successful in the implementation of many commercial robots, such as the popular Roomba vacuuming robot. It was even adopted for space applications like NASA’s Mars rovers and for military seekers.

Programming a robot according to behavior-based principles makes the program inherently parallel, enabling the robot to attend simultaneously to all hazards it may encounter as well as any serendipitous opportunities that may arise. Each behavior functions independently through sensor registration, perception, and action. In the end, all behavior requests are prioritized and arbitrated before action is taken. By stacking the appropriate behaviors, using arbitrated software techniques, the robot appears to show (broadly speaking) “increasing intelligence.” The TOMBOT modestly achieves this objective using selective compile configurations to emulate a series of robot behaviors (i.e., Cruise, Home, Escape, Avoid, and Low Power). Figure 1 is a simple model illustration of a behavior program.

Figure 1: Behavior program
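As a minimal C sketch of this idea (an illustration of fixed-priority arbitration, not the TOMBOT library covered later in this series), each behavior reports whether it wants control plus its requested motor commands, and the arbiter grants the highest-priority active request:

/* Fixed-priority behavior arbitration, sketched with stubbed sensors. */
typedef struct {
    int active;   /* does this behavior want control this cycle?    */
    int left;     /* requested left/right motor commands, -100..100 */
    int right;
} request_t;

/* Sensor stubs; the real robot reads a bumper switch, IR range
 * sensors, and photocells here. */
static int bumper_hit(void)    { return 0; }
static int obstacle_near(void) { return 0; }
static int light_bearing(void) { return 0; } /* -100 left .. +100 right */

static request_t escape(void)  /* highest priority: back off and spin */
{ request_t r = { bumper_hit(), -50, 50 }; return r; }

static request_t avoid(void)   /* veer away from close obstacles */
{ request_t r = { obstacle_near(), 20, 60 }; return r; }

static request_t home_in(void) /* steer toward a strong light source */
{ int b = light_bearing(); request_t r = { 1, 50 + b / 4, 50 - b / 4 }; return r; }

static request_t cruise(void)  /* lowest priority: always move forward */
{ request_t r = { 1, 50, 50 }; return r; }

void arbitrate(int *left, int *right)
{
    /* Ordered highest to lowest priority; first active request wins. */
    request_t (*behaviors[])(void) = { escape, avoid, home_in, cruise };
    for (unsigned i = 0; i < sizeof behaviors / sizeof behaviors[0]; i++) {
        request_t r = behaviors[i]();
        if (r.active) { *left = r.left; *right = r.right; return; }
    }
}

Because Escape sits above Avoid, Home, and Cruise in the table, a bumper hit always preempts light tracking; reordering the table changes the robot’s apparent personality without touching any individual behavior.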

Joseph Jones’s Robot Programming: A Practical Guide to Behavior-Based Robotics (TAB Electronics, 2003) is a great reference book that helped guide me in this effort. It turns out that Jones was part of the design team for the Roomba product.

Debugging a mobile platform that is executing a series of concurrent behaviors can be a daunting task. So, to make things easier, I implemented complete remote control using a wireless link between the robot and a PC. With this link, I can enable or disable autonomous behavior, retrieve the robot’s sensor status and mode of operation, and curtail and avoid potential robot hazards. I also implemented additional operator feedback using a small graphics display, LEDs, and a simple sound buzzer. Note the TOMBOT’s power-up display in Photo 1b. We take Robot Boot Camp very seriously.

Minimalist System

As you can see in the robot’s block diagram (see Figure 2), the TOMBOT is very much a minimalist system with just enough components to demonstrate autonomous behaviors: Cruise, Escape, Avoid, and Home. All these behaviors require the use of left and right servos for autonomous maneuverability.

Figure 2: The TOMBOT system

The Cruise behavior simply keeps the robot in motion in the absence of any stimulus. The Escape behavior uses the bumper to sense a collision and then executes a reverse and 180° spin. The Avoid behavior makes use of the forward-looking IR sensors to veer left or right upon approaching a close obstacle. The Home behavior uses the front photocells to guide the robot toward a strong, highly directional light source. It all should add up to some very distinct “intelligent” operation. Figure 3 depicts the basic sensor and electronic layout.

Figure 3: Basic sensor and electronic layout

TOMBOT Assembly

The TOMBOT uses the low-cost robot platform (ArBot Chassis) and wheel set (X-Wheel assembly) from Budget Robotics (see Figure 4).

Figure 4: The platform and wheel set

A picture is worth a thousand words. Photo 2 shows two views of the TOMBOT prototype.

Photo 2a: The TOMBOT’s Sharp IR sensors, photo assembly, and more. b: The battery pack, right servo, and more.

Photo 2a shows the dual Sharp IR sensors. Just below them is the photocell assembly, a custom board with dual CdS GL5528 photoconductive cells and 2.2-kΩ current-limiting resistors. Below this is a bumper assembly consisting of two SPDT snap-action switches with levers (All Electronics Corp. CAT# SMS-196, left and right) fixed to a custom prefab plastic front bumper. Also shown are the solderless breakout board and the left servo. Photo 2b shows the rechargeable battery pack that resides on the lower base platform and the associated power switch. The electronics stack is visible: the XBee/buzzer and graphics card modules reside on the 32-bit Experimenter, which is plugged into a custom carrier board that provides interconnection through the solderless breakout to the rest of the system. Finally, note that the right servo is highlighted. The total TOMBOT package is not ideal; but remember, I’m talking about a prototype, and this particular configuration has held up nicely in several field demos.

I used Parallax (Futaba) continuous-rotation servos. They use a three-wire connector (+5 V, GND, and Control).

Figure 5 depicts a second-generation bumper assembly. The same snap-action switches with extended levers are bent and fashioned to form the bumper assembly as shown.

Figure 5: Second-generation bumper assembly

TOMBOT Electronics

A 32-bit Micro Experimenter is used as the CPU. This board is based on the high-end Microchip Technology PIC32MX695F512H, a 64-pin TQFP with 128 KB of RAM, 512 KB of flash memory, and an 80-MHz clock. I did not want to skimp on this component during the prototype phase. In addition, the 32-bit Experimenter supports a 102 × 64 monochrome graphics card with green/red backlight controls and LEDs. Since a full graphics library was already bundled with this Experimenter graphics card, it also represented good risk reduction during the prototyping phase. Details for both cards are available on the Kiba website.

The Experimenter supports six basic board-level connections to the outside world using the JP1, JP2, JP3, JP4, BOT, and TOP headers. A custom carrier board interfaces to the Experimenter via these connections and provides power and signal connections to the sensors and servos. The custom carrier accepts battery voltage and regulates it to +5 VDC. This +5 V is then further regulated by the Experimenter to its native +3.3-VDC operation. The solderless breadboard supports a resistor network that scales the +9-V battery voltage for measurement by the +3.3-V PIC processor. The breadboard also contains an LM324 quad op-amp to provide a buffer between the +3.3-V logic of the processor and the required +5-V operation of the servos. Figure 6 is a detailed schematic diagram of the electronics.

Figure 6: The design’s circuitry
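To illustrate the battery-sense arithmetic, the following C sketch converts a 10-bit ADC reading back into battery volts. The divider values are assumptions for illustration; the actual resistor network is shown in Figure 6.

/* Convert a 10-bit ADC reading back to battery volts.
 * Assumed divider: R_top = 22 k, R_bottom = 10 k, so 9 V at the
 * battery appears as 9 * 10/(22+10) = 2.81 V at the ADC pin,
 * safely below the 3.3-V reference. Values are illustrative. */
#define VREF    3.3f
#define ADC_MAX 1023.0f
#define R_TOP   22000.0f
#define R_BOT   10000.0f

static float battery_volts(unsigned adc_counts)
{
    float v_pin = VREF * (float)adc_counts / ADC_MAX;
    return v_pin * (R_TOP + R_BOT) / R_BOT;  /* undo the divider */
}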

A custom card carrying the XBee radio and buzzer was built that plugs into the Experimenter’s TOP and BOT connections. Photo 3 shows the modules and the carrier board. The robot uses a rechargeable 1,600-mAh battery system (typical of mid-range wireless toys) that provides hours of uninterrupted operation.

Photo 3: The modules and the carrier board

PIC32 On-Chip Peripherals

The TOMBOT uses PWM for the servos, a UART for the XBee, SPI and digital I/O for the LCD, analog input channels for all the sensors, and digital I/O for the buzzer and bumper detection. The key peripheral connections between the Experimenter and the rest of the system are shown in Figure 7.

Figure 7: Peripheral usage

The PIC32 pinouts and their associated Experimenter connections are detailed in Figure 8.

Figure 8: PIC32 peripheral pinouts and EXP32 connectors

The TOMBOT Motion Basics and the PIC32 Output Compare Peripheral

Let’s review the basics of TOMBOT motor control. The servos are Parallax (Futaba) continuous-rotation servos. With two-wheel control, the robot’s motion is controlled as per Table 1.

Table 1: Robot motion

The servos are controlled using a 20-ms (50-Hz) PWM pattern in which the pulse width can range from 1.0 to 2.0 ms. The effects of the different PWM pulse widths on the servos are shown in Figure 9.

Figure 9: Servo PWM control

The PIC32 microcontroller (used in the Experimenter) has five Output Compare modules (OCX, where X = 1, 2, 3, 4, 5). We use two of these peripherals, specifically OC3 and OC4, to generate the PWM that controls servo speed and direction. An OCX module can use either 16-bit Timer2 (TMR2), 16-bit Timer3 (TMR3), or the two combined as the 32-bit Timer23 as a time base and for the period (PR) setting of the output pulse waveform. In our case, we use Timer23 with PR set for 20 ms (50 Hz). The OCXRS and OCXR registers are loaded with a compare value that controls the width of the pulse generated during the output period. This value is compared against the timer during each period cycle. The OCX output starts high, and when a match occurs the OCX logic drives the output low. This is repeated on a cycle-by-cycle basis (see Figure 10).

Figure 10: PWM generation
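The following register-level C sketch shows one way to set this up for OC3, assuming an 80-MHz peripheral bus clock and a 1:8 Timer23 prescale (a 10-MHz tick). Pin mapping, OC4, and error handling are omitted, and the actual TOMBOT initialization code may differ.

#include <xc.h>   /* Microchip XC32 device headers */

#define PBCLK_HZ    80000000UL        /* peripheral bus clock (assumed) */
#define TICKS_PER_S (PBCLK_HZ / 8)    /* Timer23 at 1:8 prescale        */
#define FRAME_HZ    50                /* 20-ms servo frame              */

void servo_pwm_init(void)
{
    /* Timer2/3 paired as the 32-bit Timer23, 1:8 prescale */
    T2CON = 0;
    T2CONbits.T32 = 1;
    T2CONbits.TCKPS = 3;              /* 1:8 */
    PR2 = TICKS_PER_S / FRAME_HZ - 1; /* 200,000 ticks = 20 ms */
    TMR2 = 0;

    /* OC3 in PWM mode (fault pin disabled) on the 32-bit timer base */
    OC3CON = 0;
    OC3CONbits.OC32 = 1;
    OC3CONbits.OCM = 6;               /* PWM mode */
    OC3R = OC3RS = TICKS_PER_S / 1000 * 3 / 2; /* 1.5 ms = servo stop */

    T2CONbits.ON = 1;
    OC3CONbits.ON = 1;
}

/* width_us: 1,000 (full speed one way) to 2,000 (full speed the other);
 * 1,500 stops a continuous-rotation servo. */
void servo_set_pulse_us(unsigned width_us)
{
    OC3RS = (unsigned long)(TICKS_PER_S / 1000000UL) * width_us;
}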

Next Comes Software

We’ve set the research goals and objectives for our autonomous robot and covered the hardware associated with it. In the next installment, we will describe the software and operation.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC, area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.

MCU-Based Prosthetic Arm with Kinect

James Kim—a biomedical student at Ryerson University in Toronto, Canada—recently submitted an update on the status of an interesting prosthetic arm design project. The design features a Freescale 9S12 microcontroller and a Microsoft Kinect, which tracks arm movements that are then reproduced on the prosthetic arm.

He also submitted a block diagram.

Overview of the prosthetic arm system (Source: J. Kim)

Kim explains:

The 9S12 microcontroller board we use is Arduino form-factor compatible and was coded in C using CodeWarrior. The Kinect was coded in C# in Visual Studio using the latest version of the Microsoft Kinect SDK (1.5). In the article, I plan to discuss how the microcontroller was set up for deterministic control of the motors (including the timer setup and the PID code used), how the control was implemented to compensate for gravitational effects on the arm, and how we interfaced the microcontroller to the PC. This last part will involve a discussion of data logging as well as interfacing with the Kinect.
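While the full article is forthcoming, here is a generic C sketch of the kind of fixed-timestep PID loop such a joint controller typically runs from a periodic timer interrupt. The structure, gains, and the gravity feed-forward note are illustrative assumptions, not Kim’s actual code.

typedef struct {
    float kp, ki, kd;       /* gains, tuned offline (illustrative) */
    float integral;         /* accumulated error                   */
    float prev_error;       /* error from the previous step        */
    float out_min, out_max; /* actuator limits for clamping        */
} pid_ctl_t;

/* One PID step, called every dt seconds from a timer interrupt.
 * setpoint and measured are joint angles; the result drives the motor.
 * A gravity feed-forward term (e.g., proportional to the cosine of
 * the joint angle) could be added to the output to offload the
 * integrator, per the compensation Kim describes. */
float pid_step(pid_ctl_t *c, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    c->integral += error * dt;
    float deriv = (error - c->prev_error) / dt;
    c->prev_error = error;

    float out = c->kp * error
              + c->ki * c->integral
              + c->kd * deriv;
    if (out > c->out_max) out = c->out_max;  /* clamp to actuator range */
    if (out < c->out_min) out = c->out_min;
    return out;
}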

The Kinect tracks a user’s movement and the prosthetic arm replicates it. (Source: J. Kim, YouTube)


Circuit Cellar intends to publish an article about the project in an upcoming issue.