Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Because many sensors speak the common I2C protocol, chaining several of them onto one of these SBCs is remarkably easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
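
As a rough illustration of how little code an I2C sensor read requires on a Raspberry Pi-class SBC, the Python sketch below polls a hypothetical sensor using the smbus2 library; the bus number, device address, and register are placeholders to be replaced with values from a real sensor's datasheet.

```python
# Minimal I2C sensor poll on a Raspberry Pi-class SBC (sketch).
# Assumes the smbus2 library and a hypothetical sensor at address 0x48
# that returns a raw reading from register 0x00.
from smbus2 import SMBus
import time

I2C_BUS = 1           # /dev/i2c-1 on most Raspberry Pi models
SENSOR_ADDR = 0x48    # placeholder device address
DATA_REGISTER = 0x00  # placeholder register

with SMBus(I2C_BUS) as bus:
    for _ in range(5):
        raw = bus.read_byte_data(SENSOR_ADDR, DATA_REGISTER)
        print(f"raw sensor reading: {raw}")
        time.sleep(1.0)
```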

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remotely controlling and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID control and odometry for a small rover, then provide the user with functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
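
To make that pattern concrete, here is a minimal sketch using Python and the Flask microframework (one plausible choice; the essay names Python and PHP only generically). The send_drive_command() helper and its speed values are hypothetical stand-ins for real serial/I2C motor-driver code.

```python
# Minimal web-controlled rover sketch using Flask (assumed framework choice).
# The motor-control call is a placeholder for real serial/I2C driver code.
from flask import Flask, jsonify

app = Flask(__name__)

def send_drive_command(left_speed, right_speed):
    # Placeholder: a real robot would write these values to its motor
    # controller over serial or I2C here.
    print(f"drive command: left={left_speed} right={right_speed}")

@app.route("/forward")
def forward():
    send_drive_command(0.5, 0.5)
    return jsonify(status="ok", action="forward")

@app.route("/left")
def left():
    send_drive_command(-0.3, 0.3)
    return jsonify(status="ok", action="left")

@app.route("/right")
def right():
    send_drive_command(0.3, -0.3)
    return jsonify(status="ok", action="right")

if __name__ == "__main__":
    # Listen on all interfaces so phones and tablets on the same network
    # can reach the robot's web interface.
    app.run(host="0.0.0.0", port=5000)
```

A browser-side script can then drive the robot with simple AJAX calls to /forward, /left, and /right, which is exactly the kind of thin, inspectable layer students can later pull apart.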

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interface and begin to alter and customize it. This small, surface-level step gives students insight into what they can change, spurring them on to the next phase.

Students can start as end users of this robotic framework, but they can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve, and as the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads up to 1 A each at up to 24 V.
The RIO includes a 32-bit ARM Cortex-M4 microcontroller that processes and buffers the I/O and provides seamless communication with the Raspberry Pi. The RIO’s processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, the RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with an RS-232 transceiver, and a CAN bus interface.
The RIO is available in two versions. The RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.
www.roboteq.com

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from the mouse and keyboard. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

Hand tracking results from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, that is, the amount of time and energy a person needs to learn to complete a particular task. Instead of the user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.
If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries provide only the coordinates of the hands as points, so the actual hand shape cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).
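
As a rough sketch of that idea (a simplified illustration rather than our exact implementation), the Python/OpenCV fragment below combines a skin-color mask with a frame-differencing motion mask into a crude salience map; the HSV thresholds are illustrative values that would need tuning, and the depth-based closeness cue is omitted for brevity.

```python
# Rough gesture-salience sketch: combine skin-color and motion cues (OpenCV).
# The HSV skin thresholds below are illustrative and sensor-dependent.
import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def salience_map(prev_frame, frame):
    """Return a map in [0, 1] that is high where skin-colored pixels move."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH).astype(np.float32) / 255.0

    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray_curr, gray_prev).astype(np.float32)
    motion /= (motion.max() + 1e-6)  # normalize motion cue to [0, 1]

    # Treat the product of the two cues as an (unnormalized) probability map.
    return cv2.GaussianBlur(skin * motion, (15, 15), 0)
```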

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration, as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
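
A simplified sketch of that feature-extraction step (again, not our exact code) might use scikit-image’s HOG implementation and scikit-learn’s PCA; the cell sizes and the number of retained components below are placeholder choices.

```python
# Feature-extraction sketch: HOG hand-shape descriptor plus PCA reduction.
# Cell/block sizes and the number of PCA components are placeholder values.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_feature(hand_depth_patch, position, velocity, acceleration):
    """Build one frame's feature vector from a cropped 2-D hand depth patch."""
    shape_descriptor = hog(
        hand_depth_patch,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
    )
    # Concatenate the motion features (3-D each) with the hand-shape descriptor.
    return np.concatenate([position, velocity, acceleration, shape_descriptor])

def reduce_features(feature_matrix, n_components=30):
    """Project a (frames x features) matrix onto its principal components."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(feature_matrix)
```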

A 3-D model of pointing gestures using a Kinect sensor. The top-left video shows background subtraction, arm segmentation, and fingertip tracking. The top-right video shows the raw depth-mapped data. The bottom-left video shows the 3-D model, with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was applied earlier to speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
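
Assuming a library such as hmmlearn (the choice of library here is illustrative), the two steps can be sketched as follows: fit one Gaussian-emission HMM per gesture on its training sequences, then label a new sequence with the model that assigns it the highest log-likelihood.

```python
# Gesture-classification sketch: one Gaussian HMM per gesture class (hmmlearn).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(training_data, n_states=5):
    """training_data: dict mapping gesture name -> list of (T_i x D) sequences."""
    models = {}
    for gesture, sequences in training_data.items():
        X = np.concatenate(sequences)              # stack frames from all sequences
        lengths = [len(seq) for seq in sequences]  # per-sequence boundaries
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100)
        model.fit(X, lengths)
        models[gesture] = model
    return models

def classify(models, sequence):
    """Return the gesture whose HMM gives the observed sequence the
    highest log-likelihood."""
    return max(models, key=lambda gesture: models[gesture].score(sequence))
```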

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

The newest industry-scale development in speech recognition is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and applying it to gesture recognition may further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. The instruments generate waveforms with maximum frequencies from 5 to 50 MHz, depending on the model, for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5 MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com

Data Acquisition Instrument

The DI-145 USB data acquisition instrument features four ±100-V analog channels and two dedicated digital inputs. The included DATAQ WinDaq data acquisition software (DAS) enables you to display and record data to a PC hard drive in real time. Once recorded, data can be played back, analyzed, or exported to an array of data acquisition and spreadsheet formats.

DATAQ also provides access to the DI-145 data protocol, which enables you to use the DI-145 under Windows, Linux, or Mac OS. In addition, a .NET control is available to Windows users who wish to use a third-party programming language (e.g., Microsoft’s Visual Basic or National Instruments’ LabVIEW) to interface with the DI-145.

The four ±100-V fixed differential channels are protected from transient spikes up to ±150 V peak (±75 V, continuous). A 10-bit ADC provides 195-mV resolution across the full-scale measurement range. Digital inputs are protected up to ±30 VDC/peak AC. The digital inputs enable you to use a switch closure or TTL signal to remotely insert event marks or record data to disk.

The DI-145 measures 1.53” × 2.625” × 5.5” (3.89 cm × 6.67 cm × 13.97 cm) and weighs 3.6 oz. The data acquisition instrument costs $29 and includes a mini screwdriver, a USB cable, WinDaq/Lite DAS, access to the data protocol, and .NET control.

DATAQ Instruments, Inc.
www.dataq.com