Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. By leveraging the common I2C protocol, chains of these sensors could be added with very little wiring or code. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
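
As a rough illustration of how little code such a sensor chain requires, here is a minimal Python sketch that polls one hypothetical I2C sensor from a Raspberry Pi using the smbus2 library; the bus number, device address, and register are placeholders for whatever part is actually wired up.

# Minimal sketch: polling a hypothetical I2C sensor from a Raspberry Pi.
# The address and register below are placeholders; substitute the values
# from the datasheet of the sensor actually on the bus.
from smbus2 import SMBus

I2C_BUS = 1          # /dev/i2c-1 on most Raspberry Pi models
SENSOR_ADDR = 0x40   # hypothetical 7-bit device address
DISTANCE_REG = 0x02  # hypothetical register holding a 16-bit distance value

def read_distance_cm():
    """Read one 16-bit big-endian reading from the sensor."""
    with SMBus(I2C_BUS) as bus:
        raw = bus.read_i2c_block_data(SENSOR_ADDR, DISTANCE_REG, 2)
    return (raw[0] << 8) | raw[1]

if __name__ == "__main__":
    print("Distance:", read_distance_cm(), "cm")

Adding another sensor to the chain is just another address on the same two wires, which is exactly the kind of incremental complexity that works well in a classroom.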

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remotely controlling and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly from a web browser, enabling the robot to perform simple tasks.
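
As a sketch of what that server-side layer might look like (assuming Python with the Flask microframework; drive_motors() is a hypothetical helper standing in for the real PID/odometry and motor-interface code), each movement becomes a small HTTP endpoint the browser can call via AJAX:

# Minimal sketch of a web-controlled rover, assuming Python and Flask.
# drive_motors() is a hypothetical placeholder for the code that would
# actually talk to the motor controller over serial or I2C.
from flask import Flask, jsonify

app = Flask(__name__)

def drive_motors(left_speed, right_speed, duration_s):
    # Real code would run the PID/odometry loop and command the motors here.
    pass

@app.route("/forward")
def forward():
    drive_motors(0.5, 0.5, 1.0)   # both wheels ahead for one second
    return jsonify(status="ok", action="forward")

@app.route("/left")
def left():
    drive_motors(-0.3, 0.3, 0.5)  # pivot left
    return jsonify(status="ok", action="left")

@app.route("/right")
def right():
    drive_motors(0.3, -0.3, 0.5)  # pivot right
    return jsonify(status="ok", action="right")

if __name__ == "__main__":
    # Listen on all interfaces so phones and tablets on the LAN can connect.
    app.run(host="0.0.0.0", port=8080)

A button in the robot’s web page would then simply issue an AJAX request such as GET /forward, and the same endpoints are available to any other web-enabled device on the network.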

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interface and begin to alter and customize it. This small, superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but they can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a more fundamental level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of other SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework for interacting with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system by adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

James Bowman

 JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?

The Gameduino 2 uses a Future Technology Devices International (FTDI) FT800 chip to drive its graphics.

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needed an external monitor, and you had to supply your own inputs (e.g., buttons and joysticks). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?

An Arduino, Ethernet adapter, and a Gameduino

 JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

 NAN: Is it easy or possible to customize the Gameduino 2?

 JAMES: I would have to say no. The PCB itself is entirely surface mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot-air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately, the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

 NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

 JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

 JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.


Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.
If sensors are like eyes and ears, software is analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries give only the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
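
A rough sketch of this per-frame feature-extraction step is shown below, using scikit-image’s histogram-of-oriented-gradients function and scikit-learn’s PCA on synthetic data; the patch size, frame count, and reduced dimension are illustrative choices, not the values used in the actual system.

# Rough sketch of the per-frame feature extraction described above, using
# scikit-image's HOG descriptor and scikit-learn's PCA. The patch size,
# frame count, and reduced dimension are illustrative only.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_features(depth_patch, position, velocity, acceleration):
    """One feature vector: HOG shape descriptor of the hand's depth patch
    concatenated with 3-D position, velocity, and acceleration."""
    shape_descriptor = hog(depth_patch, orientations=9,
                           pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([position, velocity, acceleration, shape_descriptor])

# Stand-in data: 40 frames of a 64 x 64 depth patch plus 3-D kinematics.
rng = np.random.default_rng(0)
frames = [frame_features(rng.random((64, 64)),
                         rng.random(3), rng.random(3), rng.random(3))
          for _ in range(40)]
sequence = np.vstack(frames)

# Reduce the dimensionality of the feature vectors before recognition.
pca = PCA(n_components=20)
reduced = pca.fit_transform(sequence)
print(reduced.shape)  # (40, 20)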

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data. It was used earlier in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the model most likely to have produced the observed feature vectors. New developments in the area involve variations of the HMM, such as using hierarchical HMMs for real-time inference or discriminative training to increase recognition accuracy.
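
The sketch below shows one way to realize those two steps with the hmmlearn library: fit one Gaussian HMM per gesture from its training sequences, then label a new sequence with whichever model assigns it the highest log-likelihood. The gesture names, model sizes, and synthetic training data are purely illustrative.

# Sketch of two-step gesture classification with hmmlearn. The gesture
# names, model sizes, and synthetic training data are illustrative only;
# a real system would use recorded, PCA-reduced feature-vector sequences.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def make_sequences(offset, n_seqs=5, length=30, dim=20):
    """Synthetic stand-in for one gesture's training sequences."""
    return [rng.normal(loc=offset, size=(length, dim)) for _ in range(n_seqs)]

training = {"swipe": make_sequences(0.0), "circle": make_sequences(2.0)}

# Step 1: learn one HMM per gesture from its training sequences.
models = {}
for name, seqs in training.items():
    X = np.vstack(seqs)               # stack sequences frame by frame
    lengths = [len(s) for s in seqs]  # tell the HMM where each sequence ends
    model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    models[name] = model

# Step 2: classify a new sequence by the model with the highest log-likelihood.
test_seq = rng.normal(loc=2.0, size=(30, 20))
best = max(models, key=lambda name: models[name].score(test_seq))
print("Recognized gesture:", best)  # expected: "circle"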

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. While earlier machine-learning methods require careful selection of feature vectors, the goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and it could also be applied to gesture recognition to see whether it further improves accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Client Profile: Pico Technology

Pico Technology
320 North Glenwood Boulevard
Tyler, TX 75702

Contact: sales@picotech.com

Embedded Products/Services: Pico Technology’s PicoScope 5000 series uses reconfigurable ADC technology to offer a choice of resolutions from 8 to 16 bits. For more information, visit www.picotech.com/picoscope5000.html.

Product information: The new PicoScope 5000 series oscilloscopes have a significantly different architecture. High-resolution ADCs can be applied to the input channels in different series and parallel combinations to boost the sampling rate or the resolution.

In Series mode, the ADCs are interleaved to provide 1 GS/s at 8 bits. In Parallel mode, multiple ADCs are sampled in phase on each channel to increase the resolution and dynamic performance (up to 16 bits).

In addition to their flexible resolution, the oscilloscopes have ultra-deep memory buffers of up to 512 MB to enable long captures at high sampling rates. They also come with advanced software as standard, including serial decoding, mask limit testing, and segmented memory.

The PicoScope 5000 series oscilloscopes are currently available at www.picotech.com.

The two-channel, 60-MHz model with a built-in function generator costs $1,153. The four-channel, 200-MHz model with a built-in arbitrary waveform generator (AWG) costs $2,803. The pricing includes a set of matched probes, all necessary software, and a five-year warranty.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. Depending on the model, the instruments can generate waveforms from 5 MHz up to 50 MHz for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5 MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com