Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from the mouse and keyboard. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

Hand tracking results from Kinect data. The red regions are our tracking results, and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve: the amount of time and energy a person needs to learn how to complete a particular task. Instead of the user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.
If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.
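At this level of abstraction, the pipeline can be sketched in a few lines. The Python snippet below is only an illustration; the helper functions are hypothetical stand-ins for a real sensor driver, feature extractor, and sequence classifier.

```python
import numpy as np

def extract_features(frame):
    # Hypothetical per-frame feature extraction: here just a flattened, normalized frame.
    return np.asarray(frame, dtype=float).ravel() / 255.0

def classify_sequence(feature_window):
    # Placeholder sequence classifier; in practice an HMM or neural network
    # trained on labeled gesture or speech data would go here.
    energy = np.mean([np.linalg.norm(f) for f in feature_window])
    return "gesture" if energy > 0.1 else "rest"

def recognize(frames, window_size=30):
    """Stream frames, extract per-frame features, and classify each window."""
    window = []
    for frame in frames:                             # 1. data streaming from the sensor
        window.append(extract_features(frame))       # 2. feature extraction
        if len(window) == window_size:
            yield classify_sequence(window)          # 3. pattern recognition over the sequence
            window = []

# Example with synthetic frames standing in for sensor data:
fake_frames = [np.random.randint(0, 256, (8, 8)) for _ in range(60)]
print(list(recognize(fake_frames)))
```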

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries report each hand only as a single point, so the actual hand shape cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).
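The sketch below illustrates this idea rather than the exact pipeline described here: it combines a coarse skin-color mask, frame-to-frame motion, and depth-based closeness into a single salience map using OpenCV. The YCrCb skin range, the 4-m depth normalization, and the blur size are assumed values.

```python
import cv2
import numpy as np

def salience_map(bgr, prev_gray, depth_mm):
    # Skin-color cue: coarse YCrCb threshold (illustrative range).
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)).astype(np.float32) / 255.0

    # Motion cue: absolute difference against the previous grayscale frame.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray).astype(np.float32) / 255.0

    # Closeness cue: nearer pixels (smaller depth) get higher weight.
    closeness = 1.0 - np.clip(depth_mm.astype(np.float32) / 4000.0, 0.0, 1.0)

    # Combine the cues and smooth; the most salient region is the likely gesturing hand.
    salience = cv2.GaussianBlur(skin * motion * closeness, (11, 11), 0)
    return salience / (salience.max() + 1e-6), gray
```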

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration as well as the hand-shape descriptor. We also apply principal component analysis to reduce the final dimensionality of the feature vector.
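For a concrete picture, here is a sketch of how such a per-frame feature vector could be assembled with scikit-image and scikit-learn; the HOG settings, the depth-patch size, and the 20 retained principal components are illustrative choices, not necessarily the values used in our system.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_features(hand_depth_patch, pos, prev_pos, prev_vel, dt=1 / 30.0):
    """Hand-shape HOG descriptor concatenated with 3-D position, velocity, and acceleration."""
    shape = hog(hand_depth_patch, orientations=9,
                pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    vel = (pos - prev_pos) / dt          # finite-difference velocity
    acc = (vel - prev_vel) / dt          # finite-difference acceleration
    return np.concatenate([pos, vel, acc, shape]), vel

def reduce_dimension(feature_matrix, n_components=20):
    """Project stacked per-frame feature vectors onto their principal components."""
    return PCA(n_components=n_components).fit_transform(feature_matrix)
```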

A 3-D model of pointing gestures using a Kinect sensor. The top-left video shows background subtraction, arm segmentation, and fingertip tracking. The top-right video shows the raw depth-mapped data. The bottom-left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was applied earlier to speech recognition with great success.

There are two steps in gesture classification. First, we obtain training data and learn a model for each gesture. Then, during recognition, we find the model most likely to have produced the observed feature vectors. New developments in this area involve variations on the HMM, such as using hierarchical HMMs for real-time inference or discriminative training to increase recognition accuracy.
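As a concrete picture of those two steps, here is a minimal sketch using the hmmlearn library: one Gaussian HMM is trained per gesture, and recognition picks the model with the highest log-likelihood for the observed sequence. This is a plain HMM baseline, not the hierarchical or discriminatively trained variants mentioned above, and the number of hidden states is an arbitrary choice.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_models(training_data, n_states=5):
    """training_data: dict mapping gesture label -> list of (T_i, D) feature sequences."""
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                    # stack all frames of all sequences
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths for the fit
        models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, sequence):
    """Return the gesture whose model assigns the observed sequence the highest likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```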

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

The newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors; the goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and it could be applied to gesture recognition to see whether it further improves accuracy.
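As a toy illustration of that idea, the sketch below (using PyTorch, an assumption on my part) classifies gestures directly from sequences of raw frame values, letting 1-D convolutions over time learn features instead of relying on hand-designed descriptors. The layer sizes and input dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_inputs, n_gestures):
        super().__init__()
        # 1-D convolutions over the time axis learn local temporal features automatically.
        self.features = nn.Sequential(
            nn.Conv1d(n_inputs, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_gestures)

    def forward(self, x):  # x: (batch, n_inputs, time)
        return self.classifier(self.features(x).squeeze(-1))

# Example: 8 sequences of 30 frames with 128 raw values each, 5 gesture classes.
logits = GestureNet(n_inputs=128, n_gestures=5)(torch.randn(8, 128, 30))
```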

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

CC 276: MCU-Based Prosthetic Arm with Kinect

In its July issue, Circuit Cellar presents a project that combines the technology behind Microsoft’s Kinect gaming device with a prototype prosthetic arm.

The project team and authors of the article include Jung Soo Kim, an undergraduate student in Biomedical Engineering at Ryerson University in Toronto, Canada; Nika Zolfaghari, a master’s student at Ryerson; and Dr. James Andrew Smith, who specializes in Biomedical Engineering at Ryerson.

“We designed an inexpensive, adaptable platform for prototype prosthetics and their testing systems,” the team says. “These systems use Microsoft’s Kinect for Xbox, a motion sensing device, to track a healthy human arm’s instantaneous movement, replicate the exact movement, and test a prosthetic prototype’s response.”

“Kelvin James was one of the first to embed a microprocessor in a prosthetic limb in the mid-1980s…,” they add. “With the maker movement and advances in embedded electronics, mechanical T-slot systems, and consumer-grade sensor systems, these applications now have more intuitive designs. Integrating Xbox provides a platform to test prosthetic devices’ control algorithms. Xbox also enables prosthetic arm end users to naturally train their arms.”

They elaborate on their choices in building the four main hardware components of their design, which include actuators, electronics, sensors, and mechanical support:

“Robotis Dynamixel motors combine power-dense neodymium motors from Maxon Motors with local angle sensing and high gear ratio transmission, all in a compact case. Atmel’s on-board 8-bit ATmega8 microcontroller, which is similar to the standard Arduino, has high (17-to-50-ms) latency. Instead, we used a 16-bit Freescale Semiconductor MC9S12 microcontroller on an Arduino-form-factor board. It was bulkier, but it was ideal for prototyping. The Xbox system provided high-level sensing. Finally, we used Twintec’s MicroRAX 10-mm profile T-slot aluminum to speed the mechanical prototyping.”

The team’s goal was to design a prosthetic arm that is markedly different from others currently available. “We began by building a working prototype of a smooth-moving prosthetic arm,” they say in their article.

“We developed four quadrant-capable H-bridge-driven motors and proportional-derivative (PD) controllers at the prosthetic’s joints to run on a MC9S12 microcontroller. Monitoring the prosthetic’s angular position provided us with an analytic comparison of the programmed and outputted results.”
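For readers unfamiliar with PD control, the control law at each joint amounts to a few lines per loop iteration. The sketch below is purely illustrative and written in Python for readability; the team’s firmware runs in C on the MC9S12, and the gains shown are placeholders, not the project’s values.

```python
def pd_step(target_angle, measured_angle, prev_error, dt, kp=2.0, kd=0.1):
    """One PD control-loop iteration for a single joint; returns (motor command, error)."""
    error = target_angle - measured_angle   # proportional term input
    d_error = (error - prev_error) / dt     # derivative term input
    command = kp * error + kd * d_error     # PD control effort
    return command, error

# In the loop, the command would set the H-bridge PWM duty cycle, and the measured
# joint angle would be logged to compare the programmed and actual motion.
```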

A Technological Arts Esduino microcontroller board is at the heart of the prosthetic arm design.

The team concludes that its project illustrates how to combine off-the-shelf Arduino-compatible parts, aluminum T-slots, servomotors, and a Kinect into an adaptable prosthetic arm.

But more broadly, they say, it’s a project that supports the argument that “more natural ways of training and tuning prostheses” can be achieved because the Kinect “enables potential end users to manipulate their prostheses without requiring complicated scripting or programming methods.”

For more on this interesting idea, check out the July issue of Circuit Cellar. And for a video from an earlier Circuit Cellar post about this project, click here.


MCU-Based Prosthetic Arm with Kinect

James Kim—a biomedical student at Ryerson University in Toronto, Canada—recently submitted an update on the status of an interesting prosthetic arm design project. The design features a Freescale 9S12 microcontroller and a Microsoft Kinect, which tracks arm movements that are then reproduced on the prosthetic arm.

He also submitted a block diagram.

Overview of the prosthetic arm system (Source: J. Kim)

Kim explains:

The 9S12 microcontroller board we use is Arduino form-factor compatible and was coded in C using CodeWarrior. The Kinect was coded in C# using Visual Studio with the latest version of the Microsoft Kinect SDK 1.5. In the article, I plan to discuss how the microcontroller was set up to do deterministic control of the motors (including the timer setup and the PID code used), how the control was implemented to compensate for gravitational effects on the arm, and how we interfaced the microcontroller to the PC. This last part will involve a discussion of data logging as well as interfacing with the Kinect.

The Kinect tracks a user’s movement and the prosthetic arm replicates it. (Source: J. Kim, YouTube)

The system includes:

Circuit Cellar intends to publish an article about the project in an upcoming issue.