John Safrit Wins the CC Code Challenge (Week 24)

We have a winner of last week’s CC Weekly Code Challenge, sponsored by IAR Systems! We posted a code snippet with an error and challenged the engineering community to find the mistake!

Congratulations to John Safrit of Mebane, North Carolina, United States for winning the CC Weekly Code Challenge for Week 24! John will receive a CC T-shirt and a one-year subscription to Circuit Cellar.

John’s correct answer was randomly selected from the pool of responses that correctly identified an error in the code. John answered:

Line #22: need to terminate the space string; add: space[d] = '\0';

[Image: 2013_code_challenge_24_answer]

You can see the complete list of weekly winners and code challenges here.

What is the CC Weekly Code Challenge?
Each week, Circuit Cellar’s technical editors purposely insert an error in a snippet of code. It could be a semantic error, a syntax error, a design error, a spelling error, or another bug the editors slip in. You are challenged to find the error. Once the submission deadline passes, Circuit Cellar will randomly select one winner from the group of respondents who submit the correct answer.

Inspired? Want to try this week’s challenge? Get started!

Submission Deadline: The deadline for each week’s challenge is Sunday, 12 PM EST. Refer to the Rules, Terms & Conditions for information about eligibility and prizes.

Rob Tholl Wins the CC Code Challenge (Week 23)

We have a winner of last week’s CC Weekly Code Challenge, sponsored by IAR Systems! We posted a code snippet with an error and challenged the engineering community to find the mistake!

Congratulations to Rob Tholl of Calgary, Alberta, Canada for winning the CC Weekly Code Challenge for Week 23! Rob will receive a CC Gold Issues Archive.

Rob’s correct answer was randomly selected from the pool of responses that correctly identified an error in the code. Rob answered:

Line 14: need &array[c] to write to the proper memory location

[Image: 2013_code_challenge_23_answer]

You can see the complete list of weekly winners and code challenges here.

What is the CC Weekly Code Challenge?
Each week, Circuit Cellar’s technical editors purposely insert an error in a snippet of code. It could be a semantic error, a syntax error, a design error, a spelling error, or another bug the editors slip in. You are challenged to find the error. Once the submission deadline passes, Circuit Cellar will randomly select one winner from the group of respondents who submit the correct answer.

Inspired? Want to try this week’s challenge? Get started!

Submission Deadline: The deadline for each week’s challenge is Sunday, 12 PM EST. Refer to the Rules, Terms & Conditions for information about eligibility and prizes.

Mike Brown Wins the CC Code Challenge (Week 22)

We have a winner of last week’s CC Weekly Code Challenge, sponsored by IAR Systems! We posted a code snippet with an error and challenged the engineering community to find the mistake!

Congratulations to Mike Brown of Meldreth, Cambridgeshire, United Kingdom for winning the CC Weekly Code Challenge for Week 22! Mike will receive an IAR Kickstart: KSK-TMPM061-JL.

Mike’s correct answer was randomly selected from the pool of responses that correctly identified an error in the code. Mike answered:

Line 9: Use "div.container" to select the div with class "container"

Note: an acceptable alternate answer was to change "class" to "id" on line 23, as indicated in the image below.

[Image: 2013_code_challenge_22_answer]

You can see the complete list of weekly winners and code challenges here.

What is the CC Weekly Code Challenge?
Each week, Circuit Cellar’s technical editors purposely insert an error in a snippet of code. It could be a semantic error, a syntax error, a design error, a spelling error, or another bug the editors slip in. You are challenged to find the error. Once the submission deadline passes, Circuit Cellar will randomly select one winner from the group of respondents who submit the correct answer.

Inspired? Want to try this week’s challenge? Get started!

Submission Deadline: The deadline for each week’s challenge is Sunday, 12 PM EST. Refer to the Rules, Terms & Conditions for information about eligibility and prizes.

Brian Shewan Wins the CC Code Challenge (Week 21)

We have a winner of last week’s CC Weekly Code Challenge, sponsored by IAR Systems! We posted a code snippet with an error and challenged the engineering community to find the mistake!

Congratulations to Brian Shewan of Nova Scotia, Canada for winning the CC Weekly Code Challenge for Week 21! Brian will receive an Elektor 2012 & 2011 Archive DVD.

Brian’s correct answer was randomly selected from the pool of responses that correctly identified an error in the code. Brian answered:

Line #4 – Missing ‘.’ after ‘PROGRAM-ID’. Change to “PROGRAM-ID. JUST-A-TEST.”

[Image: 2013_code_challenge_21_answer]

You can see the complete list of weekly winners and code challenges here.

What is the CC Weekly Code Challenge?
Each week, Circuit Cellar’s technical editors purposely insert an error in a snippet of code. It could be a semantic error, a syntax error, a design error, a spelling error, or another bug the editors slip in. You are challenged to find the error. Once the submission deadline passes, Circuit Cellar will randomly select one winner from the group of respondents who submit the correct answer.

Inspired? Want to try this week’s challenge? Get started!

Submission Deadline: The deadline for each week’s challenge is Sunday, 12 PM EST. Refer to the Rules, Terms & Conditions for information about eligibility and prizes.

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows that people prefer to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

This shows the hand-tracking result from Kinect data. The red regions are our tracking results, and the green lines are the skeleton-tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software is analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.
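To make that shared structure concrete, here is a minimal Python sketch of such a pipeline. The extract and classify callables and the window size are hypothetical placeholders, not pieces of our actual system:

# A minimal sketch of the shared pipeline: sensor frames flow through a
# per-frame feature extractor into a sliding-window sequence classifier.
from typing import Callable, Iterable, Iterator, List

import numpy as np

def feature_stream(frames: Iterable[np.ndarray],
                   extract: Callable[[np.ndarray], np.ndarray]) -> Iterator[np.ndarray]:
    """Turn raw sensor frames into a time series of feature vectors."""
    for frame in frames:
        yield extract(frame)

def recognize(features: Iterable[np.ndarray], window: int,
              classify: Callable[[np.ndarray], str]) -> Iterator[str]:
    """Run pattern recognition over a sliding window of feature vectors."""
    buffer: List[np.ndarray] = []
    for vec in features:
        buffer.append(vec)
        if len(buffer) == window:
            yield classify(np.stack(buffer))  # (window, feature_dim) array
            buffer.pop(0)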

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries report each hand only as a point coordinate, so the actual hand shape cannot be evaluated.
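Even so, a point coordinate is a useful anchor. One simple, illustrative way to recover a hand region for shape analysis is to crop a fixed window of the depth map around the reported hand joint; the window size below is an arbitrary choice:

import numpy as np

def crop_hand_patch(depth_map: np.ndarray, hand_xy, half: int = 32) -> np.ndarray:
    """Cut a (2*half) x (2*half) depth patch around the tracker's hand point."""
    x, y = hand_xy                      # hand-joint pixel coordinates
    h, w = depth_map.shape
    y0, y1 = max(0, y - half), min(h, y + half)
    x0, x1 = max(0, x - half), min(w, x + half)
    return depth_map[y0:y1, x0:x1]      # patch containing the hand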

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).
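The exact formulation is beyond this article’s scope, but the flavor of the computation can be sketched with OpenCV and NumPy. The YCrCb skin thresholds and the simple multiplicative weighting below are illustrative stand-ins, not our actual models:

import cv2
import numpy as np

def gesture_salience(frame_bgr, prev_gray, depth_m):
    """Return a rough per-pixel gesture-salience map and the current gray frame."""
    # Skin-color cue: crude YCrCb threshold standing in for a learned model.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)).astype(np.float32) / 255.0

    # Motion cue: frame differencing against the previous grayscale frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray).astype(np.float32) / 255.0

    # Closeness cue: pixels nearer to the sensor get larger weight.
    closeness = 1.0 / np.maximum(depth_m, 0.5)   # clamp to avoid divide-by-zero
    closeness /= closeness.max()

    # Combine the cues and smooth them into a probability-like map.
    salience = cv2.GaussianBlur(skin * motion * closeness, (15, 15), 0)
    return salience / max(float(salience.max()), 1e-6), gray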

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration as well as the hand-shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
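For illustration, the per-frame descriptor can be sketched with scikit-image’s hog and scikit-learn’s PCA. The 64 × 64 patch size and the 30 retained components are arbitrary choices, not our published settings:

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_feature(hand_patch, pos, vel, acc):
    """hand_patch: 2-D depth crop of the hand; pos/vel/acc: 3-D vectors."""
    shape_desc = hog(hand_patch, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # Kinematics (9 numbers) plus the compact shape descriptor in one vector.
    return np.concatenate([pos, vel, acc, shape_desc])

# Dummy training data: one feature vector per frame, just to make this run.
frames = [frame_feature(np.random.rand(64, 64),
                        np.random.rand(3), np.random.rand(3), np.random.rand(3))
          for _ in range(200)]
X = np.stack(frames)

# PCA reduces the final feature dimension (30 here is an arbitrary choice).
pca = PCA(n_components=30).fit(X)
X_reduced = pca.transform(X)            # shape: (200, 30)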

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature-vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was applied earlier to speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
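As a concrete illustration, both steps can be sketched with the hmmlearn package’s GaussianHMM. This is one common HMM implementation, not necessarily the one we used, and the number of hidden states below is arbitrary:

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_gesture, n_states=5):
    """Step 1: fit one HMM per gesture from its labeled training sequences."""
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)                  # concatenate all sequences
        lengths = [len(s) for s in seqs]     # lengths keep sequences separate
        models[gesture] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, seq):
    """Step 2: pick the gesture whose model best explains the new sequence."""
    return max(models, key=lambda g: models[g].score(seq))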


The newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods required careful selection of feature vectors; the goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and it could be applied to gesture recognition to see whether it further improves accuracy.
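As a toy illustration of the contrast, the PyTorch sketch below learns its features directly from raw hand patches rather than relying on a hand-designed descriptor such as HOG. It is purely illustrative and not from our work:

import torch.nn as nn

class GestureFeatureNet(nn.Module):
    """Learns features directly from raw hand patches instead of using HOG."""
    def __init__(self, n_gestures: int):
        super().__init__()
        self.features = nn.Sequential(       # learned feature extractor
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_gestures)

    def forward(self, x):                    # x: (batch, 1, H, W) depth patches
        return self.classifier(self.features(x))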

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.
Ying Yin

Ying Yin is a PhD candidate and a research assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.