Scott Garman, Technical Evangelist

This article was a preview of an interview in the February issue of Circuit Cellar. The full interview is now available.

Scott Garman is a Portland, OR-based Linux software engineer. Scott is very involved with the Yocto Project, an open-source collaboration that provides tools for the embedded Linux industry. He tells us how he recently helped Intel launch the MinnowBoard, the company’s first open-hardware SBC. The entire interview will be published in Circuit Cellar’s February issue.—Nan Price, Associate Editor

NAN: What is the Yocto Project?

SCOTT: The Yocto Project is centered on the OpenEmbedded build system, which offers a tremendous amount of flexibility in how you can create embedded Linux distros. It gives you the ability to customize nearly every policy of your embedded Linux system.

I’ve developed training materials for new developers getting started with the Yocto Project, including “Getting Started with the Yocto Project—New Developer Screencast Tutorial.”

Scott was involved with a MinnowBoard robotics and computer vision demo at LinuxCon Japan, May 2013.

NAN: Tell us about Intel’s recently introduced MinnowBoard SBC.

SCOTT: The MinnowBoard is based on Intel’s Queens Bay platform, which pairs a Tunnel Creek Atom CPU (the E640 running at 1 GHz) with the Topcliff Platform controller hub. The board has 1 GB of RAM and includes PCI Express, which powers our SATA disk support and gigabit Ethernet. It’s an SBC that’s well suited for embedded applications that can use that extra CPU and especially I/O performance.

Scott worked on a MinnowBoard demo built around an OWI Robotic Arm.

The MinnowBoard also has embedded bus standards including GPIO, I2C, SPI, and even CAN (used in automotive applications) support. We have an expansion connector on the board where we route these buses, as well as two lanes of PCI Express for custom high-speed I/O expansion.
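
Because the board targets embedded Linux development, these buses appear as standard kernel interfaces, so exercising them takes only a few lines of userspace code. Here is a minimal sketch, assuming a sysfs-style GPIO driver; the GPIO number is a placeholder, and the real pin mapping comes from the board documentation:

```python
# Minimal sketch (assumptions noted above): drive an expansion-header GPIO
# through the Linux kernel's sysfs interface.
import time

GPIO = "246"   # hypothetical sysfs number for an expansion-header pin

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

write("/sys/class/gpio/export", GPIO)                    # expose the pin
write("/sys/class/gpio/gpio%s/direction" % GPIO, "out")  # set as output

for _ in range(10):                                      # blink whatever is wired up
    write("/sys/class/gpio/gpio%s/value" % GPIO, "1")
    time.sleep(0.5)
    write("/sys/class/gpio/gpio%s/value" % GPIO, "0")
    time.sleep(0.5)

write("/sys/class/gpio/unexport", GPIO)                  # clean up
```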

NAN: What compelled Intel to make the MinnowBoard open hardware?

SCOTT: The main motivation for the MinnowBoard was to create an affordable Atom-based development platform for the Yocto Project. We also felt it was a great opportunity to try to release the board’s design as open hardware.

Member Profile: Scott Weber

Scott Weber

LOCATION:
Arlington, Texas, USA

MEMBER STATUS:
Scott said he started his Circuit Cellar subscription late in the last century. He chose the magazine because it had the right mix of MCU programming and electronics.

TECH INTERESTS:
He has always enjoyed mixing discrete electronic projects with MCUs. In the early 1980s, he built an MCU board based on an RCA CDP1802 using wire-wrap and programmed it with eight switches and a load button.

Back in the 1990s, Scott purchased a Microchip Technology PICStart Plus. “I was thrilled at how powerful and comprehensive the chip and tools were compared to the i8085 and CDP1802 devices I tinkered with years before,” he said.

RECENT EMBEDDED TECH ACQUISITION:
Scott said he recently treated himself to a brand-new Fluke 77-IV multimeter.

CURRENT PROJECTS:
Scott is building devices that can communicate through USB to MS Windows programs. “I don’t have in mind any specific system to control; it is something to learn and have fun with,” he said. “This means learning not only an embedded USB software framework, but also Microsoft Windows device drivers.”

THOUGHTS ON THE FUTURE OF EMBEDDED TECH:
“Embedded devices are popping up everywhere—in places most people don’t even realize they are being used. It’s fun discovering where they are being applied. It is so much easier to change the microcode of an MCU or FPGA as the unit is coming off the assembly line than it is to rewire a complex circuit design,” Scott said.

“I also like Member Profile Joe Pfeiffer’s final comment in Circuit Cellar 276: Surface-mount and ASIC devices are making a ‘barrier to entry’ for the hobbyist. You can’t breadboard those things! I gotta learn a good way to make my own PCBs!”

Designing Wireless Data Gloves

Kevin Marinelli, IT manager for the Mathematics Department at the University of Connecticut, recently answered CC.Post’s newsletter invitation for readers to tell us about their wearable electronics projects. Kevin exhibited his project, “Wireless Data Gloves,” at the World Maker Faire New York in September. He spoke with Circuit Cellar Managing Editor Mary Wilson about the gloves, which are based on an Adafruit ATmega32U4 breakout board, use XBee modules for wireless communication, and enable wearers to visually manipulate data and 3-D graphics.

MARY: Tell us a little bit about yourself and your educational and professional background.

KEVIN: I am originally from Sydney, Nova Scotia, in Canada. From an early age I have always been interested in taking things apart and creating new things. My degrees are a Bachelor’s in Computer Science from Dalhousie University in Halifax, Nova Scotia, and a Master’s in Computer Science from the University of New Brunswick in Fredericton, New Brunswick. I am currently working on my PhD in Computer Science at the University of Connecticut (UConn).

Kevin Marinelli

My first full-time employment was with ITS (the computer center) at Dalhousie University. After eight years, I moved on to an IT management position with the Ocean Mapping Group at the University of New Brunswick. I am currently the IT manager for the Mathematics Department at UConn.

I am also an active member of MakeHartford, which is a local group of makers in Hartford, Connecticut.

MARY: Describe the wireless data gloves you recently exhibited at the World Maker Faire in New York. What inspired the idea?

KEVIN: The idea was initially inspired 20 years ago, when I was at the University of New Brunswick using a Polhemus six-degree-of-freedom sensor to manipulate computer graphics. The device used magnetic fields to locate a sensor in three-dimensional space and detect its orientation. The combined location and orientation data provides data with six degrees of freedom. I have been interested in creating six-degree-of-freedom input devices ever since. With the Arduino and current sensor technologies, that is now possible.

Wireless data gloves on display at World Maker Faire New York. (Photo: Rohit Mehta)

MARY: What do the gloves do? What applications are there? Can you provide an example of who might use them and for what purpose?

KEVIN: The data gloves allow me to use my hands to wirelessly transmit telemetry data to a base station computer, which collects the data and provides it to any application programs that need it.

There are a number of potential applications, such as manipulating 3-D computer graphics, measuring data for medical applications, and remotely controlling vehicles, animatronics, and puppets.
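
To give a sense of the base-station side, here is a hedged sketch of a collection loop that reads glove telemetry from the coordinator XBee’s serial port. The port name, baud rate, and comma-separated line format are illustrative assumptions, not Kevin’s actual protocol:

```python
# Hedged sketch of a base-station collection loop (format is assumed).
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical port for the coordinator XBee

with serial.Serial(PORT, 57600, timeout=1) as xbee:
    while True:
        line = xbee.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                            # timeout with no data
        # Assumed format: "glove1,ax,ay,az,gx,gy,gz,mx,my,mz,pads"
        fields = line.split(",")
        glove_id = fields[0]
        imu = [float(v) for v in fields[1:-1]]  # 9-DOF sensor readings
        pads = int(fields[-1])                  # bitmask of touched pads
        print(glove_id, imu, bin(pads))         # hand off to applications
```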

MARY: Can you tell me about the data gloves’ design and the components used?

KEVIN: The basic design guidelines were to make the gloves self-contained, lightweight, easy to program, wireless, and rechargeable. The main electronic components are an Adafruit ATmega32U4 breakout board (Arduino Leonardo software compatible), a SparkFun 9DOF sensor board, an XBee Pro packet radio, a LiPo battery charger circuit, and a LiPo battery. These are all open-hardware projects or, in the case of the battery, ordinary consumer products.

The ATmega32U4 was chosen as the processor because it provides a USB port without any external components, such as an FTDI chip to convert between serial and USB communications. This frees up the serial port on the processor for communicating with the XBee radio.

For the sensors, the SparkFun 9DOF board was perfect because of its minuscule size and because it requires only four connections: two for power and two for I2C. The board has components with readily available datasheets, and there is access to working example code for the sensor board. Using an off-the-shelf product instead of designing one myself greatly reduced the design work.

Top of glove

The choice of an 800-mAh LiPo battery provides an excellent lightweight rechargeable power supply in a small form factor. The relatively small battery powers the project for more than 24 h of continuous use.

Palm of glove

A simple white cotton glove acts as the structure to mount the electronics. For user-controlled input, the glove has conductive fabric fingertips and palm. Touching a finger to the thumb, or the pad on the palm, closes an electrical pathway, which allows the microcontroller to detect the input.

For user-selectable input, each fingertip and the palm of the hand has a conductive fabric pad connected to the Adafruit microcontroller. The thumb and palm act as a voltage source, while the fingertips act as inputs to the microcontroller. This way, the microcontroller can detect which fingers are touching the thumb and the palm pads. Insulated 30-gauge phosphor bronze wires are sewn into the glove to connect the pads to the microcontroller.

MARY: Are the gloves finished? What were some of the design challenges? Do you plan any changes to the design?

KEVIN: The initial glove design and second version of the prototype have been completed. The major design challenges were finding a microcontroller board with sufficient capabilities to fit on the back of a hand, and configuring the XBee radios. The data glove design will continue to evolve over the next year as newer and more compact components become available.

Initially I was designing and building my own microcontroller circuit based on the ATmega32U4, but Adafruit came out with a nice, usable, designed board for my needs. So I changed the design to use their board.

SparkFun has a well-designed micro-USB-based LiPo battery charger circuit. It would have been ideal for my project, except that it lacks an on/off switch and has only through-hole solder points for powering an external project. I used SparkFun’s CadSoft EAGLE files to redesign the circuit, making it slightly more compact and adding a power switch and a JST connector for the power output.

The XBee radios were an interesting challenge on their own. My initial design used the standard XBee, but that caused communication complications when using multiple data gloves simultaneously. In reading Robert Faludi’s book Building Wireless Sensor Networks: With ZigBee, XBee, Arduino, and Processing, I learned that the XBee Pro was more suited to my needs because it could be configured on a private area network (PAN) with end-nodes for the data gloves and a coordinator for the base station.
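
For readers who want to try something similar, here is a hedged sketch of setting a PAN ID through the XBee’s AT command mode. The port, baud rate, and PAN ID are placeholders, and the exact commands vary by XBee series (ATCE, for instance, applies to the 802.15.4 firmware; other series designate the coordinator differently):

```python
# Hedged sketch: configure an XBee over its serial port via AT command mode.
import time
import serial  # pyserial

def configure_xbee(port, pan_id=b"3001", coordinator=False):
    with serial.Serial(port, 9600, timeout=2) as xb:
        time.sleep(1.1)              # guard time before escape sequence
        xb.write(b"+++")             # enter AT command mode
        time.sleep(1.1)              # guard time after escape sequence
        xb.write(b"ATID %s\r" % pan_id)                     # set the PAN ID
        xb.write(b"ATCE %d\r" % (1 if coordinator else 0))  # coordinator bit
        xb.write(b"ATWR\r")          # save settings to nonvolatile memory
        xb.write(b"ATCN\r")          # exit command mode
        print(xb.read(64).decode("ascii", errors="ignore"))  # "OK" replies

configure_xbee("/dev/ttyUSB0", coordinator=True)   # base-station radio
configure_xbee("/dev/ttyUSB1", coordinator=False)  # a data-glove end node
```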

One planned future change is to switch to the surface-mount version of the XBee Pro. This will reduce both the size and weight of the electronics for the project.

The current significant design challenge I am working on is how to prevent metal fatigue in the phosphor bronze wires as they bend when the hand and fingers flex. The fatigue problem occurs because I use a small diamond file to remove the Kapton insulation on the wires. This process introduces small nicks or makes the wires too thin, which then promotes the metal fatigue.

A third version is in the design stage. The new design will replace the SparkFun 9dof board with a smaller single-chip sensor, which I hope can be mounted directly on the Adafruit ATmega32U4 board.

MARY: What new skills or technologies did you learn from the project, if any?

KEVIN: Along the way to creating the gloves, I learned a great deal about modern electronics. My previous skills in electronics were learned in the ’70s with single-sided circuits with through-hole components and pre-made circuit boards. I can now design and create double-sided circuit boards with primarily surface-mounted components. For initial prototype designs, I use double-sided photosensitized circuit boards and etch them at home.

Learning to program Arduino boards and Arduino clones has been incredible. The fact that the boards can be programmed using C in a nice IDE with lots of support libraries for common programming tasks makes the platform an incredibly efficient tool. Having an enormous following makes it very easy to find technical support for solving problems with Arduino products and making Arduino clones.

Wireless networking is a key component of the project’s success. I was lucky to have a course in wireless sensor network design at UConn, which taught me how to leverage wireless technology and avoid many of the pitfalls. That, combined with some excellent reference books I found, ensured that the networking is stable. The network design provides more bandwidth than a single pair of data gloves requires, so it is feasible to have multiple people collaborating on the same project.

Designing microcontroller circuits using EAGLE has been an interesting experience. While most of the new components I use regularly in designs are available in libraries from Adafruit and SparkFun, I occasionally have to design my own parts in EAGLE. Using EAGLE to its fullest potential will still take some time, but I have become reasonably proficient with it.

For soldering, I mostly still use a standard temperature-controlled soldering iron with a standard tip. Amazingly, this allows me to solder 0402 resistors and capacitors and up to 100-pin chips. When I have components that need to be soldered underneath the package, I use solder paste and a modified electric skillet. This allows me to directly control the soldering temperature and monitor the process.

The battery charger circuit on my data glove is hand soldered and has a number of 0402-sized components, as well as a micro USB connector, which is also a challenge to hand solder properly.

MARY: Are there similar “data gloves” out there? How are yours different?

KEVIN: There are a number of data glove projects on the Internet. Some are commercial products, while others are academic projects.

My gloves are unique in that they are lightweight and self-contained on the cotton glove. All other projects that you can find on the Internet are either hard-wired to a computer or have components such as the microcontroller, batteries, or radio strapped to the arm or body.

Also, because the main structure is a self-contained cotton glove, the gloves do not interfere with other activities such as typing on a keyboard, using a mouse, writing with a pen, or even drinking from a glass. This was quite handy when developing the software for the glove because I could test the software and make programming corrections without the inconvenience of repeatedly putting the gloves on and taking them off.

MARY: Are you working on any other projects you’d like to briefly tell us about?

KEVIN: At UConn, we are lucky to have one of the few academic programs in puppetry in the US. In the spring, I plan on taking a fine arts course at UConn in designing and making marionette puppets. This will allow me to expand the use of my data gloves into controlling and manipulating puppets for performance art.

I am collaborating on designing circuit boards with a number of people in Hartford. The more interesting collaborations are with artists, where they think differently about technology than I do. Balam Soto of Open Wire Labs is a new media artist and one of the creative artists I collaborate with regularly. He is also a member of MakeHartford and presents at Maker Faires.

MARY: What was the response to the wireless data gloves at World Maker Faire New York?

KEVIN: The response to the data gloves was overwhelmingly positive. People were making comparisons to the Nintendo Power Glove and to the movie “Minority Report.” Several musicians commented that the gloves should be excellent for performing and recording virtual musical instruments such as a guitar, trumpet and drums.

For the demonstration, I showed a custom application that allowed both hands (or two people) to interactively manipulate points and lines on a drawing. Many people were encouraged to try the gloves for themselves, which enhanced the quality of the feedback I received.

The gloves are sized to fit my hands, which made them quite a challenge for younger children to use because their hands were “lost” in the gloves. Even with the size challenge, it was fun watching younger children manipulate the objects on the computer screen.

I look forward to the Maker Faire next year, when I will have implemented the newer design for the data gloves and will have additional software to demonstrate. I plan on trying to put together a presentation on some form of performance art using the data gloves.

Two Campuses, Two Problems, Two Solutions

In some ways, Salish Kootenai College (SKC), based in Pablo, MT, and Penn State Erie, The Behrend College, in Erie, PA, couldn’t be more different.

SKC, whose main campus is on the Flathead Reservation, is open to all students but primarily serves Native Americans of the Bitterroot Salish, Kootenai, and Pend d’Oreilles tribes. It has an enrollment of approximately 1,400. Penn State Erie has roughly 4,300.

But one thing the schools have in common is enterprising employees and students who recognized problems on their campuses and came up with technical solutions. Al Anderson, IT director at SKC, and Chris Coulston, head of the Computer Science and Software Engineering department at Penn State Erie (with his team), have written articles about their “campus solutions” to be published in upcoming issues of Circuit Cellar.

In the summer of 2012, Anderson and the IT department he supervises direct-wired the SKC dorms and student housing units with fiber and outdoor CAT-5 cable to provide students better Ethernet service.

The system is designed around a Raspberry Pi, which queries the TMP102 temperature sensor and is itself queried via the SNMP protocol.

“Prior to this, students accessed the Internet via a wireless network that provided very poor service,” Anderson says. “We wired 25 housing units, each with a small unmanaged Ethernet switch. These switches are daisy-chained in several different paths back to a central switch.”

To maintain the best service, the IT department needed to monitor the system’s links with Intermapper, a simple network management protocol (SNMP) monitoring application. The department also had to monitor the temperature inside the utility boxes, because exposure to the sun could cause the switches to get too hot.

This is the final installation of the Raspberry Pi in the SKC system. The clear acrylic case can be seen along with the TMP102 glued below the air hole drilled into the case. A ribbon cable was modified to connect the various pins of the TMP102 to the Raspberry Pi.

“We decided to build our own monitoring system using a Raspberry Pi to gather temperature data and monitor the network,” Anderson says. “We installed a Debian Linux distro on the Raspberry Pi, added an I2C Texas Instruments TMP102 temperature sensor…, wrote a small Python program to get the temperature via I2C and convert it to Fahrenheit, installed SNMP server software on the Raspberry Pi, added a custom SNMP rule to display the temperature from the script, and finally wrote a custom SNMP MIB to access the temperature information as a string and integer.”
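
For readers who want to replicate the sensor-reading step, a minimal sketch along the lines Anderson describes might look like this. The I2C bus number and the TMP102’s default 0x48 address are assumptions, and snmpd would then expose the script’s output through an “extend” rule:

```python
# Sketch: read the TMP102 over I2C and print the temperature in Fahrenheit.
# Sign handling for below-zero readings is omitted for brevity.
import smbus

TMP102_ADDR = 0x48                        # default TMP102 I2C address

bus = smbus.SMBus(1)                      # /dev/i2c-1 (0 on early Pi boards)
raw = bus.read_word_data(TMP102_ADDR, 0)  # temperature register 0x00
raw = ((raw << 8) | (raw >> 8)) & 0xFFFF  # smbus returns the bytes swapped
temp_c = (raw >> 4) * 0.0625              # 12-bit result, 0.0625 degC/LSB
temp_f = temp_c * 9.0 / 5.0 + 32.0
print("%.1f" % temp_f)                    # snmpd "extend" rule captures this
```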

Anderson, 49, who has a BS in Computer Science, did all this even as he earned his MS in Computer Science, Networking, and Telecommunications through the Johns Hopkins University Engineering Professionals program.

Anderson’s article covers the SNMP server installation, I2C TMP102 temperature integration, the Python temperature-monitoring script, the SNMP extension rule, and accessing the SNMP extension via a custom MIB.

“It has worked flawlessly, and made it through the hot summer fine,” Anderson said recently. “We designed it with robustness in mind.”

Meanwhile, Chris Coulston, head of the Computer Science and Software Engineering department at Penn State Erie, and his team noticed that the shuttle bus introduced as the school expanded had low ridership. Part of the cause was the unpredictable timing of the bus, which has seven regular stops but also picks up students who flag it down.

The mobile unit to be installed in the bus.

“In order to address the issues of low ridership, a team of engineering students and faculty constructed an automated vehicle locator (AVL), an application to track the campus shuttle and to provide accurate estimates of when the shuttle will arrive at each stop,” Coulston says.

The system’s three main hardware components are a user’s smartphone, a base station on campus, and a mobile tracker that stays on the traveling bus.

The base station consists of an XTend 900-MHz wireless modem connected to a Raspberry Pi, Coulston says. The Pi runs a web server to handle requests from users’ smartphones. The mobile tracker consists of a GPS receiver, a Microchip Technology PIC18F26K22 microcontroller, and an XTend 900-MHz wireless modem.
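
To illustrate the web-serving half, here is a hedged sketch of how a Raspberry Pi could publish the tracker’s last GPS fix as JSON for a smartphone page to plot. Flask, the route, and the field names are assumptions on my part, not necessarily the team’s implementation:

```python
# Hedged sketch of the base station's web service (assumptions noted above).
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder fix; in the real system a background thread reading the
# XTend modem would update this as position reports arrive.
last_fix = {"lat": 42.1189, "lon": -79.9850, "age_s": 0}

@app.route("/bus")
def bus_position():
    return jsonify(last_fix)   # the map page polls this endpoint

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```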

Coulston and his team completed a functional prototype by the time classes started in August. As a result, a student can call up a bus locator web page on a smartphone. The browser loads a map of the campus via the Google Maps JavaScript API, and JavaScript code overlays the bus and bus stops. The bus locator page is live between 7:40 a.m. and 7 p.m. EST, Monday through Friday.

“The system works remarkably well, providing reliable, accurate information about our campus bus,” Coulston says. “Best of all, it does this autonomously, with very little supervision on our part. It has worked so well, we have received additional funding to add another base station to campus to cover an extended route coming next year.”

The base station for the mobile tracker is a sandwich of Raspberry Pi, interface board, and wireless modem.

And while the system has helped Penn State Erie students make it to class on time, what does Coulston and his team’s article about it offer Circuit Cellar readers?

“This article should appeal to readers because it’s a web-enabled embedded application,” Coulston says. “We plan on providing users with enough information so that they can create their own embedded web applications.”

Look for the article in an upcoming issue. In the meantime, if you have a DIY wireless project you’d like to share with Circuit Cellar, please e-mail editor@circuitcellar.com.
Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

This shows the hand-tracking result from Kinect data. The red regions are our tracking results, and the green lines are the skeleton-tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish, sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK or PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes 3-D position, velocity, and hand acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
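
In outline, that per-frame feature extraction could be sketched as follows; the HOG parameters, patch size, and PCA dimension are illustrative placeholders rather than the values used in our system:

```python
# Illustrative sketch: HOG descriptor of the segmented hand patch,
# concatenated with 3-D position, velocity, and acceleration, then
# reduced with PCA.
import cv2
import numpy as np
from sklearn.decomposition import PCA

hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                        _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)

def frame_feature(hand_patch, pos, vel, acc):
    patch = cv2.resize(hand_patch, (64, 64))   # 8-bit hand image patch
    shape_desc = hog.compute(patch).flatten()  # hand-shape descriptor
    return np.concatenate([pos, vel, acc, shape_desc])

# Stack features over many training frames, then learn a compact basis.
X = np.random.rand(500, 9 + 1764)    # stand-in for real extracted features
pca = PCA(n_components=30).fit(X)
X_reduced = pca.transform(X)         # final low-dimensional feature vectors
```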

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
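
A minimal sketch of this two-step scheme, with hmmlearn as an assumed, convenient implementation of Gaussian HMMs:

```python
# Sketch: train one HMM per gesture from labeled feature-vector sequences,
# then label a new sequence with the model scoring the highest likelihood.
import numpy as np
from hmmlearn import hmm

def train_models(sequences_by_gesture, n_states=5):
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        X = np.vstack(seqs)               # all frames, stacked
        lengths = [len(s) for s in seqs]  # per-sequence boundaries
        m = hmm.GaussianHMM(n_components=n_states, n_iter=50)
        m.fit(X, lengths)
        models[gesture] = m
    return models

def classify(models, seq):
    # Find the most likely model to have produced the observed vectors.
    return max(models, key=lambda g: models[g].score(seq))
```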

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.