Talking Hands: American Sign Language Gesture Recognition Glove

Roberto developed a glove that enables communication between the user and those
around him. While the design is intended for use by people communicating in American Sign Language, you can apply what you learn in this article to a variety of communications applications.
PHOTO 1-Here you see the finished product with all of the sensors sewn in. The use of string as opposed to adhesive for the sensors allowed the components to smoothly slide back and forth as the hand was articulated.

By Roberto Villalba

While studying at Cornell University in 2014, my lab partner Monica Lin and I designed and built a glove to be worn on the right hand that uses a machine learning (ML) algorithm to translate sign language into spoken English (see Photo 1). Our goal was to create a way for the speech impaired to communicate with the general public more easily. Since every person's hand is a unique size and shape, we aimed to create a device that could provide reliable translations regardless of those differences. Our device relies on a variety of sensors, such as flex sensors, a gyroscope, an accelerometer, and touch sensors, to quantify the state of the user's hand. These sensors allow us to capture the flex of each finger, the hand's orientation, rotation, and points of contact. By collecting a moderate amount of this data for each sign and feeding it into an ML algorithm, we are able to learn the association between sensor readings and their corresponding signs.

We use a microcontroller to read, filter, and send the data from the glove to a PC. Initially, some data is gathered from the users, and that information is used to train a classifier that learns to differentiate between signs. Once the training is done, the user is able to put on the glove and make gestures, which the computer then turns into audible output.

FIGURE 1-After performing some calculations and characterizing our flex sensors, we decided to use a 10-kΩ resistor. Note that the rightmost point goes into one of the microcontroller's ADC inputs.

We use the microcontroller's analog-to-digital converter (ADC) to read the voltage drop across each of the flex sensors. We then move on to reading the linear acceleration and rotation values from the accelerometer and gyro sensor using I²C. And finally, we get binary readings from each of the touch sensors indicating whether or not there is contact. We perform as many readings as possible within a given window of time and use all of this data to do some smoothing. This information is then sent over serial to the PC, where it is gathered and processed. On the PC, Python listens to the information coming in from the microcontroller and either stores data or predicts based on already learned information. Our code includes scripts for gathering data, loading stored data, classifying the data that is being streamed live, and some additional scripts to help with visualization of sensor readings and so on.

The design comprises an Atmel ATmega1284P microcontroller and a glove onto which the various sensors and necessary wires were sewn. Each finger has one Spectra Symbol flex sensor stitched on the backside of the glove. The accelerometer and gyro sensors are attached to the center of the back of the glove. The two contact sensors were made out of copper tape and wire that was affixed to four key locations.

Since each flex sensor has a resistance that varies depending on how much the finger is bent, we attached each flex sensor as part of a voltage divider circuit in order to obtain a corresponding voltage that can then be input into the microcontroller.


Each flex sensor forms a divider with the fixed resistor R1, so the voltage seen by the ADC is VOUT = VIN × R1 / (R1 + RFLEX), taking the output across R1. We determined a good value for R1 by analyzing expected values from the flex sensor. Each one has a flat resistance of 10 kΩ and a maximum expected resistance (obtained by measuring its resistance on a clenched fist) of about 27 kΩ. In order to obtain the maximum range of possible output voltages from the divider circuit given an input voltage of 5 V, we plotted the expected ranges using this equation and values of R1 in the range of 10 kΩ to 22 kΩ. We found that the differences between the ranges were negligible and opted to use 10 kΩ for R1 (see Figure 1).

Our resulting voltage divider has an output range of about 1 V. We were initially concerned that the resulting values from the microcontroller's ADC would be too close together for the learning algorithm to discern between different values sufficiently. We planned to address this by increasing the input voltage to the voltage divider if necessary, but we found that the range of voltages described earlier was sufficient and performed extremely well.
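The comparison is easy to reproduce. Here is a minimal Python sketch, assuming the 10-kΩ flat and 27-kΩ clenched-fist figures, a 5-V input, and the output taken across R1 (our reading of Figure 1):

```python
# Sketch: compare flex-sensor divider output ranges for candidate R1 values.
VIN = 5.0       # divider input, volts
R_FLAT = 10e3   # flex sensor lying flat, ohms
R_BENT = 27e3   # flex sensor on a clenched fist, ohms

def vout(r1, rflex, vin=VIN):
    """Voltage across R1 in the Rflex/R1 divider."""
    return vin * r1 / (r1 + rflex)

for r1 in (10e3, 15e3, 22e3):
    lo, hi = vout(r1, R_BENT), vout(r1, R_FLAT)
    print(f"R1 = {r1 / 1e3:>4.0f} kOhm: {lo:.2f} V to {hi:.2f} V "
          f"(range {hi - lo:.2f} V)")
```

With R1 = 10 kΩ this gives roughly 1.35 V to 2.50 V, the "about 1 V" range mentioned above, and the larger R1 values do little better, which is why the choice hardly matters.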

The InvenSense MPU-6050 accelerometer and gyro sensor package operates on a lower VCC (3.3 V) compared to the microcontroller's 5 V. So as not to burn out the chip, we created a voltage regulator using an NPN transistor and a trimpot, connected as shown. The trimpot was adjusted so that the output of the regulator reads 3.3 V. This voltage also serves as the source for the pull-up resistors on the SDA and SCL wires to the microcontroller. Since the I²C devices are capable only of driving the input voltages low, we connect them to VCC via two 4.7-kΩ pull-up resistors (see Figure 2).

As described later, we found that we needed to add contact sensors to several key spots on the glove (see Figure 3). These essentially function as switches that pull the microcontroller input pins to ground to signal contact (be sure to set up the microcontroller pins to use the internal pull-up resistors).

Figure 2: Here we see the schematic of the voltage regulator circuit that we created in order to obtain 3.3 V. The bottom of the schematic shows how this same regulator was used to pull up the signals at SCL and SDA.

Figure 3: The contact sensor circuitry was quite simple. The input pins of the microcontroller are set to use the internal pull-up resistors, and whenever the two corresponding copper ends on the fingers touch, the input is pulled low.

Interfacing with the MPU-6050 required I²C communication, for which we chose to use Peter Fleury's public I²C library for AVR microcontrollers. I²C is designed to support multiple devices using a single dedicated data (SDA) bus and a single clock (SCL) bus. Even though we were only using the interface for the microcontroller to regularly poll the MPU-6050, we had to adhere to the I²C protocol. Fleury's library provided us with macros for issuing start and stop conditions from the microcontroller (signals indicating that the microcontroller is requesting data from the MPU-6050 or releasing control of the bus). These macros allowed us to easily initialize the I²C interface, set up the MPU-6050, and request and receive the accelerometer and gyroscope data (described later).

Figure 4: The image is the visual output received from plotting sequences of sensor readings. The clear divisions across the horizontal axis mark the different signs A, B, C, and D, respectively.

While testing our I²C communication with the MPU-6050, we found that the microcontroller would on rare occasions hang while waiting for data from the I²C bus. To prevent this from stalling our program, we enabled a watchdog timer that would reset the system after 0.5 seconds unless our program continued to reach regular checkpoints, at which point we reset the watchdog timer to prevent it from unnecessarily resetting the system. We were able to leverage the fact that our microcontroller's work consists primarily of continuously collecting sensor data and sending packets to a separate PC.

Photo 2: In this image we see the hand gestures for R, U, and V. As you can tell, there is not much difference in the hand’s orientation or the amount of flex on the fingers. However, note that the copper pieces make different kinds of contact for each of the signs.

For the majority of the code, we used Dan Henriksson and Anton Cervin’s TinyRealTime kernel. The primary reason for using this kernel is that we wanted to take advantage of the already implemented non-blocking UART library in order to communicate with the PC. While we only had a single thread running, we tried to squeeze in as much computation as possible while the data was being transmitted.

The program first initializes the I²C interface, the MPU-6050, and the ADC. It then enters an infinite loop in which it resets the watchdog timer and gets 16 readings from all of the sensors: accelerometer, gyroscope, flex sensors, and touch sensors. We then compute filtered values by summing the 16 readings from each sensor. Since summing the IMU readings can overflow, we right-shift each of their values by 8 bits before summing them. The data is then wrapped up into a byte-array packet organized as a header (0xA1B2C3D4), the data, and a checksum of the data. Each sensor value is stored in 2 bytes, and the checksum is calculated by summing the unsigned representation of each byte in the data portion of the packet into a 2-byte integer. Once the packet has been created, it is sent through the USB cable to the computer, and the process repeats.
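The packet format is easy to mirror on the PC side for testing. The sketch below builds such a packet in Python; the field order, byte order, and signedness inside the data section are our assumptions for illustration, not the article's exact layout:

```python
import struct

HEADER = 0xA1B2C3D4

def checksum(data: bytes) -> int:
    """Sum the unsigned bytes of the data section into a 2-byte integer."""
    return sum(data) & 0xFFFF

def build_packet(sensor_values):
    """Header (4 bytes) + 13 sensor readings (2 bytes each) + checksum.
    Big-endian, signed readings are illustrative assumptions."""
    assert len(sensor_values) == 13
    data = struct.pack(">13h", *sensor_values)
    return (struct.pack(">I", HEADER) + data
            + struct.pack(">H", checksum(data)))
```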

Communication with the microcontroller was established through Python's socket and struct libraries. We created a class called SerialWrapper whose main goal is to receive data from the microcontroller. It does so by opening a port and running a separate thread that waits for new data to become available. The incoming stream is scanned for the header, and a packet of the right length is removed when available. The checksum is then calculated and verified and, if valid, the data is unpacked into the appropriate values and fed into a queue for other processes to extract. Since we know the format of the packet, we can use the struct library to extract all of the data from the packet, which is in a byte-array format. We then provide the user with two modes of use: one that continuously captures and labels data in order to build a dataset, and another that continuously tries to classify incoming data.

Support Vector Machines (SVMs) are a widely used family of ML algorithms that classify by means of a kernel. While the kernel can take various forms, the most common kind is the linear SVM. Simply put, the classification, or sign, for a set of readings is decided by taking the dot product of the readings and the classifier. While this may seem like a simple approach, the results are quite impressive. For more information about SVMs, take a look at scikit-learn's "Support Vector Machines" documentation.
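Back on the receive side, here is a minimal sketch of the header-scan, checksum, and unpack path just described, using the same assumed layout as the packing sketch above (4-byte header, 13 two-byte readings, 2-byte checksum):

```python
import struct

HEADER = struct.pack(">I", 0xA1B2C3D4)
PACKET_LEN = 4 + 13 * 2 + 2   # header + data + checksum = 32 bytes

def extract_packets(buf: bytearray):
    """Scan buf for complete packets, verify checksums, and yield the
    13-tuples of sensor values; consumed bytes are removed from buf."""
    while True:
        start = buf.find(HEADER)
        if start < 0:
            del buf[:-3]               # keep a possible partial header tail
            return
        if len(buf) - start < PACKET_LEN:
            del buf[:start]            # wait for the rest of this packet
            return
        data = bytes(buf[start + 4:start + 30])
        (chk,) = struct.unpack(">H", bytes(buf[start + 30:start + 32]))
        del buf[:start + PACKET_LEN]
        if sum(data) & 0xFFFF == chk:  # drop corrupted packets silently
            yield struct.unpack(">13h", data)
```

A reader thread would append incoming serial bytes to buf and drain this generator, feeding valid tuples into the queue described above.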

For the purposes of this project, we chose to focus primarily on the alphabet, A to Z, and we added two more labels, "nothing" and "relaxed," to the set. Our rationale for giving the classifier "nothing" was to have a class made up of mostly noise. This class would not only provide negative instances to help learn the other classes, but it also gave the classifier a way of outputting that the gestured sign is not recognized as one of the ones that we care about. In addition, we didn't want the classifier to try to predict any of the letters when the user was simply standing by, so we taught it what a "relaxed" state was. This state was simply the position of the user's hand when not signing anything. In total there were 28 signs, or labels.

For our project we made extensive use of Python's scikit-learn library. Since we were using various kinds of sensors with drastically different ranges of values, it was important to scale all of our data so that the SVM would have an easier time classifying. To do so we made use of the preprocessing tools available in scikit-learn. We took all of our data and scaled it so that the readings from each sensor were centered at zero mean and had unit variance. This approach brought about drastic improvements in our performance and is strongly recommended. The classifier that we ended up using was the SVM that scikit-learn provides under the name SVC.
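The training flow amounts to only a few lines of scikit-learn. This is a minimal sketch with random placeholder data standing in for real recordings; in practice X would hold the filtered 13-sensor readings and y the 28 labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 60 readings for each of the 28 labels (a-z + extras).
labels = [chr(c) for c in range(ord("a"), ord("z") + 1)] + ["nothing", "relaxed"]
X = np.random.default_rng(0).normal(size=(28 * 60, 13))
y = np.repeat(labels, 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
# Scale to zero mean / unit variance, then fit the SVC classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Wrapping the scaler and the classifier in one pipeline ensures the same scaling learned on the training data is applied to every live reading at prediction time.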

Figure 5: The confusion matrix demonstrates how many times each label is predicted and how many times that prediction is accurate. We would like to see a perfect diagonal line, but we see that one square does not adhere to this. This square corresponds to “predicted V when it was really U” and it shows about a 66% accuracy.

Another tool that was crucial to us as developers was plotting, in order to visualize the data and judge how well a learning algorithm should be able to predict the various signs. The main tool we developed was a plot of a sequence of sensor readings as an image (see Figure 4). Since each packet contains a value for each of the sensors (13 in total), we can concatenate multiple packets to create a matrix. Each row thus corresponds to one of the sensors, and scanning a row from left to right gives progressively later readings; every packet makes up a column. This matrix can then be plotted with instances of the same sign grouped together, and the differences between one sign and the others can be observed. If the difference is clear to us, then the learning algorithm should have no issue telling the signs apart. If not, the algorithm might struggle, and changes to the approach might be necessary.
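A sketch of that visualization, again with random placeholder packets in place of real recordings:

```python
import matplotlib.pyplot as plt
import numpy as np

# Each packet is one column; each of the 13 sensors is one row.
packets = np.random.default_rng(1).normal(size=(200, 13))  # placeholder data
plt.imshow(packets.T, aspect="auto", interpolation="nearest")
plt.xlabel("packet (time, left to right)")
plt.ylabel("sensor index")
plt.colorbar(label="scaled reading")
plt.show()
```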

The final step in classification is to pass the output of the classifier through a final level of filtering and debouncing before it reaches the user. To accomplish this, we fill a buffer with the last 10 predictions and only consider a prediction valid if it appears in at least nine of those 10. Furthermore, we debounce this output and only notify the user if it is a novel prediction and not just a continuation of the previous one. We print the result on the screen and also use Peter Parente's pyttsx cross-platform text-to-speech library to output the result as audio whenever it is neither "nothing" nor "relaxed."
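A minimal sketch of that output stage, assuming pyttsx's standard init/say/runAndWait interface:

```python
from collections import deque

import pyttsx

WINDOW, THRESHOLD = 10, 9
recent = deque(maxlen=WINDOW)
last_spoken = None
engine = pyttsx.init()

def on_prediction(label):
    """Buffer raw classifier outputs; speak only stable, novel labels."""
    global last_spoken
    recent.append(label)
    if len(recent) < WINDOW or recent.count(label) < THRESHOLD:
        return                      # not a stable prediction yet
    if label == last_spoken:
        return                      # debounce: same sign still held
    last_spoken = label
    print(label)
    if label not in ("nothing", "relaxed"):
        engine.say(label)
        engine.runAndWait()
```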

Our original glove did not have contact sensors on the index and middle fingers. As a result, it had a hard time predicting “R,” “U,” and “V” properly. These signs are actually quite similar to each other in terms of hand orientation and flex. To mitigate this, we added two contact sensors: one set on the tips of the index and middle fingers to detect “R,” and another pair in between the index and middle fingers to discern between “U” and “V.”

As you might have guessed, the speed of our approach is limited by the rate of communication between the microcontroller and the computer and by the rate at which we are able to poll the ADC on the microcontroller. We determined how quickly we could send data to the PC by sending data serially and increasing the send rate until we noticed a difference between the rate at which data was being received and the rate at which it was being sent. We then reduced the send frequency back to a reasonable value and converted this into a loop interval (about 3 ms).

We then aimed to gather as much sensor data as possible between packet transmissions. In addition to sending each packet, the microcontroller also sent the number of readings it had performed in that interval. We used this number to settle on a reasonable number of values to poll before aggregating the data and sending it to the PC. We concluded that the microcontroller was capable of reading and averaging each of the sensors 16 times between packets, which for our purposes provided enough room to do some averaging.

The Python algorithm is currently limited by the rate at which the microcontroller sends data to the PC and by the time it takes the speech engine to say the word or letter. The rate of transfer is currently about 30 Hz, and we wait to fill a buffer with 10 nearly unanimous predictions. This means that the fastest we could output a prediction is about three times per second, which for our needs was suitable. Of course, one can adjust these values to get faster but slightly less accurate predictions. However, we felt that the glove was responsive enough at three predictions per second.

While we were able to get very accurate predictions, we did see some slight variations in accuracy depending on the size of the person's hands. The accuracy of each flex sensor is limited beyond a certain point, and smaller hands result in a larger degree of bend. As a result, the difference between similar signs with a lot of flex tends to be smaller for users with more petite hands. For example, consider the signs for "M" and "S." The only difference between them is that "S" elicits slightly more flex in the fingers. For smaller hands, however, the change in the flex sensor's resistance is small, and the algorithm may be unable to discern the difference between these signs.

Figure 6: We can see that even with very small amounts of data the classifier does quite well. After gathering just over 60 readings per sign it achieves an accuracy of over 98%.

In the end, our classifier was able to achieve an accuracy of 98% (the error being composed almost solely of "U"/"V" confusion) on a task of 28 signs: the full alphabet as well as "relaxed" and "nothing" (see Figure 5). A random classifier would guess correctly only about 4% of the time, clearly indicating that our device is quite accurate. It is worth noting, however, that the algorithm could greatly benefit from improved touch sensors (seeing as the most common mistake is confusing "U" for "V"), from being trained on a larger population of users, and especially from larger datasets. With a broad enough dataset, we could provide new users with a small test script that covers only the letters that are difficult to predict and relies on the already available data for the rest. The software has currently been trained on the two team members and has been tested on some users outside of the team. The results were excellent for the team members who trained the glove and mostly satisfactory, though not perfect, for the other volunteers. Since the volunteers did not have a chance to train the glove and were not very familiar with the signs, it is hard to say whether their lower accuracy was a result of overfitting, individual variations in signing, or inexperience with American Sign Language. Regardless, the accuracy of the software was near perfect for users who trained it and mostly accurate for users who had not previously known American Sign Language and did not train the glove.
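Continuing the training sketch above, a confusion matrix like Figure 5 takes only a couple of scikit-learn calls (names carried over from that sketch):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, confusion_matrix

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))

cm = confusion_matrix(y_test, y_pred)   # rows: true label, columns: predicted
plt.imshow(cm, interpolation="nearest")
plt.xlabel("predicted label")
plt.ylabel("true label")
plt.show()
```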

Lastly, it is worth noting that the amount of data necessary for training the classifier was surprisingly small. With about 60 instances per label, the classifier was able to reach the 98% mark. Given that we receive 30 samples per second and that there are 28 signs, the full training set of 28 × 60 = 1,680 samples amounts to just under a minute of streaming, so gathering data for training can be done in under a minute (see Figure 6).

The project met our expectations. Our initial goal was to create a system capable of recognizing and classifying gestures. We were able to do so with more than 98% average accuracy across all 28 classes. While we did not have a firm timing requirement for the rate of prediction, the resulting speed made using the glove comfortable, and it did not feel sluggish. Looking ahead, it would make sense to improve our approach to the touch sensors, since the majority of the ambiguity in signs comes from the difference between "U" and "V." We want to use materials that lend themselves more seamlessly to clothing and provide a more reliable connection. In addition, it would be beneficial to test and train our project on a large group of people, since this would provide us with richer data and more consistency. Lastly, we hope to make the glove wireless, which would allow it to easily communicate with phones and other devices and make the system truly portable.

Arduino, "MPU-6050 Accelerometer + Gyro," Main/MPU-6050.

Atmel Corp., "8-Bit AVR Microcontroller with 128K Bytes In-System Programmable Flash: ATmega1284P," 8059D-AVR-11/09, 2009, www.atmel.com/images/doc8059.pdf.

P. Fleury, "AVR-Software," 2006.

Lund University, "Tiny Real Time," 2006, www.control.lth.se/~anton/tinyrealtime/.

P. Parente, "pyttsx – Text-to-speech x-platform."

Python Software Foundation, "struct – Interpret Strings as Packed Binary Data," library/struct.html.

scikit-learn, "Preprocessing Data."

scikit-learn, "Support Vector Machines."

Spectra Symbol, "Flex Sensor FS," 2015.

R. Villalba and M. Lin, "Sign Language Glove," ECE4760, Cornell University, 2014.

ATmega1284P Microcontroller Atmel |

MPU-6050 MEMS MotionTracking Device InvenSense |

Article originally published in Circuit Cellar June 2016, Issue #311

Battery-Free IoT Start Up Raises $19 Million

Wiliot, a fabless semiconductor start-up company, has closed an investment round with Qualcomm Ventures and M Ventures. The announcement was made in conjunction with the opening of the Active & Intelligent Packaging Industry Association (AIPIA) Conference in Amsterdam where the company will make its first public presentation to leaders in the packaging industry.

The latest investment round comes on the heels of a Series A round that yielded $14 million from forward-thinking strategic technology investors Grove Ventures, Norwest Venture Partners, and 83North Venture Capital. That first round closed in January, the month Wiliot was founded. In all, Wiliot has raised a total of $19 million in its first 10 months as a semiconductor company.

Wiliot, whose research and development arm is based in Israel, is on course to develop a wireless technology that will eliminate reliance on batteries or wired power to vastly accelerate the Internet of Things, with the vision of creating a world of "Smart Everything." The new technology, which powers itself by harvesting energy from radio waves, enables a sensor as small as a fingernail and as thin as a sheet of paper, with an order-of-magnitude reduction in price and cost of maintenance.

With proofs of concept scheduled to start in 2H 2018 and delivery to market in early 2019, Wiliot's technology will revolutionize the current Bluetooth beacon marketplace, which after more than five years has hit a floor in reductions of cost, size, and ease of maintenance that has hindered widespread adoption.

Wiliot |

Sensor Node Gets LoRaWAN Certification

Advantech offers its standardized M2.COM IoT LoRaWAN-certified sensor node, the WISE-1510, with an integrated ARM Cortex-M4 processor and LoRa transceiver. The module provides multiple interfaces for sensors and I/O control, such as UART, I2C, SPI, GPIO, PWM and ADC. The WISE-1510 sensor node is well suited for smart cities, agriculture, metering, street lighting and environment monitoring. With power consumption optimization and wide-area reception, LoRa sensors or applications with low data-rate requirements can achieve years of battery life and kilometers of connection distance.

WISE-1510 has received LoRaWAN certification from the LoRa Alliance. Depending on deployment requirements, developers can select public LoRaWAN network services or build a private LoRa system with the WISE-3610 LoRa IoT gateway. Advantech's WISE-3610 is a Qualcomm ARM Cortex-A7 based hardware platform with a private LoRa ecosystem solution that can connect up to 500 WISE-1510 sensor nodes. Powered by Advantech's WISE-PaaS IoT software platform, WISE-3610 features automatic cloud connection through its WISE-PaaS/WISE Agent service, manages wireless nodes and data via WSN management APIs, and helps customers streamline their IoT data acquisition development through sensor service APIs and WSN drivers.

Developers can leverage the microprocessor on WISE-1510 to build their own applications. WISE-1510 offers unified software, ARM Mbed OS and an SDK, for easy development with APIs and related documents. Developers can also find extensive resources on GitHub, such as code review, library integration and free core tools. WISE-1510 also offers worldwide certification, which allows developers to deploy their IoT devices anywhere. Using Advantech's WISE-3610 LoRa IoT Gateway, WISE-1510 can be connected to the WISE-PaaS/RMM or ARM Mbed Cloud service with IoT communication protocols including LWM2M, CoAP, and MQTT. End-to-end integration assists system integrators in overcoming complex challenges and helps them build IoT applications quickly and easily.

WISE-1510 features and specifications:

  • ARM Cortex-M4 core processor
  • Compatible support for public LoRaWAN or private LoRa networks
  • Great for low power/wide range applications
  • Multiple I/O interfaces for sensor and control
  • Supports wide temperature range of -40 °C to 85 °C

Advantech |

Microcontroller Family Provides 25 Sensing Functions for 25 Cents

Texas Instruments (TI) has unveiled its lowest-cost ultra-low-power MSP430 microcontrollers for sensing applications. Developers can now implement simple sensing solutions through a variety of integrated mixed-signal features in this family of MSP430 value line sensing MCUs, available for as low as US $0.25 in high volumes. Additions to the family include two new entry-level devices and a new TI LaunchPad development kit for quick and easy evaluation.

Developers now have the flexibility to customize 25 common system-level functions, including timers, input/output expanders, system reset controllers, electrically erasable programmable read-only memory (EEPROM) and more, using a library of code examples. A common core architecture, a tools and software ecosystem, and extensive documentation including migration guides make it easy for developers to choose the best MSP430 value line sensing MCU for each of their designs. Designers can scale from the 0.5-kB MSP430FR2000 MCU to the rest of the MSP430 sensing and measurement MCU portfolio for applications that require up to 256 kB of memory, higher performance or more analog peripherals.

The new MSP430FR2000 and MSP430FR2100 MCUs (with 0.5 kB and 1 kB of memory, respectively) and the new development kit join the MSP430 value line sensing family which includes the MSP430FR2111, MSP430FR2311, MSP430FR2033, MSP430FR2433 and MSP430FR4133 microcontroller families and their related development tools and software.

Developers can purchase the value line sensing portfolio through the TI store, priced as low as US$0.29 in 1,000-unit quantities and US $0.25 in higher volumes. Additionally, the new MSP430FR2433 LaunchPad development kit (MSP-EXP430FR2433) is available from the TI store and authorized distributors for US $9.99. Today through Dec. 31, 2017, the TI store is offering the LaunchPad kit for a promotional price of US $4.30.

Texas Instruments |

Mini Sensor Dies Target IoT and Autos

TDK has announced new miniaturized EPCOS MEMS pressure sensor dies. The automotive versions of the C33 series boast dimensions of just 1 mm x 1 mm x 0.4 mm. They are designed for absolute pressures of 1.2 bar to 10 bar and are qualified based on AEC-Q101. The typical operating voltage is 3 V. With a supply voltage of 5 V they offer sensitivities of between 15 mV/bar and 80 mV/bar, depending on the type. The miniaturized pressure sensors are suitable for a temperature range from -40 °C to +135 °C and can even withstand 140 °C for short periods. They also offer very good long-term stability of ±0.35% FS (full scale).

The C39 type, with its footprint of just 0.65 mm x 0.65 mm, is especially suitable for IoT and consumer applications. One noteworthy feature of the C39 is its low insertion height of just 0.24 mm, which makes the low-profile MEMS pressure sensor die ideal for applications in smartphones and wearables, for example, where space requirements are critical. The C39 is designed for an absolute pressure of 1.2 bar and, like the C33 series, offers long-term stability of ±0.35% FS. All the pressure sensor dies operate on the piezoresistive principle and deliver, via a Wheatstone bridge, an analog signal that is proportional to the applied pressure and the supply voltage.

TDK-Lambda |

Microcontrollers Target Smart Water Meters

Texas Instruments has unveiled a new family of MSP430 microcontrollers with an integrated ultrasonic sensing analog front end that enables smart water meters to deliver higher accuracy and lower power consumption. In addition, TI introduced two new reference designs that make it easier to design modules for adding automated meter reading (AMR) capabilities to existing mechanical water meters. The new MCUs and reference designs support the growing demand for more accurate water meters and remote meter reading to enable efficient water resource management, accurate measurement and timely billing.

New ultrasonic MCUs and new reference designs make both electronic and mechanical water meters smarter.

As part of the ultra-low-power MSP430 MCU portfolio for sensing and measurement, the new MSP430FR6047 MCU family lets developers add more intelligence to flow meters by taking advantage of a complete waveform capture feature and analog-to-digital converter (ADC)-based signal processing. This technique enables more accurate measurement than competitive devices, with precision of 25 ps or better, even at flow rates of less than 1 liter per hour. In addition, the integrated MSP430FR6047 devices reduce water meter system component count by 50 percent and power consumption by 25 percent, enabling a meter to operate for 10 or more years without charging the battery. The new MCUs also integrate a low-energy accelerator module for advanced signal processing, 256 KB of ferroelectric random access memory (FRAM), an LCD driver and a metering test interface.

The MSP430 Ultrasonic Sensing Design Center offers a comprehensive development ecosystem that allows developers to get to market in months. The design center provides tools for quick development and flexibility for customization, including software libraries, a GUI, evaluation modules with metrology and DSP libraries.

TI’s new Low-Power Water Flow Measurement with Inductive Sensing Reference Design is a compact solution for the electronic measurement of mechanical flow meters with low power consumption for longer battery life. Enabled by the single-chip SimpleLink dual-band CC1350 wireless MCU, this reference design also gives designers the ability to add dual-band wireless communications for AMR networks. Designers can take advantage of the reference design’s small footprint to easily retrofit existing mechanical flow meters, enabling water utilities to add AMR capability while avoiding expensive replacement of deployed meters. The CC1350 wireless MCU consumes only 4 µA while measuring water flow rates, enabling longer product life.

A second new reference design is an ultra-low power solution based on the SimpleLink Sub-1 GHz CC1310 wireless MCU. The Low-Power Wireless M-Bus Communications Module Reference Design uses TI’s wireless M-Bus software stack and supports all wireless M-Bus operating modes in the 868-MHz band. This reference design provides best-in-class power consumption and flexibility to support wireless M-Bus deployments across multiple regions.

Texas Instruments |

Gas Monitoring and Sensing (Part 1)

Fun with Fragrant Analysis

Gas sensing technology has come a long way since the days of canaries in coal mines. This month, columnist Jeff covers the background issues surrounding gas monitoring and sensing. Then he describes how he uses sensors, A/D conversion and Arduino technologies to do oxygen measurement.

By Jeff Bachiochi

When coal miners began dropping like flies, it was determined that poisonous gas was the culprit. At the time, there was no test to detect the presence of this odorless ghost. Sacrificial canaries became the guinea pigs, giving up their lives to save the miners. These birds are especially sensitive to methane and carbon monoxide. When the songbird stopped singing, miners headed for a breath of fresh air until the mine could be cleared of the silent killer.

Though seemingly ripe for disaster, the flame height of an oil lamp was used to detect dangerous conditions in the 1800s. A shrinking flame indicated reduced oxygen, while a stronger flame indicated the presence of methane or another combustible gas. Flame arrestors kept the combustion internal to the lamp, preventing external gas ignition unless the lamp was dropped.

In the 1900s, it was discovered that the current through an electric heating element is affected when nearby combustible gases ignite on it and raise its temperature. The use of a catalytic material, such as palladium, lowers the temperature at which combustion takes place. Using these heaters in a Wheatstone bridge configuration, where one leg is exposed to the gas, creates an easily measured imbalance proportional to the concentration of the combustible gas.
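To see how the bridge turns a small resistance change into an easily measured voltage, consider this minimal Python sketch; the arm values and the gas sensitivity are made up purely for illustration:

```python
def bridge_output(v_in, r_ref, r_active):
    """Differential output of a Wheatstone bridge with three fixed arms
    (r_ref) and one gas-exposed catalytic arm (r_active)."""
    return v_in * (r_active / (r_ref + r_active) - 0.5)

# Hypothetical bead: resistance rises 1% per 1,000 ppm of combustible gas.
for ppm in (0, 500, 1000, 2000):
    r_active = 100.0 * (1 + 1e-5 * ppm)
    print(f"{ppm:5d} ppm -> {bridge_output(5.0, 100.0, r_active) * 1e3:6.2f} mV")
```

With matched arms the output sits at 0 V, so any combustion-induced imbalance shows up directly as a millivolt-level signal that is easy to amplify and measure.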

Infrared light can be used to measure the concentration of many hydrocarbon gases. When compared to a gas-free path, the IR absorption through a gas can indicate the concentration of hydrocarbon molecules. Gases can be identified by their molecular makeup, that is, the amount of each element present. Absorption bands can be isolated by dispersion through diffraction or, in non-dispersive sensors, by filtration. Concentration is determined from the relationship, at a particular wavelength, between a reference path and a gas absorption path.
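The concentration calculation behind such a non-dispersive infrared (NDIR) measurement is an application of the Beer-Lambert law, I = I0·e^(-kCL). A minimal sketch, with made-up calibration constants:

```python
import math

def ndir_concentration(i_gas, i_ref, k, path_len):
    """Invert Beer-Lambert (I = I0 * exp(-k * C * L)): estimate concentration
    C from the intensity ratio of the gas path to the reference path."""
    return -math.log(i_gas / i_ref) / (k * path_len)

# Illustrative values only: 80% transmittance, k = 0.05 per (ppm*m), L = 0.1 m.
print(f"{ndir_concentration(0.8, 1.0, 0.05, 0.1):.1f} ppm")
```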

Table 1: This summary of the basic gas sensing methods includes their advantages, disadvantages and general areas of use.

There are many techniques available today for monitoring gases. Refer to Table 1 for a breakdown of gas monitoring methods and their associated advantages and disadvantages. HAZMAT Class 2 in the United States identifies all gases which can be compressed and stored for transportation. Even though we are not directly dealing with storage or transportation, the class is further defined by three groups of gases: flammable, toxic and others (non-flammable).

Read the full article in the October 2017 issue (#327) of Circuit Cellar

We’ve made the October 2017 issue of Circuit Cellar available as a sample issue. In it, you’ll find a rich variety of the kinds of articles and information that exemplify a typical issue of the current magazine.
Don’t miss out on upcoming issues of Circuit Cellar. Subscribe today!

Analog Devices Collaborates on IoT Farm-to-Fork Project

Analog Devices has announced a collaboration with The Cornucopia Project and a blockchain technology partner to explore the local food supply chain and use this work as a vehicle for educating students at ConVal Regional High School in Peterborough, NH, and local farmers on 21st century agriculture skills. The initiative instructs student farmers how to use Internet of Things and blockchain technologies to track the conditions and movement of produce from "Farm to Fork" to make decisions that improve quality, yields, and profitability. Together with the Cornucopia Project, the endeavor is funded by Analog Devices and its partner, with both companies also providing technical training.

Analog Devices Smart Agriculture Manager Erick Olsen (center) and Senior Engineer Rob O’Reilly are pictured alongside ConVal Regional High School Farm to Fork Fellows viewing tomatoes grown with the company’s crop monitoring solution. (Photo: Business Wire)

For the project, Analog Devices is providing a prototype version of its crop monitoring solution, which will be capable of measuring environmental factors that help farmers make sound decisions about crops related to irrigation, fertilization, pest management, and harvesting. The sensor-to-cloud, Internet of Things solution enables farmers to make better decisions based on accumulated learning from the near-real-time monitoring. These 24/7 measurements are combined with a near infrared (NIR) miniaturized spectrometer that conducts non-destructive analysis of food quality not previously possible in a farm environment.

The Cornucopia Project, a non-profit located in Peterborough, N.H., provides garden and agricultural programs to students from elementary through high school. Student farmers in its Farm to Fork program learn how to use advanced sensor instrumentation in their greenhouse, which provides valuable data to assess the attributes of tomatoes, and how these factors affect taste and quality. The program also educates students on how crops can be tracked throughout the agricultural supply chain to support food quality, sustainability, traceability and nutrition. The blockchain partner is contributing its technology to model the entire fresh produce supply chain, combining crop growing data, transportation, and storage conditions. Blockchain, a distributed-ledger consensus technology used to maintain a continuously growing list of records, will track the crop lifecycle from seed to distributor to retailer to consumer, bringing transparency and accountability to the agricultural supply chain.

Analog Devices |

Graphene Enables Broad Spectrum Sensor Development

Team successfully marries a CMOS IC with graphene, resulting in a camera able to image visible and infrared light simultaneously.

By Wisse Hettinga

Researchers at ICFO, the Institute of Photonic Sciences, located in Catalonia, Spain, have developed a broad-spectrum sensor by depositing graphene with colloidal quantum dots onto a standard, off-the-shelf read-out integrated circuit. It is the first time scientists and engineers have been able to integrate a CMOS circuit with graphene to create a camera capable of imaging visible and infrared light at the same time. Circuit Cellar visited ICFO

Stijn Goossens is a research engineer at ICFO, the Institute of Photonic Sciences.

and talked with Stijn Goossens, one of the lead researchers of the study.


HETTINGA: What is ICFO?

GOOSSENS: ICFO is a research institute devoted to the science and technologies of light. We carry out frontier research in fundamental science in optics and photonics as well as applied research with the objective of developing products that can be brought to market. The institute is based in Castelldefels, in the metropolitan area of Barcelona (Catalonia region of Spain).

HETTINGA: Over the last three to four years, you have been researching how to combine graphene and CMOS. What is the outcome?

GOOSSENS: We’ve been able to create a sensor that is capable of imaging both visible and infrared light at the same time. A sensor like this can be very useful for many applications—automotive solutions and food inspection, to name a few. Moreover, being able to image infrared light can enable night vision features in a smartphone.

HETTINGA: For your research, you are using a standard off-the-shelf CMOS read-out circuit, correct?

GOOSSENS: Indeed. We're using a standard CMOS circuit. These circuits have all the electronics available to read the charges induced in the graphene, the row and column selects, and the drivers to make the signal available for further processing by a computer or smartphone. For us, it's a very easy platform to work on as a starting point. We can deposit the graphene and quantum dot layer on top of the CMOS sensor (Photo 1).

PHOTO 1 The CMOS image sensor serves as the base for the graphene layer.

HETTINGA: What is the shortcoming of normal sensors that can be overcome by using graphene?

GOOSSENS: Normal CMOS imaging sensors only work with visible light. Our solution can image visible and infrared light. We use the CMOS circuit for reading the signal from the graphene and quantum dot sensors; it acts more like an 'infrastructure' solution. Graphene is a 2D material with very special properties: it is strong, flexible, almost 100 percent transparent, and a very good conductor.

HETTINGA: How does the graphene sensor work?

GOOSSENS: There are different layers (Figure 1). There's a layer of colloidal quantum dots. A quantum dot is a nano-sized semiconductor; due to its small size, its optical and electronic properties differ from those of larger particles. The quantum dots turn the photons they receive into an electric charge. This charge is then transferred to the graphene layer, which acts as a highly sensitive charge sensor. With the CMOS circuit, we then read the change in resistance of the graphene and multiplex the signals from the different pixels onto one output line.

FIGURE 1 The graphene sensor comprises a layer of colloidal quantum dots, a graphene layer and a CMOS circuitry layer.

HETTINGA: What hurdles did you have to overcome in the development?

GOOSSENS: You always encounter difficulties during the course of a research study, and sometimes you're close to giving up. However, we knew it would work. And with the right team, the right technologies and the lab at ICFO, we have shown it is indeed possible. The biggest problem was the mismatch we faced between the graphene layer and the CMOS layer, which prevented an efficient resistance read-out of the graphene, but we were able to solve that problem.

HETTINGA: What is the next step in the research?

GOOSSENS: Together with the European Graphene Flagship project, we are developing a production machine that will allow us to start a more automated production process for these graphene sensors.

HETTINGA: Where will we see graphene-based cameras?

GOOSSENS: One of the most interesting applications will be related to self-driving cars. A self-driving car needs a clear vision to function efficiently. If you want to be able to drive a car through a foggy night or under extreme weather conditions, you’ll definitely need an infrared camera to see what’s ahead of you. Today’s infrared cameras are expensive. With our newly-developed image sensor, you will have a very effective, low-cost solution. Another application will be in the food inspection area. When fruit ripens, the infrared light absorption changes. With our camera, you can measure this change in absorption, which will allow you to identify which fruits to buy in the supermarket. We expect this technology to be integrated in smartphone cameras in the near future.


This article appeared in the September 2017 issue (#326) of Circuit Cellar

Sensor-Based IoT Development Platform With Bluetooth

Fujitsu Components America's BlueBrain development platform for high-performance IoT applications is now available with a development breakout board and interface board. It enables designers to easily create a wireless monitoring and data collection system via Bluetooth. The enhanced BlueBrain Sensor-Based IoT System Platform will be available this summer as a standard product through distribution. Jointly developed with CRATUS Technology, the BlueBrain platform features a high-performance Cortex-M4 microcontroller from STMicroelectronics and a Bluetooth Low Energy wireless module from Fujitsu Components. The embedded hardware, software, and industry-standard interfaces and peripherals reduce the time and expertise needed to develop and deploy wireless, sensor-based products running simple or complicated algorithms.

The Breakout Board provides switch inputs and LED outputs to test I/O ports and functions, as well as programming interfaces for proof of concept and application development. The Interface Board provides additional sensors and interfaces and may also be used in parallel to expand the development platform. The BlueBrain Edge Processing Module attaches to a standard, 32-Pin 1.6” X 0.7” EEPROM-style IC socket, or equivalent footprint, on a mezzanine board to address specific markets and applications including industrial, agriculture, automotive and telematics, retail, smart buildings and civil infrastructure. Pricing for the BlueBrain Sensor-Based IoT System Platform is $425.

Fujitsu Components America |

Robots with a Vision

Machine vision is a field of electrical engineering that's changing how we interact with our environment, as well as the ways by which machines communicate with each other. Circuit Cellar has been publishing articles on the subject since the mid-1990s. The technology has come a long way since then. But it's important (and exciting) to regularly review past projects to learn from the engineers who paved the way for today's ground-breaking innovations.

In Circuit Cellar 92, a team of engineers (Bill Bailey, Jon Reese, Randy Sargent, Carl Witty, and Anne Wright) from Newton Labs, a pioneer in robot engineering, introduced readers to the M1 color-sensitive robot. The robot’s main functions were to locate and carry tennis balls. But as you can imagine, the underlying technology was also used to do much more.

The engineering team writes:

Machine vision has been a challenge for AI researchers for decades. Many tasks that are simple for humans can only be accomplished by computers in carefully controlled laboratory environments, if at all. Still, robotics is benefiting today from some simple vision strategies that are achievable with commercially available systems.

In this article, we fill you in on some of the technical details of the Cognachrome vision system and show its application to a challenging and exciting task—the 1996 International AAAI Mobile Robot Competition in Portland, Oregon… In 1996, the contest was for an autonomous robot to collect 10 tennis balls and 2 quickly and randomly moving, self-powered squiggle balls and deliver them to a holding pen within 15 min.

In M1’s IR sensor array, each LED is fired in turn and detected reflections are latched by the 74HC259 into an eight-bit byte.

At the time of the conference, we had already been manufacturing the Cognachrome for a while and saw this contest as an excellent way to put our ideas (and our board) to the test. We outfitted a general-purpose robot called M1 with a Cognachrome and a gripper and wrote software for it to catch and carry tennis balls… M1 follows the wall using an infrared obstacle detector. The code drives two banks of four infrared LEDs one at a time, each modulated at 40 kHz.

The left half of M1's infrared sensor array is composed of a Sharp GP1U52X infrared detector sandwiched between four infrared LEDs.

Two standard Sharp GP1U52X infrared remote-control reception modules detect reflections. The 74HC163/74HC238 combination fires each LED in turn, and the ’HC259 latches detected reflections. This system provides reliable obstacle detection in the 8–12″ range.

The figure above shows the schematic. The photo shows the IR sensors.

The system provides only yes/no information about obstacles in the eight directions around the front half of the robot. However, M1 can crudely estimate distance to large obstacles (e.g., walls) via patterns in the reflections. The more adjacent directions with detected reflections, the closer the obstacle probably is.
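That heuristic is easy to state in code. Here is a sketch, assuming the eight reflection bits are packed into a single byte as latched by the 'HC259 (bit i for direction i):

```python
def proximity_estimate(reflections: int) -> int:
    """Longest run of adjacent IR directions reporting a reflection in an
    8-bit reading. A longer run suggests a larger, closer obstacle such
    as a wall."""
    best = run = 0
    for i in range(8):
        run = run + 1 if reflections & (1 << i) else 0
        best = max(best, run)
    return best

print(proximity_estimate(0b00111100))  # -> 4: likely a nearby wall
```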

Download the Entire Article

Galdi Taps Eurotech’s IoT Gateway for Food Packaging Market

Eurotech has announced a design win with Galdi, a leading producer of packaging machines for the food market. Galdi chose Eurotech’s Multi-Service IoT Gateway ReliaGATE 10-20 to communicate with its production machines for valuable data collection, management and remote monitoring through Eurotech IoT Integration Platform Everyware Cloud.


Galdi selected the Eurotech gateway because of its globally compliant Wi-Fi and cellular certifications. Implementing this IoT technology will enable Galdi to remotely manage its plants and better serve its customers by providing greater access to valuable data.

The ReliaGATE 10-20 is an industrial grade smart IoT gateway that provides communications, computation power and a simplified application framework for IoT platform integration and services applications. The gateway offers a variety of communication interfaces including cellular, Wi-Fi and Bluetooth enabling connectivity to a wide range of sensors and edge devices essential in M2M/IoT applications. It also includes interfaces for wired connectivity such as Dual Gigabit Ethernet, CANBus, up to four serial ports and three USB ports. ReliaGATE 10-20 is simple to manage and delivers out-of-the-box connectivity and intuitive configuration of the routing parameters thanks to a web GUI and over-the-air options.

Eurotech |

BLE Module Boasts Integrated MEMS Sensors

Telit has announced BlueMod+S42M, a Bluetooth Low Energy (BLE) 4.2, standalone, single-mode module with embedded 3-axis accelerometer, temperature and humidity sensors. The cost-effective component is optimized for efficiency and simplicity in end-device design and manufacturing, delivering reliable Bluetooth Low Energy functionality with robust endpoint security, motion and environmental sensors and essential features that reduce development costs, bill of materials, and time to market.

Ideal for large scale projects, the BlueMod+S42M seamlessly expedites device design across a wide range of industrial and consumer applications areas. The embedded sensors are necessary for high-value, fragile asset tracking, and time- or temperature-sensitive applications such as cold chain monitoring in the pharmaceutical and agriculture industries.

Telit |

Graphene Revolution

The Wonderful Material That Will Change
the World of Electronics

The amazing properties of graphene have researchers, students, and inventors dreaming about exciting new applications, from unbreakable touchscreens to fast-charging batteries.

By Wisse Hettinga

Prosthetic hand with graphene electrodes

Graphene gained popularity because of the way it is produced, the "Scotch tape method." Two scientists, Andre Geim and Kostya Novoselov, first isolated graphene this way in 2004 and received the Nobel Prize in Physics in 2010 for their work with the material. Their approach is straightforward. Using Scotch tape, they repeatedly removed small layers of graphite (indeed, the black stuff found in pencils) until there was only one 2-D layer of atoms left: graphene. Up to that point, many scientists understood the promise of this wonderful material, but no one had been able to obtain a single layer of atoms. After the breakthrough, many universities started looking for graphene-related applications.

Innovative graphene-related research is underway all over the world. Today, many European institutes and universities work together under the Graphene Flagship initiative, which was launched by the European Union in 2013. The initiative's aim is to exchange knowledge and collaborate on research projects.

Graphene was a hot topic at the 2017 Mobile World Congress (MWC) in Barcelona, Spain. This article covers a select number of applications talked about at the show. But for the complete coverage, check out the video here:


The Istituto Italiano di Tecnologia (IIT) in Genova, Italy, recently developed a sensor from a cellulose and graphene composite. The sensor can be made in the form of a bracelet that fits around the arm in order to pick up the small signals associated with muscle movement. The signals are processed and used to drive a robotic prosthetic hand. Once the comfortable bracelet is placed on the wrist, it transduces the movement of the hand into electrical signals that are used to move the artificial hand in a spectacular way. More information:


The Scotch tape method used by the Nobel Prize winners inspired many companies around the world to start producing graphene. Today, a wide variety of methods can be used depending on the actual application of the material. Graphenea (San Sebastian, Spain) uses several processes for the production of graphene products. One of them is chemical vapor deposition, which makes it possible to create graphene on thin foil, on silicon, or in oxide form. They supply many universities and research institutes that do R&D on new components such as supercapacitors, solar cells, batteries, and many other applications. The big challenge is to develop an industrial process that combines graphene with conventional CMOS technology. In this way, the characteristics of graphene can enhance today's components and make them useful for new applications. A good example is optical data transfer. More information:

Transfer graphene on top of a silicon device to add more functionality


High-speed data communication comes in all sizes and infrastructures. But on the small scale, there are many challenges. Graphene enables new optical communication on the chip level. A consortium of CNIT, Ericsson, Nokia, and IMEC has developed graphene photonics integration for high-speed transmission systems. At MWC, they showcased a packaged graphene-based modulator operating over several optical telecommunications bands. I saw the first packaged transmitters with optical modulators based on graphene. The modulator is only one-tenth of a millimeter across. The transfer capacity is 10 Gbps, but the aim is to bring that to 100 Gbps within a year. These applications could play a key role in the development of 5G technology. More information:

Optical modulator based on graphene technology


FGV Cambridge Nanosystems recently developed a novel “spray-on” graphene heating system that provides uniform, large-area heating. The material can be applied to paintings or walls and turned into a ‘heating’ area that can be wirelessly controlled via a mobile app. The same methodology can also double as a temperature sensor, where you can control light intensity by sensing body temperature. More information:

Graphene-based heater


Athletes can benefit from light, strong, sensor-based shoes that can monitor their status. To make this happen, the University of Cambridge developed a 3-D printed shoe with embedded graphene foam sensors that monitor the pressure applied. The shoes combine complicated structural design with accurate sensing. The graphene foam sensor can be used to measure the number of steps and the weight of the person. More information:

Graphene pressure sensors embedded in shoes


More wireless fidelity can be expected when graphene-based receivers come into play. Receivers based on graphene are small and flexible and can be integrated into clothes and other textile applications. AMO GmbH and RWTH Aachen University are developing the first flexible Wi-Fi receiver. The underlying graphene MMIC process enables the fabrication of the Wi-Fi receiver on both flexible and rigid substrates. This flexible Wi-Fi receiver is the first graphene-based front-end receiver for any type of modulated signal. The research shows that this technology can be used up to 90 GHz, which opens up new applications in IoT and mobile phones. More information:

Using graphene in flexible Wi-Fi receiver


Santiago Cartamil-Bueno, a PhD student at TU Delft, was the first to observe a change in the colors of small graphene "balloons." These balloons appear when pressure is applied to a double layer of graphene. When this graphene is placed over silicon with small indents, the balloons can move in and out of the silicon indents. If the graphene layer is closer to the silicon, they turn blue; if it is farther away, they turn red. Santiago observed this effect first and is researching the possibility of turning it into a high-resolution display. Such a display would use ambient light and consume very little power. The resolution would be very high; a typical 5″ display would be able to show images with 8K to 12K resolution. More information:

New Scalable Biometric Sensor Platform for Wearables and the IoT

Valencell and STMicroelectronics recently launched a new development kit for biometric wearables. Featuring STMicro's compact SensorTile turnkey multi-sensor module and Valencell's Benchmark biometric sensor system, the platform offers a scalable solution for designers building biometric hearables and wearables.

The SensorTile IoT module’s specs and features:

  • 13.5 mm × 13.5 mm
  • STM32L4 microcontroller
  • Bluetooth Low Energy chipset
  • A wide spectrum of MEMS sensors (accelerometer, gyroscope, magnetometer, pressure, and temperature sensor)
  • Digital MEMS microphone

Valencell’s Benchmark sensor system’s specs and features:

  • PerformTek processor communicates with host processor using a simple UART or I2C interface protocol
  • Acquires heart rate, VO2, and calorie data
  • Standard flex connector interface

Source: Valencell