Analog Devices has announced a collaboration with The Cornucopia Project and ripe.io to explore the local food supply chain and use this work as a vehicle for teaching 21st century agriculture skills to students at ConVal Regional High School in Peterborough, NH, and to local farmers. The initiative instructs student farmers how to use Internet of Things and blockchain technologies to track the conditions and movement of produce from “Farm to Fork” to make decisions that improve quality, yields, and profitability. The endeavor, undertaken together with The Cornucopia Project, is funded by Analog Devices and ripe.io, with both companies also providing technical training.
Analog Devices Smart Agriculture Manager Erick Olsen (center) and Senior Engineer Rob O’Reilly are pictured alongside ConVal Regional High School Farm to Fork Fellows viewing tomatoes grown with the company’s crop monitoring solution. (Photo: Business Wire)
For the project, Analog Devices is providing a prototype version of its crop monitoring solution, which will be capable of measuring environmental factors that help farmers make sound decisions about crops related to irrigation, fertilization, pest management, and harvesting. The sensor-to-cloud, Internet of Things solution enables farmers to make better decisions based on accumulated learning from the near-real-time monitoring. These 24/7 measurements are combined with a near infrared (NIR) miniaturized spectrometer that conducts non-destructive analysis of food quality not previously possible in a farm environment.
The Cornucopia Project, a non-profit located in Peterborough, N.H., provides garden and agricultural programs to students from elementary through high school. Student farmers in its Farm to Fork program learn how to use advanced sensor instrumentation in their greenhouse, which provides valuable data for assessing the attributes of tomatoes and how those attributes affect taste and quality. The program also educates students on how crops can be tracked throughout the agricultural supply chain to support food quality, sustainability, traceability, and nutrition.
ripe.io is contributing its blockchain technology to model the entire fresh produce supply chain, combining the crop growing data, transportation, and storage conditions. Blockchain – a distributed ledger, consensus data technology that is used to maintain a continuously growing list of records – will track crop lifecycle from seed to distributor to retailer to consumer, bringing transparency and accountability to the agricultural supply chain.
Team successfully marries a CMOS IC with graphene, resulting in a camera able to image visible and infrared light simultaneously.
Graphene Enables Broad Spectrum Sensor Development
By Wisse Hettinga
Researchers at ICFO—the Institute of Photonic Sciences, located in Catalonia, Spain—have developed a broad-spectrum sensor by depositing graphene with colloidal quantum dots onto a standard, off-the-shelf read-out integrated circuit. It is the first time scientists and engineers were able to integrate a CMOS circuit with graphene to create a camera capable of imaging visible and infrared light at the same time. Circuit Cellar visited ICFO and talked with Stijn Goossens, one of the lead researchers of the study.
Stijn Goossens is a Research Engineer at ICFO, the Institute of Photonic Sciences.
HETTINGA: What is ICFO?
GOOSSENS: ICFO is a research institute devoted to the science and technologies of light. We carry out frontier research in fundamental science in optics and photonics as well as applied research with the objective of developing products that can be brought to market. The institute is based in Castelldefels, in the metropolitan area of Barcelona (Catalonia region of Spain).
HETTINGA: Over the last 3 to 4 years, you did research on how to combine graphene and CMOS. What is the outcome?
GOOSSENS: We’ve been able to create a sensor that is capable of imaging both visible and infrared light at the same time. A sensor like this can be very useful for many applications—automotive solutions and food inspection, to name a few. Moreover, being able to image infrared light can enable night vision features in a smartphone.
HETTINGA: For your research, you are using a standard off-the-shelf CMOS read-out circuit, correct?
GOOSSENS: Indeed. We’re using a standard CMOS circuit. These circuits have all the electronics needed to read the charges induced in the graphene, the row and column selects, and the drivers to make the signal available for further processing by a computer or smartphone. For us, it’s a very easy platform to work on as a starting point. We can deposit the graphene and quantum dot layer on top of the CMOS sensor (Photo 1).
PHOTO 1 The CMOS image sensor serves as the base for the graphene layer.
HETTINGA: What is the shortcoming of normal sensors that can be overcome by using graphene?
GOOSSENS: Normal CMOS imaging sensors only work with visible light. Our solution can image visible and infrared light. We use the CMOS circuit for reading the signal from the graphene and quantum dot sensors. It acts more like an ‘infrastructure’ solution. Graphene is a 2D material with very special properties: it is strong, flexible, almost 100 percent transparent, and a very good conductor.
HETTINGA: How does the graphene sensor work?
GOOSSENS: There are different layers (Figure 1). There’s a layer of colloidal quantum dots. A quantum dot is a nano-sized semiconductor. Due to its small size, the optical and electronic properties differ from larger size particles. The quantum dots turn the photons they receive into an electric charge. This electric charge is then transferred to the graphene layer that acts like a highly sensitive charge sensor. With the CMOS circuit, we then read the change in resistance of the graphene and multiplex the signal from the different pixels on one output line.
FIGURE 1 The graphene sensor comprises a layer of colloidal quantum dots, a graphene layer, and a CMOS circuitry layer.
HETTINGA: What hurdles did you have to overcome in the development?
GOOSSENS: You always encounter difficulties during the course of a research study and sometimes you’re close to giving up. However, we knew it would work. And with the right team, the right technologies and the lab at ICFO we have shown it is indeed possible. The biggest problem was the mismatch we faced between the graphene layer and the CMOS layer. When there’s a mismatch, that means there’s a lack of an efficient resistance read-out of the graphene—but we were able to solve that problem.
HETTINGA: What is the next step in the research?
GOOSSENS: Together with the European Graphene Flagship project, we are developing a production machine that will allow us to start a more automated production process for these graphene sensors.
HETTINGA: Where will we see graphene-based cameras?
GOOSSENS: One of the most interesting applications will be related to self-driving cars. A self-driving car needs a clear vision to function efficiently. If you want to be able to drive a car through a foggy night or under extreme weather conditions, you’ll definitely need an infrared camera to see what’s ahead of you. Today’s infrared cameras are expensive. With our newly-developed image sensor, you will have a very effective, low-cost solution. Another application will be in the food inspection area. When fruit ripens, the infrared light absorption changes. With our camera, you can measure this change in absorption, which will allow you to identify which fruits to buy in the supermarket. We expect this technology to be integrated in smartphone cameras in the near future.
Fujitsu Components America’s BlueBrain development platform for high-performance IoT applications is now available with a development breakout board and interface board. It enables designers to easily create a wireless monitoring and data collection system via Bluetooth. The enhanced BlueBrain Sensor-Based IoT System Platform will be available this summer as a standard product through distribution. Jointly developed with CRATUS Technology, the BlueBrain platform features a high-performance Cortex-M4 microcontroller from STMicroelectronics and a Bluetooth Low Energy wireless module from Fujitsu Components. The embedded hardware, software, and industry-standard interfaces and peripherals reduce the time and expertise needed to develop and deploy wireless, sensor-based products running simple or complex algorithms.
The Breakout Board provides switch inputs and LED outputs to test I/O ports and functions, as well as programming interfaces for proof of concept and application development. The Interface Board provides additional sensors and interfaces and may also be used in parallel to expand the development platform. The BlueBrain Edge Processing Module attaches to a standard, 32-pin 1.6″ × 0.7″ EEPROM-style IC socket, or equivalent footprint, on a mezzanine board to address specific markets and applications including industrial, agriculture, automotive and telematics, retail, smart buildings, and civil infrastructure. Pricing for the BlueBrain Sensor-Based IoT System Platform is $425.
Machine vision is a field of electrical engineering that’s changing how we interact with our environment, as well as the ways by which machines communicate with each other. Circuit Cellar has been publishing articles on the subject since the mid-1990s. The technology has come a long way since then. But it’s important (and exciting) to regularly review past projects to learn from the engineers who paved the way for today’s ground-breaking innovations.
In Circuit Cellar 92, a team of engineers (Bill Bailey, Jon Reese, Randy Sargent, Carl Witty, and Anne Wright) from Newton Labs, a pioneer in robot engineering, introduced readers to the M1 color-sensitive robot. The robot’s main functions were to locate and carry tennis balls. But as you can imagine, the underlying technology was also used to do much more.
The engineering team writes:
Machine vision has been a challenge for AI researchers for decades. Many tasks that are simple for humans can only be accomplished by computers in carefully controlled laboratory environments, if at all. Still, robotics is benefiting today from some simple vision strategies that are achievable with commercially available systems.
In this article, we fill you in on some of the technical details of the Cognachrome vision system and show its application to a challenging and exciting task—the 1996 International AAAI Mobile Robot Competition in Portland, Oregon… In 1996, the contest was for an autonomous robot to collect 10 tennis balls and 2 quickly and randomly moving, self-powered squiggle balls and deliver them to a holding pen within 15 min.
In M1’s IR sensor array, each LED is fired in turn and detected reflections are latched by the 74HC259 into an eight-bit byte.
At the time of the conference, we had already been manufacturing the Cognachrome for a while and saw this contest as an excellent way to put our ideas (and our board) to the test. We outfitted a general-purpose robot called M1 with a Cognachrome and a gripper and wrote software for it to catch and carry tennis balls… M1 follows the wall using an infrared obstacle detector. The code drives two banks of four infrared LEDs one at a time, each modulated at 40 kHz.
The left half of M1’s infrared sensor array is composed of a Sharp GP1U52X infrared detector sandwiched between four infrared LEDs
Two standard Sharp GP1U52X infrared remote-control reception modules detect reflections. The 74HC163/74HC238 combination fires each LED in turn, and the ’HC259 latches detected reflections. This system provides reliable obstacle detection in the 8–12″ range.
The figure above shows the schematic. The photo shows the IR sensors.
The system provides only yes/no information about obstacles in the eight directions around the front half of the robot. However, M1 can crudely estimate distance to large obstacles (e.g., walls) via patterns in the reflections. The more adjacent directions with detected reflections, the closer the obstacle probably is.
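The reflection byte lends itself to a simple heuristic. Here is a minimal sketch (a hypothetical reconstruction for illustration, not Newton Labs’ actual code) that maps the latched eight-bit reflection pattern to a crude proximity estimate:

```python
def longest_run(reflections: int) -> int:
    """Longest run of adjacent set bits in the 8-bit reflection byte."""
    best = run = 0
    for i in range(8):
        if (reflections >> i) & 1:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

def proximity(reflections: int) -> str:
    """Crude distance class: more adjacent hits suggest a closer, larger obstacle."""
    run = longest_run(reflections)
    if run == 0:
        return "clear"
    if run <= 2:
        return "object"      # small obstacle, or a wall at the edge of range
    return "wall-close"      # many adjacent directions reflect: large, near obstacle
```

For example, a single set bit (`0b00010000`) reads as an isolated object, while `0b01111110` reads as a close wall.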
Eurotech has announced a design win with Galdi, a leading producer of packaging machines for the food market. Galdi chose Eurotech’s Multi-Service IoT Gateway ReliaGATE 10-20 to communicate with its production machines for valuable data collection, management and remote monitoring through Eurotech IoT Integration Platform Everyware Cloud.
Galdi selected the Eurotech gateway because of its globally-compliant Wi-Fi and cellular certifications. Implementing this IoT technology will enable Galdi to remotely manage its plants and better serve its customers by providing greater access to valuable data.
The ReliaGATE 10-20 is an industrial grade smart IoT gateway that provides communications, computation power and a simplified application framework for IoT platform integration and services applications. The gateway offers a variety of communication interfaces including cellular, Wi-Fi and Bluetooth enabling connectivity to a wide range of sensors and edge devices essential in M2M/IoT applications. It also includes interfaces for wired connectivity such as Dual Gigabit Ethernet, CANBus, up to four serial ports and three USB ports. ReliaGATE 10-20 is simple to manage and delivers out-of-the-box connectivity and intuitive configuration of the routing parameters thanks to a web GUI and over-the-air options.
Telit has announced BlueMod+S42M, a Bluetooth Low Energy (BLE) 4.2, standalone, single-mode module with embedded 3-axis accelerometer, temperature and humidity sensors. The cost-effective component is optimized for efficiency and simplicity in end-device design and manufacturing, delivering reliable Bluetooth Low Energy functionality with robust endpoint security, motion and environmental sensors and essential features that reduce development costs, bill of materials, and time to market.
Ideal for large scale projects, the BlueMod+S42M seamlessly expedites device design across a wide range of industrial and consumer application areas. The embedded sensors are well suited for high-value, fragile asset tracking and for time- or temperature-sensitive applications such as cold chain monitoring in the pharmaceutical and agriculture industries.
The Wonderful Material That Will Change
the World of Electronics
The amazing properties of graphene have researchers, students, and inventors dreaming about exciting new applications, from unbreakable touchscreens to fast-charging batteries.
By Wisse Hettinga
Prosthetic hand with graphene electrodes
Graphene gained popularity because of the way it is produced—the “Scotch tape method.” In fact, the two scientists behind it, Andre Geim and Kostya Novoselov, received the 2010 Nobel Prize in Physics for their work with the material. Their approach is straightforward. Using Scotch tape, they repeatedly removed small layers of graphite (indeed, the black stuff found in pencils) until there was only one 2-D layer of atoms left—graphene. Up to that point, many scientists understood the promise of this wonderful material, but no one had been able to obtain a single layer of atoms. After the breakthrough, many universities started looking for graphene-related applications.
Innovative graphene-related research is underway all over the world. Today, many European institutes and universities work together under the Graphene Flagship initiative (http://graphene-flagship.eu), which was launched by the European Union in 2013. The initiative’s aim is to exchange knowledge and collaborate on research projects.
Graphene was a hot topic at the 2017 Mobile World Congress (MWC) in Barcelona, Spain. This article covers a select number of applications talked about at the show; for complete coverage, check out the accompanying video.
WEARABLE SENSORS FOR PROSTHETICS
The Istituto Italiano di Tecnologia (IIT) in Genova, Italy, recently developed a sensor from a cellulose and graphene composite. The sensor can be made in the form of a bracelet that fits around the arm in order to pick up the small signals associated with muscle movement. The signals are processed and used to drive a robotic prosthetic hand. Once the comfortable bracelet is placed on the wrist, it transduces the movement of the hand into electrical signals that are used to move the artificial hand in a spectacular way. More information: www.iit.it
GRAPHENE & CONVENTIONAL CMOS TECHNOLOGIES
The Scotch tape method used by the Nobel Prize winners inspired a lot of companies around the world to start producing graphene. Today, a wide variety of methods can be used depending on the actual application of the material. Graphenea (San Sebastian, Spain) uses different processes for the production of graphene products. One of them is chemical vapor deposition (CVD). With this method, it is possible to create graphene on thin foil, on silicon, or in oxide form. The company supplies many universities and research institutes that do R&D for new components such as supercapacitors, solar cells, batteries, and many more applications. The big challenge is to develop an industrial process that combines graphene with conventional CMOS technology. In this way, the characteristics of graphene can enhance today’s components to make them useful for new applications. A good example is optical data transfer. More information: www.graphenea.com
Transfer graphene on top of a silicon device to add more functionality
5G DATA COMMUNICATION
High-speed data communication comes in all sizes and infrastructures. But on the small scale, there are many challenges. Graphene enables new optical communication on the chip level. A consortium of CNIT, Ericsson, Nokia, and IMEC has developed graphene photonics integration for high-speed transmission systems. At MWC, they showcased a packaged graphene-based modulator operating over several optical telecommunications bands. I saw the first packaged transmitters with optical modulators based on graphene. The modulator is only one-tenth of a millimeter. The transfer capacity is 10 Gbps, but the aim is to bring that to 100 Gbps in a year’s time. This technology will be able to play a key role in the development of 5G. More information: www.cnit.it/en/.
Optical modulator based on graphene technology
THE ART OF HEATING
FGV Cambridge Nanosystems recently developed a novel “spray-on” graphene heating system that provides uniform, large-area heating. The material can be applied to paintings or walls and turned into a ‘heating’ area that can be wirelessly controlled via a mobile app. The same methodology can also double as a temperature sensor, where you can control light intensity by sensing body temperature. More information: www.cambridgenanosystems.com
FOAM SENSOR FOR SHOES
Athletes can benefit from light, strong, sensor-based shoes that can monitor their status. To make this happen, the University of Cambridge developed a 3-D printed shoe with embedded graphene foam sensors that can monitor the pressure applied. The shoes combine complicated structural design with accurate sensing. The graphene foam sensor can be used for measuring the number of steps and the weight of the person. More information: www.cam.ac.uk
Graphene pressure sensors embedded in shoes
FLEXIBLE WI-FI RECEIVER
More wireless fidelity can be expected when graphene-based receivers come into play. The receivers based on graphene are small and flexible and can be used for integration into clothes and other textile applications. AMO GmbH and RWTH Aachen University are developing the first flexible Wi-Fi receiver. The underlying graphene MMIC process enables the fabrication of the Wi-Fi receiver on both flexible and rigid substrates. This flexible Wi-Fi receiver is the first graphene-based front-end receiver for any type of modulated signal. The research shows that this technology can be used up to 90 GHz, which opens it up to new applications in IoT and mobile phones. More information: www.amo.de
Using graphene in flexible Wi-Fi receiver
5″ DISPLAY WITH UP TO 12K RESOLUTION
Santiago Cartamil-Bueno, a PhD student at TU Delft, was the first to observe a change in the colors of small graphene “balloons.” These balloons appear when pressure is applied to a double layer of graphene. When this graphene is placed over silicon with small indents, the balloons can move in and out of the silicon dents. If the graphene layer is closer to the silicon, they turn blue; if it is farther away, they turn red. Santiago observed this effect first and is researching the possibility of turning it into a high-resolution display. Such a display would use ambient light, making it a very low-power technology. The resolution is very high; a typical 5″ display would be able to show images at 8K to 12K resolution. More information: www.delta.tudelft.nl/artikel/ballooning-graphene-may-be-used-as-pixel/32619
Valencell and STMicroelectronics recently launched a new development kit for biometric wearables. Featuring STMicro’s compact SensorTile turnkey multi-sensor module and Valencell’s Benchmark biometric sensor system, the platform offers a scalable solution for designers building biometric hearables and wearables.
The SensorTile IoT module’s specs and features:
13.5 mm × 13.5 mm
Bluetooth Low Energy chipset
a wide spectrum of MEMS sensors (accelerometer, gyroscope, magnetometer, pressure, and temperature sensor)
Digital MEMS microphone
Valencell’s Benchmark sensor system’s specs and features:
PerformTek processor communicates with host processor using a simple UART or I2C interface protocol
STMicroelectronics’s miniature SensorTile sensor board comprises a MEMS accelerometer, gyroscope, magnetometer, pressure sensor, and a MEMS microphone. With the on-board low-power STM32L4 microcontroller, the SensorTile can be used as a sensing and connectivity hub for developing products ranging from wearables to Internet of Things (IoT) devices.
The 13.5 mm × 13.5 mm SensorTile features a Bluetooth Low-Energy (BLE) transceiver including an onboard miniature single-chip balun, as well as a broad set of system interfaces that support use as a sensor-fusion hub or as a platform for firmware development. You can plug it into a host board. At power-up, it immediately starts streaming inertial, audio, and environmental data to STMicro’s BlueMS free smartphone app.
Software development is simple with an API based on the STM32Cube Hardware Abstraction Layer and middleware components, including the STM32 Open Development Environment. It’s fully compatible with the Open Software eXpansion Libraries (Open.MEMS, Open.RF, and Open.AUDIO), as well as numerous third-party embedded sensing and voice-processing projects. Example programs are available (e.g., software for position sensing, activity recognition, and low-power voice communication).
The complete kit includes a cradle board, which carries the 13.5 mm × 13.5 mm SensorTile core system in standalone or hub mode and can be used as a reference design. This compact yet fully loaded board contains a humidity and temperature sensor, a micro-SD card socket, as well as a lithium-polymer battery (LiPo) charger. The pack also contains a LiPo rechargeable battery and a plastic case that provides a convenient housing for the cradle, SensorTile, and battery combination.
SensorTile kit’s main features, specs, and benefits:
Cradle/expansion board with an analog audio output, a micro-USB connector, and an Arduino-like interface that can be plugged into any STM32 Nucleo board to expand developers’ options for system and software development.
LSM6DSM 3-D accelerometer and 3-D gyroscope
LSM303AGR 3-D magnetometer and 3-D accelerometer
LPS22HB pressure sensor/barometer
MP34DT04 digital MEMS microphone
BlueNRG-MS network processor with integrated 2.4-GHz radio
Interested in developing cloud-connected wireless sensing products? Silicon Labs recently introduced its Thunderboard Sense Kit for developing cloud-connected devices with multiple sensing and connectivity options. The “inspiration kit” provides you with all the hardware and software needed to develop battery-powered wireless sensor nodes for the IoT.
The Thunderboard Sense Kit’s features and benefits:
Silicon Labs EFR32 Mighty Gecko multiprotocol wireless SoC with a 2.4-GHz chip antenna
ARM Cortex-M4 processor-based
Supports Bluetooth low energy, ZigBee, Thread, and proprietary protocols
Silicon Labs EFM8 Sleepy Bee microcontroller enabling fine-grained power control
Silicon Labs Si7021 relative humidity and temperature sensor
Silicon Labs Si1133 UV index and ambient light sensor
Bosch Sensortec BMP280 barometric pressure sensor
Cambridge CCS811 indoor air quality gas sensor
InvenSense ICM-20648 six-axis inertial sensor
Knowles SPV1840 MEMS microphone
Four high-brightness RGB LEDs
On-board SEGGER J-Link debugger for easy programming and debugging
USB Micro-B connector with virtual COM port and debug access
Mini Simplicity connector to access energy profiling and wireless network debugging
20 breakout pins to connect to external breadboard hardware
CR2032 coin cell battery connector and external battery connector
Silicon Labs’s Simplicity Studio tools support the Thunderboard Sense
The Thunderboard Sense kit (SLTB001A) costs $36. All hardware schematics, open-source design files, mobile apps, and cloud software are included for free.
Mouser Electronics is now offering Omron Electronic Components’s fully integrated B5T HVC-P2 face detection sensor modules. The Human Vision Component (HVC) plug-in modules are based on Omron’s OKAO Vision Image Sensing Technology, which is used to quickly and accurately detect human bodies and faces.
Well suited for a variety of IoT applications, the face detection sensor modules comprise a camera and a separate main board connected via a flexible flat cable, which enables you to install the module on the edge of a flat display unit. The boards feature UART and USB interfaces to control the module and send the data output (no image output, 160 × 120 pixels, or 320 × 240 pixels) to an external system.
Available with either a wide-angle (90-degree) or long-distance (50-degree) lens, the B5T HVC-P2 modules can detect a human body up to four times per second. The long-distance module can detect and estimate attributes (e.g., gender, age, sight line, and facial expression) from a maximum distance of 3 m. The wide-angle module can cover an area 100 cm × 75 cm from a distance of 50 cm.
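The stated wide-angle coverage is consistent with simple lens geometry: a 90-degree field of view at 50 cm spans 2 × 50 × tan(45°) = 100 cm. A quick sketch of that calculation (plain trigonometry, not an Omron API):

```python
import math

def coverage_width(fov_deg: float, distance_cm: float) -> float:
    """Width of the area covered at a given distance by a lens with the given FOV."""
    return 2.0 * distance_cm * math.tan(math.radians(fov_deg / 2.0))

# 90-degree lens at 50 cm covers a width of about 100 cm, matching the spec
wide_angle_width = coverage_width(90.0, 50.0)
```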
Biomedical signals obtained from the human body can be beneficial in a variety of scenarios in a healthcare setting. For example, physicians can use the noninvasive sensing, recording, and processing of a heart’s electrical activity in the form of electrocardiograms (ECGs) to help make informed decisions about a patient’s cardiovascular health. A typical biomedical signal acquisition system will consist of sensors, preamplifiers, filters, analog-to-digital conversion, processing and analysis using computers, and the visual display of the outputs. Given the digital nature of these signals, intelligent methods and computer algorithms can be developed for analysis of the signals. Such processing and analysis of signals might involve the removal of instrumentation noise, power line interference, and any artifacts that act as interference to the signal of interest. The analysis can be further enhanced into a computer-aided decision-making tool by incorporating digital signal processing methods and algorithms for feature extraction and pattern analysis. In many cases, the pattern analysis module is developed to reveal hidden parameters of clinical interest, and thereby improve the diagnosis and monitoring of clinical events.
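As a concrete illustration of one such cleanup step, the sketch below removes 60 Hz power-line interference from a synthetic ECG-like signal by zeroing a narrow band of its spectrum. This is a minimal FFT-based illustration, not a clinical-grade filter; the signal and parameters are invented for the example:

```python
import numpy as np

def remove_powerline(signal, fs, f0=60.0, half_width=1.0):
    """Suppress a narrow band around f0 Hz with an FFT mask (zero-phase)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[np.abs(freqs - f0) <= half_width] = 0.0
    return np.fft.irfft(spectrum, len(signal))

# Synthetic "ECG": a slow 1.2 Hz rhythm contaminated by 60 Hz interference
fs = 500                               # sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)      # stand-in for the cardiac signal
noisy = ecg + 0.5 * np.sin(2 * np.pi * 60.0 * t)
clean = remove_powerline(noisy, fs)    # interference largely removed
```

In a real device a streaming notch filter would be used instead of a block FFT, but the mask makes the idea visible in a few lines.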
The methods used for biomedical signal processing can be categorized into five generations. In the first generation, the techniques developed in the 1970s and 1980s were based on time-domain approaches for event analysis (e.g., using time-domain correlation approaches to detect arrhythmic events from ECGs). In the second generation, with the implementation of the Fast Fourier Transform (FFT) technique, many spectral domain approaches were developed to get a better representation of the biomedical signals for analysis. For example, the coherence analysis of the spectra of brain waves, also known as electroencephalogram (EEG) signals, has provided an enhanced understanding of certain neurological disorders, such as epilepsy. During the 1980s and 1990s, the third generation of techniques was developed to handle the time-varying dynamical behavior of biomedical signals (e.g., the characteristics of polysomnographic (PSG) signals recorded during sleep possess time-varying properties reflecting the subject’s different sleep stages). In these cases, Fourier-based techniques cannot be optimally used because, by definition, Fourier provides only spectral information and not a time-varying representation of signals. Therefore, the third-generation algorithms were developed to process biomedical signals into a time-varying representation, so that clinical events can be temporally localized for many practical applications.
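The third-generation idea of trading a single spectrum for a time-varying picture can be sketched with a short-time Fourier transform: slide a window along the signal and take an FFT per window. The toy signal below (invented for the example) switches from 10 Hz to 40 Hz halfway through; a single FFT would show both peaks with no timing, while the STFT localizes each in time:

```python
import numpy as np

def stft_mag(x, fs, win_len=256, hop=128):
    """Magnitude STFT: Hann-windowed frames, one FFT per frame."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return mags, freqs

fs = 1000
t = np.arange(0, 2, 1.0 / fs)
x = np.where(t < 1.0, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))
mags, freqs = stft_mag(x, fs)
f_start = freqs[np.argmax(mags[0])]    # dominant frequency in the first frame
f_end = freqs[np.argmax(mags[-1])]     # dominant frequency in the last frame
```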
This essay appears in Circuit Cellar 315, October 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!
These algorithms were essentially developed for speech signals in telecommunications applications, and they were adapted and modified for biomedical applications. The nearby figure illustrates an example of a knee vibration signal obtained from two different knee joints, their spectra, and joint time-frequency representations. With the advancement in computing technologies over the past 15 years, many algorithms have been developed for machine learning and building intelligent systems. Therefore, the fourth generation of biomedical signal analysis involved the automatic quantification, classification, and recognition of time-varying biomedical signals by using advanced signal-processing concepts from time-frequency theory, neural networks, and nonlinear theory.
During the last five years, we’ve witnessed advancements in sensor technologies, wireless technologies, and material science. The development of wearable and ingestible electronic sensors marks the fifth generation of biomedical signal analysis. And as the Internet of Things (IoT) framework develops further, new opportunities will open up in the healthcare domain. For instance, the continuous and long-term monitoring of biomedical signals will soon become a reality. In addition, Internet-connected health applications will impact healthcare delivery in many positive ways. For example, it will become increasingly effective and advantageous to monitor elderly and chronically ill patients in their homes rather than hospitals.
These technological innovations will provide great opportunities for engineers to design devices from a systems perspective by taking into account patient safety, low power requirements, interoperability, and performance requirements. It will also provide computer and data scientists with a huge amount of data with variable characteristics.
The future of biomedical signal analysis looks very promising. We can expect innovative healthcare solutions that will improve everyone’s quality of life.
Sridhar (Sri) Krishnan earned a BE degree in Electronics and Communication Engineering at Anna University in Madras, India. He earned MSc and PhD degrees in Electrical and Computer Engineering at the University of Calgary. Sri is a Professor of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada, and he holds the Canada Research Chair position in Biomedical Signal Analysis. Since July 2011, Sri has been an Associate Dean (Research and Development) for the Faculty of Engineering and Architectural Science. He is also the Founding Co-Director of the Institute for Biomedical Engineering, Science and Technology (iBEST). He is an Affiliate Scientist at the Keenan Research Centre at St. Michael’s Hospital in Toronto.
With the advent of the Internet of Things (IoT), the need for ultra-low-power passive remote sensing is on the rise in battery-powered technologies. Digital cameras have come light years from where they were a decade ago, but low power they are not. When low-power designs need always-on remote sensing, infrared motion sensors are a great option to turn to.
Passive infrared (PIR) sensors and passive infrared detectors (PIDs) are electronic devices that detect infrared light emitted from objects within their field of view. These devices typically don’t measure light per se; rather, they measure changes in the infrared energy reaching the sensor. Such a change generates a very small potential across a crystalline material (gallium nitride and cesium nitrate, among others), which can be amplified to create a usable signal.
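Because the element responds to changes rather than absolute levels, motion detection on the amplified signal reduces to watching for sample-to-sample deltas above a threshold. The following sketch is illustrative only — the threshold and sample values are invented, not taken from any sensor datasheet.

```python
# Hedged sketch of change-based detection: a PIR front end reacts to
# *changes* in incident infrared energy, so motion can be flagged by
# differencing successive amplified samples against a threshold.
# The threshold and sample values below are illustrative assumptions.
def detect_motion(samples, threshold=0.5):
    """Return indices where the sample-to-sample change exceeds threshold."""
    events = []
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) > threshold:
            events.append(i)
    return events

# A steady background produces no events; a step change (a warm body
# entering the field of view) produces one.
background = [2.00, 2.01, 1.99, 2.00]
step = [2.00, 2.01, 3.20, 3.19]
```

Note that once the scene stops changing, the signal settles again — which is why a stationary person can "disappear" from a PIR sensor's view.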
Infrared technology was built on a foundation of older motion-sensing technologies that came before. Motion sensing was first utilized in the early 1940s, primarily for military purposes near the end of World War II. Radar and ultrasonic detectors, the progenitors of today’s motion-sensing technologies, relied on reflected radio or sound waves to determine the location of objects in a detection environment. Though effective, these systems were limited to military applications and were not a reasonable option for commercial users.
The viability of motion detection tools began to change as infrared-sensing options entered development. The birth of modern PIR sensors began toward the end of the 1960s, when companies began to seek alternatives to the available motion technologies that were fast becoming outdated.
The modern versions of these infrared motion sensors have taken root in many industries thanks to their affordability and flexibility. PIDs represent the future of motion sensing, with several advantages over their counterparts:
Saving Energy—PIDs are energy efficient. The electricity required to operate PIDs is minimal, with most units actually reducing the user’s energy consumption when compared to other commercial motion-sensing devices.
Inexpensive—Cost isn’t a barrier to entry for those wanting to deploy IR motion-sensing technology. The simplicity of the sensor keeps each individual unit affordable, allowing users to deploy multiple sensors for maximum coverage without breaking the bank.
Durability—It’s hard to match the ruggedness of PIDs. Most units don’t employ delicate circuitry that is easily jarred or disrupted; PIDs are routinely used outdoors and in adverse environments that would potentially damage other styles of detectors.
Simple and Small—The small size of PIDs works to their advantage. Unobtrusive sensors are ideal for security solutions that aren’t obvious or easily noticed. This simplicity makes PIDs desirable for commercial security, where businesses want to avoid installing conspicuous security infrastructure throughout their buildings.
Wide Lens Range—The wide field of view of PIDs allows for comprehensive coverage of each location in which they are placed. PIDs easily form a “grid” of infrared detection that is ideal for detecting people, animals, or any other disruption that falls within the lens range.
Easy to Interface With—PIDs are flexible. Their compact and simple nature lets them integrate easily with other technologies, from commercial motion detectors for businesses to appliances like remote controls.
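That ease of interfacing often comes down to the fact that many PIR modules present a single digital output line. A common pattern when driving a light or logger from that line is a hold-off timer that stretches brief triggers into a stable occupancy signal. The sketch below is an assumption-laden illustration — the 30-second hold time and the class name are invented, not taken from any product.

```python
# Hedged interfacing sketch: a hold-off timer converts momentary PIR
# triggers into a steady "occupied" signal. The hold duration is an
# illustrative assumption, not a datasheet value.
class OccupancyLatch:
    def __init__(self, hold_seconds=30.0):
        self.hold = hold_seconds
        self.last_trigger = None        # time of most recent motion event

    def update(self, pir_high, now):
        """Feed the PIR line level and current time; return occupancy."""
        if pir_high:
            self.last_trigger = now
        return (self.last_trigger is not None
                and now - self.last_trigger < self.hold)

latch = OccupancyLatch(hold_seconds=30.0)
occupied_at_start = latch.update(True, now=0.0)    # motion seen
occupied_mid_hold = latch.update(False, now=10.0)  # within hold window
occupied_expired = latch.update(False, now=45.0)   # hold expired
```

The same pattern works whether the line is polled or sampled from an interrupt handler; only the source of `pir_high` and `now` changes.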
With the wealth of advantages PIDs have over other forms of motion-sensing technology, it stands to reason that PIR sensors and PIDs will have a place in the future of motion sensor development. Though other options are available, PIDs operate with a simplicity, energy efficiency, and durability that other technologies can’t match. Exciting developments are underway in motion sensing, including peripherals for virtual reality and 3-D motion control, but the reliability of infrared motion technology assures it a definite role in the evolution of motion sensing in the years to come.
As the Head Hardware Engineer at Cyndr (www.cyndr.co), Kyle Engstrom is the company’s lead electron wrangler and firmware designer. He specializes in analog electronics and power systems. Kyle has bachelor’s degrees in electrical engineering and geology. His life as a rock hound lasted all of six months before he found his true calling in engineering. Kyle has worked three years in the aerospace industry designing cutting-edge avionics.
Infineon Technologies recently introduced a new family of Hall sensors targeted at cost-effective, compact designs. Available as latch and switch-type devices, the sensors in the TLx496x family have precise switching points, stable operation, and low power consumption.
The TLx496x Hall sensors consume no more than 1.6 mA. They integrate a Hall element, a voltage regulator (to power the Hall element and the active circuits), choppers (to keep measurements stable over temperature), an oscillator, and an output driver.
The TLE496x-xM series, with an operating voltage of 3 V to 5.5 V, is well suited for automotive applications (e.g., power windows and sunroofs, trunk locks, and windshield wipers). The TLI496x-xM series functions like the TLE496x-xM, but is specified for a temperature range of –40°C to 125°C and is JESD47 qualified. The TLI496x-xM is used in BLDC motors in e-bikes, fans in PCs, and electric drives in building automation. The TLV496x-xTA/B series was developed specifically for cost-effective, contactless positioning. Typical applications are BLDC motors in home appliances (e.g., dishwashers), compressors in air conditioners, and more. Despite the pressure to cut costs, these applications need very precise Hall latches or unipolar/bipolar Hall switches for temperatures ranging from –40°C to 125°C. The TLV496x-xTA/B versions consume 1.6 mA and offer ESD protection up to 4 kV (Human Body Model, HBM). The output has overcurrent protection and automatically switches off at high temperatures.
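The "precise switching points" of a unipolar Hall switch refer to its operate point (B_op) and release point (B_rp): the output turns on when the field exceeds B_op and off only when it falls below B_rp, and the hysteresis between them prevents chatter near the threshold. The model below illustrates that behavior in general terms — the field values are invented for illustration and are not Infineon specifications.

```python
# Hedged behavioral model of a unipolar Hall switch with hysteresis.
# B_op and B_rp values here are illustrative, not datasheet numbers.
class UnipolarHallSwitch:
    def __init__(self, b_op_mT=10.0, b_rp_mT=5.0):
        self.b_op = b_op_mT     # operate point: field at which output turns on
        self.b_rp = b_rp_mT     # release point: field at which output turns off
        self.on = False

    def sense(self, field_mT):
        if field_mT >= self.b_op:
            self.on = True
        elif field_mT <= self.b_rp:
            self.on = False
        return self.on          # between B_rp and B_op the state holds

sw = UnipolarHallSwitch()
# Field rises past B_op (on), dips between B_rp and B_op (still on),
# then falls below B_rp (off).
states = [sw.sense(b) for b in [0, 8, 12, 8, 4]]
```

A latch-type device differs in that it responds to field polarity — turning on at a positive field and off at a negative one — but the hysteresis structure is the same.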
The Hall sensors of all three series are available in high volume. Development support includes online simulation tools and application manuals.
Bosch Sensortec recently announced the launch of the compact BMX160 nine-axis motion sensor, which is a great option for small, power-constrained applications ranging from “smart” wearables to virtual reality (VR) devices. Housed in a compact 2.5 × 3 × 0.95 mm³ package, the sensor combines advanced accelerometer, gyroscope, and geomagnetic sensor technologies.
The BMX160’s features, specs, and benefits:
Compact size: 2.5 × 3 × 0.95 mm³
Low current consumption: below 1.5 mA
Enables Android wearable applications that rely on sensor data
Compatible with the Bosch Sensortec BSX sensor data fusion software library
Pin- and register-compatible with the six-axis BMI160 IMU
Built-in power management unit
BMX160 samples are now available for development partners.
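Working with an IMU like this typically means converting raw register samples into physical units. The sketch below shows that conversion in generic terms; the little-endian 16-bit sample format and the ±2 g full-scale range are common IMU conventions assumed for illustration, not details taken from the BMX160 datasheet.

```python
# Hedged sketch: converting raw accelerometer samples to g values.
# Little-endian int16 format and the ±2 g range are assumptions here,
# standing in for whatever the actual device and configuration use.
import struct

def raw_accel_to_g(raw_bytes, full_scale_g=2.0):
    """Convert 6 little-endian bytes (x, y, z as int16) to g values."""
    x, y, z = struct.unpack('<hhh', raw_bytes)
    scale = full_scale_g / 32768.0      # counts per g at the chosen range
    return (x * scale, y * scale, z * scale)

# 16384 counts is half of full scale, i.e., 1.0 g at a ±2 g setting.
sample = bytes([0x00, 0x40, 0x00, 0x00, 0x00, 0xC0])
x_g, y_g, z_g = raw_accel_to_g(sample)
```

In a real design, the raw bytes would come over I²C or SPI, and the scale factor would follow the range configured in the device's registers.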