The Importance of Widely Available Wireless Links for UAV Systems

Readily available, first-rate wireless links are essential for building and running safe UAV systems. David Weight, principal electronics engineer at Wattcircuit, recently shared his thoughts on the importance of developing and maintaining high-quality wireless links as the UAV industry expands.

One of the major challenges that is emerging in the UAV industry is maintaining wireless links with high availability. As UAVs start to share airspace with other vehicles, we need to demonstrate that a control link can be maintained in a wide variety of environments, including interference and non-line-of-sight conditions. We are starting to see software-defined radio used to build radios which are frequency agile and capable of using multiple modulation techniques. For example, being able to use direct links in open spaces where these are most effective, but being able to change to 4G-type signals when entering more built-up areas, as these areas can pose issues for direct links but have good coverage from existing commercial telecoms. Being able to change the frequency and modulation also means that, where interference or poor signal paths are found, frequencies can be changed to avoid interference or, in extreme cases, be reduced to lower bands which allow control links to be maintained. This may mean that not all the data can be transmitted back, but it will keep the link alive and continue to transmit sufficient information to allow the pilot to control the UAV safely. — David Weight (Principal Electronics Engineer, Wattcircuit, UK)

Brain Controlled-Tech and the Future of Wireless

Wireless IoT devices are becoming increasingly common in both private and public spaces. Phil Vreugdenhil, an instructor at Camosun College in Canada, recently shared his thoughts on the future of ‘Net-connected wireless technology and the ways users will interact with it.

I see brain-controlled software and hardware seamlessly interacting with wireless IoT devices. I also foresee people interacting with their enhanced realities through fully integrated NEMS (nano-electromechanical systems) which also communicate directly with the brain, bypassing the usual pathways (eyes, ears, nose, touch, taste) much like cochlear implants and bionic eyes. I see wireless health-monitoring systems and AI doctors drastically improving efficiency in the medical system. But I also see the safety and security pitfalls within these future systems. The potential for hacking somebody’s personal systems and altering or deleting the data they depend upon for survival makes the future of wireless technology seem scarier than it will probably be. — Phil Vreugdenhil (Instructor, Camosun College, Canada)

The Future of Test-First Embedded Software

The term “test-first” software development comes from the original days of extreme programming (XP). In Kent Beck’s 1999 book, Extreme Programming Explained: Embrace Change (Addison-Wesley), his direction is to create an automated test before making any changes to the code.

Nowadays, test-first development usually means test-driven development (TDD): a well-defined, continuous feedback cycle of code, test, and refactor. You write a test, write some code to make it pass, make improvements, and then repeat. Automation is key though, so you can run the tests easily at any time.
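
To make the cycle concrete, here is a minimal sketch of one pass through it, written with Unity-style assertions (the test framework commonly paired with embedded C TDD). The counter module and its function names are hypothetical, invented purely for illustration.

    /* test_counter.c -- a first pair of tests, then just enough code to pass them */
    #include "unity.h"
    #include "counter.h"

    void test_counter_starts_at_zero(void)
    {
        counter_init();
        TEST_ASSERT_EQUAL(0, counter_get());
    }

    void test_counter_increments_by_one(void)
    {
        counter_init();
        counter_increment();
        TEST_ASSERT_EQUAL(1, counter_get());
    }

    /* counter.c -- the simplest implementation that makes both tests pass */
    static int count;
    void counter_init(void)      { count = 0; }
    void counter_increment(void) { count++; }
    int  counter_get(void)       { return count; }

Once both tests pass, you refactor with the tests as a safety net, then write the next failing test.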

TDD is well regarded as a useful software development technique. The proponents of TDD (including myself) like the way in which the code incrementally evolves from the interface as well as the comprehensive test suite that is created. The test suite is the safety net that allows the code to be refactored freely, without worry of breaking anything. It’s a powerful tool in the battle against code rot.

To date, TDD has had greater adoption in web and application development than in embedded software. Recent advances in unit test tools, however, are set to make TDD more accessible for embedded development.

In 2011 James Grenning published his book, Test Driven Development for Embedded C (Pragmatic Bookshelf). Six years later, this is still the authoritative reference for embedded test-first development and the entry point to TDD for many embedded software developers. It explains how TDD works in detail for an unfamiliar audience and addresses many of the traditional concerns, like how it will work with custom hardware. Today, the book is still completely relevant, but when it was published, the state-of-the-art tools were simple unit test and mocking frameworks. These frameworks require a lot of boilerplate code to run tests, and any mock objects need to be created manually.

In the rest of the software world though, unit test tools are significantly more mature. In most other languages used for web and application development, it’s easy to create and run many unit tests, as well as to create mock objects automatically.
Since then, TDD tooling has advanced considerably with the development of the open-source tool Ceedling. It automates running unit tests and generating mock objects in C applications, making it a lot easier to do TDD. Today, if you want to test-drive embedded software in C, you don’t need to roll your own test build system or mocks.
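
As a rough sketch of what that looks like in practice (the module and pin names here are hypothetical), a test can depend on a generated mock of a hardware-facing header instead of the hardware itself; Ceedling's bundled CMock produces the mock_gpio module and its expectation functions from gpio.h automatically.

    /* gpio.h -- the hardware-facing interface the production code calls */
    int gpio_read(int pin);

    /* test_button.c -- exercises button.c against the generated mock */
    #include "unity.h"
    #include "mock_gpio.h"   /* generated; provides gpio_read_ExpectAndReturn() */
    #include "button.h"

    void test_button_reports_pressed_when_pin_is_high(void)
    {
        gpio_read_ExpectAndReturn(BUTTON_PIN, 1);  /* expect one call, return "high" */
        TEST_ASSERT_TRUE(button_is_pressed());
    }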

With better tools making unit testing easier, I suspect that in the future test-first development will be more widely adopted by embedded software developers. Previously the province of a few early adopters willing to put in the effort, TDD will become easier for everyone as tools lower the barrier to entry.
Besides the tools to make TDD easier, another driving force behind greater adoption of test-first practices will be the simple need to produce better-quality embedded software. As embedded software continues its infiltration into all kinds of devices that run our lives, we’ll need to be able to deliver software that is more reliable and more secure.

Currently, unit tests for embedded software are most popular in regulated industries—like medical or aviation—where the regulators essentially force you to have unit tests. This is one part of a strategy to prevent you from hurting or killing people with your code. The rest of the “unregulated” embedded software world should take note of this approach.

With the rise of the Internet of things (IoT), our society is increasingly dependent on embedded devices connected to the Internet. In the future, the reliability and security of the software that runs these devices is only going to become more critical. There may not be a compelling business case for it now, but customers—and perhaps new regulators—are going to increasingly demand it. Test-first software can be one strategy to help us deal with this challenge.


This article appears in Circuit Cellar 318.


Matt Chernosky wants to help you build better embedded software—test-first with TDD. With years of experience in the automotive, industrial, and medical device fields, he’s excited about improving embedded software development. Learn more from Matt about getting started with embedded TDD at electronvector.com.

Taking the “Hard” Out of Hardware

There’s this belief among my software developer friends that electronics are complicated, hardware is hard, and that you need a degree before you can design anything to do with electricity. They honestly believe that building electronics is more complicated than writing intricate software—that is, the software that powers thousands of people’s lives all around the world. It’s this mentality that confuses me. How can you write all of this incredible software, but believe a simple 555 timer circuit is complicated?

I wanted to discover where the idea that “hardware is hard” came from and how I could disprove it. I started with something with which almost everyone is familiar, LEGO. I spent my childhood playing with these tiny plastic bricks, building anything my seven-year-old mind could dream up, creating intricate constructions from seemingly simplistic pieces. Much like the way you build LEGO designs, electronic systems are built upon a foundation of simple components.

When you decide to design/build a system, you want to first break the system down into components and functional sections that are easy to understand. You can use this approach for both digital and analog systems. The example I like to use to explain this is a phase-locked loop (PLL) FM demodulator/detector, a seemingly complicated device used to decode frequency-modulated radio signals. This system sounds like it would be impossible to build, especially for someone who isn’t familiar with electronics. I recognize that from experience. I remember the first year of my undergraduate studies, when my lecturers would place extremely detailed circuit schematics up on a chalkboard and expect us to understand their high-level functionality. I recall the panic this induced in a number of my peers, and it very likely put them off electronics in later years. One of the biggest problems that an electronics instructor faces is teaching complexity without scaring away students.

This essay appears in Circuit Cellar 317, December 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!

 

What many people either don’t realize or aren’t taught is that most systems can be broken down into composite pieces. The PLL FM demodulator breaks down into three main elements: a phase detector, a voltage-controlled oscillator (VCO), and a loop filter. These smaller pieces, or “building blocks,” can then be separated even further. For example, the loop filter—an element of the circuit used to remove high-frequency components—is constructed from a simple combination of resistors, capacitors, and operational amplifiers (see Figure 1).

I’m going to use a bottom-up approach to explain the loop filter segment of this system using simple resistors (R) and capacitors (C). It is this combination of resistors and capacitors that allows you to create passive RC filters—circuits which work by allowing only specific frequencies to pass to the output. Figure 2 shows a low-pass filter. This is used to remove high-frequency signals from the output of a circuit. Note: I’m avoiding as much math as possible in this explanation, as you don’t need numerical examples to demonstrate behavior. That can come later! The performance of this RC filter can be improved by adding an amplification stage using an op-amp, as we’ll see next.

Op-amps are a nice example of abstraction in electronics. We don’t normally worry about their internals, much like a CPU or other ICs, and instead treat them like functional boxes with inputs and an output. As you can see in Figure 3, the op-amp is working in a “differential” mode to try to equalize the voltages at its negative and positive terminals. It does this by outputting the difference and feeding it back to the negative terminal via a feedback loop created by the potential divider (voltage divider) formed by R2 and R3. The differential effect between the op-amp’s two input terminals causes a “boosted” output that is determined by the values of R2 and R3. This amplification, in combination with the low-pass passive filter, creates what’s known as a low-pass active filter.
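
For readers who do want a couple of numbers, here is a tiny sketch of the two quantities that characterize this building block: the cutoff frequency of the first-order RC low-pass stage and the gain of the amplifier stage (assuming the common non-inverting configuration implied by the R2/R3 feedback divider). The component values are examples only, not taken from the figures.

    #include <stdio.h>

    #define PI 3.14159265358979

    int main(void)
    {
        double R  = 10e3;    /* filter resistor, 10 kOhm (example value) */
        double C  = 100e-9;  /* filter capacitor, 100 nF (example value) */
        double R2 = 9e3;     /* feedback resistor (example value)        */
        double R3 = 1e3;     /* resistor to ground (example value)       */

        double fc   = 1.0 / (2.0 * PI * R * C);  /* first-order RC cutoff frequency */
        double gain = 1.0 + R2 / R3;             /* non-inverting op-amp gain       */

        printf("Cutoff: %.0f Hz, gain: %.1f\n", fc, gain);  /* ~159 Hz, 10.0 */
        return 0;
    }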

The low-pass active filter would be one of a number of filtering elements within the loop filter, and we’ve already built up one of the circuit’s three main elements! This example starts to show how behavior is cumulative. As you gain knowledge about fundamental components, you’ll start to understand how more complex systems work. Almost all electronic systems have this building-block format. So, yes, there might be a number of behaviors to understand. But as soon as you learn the fundamentals, you can start to design and build complicated systems of your own!

Alex Bucknall earned a Bachelor’s in Electronic Engineering at the University of Warwick, UK. He is particularly interested in FPGAs and communications systems. Alex works as a Developer Evangelist for Sigfox, which offers simple, low-energy communication solutions for the Internet of Things.

The Future of Ultra-Low Power Signal Processing

One of my favorite quotes comes from IEEE Signal Processing Magazine in 2010. The authors attempted to answer the question: What does ultra-low power consumption mean? They came to the conclusion that it is where the “power source lasts longer than the useful life of the product.”[1] It’s a great answer because it’s scalable. It applies equally to signal processing circuitry inside an embedded IoT device that can never be accessed or recharged and to signal processing inside a car, where the petrol for the engine dominates the operating lifetime, not the signal processing power. It also describes exactly what a lot of science fiction has always envisioned: no changing or recharging of batteries, which people forget to do or never have enough batteries for. Rather, we have devices that simply always work.

My research focuses on healthcare applications and creating “wearable algorithms”—that is, signal processing implementations that fit within the very small power budgets available in wearable devices. Historically, this focused on data reduction to save power. It’s well known that wireless data transmission is very power intensive. By using some power to reduce the amount of data that has to be sent, it’s possible to save lots of power in the wireless transmission stage and so to increase the overall battery lifetime.
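
As a toy illustration of the idea (not any specific product's algorithm), a sensor node might transmit a sample only when it has moved by more than a threshold since the last transmission, spending a few processor cycles to avoid many radio transmissions. Here radio_send_sample() is a hypothetical stand-in for whatever transport the node uses.

    #include <stdbool.h>
    #include <stdint.h>

    extern void radio_send_sample(int16_t sample);  /* hypothetical, power-hungry call */

    #define DELTA_THRESHOLD 8   /* ADC counts; tuned per application */

    void process_sample(int16_t sample)
    {
        static bool    have_last = false;
        static int16_t last_sent = 0;
        int32_t delta = (int32_t)sample - (int32_t)last_sent;

        if (!have_last || delta > DELTA_THRESHOLD || delta < -DELTA_THRESHOLD) {
            radio_send_sample(sample);  /* transmit only when the signal has changed */
            last_sent = sample;
            have_last = true;
        }
        /* otherwise: stay silent and keep the radio off */
    }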

This argument has been known for a long time. There are papers dating back to at least the 1990s based on it. It’s also readily achievable. Inevitably, it depends on the precise situation, but we showed in 2014 that the power consumption of a wireless sensor node could be brought down to the level of a node without a wireless transmitter (one that uses local flash memory) using easily available, easy-to-use, off-the-shelf devices.[2]

This essay appears in Circuit Cellar 316, November 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!

Today, there are many additional benefits that are being enabled by the emerging use of ultra-low power signal processing embedded in the wearable itself, and these new applications are driving the research challenges: increased device functionality; minimized system latency; reliable, robust operation over unreliable wireless links; reduction in the amount of data to be analyzed offline; better quality recordings (e.g., with motion artifact removal to prevent signal saturations); new closed-loop recording—stimulation devices; and real-time data redaction for privacy, ensuring personal data never leaves the wearable.

It’s these last two that are the focus of my research now. They’re really important for enabling new “bioelectronic” medical devices which apply electrical stimulation as an alternative to classical pharmacological treatments. These “bioelectronics” will be fully data-driven, analyzing physiological measurements in real time and using this to decide when to optimally trigger an intervention. Doing such analysis on a wearable sensor node, though, requires ultra-low power signal processing that has all of the feature extraction and signal classification operating within a power budget of a few hundred microwatts or less.

To achieve this, most works do not use any specific software platform. Instead they achieve very low power consumption by using only dedicated and highly customized hardware circuits. While there are many different approaches to realizing low-power fully custom electronics, for the hardware, the design trends are reasonably established: very low supply voltages, typically in the 0.5 to 1 V range; highly simplified circuit architectures, where a small reduction in processing accuracy leads to substantial power savings; and the use of extensive analogue processing in the very lowest power consumption circuits.[3]

Less well established are the signal processing functions for ultra-low power. Focusing on feature extractions, our 2015 review highlighted that the majority (more than half) of wearable algorithms created to date are based upon frequency information, with wavelet transforms being particularly popular.[4] This indicates a potential over-reliance on time–frequency decompositions as the best algorithmic starting points. It seems unlikely that time–frequency decompositions would provide the best, or even suitable, feature extraction across all signal types and all potential applications. There is a clear opportunity for creating wearable algorithms that are based on other feature extraction methods, such as the fractal dimension or Empirical Mode Decomposition.
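
As one small example of a non-spectral feature, here is a sketch of Petrosian's fractal dimension estimate, computed from nothing more than the number of sign changes in the signal's first difference. A real wearable implementation would be fixed-point and tuned to the target hardware, so treat this floating-point version as illustrative only.

    #include <math.h>
    #include <stddef.h>

    /* Petrosian fractal dimension of a buffer of n samples (n >= 3). */
    double petrosian_fd(const double *x, size_t n)
    {
        size_t sign_changes = 0;
        for (size_t i = 2; i < n; i++) {
            double d1 = x[i]     - x[i - 1];
            double d0 = x[i - 1] - x[i - 2];
            if (d1 * d0 < 0.0)
                sign_changes++;              /* first difference changed sign */
        }
        double N = (double)n;
        return log10(N) / (log10(N) + log10(N / (N + 0.4 * (double)sign_changes)));
    }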

Investigating this requires studying the three-way trade-off between algorithm performance (e.g., correct detections), algorithm cost (e.g., false detections), and power consumption. We know how to design signal processing algorithms, and we know how to design ultra-low power circuitry. However, combining the two opens many new degrees of freedom in the design space, and there are many opportunities and work to do in mapping feature extractions and classifiers into sub-1-V power supply dedicated hardware.


[1] G. Frantz, et al, “Ultra-low power signal processing,” IEEE Signal Processing Magazine, vol. 27, no. 2, 2010.
[2] S. A. Imtiaz, A. J. Casson, and E. Rodriguez-Villegas, “Compression in Wearable Sensor Nodes,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, 2014.
[3] A. J. Casson, et al, “Wearable Algorithms,” in E. Sazonov and M. R. Neuman (eds.), Wearable Sensors, Elsevier, 2014.
[4] A. J. Casson, “Opportunities and Challenges for Ultra Low Power Signal Processing in Wearable Healthcare,” 23rd European Signal Processing Conference, Nice, 2015.


Alex Casson is a lecturer in the Sensing, Imaging, and Signal Processing Department at the University of Manchester. His research focuses on creating next-generation human body sensors, developing both the required hardware and software. Dr. Casson earned an undergraduate degree at the University of Oxford and a PhD from Imperial College London.

The Future of Biomedical Signal Analysis Technology

Biomedical signals obtained from the human body can be beneficial in a variety of scenarios in a healthcare setting. For example, physicians can use the noninvasive sensing, recording, and processing of a heart’s electrical activity in the form of electrocardiograms (ECGs) to help make informed decisions about a patient’s cardiovascular health. A typical biomedical signal acquisition system will consist of sensors, preamplifiers, filters, analog-to-digital conversion, processing and analysis using computers, and the visual display of the outputs. Given the digital nature of these signals, intelligent methods and computer algorithms can be developed for analysis of the signals. Such processing and analysis of signals might involve the removal of instrumentation noise, power line interference, and any artifacts that act as interference to the signal of interest. The analysis can be further enhanced into a computer-aided decision-making tool by incorporating digital signal processing methods and algorithms for feature extraction and pattern analysis. In many cases, the pattern analysis module is developed to reveal hidden parameters of clinical interest, and thereby improve the diagnosis and monitoring of clinical events.
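
To make the "processing" stage of that chain a little more concrete, here is a deliberately simple sketch of one preprocessing step: a moving-average filter that attenuates high-frequency noise in a sampled signal. Real systems use carefully designed filters (for example, a notch filter at the power line frequency), so this is only a structural illustration of sample-in, cleaned-sample-out processing.

    #include <stddef.h>

    /* Smooth n samples with a sliding window of the given length. */
    void moving_average(const double *in, double *out, size_t n, size_t window)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            sum += in[i];
            if (i >= window)
                sum -= in[i - window];               /* drop the oldest sample */
            size_t len = (i + 1 < window) ? (i + 1) : window;
            out[i] = sum / (double)len;              /* average over the current window */
        }
    }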

The methods used for biomedical signal processing can be categorized into five generations. In the first generation, the techniques developed in the 1970s and 1980s were based on time-domain approaches for event analysis (e.g., using time-domain correlation approaches to detect arrhythmic events from ECGs). In the second generation, with the implementation of the Fast Fourier Transform (FFT) technique, many spectral-domain approaches were developed to get a better representation of biomedical signals for analysis. For example, coherence analysis of the spectra of brain waves, also known as electroencephalogram (EEG) signals, has provided an enhanced understanding of certain neurological disorders, such as epilepsy. During the 1980s and 1990s, the third generation of techniques was developed to handle the time-varying dynamical behavior of biomedical signals (e.g., the characteristics of polysomnographic (PSG) signals recorded during sleep possess time-varying properties reflecting the subject’s different sleep stages). In these cases, Fourier-based techniques cannot be optimally used because, by definition, Fourier provides only the spectral information and doesn’t provide a time-varying representation of signals. Therefore, the third-generation algorithms were developed to process biomedical signals to provide a time-varying representation in which clinical events can be temporally localized for many practical applications.

This essay appears in Circuit Cellar 315, October 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!

These algorithms were essentially developed for speech signals in telecommunications applications, and they were adapted and modified for biomedical applications. The nearby figure illustrates an example of a knee vibration signal obtained from two different knee joints, their spectra, and joint time-frequency representations. With the advancement in computing technologies over the past 15 years, many algorithms have been developed for machine learning and building intelligent systems. Therefore, the fourth generation of biomedical signal analysis involved the automatic quantification, classification, and recognition of time-varying biomedical signals by using advanced signal-processing concepts from time-frequency theory, neural networks, and nonlinear theory.

During the last five years, we’ve witnessed advancements in sensor technologies, wireless technologies, and material science. The development of wearable and ingestible electronic sensors marks the fifth generation of biomedical signal analysis. And as the Internet of Things (IoT) framework develops further, new opportunities will open up in the healthcare domain. For instance, the continuous and long-term monitoring of biomedical signals will soon become a reality. In addition, Internet-connected health applications will impact healthcare delivery in many positive ways. For example, it will become increasingly effective and advantageous to monitor elderly and chronically ill patients in their homes rather than hospitals.

These technological innovations will provide great opportunities for engineers to design devices from a systems perspective by taking into account patient safety, low power requirements, interoperability, and performance requirements. It will also provide computer and data scientists with a huge amount of data with variable characteristics.

The future of biomedical signal analysis looks very promising. We can expect innovative healthcare solutions that will improve everyone’s quality of life.

Sridhar (Sri) Krishnan earned a BE degree in Electronics and Communication Engineering at Anna University in Madras, India. He earned MSc and PhD degrees in Electrical and Computer Engineering at the University of Calgary. Sri is a Professor of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada, and he holds the Canada Research Chair position in Biomedical Signal Analysis. Since July 2011, Sri has been an Associate Dean (Research and Development) for the Faculty of Engineering and Architectural Science. He is also the Founding Co-Director of the Institute for Biomedical Engineering, Science and Technology (iBEST). He is an Affiliate Scientist at the Keenan Research Centre at St. Michael’s Hospital in Toronto.

The Hunt for Low-Power Remote Sensing

With the advent of the Internet of Things (IoT), the need for ultra-low power passive remote sensing is on the rise for battery-powered technologies. Digital cameras have come light years from where they were a decade ago, but low power they are not. When low-power devices need always-on remote sensing, infrared motion sensors are a great option to turn to.

Passive infrared (PIR) sensors and passive infrared detectors (PIDs) are electronic devices that detect infrared light emitted from objects within their field of view. These devices typically don’t measure absolute light levels per se; rather, they measure changes in the infrared energy reaching them. Such a change generates a very small potential across a crystalline material (gallium nitride or cesium nitrate, among others), which can be amplified to create a usable signal.
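
In practice, most PIR modules present that amplified signal to a microcontroller as a simple digital line that goes high when motion is detected. The following sketch shows the always-on pattern this enables; the board-support function names are hypothetical, not from any particular vendor's API.

    #include <stdbool.h>

    extern bool gpio_read_pir(void);         /* hypothetical: PIR module's digital output  */
    extern void wake_main_system(void);      /* hypothetical: power up the expensive parts */
    extern void enter_low_power_sleep(void); /* hypothetical: sleep until the next wake-up */

    /* Always-on supervisor: the MCU sleeps almost all the time and spends
     * power on the rest of the system only when infrared motion is detected. */
    void pir_supervisor_loop(void)
    {
        for (;;) {
            if (gpio_read_pir())
                wake_main_system();
            enter_low_power_sleep();
        }
    }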

Infrared technology was built on a foundation of older motion-sensing technologies. Motion sensing was first utilized in the early 1940s, primarily for military purposes near the end of World War II. Radar and ultrasonic detectors were the progenitors of the motion-sensing technologies seen today, relying on reflected radio or sound waves to determine the location of objects in a detection environment. Though effective for its purpose, this approach was limited to military applications and was not a reasonable option for commercial users.

This essay appears in Circuit Cellar 314 (September 2016).

 
The viability of motion detection tools began to change as infrared-sensing options entered development. The birth of modern PIR sensors came toward the end of the 1960s, when companies began to seek alternatives to the already available motion technologies that were fast becoming outdated.

The modern versions of these infrared motion sensors have taken root in many industries due to the affordability and flexibility of their use. The future of motion sensors is PID, and it has several advantages over its counterparts:

  • Saving Energy—PIDs are energy efficient. The electricity required to operate PIDs is minimal, with most units actually reducing the user’s energy consumption when compared to other commercial motion-sensing devices.
  • Inexpensive—Cost isn’t a barrier to entry for those wanting to deploy IR motion sensing technology. This sensor technology makes each individual unit affordable, allowing users to deploy multiple sensors for maximum coverage without breaking the bank.
  • Durability—It’s hard to match the ruggedness of PIDs. Most units don’t employ delicate circuitry that is easily jarred or disrupted; PIDs are routinely used outdoors and in adverse environments that would potentially damage other styles of detectors.
  • Simple and Small—The small size of PIDs work to their advantage. Innocuous sensors are ideal for security solutions that aren’t obtrusive or easily noticeable. This simplicity makes PIDs desirable for commercial security, when businesses want to avoid installing obvious security infrastructure throughout their buildings.
  • Wide Lens Range—The wide field of vision that PIDs have allows for comprehensive coverage of each location in which they are placed. PIDs easily form a “grid” of infrared detection that is ideal for detecting people, animals, or any other type of disruption that falls within the lens range.
  • Easy to Interface With—PIDs are flexible. The compact and simple nature of PIDs lets them easily integrate with other technologies, including public motion detectors for businesses and appliances like remote controls.

With the wealth of advantages PIDs have over other forms of motion-sensing technology, it stands to reason that PIR sensors and PIDs will have a place in the future of motion sensor development. Though other options are available, PIDs operate with simplicity, energy-efficiency, and a level of durability that other technologies can’t match. Though there are some exciting new developments in the field of motion-sensing technology, including peripherals for virtual reality and 3-D motion control, the reliability of infrared motion technology will have a definite role in the evolution of motion sensing technology in the years to come.

As the Head Hardware Engineer at Cyndr (www.cyndr.co), Kyle Engstrom is the company’s lead electron wrangler and firmware designer. He specializes in analog electronics and power systems. Kyle has bachelor’s degrees in electrical engineering and geology. His life as a rock hound lasted all of six months before he found his true calling in engineering. Kyle has worked three years in the aerospace industry designing cutting-edge avionics.

Software-Programmable FPGAs

Modern workloads demand higher computational capabilities at low power consumption and cost. As traditional multi-core machines do not meet the growing computing requirements, architects are exploring alternative approaches. One solution is hardware specialization in the form of application specific integrated circuits (ASICs) to perform tasks at higher performance and lower power than software implementations. The cost of developing custom ASICs, however, remains high. Reconfigurable computing fabrics, such as field-programmable gate arrays (FPGAs), offer a promising alternative to custom ASICs. FPGAs couple the benefits of hardware acceleration with flexibility and lower cost.

FPGA-based reconfigurable computing has recently taken the spotlight in academia and industry, as evidenced by Intel’s high-profile acquisition of Altera and Microsoft’s recent announcement that it will deploy thousands of FPGAs to speed up Bing search. In the coming years, we should expect to see hardware/software co-designed systems supported by reconfigurable computing become common. Conventional RTL design methodologies, however, cannot productively manage the growing complexity of the algorithms we wish to accelerate using FPGAs. Consequently, FPGA programmability is a major challenge that must be addressed both technologically—by leveraging high-level software abstractions (e.g., languages and compilers), run-time analysis tools, and readily available libraries and benchmarks—and academically, through the education of rising hardware/software engineers.

Recent efforts related to software-programmable FPGAs have focused on designing high-level synthesis (HLS) compilers. Inspired by classical C-to-gates tools, HLS compilers automatically transform programs written in traditional untimed software languages to timed hardware descriptions. State-of-the-art HLS tools include Xilinx’s Vivado HLS (C/C++) and SDAccel (OpenCL) as well as Altera’s OpenCL SDK. Although HLS is effective at translating C/C++ or OpenCL programs to RTL hardware, compilers are only a part of the story in realizing truly software-programmable FPGAs.

 
Efficient memory management is central to software development. Unfortunately, unlike traditional software programming, current FPGA design flows require application-specific memories to sustain high-performance hardware accelerators. Features such as dynamic memory allocation, pointer chasing, complex data structures, and irregular memory access patterns are also ill-supported by FPGAs. In the absence of basic software memory abstractions, experts must design custom hardware memories instead. More extensible software memory abstractions would facilitate the software programmability of FPGAs.

In addition to high-level programming and memory abstractions, run-time analysis tools such as debuggers and profilers are essential to software programming. Hardware debuggers and profilers in the form of hardware/software co-simulation tools, however, are not ready for tackling exascale systems. In fact, one of the biggest barriers to realizing software-programmable FPGAs is the hours, even days, it takes to generate bitstreams and run hardware/software co-simulators. Lengthy compilation and simulation times cause debugging and profiling to consume the majority of FPGA development cycles and deter agile software development practices. The effect is compounded when FPGAs are integrated into heterogeneous systems with CPUs and GPUs over complex memory hierarchies. New tools, modeled on architectural simulators, may aid in rapidly gathering performance, power, and area utilization statistics for FPGAs in heterogeneous systems. Another solution to long compilation and simulation times is using overlay architectures. Overlay architectures mask the FPGA’s bit-level configurability with a fixed network of simple processing nodes. The fixed hardware in overlay architectures enables faster programmability at the expense of the finer-grained, bit-level parallelism of FPGAs.

Another key facet of software programming is readily available libraries and benchmarks. Current FPGA development is marred by vendor-specific IP cores that span limited domains. As FPGAs become more software-programmable, we should expect to see more domain experts providing vendor-agnostic FPGA-based libraries and benchmarks. Realistic, representative, and reproducible vendor-agnostic libraries and benchmarks will not only make FPGA development more accessible but also serve as reference solutions for developers.

Finally, the future of software-programmable FPGAs lies not only in technological advancements but also in educating the next generation of hardware/software co-designing engineers. Software engineers are rarely concerned with the downstream architecture except when exercising expert optimizations. Higher-level abstractions and run-time analysis tools will improve FPGA programmability, but developers will still need a working knowledge of FPGAs to design competitive hardware accelerators. Following reference libraries and benchmarks, software engineers must become fluent with the notions of pipelining, unrolling, and partitioning memory into local SRAM blocks and hardened IP. Terms like throughput, latency, area utilization, power, and cycle time will enter the software engineering vernacular.
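
As a small sketch of what that vocabulary looks like in source code, here is a dot-product kernel annotated in the style of Vivado HLS pragmas; the exact pragma spellings and the best factors vary by tool, device, and version, so treat this as illustrative rather than a verified build.

    #define N 1024

    /* The pragmas ask the HLS compiler to pipeline the loop, unroll it by a
     * factor of 4, and split each array across memory banks so the unrolled
     * copies can read in parallel. */
    int dot_product(const int a[N], const int b[N])
    {
    #pragma HLS ARRAY_PARTITION variable=a cyclic factor=4
    #pragma HLS ARRAY_PARTITION variable=b cyclic factor=4
        int acc = 0;
        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1
    #pragma HLS UNROLL factor=4
            acc += a[i] * b[i];
        }
        return acc;
    }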

Recent advances in HLS compilers have demonstrated the feasibility of software-programmable FPGAs. Now, a combination of higher-level abstractions, run-time analysis tools, libraries and benchmarks must be pioneered alongside trained hardware/software co-designing engineers to realize a cohesive software engineering infrastructure for FPGAs.
 

Udit Gupta earned a BS in Electrical and Computer Engineering at Cornell University. He is currently studying toward a PhD in Computer Science at Harvard University. Udit’s past research includes exploring software-programmable FPGAs by leveraging intelligent design automation tools and evaluating high-level synthesis compilers with realistic benchmarks. He is especially interested in vertically integrated systems—exploring the computing stack from applications, tools, languages, and compilers to downstream architectures.

The Future of Sensor Technology for the IoT

Sensors are at the heart of many of the most innovative and game-changing Internet of Things (IoT) applications. We asked five engineers to share their thoughts on the future of sensor technology.


Communication will be the fastest growth area in sensor technology. A good wireless link allows sensors to be placed in remote or dynamic environments where physical cables are impractical. Home Internet of Things (IoT) sensors will continue to leverage home Wi-Fi networks, but outdoor and physically remote sensors will evolve to use cell networks. Cell networks are not just for voice anymore. Just ask your children. Phones are for texting—not for talking. The new 5G mobile service that rolls out in 2017 is designed with the Internet of Things in mind. Picocells and microcells will better organize our sensors into manageable domains. What is the best cellular data plan for your refrigerator and toaster? I can’t wait for the TV commercials. — Christopher Cantrell (Software Engineer, CGI Federal)


Sensors of the future will conglomerate into microprocessor-controlled blocks that are accessed over a network. For instance, weather sensors will report temperature, barometric pressure, humidity, wind speed, and direction, along with latitude, longitude, altitude, and time thrown in for good measure, and all of this will be available across a single I2C link. Wide area network sensor information will be available across the Internet using encrypted links. Configuration and calibration can be done using webpages, and all documentation will be stored online on the sensors themselves. Months’ worth of history will be saved to MicroSD drives or something similar. These are all things that we can dream of and implement today. Tomorrow’s sensors will solve tomorrow’s problems, and we can really only make out the barest of glimpses of what tomorrow will hold. It will be entertaining to watch the future unfold and see how much we missed. — David C. Tyler (Retired Computer Scientist)



Quo vadis electronics? During the past few decades, electrical engineering has gone through an unprecedented growth. As a result, we see electronics controlling just about everything around us. To be sure, what we call electronics today is in fact a symbiosis of hardware and software. At one time every electrical engineer worth his salt had to be able to solder and to write a program. A competent software engineer today may not understand what makes the hardware tick, just as a hardware engineer may not understand software, because it’s often too much for one person to master. In most situations, however, hardware depends on software and vice versa. While current technology enables us to do things we could not even dream about just a few years ago, when it comes to controlling or monitoring physical quantities, we remain limited by what the data sensors can provide. To mimic human intellect and more, we need sensors to convert reality into electrical signals. To that end, research scientists in the fields of physics, chemistry, biology, mathematics, and so forth work hard to discover novel, advanced sensors. Once a new sensor principle has been found, hardware and software engineers go to work to exploit its detection capabilities in practical use. In my mind, research into new sensors is presently the most important activity for sustaining progress in the field of electronic control. — George Novacek (Engineer, Columnist, Circuit Cellar)


It’s hard to imagine the future of sensors going against the general trend of lower power, greater distribution, smaller physical size, and improvements in all of the relevant parameters. With the proliferation of small connected devices beyond industrial and specialized use into homes and to average users (IoT), great advances and price drops are to be expected. Tech similar to that once reserved for top-end industrial sensor networks will be readily available. As electrical engineers, we will just have to adjust as always. After years of trying to avoid the realm of RF magic, I now find myself reading up on the best way to integrate a 2.4-GHz antenna onto my PCB. Fortunately, there is an abundance of tools, application notes, and tutorials from both the manufacturers and the community to help us with this next step. And with the amazing advances in computational power, neural networks, and various other data processing, I am eager to see what kind of additional information and predictions we can squeeze out of all those measurements. All in all, I am looking forward to a better, more connected future. And, as always, it’s a great time to be an electrical engineer. — David Gustafik (Hardware Developer, MicroStep-MIS)


Miniature IoT, sensor, and embedded technologies are the future. Today, IoT technology is a favorite focus among many electronics startups and even big corporations. In my opinion, sensor-based medical applications are going to be very important in our day-to-day lives in the not-so-distant future. BioMEMS sensors integrated on a chip have already made an impact in industry with devices like glucometers and alcohol detectors. These types of BioMEMS sensors, if integrated inside mobile phones for many medical applications, could address many human needs. Another interesting area is wireless charging. Imagine if you could charge all your devices wirelessly as soon as you walk into your home. Wouldn’t that be a great innovation that would make your life easier? So, technology has a very good future provided it can deliver solutions that really address human needs. — Nishant Mittal (Master’s Student, IIT Bombay, Mumbai)

The Future of Electronic Measurement Systems

Trends in test and measurement systems follow broader technological trends. A measurement device’s fundamental purpose is to translate a measurable quantity into something that can be discerned by a human. As such, the display technology of the day informed much of the design and performance limitations of early electronic measurement systems. Analog meters, cathode ray tubes, and paper strip recorder systems dominated. Measurement hardware could be incredibly innovative, but such equipment could only be as good as its ability to display the measurement result to the user. Early analog multimeters could only be as accurate as a person’s ability to read which dash mark the needle pointed to.

In the early days, the broader electronics market was still in its infancy and didn’t offer much from which to draw. Test equipment manufacturers developed almost everything in house, including display technology. In its heyday, Tektronix even manufactured its own cathode ray tubes. As the nascent electronics market matured, measurement equipment evolved to leverage the advances being made. Display technology stopped being such an integral piece. No longer shackled with the burden of developing everything in house, equipment makers were able to develop instruments faster and focus more on the measurement elements alone. Advances in digital electronics made digital oscilloscopes practical. Faster and cheaper processors and larger memories (and faster ADCs to fill them) then led to digital oscilloscopes dominating the market. Soon, test equipment was influenced by the rise of the PC and even began running consumer-grade operating systems.

Measurement systems of the future will continue to follow this trend and adopt advances made by the broader tech sector. Of course, measurement specs will continue to improve, driven by newly invented technologies and semiconductor process improvements. But, other trends will be just as important. As new generations raised on Apple and Android smartphones start their engineering careers, the industry will give them the latest advances in user interfaces that they have come to expect. We are already seeing test equipment start to adopt touchscreen technologies. This trend will continue as more focus is put on interface design. The latest technologies talked about today, such as haptic feedback, will appear in the instruments of tomorrow. These UI improvements will help engineers better extract the data they need.

As chip integration follows its ever steady course, bench-top equipment will get smaller. Portable measurement equipment will get lighter and last longer as it leverages low-power mobile chipsets and new battery technologies. And the lines between portable and bench-top equipment will be blurred, just as laptops have replaced desktops over the last decade. As equipment makers chase higher margins, they will increasingly focus on software to help interpret measurement data. One can imagine a subscription service to a cloud-based platform that provides better insights from the instrument on the bench.

At Aeroscope Labs (www.aeroscope.io), a company I cofounded, we are taking advantage of many broader trends in the electronics market. Our Aeroscope oscilloscope probe is a battery-powered device in a pen-sized form factor that wirelessly syncs to a tablet or phone. It simply could not exist without the amazing advances in the tech sector of the past 10 years. Because of the rise of the Internet of Things (IoT), we have access to many great radio systems-on-chip (SoCs) along with corresponding software stacks and drivers. We don’t have to develop a radio from scratch like one would have to do 20 years ago. The ubiquity of smartphones and tablets means that we don’t have to design and build our own display hardware or system software. Likewise, the popularity of portable electronics has pushed the cost of lithium polymer batteries way down. Without these new batteries, the battery life would be mere minutes instead of the multiple hours that we are able to achieve.

Just as with my company, other new companies along with the major players will continue to leverage these broader trends to create exciting new instruments. I’m excited to see what is in store.

Jonathan Ward is cofounder of Aeroscope Labs (www.aeroscope.io), based in Boulder, CO. Aeroscope Labs is developing the world’s first wireless oscilloscope probe. Jonathan has always had a passion for measurement tools and equipment. He started his career at Agilent Technologies (now Keysight) designing high-performance spectrum analyzers. Most recently, Jonathan developed high-volume consumer electronics and portable chemical analysis equipment in the San Francisco Bay Area. In addition to his decade of industry experience, he holds an MS in Electrical Engineering from Columbia University and a BSEE from Case Western Reserve University.

The Future of Robotics Technology

Advancements in technology mean that the dawn of a new era of robotics is upon us. Automation is moving out of the factory and in to the real world. As this happens, we will see significant increases in productivity as well as drastic cuts in employment. We have an opportunity to markedly improve the lives of all people. Will we seize it?

For decades, the biggest limitations in robotics were related to computing and perception. Robots couldn’t make sense of their environments and so were fixed to the floor. Their movements were precalculated and repetitive. Now, however, we are beginning to see those limitations fall away, leading to a step-change in the capabilities of robotic systems. Robots now understand their environment with high fidelity, and safely navigate through it.

On the sensing side, we’re seeing multiple order of magnitude reductions in the cost of 3-D sensors used for mapping, obstacle avoidance, and task comprehension. Time of flight cameras such as those in the Microsoft Kinect or Google Tango devices are edging their way into the mainstream in high volumes. LIDAR sensors commonly used on self-driving cars were typically $60,000 or more just a few years ago. This year at the Consumer Electronics Show (CES), however, two companies, Quanergy and Velodyne, announced new solid-state LIDAR devices that eliminate all moving parts and carry a sub-$500 price point.

Understanding 3-D sensor data is a computationally intensive task, but advancements in general purpose GPU computing have introduced new ways to quickly process the information. Smartphones are pushing the development of small, powerful processors, and we’re seeing companies like NVIDIA shipping low cost GPU/CPU combos such as the X1 that are ideal for many robotics applications.

To make sense of all this data, we’re seeing significant improvements in software for robotics. The open-source Robot Operating System (ROS), for example, is widely used in industry and at 9 years old, just hit version 2.0. Meanwhile advances in machine learning mean that computers can now perform many tasks better than humans.

All these advancements mean that robots are moving beyond the factory floor and in to the real world. Soon we’ll see a litany of problems being solved by robotics. Amazon already uses robots to lower warehousing costs, and several new companies are looking to solve the last mile delivery problem. Combined with self-driving cars and trucks this will mean drastic cost reductions for the logistics industry, with a ripple effect that lowers the cost of all goods.

As volumes go up, we will see cost reductions in expensive mechanical components such as motors and linkages. In five years, most of the patents for metal 3-D printers will expire, which will bring on a wave of competition to lower costs for new manufacturing methods.
While many will benefit greatly from these advances, there are worrying implications for others. Truck driver is the most common job in nearly every state, but within a decade those jobs will see drastic cuts. Delivery companies like Amazon Fresh and Google Shopping Express currently rely on fleets of human drivers, as do taxi services Uber and Lyft. It seems reasonable that those companies will move to automated vehicles.

Meanwhile, there are a great number of unskilled jobs that have already reduced workers to near machines. Fast food restaurants, for example, provide clear cut scripts for workers to follow, eliminating any reliance on human intelligence. It won’t be long before robots are smart enough to do those jobs too. Some people believe new jobs will be created to replace the old ones, but I believe that at some point robots will simply surpass low-skilled workers in capability and become more desirable laborers. It is my deepest hope that long before that happens, we as a society take a serious look at the way we share the collective wealth of our Earth. Robots should not simply replace workers, but eliminate the need for humans to work for survival. Robots can so significantly increase productivity that we can eliminate scarcity for all of life’s necessities. In doing so, we can provide all people with wealth and freedom unseen in human history.

Making that happen is technologically simple, but will require significant changes to the way we think about society. We need many new thinkers to generate ideas, and would do well to explore concepts like basic income and the work of philosophers like Karl Marx and Friedrich Engels, among others. The most revolutionary aspect of the change robotics brings will not be the creation of new wealth, but in how it enables access to the wealth we already have.

Taylor Alexander is a multidisciplinary engineer focused on robotics. He is founder of Flutter Wireless and works as a Software Engineer at a secretive robotics startup in Silicon Valley. When he’s not designing for open source, he’s reading about the social and political implications of robotics and writing for his blog at tlalexander.com.

This essay appears in Circuit Cellar 308, March 2016.

The Future of Wireless: Imagination Drives Innovation

Wireless system design is one of the hottest fields in electrical engineering. We recently asked 10 engineers to prognosticate on the future of wireless technology. Alexander Popov, a Bulgaria-based engineer, writes:

These days, we are constantly connected to the Internet. People expect quality service both at home and on the go. Cellular networks are meeting this demand with 4G and upcoming 5G technologies. A single person now uses as much bandwidth as an entire Internet provider did 20 years ago. We are immersed in a pool of information, but are no longer its sole producers. The era of the Internet of Things is upon us, and soon there will be more IoT devices than there are people. They require quite a different ecosystem than we people use. Their pattern of information flow is usually sporadic, with small chunks of data. Connecting to a generic Wi-Fi or cellular network is not efficient. IoT devices utilize well-established protocols like Bluetooth LE and ZigBee, but dedicated ones like LPWAN and 6LoWPAN are also being developed, and probably more will follow. We will see more sophisticated and intelligent wireless networks, probably sharing resources on different layers to form a larger WAN. An important aspect of IoT devices is their source of power. Energy harvesting and wireless power will evolve to become a standard part of the “smart” ecosystem. Improved technologies in chip manufacturing processes aid hardware not only by lowering power consumption and reducing size, but also with dedicated embedded communication stacks and chip coils. The increased amount and different types of information will allow software technologies like cloud computing and big data analysis to thrive. With information so deep in our personal lives, we may see new security standards offering better protection for our privacy. All these new technologies alone will be valuable, but the possibilities they offer combined are only limited by our imaginations. Best be prepared to explore and sketch your ideas now! — Alexander Popov, Bulgaria (Director Product Management, Minerva Networks)

The Future of Wireless: Global Internet Network

Advances in wireless technologies are driving innovation in virtually every industry, from automobiles to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. Eileen Liu, a software engineer at Lockheed Martin, writes:

Wireless technology has become increasingly prevalent in our daily lives. It has become commonplace to look up information on smartphones via invisible networks and to connect to peripheral devices using Bluetooth connections. So what should we expect to see next in the world of wireless technology? One of the major things to keep an eye on is the effort to build a global Internet network. Facebook and Google are potentially collaborating, working on drones and high-altitude helium balloons with router-like payloads. These solar-powered payloads make a radio link to a telecommunications network on Earth’s surface and broadcast Internet coverage downwards. Elon Musk and Greg Wyler are both working on a different approach, using flotillas of low-orbiting satellites. With such efforts, high-speed Internet access could become possible for the most remote locations on Earth, bringing access to the 60% of the world’s population that currently does not have it. Another technology to look out for is wireless power transfer. This technology allows multiple devices to charge simultaneously without a tether and without a dependency on directionality. Recent developments have mostly been in the realm of mobile phones and laptops, but this could expand to other electronic devices and automobiles that depend on batteries. A third technology to look out for is car-to-car communications. Several companies have been developing autonomous cars, using sensor systems to detect road conditions and surrounding vehicles. These sensors have shown promise, but have limited range and field of view and can easily be obstructed. Car-to-car communications allow vehicles to broadcast position, speed, steering-wheel position, and other data to surrounding vehicles with a range of several hundred meters. By networking cars together wirelessly, we could be one step closer to safe autonomous driving. — Eileen Liu, United States (Software Engineer, Lockheed Martin)

The Future of Wireless: Deployment Matters

Each day, wireless technology becomes more pervasive as new electronics systems hit the market and connect to the Internet. We recently asked 10 engineers to prognosticate on the future of wireless technology. Penn State Professor Chris Coulston writes:

With the Internet of Things still the big thing, we should expect exciting developments in embedded wireless in 2016 and beyond. Incremental advances in speed and power consumption will allow manufacturers to brag about having the latest and greatest chip. However, all this potential is lost unless you can deploy it easily. The FTDI FT232 serial-to-USB bridge is a success because it trades off some of the functionality of a complex protocol for a more familiar, less burdensome protocol. The demand for simplified protocols should drive manufacturers to develop solutions making complex protocols more accessible. Cutting the cord means different things to different people. While Bluetooth Low Energy (BLE) has allowed a wide swath of gadgets to go wireless, these devices still require the presence of some intermediary (like a smartphone) to manage data transfer to the cloud. Expect to see the development of intermediate technologies enabling BLE devices to “cut the cord” to smartphones. Security of wireless communication will continue to be an important element of any conversation involving new wireless technology. Fortunately, the theoretical tools needed to secure communication are well understood. Expect to see these tools trickle down as standard subsystems in embedded processors. The automotive industry is set to transform itself with self-driving cars. This revolution in transportation must be accompanied by wireless technologies allowing our cars to talk to our devices, each other, and perhaps the roadways. This is an area that is ripe for some surprising and exciting developments enabling developers to innovate in this new domain. We live in interesting times, with embedded systems playing a large role in consumer and industrial systems. With better and more accessible technology in your grasp, I hope that you have a great and innovative 2016! — Chris Coulston, United States (Associate Professor, Electrical & Computer Engineering, Penn State Erie)

The Future of Wireless: IoT “Connect Anywhere” Solutions

Wireless communications have revolutionized virtually every industry, from healthcare to defense to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. France-based engineer Robert Lacoste writes:

I don’t know if the forecasts about the Internet of Things (IoT) are realistic (some analysts predict from 20 to 100 billion devices in the next five years), but I’m sure it will be a huge market. And 99% of IoT products are and will be wireless. Currently, the vast majority of “things” connect to the Internet through a user’s smartphone, used as a gateway, typically through a Bluetooth Smart link. Other devices (e.g., home control or smart metering) require the installation of a dedicated fixed RF-to-Internet gateway, using ZigBee, 6LoWPAN, or something similar. But the next big thing will be the availability of “connect anywhere” solutions, through low-power wide area networks, nicknamed LPWA. Even if the underlying technology is not actually new (i.e., using very low bit rates to achieve long range at low power), the contenders are numerous: the LoRa Alliance, Ingenu, Sigfox, Weightless, and a couple of others. At the same time, the traditional telcos are developing very similar solutions using cellular bands and variants of the 3GPP protocols. EC-GSM, LTE-MTC, and NB-IoT are the most discussed alternatives. So, the first big question is this: Which one (or ones, as a one-size-fits-all solution is unlikely) will be the winner? The second big question has to do with whether or not IoT products will be useful for society. But that’s another story! — Robert Lacoste, France (Founder, Alciom; Columnist, Circuit Cellar)