The Future of Embedded FPGAs

The embedded FPGA is not new, but only recently has it started becoming a mainstream solution for designing chips, SoCs, and MCUs. A key driver is today’s high mask costs for advanced ICs. For a chip company designing at advanced nodes, a change in RTL can cost millions of dollars and set the design schedule back by months. Another driver is constantly changing standards. The embedded FPGA is so compelling because it provides designers with the flexibility to update RTL at any time after fabrication, even in-system. Chip designers, management, and even the CFO like it.

Given these benefits, the embedded FPGA is here to stay. However, like any technology, it will evolve to become better and more widespread. Looking back to the 1990s when ARM and others offered embedded processor IP, the technology evolved to where embedded processors appear widely on most logic chips today. This same trend will happen with embedded FPGAs. In the last few years, the number of embedded FPGA suppliers has increased dramatically: Achronix, Adicsys, Efinix, Flex Logix, Menta, NanoXplore, and QuickLogic. The first sign of market adoption was DARPA’s agreement with Flex Logix to provide TSMC 16FFC embedded FPGA for a wide range of US government applications. This first customer was critical as it validated the technology and paved the way for others to adopt.

There are a number of things driving the adoption of the embedded FPGA:

  • Mask costs are increasing rapidly: approximately $1 million for 40 nm, $2 million for 28 nm, and $4 million for 16 nm.
  • The size of the design teams required for advanced-node chips is increasing. Fewer chips are being designed, but they are expected to deliver the same functions as in the past.
  • Standards are constantly changing.
  • Data centers require programmable protocols.
  • AI and machine learning algorithms are changing constantly.

Surprisingly, embedded FPGAs don’t compete with FPGA chips. FPGA chips are used for rapid prototyping and lower-volume products that can’t justify the increasing cost of ASIC development. When systems with FPGAs hit high volume, FPGAs are generally converted to ASICs for cost reduction.

In contrast, embedded FPGAs eliminate the need for an external FPGA, and they can do things external FPGAs can’t, such as:

  • They are lower power because SERDES aren’t needed. Standard CMOS interfaces can run at 1 GHz+ in 16 nm for embedded FPGA, with hundreds or thousands of interconnects available.
  • Embedded FPGA is lower cost per LUT. There is no expensive packaging, and roughly one-third of the die area of an FPGA chip is SERDES, PLLs, DDR PHYs, and other functions that are no longer needed.
  • They support 1-GHz operation in the control path.
  • Embedded FPGAs can be optimized: lots of MACs (Multiplier-Accumulators) for DSP or none; exactly the kind of RAM needed or none.
  • Embedded FPGAs can be sized from just 100 LUTs up to very large arrays of more than 100K LUTs.
  • Embedded FPGAs can be optimized for very low power operation or very high performance.

The following markets are likely to see widespread utilization of embedded FPGAs: the Internet of Things (IoT); MCUs and customizable programmable blocks on the processor bus; defense electronics; networking chips; reconfigurable wireless base stations; flexible, reconfigurable ASICs and SoCs; and AI and deep learning accelerators.

To integrate embedded FPGAs, chip designers need them to have the following characteristics: silicon-proven IP; density in LUTs per square millimeter similar to FPGA chips; a wide range of array sizes, from hundreds of LUTs to hundreds of thousands of LUTs; options for extensive DSP support and for the kind of RAM a customer needs; IP proven in the process node a company wants, with support for their chosen VT options and metal stack; an IP implementation optimized for power or performance; and proven software tools.

Over time, embedded FPGA IP will be available on every significant foundry from 180 to 7 nm supporting a wide range of applications. This means embedded FPGA suppliers must be capable of cost-effectively “porting” their architecture to new process nodes in a short time (around six months). This is especially true because process nodes keep getting updated over time and each major step requires an IP redesign.

Early adopters of embedded FPGA will have chips with wider market potential, longer life, and higher ROI, giving designers a competitive edge over late adopters. Similar benefits will accrue to systems designers. Clearly, this technology is changing the way chips are designed, and companies will soon learn that they can’t afford to “not” adopt embedded FPGA.

This article appears in Circuit Cellar 323.

Geoff Tate is CEO/Cofounder of Flex Logix Technologies. He earned a BSc in Computer Science from the University of Alberta and an MBA from Harvard University. Prior to cofounding Rambus in 1990, Geoff served as Senior Vice President of Microprocessors and Logic at AMD.

The Future of Network-on-Chip (NoC) Architectures

Adding multiple processing cores on the same chip has become the de facto design choice as we continue extracting increasing performance per watt from our chips. Chips powering smartphones and laptops comprise four to eight cores. Those powering servers comprise tens of cores. And those in supercomputers have hundreds of cores. As transistor sizes decrease, the number of cores on-chip that can fit in the same area continues to increase, providing more processing capability each generation. But to use this capability, the interconnect fabric connecting the cores is of paramount importance to enable sharing or distributing data. It must provide low latency (for high-quality user experience), high throughput (to maintain a rate of output), and low power (so the chip doesn’t overheat).

Ideally, each core should have a dedicated connection to a core with which it’s intended to communicate. However, having dedicated point-to-point wires between all cores wouldn’t be feasible due to area, power, and wire layout constraints. Instead, for scalability, cores are connected by a shared network-on-chip (NoC). For small core counts (eight to 16), NoCs are simple buses, rings, or crossbars. However, these topologies aren’t too scalable: buses require a centralized arbiter and offer limited bandwidth; rings perform distributed arbitration but the maximum latency increases linearly with the number of cores; and crossbars offer tremendous bandwidth but are area and power limited. For large core counts, meshes are the most scalable. A mesh is formed by laying out a grid of wires and adding routers at the intersections which decide which message gets to use each wire segment each cycle, thus transmitting messages hop by hop. Each router has four ports (one in each direction) and one or more ports connecting to a core. Optimized mesh NoCs today take one to two cycles at every hop.
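The hop-by-hop forwarding a mesh router performs can be sketched in a few lines of C. This assumes XY dimension-order routing, a common, deadlock-free mesh scheme (the essay doesn’t name a specific algorithm): each router corrects the message’s X coordinate first, then its Y, then ejects it to the local core.

```c
#include <assert.h>
#include <stdlib.h>

/* Ports of a mesh router: four neighbors plus the local core. */
enum port { LOCAL, NORTH, SOUTH, EAST, WEST };

/* XY dimension-order routing: correct the X coordinate first,
 * then the Y coordinate, then eject to the local core. */
enum port route_xy(int cur_x, int cur_y, int dst_x, int dst_y) {
    if (dst_x > cur_x) return EAST;
    if (dst_x < cur_x) return WEST;
    if (dst_y > cur_y) return NORTH;
    if (dst_y < cur_y) return SOUTH;
    return LOCAL;
}

/* Hop count between two nodes in a mesh is the Manhattan distance,
 * which grows roughly with the square root of the core count,
 * rather than linearly as in a ring. */
int hops(int sx, int sy, int dx, int dy) {
    return abs(dx - sx) + abs(dy - sy);
}
```

Calling route_xy at each router moves a message one hop per router traversal toward its destination; the Manhattan distance gives the total hop count, which the one-to-two-cycles-per-hop figure above turns into a latency estimate.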

Today’s commercial many-core chips are fairly homogeneous, and thus the NoCs within them are also homogeneous and regular. But the entire computing industry is going through a massive transformation due to emerging technology, architecture, and application trends. These, in turn, will have implications for the NoC designs of the future. Let’s consider some of these trends.

An exciting and potentially disruptive technology for on-chip networks is photonics. Its advantages are extremely high bandwidth and no electrical power consumption once the signal becomes optical, enabling transmission over anything from a few millimeters to a few meters at the same power. Optical fibers have already replaced electronic cables for inter-chassis interconnections within data centers, and optical backplanes are emerging as viable alternatives within a chassis. Research in photonics for shorter interconnects—from off-die I/O to DRAM and for on-chip networks—is currently active. In 2015, researchers at Berkeley demonstrated a microprocessor chip with on-chip photonic devices for the modulation of an external laser light source and on-chip silicon waveguides as the transmission medium. These chips directly communicated via optical signals. In 2016, researchers at the Singapore-MIT Alliance for Research and Technology demonstrated LEDs as on-chip light sources using novel III-V materials. NoC architectures inspired by these advances in silicon photonics (light sources, modulators, detectors, and photonic switches) are actively researched. Once challenges related to reliable and low-power photonic devices and circuits are addressed, silicon photonics might partially or completely replace on-chip electrical wires and provide high-bandwidth data delivery to multiple processing cores.

The performance and energy scaling that used to accompany transistor technology scaling has diminished. While we have billions of transistors on-chip, switching them all simultaneously would exceed a chip’s power budget, a phenomenon known as dark silicon. Thus, general-purpose processing cores are being augmented with specialized accelerators that are turned on only for specific applications. For instance, GPUs accelerate graphics and image processing, DSPs accelerate signal processing, cryptographic accelerators perform fast encryption and decryption, and so on. Such domain-specific accelerators are 100× to 1,000× more efficient than general-purpose cores. Future chips will be built using tens to hundreds of cores and accelerators, with only a subset of them active at any time depending on the application. This places an additional burden on the NoC. First, since the physical area of each accelerator isn’t uniform (unlike cores), future NoCs are expected to be irregular and heterogeneous. This creates questions about topologies, routing algorithms, and contention management. Second, traffic over the NoC may have dynamically varying latency/bandwidth requirements based on the currently active cores and accelerators. This will require quality-of-service guarantees from the NoC, especially for chips operating in real-time IoT environments or inside data centers with tight end-to-end latency requirements.

The aforementioned domain-specific accelerators are massively parallel engines with NoCs within them, which need to be tuned for the algorithm. For instance, there’s a great deal of interest in architectures/accelerators for deep neural networks (DNN), which have shown unprecedented accuracy in vision and speech recognition tasks. Example ASICs include IBM’s TrueNorth, Google’s Tensor Processing Unit (TPU), and MIT’s Eyeriss. At an abstract level, these ASICs comprise hundreds of interconnected multiply-accumulate units (the basic computation inside a neuron). The traffic follows a map-reduce style. Map (or scatter) inputs (e.g., image pixels or filter weights in case of convolutional neural networks) to the MAC units. Reduce (or gather) partial or final outputs that are then mapped again to neurons of this/subsequent layers. The NoC needs to perform this map-reduce operation for massive datasets in a pipelined manner such that MAC units are not idle. Building a highly scalable, low-latency, high-bandwidth NoC for such DNN accelerators will be an active research area, as novel DNN algorithms continue to emerge.
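As a toy sketch of that map-reduce traffic pattern (not any particular ASIC’s dataflow), the scatter and gather phases over an array of MAC units might look like this:

```c
#include <assert.h>

#define NUM_MACS 4

/* One multiply-accumulate unit: the basic computation inside a neuron. */
typedef struct { int acc; } mac_unit;

/* "Map" (scatter): distribute input/weight pairs across the MAC units,
 * each of which accumulates its share of the products. */
void map_phase(mac_unit macs[], const int in[], const int w[], int n) {
    for (int i = 0; i < n; i++)
        macs[i % NUM_MACS].acc += in[i] * w[i];   /* multiply-accumulate */
}

/* "Reduce" (gather): collect the partial sums into one output value,
 * then clear the units so the next layer's work can be mapped onto them. */
int reduce_phase(mac_unit macs[]) {
    int sum = 0;
    for (int i = 0; i < NUM_MACS; i++) {
        sum += macs[i].acc;
        macs[i].acc = 0;
    }
    return sum;
}
```

A real accelerator pipelines these phases over massive datasets so the MAC units are never idle, which is exactly the delivery job the NoC must perform.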

You now have an idea of what’s coming in terms of the hardware-software co-design of NoCs. Future computer systems will become even more heterogeneous and distributed, and the NoC will continue to remain the communication backbone tying these together and providing high performance at low energy.

This essay appears in Circuit Cellar 322.

Dr. Tushar Krishna is an Assistant Professor of ECE at Georgia Tech. He holds a PhD (MIT), an MSE (Princeton), and a BTech (IIT Delhi). Dr. Krishna spent a year as a post-doctoral researcher at Intel and a semester at the Singapore-MIT Alliance for Research and Technology.

The Future of Embedded Computing

Although my academic background is in cybernetics and artificial intelligence, and my career started out in production software development, I have been lucky enough to spend the last few years diving head first into embedded systems development. There have been some amazing steps forward in embedded computing in recent years, and I’d like to share with you some of my favorite recent advances, and how I think they will progress.

While ever-decreasing costs of embedded computing hardware is expected and not too exciting, I think there have been a few key price points that are an indicator of things to come. In the last few months, we have seen the release of Application Processor development boards that are below $10. Tiny gigahertz-level processors that are Linux-ready for an amazingly low price. The most well-known is the Raspberry Pi Zero, which is created by the Raspberry Pi Foundation, who I believe will continue to push this impressive level of development capability into schools, really giving the next generation of engineers (and non-engineers) some hands-on experience. Perhaps a less well known release is C.H.I.P, the new development platform from Next Thing Co. The hardware is like the Pi Zero, but the drive behind the company is quite different. We’ll discuss this more later.

While the hobbyist side of embedded computing is not new, the communities and resources being built are exciting. Most of you will have heard of Arduino and Raspberry Pi. The Pi is a low-cost, easy-to-use Linux computer. Arduino is an open-source platform consisting of a super-simple IDE, tons of libraries, and a huge range of development boards. These have set a standard for members of the maker community, who expect affordable hardware, open-source designs, and strong community support, and some companies are stepping up to this.

Next Thing Co. has the goal of creating things to inspire creativity. In addition to developing low-cost hardware, they try to remove the pain from the design process and only open-source, well-documented products will do. This ethos is embodied in their C.H.I.P Pro, which is not just an open-source Linux System-on-Module. It’s built around their own GR8 IC, which contains an Allwinner 1-GHz ARM Cortex-A8, as well as 256 MB of DDR3 built in, accompanied with an open datasheet requiring no NDA, and with a one-unit minimum order quantity. This really eliminates the headaches of high-speed routing between DDR3 and the processor, and it reduces the manufacturing complexities of creating a custom Linux ready PCB. Innovation and progress like this provide a lot more value than the many other companies just producing insufficiently documented breakout boards for existing chips. I think that this will be a company to watch, and I can’t wait to see what their next ambitious project will be.

We’ve all been witnessing the ever-increasing performance of embedded systems as successive generations of smartphones and tablets are released, but when I talk about high performance, I don’t mean a measly 2+ GHz octa-core system with a few gigabytes of RAM. I’m talking about embedded supercomputing!

As far as I’m concerned, the one to watch here is NVIDIA. Their recent Tegra series brings massively parallel GPU processing to affordable embedded devices. The Tegra 4 had a quad-core CPU and 72 GPU cores. The TK1 has a quad-core CPU and 192 GPU cores, and the most recent TX1 has an octa-core CPU and 256 GPU cores that provide over 1 teraflops of processing power. These existing devices are very impressive, but NVIDIA is not slowing down development, with the Xavier expected to appear at the end of 2017. Boasting 512 GPU cores and a custom octa-core CPU architecture, the Xavier claims to deliver 20 trillion operations per second for only 20 W of power consumption.

NVIDIA is developing these systems with the intent for them to enable embedded artificial intelligence (AI) with a focus on autonomous vehicles and real-time computer vision. This is an amazing goal, as AI has historically lacked the processing power to make it practical in many applications, and I’m hoping that NVIDIA is putting an end to that. In addition to their extremely capable hardware, they are providing great software resources and support for developing deep learning systems.

We are on the horizon of some exciting advancements in the field of embedded computing. In addition to an ever-growing number of IoT and smart devices, I believe that during the next few years we’ll see embedded computing enable great advancements in AI and smart cities. Backyard developers will be able to create more impressive and advanced systems, and technical literacy will become more widespread.

This essay appears in Circuit Cellar 321.


Steve Samuels is a Cofounder and Prototype Engineer at Think Engineer LLP, a research, development and prototyping company that specializes in creating full system prototypes and proof-of-concepts for next-generation products and services. Steve has spent most of his career in commercial research and development in domains such as transportation, satellite communications, and space robotics. Having worked in a lot of different technical areas, his main technical interests are embedded systems and machine learning.

Lightweight Systems and the Future of Wireless Technology

Last November, we published engineer Alex Bucknall’s essay “Taking the ‘Hard’ Out of Hardware.” We recently followed up with him to get his thoughts on the future of ‘Net-connected wireless devices and the Internet of Things (IoT).

As we enter an age of connected devices, sensors, and objects (aka the Internet of Things), we’re beginning to see a drive for lightweight systems that allow for low power, low complexity, and long-distance communication protocols. More of the world is becoming connected and not all of these embedded devices can afford high-capacity batteries or to be connected to mains power. We’ll see a collection of protocols that can provide connectivity with just a few milliwatts of power that can be delivered through means of energy harvesting such as solar power. It’ll become essential for embedded sensors to communicate from remote locations where current standards like Wi-Fi and BLE fall behind due to range constraints. Low-Power Wide Area Networks (LPWANs) will strive to fill this gap with protocols such as Sigfox, LoRa, NB-IoT, and others stepping up to the plate. The next hurdle will be the exciting big data challenge as we start to learn more about our world via the Internet of Things! — Alex Bucknall (Developer Evangelist, Sigfox, France)

The Future of Automation

The robot invasion isn’t coming. It’s already here. One would be hard-pressed to find anything in modern “industrialized” society that doesn’t rely on a substantial level of automation during its life cycle—whether in its production, consumption, use, or (most probably) all of the above. Regardless of the definition du jour, “robots” are indeed well on their way to taking over—and not in the terrifying, apocalyptic, “Skynet” kind of way, but in a way that will universally improve the human condition.

Of course, the success of this r/evolution relies on an almost incomprehensible level of compounding innovations and advancements accompanied by a mountain of associated technical complexities and challenges. The good news is many of these challenges have already been addressed—albeit in a piecemeal manner—by focused professionals in their respective fields. The real obstacle to progress, therefore, ultimately lies in the compilation and integration of a variety of technologies and techniques from heterogeneous industries, with the end goal being a collection of cohesive systems that can be easily and intuitively implemented in industry.

Two of the most promising and critical aspects of robotics and automation today are human-machine collaboration and flexible manufacturing. Interestingly (and, perhaps, fortuitously), their problem sets are virtually identical, as the functionality of both systems inherently revolves around constantly changing and wildly unpredictable environments and tasks. These machines, therefore, have to be heavily adaptable to and continuously “aware” of their surroundings in order to maintain not only a high level of performance, but also to consistently perform safely and reliably.

Not unlike humans, machines rely on their ability to collect, analyze, and act on external data, oftentimes in a deterministic fashion—in other words, certain things must happen in a pre-defined amount of time for the system to perform as intended. These data can range from the very slow and simple (e.g., calculating temperature by reading the voltage of a thermocouple once a second) to the extremely fast and complex (e.g., running control loops for eight brushless electric motors 25,000-plus times a second). Needless to say, giving a machine the ability to perceive—be it through sight, sound, and/or touch—and act on its surroundings in “real time” is no easy task.
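The arithmetic behind such deterministic deadlines is easy to sketch. Assuming a hypothetical 25-kHz control task running on a 1-GHz core (illustrative numbers, not a specific product), the period and per-iteration cycle budget fall out of two divisions:

```c
#include <assert.h>

/* Period of a task that must run rate_hz times per second, in
 * nanoseconds: every iteration must finish within this window
 * for the system to remain deterministic. */
long period_ns(long rate_hz) {
    return 1000000000L / rate_hz;
}

/* CPU cycles available per iteration at a given core clock.
 * Everything else the processor does must fit in the leftovers. */
long cycles_per_iteration(long rate_hz, long cpu_hz) {
    return cpu_hz / rate_hz;
}
```

At 25,000 iterations per second, each pass through the control loop has a 40-microsecond window, which is why "real time" perception and actuation is no easy task.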

Computer vision (sight and perception), speech recognition (sound and language), and precision motion control (touch and motor skills) are things most people take for granted, as they are collectively fundamental to survival. Machines, however, are not “born” with these abilities, nor have they evolved them. Pile on additional layers of complexity, like communication and the ability to acquire knowledge and learn new tasks, and it becomes menacingly apparent how substantial the challenge of creating intelligent and connected automated systems really is.

While the laundry list of requirements might seem nearly impossible to address, fortunately the tools used for integrating these exceedingly complex systems have undergone their own period of hyper growth in the last several years. In much the same way that developers, researchers, engineers, and entrepreneurs have picked off industry- and application-specific problems related to the aforementioned technical hurdles, so have the people behind the hardware and software that make it possible for these independently developed, otherwise standalone solutions to be combined and interwoven, resulting in truly world-changing innovations.

For developers, only in the last few years has it become practical to leverage the combination of embedded technologies like the power-efficient, developer-friendly mobile application processor with the flexibility and raw “horsepower” of programmable logic (i.e., field-programmable gate arrays, which have historically been reserved for the aerospace/defense and telecommunication industries) at scales never previously imagined. And with rapidly growing developer communities, the platforms built around these technologies are directly facilitating the advancement of automation, and doing it all under a single silicon “roof.” There’s little doubt that increasing access to these new tools will usher in a more nimble, intelligent, safe, and affordable wave of robotics.

Looking forward, automation will undoubtedly continue to play an ever-increasingly central role in day-to-day life. As a result, careful consideration must be given to facilitating human-machine (and machine-machine) collaboration in order to accelerate innovation and overcome the technical and societal impacts bred from the disruption of the status quo. The pieces are already there, now it’s time to assemble them.

This article appears in Circuit Cellar 320.

Ryan Cousins is cofounder and CEO of krtkl, Inc. (“critical”), a San Francisco-based embedded systems company. The krtkl team created snickerdoodle—an affordable and highly reconfigurable platform for developing mechatronic, audio/video, computer vision, networking, and wireless communication systems. Ryan has a BS in mechanical engineering from UCLA. He has experience in R&D, project management, and business development in the medical and embedded systems industries.

The Importance of Widely Available Wireless Links for UAV Systems

Readily available, first-rate wireless links are essential for building and running safe UAV systems. David Weight, principal electronics engineer at Wattcircuit, recently shared his thoughts on the importance of developing and maintaining high-quality wireless links as the UAV industry expands.

One of the major challenges that is emerging in the UAV industry is maintaining wireless links with high availability. As UAVs start to share airspace with other vehicles, we need to demonstrate that a control link can be maintained in a wide variety of environments, including interference and non-line of sight. We are starting to see software defined radio used to build radios which are frequency agile and capable of using multiple modulation techniques. For example, being able to use direct links in open spaces where these are most effective, but being able to change to 4G type signals when entering more built-up areas as these areas can pose issues for direct links, but have good coverage for existing commercial telecoms. Being able to change the frequency and modulation also means that, where interference or poor signal paths are found, frequencies can be changed to avoid interference, or in extreme cases, be reduced to lower bands which allow control links to be maintained. This may mean that not all the data can be transmitted back, but it will keep the link alive and continue to transmit sufficient information to allow the pilot to control the UAV safely. — David Weight (Principal Electronics Engineer, Wattcircuit, UK)

Brain Controlled-Tech and the Future of Wireless

Wireless IoT devices are becoming increasingly common in both private and public spaces. Phil Vreugdenhil, an instructor at Camosun College in Canada, recently shared his thoughts on the future of ‘Net-connected wireless technology and the ways users will interact with it.

I see brain-controlled software and hardware seamlessly interacting with wireless IoT devices. I also foresee people interacting with their enhanced realities through fully integrated NEMS (nano-electromechanical systems) which also communicate directly with the brain, bypassing the usual pathways (eyes, ears, nose, touch, taste) much like cochlear implants and bionic eyes. I see wireless health-monitoring systems and AI doctors drastically improving efficiency in the medical system. But, I also see the safety and security pitfalls within these future systems. The potential for hacking somebody’s personal systems and altering or deleting the data they depend upon for survival makes the future of wireless technology seem scarier than it will probably be. — Phil Vreugdenhil (Instructor, Camosun College, Canada)

The Future of Test-First Embedded Software

The term “test-first” software development comes from the original days of extreme programming (XP). In Kent Beck’s 1999 book, Extreme Programming Explained: Embrace Change (Addison-Wesley), his direction is to create an automated test before making any changes to the code.

Nowadays, test-first development usually means test-driven development (TDD): a well-defined, continuous feedback cycle of code, test, and refactor. You write a test, write some code to make it pass, make improvements, and then repeat. Automation is key though, so you can run the tests easily at any time.
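As a minimal illustration of that cycle (using a plain C assert rather than a real unit test framework, and a hypothetical clamp() function), the test is written first and the code follows:

```c
#include <assert.h>

/* Step 1 (red): the test is written first, against an interface that
 * doesn't exist yet, so it initially fails to build or pass. */
int clamp(int value, int lo, int hi);

void test_clamp_limits_value_to_range(void) {
    assert(clamp( 5, 0, 10) ==  5);  /* in range: unchanged */
    assert(clamp(-3, 0, 10) ==  0);  /* below range: pinned to lo */
    assert(clamp(99, 0, 10) == 10);  /* above range: pinned to hi */
}

/* Step 2 (green): write just enough code to make the test pass.
 * Step 3 (refactor): improve the code freely; the test is the
 * safety net that catches any regression. */
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}
```

Because the whole loop is automated, the tests can be rerun after every small change, which is what makes the refactor step safe.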

TDD is well regarded as a useful software development technique. The proponents of TDD (including myself) like the way in which the code incrementally evolves from the interface as well as the comprehensive test suite that is created. The test suite is the safety net that allows the code to be refactored freely, without worry of breaking anything. It’s a powerful tool in the battle against code rot.

To date, TDD has had greater adoption in web and application development than with embedded software. Recent advances in unit test tools however are set to make TDD more accessible for embedded development.

In 2011, James Grenning published his book, Test Driven Development for Embedded C (Pragmatic Bookshelf). Six years later, this is still the authoritative reference for embedded test-first development and the entry point to TDD for many embedded software developers. It explains how TDD works in detail for an unfamiliar audience and addresses many of the traditional concerns, such as how TDD will work with custom hardware. The book is still completely relevant today, but when it was published, the state-of-the-art tools were simple unit test and mocking frameworks. These frameworks require a lot of boilerplate code to run tests, and any mock objects need to be created manually.

In the rest of the software world though, unit test tools are significantly more mature. In most other languages used for web and application development, it’s easy to create and run many unit tests, as well as to create mock objects automatically.

Since 2011, TDD tools have advanced considerably with the development of the open-source tool Ceedling. It automates the running of unit tests and the generation of mock objects in C applications, making it a lot easier to do TDD. Today, if you want to test-drive embedded software in C, you don’t need to roll your own test build system or mocks.

With better tools making unit testing easier, I suspect that in the future test-first development will be more widely adopted by embedded software developers. While previously relegated to the few early adopters willing to put in the effort, with tools lowering the barrier to entry it will be easier for everyone to do TDD.

Besides the tools to make TDD easier, another driving force behind greater adoption of test-first practices will be the simple need to produce better-quality embedded software. As embedded software continues its infiltration into all kinds of devices that run our lives, we’ll need to be able to deliver software that is more reliable and more secure.

Currently, unit tests for embedded software are most popular in regulated industries—like medical or aviation—where the regulators essentially force you to have unit tests. This is one part of a strategy to prevent you from hurting or killing people with your code. The rest of the “unregulated” embedded software world should take note of this approach.

With the rise of the Internet of things (IoT), our society is increasingly dependent on embedded devices connected to the Internet. In the future, the reliability and security of the software that runs these devices is only going to become more critical. There may not be a compelling business case for it now, but customers—and perhaps new regulators—are going to increasingly demand it. Test-first software can be one strategy to help us deal with this challenge.

This article appears in Circuit Cellar 318.

Matt Chernosky wants to help you build better embedded software—test-first with TDD. With years of experience in the automotive, industrial, and medical device fields, he’s excited about improving embedded software development. Learn more from Matt about getting started with embedded TDD.

Taking the “Hard” Out of Hardware

There’s this belief among my software developer friends that electronics are complicated, hardware is hard, and that you need a degree before you can design anything to do with electricity. They honestly believe that building electronics is more complicated than writing intricate software—that is, the software that powers thousands of people’s lives all around the world. It’s this mentality that confuses me. How can you write all of this incredible software, but believe a simple 555 timer circuit is complicated?
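That 555 circuit is a case in point: in its common astable (oscillator) configuration, the output frequency comes from one well-known formula, f ≈ 1.44 / ((R1 + 2·R2)·C). A sketch, with illustrative (hypothetical) component values in the usage below:

```c
#include <assert.h>

/* Astable 555 timer: approximate output frequency from the two timing
 * resistors and the timing capacitor, f = 1.44 / ((R1 + 2*R2) * C). */
double astable_555_hz(double r1_ohms, double r2_ohms, double c_farads) {
    return 1.44 / ((r1_ohms + 2.0 * r2_ohms) * c_farads);
}
```

With, say, R1 = 10 kΩ, R2 = 47 kΩ, and C = 100 nF, the circuit oscillates at roughly 138 Hz: three components, one line of arithmetic, and the "complicated" circuit is fully characterized.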

I wanted to discover where the idea that “hardware is hard” came from and how I could disprove it. I started with something with which almost everyone is familiar, LEGO. I spent my childhood playing with these tiny plastic bricks, building anything my seven-year-old mind could dream up, creating intricate constructions from seemingly simplistic pieces. Much like the way you build LEGO designs, electronic systems are built upon a foundation of simple components.

When you decide to design or build a system, you want to start by breaking the system down into components and functional sections that are easy to understand. You can use this approach for both digital and analog systems. The example I like to use to explain this is a phase-locked loop (PLL) FM demodulator/detector, a seemingly complicated device used to decode frequency-modulated radio signals. This system sounds like it would be impossible to build, especially for someone who isn’t familiar with electronics. I recognize that reaction from experience. I remember the first year of my undergraduate studies, when my lecturers would place extremely detailed circuit schematics on a chalkboard and expect us to understand the high-level functionality. I recall the panic this induced in a number of my peers, and it very likely put them off electronics in later years. One of the biggest problems an electronics instructor faces is teaching complexity without scaring away students.

This essay appears in Circuit Cellar 317, December 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!


What many people either don’t realize or aren’t taught is that most systems can be broken down into composite pieces. The PLL FM demodulator breaks into three main elements: the phase detector, a voltage-controlled oscillator (VCO), and a loop filter. These smaller pieces, or “building blocks,” can then be separated even further. For example, the loop filter—an element of the circuit used to remove high-frequency noise—is constructed from a simple combination of resistors, capacitors, and operational amplifiers (see Figure 1).

I’m going to use a bottom-up approach to explain the loop filter segment of this system using simple resistors (R) and capacitors (C). It is this combination of resistors and capacitors that allows you to create passive RC filters—circuits that work by allowing only specific frequencies to pass to the output. Figure 2 shows a low-pass filter, which is used to remove high-frequency signals from the output of a circuit. Note: I’m avoiding as much math as possible in this explanation, as you don’t need numerical examples to demonstrate behavior. That can come later! The performance of this RC filter can be improved by adding an amplification stage using an op-amp, as we’ll see next.
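For readers who do want a number or two, the standard first-order results can be coded up directly. The component values below are purely illustrative; the formulas, f_c = 1/(2πRC) for the cutoff and |H(f)| = 1/√(1 + (f/f_c)²) for the magnitude response, are the textbook ones for this filter:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Cutoff (-3 dB) frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_lowpass_gain(f_hz, r_ohms, c_farads):
    """Magnitude response |H(f)| of the same filter at frequency f_hz."""
    fc = rc_cutoff_hz(r_ohms, c_farads)
    return 1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2)

# Illustrative values: R = 1 kOhm, C = 100 nF
fc = rc_cutoff_hz(1e3, 100e-9)
print(round(fc, 1))                                 # → 1591.5
print(round(rc_lowpass_gain(fc, 1e3, 100e-9), 3))   # → 0.707
```

At the cutoff frequency the output amplitude has dropped to about 70.7% of the input (the -3 dB point), which is exactly the behavior the figure is meant to convey.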

Op-amps are a nice example of abstraction in electronics. We don’t normally worry about their internals, much as with a CPU or other ICs; we treat them as functional boxes with inputs and an output. As you can see in Figure 3, the op-amp is working in a “differential” mode, trying to equalize the voltages at its negative and positive terminals. It does this by outputting the amplified difference and feeding a fraction of the output back to the negative terminal through the potential divider (voltage divider) formed by R2 and R3. The differential action between the op-amp’s two input terminals produces a “boosted” output whose gain is determined by the values of R2 and R3. This amplification, in combination with the low-pass passive filter, creates what’s known as a low-pass active filter.
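Assuming the classic non-inverting configuration described above (feedback through the divider formed by R2 and R3), the ideal closed-loop gain works out to 1 + R2/R3. A minimal sketch with illustrative resistor values:

```python
def noninverting_gain(r2_ohms, r3_ohms):
    """Ideal closed-loop gain of a non-inverting op-amp stage with
    feedback resistor R2 and ground-leg resistor R3: 1 + R2/R3."""
    return 1.0 + r2_ohms / r3_ohms

# Illustrative values: R2 = 9 kOhm, R3 = 1 kOhm gives a gain of 10
print(noninverting_gain(9e3, 1e3))  # → 10.0
```

Note the limiting case: with R2 = 0 the stage degenerates to a unity-gain buffer, which is why the same building block also appears wherever a signal just needs isolation rather than amplification.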

The low-pass active filter would be one of a number of filtering elements within the loop filter, and we’ve already built up one of the circuit’s three main elements! This example starts to show how behavior is cumulative. As you gain knowledge of fundamental components, you’ll start to understand how more complex systems work. Almost all electronic systems have this building-block format. So, yes, there might be a number of behaviors to understand. But as soon as you learn the fundamentals, you can start to design and build complicated systems of your own!

Alex Bucknall earned a Bachelor’s in Electronic Engineering at the University of Warwick, UK. He is particularly interested in FPGAs and communications systems. Alex works as a Developer Evangelist for Sigfox, which offers simple, low-energy communication solutions for the Internet of Things.

The Future of Ultra-Low Power Signal Processing

One of my favorite quotes comes from IEEE Signal Processing Magazine in 2010. The authors attempted to answer the question: What does ultra-low power consumption mean? They came to the conclusion that it is where the “power source lasts longer than the useful life of the product.”[1] It’s a great answer because it’s scalable. It applies equally to signal processing circuitry inside an embedded IoT device that can never be accessed or recharged and to signal processing inside a car, where the petrol for the engine dominates the operating lifetime, not the signal processing power. It also describes exactly what a lot of science fiction has always envisioned: no changing or recharging of batteries, which people forget to do or never have enough batteries for. Rather, we have devices that simply always work.

My research focuses on healthcare applications and creating “wearable algorithms”—that is, signal processing implementations that fit within the very small power budgets available in wearable devices. Historically, this work focused on data reduction to save power. It’s well known that wireless data transmission is very power intensive. By using some power to reduce the amount of data that has to be sent, it’s possible to save lots of power in the wireless transmission stage and so increase the overall battery lifetime.

This argument has been known for a long time; there are papers dating back to at least the 1990s based on it. It’s also readily achievable. Inevitably, it depends on the precise situation, but we showed in 2014 that the power consumption of a wireless sensor node could be brought down to the level of a node without a wireless transmitter (one that uses local flash memory) using easily available, easy-to-use, off-the-shelf devices.[2]
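The transmit-versus-compute trade-off behind this argument can be sketched with back-of-the-envelope arithmetic. All of the numbers below (radio energy per bit, processing cost, compression ratio) are hypothetical, chosen only to show the shape of the calculation, not taken from the cited study:

```python
def tx_energy_uj(n_bits, energy_per_bit_nj):
    """Radio energy to transmit n_bits, in microjoules."""
    return n_bits * energy_per_bit_nj / 1000.0

def total_energy_uj(n_bits, energy_per_bit_nj, processing_uj=0.0):
    """Processing energy spent on-node plus radio energy for the bits
    that still have to be sent."""
    return processing_uj + tx_energy_uj(n_bits, energy_per_bit_nj)

# Hypothetical numbers: 10,000 raw bits at 200 nJ per transmitted bit.
raw = total_energy_uj(10_000, 200)                   # 2000 uJ, send everything
# Spend 300 uJ of processing to compress 4:1 before transmitting.
compressed = total_energy_uj(10_000 // 4, 200, 300)  # 300 + 500 = 800 uJ
print(raw, compressed)
```

With these (made-up) figures the compressed path costs well under half the energy of sending raw data; the break-even point shifts with the radio’s energy per bit, which is why the conclusion always depends on the precise situation.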

This essay appears in Circuit Cellar 316, November 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!

Today, there are many additional benefits that are being enabled by the emerging use of ultra-low power signal processing embedded in the wearable itself, and these new applications are driving the research challenges: increased device functionality; minimized system latency; reliable, robust operation over unreliable wireless links; reduction in the amount of data to be analyzed offline; better quality recordings (e.g., with motion artifact removal to prevent signal saturations); new closed-loop recording—stimulation devices; and real-time data redaction for privacy, ensuring personal data never leaves the wearable.

It’s these last two that are the focus of my research now. They’re really important for enabling new “bioelectronic” medical devices, which apply electrical stimulation as an alternative to classical pharmacological treatments. These bioelectronics will be fully data-driven, analyzing physiological measurements in real time and using the results to decide when to optimally trigger an intervention. Doing such analysis on a wearable sensor node, though, requires ultra-low power signal processing that has all of the feature extraction and signal classification operating within a power budget of a few hundred microwatts or less.

To achieve this, most works do not use any specific software platform. Instead, they achieve very low power consumption by using dedicated, highly customized hardware circuits. While there are many different approaches to realizing low-power fully custom electronics, the hardware design trends are reasonably well established: very low supply voltages, typically in the 0.5 to 1 V range; highly simplified circuit architectures, where a small reduction in processing accuracy leads to substantial power savings; and the use of extensive analogue processing in the very lowest power consumption circuits.[3]

Less well established are the signal processing functions themselves. Focusing on feature extraction, our 2015 review highlighted that the majority (more than half) of wearable algorithms created to date are based upon frequency information, with wavelet transforms being particularly popular.[4] This indicates a potential over-reliance on time–frequency decompositions as the default algorithmic starting point. It seems unlikely that time–frequency decompositions would provide the best, or even suitable, feature extraction across all signal types and all potential applications. There is a clear opportunity for creating wearable algorithms based on other feature extraction methods, such as the fractal dimension or Empirical Mode Decomposition.
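As one concrete non-frequency feature, the Katz fractal dimension can be computed with nothing more than sums, absolute values, and a couple of logarithms, which makes it plausible for tight power budgets. The sketch below is an illustrative implementation of the standard Katz formula, not code from the cited review:

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal: a waveform-complexity
    feature. L is the total curve length, d the farthest excursion from
    the first sample, and n the number of steps."""
    n = len(signal) - 1
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))
    d = max(abs(s - signal[0]) for s in signal[1:])
    if L == 0 or d == 0:
        return 1.0  # a flat signal has dimension 1 by convention
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# A straight ramp has dimension 1; an oscillating waveform scores higher.
ramp = [0.1 * i for i in range(100)]
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]
print(round(katz_fd(ramp), 3))   # → 1.0
print(katz_fd(sine) > 1.0)       # → True
```

Nothing in this computation requires a transform buffer or multiply-accumulate hardware, which is exactly the kind of property that matters when mapping a feature into sub-1-V dedicated circuits.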

Investigating this requires studying the three-way trade-off between algorithm performance (e.g., correct detections), algorithm cost (e.g., false detections), and power consumption. We know how to design signal processing algorithms, and we know how to design ultra-low power circuitry. Combining the two, however, opens many new degrees of freedom in the design space, and there are many opportunities and much work to do in mapping feature extractions and classifiers into sub-1-V dedicated hardware.

[1] G. Frantz, et al, “Ultra-low power signal processing,” IEEE Signal Processing Magazine, vol. 27, no. 2, 2010.
[2] S. A. Imtiaz, A. J. Casson, and E. Rodriguez-Villegas, “Compression in Wearable Sensor Nodes,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, 2014.
[3] A. J. Casson, et al, “Wearable Algorithms,” in E. Sazonov and M. R. Neuman (eds.), Wearable Sensors, Elsevier, 2014.
[4] A. J. Casson, “Opportunities and Challenges for Ultra Low Power Signal Processing in Wearable Healthcare,” 23rd European Signal Processing Conference, Nice, 2015.

Alex Casson is a lecturer in the Sensing, Imaging, and Signal Processing Department at the University of Manchester. His research focuses on creating next-generation human body sensors, developing both the required hardware and software. Dr. Casson earned an undergraduate degree at the University of Oxford and a PhD from Imperial College London.

The Future of Biomedical Signal Analysis Technology

Biomedical signals obtained from the human body can be beneficial in a variety of scenarios in a healthcare setting. For example, physicians can use the noninvasive sensing, recording, and processing of a heart’s electrical activity in the form of electrocardiograms (ECGs) to help make informed decisions about a patient’s cardiovascular health. A typical biomedical signal acquisition system consists of sensors, preamplifiers, filters, analog-to-digital conversion, processing and analysis using computers, and the visual display of the outputs. Given the digital nature of these signals, intelligent methods and computer algorithms can be developed to analyze them. Such processing and analysis might involve the removal of instrumentation noise, power-line interference, and any artifacts that interfere with the signal of interest. The analysis can be further enhanced into a computer-aided decision-making tool by incorporating digital signal processing methods and algorithms for feature extraction and pattern analysis. In many cases, the pattern analysis module is developed to reveal hidden parameters of clinical interest, and thereby improve the diagnosis and monitoring of clinical events.

The methods used for biomedical signal processing can be categorized into five generations. In the first generation, the techniques developed in the 1970s and 1980s were based on time-domain approaches for event analysis (e.g., using time-domain correlation to detect arrhythmic events in ECGs). In the second generation, with the implementation of the Fast Fourier Transform (FFT), many spectral-domain approaches were developed to obtain a better representation of biomedical signals for analysis. For example, coherence analysis of the spectra of brain waves, also known as electroencephalogram (EEG) signals, has provided an enhanced understanding of certain neurological disorders, such as epilepsy. During the 1980s and 1990s, a third generation of techniques was developed to handle the time-varying dynamical behavior of biomedical signals (e.g., polysomnographic (PSG) signals recorded during sleep possess time-varying properties reflecting the subject’s different sleep stages). In such cases, Fourier-based techniques cannot be optimally used because, by definition, Fourier analysis provides only spectral information, not a time-varying representation of the signal. The third-generation algorithms were therefore developed to provide a time-varying representation, so that clinical events can be temporally localized in many practical applications.
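At its core, the second-generation spectral approach amounts to projecting a signal onto sinusoids of known frequency. As an illustrative sketch (the signal and frequencies here are synthetic, not clinical data), a single-bin DFT can separate a slow physiological component from 50 Hz power-line interference:

```python
import cmath
import math

def dft_bin_magnitude(signal, f_hz, fs_hz):
    """Magnitude of the DFT of `signal` evaluated at one frequency,
    i.e., the signal's correlation with a complex sinusoid at f_hz
    (sample rate fs_hz), normalized by the number of samples."""
    n = len(signal)
    acc = sum(signal[k] * cmath.exp(-2j * math.pi * f_hz * k / fs_hz)
              for k in range(n))
    return abs(acc) / n

# Synthetic "recording": a 5 Hz physiological component plus weaker
# 50 Hz power-line interference, sampled at 500 Hz for one second.
fs = 500
x = [math.sin(2 * math.pi * 5 * k / fs)
     + 0.5 * math.sin(2 * math.pi * 50 * k / fs)
     for k in range(fs)]

print(dft_bin_magnitude(x, 5, fs) > dft_bin_magnitude(x, 50, fs))   # → True
print(dft_bin_magnitude(x, 50, fs) > dft_bin_magnitude(x, 33, fs))  # → True
```

The 5 Hz bin dominates, the 50 Hz bin shows the interference clearly, and an unoccupied bin (33 Hz) is near zero. The limitation the essay describes is visible here too: the magnitudes say nothing about when in the recording each component occurred, which is what motivated the third generation.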

This essay appears in Circuit Cellar 315, October 2016. Subscribe to Circuit Cellar to read project articles, essays, interviews, and tutorials every month!

These algorithms were essentially developed for speech signals in telecommunications applications, and they were adapted and modified for biomedical use. The nearby figure illustrates an example of knee vibration signals obtained from two different knee joints, their spectra, and joint time-frequency representations. With the advancement of computing technologies over the past 15 years, many algorithms have been developed for machine learning and building intelligent systems. The fourth generation of biomedical signal analysis has therefore involved the automatic quantification, classification, and recognition of time-varying biomedical signals using advanced signal-processing concepts from time-frequency theory, neural networks, and nonlinear theory.

During the last five years, we’ve witnessed advancements in sensor technologies, wireless technologies, and materials science. The development of wearable and ingestible electronic sensors marks the fifth generation of biomedical signal analysis. As the Internet of Things (IoT) framework develops further, new opportunities will open up in the healthcare domain. For instance, the continuous and long-term monitoring of biomedical signals will soon become a reality. In addition, Internet-connected health applications will impact healthcare delivery in many positive ways. For example, it will become increasingly effective and advantageous to monitor elderly and chronically ill patients in their homes rather than in hospitals.

These technological innovations will provide great opportunities for engineers to design devices from a systems perspective by taking into account patient safety, low power requirements, interoperability, and performance requirements. It will also provide computer and data scientists with a huge amount of data with variable characteristics.

The future of biomedical signal analysis looks very promising. We can expect innovative healthcare solutions that will improve everyone’s quality of life.

Sridhar (Sri) Krishnan earned a BE degree in Electronics and Communication Engineering at Anna University in Madras, India. He earned MSc and PhD degrees in Electrical and Computer Engineering at the University of Calgary. Sri is a Professor of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada, and he holds the Canada Research Chair position in Biomedical Signal Analysis. Since July 2011, Sri has been an Associate Dean (Research and Development) for the Faculty of Engineering and Architectural Science. He is also the Founding Co-Director of the Institute for Biomedical Engineering, Science and Technology (iBEST). He is an Affiliate Scientist at the Keenan Research Centre at St. Michael’s Hospital in Toronto.

The Hunt for Low-Power Remote Sensing

With the advent of the Internet of Things (IoT), the need for ultra-low power passive remote sensing in battery-powered technologies is on the rise. Digital cameras have come light years from where they were a decade ago, but low power they are not. When low-power devices need always-on remote sensing, infrared motion sensors are a great option to turn to.

Passive infrared (PIR) sensors and passive infrared detectors (PIDs) are electronic devices that detect infrared light emitted from objects within their field of view. These devices typically don’t measure light per se; rather, they measure changes in the infrared energy reaching the sensor. Such a change generates a very small potential across a crystalline material (gallium nitride or cesium nitrate, among others), which can be amplified to create a usable signal.

Infrared technology was built on a foundation of older motion-sensing technologies. Motion sensing was first utilized in the early 1940s, primarily for military purposes toward the end of World War II. Radar and ultrasonic detectors were the progenitors of the motion-sensing technologies seen today, relying on reflected radio or sound waves to determine the location of objects in a detection environment. Though effective for their purpose, these systems were limited to military applications and were not a reasonable option for commercial users.

This essay appears in Circuit Cellar 314 (September 2016).

The viability of motion detection tools began to change as infrared-sensing options entered development. The birth of modern PIR sensors began towards the end of the sixties, when companies began to seek alternatives to the already available motion technologies that were fast becoming outdated.

The modern versions of these infrared motion sensors have taken root in many industries due to the affordability and flexibility of their use. The future of motion sensing lies with PIDs, which have several advantages over their counterparts:

  • Saving Energy—PIDs are energy efficient. The electricity required to operate PIDs is minimal, with most units actually reducing the user’s energy consumption when compared to other commercial motion-sensing devices.
  • Inexpensive—Cost isn’t a barrier to entry for those wanting to deploy IR motion-sensing technology. The technology’s low cost makes each individual unit affordable, allowing users to deploy multiple sensors for maximum coverage without breaking the bank.
  • Durability—It’s hard to match the ruggedness of PIDs. Most units don’t employ delicate circuitry that is easily jarred or disrupted; PIDs are routinely used outdoors and in adverse environments that would potentially damage other styles of detectors.
  • Simple and Small—The small size of PIDs works to their advantage. Inconspicuous sensors are ideal for security solutions that aren’t obtrusive or easily noticeable. This simplicity makes PIDs desirable for commercial security, where businesses want to avoid installing obvious security infrastructure throughout their buildings.
  • Wide Lens Range—The wide field of view of PIDs allows for comprehensive coverage of each location in which they are placed. PIDs easily form a “grid” of infrared detection that is ideal for detecting people, animals, or any other type of disruption that falls within the lens range.
  • Easy to Interface With—PIDs are flexible. Their compact and simple nature lets them easily integrate with other technologies, including public motion detectors for businesses and appliances like remote controls.

With the wealth of advantages PIDs have over other forms of motion-sensing technology, it stands to reason that PIR sensors and PIDs will have a place in the future of motion sensor development. Though other options are available, PIDs operate with a simplicity, energy efficiency, and durability that other technologies can’t match. Though there are some exciting new developments in the field of motion sensing, including peripherals for virtual reality and 3-D motion control, the reliability of infrared motion technology ensures it a definite role in the evolution of motion sensing in the years to come.

As the Head Hardware Engineer at Cyndr, Kyle Engstrom is the company’s lead electron wrangler and firmware designer. He specializes in analog electronics and power systems. Kyle has bachelor’s degrees in electrical engineering and geology. His life as a rock hound lasted all of six months before he found his true calling in engineering. Kyle has worked three years in the aerospace industry designing cutting-edge avionics.

Software-Programmable FPGAs

Modern workloads demand higher computational capabilities at low power consumption and cost. As traditional multi-core machines do not meet the growing computing requirements, architects are exploring alternative approaches. One solution is hardware specialization in the form of application specific integrated circuits (ASICs) to perform tasks at higher performance and lower power than software implementations. The cost of developing custom ASICs, however, remains high. Reconfigurable computing fabrics, such as field-programmable gate arrays (FPGAs), offer a promising alternative to custom ASICs. FPGAs couple the benefits of hardware acceleration with flexibility and lower cost.

FPGA-based reconfigurable computing has recently taken the spotlight in academia and industry, as evidenced by Intel’s high-profile acquisition of Altera and Microsoft’s recent announcement that it will deploy thousands of FPGAs to speed up Bing search. In the coming years, we should expect hardware/software co-designed systems supported by reconfigurable computing to become common. Conventional RTL design methodologies, however, cannot productively manage the growing complexity of the algorithms we wish to accelerate using FPGAs. Consequently, FPGA programmability is a major challenge that must be addressed both technologically, by leveraging high-level software abstractions (e.g., languages and compilers), run-time analysis tools, and readily available libraries and benchmarks, and educationally, through the training of rising hardware/software engineers.

Recent efforts related to software-programmable FPGAs have focused on designing high-level synthesis (HLS) compilers. Inspired by classical C-to-gates tools, HLS compilers automatically transform programs written in traditional untimed software languages to timed hardware descriptions. State-of-the-art HLS tools include Xilinx’s Vivado HLS (C/C++) and SDAccel (OpenCL) as well as Altera’s OpenCL SDK. Although HLS is effective at translating C/C++ or OpenCL programs to RTL hardware, compilers are only a part of the story in realizing truly software-programmable FPGAs.

Efficient memory management is central to software development. Unfortunately, unlike traditional software programming, current FPGA design flows require application-specific memories to sustain high-performance hardware accelerators. Features such as dynamic memory allocation, pointer chasing, complex data structures, and irregular memory access patterns are also ill-supported by FPGAs. Lacking basic software memory abstractions, experts must instead design custom hardware memories. More extensible software memory abstractions would facilitate the software-programmability of FPGAs.

In addition to high-level programming and memory abstractions, run-time analysis tools such as debuggers and profilers are essential to software programming. Hardware debuggers and profilers in the form of hardware/software co-simulation tools, however, are not ready for tackling exascale systems. In fact, one of the biggest barriers to realizing software-programmable FPGAs is the hours, even days, it takes to generate bitstreams and run hardware/software co-simulators. Lengthy compilation and simulation times cause debugging and profiling to consume the majority of FPGA development cycles and deter agile software development practices. The effect is compounded when FPGAs are integrated into heterogeneous systems with CPUs and GPUs over complex memory hierarchies. New tools, modeled on architectural simulators, may aid in rapidly gathering performance, power, and area utilization statistics for FPGAs in heterogeneous systems. Another solution to long compilation and simulation times is overlay architectures, which mask the FPGA’s bit-level configurability with a fixed network of simple processing nodes. The fixed hardware in overlay architectures enables faster programmability at the expense of the finer-grained, bit-level parallelism of FPGAs.

Another key facet of software programming is readily available libraries and benchmarks. Current FPGA development is marred by vendor-specific IP cores that span limited domains. As FPGAs become more software-programmable, we should expect to see more domain experts providing vendor-agnostic FPGA-based libraries and benchmarks. Realistic, representative, and reproducible vendor-agnostic libraries and benchmarks will not only make FPGA development more accessible but also serve as reference solutions for developers.

Finally, the future of software-programmable FPGAs lies not only in technological advancements but also in educating the next generation of hardware/software co-designing engineers. Software engineers are rarely concerned with the downstream architecture except when exercising expert optimizations. Higher-level abstractions and run-time analysis tools will improve FPGA programmability, but developers will still need a working knowledge of FPGAs to design competitive hardware accelerators. Guided by reference libraries and benchmarks, software engineers must become fluent in the notions of pipelining, unrolling, and partitioning memory into local SRAM blocks and hardened IP. Terms like throughput, latency, area utilization, power, and cycle time will enter the software engineering vernacular.

Recent advances in HLS compilers have demonstrated the feasibility of software-programmable FPGAs. Now, a combination of higher-level abstractions, run-time analysis tools, libraries and benchmarks must be pioneered alongside trained hardware/software co-designing engineers to realize a cohesive software engineering infrastructure for FPGAs.

Udit Gupta earned a BS in Electrical and Computer Engineering at Cornell University. He is currently studying toward a PhD in Computer Science at Harvard University. Udit’s past research includes exploring software-programmable FPGAs by leveraging intelligent design automation tools and evaluating high-level synthesis compilers with realistic benchmarks. He is especially interested in vertically integrated systems—exploring the computing stack from applications, tools, languages, and compilers to downstream architectures.

The Future of Sensor Technology for the IoT

Sensors are at the heart of many of the most innovative and game-changing Internet of Things (IoT) applications. We asked five engineers to share their thoughts on the future of sensor technology.

Communication will be the fastest growth area in sensor technology. A good wireless link allows sensors to be placed in remote or dynamic environments where physical cables are impractical. Home Internet of Things (IoT) sensors will continue to leverage home Wi-Fi networks, but outdoor and physically remote sensors will evolve to use cell networks. Cell networks are not just for voice anymore. Just ask your children: phones are for texting, not for talking. The new 5G mobile service that rolls out in 2017 is designed with the Internet of Things in mind. Picocells and microcells will better organize our sensors into manageable domains. What is the best cellular data plan for your refrigerator and toaster? I can’t wait for the TV commercials. — Christopher Cantrell (Software Engineer, CGI Federal)

Sensors of the future will conglomerate into microprocessor-controlled blocks that are accessed over a network. For instance, weather sensors will report temperature, barometric pressure, humidity, wind speed, and direction, with latitude, longitude, altitude, and time thrown in for good measure, and all of this will be available across a single I2C link. Wide-area network sensor information will be available across the Internet using encrypted links. Configuration and calibration will be done via webpages, and all documentation will be stored on the sensors themselves. Months’ worth of history will be saved to microSD cards or something similar. These are all things we can dream of and implement today. Tomorrow’s sensors will solve tomorrow’s problems, and we can really only make out the barest of glimpses of what tomorrow will hold. It will be entertaining to watch the future unfold and see how much we missed. — David C. Tyler (Retired Computer Scientist)

Quo vadis, electronics? During the past few decades, electrical engineering has gone through unprecedented growth. As a result, we see electronics controlling just about everything around us. To be sure, what we call electronics today is in fact a symbiosis of hardware and software. At one time, every electrical engineer worth his salt had to be able to solder and to write a program. A competent software engineer today may not understand what makes the hardware tick, just as a hardware engineer may not understand software, because it’s often too much for one person to master. In most situations, however, hardware depends on software and vice versa. While current technology enables us to do things we could not even dream about just a few years ago, when it comes to controlling or monitoring physical quantities, we remain limited by what sensors can provide. To mimic human intellect and more, we need sensors to convert reality into electrical signals. To that end, research scientists in the fields of physics, chemistry, biology, mathematics, and so forth work hard to discover novel, advanced sensors. Once a new sensing principle has been found, hardware and software engineers go to work to exploit its detection capabilities in practical use. In my mind, research into new sensors is presently the most important activity for sustaining progress in the field of electronic control. — George Novacek (Engineer, Columnist, Circuit Cellar)

It’s hard to imagine the future of sensors going against the general trend of lower power, greater distribution, smaller physical size, and improvements in all of the relevant parameters. With the proliferation of small connected devices beyond industrial and specialized use into homes and to average users (the IoT), great advances and price drops are to be expected. Technology once reserved for top-end industrial sensor networks will be readily available. As electrical engineers, we will just have to adjust, as always. After years of trying to avoid the realm of RF magic, I now find myself reading up on the best way to integrate a 2.4-GHz antenna onto my PCB. Fortunately, there is an abundance of tools, application notes, and tutorials from both the manufacturers and the community to help us with this next step. And with the amazing advances in computational power, neural networks, and various other data processing, I am eager to see what kind of additional information and predictions we can squeeze out of all those measurements. All in all, I am looking forward to a better, more connected future. And, as always, it’s a great time to be an electrical engineer. — David Gustafik (Hardware Developer, MicroStep-MIS)

Miniature IoT, sensor, and embedded technologies are the future. Today, IoT technology is a favorite focus among many electronics startups and even big corporations. In my opinion, sensor-based medical applications are going to be very important in our day-to-day lives in the not-so-distant future. BioMEMS sensors integrated on a chip have already made an impact in industry with devices like glucometers and alcohol detectors. These types of BioMEMS sensors, if integrated into mobile phones for medical applications, could address many human needs. Another interesting area is wireless charging. Imagine if you could charge all your devices wirelessly as soon as you walked into your home. Wouldn’t that be a great innovation that would make your life easier? So, technology has a very good future provided it brings solutions that really address human needs. — Nishant Mittal (Master’s Student, IIT Bombay, Mumbai)

The Future of Electronic Measurement Systems

Trends in test and measurement systems follow broader technological trends. A measurement device’s fundamental purpose is to translate a measurable quantity into something that can be discerned by a human. As such, the display technology of the day informed much of the design and performance limitations of early electronic measurement systems. Analog meters, cathode ray tubes, and paper strip recorder systems dominated. Measurement hardware could be incredibly innovative, but such equipment could only be as good as its ability to display the measurement result to the user. Early analog multimeters could only be as accurate as a person’s ability to read which dash mark the needle pointed to.

In the early days, the broader electronics market was still in its infancy and didn’t offer much from which to draw. Test equipment manufacturers developed almost everything in house, including display technology. In its heyday, Tektronix even manufactured its own cathode ray tubes. As the nascent electronics market matured, measurement equipment evolved to leverage the advances being made. Display technology stopped being such an integral piece. No longer shackled with the burden of developing everything in house, equipment makers were able to develop instruments faster and focus more on the measurement elements alone. Advances in digital electronics made digital oscilloscopes practical. Faster and cheaper processors and larger memories (and faster ADCs to fill them) then led to digital oscilloscopes dominating the market. Soon, test equipment was influenced by the rise of the PC and even began running consumer-grade operating systems.

Measurement systems of the future will continue to follow this trend and adopt advances made by the broader tech sector. Of course, measurement specs will continue to improve, driven by newly invented technologies and semiconductor process improvements. But other trends will be just as important. As new generations raised on Apple and Android smartphones start their engineering careers, the industry will deliver the user-interface advances they have come to expect. We are already seeing test equipment start to adopt touchscreen technologies. This trend will continue as more focus is put on interface design. The latest technologies talked about today, such as haptic feedback, will appear in the instruments of tomorrow. These UI improvements will help engineers better extract the data they need.

As chip integration follows its ever steady course, bench-top equipment will get smaller. Portable measurement equipment will get lighter and last longer as it leverages low-power mobile chipsets and new battery technologies. And the lines between portable and bench-top equipment will blur, just as laptops have replaced desktops over the last decade. As equipment makers chase higher margins, they will increasingly focus on software to help interpret measurement data. One can imagine a subscription service to a cloud-based platform that provides better insights from the instrument on the bench.

At Aeroscope Labs, a company I cofounded, we are taking advantage of many broader trends in the electronics market. Our Aeroscope oscilloscope probe is a battery-powered device in a pen-sized form factor that wirelessly syncs to a tablet or phone. It simply could not exist without the amazing advances in the tech sector of the past 10 years. Because of the rise of the Internet of Things (IoT), we have access to many great radio systems on a chip (SoCs) along with corresponding software stacks and drivers. We don’t have to develop a radio from scratch as one would have to do 20 years ago. The ubiquity of smartphones and tablets means that we don’t have to design and build our own display hardware or system software. Likewise, the popularity of portable electronics has pushed the cost of lithium polymer batteries way down. Without these new batteries, the battery life would be mere minutes instead of the multiple hours that we are able to achieve.

Just as with my company, other new companies along with the major players will continue to leverage these broader trends to create exciting new instruments. I’m excited to see what is in store.

Jonathan Ward is cofounder of Aeroscope Labs, based in Boulder, CO. Aeroscope Labs is developing the world’s first wireless oscilloscope probe. Jonathan has always had a passion for measurement tools and equipment. He started his career at Agilent Technologies (now Keysight) designing high-performance spectrum analyzers. Most recently, Jonathan developed high-volume consumer electronics and portable chemical analysis equipment in the San Francisco Bay Area. In addition to his decade of industry experience, he holds an MS in Electrical Engineering from Columbia University and a BSEE from Case Western Reserve University.