Taking the “Hard” Out of Hardware

There’s this belief among my software developer friends that electronics are complicated, hardware is hard, and that you need a degree before you can design anything to do with electricity. They honestly believe that building electronics is more complicated than writing intricate software—the software that powers the lives of thousands of people all around the world. It’s this mentality that confuses me. How can you write all of this incredible software, but believe a simple 555 timer circuit is complicated?

I wanted to discover where the idea that “hardware is hard” came from and how I could disprove it. I started with something with which almost everyone is familiar, LEGO. I spent my childhood playing with these tiny plastic bricks, building anything my seven-year-old mind could dream up, creating intricate constructions from seemingly simplistic pieces. Much like the way you build LEGO designs, electronic systems are built upon a foundation of simple components.

When you decide to design or build a system, you want to start by breaking the system down into components and functional sections that are easy to understand. You can use this approach for both digital and analog systems. The example I like to use to explain this is a phase-locked loop (PLL) FM demodulator/detector, a seemingly complicated device used to decode frequency-modulated radio signals. This system sounds like it would be impossible to build, especially for someone who isn’t familiar with electronics. I recognize that feeling from experience. I remember the first year of my undergraduate studies, when my lecturers would place extremely detailed circuit schematics up on a chalkboard and expect us to understand the high-level functionality. I recall the panic this induced in a number of my peers, and it very likely put them off electronics in later years. One of the biggest problems an electronics instructor faces is teaching complexity without scaring away students.

This essay appears in Circuit Cellar 317, December 2016.

What many people either don’t realize or aren’t taught is that most systems can be broken down into composite pieces. The PLL FM demodulator breaks into three main elements: the phase detector, a voltage-controlled oscillator (VCO), and a loop filter. These smaller pieces, or “building blocks,” can then be separated even further. For example, the loop filter—an element of the circuit used to remove high-frequency noise—is constructed from a simple combination of resistors, capacitors, and operational amplifiers (see Figure 1).

I’m going to use a bottom-up approach to explain the loop filter segment of this system using simple resistors (R) and capacitors (C). It is this combination of resistors and capacitors that allows you to create passive RC filters—circuits which work by allowing only specific frequencies to pass to the output. Figure 2 shows a low-pass filter, which is used to remove high-frequency signals from the output of a circuit. Note: I’m avoiding as much math as possible in this explanation, as you don’t need numerical examples to demonstrate behavior. That can come later! The performance of this RC filter can be improved by adding an amplification stage using an op-amp, as we’ll see next.
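
For readers who do want one number to anchor the behavior: the cutoff frequency of a passive RC low-pass filter is fc = 1/(2πRC). Here is a minimal C sketch of that formula; the 10 kΩ and 100 nF values are arbitrary examples, not components taken from Figure 2:

    #include <math.h>
    #include <stdio.h>

    /* Cutoff frequency of a passive RC low-pass filter: fc = 1/(2*pi*R*C).
       Signals below fc pass largely unchanged; those above it are attenuated. */
    static double rc_cutoff_hz(double r_ohms, double c_farads)
    {
        return 1.0 / (2.0 * M_PI * r_ohms * c_farads);
    }

    int main(void)
    {
        /* Example values chosen only for illustration: 10 kohm and 100 nF. */
        printf("fc = %.1f Hz\n", rc_cutoff_hz(10e3, 100e-9)); /* ~159.2 Hz */
        return 0;
    }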

Op-amps are a nice example of abstraction in electronics. We don’t normally worry about their internals, much as with a CPU or other ICs; rather, we treat them as functional boxes with inputs and an output. As you can see in Figure 3, the op-amp works in a “differential” mode, trying to equalize the voltages at its negative and positive terminals. It does this by outputting the amplified difference and feeding a fraction of the output back to the negative terminal through the potential divider (voltage divider) formed by R2 and R3. The differential effect between the op-amp’s two input terminals produces a “boosted” output whose gain is determined by the values of R2 and R3. This amplification, in combination with the passive low-pass filter, creates what’s known as a low-pass active filter.
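
The gain arithmetic is just as compact. Assuming the standard non-inverting topology, where R2 feeds the output back to the inverting input and R3 ties that input to ground (the exact resistor roles should be checked against Figure 3), the stage gain is 1 + R2/R3:

    #include <stdio.h>

    /* Gain of a non-inverting op-amp stage: Av = 1 + R2/R3. The topology and
       the component values are assumptions for illustration only. */
    static double noninverting_gain(double r2_ohms, double r3_ohms)
    {
        return 1.0 + r2_ohms / r3_ohms;
    }

    int main(void)
    {
        /* R2 = 9 kohm and R3 = 1 kohm give a gain of 10. */
        printf("Av = %.1f\n", noninverting_gain(9e3, 1e3));
        return 0;
    }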

The low-pass active filter would be one of a number of filtering elements within the loop filter, and we’ve already built up one of the circuit’s three main elements! This example starts to show how behavior is cumulative. As you gain knowledge about fundamental components, you’ll start to understand how more complex systems work. Almost all electronic systems have this building-block format. So, yes, there might be a number of behaviors to understand. But once you learn the fundamentals, you can start to design and build complicated systems of your own!

Alex Bucknall earned a Bachelor’s in Electronic Engineering at the University of Warwick, UK. He is particularly interested in FPGAs and communications systems. Alex works as a Developer Evangelist for Sigfox, which offers simple, low-energy communication solutions for the Internet of Things.

The Future of Ultra-Low Power Signal Processing

One of my favorite quotes comes from IEEE Signal Processing Magazine in 2010. The authors attempted to answer the question: What does ultra-low power consumption mean? They came to the conclusion that it is where the “power source lasts longer than the useful life of the product.”[1] It’s a great answer because it’s scalable. It applies equally to signal processing circuitry inside an embedded IoT device that can never be accessed or recharged, and to signal processing inside a car, where the operating lifetime is dominated by the petrol for the engine, not by the signal processing power. It also describes exactly what a lot of science fiction has always envisioned: no changing or recharging of batteries, which people forget to do or never have enough batteries for. Rather, we have devices that simply always work.

My research focuses on healthcare applications and creating “wearable algorithms”—that is, signal processing implementations that fit within the very small power budgets available in wearable devices. Historically, this work focused on data reduction to save power. It’s well known that wireless data transmission is very power intensive. By using some power to reduce the amount of data that has to be sent, it’s possible to save a lot of power in the wireless transmission stage and so increase the overall battery lifetime.

This argument has been known for a long time; there are papers dating back to at least the 1990s based on it. It’s also readily achievable. Inevitably, it depends on the precise situation, but we showed in 2014 that the power consumption of a wireless sensor node could be brought down to the level of a node without a wireless transmitter (one that uses local flash memory) using easily available, easy-to-use, off-the-shelf devices.[2]
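
The arithmetic behind this trade-off is simple to sketch in C. Suppose the radio costs E_tx of energy per transmitted bit and the compressor spends E_cpu per input bit while shrinking the payload by a ratio CR; compression pays off whenever E_cpu + E_tx/CR < E_tx. The numbers below are illustrative assumptions, not measurements from the cited study:

    #include <stdio.h>

    /* Break-even check for on-node compression: spending CPU energy to shrink
       the payload pays off when the saved radio energy exceeds the processing
       cost. All numbers are illustrative assumptions. */
    int main(void)
    {
        double e_tx_nj  = 200.0; /* radio energy per transmitted bit, nJ */
        double e_cpu_nj = 20.0;  /* compressor energy per input bit, nJ  */
        double ratio    = 4.0;   /* achieved compression ratio           */

        double raw        = e_tx_nj;
        double compressed = e_cpu_nj + e_tx_nj / ratio;

        printf("per input bit: %.0f nJ raw vs %.0f nJ compressed\n",
               raw, compressed); /* 200 nJ vs 70 nJ: compression wins */
        return 0;
    }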

This essay appears in Circuit Cellar 316, November 2016.

Today, many additional benefits are being enabled by the emerging use of ultra-low power signal processing embedded in the wearable itself, and these new applications are driving the research challenges: increased device functionality; minimized system latency; reliable, robust operation over unreliable wireless links; reduction in the amount of data to be analyzed offline; better quality recordings (e.g., with motion artifact removal to prevent signal saturation); new closed-loop recording and stimulation devices; and real-time data redaction for privacy, ensuring personal data never leaves the wearable.

It’s these last two that are the focus of my research now. They’re really important for enabling new “bioelectronic” medical devices, which apply electrical stimulation as an alternative to classical pharmacological treatments. These bioelectronics will be fully data-driven, analyzing physiological measurements in real time and using this analysis to decide when to optimally trigger an intervention. Doing such analysis on a wearable sensor node, though, requires ultra-low power signal processing, with all of the feature extraction and signal classification operating within a power budget of a few hundred microwatts or less.

To achieve this, most works do not use any specific software platform. Instead, they achieve very low power consumption by using only dedicated and highly customized hardware circuits. While there are many different approaches to realizing low-power fully custom electronics, the hardware design trends are reasonably established: very low supply voltages, typically in the 0.5 to 1 V range; highly simplified circuit architectures, where a small reduction in processing accuracy leads to substantial power savings; and the use of extensive analogue processing in the very lowest power consumption circuits.[3]

Less well established are the signal processing functions for ultra-low power. Focusing on feature extraction, our 2015 review highlighted that the majority (more than half) of wearable algorithms created to date are based upon frequency information, with wavelet transforms being particularly popular.[4] This indicates a potential over-reliance on time–frequency decompositions as the default algorithmic starting point. It seems unlikely that time–frequency decompositions would provide the best, or even suitable, feature extraction across all signal types and all potential applications. There is a clear opportunity for creating wearable algorithms that are based on other feature extraction methods, such as the fractal dimension or empirical mode decomposition.
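
As one example of a non-frequency feature, here is a minimal C implementation of the Katz fractal dimension in its common amplitude-only form, FD = log10(n) / (log10(n) + log10(d/L)), where n is the number of steps, L the summed sample-to-sample curve length, and d the maximum excursion from the first sample. The test signal is made up:

    #include <math.h>
    #include <stdio.h>

    /* Katz fractal dimension, amplitude-only form:
       FD = log10(n) / (log10(n) + log10(d/L)),
       where n is the number of steps, L the summed sample-to-sample curve
       length, and d the maximum excursion from the first sample. Assumes a
       non-constant input signal. */
    static double katz_fd(const double *x, int n)
    {
        double length = 0.0, dmax = 0.0;
        for (int i = 1; i < n; i++) {
            length += fabs(x[i] - x[i - 1]);
            double d = fabs(x[i] - x[0]);
            if (d > dmax)
                dmax = d;
        }
        double steps = (double)(n - 1);
        return log10(steps) / (log10(steps) + log10(dmax / length));
    }

    int main(void)
    {
        /* Made-up test signal. */
        double sig[] = {0.0, 1.0, 0.5, 1.5, 0.2, 1.1, 0.4, 1.3};
        printf("Katz FD = %.3f\n", katz_fd(sig, 8));
        return 0;
    }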

Investigating this requires studying the three-way trade-off between algorithm performance (e.g., correct detections), algorithm cost (e.g., false detections), and power consumption. We know how to design signal processing algorithms, and we know how to design ultra-low power circuitry. Combining the two, however, opens many new degrees of freedom in the design space, and there is much work to do in mapping feature extractions and classifiers into sub-1-V power supply dedicated hardware.


[1] G. Frantz et al., “Ultra-low power signal processing,” IEEE Signal Processing Magazine, vol. 27, no. 2, 2010.
[2] S. A. Imtiaz, A. J. Casson, and E. Rodriguez-Villegas, “Compression in Wearable Sensor Nodes,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, 2014.
[3] A. J. Casson et al., “Wearable Algorithms,” in E. Sazonov and M. R. Neuman (eds.), Wearable Sensors, Elsevier, 2014.
[4] A. J. Casson, “Opportunities and Challenges for Ultra Low Power Signal Processing in Wearable Healthcare,” 23rd European Signal Processing Conference, Nice, 2015.


Alex Casson is a lecturer in the Sensing, Imaging, and Signal Processing Department at the University of Manchester. His research focuses on creating next-generation human body sensors, developing both the required hardware and software. Dr. Casson earned an undergraduate degree at the University of Oxford and a PhD from Imperial College London.

The Future of Biomedical Signal Analysis Technology

Biomedical signals obtained from the human body can be beneficial in a variety of scenarios in a healthcare setting. For example, physicians can use the noninvasive sensing, recording, and processing of a heart’s electrical activity in the form of electrocardiograms (ECGs) to help make informed decisions about a patient’s cardiovascular health. A typical biomedical signal acquisition system consists of sensors, preamplifiers, filters, analog-to-digital conversion, processing and analysis using computers, and the visual display of the outputs. Given the digital nature of these signals, intelligent methods and computer algorithms can be developed for their analysis. Such processing might involve the removal of instrumentation noise, power line interference, and any artifacts that act as interference to the signal of interest. The analysis can be further enhanced into a computer-aided decision-making tool by incorporating digital signal processing methods and algorithms for feature extraction and pattern analysis. In many cases, the pattern analysis module is developed to reveal hidden parameters of clinical interest, thereby improving the diagnosis and monitoring of clinical events.
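
To make the interference-removal stage concrete, here is a minimal C sketch of a biquad notch filter of the kind commonly centered on the powerline frequency. The coefficients follow the widely published RBJ “Audio EQ Cookbook” formulas; the 360 Hz sampling rate, 60 Hz notch, and Q of 30 are illustrative choices rather than values from the essay:

    #include <math.h>
    #include <stdio.h>

    /* Direct Form I biquad notch filter using the standard RBJ cookbook
       coefficient formulas. Sampling rate, notch frequency, and Q below
       are illustrative assumptions. */
    typedef struct { double b0, b1, b2, a1, a2, x1, x2, y1, y2; } Biquad;

    static void notch_init(Biquad *f, double fs, double f0, double q)
    {
        double w0 = 2.0 * M_PI * f0 / fs;
        double alpha = sin(w0) / (2.0 * q);
        double a0 = 1.0 + alpha;
        f->b0 = 1.0 / a0;
        f->b1 = -2.0 * cos(w0) / a0;
        f->b2 = 1.0 / a0;
        f->a1 = -2.0 * cos(w0) / a0;
        f->a2 = (1.0 - alpha) / a0;
        f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
    }

    static double notch_step(Biquad *f, double x)
    {
        double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                 - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1; f->x1 = x;
        f->y2 = f->y1; f->y1 = y;
        return y;
    }

    int main(void)
    {
        Biquad f;
        notch_init(&f, 360.0, 60.0, 30.0);
        /* Feed a pure 60 Hz tone; after the transient the output decays
           toward zero, showing the interference being removed. */
        for (int n = 0; n < 360; n++) {
            double x = sin(2.0 * M_PI * 60.0 * n / 360.0);
            double y = notch_step(&f, x);
            if (n % 60 == 0) printf("n=%3d  y=%+.4f\n", n, y);
        }
        return 0;
    }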

The methods used for biomedical signal processing can be categorized into five generations. In the first generation, the techniques developed in the 1970s and 1980s were based on time-domain approaches for event analysis (e.g., using time-domain correlation approaches to detect arrhythmic events from ECGs). In the second generation, with the implementation of the Fast Fourier Transform (FFT) technique, many spectral-domain approaches were developed to get a better representation of biomedical signals for analysis. For example, coherence analysis of the spectra of brain waves, also known as electroencephalogram (EEG) signals, has provided an enhanced understanding of certain neurological disorders, such as epilepsy. During the 1980s and 1990s, the third generation of techniques was developed to handle the time-varying dynamical behavior of biomedical signals (e.g., polysomnographic (PSG) signals recorded during sleep possess time-varying properties reflecting the subject’s different sleep stages). In these cases, Fourier-based techniques cannot be optimally used, because the Fourier transform provides only spectral information, not a time-varying representation of the signal. Therefore, third-generation algorithms were developed to give biomedical signals a time-varying representation so that clinical events can be temporally localized for many practical applications.

This essay appears in Circuit Cellar 315, October 2016.

These algorithms were originally developed for speech signals in telecommunications applications, and they were adapted and modified for biomedical applications. The nearby figure illustrates an example of a knee vibration signal obtained from two different knee joints, their spectra, and joint time-frequency representations. With the advancement of computing technologies over the past 15 years, many algorithms have been developed for machine learning and building intelligent systems. The fourth generation of biomedical signal analysis has therefore involved the automatic quantification, classification, and recognition of time-varying biomedical signals, using advanced signal-processing concepts from time-frequency theory, neural networks, and nonlinear theory.

During the last five years, we’ve witnessed advancements in sensor technologies, wireless technologies, and materials science. The development of wearable and ingestible electronic sensors marks the fifth generation of biomedical signal analysis. And as the Internet of Things (IoT) framework develops further, new opportunities will open up in the healthcare domain. For instance, the continuous, long-term monitoring of biomedical signals will soon become a reality. In addition, Internet-connected health applications will impact healthcare delivery in many positive ways. For example, it will become increasingly effective and advantageous to monitor elderly and chronically ill patients in their homes rather than in hospitals.

These technological innovations will provide great opportunities for engineers to design devices from a systems perspective by taking into account patient safety, low power requirements, interoperability, and performance requirements. It will also provide computer and data scientists with a huge amount of data with variable characteristics.

The future of biomedical signal analysis looks very promising. We can expect innovative healthcare solutions that will improve everyone’s quality of life.

Sridhar (Sri) Krishnan earned a BE degree in Electronics and Communication Engineering at Anna University in Madras, India. He earned MSc and PhD degrees in Electrical and Computer Engineering at the University of Calgary. Sri is a Professor of Electrical and Computer Engineering at Ryerson University in Toronto, Ontario, Canada, and he holds the Canada Research Chair position in Biomedical Signal Analysis. Since July 2011, Sri has been an Associate Dean (Research and Development) for the Faculty of Engineering and Architectural Science. He is also the Founding Co-Director of the Institute for Biomedical Engineering, Science and Technology (iBEST). He is an Affiliate Scientist at the Keenan Research Centre at St. Michael’s Hospital in Toronto.

The Hunt for Low-Power Remote Sensing

With the advent of the Internet of Things (IoT), the need for ultra-low power passive remote sensing is on the rise in battery-powered technologies. Digital cameras have come light years from where they were a decade ago, but low power they are not. When a low-power design needs always-on remote sensing, infrared motion sensors are a great option to turn to.

Passive infrared (PIR) sensors and passive infrared detectors (PIDs) are electronic devices that detect infrared light emitted from objects within their field of view. These devices typically don’t measure light per se; rather, they measure the change in the infrared energy reaching them. This change generates a very small potential across a crystalline material (gallium nitride and cesium nitrate, among others), which can be amplified to create a usable signal.
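
In software terms, that conditioning chain often reduces to tracking a slow baseline and flagging excursions from it. The C sketch below fakes the amplified sensor voltage with a synthetic array (a real design would read an ADC behind the amplifier); the smoothing constant and trip threshold are arbitrary assumptions:

    #include <math.h>
    #include <stdio.h>

    /* Toy PIR-style motion detector: track a slow exponential-moving-average
       baseline and flag samples that deviate from it by more than a threshold.
       Sensor values are synthetic; a real system would read an ADC. */
    int main(void)
    {
        double samples[] = {0.50, 0.50, 0.51, 0.50, 0.62, 0.70, 0.66,
                            0.52, 0.50, 0.49, 0.50};
        double baseline = samples[0];
        const double alpha = 0.05;      /* baseline tracking speed (assumed) */
        const double threshold = 0.08;  /* trip level in volts (assumed)     */

        for (int i = 0; i < (int)(sizeof samples / sizeof samples[0]); i++) {
            if (fabs(samples[i] - baseline) > threshold)
                printf("sample %d: motion detected (%.2f V)\n", i, samples[i]);
            baseline += alpha * (samples[i] - baseline);
        }
        return 0;
    }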

Infrared technology was built on a foundation of older motion-sensing technologies. Motion sensing was first utilized in the early 1940s, primarily for military purposes near the end of World War II. Radar and ultrasonic detectors were the progenitors of the motion-sensing technologies seen today, relying on reflected radio or sound waves to determine the location of objects in a detection environment. Though effective for their purpose, these systems were limited to military applications and were not a reasonable option for commercial users.

This essay appears in Circuit Cellar 314, September 2016.

The viability of motion detection tools began to change as infrared-sensing options entered development. The birth of modern PIR sensors came toward the end of the 1960s, when companies began to seek alternatives to the already available motion-sensing technologies that were fast becoming outdated.

The modern versions of these infrared motion sensors have taken root in many industries thanks to their affordability and flexibility. The future of motion sensing is the PID, which has several advantages over its counterparts:

  • Saving Energy—PIDs are energy efficient. The electricity required to operate PIDs is minimal, with most units actually reducing the user’s energy consumption compared to other commercial motion-sensing devices.
  • Inexpensive—Cost isn’t a barrier to entry for those wanting to deploy IR motion-sensing technology. Each individual unit is affordable, allowing users to deploy multiple sensors for maximum coverage without breaking the bank.
  • Durability—It’s hard to match the ruggedness of PIDs. Most units don’t employ delicate circuitry that is easily jarred or disrupted; PIDs are routinely used outdoors and in adverse environments that would potentially damage other styles of detectors.
  • Simple and Small—The small size of PIDs works to their advantage. Innocuous sensors are ideal for security solutions that aren’t obtrusive or easily noticeable. This simplicity makes PIDs desirable for commercial security, where businesses want to avoid installing obvious security infrastructure throughout their buildings.
  • Wide Lens Range—The wide field of vision that PIDs have allows for comprehensive coverage of each location in which they are placed. PIDs easily form a “grid” of infrared detection that is ideal for detecting people, animals, or any other type of disruption that falls within the lens range.
  • Easy to Interface With—PIDs are flexible. Their compact and simple nature lets them easily integrate with other technologies, including public motion detectors for businesses and appliances like remote controls.

With the wealth of advantages PIDs have over other forms of motion-sensing technology, it stands to reason that PIR sensors and PIDs will have a place in the future of motion sensor development. Though other options are available, PIDs operate with a simplicity, energy efficiency, and durability that other technologies can’t match. And though there are some exciting new developments in the field of motion sensing, including peripherals for virtual reality and 3-D motion control, reliable infrared motion technology will have a definite role in the evolution of motion sensing in the years to come.

As the Head Hardware Engineer at Cyndr (www.cyndr.co), Kyle Engstrom is the company’s lead electron wrangler and firmware designer. He specializes in analog electronics and power systems. Kyle has bachelor’s degrees in electrical engineering and geology. His life as a rock hound lasted all of six months before he found his true calling in engineering. Kyle has worked three years in the aerospace industry designing cutting-edge avionics.

Software-Programmable FPGAs

Modern workloads demand higher computational capabilities at low power consumption and cost. As traditional multi-core machines do not meet the growing computing requirements, architects are exploring alternative approaches. One solution is hardware specialization in the form of application specific integrated circuits (ASICs) to perform tasks at higher performance and lower power than software implementations. The cost of developing custom ASICs, however, remains high. Reconfigurable computing fabrics, such as field-programmable gate arrays (FPGAs), offer a promising alternative to custom ASICs. FPGAs couple the benefits of hardware acceleration with flexibility and lower cost.

FPGA-based reconfigurable computing has recently taken the spotlight in academia and industry, as evidenced by Intel’s high-profile acquisition of Altera and Microsoft’s recent announcement that it will deploy thousands of FPGAs to speed up Bing search. In the coming years, we should expect hardware/software co-designed systems supported by reconfigurable computing to become common. Conventional RTL design methodologies, however, cannot productively manage the growing complexity of the algorithms we wish to accelerate using FPGAs. Consequently, FPGA programmability is a major challenge that must be addressed technologically, by leveraging high-level software abstractions (e.g., languages and compilers), run-time analysis tools, and readily available libraries and benchmarks, as well as educationally, through the training of rising hardware/software engineers.

Recent efforts related to software-programmable FPGAs have focused on designing high-level synthesis (HLS) compilers. Inspired by classical C-to-gates tools, HLS compilers automatically transform programs written in traditional untimed software languages into timed hardware descriptions. State-of-the-art HLS tools include Xilinx’s Vivado HLS (C/C++) and SDAccel (OpenCL) as well as Altera’s OpenCL SDK. Although HLS is effective at translating C/C++ or OpenCL programs to RTL hardware, compilers are only a part of the story in realizing truly software-programmable FPGAs.
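
To give a flavor of what these compilers consume, here is the kind of untimed C an HLS flow can turn into pipelined hardware. The pragmas are Vivado HLS style; the FIR function, tap count, and coefficients are made up for illustration:

    #include <stdio.h>

    #define N_TAPS 8

    /* A simple FIR filter written as untimed C for an HLS flow. The pragmas
       (Vivado HLS style) ask the tool to pipeline the loop and to split the
       coefficient array into registers; names and sizes are illustrative. */
    int fir(const int sample[N_TAPS], const int coeff[N_TAPS])
    {
    #pragma HLS ARRAY_PARTITION variable=coeff complete
        int acc = 0;
        for (int i = 0; i < N_TAPS; i++) {
    #pragma HLS PIPELINE II=1
            acc += sample[i] * coeff[i];
        }
        return acc;
    }

    int main(void) /* software test bench; ignored by the hardware flow */
    {
        int x[N_TAPS] = {1, 2, 3, 4, 5, 6, 7, 8};
        int h[N_TAPS] = {1, 0, -1, 0, 1, 0, -1, 0};
        printf("%d\n", fir(x, h)); /* 1 - 3 + 5 - 7 = -4 */
        return 0;
    }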

Efficient memory management is central to software development. Unfortunately, unlike traditional software programming, current FPGA design flows require application-specific memories to sustain high-performance hardware accelerators. Features such as dynamic memory allocation, pointer chasing, complex data structures, and irregular memory access patterns are also ill-supported by FPGAs. In the absence of basic software memory abstractions, experts must design custom hardware memories; more extensible software memory abstractions would go a long way toward making FPGAs truly software-programmable.

In addition to high-level programming and memory abstractions, run-time analysis tools such as debuggers and profilers are essential to software programming. Hardware debuggers and profilers in the form of hardware/software co-simulation tools, however, are not ready for tackling exascale systems. In fact, one of the biggest barriers to realizing software-programmable FPGAs is the hours, even days, it takes to generate bitstreams and run hardware/software co-simulators. Lengthy compilation and simulation times cause debugging and profiling to consume the majority of FPGA development cycles and deter agile software development practices. The effect is compounded when FPGAs are integrated into heterogeneous systems with CPUs and GPUs over complex memory hierarchies. New tools, modeled on architectural simulators, may aid in rapidly gathering performance, power, and area utilization statistics for FPGAs in heterogeneous systems. Another solution to long compilation and simulation times is overlay architectures, which mask the FPGA’s bit-level configurability with a fixed network of simple processing nodes. The fixed hardware in overlay architectures enables faster programmability at the expense of the finer-grained, bit-level parallelism of FPGAs.

Another key facet of software programming is readily available libraries and benchmarks. Current FPGA development is marred by vendor-specific IP cores that span limited domains. As FPGAs become more software-programmable, we should expect to see more domain experts providing vendor-agnostic FPGA-based libraries and benchmarks. Realistic, representative, and reproducible vendor-agnostic libraries and benchmarks will not only make FPGA development more accessible but also serve as reference solutions for developers.

Finally, the future of software-programmable FPGAs lies not only in technological advancements but also in educating the next generation of hardware/software co-designing engineers. Software engineers are rarely concerned with the downstream architecture except when exercising expert optimizations. Higher-level abstractions and run-time analysis tools will improve FPGA programmability, but developers will still need a working knowledge of FPGAs to design competitive hardware accelerators. Following reference libraries and benchmarks, software engineers must become fluent with the notions of pipelining, unrolling, partitioning memory into local SRAM blocks, and hardened IP. Terms like throughput, latency, area utilization, power, and cycle time will enter the software engineering vernacular.

Recent advances in HLS compilers have demonstrated the feasibility of software-programmable FPGAs. Now, a combination of higher-level abstractions, run-time analysis tools, libraries and benchmarks must be pioneered alongside trained hardware/software co-designing engineers to realize a cohesive software engineering infrastructure for FPGAs.

Udit Gupta earned a BS in Electrical and Computer Engineering at Cornell University. He is currently studying toward a PhD in Computer Science at Harvard University. Udit’s past research includes exploring software-programmable FPGAs by leveraging intelligent design automation tools and evaluating high-level synthesis compilers with realistic benchmarks. He is especially interested in vertically integrated systems—exploring the computing stack from applications, tools, languages, and compilers to downstream architectures.

The Future of Sensor Technology for the IoT

Sensors are at the heart of many of the most innovative and game-changing Internet of Things (IoT) applications. We asked five engineers to share their thoughts on the future of sensor technology.


Communication will be the fastest growth area in sensor technology. A good wireless link allows sensors to be placed in remote or dynamic environments where physical cables are impractical. Home Internet of Things (IoT) sensors will continue to leverage home Wi-Fi networks, but outdoor and physically remote sensors will evolve to use cell networks. Cell networks are not just for voice anymore. Just ask your children. Phones are for texting—not for talking. The new 5G mobile service that rolls out in 2017 is designed with the Internet of Things in mind. Picocells and microcells will better organize our sensors into manageable domains. What is the best cellular data plan for your refrigerator and toaster? I can’t wait for the TV commercials. — Christopher Cantrell (Software Engineer, CGI Federal)


Sensors of the future will conglomerate into microprocessor-controlled blocks that are accessed over a network. For instance, weather sensors will report temperature, barometric pressure, humidity, wind speed, and wind direction, with latitude, longitude, altitude, and time thrown in for good measure, and all of this will be available across a single I2C link. Wide-area network sensor information will be available across the Internet using encrypted links. Configuration and calibration will be done using webpages, and all documentation will be stored online on the sensors themselves. Months’ worth of history will be saved to MicroSD cards or something similar. These are all things that we can dream of and implement today. Tomorrow’s sensors will solve tomorrow’s problems, and we can really only make out the barest of glimpses of what tomorrow will hold. It will be entertaining to watch the future unfold and see how much we missed. — David C. Tyler (Retired Computer Scientist)



Quo vadis, electronics? During the past few decades, electrical engineering has gone through unprecedented growth. As a result, we see electronics controlling just about everything around us. To be sure, what we call electronics today is in fact a symbiosis of hardware and software. At one time, every electrical engineer worth his salt had to be able to solder and to write a program. A competent software engineer today may not understand what makes the hardware tick, just as a hardware engineer may not understand software, because it’s often too much for one person to master. In most situations, however, hardware depends on software and vice versa. While current technology enables us to do things we could not even dream about just a few years ago, when it comes to controlling or monitoring physical quantities, we remain limited by what the sensors can provide. To mimic human intellect and more, we need sensors to convert reality into electrical signals. To that end, research scientists in the fields of physics, chemistry, biology, mathematics, and so forth work hard to discover novel, advanced sensors. Once a new sensor principle has been found, hardware and software engineers go to work to exploit its detection capabilities in practical use. In my mind, research into new sensors is presently the most important activity for sustaining progress in the field of electronic control. — George Novacek (Engineer, Columnist, Circuit Cellar)


It’s hard to imagine the future of sensors going against the general trend of lower power, greater distribution, smaller physical size, and improvements in all of the relevant parameters. With the proliferation of small connected devices beyond industrial and specialized use into homes and to average users (IoT), great advances and price drops are to be expected. Tech similar to that once reserved for top-end industrial sensor networks will be readily available. As electrical engineers, we will just have to adjust, as always. After years of trying to avoid the realm of RF magic, I now find myself reading up on the best way to integrate a 2.4-GHz antenna onto my PCB. Fortunately, there is an abundance of tools, application notes, and tutorials from both the manufacturers and the community to help us with this next step. And with the amazing advances in computational power, neural networks, and various other kinds of data processing, I am eager to see what additional information and predictions we can squeeze out of all those measurements. All in all, I am looking forward to a better, more connected future. And, as always, it’s a great time to be an electrical engineer. — David Gustafik (Hardware Developer, MicroStep-MIS)


Miniature IoT, sensor, and embedded technologies are the future. Today, IoT technology is a favorite focus among many electronics startups and even big corporations. In my opinion, sensor-based medical applications are going to be very important in our day-to-day lives in the not-so-distant future. BioMEMS sensors integrated on a chip have already made an impact in industry with devices like glucometers and alcohol detectors. These types of BioMEMS sensors, if integrated inside mobile phones, could address many human needs in medical applications. Another interesting area is wireless charging. Imagine if you could charge all your devices wirelessly as soon as you walked into your home. Wouldn’t that be a great innovation, one that would make your life easier? So, technology has a very good future provided it can bring out solutions that really address human needs. — Nishant Mittal (Master’s Student, IIT Bombay, Mumbai)

The Future of Electronic Measurement Systems

Trends in test and measurement systems follow broader technological trends. A measurement device’s fundamental purpose is to translate a measurable quantity into something that can be discerned by a human. As such, the display technology of the day informed much of the design and performance limitations of early electronic measurement systems. Analog meters, cathode ray tubes, and paper strip recorder systems dominated. Measurement hardware could be incredibly innovative, but such equipment could only be as good as its ability to display the measurement result to the user. Early analog multimeters could only be as accurate as a person’s ability to read which dash mark the needle pointed to.

In the early days, the broader electronics market was still in its infancy and didn’t offer much from which to draw. Test equipment manufacturers developed almost everything in house, including display technology. In its heyday, Tektronix even manufactured its own cathode ray tubes. As the nascent electronics market matured, measurement equipment evolved to leverage the advances being made. Display technology stopped being such an integral piece. No longer shackled with the burden of developing everything in house, equipment makers were able to develop instruments faster and focus more on the measurement elements alone. Advances in digital electronics made digital oscilloscopes practical. Faster and cheaper processors and larger memories (and faster ADCs to fill them) then led to digital oscilloscopes dominating the market. Soon, test equipment was influenced by the rise of the PC and even began running consumer-grade operating systems.

Measurement systems of the future will continue to follow this trend and adopt advances made by the broader tech sector. Of course, measurement specs will continue to improve, driven by newly invented technologies and semiconductor process improvements. But, other trends will be just as important. As new generations raised on Apple and Android smartphones start their engineering careers, the industry will give them the latest advances in user interfaces that they have come to expect. We are already seeing test equipment start to adopt touchscreen technologies. This trend will continue as more focus is put on interface design. The latest technologies talked about today, such as haptic feedback, will appear in the instruments of tomorrow. These UI improvements will help engineers better extract the data they need.

As chip integration follows its ever-steady course, bench-top equipment will get smaller. Portable measurement equipment will get lighter and last longer as it leverages low-power mobile chipsets and new battery technologies. And the lines between portable and bench-top equipment will blur, just as laptops have replaced desktops over the last decade. As equipment makers chase higher margins, they will increasingly focus on software to help interpret measurement data. One can imagine a subscription service to a cloud-based platform that provides better insights from the instrument on the bench.

At Aeroscope Labs (www.aeroscope.io), a company I cofounded, we are taking advantage of many broader trends in the electronics market. Our Aeroscope oscilloscope probe is a battery-powered device in a pen-sized form factor that wirelessly syncs to a tablet or phone. It simply could not exist without the amazing advances in the tech sector of the past 10 years. Because of the rise of the Internet of Things (IoT), we have access to many great radio systems on a chip (SoCs) along with corresponding software stacks and drivers. We don’t have to develop a radio from scratch like one would have to do 20 years ago. The ubiquity of smart phones and tablets means that we don’t have to design and build our own display hardware or system software. Likewise, the popularity of portable electronics has pushed the cost of lithium polymer batteries way down. Without these new batteries, the battery life would be mere minutes instead of the multiple hours that we are able to achieve.

Just as with my company, other new companies along with the major players will continue to leverage these broader trends to create exciting new instruments. I’m excited to see what is in store.

Jonathan Ward is cofounder of Aeroscope Labs (www.aeroscope.io), based in Boulder, CO. Aeroscope Labs is developing the world’s first wireless oscilloscope probe. Jonathan has always had a passion for measurement tools and equipment. He started his career at Agilent Technologies (now Keysight) designing high-performance spectrum analyzers. Most recently, Jonathan developed high-volume consumer electronics and portable chemical analysis equipment in the San Francisco Bay Area. In addition to his decade of industry experience, he holds an MS in Electrical Engineering from Columbia University and a BSEE from Case Western Reserve University.

The Future of Robotics Technology

Advancements in technology mean that the dawn of a new era of robotics is upon us. Automation is moving out of the factory and into the real world. As this happens, we will see significant increases in productivity as well as drastic cuts in employment. We have an opportunity to markedly improve the lives of all people. Will we seize it?

For decades, the biggest limitations in robotics were related to computing and perception. Robots couldn’t make sense of their environments and so were fixed to the floor; their movements were precalculated and repetitive. Now, however, we are beginning to see those limitations fall away, leading to a step change in the capabilities of robotic systems. Robots now understand their environment with high fidelity and safely navigate through it.

On the sensing side, we’re seeing multiple order-of-magnitude reductions in the cost of the 3-D sensors used for mapping, obstacle avoidance, and task comprehension. Time-of-flight cameras such as those in the Microsoft Kinect or Google Tango devices are edging their way into the mainstream in high volumes. The LIDAR sensors commonly used on self-driving cars typically cost $60,000 or more just a few years ago. This year at the Consumer Electronics Show (CES), however, two companies, Quanergy and Velodyne, announced new solid-state LIDAR devices that eliminate all moving parts and carry a sub-$500 price point.

Understanding 3-D sensor data is a computationally intensive task, but advancements in general-purpose GPU computing have introduced new ways to quickly process the information. Smartphones are pushing the development of small, powerful processors, and we’re seeing companies like NVIDIA ship low-cost GPU/CPU combos such as the X1 that are ideal for many robotics applications.

To make sense of all this data, we’re seeing significant improvements in software for robotics. The open-source Robot Operating System (ROS), for example, is widely used in industry and, at nine years old, just hit version 2.0. Meanwhile, advances in machine learning mean that computers can now perform many tasks better than humans.

All these advancements mean that robots are moving beyond the factory floor and into the real world. Soon we’ll see a host of problems being solved by robotics. Amazon already uses robots to lower warehousing costs, and several new companies are looking to solve the last-mile delivery problem. Combined with self-driving cars and trucks, this will mean drastic cost reductions for the logistics industry, with a ripple effect that lowers the cost of all goods.

As volumes go up, we will see cost reductions in expensive mechanical components such as motors and linkages. In five years, most of the patents for metal 3-D printers will expire, which will bring on a wave of competition to lower costs for new manufacturing methods.

While many will benefit greatly from these advances, there are worrying implications for others. Truck driver is the most common job in nearly every state, but within a decade those jobs will see drastic cuts. Delivery companies like Amazon Fresh and Google Shopping Express currently rely on fleets of human drivers, as do taxi services Uber and Lyft. It seems reasonable that those companies will move to automated vehicles.

Meanwhile, there are a great number of unskilled jobs that have already reduced workers to near machines. Fast food restaurants, for example, provide clear-cut scripts for workers to follow, eliminating any reliance on human intelligence. It won’t be long before robots are smart enough to do those jobs too. Some people believe new jobs will be created to replace the old ones, but I believe that at some point robots will simply surpass low-skilled workers in capability and become the more desirable laborers. It is my deepest hope that long before that happens, we as a society take a serious look at the way we share the collective wealth of our Earth. Robots should not simply replace workers; they should eliminate the need for humans to work for survival. Robots can so significantly increase productivity that we can eliminate scarcity for all of life’s necessities. In doing so, we can provide all people with wealth and freedom unseen in human history.

Making that happen is technologically simple, but will require significant changes to the way we think about society. We need many new thinkers to generate ideas, and would do well to explore concepts like basic income and the work of philosophers like Karl Marx and Friedrich Engels, among others. The most revolutionary aspect of the change robotics brings will not be the creation of new wealth, but in how it enables access to the wealth we already have.

Taylor Alexander is a multidisciplinary engineer focused on robotics. He is founder of Flutter Wireless and works as a Software Engineer at a secretive robotics startup in Silicon Valley. When he’s not designing for open source, he’s reading about the social and political implications of robotics and writing for his blog at tlalexander.com.

This essay appears in Circuit Cellar 308, March 2016.

The Future of Wireless: Imagination Drives Innovation

Wireless system design is one of the hottest fields in electrical engineering. We recently asked 10 engineers to prognosticate on the future of wireless technology. Alexander Popov, a Bulgaria-based engineer, writes:

These days, we are constantly connected to the Internet. People expect quality service both at home and on the go. Cellular networks are meeting this demand with 4G and upcoming 5G technologies. A single person now uses as much bandwidth as an entire Internet provider did 20 years ago. We are immersed in a pool of information, but we are no longer its sole producers. The era of the Internet of Things is upon us, and soon there will be more IoT devices than there are people. They require quite a different ecosystem than the one we people use. Their pattern of information flow is usually sporadic, with small chunks of data, so connecting to a generic Wi-Fi or cellular network is not efficient. IoT devices utilize well-established protocols like Bluetooth LE and ZigBee, but dedicated ones like LPWAN and 6LoWPAN are also being developed, and probably more will follow. We will see more sophisticated and intelligent wireless networks, probably sharing resources on different layers to form a larger WAN. An important aspect of IoT devices is their source of power. Energy harvesting and wireless power will evolve to become a standard part of the “smart” ecosystem. Improved chip manufacturing processes aid hardware not only by lowering power consumption and reducing size, but also with dedicated embedded communication stacks and on-chip coils. The increased amount and different types of information will allow software technologies like cloud computing and big data analysis to thrive. With information so deep in our personal lives, we may see new security standards offering better protection for our privacy. All these new technologies alone will be valuable, but the possibilities they offer combined are limited only by our imaginations. Best be prepared to explore and sketch your ideas now! — Alexander Popov, Bulgaria (Director Product Management, Minerva Networks)

The Future of Wireless: Global Internet Network

Advances in wireless technologies are driving innovation in virtually every industry, from automobiles to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. Eileen Liu, a software engineer at Lockheed Martin, writes:

Wireless technology has become increasingly prevalent in our daily lives. It has become commonplace to look up information on smartphones via invisible networks and to connect to peripheral devices using Bluetooth connections. So what should we expect to see next in the world of wireless technology? One of the major things to keep an eye on is the effort to build a global Internet network. Facebook and Google are potentially collaborating, working on drones and high-altitude helium balloons with router-like payloads. These solar-powered payloads make a radio link to a telecommunications network on Earth’s surface and broadcast Internet coverage downwards. Elon Musk and Greg Wyler are both working on a different approach, using flotillas of low-orbiting satellites. With such efforts, high-speed Internet access could become possible in the most remote locations on Earth, bringing access to the 60% of the world’s population that currently lacks it. Another technology to look out for is wireless power transfer. This technology allows multiple devices to charge simultaneously without a tether and without a dependency on directionality. Recent developments have mostly been in the realm of mobile phones and laptops, but this could expand to other electronic devices and automobiles that depend on batteries. A third technology to look out for is car-to-car communication. Several companies have been developing autonomous cars that use sensor systems to detect road conditions and surrounding vehicles. These sensors have shown promise but have limited range and field of view and can easily be obstructed. Car-to-car communication allows vehicles to broadcast position, speed, steering-wheel position, and other data to surrounding vehicles within a range of several hundred meters. By networking cars together wirelessly, we could be one step closer to safe autonomous driving. — Eileen Liu, United States (Software Engineer, Lockheed Martin)

The Future of Wireless: Deployment Matters

Each day, wireless technology becomes more pervasive as new electronics systems hit the market and connect to the Internet. We recently asked 10 engineers to prognosticate on the future of wireless technology. Penn State Professor Chris Coulston writes:

With the Internet of Things still the big thing, we should expect exciting developments in embedded wireless in 2016 and beyond. Incremental advances in speed and power consumption will allow manufacturers to brag about having the latest and greatest chip. However, all this potential is lost unless you can deploy it easily. The FTDI FT232 serial-to-USB bridge is a success because it trades off some of the functionality of a complex protocol for a more familiar, less burdensome one. The demand for simplified protocols should drive manufacturers to develop solutions making complex protocols more accessible. Cutting the cord means different things to different people. While Bluetooth Low Energy (BLE) has allowed a wide swath of gadgets to go wireless, these devices still require the presence of some intermediary (like a smartphone) to manage data transfer to the cloud. Expect to see the development of intermediate technologies enabling BLE to “cut the cord” to smartphones. Security of wireless communication will continue to be an important element of any conversation involving new wireless technology. Fortunately, the theoretical tools needed to secure communication are well understood. Expect to see these tools trickle down as standard subsystems in embedded processors. The automotive industry is set to transform itself with self-driving cars. This revolution in transportation must be accompanied by wireless technologies allowing our cars to talk to our devices, each other, and perhaps the roadways. This is an area that is ripe for some surprising and exciting developments, enabling developers to innovate in this new domain. We live in interesting times, with embedded systems playing a large role in consumer and industrial systems. With better and more accessible technology in your grasp, I hope you have a great and innovative 2016! — Chris Coulston, United States (Associate Professor, Electrical & Computer Engineering, Penn State Erie)

The Future of Wireless: IoT “Connect Anywhere” Solutions

Wireless communications have revolutionized virtually every industry, from healthcare to defense to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. France-based engineer Robert Lacoste writes:

I don’t know if the forecasts about the Internet of Things (IoT) are realistic (some analysts predict from 20 to 100 billion devices in the next five years), but I’m sure it will be a huge market. And 99% of IoT products are and will be wireless. Currently, the vast majority of “things” connect to the Internet through a user’s smartphone, which serves as a gateway, typically over a Bluetooth Smart link. Other devices (e.g., home control or smart metering) require the installation of a dedicated fixed RF-to-Internet gateway using ZigBee, 6LoWPAN, or something similar. But the next big thing will be the availability of “connect anywhere” solutions built on low-power wide-area networks, nicknamed LPWA. Even if the underlying technology is not actually new (i.e., using very low bit rates to achieve long range at low power), the contenders are numerous: the LoRa Alliance, Ingenu, Sigfox, Weightless, and a couple of others. At the same time, the traditional telcos are developing very similar solutions using cellular bands and variants of the 3GPP protocols; EC-GSM, LTE-MTC, and NB-IoT are the most discussed alternatives. So, the first big question is this: Which one (or ones, as a one-size-fits-all solution is unlikely) will win? The second big question has to do with whether or not IoT products will be useful for society. But that’s another story! — Robert Lacoste, France (Founder, Alciom; Columnist, Circuit Cellar)

Managing an Open-Source Project

Open-source projects may be one of the greatest things to have happened during these last decades. We use or see them on a daily basis (e.g., Wikipedia, Android, and Linux), and sometimes we can’t imagine our lives without them. They are a great way to learn from experienced individuals, contribute to something bigger than oneself, and be part of a great community. But how do you manage such a project when contributors are not remunerated and are scattered all over the globe? In this short article, I’ll describe my experience managing the Mooltipass Offline Password Keeper project.

Mooltipass is a compact offline encrypted password keeper that remembers your credentials. I launched the project in December 2013 on Hackaday.com, which was the perfect place to promote such an idea and call for contributors. While there was ample interest and an appreciable number of applicants, it rapidly became apparent that people tend to overestimate their spare time and their ability to contribute. Only 40% of all applicants stayed with us until the end of the first stage: agreeing on the tools and conventions to use. After a month, the project infrastructure was based on GitHub (code versioning and management), Dropbox (file exchange), Trello (project management and task assignment), and Google Groups (general and developer discussions).

A sense of community was one of the key aspects that helped us succeed, as contributors were not remunerated. We agreed on a consensus-based decision-making process so that no one person would have too much control. I assigned tasks based on the contributors’ preferences and availability, which kept everyone motivated.

Once the development started, the strict rules we had agreed on were enforced and pull requests were always reviewed. This ensured that contributors could easily come and go as they pleased while reminding them that their code was part of a bigger code base. Feature and aesthetic design decisions were made by the Hackaday readers through online polls, and the name “Mooltipass” came from an avid project follower. We wanted to keep readers constantly involved in our project to make sure the final design would please everyone.

Overall, there were many key elements to our success: visibility, pragmatism, openness, and determination. Launching via an established online community gave us a great start and enabled us to build a strong backing. Individuals of all ages and backgrounds participated in our discussions.

Taking the face-to-face aspect out of project management was tricky. Frank and honest conversations between developers were therefore highly encouraged. And we had to remind participants to not take negative or critical feedback personally. Fortunately, we quickly realized during the project development process that most contributors had exactly the same mindset.

In addition to the project contributors, it was also necessary to manage the general public. Patience was the key. We carefully addressed the many questions and concerns we received. Although some anonymous users had input that wasn’t helpful, on several occasions random people sent in tips that helped improve our code and algorithms. We offered people the opportunity to implement the isolated features they wanted by contributing to our repository, which helped cut many Google Groups discussions short. After all, the entire development process was completely transparent.

Thinking about managing an open-source project of your own? It isn’t for the faint of heart. While running the project, I felt as though I was both a contributor and “benevolent dictator.” Fortunately, my engineering and managerial skills were strong enough to see the project through.

It was heartwarming to see that all 15 developers joined the adventure for the fun of it or to work on a device they wanted to use later on. Only one contributor was let go during the development process due to extremely slow progress. After 1,500 commits, a year of development, a $130,000 crowdfunding campaign, and delivering all units by August 2015, the Mooltipass project was a success. It is a fascinating testament to the efficacy of an open-source, crowdfunded project.

Mathieu Stephan is a Switzerland-based high-speed electronics engineer who develops and manufactures consumer products (www.limpkin.fr). Most of his projects are in the domotics domain, which is where he feels he can help people the most. Mathieu documents his open-source creations on his website in an effort to encourage others to get involved in the electronics world and share their work. He holds a BS in Electrical Engineering and Computer Science from ESIEE Paris and an MS in Informatics from EPFL in Switzerland.

This essay appears in Circuit Cellar 306, January 2016.

The Future of Hardware Design

The future of hardware design is in the cloud. Many companies are already focused on the Internet of Things (IoT) and creating hardware to be interconnected in the cloud. However, can we get to a point where we build hardware itself in the cloud?

Traditional methods of building hardware in the cloud recall the large industry of EDA software packages—board layouts, 3-D circuit assemblies, and chip design. It’s arguable that this industry emphasizes mechanical design, focusing on intricate chip placement, 3-D space, and connections. There are also cloud-based SPICE simulators for electronics—a less-than-user-friendly experience with limited libraries of generic parts. Simulators that do have larger libraries also tend to have a larger associated cost. Finding exact parts can be a frustrating experience: a SPICE transistor typically does not carry a BOM part number, so turning a working design into purchasable hardware becomes a sourcing hunt among several vendors’ offerings.

Figure: 123D Circuits with WiFi module

What if I want to create real hardware in the cloud and build a project like those in Circuit Cellar articles? This is where I see the innovation that is changing the future of how we make electronics. We now have cloud platforms that provide you with the experience of using actual parts from vendors and interfacing them with a microcontroller. If you’re actually building a project, you need component libraries that include servo motors, IR remotes with buttons, LCDs, buzzers with sound, and accelerometers. Definitive parts carried by vendors, not just generic ICs, are crucial. Ask any design engineer: they have their typical parts that they reuse and trust in every design. They need to verify that these parts move and work, so an online platform with these parts allows for a real-world simulation.

An Arduino IDE that allows for real-time debugging and stepping through code in the cloud is powerful. Advanced microcontroller IDEs do not include external components in their simulators or environments. A platform that can interconnect a controller with external components in simulation mirrors real life more closely than anything else. As computer processing power continues to rise, the same approach may extend to other, more complex MCUs.
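To make that concrete, here is a minimal Arduino-style sketch of the kind you might wire up and single-step in such a simulator. The pin assignments and parts (a pushbutton, the on-board LED, and a hobby servo) are illustrative assumptions rather than any particular reference design:

#include <Servo.h>

const int BUTTON_PIN = 2;   // momentary pushbutton to ground, using the internal pull-up
const int LED_PIN    = 13;  // on-board LED on most Arduino boards
const int SERVO_PIN  = 9;   // PWM-capable pin driving the servo signal line

Servo servo;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(LED_PIN, OUTPUT);
  servo.attach(SERVO_PIN);
}

void loop() {
  // The button reads LOW when pressed because of the pull-up.
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  digitalWrite(LED_PIN, pressed ? HIGH : LOW);

  if (pressed) {
    // Sweep the servo so the simulated part has visible motion to verify.
    for (int angle = 0; angle <= 180; angle += 10) {
      servo.write(angle);
      delay(50);  // a natural place to pause when single-stepping
    }
  }
}

In a simulator that models the parts themselves, you can set a breakpoint inside the loop, press the virtual button, and watch the servo respond—exactly the controller-plus-peripheral interaction that a code-only simulator cannot show.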

Most hardware designers are unaware of the newest cloud offerings or have not worked with a platform enough to evaluate it as a game-changer. But imagine if new electronics makers and existing engineers could learn and innovate in the cloud, for free, without any hardware.

I remember spending considerable time working on circuit boards to learn the hardware “maker” side of electronics. I would typically start with a breadboard to build basic circuits. Afterward, I would migrate the design to a protoboard to build a smaller, more robust circuit that could be soldered together. Several confidence-building projects later, I jumped to designing and producing PCBs, which eventually led to an entirely different level in the semiconductor industry. Once the boards were designed, all the motors, sensors, and external parts could be assembled onto the board for testing.

Traditionally, an assembled PCB was needed to run the hardware design—to test it for reliability, to program it, and to verify it works as desired. Parts could be implemented separately, but in the end, a final assembled design was required for software testing, peripheral integration, and quality testing. Imagine how different this is with a hardware simulation. The quality aspect will always be tied to actual hardware testing, but the design phase is definitely undergoing disruption. A user can simply modify and test until the design works to their liking, and then move straight to a PCB, however many online design failures it took to get there, all without consequence.

With an online simulation platform, aspiring engineers can now have experiences different from my traditional one. They don’t need labs or breadboards to blink LEDs. The cloud equalizes access to technology regardless of background. Hardware designs can flow like software. Instead of sending electronics kits to countries with importation issues, hardware designs can be shared online, where people can toggle buttons and user-test them. Students do not have to buy expensive hardware, batteries, or anything more than a computer.

An online simulation platform also affects the design cycle. Hardware design cycles can be fast when needed, but hardware is not like software. Merging the two worlds, however, means thousands of people can access a design and provide feedback overnight, just like a Facebook update. Changes to a design can be made instantly and deployed at the same time—an unheard-of cycle time in hardware. That is software’s contribution to the traditional hardware process.

There are other possibilities for hardware simulation on the end-product side of the market. For instance, crowdfunding websites have become popular destinations for funding projects. But should we trust a simple video of a working prototype and buy the hardware ahead of production? Why can’t we play with the real hardware online? With an online simulation of the actual hardware, creators can invest even less in prototype hardware, and potential customers can experience the end product, built on a real electronic design, in a virtual environment.

Subtle changes tend to build up and then avalanche into dramatic shifts in how industries operate. Seeing the early signs—realizing something should be simpler—allows you to ask questions and determine where market gaps exist. Hardware simulation in the cloud will change the future of electronics design, and it will provide a great platform for showcasing your designs and teaching others about the industry.

John Young is the Product Marketing Manager for Autodesk’s 123D Circuits (https://123d.circuits.io/) focusing on building a free online simulator for electronics. He has a semiconductor background in designing products—from R&D to market launch for Freescale and Renesas. His passion is finding the right market segment and building new/revamped products. He holds a BSEE from Florida Atlantic University, an MBA from the Thunderbird School of Global Management and is pursuing a project management certification from Stanford.

The Future of Circuit Design

The cloud is changing the way we build circuits. In the near future we won’t make our own symbols, lay out our own traces, review our own work, or even talk to our manufacturers. We are moving from a world of desktop, offline, email-based engineering into a bold new world powered by collaborative tools and the cloud.

I know that’s a strong statement, so let me try to explain. I think a lot about how we work as engineers. How our days are filled, how we go about our tasks, and how we accomplish our missions. But also how it’s all changing, what the future of our work looks like, and how the cloud, outsourcing, and collaboration are changing everything.

For the past five years I’ve been a pioneer. I started the first company to attempt to build a fully cloud-based circuit design tool. That was years before anyone else even thought it was possible. It was before Google Docs went mainstream, and before GitHub became the center of the software universe. I didn’t build it because I have some love affair with the cloud (though I do now), or because deep down inside I wanted to make CAD software (eek!). I did it because I believed in a future of work that required collaboration.

So how does it work? Well, instead of double-clicking an icon on your desktop, you open your web browser and navigate to upverter.com. Then, instead of opening a file on your hard drive, you open one of your designs stored in the cloud. It loads, looks, and feels exactly like your existing design tools. You make your changes and it automatically saves a new version; you work some more, and ultimately export your Gerber files in exactly the same way as you would with a desktop tool.

The biggest difference is that instead of working alone, creating every symbol yourself, or emailing files around, you are part of an ecosystem. You can request parts, and invite your teammates or your manufacturer to participate in the design. They can make comments and recommendations—right there in the editor. You can share your design by emailing a URL. You can check part inventory and pricing in real time. You get notified when your colleagues do work, when changes get made, and when parts get updated. It feels a lot like how it’s supposed to work. And maybe best of all, it’s cheaper too.

Let me dispel a few myths.

The cloud is insecure: Of course it is. Almost every system has a flaw. But what you should ask about instead is relative security. Is the cloud any less secure than your desktop? The answer shouldn’t surprise you: the cloud is about 10× MORE secure than your office desktop (let alone your phone or laptop). It turns out that when companies employ people whose job is to worry about security, they do a better job than the IT guys at your office park.

The cloud is slow: Not true. Web browsers have gotten so fast over the past decade that today compiled C code is only about 3× faster than JavaScript. In that same time your computer got 5× faster than it used to be, and that desktop software you’re running was written in the ’90s (that’s a bad thing). And there is more compute power available to the cloud than anywhere else on Earth. All of which adds up to most cloud apps actually running faster than the desktop apps they replace.

Collaboration is for teams: True. But even if you feel like you’re on a team of one, no one really works alone these days. You order parts from a vendor, someone else did your reference design, and you don’t manufacture your boards yourself. There could be as many as a dozen people supporting you without you even realizing it. Imagine if they had the full context of what you’re building. Imagine if you could truly collaborate instead of making do with emails and phone calls.

I believe the future of hardware design, and the future of circuits, is in the cloud. I believe that working together is such a superpower that everyone will have to do it. It will change the way we work, the way we engineer, and the way we ship product. Hardware designed in the future is hardware designed in the cloud.

Zak Homuth is the CEO and co-founder of Upverter, as well as a Y Combinator alumnus. At Upverter, Zak has overseen product development and design from the beginning, including the design toolchain, collaborative community, and on-demand simulators. Improving the rate of innovation in hardware engineering, including introducing collaboration and sharing, has been one of his central interests for almost a decade, stemming from his time as a hardware engineer working on telecommunication hardware. Prior to Upverter, Zak founded an electronics manufacturing service and served as the company’s CEO. Before that, he founded a consulting company that provided software and hardware services. Zak has worked for IBM, Infosys, and Sandvine and attended the University of Waterloo, where he studied Computer Engineering before taking a leave of absence.