The Future of Electronic Measurement Systems

Trends in test and measurement systems follow broader technological trends. A measurement device’s fundamental purpose is to translate a measurable quantity into something that can be discerned by a human. As such, the display technology of the day informed much of the design and performance limitations of early electronic measurement systems. Analog meters, cathode ray tubes, and paper strip recorder systems dominated. Measurement hardware could be incredibly innovative, but such equipment could only be as good as its ability to display the measurement result to the user. Early analog multimeters could only be as accurate as a person’s ability to read which dash mark the needle pointed to.

In the early days, the broader electronics market was still in its infancy and didn’t offer much from which to draw. Test equipment manufacturers developed almost everything in house, including display technology. In its heyday, Tektronix even manufactured its own cathode ray tubes. As the nascent electronics market matured, measurement equipment evolved to leverage the advances being made. Display technology stopped being such an integral piece. No longer shackled with the burden of developing everything in house, equipment makers were able to develop instruments faster and focus more on the measurement elements alone. Advances in digital electronics made digital oscilloscopes practical. Faster and cheaper processors and larger memories (and faster ADCs to fill them) then led to digital oscilloscopes dominating the market. Soon, test equipment was influenced by the rise of the PC and even began running consumer-grade operating systems.

Measurement systems of the future will continue to follow this trend and adopt advances made by the broader tech sector. Of course, measurement specs will continue to improve, driven by newly invented technologies and semiconductor process improvements. But, other trends will be just as important. As new generations raised on Apple and Android smartphones start their engineering careers, the industry will give them the latest advances in user interfaces that they have come to expect. We are already seeing test equipment start to adopt touchscreen technologies. This trend will continue as more focus is put on interface design. The latest technologies talked about today, such as haptic feedback, will appear in the instruments of tomorrow. These UI improvements will help engineers better extract the data they need.

As chip integration follows its ever steady course, bench-top equipment will get smaller. Portable measurement equipment will get lighter and last longer as it leverages low-power mobile chipsets and new battery technologies. And the lines between portable and bench-top equipment will blur just as laptops have replaced desktops over the last decade. As equipment makers chase higher margins, they will increasingly focus on software that helps interpret measurement data. One can imagine a subscription service to a cloud-based platform that provides better insights from the instrument on the bench.

At Aeroscope Labs (www.aeroscope.io), a company I cofounded, we are taking advantage of many broader trends in the electronics market. Our Aeroscope oscilloscope probe is a battery-powered device in a pen-sized form factor that wirelessly syncs to a tablet or phone. It simply could not exist without the amazing advances in the tech sector of the past 10 years. Because of the rise of the Internet of Things (IoT), we have access to many great radio systems on a chip (SoCs) along with corresponding software stacks and drivers. We don’t have to develop a radio from scratch like one would have to do 20 years ago. The ubiquity of smartphones and tablets means that we don’t have to design and build our own display hardware or system software. Likewise, the popularity of portable electronics has pushed the cost of lithium polymer batteries way down. Without these new batteries, the battery life would be mere minutes instead of the multiple hours that we are able to achieve.

Just as with my company, other new companies along with the major players will continue to leverage these broader trends to create exciting new instruments. I’m excited to see what is in store.

Jonathan Ward is cofounder of Aeroscope Labs (www.aeroscope.io), based in Boulder, CO. Aeroscope Labs is developing the world’s first wireless oscilloscope probe. Jonathan has always had a passion for measurement tools and equipment. He started his career at Agilent Technologies (now Keysight) designing high-performance spectrum analyzers. Most recently, Jonathan developed high-volume consumer electronics and portable chemical analysis equipment in the San Francisco Bay Area. In addition to his decade of industry experience, he holds an MS in Electrical Engineering from Columbia University and a BSEE from Case Western Reserve University.

The Future of Robotics Technology

Advancements in technology mean that the dawn of a new era of robotics is upon us. Automation is moving out of the factory and into the real world. As this happens, we will see significant increases in productivity as well as drastic cuts in employment. We have an opportunity to markedly improve the lives of all people. Will we seize it?

For decades, the biggest limitations in robotics were related to computing and perception. Robots couldn’t make sense of their environments and so were fixed to the floor. Their movements were precalculated and repetitive. Now, however, we are beginning to see those limitations fall away, leading to a step-change in the capabilities of robotic systems. Robots can now understand their environments with high fidelity and safely navigate through them.

On the sensing side, we’re seeing the cost of 3-D sensors used for mapping, obstacle avoidance, and task comprehension fall by multiple orders of magnitude. Time-of-flight cameras such as those in the Microsoft Kinect or Google Tango devices are edging their way into the mainstream in high volumes. LIDAR sensors commonly used on self-driving cars were typically $60,000 or more just a few years ago. This year at the Consumer Electronics Show (CES), however, two companies, Quanergy and Velodyne, announced new solid-state LIDAR devices that eliminate all moving parts and carry a sub-$500 price point.

Understanding 3-D sensor data is a computationally intensive task, but advancements in general purpose GPU computing have introduced new ways to quickly process the information. Smartphones are pushing the development of small, powerful processors, and we’re seeing companies like NVIDIA shipping low cost GPU/CPU combos such as the X1 that are ideal for many robotics applications.

To make sense of all this data, we’re seeing significant improvements in software for robotics. The open-source Robot Operating System (ROS), for example, is widely used in industry and, at nine years old, just hit version 2.0. Meanwhile, advances in machine learning mean that computers can now perform many tasks better than humans.

All these advancements mean that robots are moving beyond the factory floor and into the real world. Soon we’ll see a litany of problems being solved by robotics. Amazon already uses robots to lower warehousing costs, and several new companies are looking to solve the last-mile delivery problem. Combined with self-driving cars and trucks, this will mean drastic cost reductions for the logistics industry, with a ripple effect that lowers the cost of all goods.

As volumes go up, we will see cost reductions in expensive mechanical components such as motors and linkages. In five years, most of the patents for metal 3-D printers will expire, which will bring on a wave of competition to lower costs for new manufacturing methods.

While many will benefit greatly from these advances, there are worrying implications for others. Truck driving is the most common job in nearly every U.S. state, but within a decade those jobs will see drastic cuts. Delivery companies like Amazon Fresh and Google Shopping Express currently rely on fleets of human drivers, as do taxi services Uber and Lyft. It seems reasonable that those companies will move to automated vehicles.

Meanwhile, there are a great number of unskilled jobs that have already reduced workers to near machines. Fast food restaurants, for example, provide clear-cut scripts for workers to follow, eliminating any reliance on human intelligence. It won’t be long before robots are smart enough to do those jobs too. Some people believe new jobs will be created to replace the old ones, but I believe that at some point robots will simply surpass low-skilled workers in capability and become more desirable laborers. It is my deepest hope that long before that happens, we as a society take a serious look at the way we share the collective wealth of our Earth. Robots should not simply replace workers, but eliminate the need for humans to work for survival. Robots can so significantly increase productivity that we can eliminate scarcity for all of life’s necessities. In doing so, we can provide all people with wealth and freedom unseen in human history.

Making that happen is technologically simple, but will require significant changes to the way we think about society. We need many new thinkers to generate ideas, and would do well to explore concepts like basic income and the work of philosophers like Karl Marx and Friedrich Engels, among others. The most revolutionary aspect of the change robotics brings will not be the creation of new wealth, but in how it enables access to the wealth we already have.

Taylor Alexander is a multidisciplinary engineer focused on robotics. He is founder of Flutter Wireless and works as a Software Engineer at a secretive robotics startup in Silicon Valley. When he’s not designing for open source, he’s reading about the social and political implications of robotics and writing for his blog at tlalexander.com.

This essay appears in Circuit Cellar 308, March 2016.

The Future of Wireless: Imagination Drives Innovation

Wireless system design is one of the hottest fields in electrical engineering. We recently asked 10 engineers to prognosticate on the future of wireless technology. Alexander Popov, a Bulgaria-based engineer, writes:

These days, we are constantly connected to the Internet. People expect quality service both at home and on the go. Cellular networks are meeting this demand with 4G and upcoming 5G technologies. A single person now uses as much bandwidth as an entire Internet provider did 20 years ago. We are immersed in a pool of information, but are no longer its sole producers. The era of the Internet of Things is upon us, and soon there will be more IoT devices than there are people. They require quite a different ecosystem than the one we people use. Their pattern of information flow is usually sporadic, with small chunks of data. Connecting to a generic Wi-Fi or cellular network is not efficient. IoT devices utilize well-established protocols like Bluetooth LE and ZigBee, but dedicated ones like LPWAN and 6LoWPAN are also being developed, and probably more will follow. We will see more sophisticated and intelligent wireless networks, probably sharing resources on different layers to form a larger WAN. An important aspect of IoT devices is their source of power. Energy harvesting and wireless power will evolve to become a standard part of the “smart” ecosystem. Improved chip manufacturing processes aid hardware not only by lowering power consumption and reducing size, but also with dedicated embedded communication stacks and chip coils. The increased amount and different types of information will allow software technologies like cloud computing and big data analysis to thrive. With information so deep in our personal lives, we may see new security standards offering better protection for our privacy. All these new technologies alone will be valuable, but the possibilities they offer combined are only limited by our imaginations. Best be prepared to explore and sketch your ideas now! — Alexander Popov, Bulgaria (Director Product Management, Minerva Networks)

The Future of Wireless: Global Internet Network

Advances in wireless technologies are driving innovation in virtually every industry, from automobiles to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. Eileen Liu, a software engineer at Lockheed Martin, writes:

Wireless technology has become increasingly prevalent in our daily lives. It has become commonplace to look up information on smartphones via invisible networks and to connect to peripheral devices using Bluetooth connections. So what should we expect to see next in the world of wireless technology? One of the major things to keep an eye on is the effort for a global Internet network. Facebook and Google are potentially collaborating, working on drones and high-altitude helium balloons with router-like payloads. These solar-powered payloads make a radio link to a telecommunications network on Earth’s surface and broadcast Internet coverage downwards. Elon Musk and Greg Wyler are both working on a different approach, using flotillas of low-orbiting satellites. With such efforts, high-speed Internet access could become possible for the most remote locations on Earth, bringing access to the 60% of the world’s population that currently do not have access. Another technology to look out for is wireless power transfer. This technology allows multiple devices to charge simultaneously without a tether and without a dependency on directionality. Recent developments have mostly been in the realm of mobile phones and laptops, but this could expand to other electronic devices and automobiles that depend on batteries. A third technology to look out for is car-to-car communications. Several companies have been developing autonomous cars, using sensor systems to detect road conditions and surrounding vehicles. These sensors have shown promise, but have limited range and field-of-view and can easily be obstructed. Car-to-car communications allow vehicles to broadcast position, speed, steering-wheel position, and other data to surrounding vehicles with a range of several hundred meters. By networking cars together wirelessly, we could be one step closer to safe autonomous driving. — Eileen Liu, United States (Software Engineer, Lockheed Martin)

The Future of Wireless: Deployment Matters

Each day, wireless technology becomes more pervasive as new electronics systems hit the market and connect to the Internet. We recently asked 10 engineers to prognosticate on the future of wireless technology. Penn State Professor Chris Coulston writes:

With the Internet of Things still the big thing, we should expect exciting developments in embedded wireless in 2016 and beyond. Incremental advances in speed and power consumption will allow manufacturers to brag about having the latest and greatest chip. However, all this potential is lost unless you can deploy it easily. The FTDI FT232 serial-to-USB bridge is a success because it trades off some of the functionality of a complex protocol for a more familiar, less burdensome protocol. The demand for simplified protocols should drive manufacturers to develop solutions making complex protocols more accessible. Cutting the cord means different things to different people. While Bluetooth Low Energy (BLE) has allowed a wide swath of gadgets to go wireless, these devices still require the presence of some intermediary (like a smartphone) to manage data transfer to the cloud. Expect to see the development of intermediate technologies enabling BLE to “cut the cord” to smartphones. Security of wireless communication will continue to be an important element of any conversation involving new wireless technology. Fortunately, the theoretical tools needed to secure communication are well understood. Expect to see these tools trickle down as standard subsystems in embedded processors. The automotive industry is set to transform itself with self-driving cars. This revolution in transportation must be accompanied by wireless technologies allowing our cars to talk to our devices, each other, and perhaps the roadways. This is an area that is ripe for some surprising and exciting developments enabling developers to innovate in this new domain. We live in interesting times, with embedded systems playing a large role in consumer and industrial systems. With better and more accessible technology in your grasp, I hope that you have a great and innovative 2016! — Chris Coulston, United States (Associate Professor, Electrical & Computer Engineering, Penn State Erie)

The Future of Wireless: IoT “Connect Anywhere” Solutions

Wireless communications have revolutionized virtually every industry, from healthcare to defense to consumer electronics. We recently asked 10 engineers to prognosticate on the future of wireless technology. France-based engineer Robert Lacoste writes:

I don’t know if the forecasts about the Internet of Things (IoT) are realistic (some analysts predict from 20 to 100 billion devices in the next five years), but I’m sure it will be a huge market. And 99% of IoT products are and will be wireless. Currently, the vast majority of “things” connect to the Internet through a user’s smartphone, used as a gateway, typically through a Bluetooth Smart link. Other devices (e.g., home control or smart metering) require the installation of a dedicated fixed RF-to-Internet gateway using ZigBee, 6LoWPAN, or something similar. But the next big thing will be the availability of “connect anywhere” solutions through low-power wide area networks, nicknamed LPWA. Even if the underlying technology is not actually new (i.e., using very low bit rates to achieve long range at low power), the contenders are numerous: the LoRa Alliance, Ingenu, Sigfox, Weightless, and a couple of others. At the same time, the traditional telcos are developing very similar solutions using cellular bands and variants of the 3GPP protocols. EC-GSM, LTE-MTC, and NB-IoT are the most discussed alternatives. So, the first big question is this: Which one (or ones, as a one-size-fits-all solution is unlikely) will be the winner? The second big question has to do with whether or not IoT products will be useful for society. But that’s another story! — Robert Lacoste, France (Founder, Alciom; Columnist, Circuit Cellar)

Managing an Open-Source Project

Open-source projects may be one of the greatest things that have happened during these last decades. We use or see them on a daily basis (e.g., Wikipedia, Android, and Linux), and sometimes we can’t imagine our lives without them. They are a great way to learn from experienced individuals, contribute to something bigger than oneself, and be part of a great community. But how do you manage such a project when contributors are not remunerated and scattered all over the globe? In this short article, I’ll describe my experience managing the Mooltipass Offline Password Keeper project.

Mooltipass is a compact offline encrypted password keeper that remembers your credentials. I launched the project in December 2013 on Hackaday.com, which was the perfect place to promote such an idea and call for contributors. While there was ample interest and an appreciable number of applicants, it rapidly became apparent that people tend to overestimate their spare time and their ability to contribute. Only 40% of all applicants stayed with us through the end of the first stage: agreeing on the tools and conventions to use. After a month, the project infrastructure was based on GitHub (code versioning and management), Dropbox (file exchange), Trello (project management and task assignment), and Google Groups (general and developer discussions).

A sense of community was one of the key aspects that helped us succeed, as contributors were not remunerated. We agreed on a consensus-based decision-making process so that no one person would have too much control. I assigned tasks based on the contributors’ preferences and availabilities, which kept everyone motivated.

Once the development started, the strict rules we had agreed on were enforced and pull requests were always reviewed. This ensured that contributors could easily come and go as they pleased while reminding them that their code was part of a bigger code base. Feature and aesthetic design decisions were made by the Hackaday readers through online polls, and the name “Mooltipass” came from an avid project follower. We wanted to keep readers constantly involved in our project to make sure the final design would please everyone.

Overall, there were many key elements to our success: visibility, pragmatism, openness, and determination. Launching via an established online community gave us a great start and enabled us to build a strong backing. Individuals of all ages and backgrounds participated in our discussions.

Taking the face-to-face aspect out of project management was tricky. Frank and honest conversations between developers were therefore highly encouraged. And we had to remind participants to not take negative or critical feedback personally. Fortunately, we quickly realized during the project development process that most contributors had exactly the same mindset.

In addition to the project contributors, it was also necessary to manage the general public. Patience was the key. We carefully addressed the many questions and concerns we received. Although several anonymous users had input that wasn’t helpful, on several occasions random people sent in tips that helped to improve our code and algorithms. We offered people the opportunity to implement the isolated features they wanted by contributing to our repository, which helped cut many Google group discussions short. After all, the entire development process was completely transparent.

Thinking about managing an open-source project of your own? It isn’t for the faint of heart. While running the project, I felt as though I was both a contributor and “benevolent dictator.” Fortunately, my engineering and managerial skills were strong enough to see the project through.

It was heartwarming to see that all 15 developers joined the adventure for the fun of it or to work on a device they wanted to use later on. Only one contributor was let go during the development process due to extremely slow progress. After 1,500 commits, a year of development, a $130,000 crowdfunding campaign, and delivering all units by August 2015, the Mooltipass project was a success. It is a fascinating testament to the efficacy of an open-source, crowdfunded project.

Mathieu Stephan is a Switzerland-based high-speed electronics engineer who develops and manufactures consumer products (www.limpkin.fr). Most of his projects are in the domotics domain, which is where he feels he can help people the most. Mathieu documents his open-source creations on his website in an effort to encourage others to get involved in the electronics world and share their work. He holds a BS in Electrical Engineering and Computer Science from ESIEE Paris and an MS in Informatics from EPFL in Switzerland.

This essay appears in Circuit Cellar 306, January 2016.

The Future of Hardware Design

The future of hardware design is in the cloud. Many companies are already focused on the Internet of Things (IoT) and creating hardware to be interconnected in the cloud. However, can we get to a point where we build hardware itself in the cloud?

Talk of building hardware in the cloud recalls the large industry of EDA software packages—board layouts, 3-D circuit assemblies, and chip design. It’s arguable that this industry emphasizes mechanical design, focusing on intricate chip placement, 3-D space, and connections. There are also cloud-based SPICE simulators for electronics—a less-than-user-friendly experience with limited libraries of generic parts. Simulators that do have a larger library also tend to have a larger associated cost. Finding exact parts can be a frustrating experience. A SPICE transistor typically does not have a BOM part number, so turning a working design into real hardware becomes a sourcing hunt among several vendor offerings.

What if I want to create real hardware in the cloud, and build a project like those in Circuit Cellar articles? This is where I see the innovation that is changing the future of how we make electronics. We now have cloud platforms that provide you with the experience of using actual parts from vendors and interfacing them with a microcontroller. Component lists including servo motors, IR remotes with buttons, LCDs, buzzers with sound, and accelerometers are needed if you’re actually building a project. Definitive parts carried by vendors and not just generic ICs are crucial. Ask any design engineer—they have their typical parts that they reuse and trust in every design. They need to verify that these parts move and work, so having an online platform with these parts allows for a real world simulation.

An Arduino IDE that allows for real-time debugging and stepping through code in the cloud is powerful. Advanced microcontroller IDEs do not have external components in their simulators or environment. A platform that can interconnect a controller with external components in simulation mirrors real life closer than anything else. As computer processing power continues to rise, many opportunities may be realized in the future with other, more complex MCUs.

Most hardware designers are unaware of the newest cloud offerings or have not worked with a platform enough to evaluate it as a game-changer. But imagine if new electronics makers and existing engineers could learn and innovate for free in the cloud, without any hardware.

I remember spending considerable time working on circuit boards to learn the hardware “maker” side of electronics. I would typically start with a breadboard to build basic circuits. Afterwards, I would migrate the design to a protoboard to build a smaller, more robust circuit that could be soldered together. Several confident projects later, I jumped to designing and producing PCBs, which eventually led me to an entirely different level in the semiconductor industry. Once the boards were designed, all the motors, sensors, and external parts could be assembled to the board for testing.

Traditionally, an assembled PCB was needed to run the hardware design—to test it for reliability, to program it, and to verify it works as desired. Parts could be implemented separately, but in the end, a final assembled design was required for software testing, peripheral integration, and quality testing. Imagine how different this is with a hardware simulation. The quality aspect will always be tied to actual hardware testing, but the design phase is definitely undergoing disruption. A user can simply modify and test until the design works to their liking—failing as many times as needed online, without consequence—and then move straight to a PCB.

With an online simulation platform, aspiring engineers can now have experiences different from my traditional one. They don’t need labs or breadboards to blink LEDs. The cloud equalizes access to technology regardless of background. Hardware designs can flow like software. Instead of sending electronics kits to countries with importation issues, hardware designs can be shared online so people can toggle buttons and user-test them. Students do not have to buy expensive hardware, batteries, or anything more than a computer.

An online simulation platform also affects the design cycle. Hardware design cycles can be fast when needed, but never as fast as software’s. Merging the two sides, though, means thousands of people can access a design and provide feedback overnight, just like a Facebook update. Changes to a design can be made instantly and deployed at the same time—an unheard-of cycle time. That’s software’s contribution to the traditional hardware cycle.

There are other possibilities for hardware simulation on the end-product side of the market. For instance, crowdfunding websites have become popular destinations for funding projects. But should we trust a simple video of a working prototype and then buy the hardware ahead of production? Why can’t we play with the real hardware online? With an online simulation of actual hardware, even less needs to be invested in hardware costs, and in the virtual environment, potential customers can experience the end product built on a real electronic design.

Subtle changes tend to build up and then avalanche to make dramatic changes in how industries operate. Seeing the early signs—realizing something should be simpler—allows you to ask questions and determine where market gaps exist. Hardware simulation in the cloud will change the future of electronics design, and it will provide a great platform for showcasing your designs and teaching others about the industry.

John Young is the Product Marketing Manager for Autodesk’s 123D Circuits (https://123d.circuits.io/) focusing on building a free online simulator for electronics. He has a semiconductor background in designing products—from R&D to market launch for Freescale and Renesas. His passion is finding the right market segment and building new/revamped products. He holds a BSEE from Florida Atlantic University, an MBA from the Thunderbird School of Global Management and is pursuing a project management certification from Stanford.

The Future of Circuit Design

The cloud is changing the way we build circuits. In the near future we won’t make our own symbols, lay out our own traces, review our own work, or even talk to our manufacturers. We are moving from a world of desktop, offline, email-based engineering into a bold new world powered by collaborative tools and the cloud.

I know that’s a strong statement, so let me try to explain. I think a lot about how we work as engineers. How our days are filled, how we go about our tasks, and how we accomplish our missions. But also how it’s all changing, what the future of our work looks like, and how the cloud, outsourcing, and collaboration are changing everything.

For the past five years I’ve been a pioneer. I started the first company to attempt to build a fully cloud-based circuit design tool, years before anyone else even thought it was possible. It was before Google Docs went mainstream, and before GitHub became the center of the software universe. I didn’t build it because I have some love affair with the cloud (though I do now), or because deep down inside I wanted to make CAD software (eek!). I did it because I believed in a future of work that required collaboration.

So how does it work? Well, instead of double-clicking an icon on your desktop, you open your web browser and navigate to upverter.com. Then, instead of opening a file on your hard drive, you open one of your designs stored in the cloud. It loads, looks, and feels exactly the same as your existing design tools. You make your changes, and it automatically saves a new version; you work some more, and ultimately export your Gerber files in exactly the same way as you would with a desktop tool.

The biggest difference is that instead of working alone, instead of creating every symbol yourself, or emailing files, you are part of an ecosystem. You can request parts, and invite your teammates or your manufacturer to participate in the design. They can make comments and recommendations—right there in the editor. You can share your design by emailing a URL. You can check part inventory and pricing in real time. You get notified when your colleagues do work, when changes get made, and when parts get updated. It feels a lot like how it’s supposed to work, and maybe best of all, it’s cheaper too.

Let me dispel a few myths.

The cloud is insecure: Of course it is. Almost every system has a flaw. But what you need to ask about instead is relative security. Is the cloud any less secure than your desktop? The answer shouldn’t surprise you. The cloud is about 10× MORE secure than your office desktop (let alone your phone or laptop). It turns out that when companies employ people to worry about security, they do a better job than the IT guys at your office park.

The cloud is slow: Not true. Web browsers have gotten so fast over the past decade that today compiled C code is only 3× faster than JavaScript. In that same time your computer got 5× faster than it used to be, and that desktop software you’re running was written in the ’90s (that’s a bad thing). And there is more compute power available to the cloud than anywhere else on Earth. All of which adds up to most cloud apps actually running faster than the desktop apps they replace.

Collaboration is for teams: True. But even if you feel like you’re on a team of one, no one really works alone these days. You order parts from a vendor, someone else did your reference design, and you don’t manufacture your boards yourself. There could be as many as a dozen people supporting you whom you don’t even realize are there. Imagine if they had the full context of what you’re building. Imagine if you could truly collaborate instead of putting up with emails and phone calls.

I believe the future of hardware design, and the future of circuits, is in the cloud. I believe that working together is such a superpower that everyone will have to do it. It will change the way we work, the way we engineer, and the way we ship product. Hardware designed in the future is hardware designed in the cloud.

Zak Homuth is the CEO and co-founder of Upverter, as well as a Y Combinator alumnus. At Upverter, Zak has overseen product development and design from the beginning, including the design toolchain, collaborative community, and on-demand simulators. Improving the rate of innovation in hardware engineering, including introducing collaboration and sharing, has been one of his central interests for almost a decade, stemming from his time as a hardware engineer working on telecommunication hardware. Prior to Upverter, Zak founded an electronics manufacturing service, and served as the company’s CEO. Before that, he founded a consulting company, which provided software and hardware services. Zak has worked for IBM, Infosys, and Sandvine and attended the University of Waterloo, where he studied Computer Engineering before taking a leave of absence.

The Future of Commodity Hardware Security and Your Data

The emergence of the smartphone industry has enabled the commodity hardware market to expand at an astonishing rate. Providers are creating cheap, compact, and widely compatible hardware, which bring about underestimated and unexplored security vulnerabilities. Often, this hardware is coupled with back end and front end software designed to handle data-sensitive applications such as mobile point-of-sale, home security, and health and fitness, among others. Given the personal data passed through these hardware devices and the infancy of much of the market, potential security holes are a unique and growing concern. Hardware providers face many challenges when dealing with these security vulnerabilities, foremost among them being distribution and consequent deprecation issues, and the battle of cost versus security.

The encryption chip for the Square Reader, a commodity hardware device, is located in the bottom right hand corner instead of on the magnetic head. This drastically reduces the cost of the device.

An important part of designing a hardware device is being prepared for a straightforward hardware deprecation. However, this can be a thorn in a provider’s side, especially when dealing with widespread production. These companies create on the order of millions of copies of each revision of their hardware. If the hardware has a critical security vulnerability post-distribution, the provider must develop a way to not only deprecate the revision, but also fix the problem and distribute the fix to their customers. A hardware security vulnerability can be very detrimental to companies unless a clever solution through companion software is possible to patch the issue and avoid a hardware recall. Failing that, products may require a full recall, which can be messy and ineffective unless the provider has a way to prevent future, malicious use of the insecure previous revision.

Many hardware providers have begun opting out of conventional product payments and have instead turned to subscription or use-based payments. Hence, the provider may charge low prices for the actual hardware, but still maintain high yields, typically through back end or front end companion software. For example, Arlo creates a home security camera with a feature that allows users to save videos through their cloud service and view the videos on their smartphone. The price of the camera (their hardware) is mid-range when measured against their competitors, but they charge a monthly fee for extra cloud storage. This enables Arlo to have a continual source of income beyond their hardware product. The hardware can be seen as a hook to a more stable source of income, so long as consumers continue to use their products. For this reason, it is critical that providers minimize costs of their hardware, even down to a single dollar—especially given their large-scale production. Unfortunately, the cost of the hardware is typically directly related to the security of the system. For example, a recent vulnerability found by me and my colleagues in the latest model Square Reader is the ability to convert the Reader to a credit card skimmer via a hardware encryption bypass. This vulnerability was possible due to the placement of the encryption chip on a ribbon cable offset from the magnetic head. If the encryption chip and magnetic head had been mounted to the Reader as an assembly, the attack would not have been possible. However, there is a drastic difference in the cost, on the order of several dollars per part, and therefore security was sacrificed for the bottom line. This is the kind of challenging decision every hardware company has to make in order to meet their business metrics, and often it can be difficult to find a middle ground where security is not sacrificed for expense.

New commodity hardware will continue to integrate into our personal lives and personal data as it becomes cheaper, more compact, and universally compatible. For these reasons, commodity hardware continues to present undetermined and intriguing security vulnerabilities. Concurrently, hardware providers confront these demanding security challenges unique to their industry. They face design issues for proper hardware deprecation due to massive distribution, and they play a constant tug-of-war between cost constraints and security, which typically ends with a less secure device. These potential security holes will remain a concern so long as the smartphone industry and commodity hardware market advance.

Alexandrea Mellen is the founder and chief developer at Terrapin Computing, LLC, which makes mobile applications. She presented as a briefing speaker at Black Hat USA 2015 (“Mobile Point of Scam: Attacking the Square Reader”). She also works in engineering sales at The Mellen Company, which manufactures and designs high-temperature lab furnaces. She has previously worked at New Valence Robotics, a 3-D printing company, as well as The Dorm Room Fund, a student-run venture firm. She holds a BS in Computer Engineering from Boston University. During her undergraduate years, she completed research on liquid metal batteries at MIT with Group Sadoway. See alexandreamellen.com for more information.

The Future of Engineering Research & Systems Modeling

So many bytes, so little time. Five years ago, I found myself looking for a new career. After 20 years in the automotive sector, the economic downturn hit home and my time had come. I was lucky enough to find a position at the University of Notre Dame designing and building lab instrumentation and data acquisition equipment in the Department of Civil and Environmental Engineering & Earth Sciences, and teaching microprocessor programming in the evenings at Ivy Tech Community College. The transition from industry to the academic world has been challenging and rewarding. Component and system modeling using computer simulation is an integral part of all engineering disciplines. Much of the industry’s simulation software started out in a university computer lab.

A successful computer simulation of a physical phenomenon has several requirements. The first requirement is a stable model based on a set of equations relating to the physics and scale of the event. For complex systems, this model may not exist, and a simplified model may be used to approximate the event as closely as possible. Assumptions are made where data is scarce. The second requirement is a set of initial conditions: the values all the equation variables need to start the simulation. These values are usually determined by running real-world experiments and capturing a “snapshot” of the event at a specific time. The quality of this data depends on the technology available at the time. The technology behind sensors and data acquisition for these experiments is evolving at an incredible rate. Sensors that may have cost $500 ten years ago are available now for $5 and have been miniaturized to one-tenth of their original size to fit into a cell phone or smart band. Equipment that was once too large to be used outside a lab environment is now pocket sized and portable. Researchers are taking advantage of this, and taking much more data than ever imagined.
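
To make those two requirements concrete, here is a minimal simulation sketch in Python (my own illustration; the system and every number in it are assumptions, not values from any particular experiment). The “model” is the update equations for a damped mass-spring system, and the “initial conditions” are the snapshot of position and velocity that seeds them:

```python
# Minimal simulation sketch: a damped mass-spring system.
# The "model" is m*x'' = -k*x - c*v, integrated with simple Euler
# steps; the "initial conditions" are the snapshot of position and
# velocity captured from an experiment at t = 0. All values here
# are illustrative assumptions.

m, k, c = 1.0, 4.0, 0.3    # mass (kg), spring constant, damping (assumed)
x, v = 0.05, 0.0           # initial conditions: 5 cm displacement, at rest
dt, t_end = 0.001, 10.0    # time step (s) and duration (s)

t = 0.0
while t < t_end:
    a = (-k * x - c * v) / m   # acceleration from the model equations
    v += a * dt                # advance velocity one step
    x += v * dt                # advance position one step
    t += dt

print(f"Position after {t_end:.0f} s: {x:.6f} m")
```

An error in the captured snapshot propagates through every subsequent step, which is why the quality of the measured initial conditions matters so much.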

So how will this affect the future of simulation? Multicore processors and distributed computing are allowing researchers to run more simulations and get results quicker. Our world has become Internet driven and people want data immediately, so data must become available as close to real-time as possible. As more and more sensors become wireless, low cost, energy efficient, and “smart” due to the Internet of Things movement, empirical data is available from places never before conceived. Imagine the possible advancements in weather modeling and forecasting if every cell phone in the world sent temperature, humidity, barometric pressure, GPS, and light intensity data to a cloud database automatically. More sensors lead to higher simulation resolution and more accuracy.

A popular saying, “garbage in = garbage out,” still applies, and is the bane of the Internet. Our future programmers must be able to sift through all of this new data and determine the good from the bad. Evil hackers enjoy destroying databases, so security is a major concern. Some of this new technology that could be useful in research is being rejected by the public due to criminal use. For example, a UAV “drone” that can survey a farmer’s crop can also deliver contraband or cause havoc at an airport or sporting event. While these issues are tackled in the courtroom and at the FAA, researchers are waiting to take more data.
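
To make the data-sifting point concrete, here is a toy Python sketch (my own illustration, with thresholds chosen arbitrarily) of the kind of sanity checks that separate plausible sensor readings from obviously bad ones before they reach a model:

```python
# Toy data-sifting sketch: reject physically impossible or wildly
# jumping temperature readings before they feed a simulation.
# The range limits and jump threshold are illustrative assumptions.

def clean_temperatures(readings, lo=-60.0, hi=60.0, max_jump=5.0):
    """Keep readings (deg C) inside [lo, hi] that don't jump more
    than max_jump from the last accepted value."""
    accepted = []
    for r in readings:
        if not lo <= r <= hi:
            continue                     # outside the physical range
        if accepted and abs(r - accepted[-1]) > max_jump:
            continue                     # implausible spike
        accepted.append(r)
    return accepted

print(clean_temperatures([21.2, 21.4, 99.9, 21.5, -80.0, 22.0]))
# -> [21.2, 21.4, 21.5, 22.0]
```

Real pipelines need far more sophistication than this, but even simple screens like these keep much of the obvious garbage out.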

Simulation is still only a guess at what may happen under specific conditions based on assumptions of how our world works. The advancements in sensor and data acquisition technology will continue to improve the accuracy of these guesses, as long as we can depend on the reliability of the input sources and keep the evil hackers out of the databases. Schools still need to train students on how to determine good data from questionable data. The terabyte question for the future of simulation is whether or not we will be able to find the data we need in the format we need, searching through all these new data sources in less time than it would take to run the original experiments ourselves. So many bytes, so little time.

R. Scott Coppersmith earned a BSc in Electrical Engineering at Michigan Technological University. He held several engineering positions in the automotive industry from the late 1980s until 2010, when he joined the University of Notre Dame’s Civil Engineering and Geological Sciences department as a Research Engineer to help build an Environmental Fluid Dynamics laboratory and assist students, faculty, and visiting researchers with their projects. Scott also teaches a variety of engineering courses (e.g., Intro to Microcontrollers and Graphic Communication for Manufacturing) at Ivy Tech Community College.

Trends in Custom Peripheral Cores for Digital Sensor Interfaces

Building ever-smarter technology, we perpetually require more sensors to collect increasing amounts of data for our decision-making machines. Power and bandwidth constraints require signals from individual sensors to be aggregated, fused and condensed locally by sensor hubs before being passed to a local application processor or transmitted to the cloud.

FPGAs are often used for sensor hubs because they handle multiple parallel data paths in real time extremely well and can be very low power. ADC parallel interfaces and simple serial shift register interfaces are straightforward to implement in FPGA logic. However, interfacing FPGAs with more complex serial devices—which are becoming more common as analog and digital circuitry are integrated—or serializing collected data is often less straightforward. Typically, serial interfaces are implemented in FPGA fabric as a state machine where a set of registers represents the state of the serial interface, and each clock cycle, logic executes depending on the inputs and state registers. For anything but the most trivial serial interface, the HDL code for these state machines quickly balloons into a forest of parallel if-elseif-else trees that are difficult to understand or maintain and take large amounts of FPGA fabric to implement. Iterating the behavior of these state machines requires recompiling the HDL and reprogramming the FPGA for each change, which is frustratingly time-consuming.
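
To see why these state machines balloon, consider a rough behavioral model (sketched here in Python for readability; an HDL version would be considerably longer, and the design itself is my invention for illustration) of a trivial shift-out serial transmitter. Every branch below corresponds to one arm of the parallel if-elseif tree the HDL would need, and a real interface like I2C multiplies the states and conditions many times over:

```python
# Rough behavioral model of a per-clock-cycle state-machine update
# for a trivial MSB-first shift-out transmitter. In FPGA fabric,
# "state", "shift", and "count" would be registers, and this
# function would be the combinational logic evaluated on each
# clock edge. The design is an illustrative assumption.

IDLE, LOAD, SHIFT, DONE = range(4)

def tick(state, shift, count, start, data):
    """One clock cycle: return the next (state, shift, count, out_bit)."""
    out = 0
    if state == IDLE:
        if start:
            state = LOAD
    elif state == LOAD:
        shift, count, state = data, 8, SHIFT
    elif state == SHIFT:
        out = (shift >> 7) & 1          # present the MSB on the output
        shift = (shift << 1) & 0xFF
        count -= 1
        if count == 0:
            state = DONE
    elif state == DONE:
        state = IDLE
    return state, shift, count, out

# Clock the machine and collect the serialized bits of 0xA5.
state, shift, count, bits = IDLE, 0, 0, []
for cycle in range(12):
    prev = state
    state, shift, count, out = tick(state, shift, count, cycle == 0, 0xA5)
    if prev == SHIFT:                   # output is valid during SHIFT
        bits.append(out)

print(bits)  # MSB-first bits of 0xA5: [1, 0, 1, 0, 0, 1, 0, 1]
```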

Custom soft cores offer an alternate solution. Soft cores, sometimes known as IP cores, are not new in FPGA development, and most FPGA design tools include a library of cores that can be imported for free or purchased. Often these soft cores take the form of microcontrollers such as the Cortex-M1, MicroBlaze, lowRISC, etc., which execute a program from memory and enable applications to be implemented as a combination of HDL (Verilog, VHDL, etc.) and procedural microcode (assembly, C, C++, etc.).

While off-the-shelf soft core microprocessors are overkill and too resource intensive for implementing single serial interfaces, we can easily create our own custom soft cores when we need them that use fewer resources and are easier to program than a state machine. For the purpose of this article, a custom soft core is a microcontroller with an instruction set, registers, and peripheral interfaces created specifically to efficiently accomplish a given task. The soft core executes a program from memory on the FPGA, which makes program iteration rapid because the memory can be reprogrammed without resynthesizing or reconfiguring the whole FPGA fabric. We program the soft core procedurally in assembly, which mentally maps to serial interface protocols more easily than HDL. Sensor data is made available to the FPGA fabric through register interfaces, which we also define according to the needs of our application.
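
As a sketch of the concept (again in Python rather than HDL, and with an instruction set I invented for illustration, not one from any real core), the heart of such a soft core is just a fetch/decode/execute loop reading fixed-width words from program memory:

```python
# Conceptual model of a tiny custom soft core: 16-bit instructions
# with a 4-bit opcode and a 12-bit operand, executed from a program
# memory ("mem", standing in for FPGA block RAM). The instruction
# set is an illustrative assumption.

NOP, LDI, OUT, JMP, DJNZ = range(5)

def run(mem, max_cycles=64):
    pc, acc, out_bits = 0, 0, []
    for _ in range(max_cycles):
        if pc >= len(mem):
            break                      # ran off the end: halt
        word = mem[pc]
        op, arg = word >> 12, word & 0x0FFF
        pc += 1
        if op == LDI:                  # load a 12-bit immediate
            acc = arg
        elif op == OUT:                # drive a bit onto an output register
            out_bits.append(arg & 1)
        elif op == JMP:                # unconditional jump
            pc = arg
        elif op == DJNZ:               # decrement acc, jump if not zero
            acc -= 1
            if acc != 0:
                pc = arg
    return out_bits

# Program: load a counter of 8, then emit one bit per loop iteration.
program = [
    (LDI << 12) | 8,    # 0: acc = 8
    (OUT << 12) | 1,    # 1: emit a 1 bit  <- loop target
    (DJNZ << 12) | 1,   # 2: acc -= 1; if acc != 0, goto 1
]
print(run(program))     # -> [1, 1, 1, 1, 1, 1, 1, 1]
```

Reloading the contents of `mem` changes the interface’s behavior without resynthesizing anything else, which is exactly the rapid-iteration benefit described above.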

Having implemented custom soft cores many times in FPGA applications, I am presently developing an open-source example/template soft core that is posted on GitHub (https://github.com/DanielCasner/i2c_softcore). For this example, I am interfacing with a Linear Technology LTC2991 sensor that has internal configuration, status, and data registers, which must be set and read over I2C (which is notoriously difficult to implement in HDL). The soft core has 16-bit instructions defined specifically for this application and executes from block RAM. The serial program is written in assembly and compiled by a Python script. I hope that this example will demonstrate how straightforward and beneficial creating custom soft cores can be.
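
As a hedged sketch of what such a compiling script involves (the mnemonics and encoding below continue the invented toy instruction set from the previous example; the real i2c_softcore repository defines its own format), the core job is a two-pass mapping of labels and mnemonics into fixed-width words:

```python
# Sketch of a tiny two-pass assembler for the toy 16-bit instruction
# set above (4-bit opcode, 12-bit operand). The mnemonics and
# encoding are illustrative assumptions, not the i2c_softcore format.

OPCODES = {"NOP": 0x0, "LDI": 0x1, "OUT": 0x2, "JMP": 0x3, "DJNZ": 0x4}

def assemble(source):
    lines = [l.split(";")[0].strip() for l in source.splitlines()]
    lines = [l for l in lines if l]
    labels, words, addr = {}, [], 0

    # Pass 1: record the address of each label.
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1

    # Pass 2: encode each instruction into a 16-bit word.
    for line in lines:
        if line.endswith(":"):
            continue
        parts = line.replace(",", " ").split()
        op, args = parts[0].upper(), parts[1:]
        operand = 0
        if args:
            a = args[0]
            operand = labels[a] if a in labels else int(a, 0)
        words.append((OPCODES[op] << 12) | (operand & 0x0FFF))
    return words

program = """
        LDI 8          ; load the bit counter
loop:
        OUT 1          ; clock out one bit
        DJNZ loop      ; decrement; jump back until the counter is 0
"""
print([f"{w:04X}" for w in assemble(program)])  # -> ['1008', '2001', '4001']
```

Because the output is just words for block RAM, reassembling and reloading a new program takes seconds, versus minutes or hours for a full FPGA rebuild.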

While I have been discussing soft cores for FPGAs in this article, an interesting related trend in microprocessors is the inclusion of minion cores, sometimes also called programmable real-time units (PRUs) or programmable peripherals. While not fully customizable as in FPGA fabric, these cores are very similar to the soft cores discussed, as they have limited instruction sets optimized for serial interfaces and are intended to have simple programs that execute independently of the application to interface with sensors and other peripherals. By freeing the main processor core of direct interface requirements, they can improve performance and often simplify development. In the future, I would expect more and more MCUs to include minion cores among their peripherals.

As the amount of data to be processed and the efficiency requirements both increase, we should expect to see heterogeneous processing in FPGAs and microcontrollers increase, and we should be ready to shift our mental programming models to take advantage of the many different paradigms available.

Daniel Casner is a robotics engineer at Anki, co-founder of Robot Garden, hardware start-up adviser, and embedded systems consultant. He is passionate about building clever consumer devices, adding intelligence to objects, smart buildings, and any other cyber-physical system. His specialties include design for manufacture and scalable production; cyber-physical security; reverse engineering; electronics and firmware; signal processing; and prototype development.

This essay appears in Circuit Cellar 301.

Interconnect Defects (ICDs) Explained

What is an Interconnect Defect (ICD)? An ICD is a condition that can interfere with the internal circuit connections in a printed circuit board (PCB). These internal connections occur where the innerlayer circuit has a drilled hole put through it. PCB processing adds additional copper into the drilled hole to connect the innerlayer circuits together and bring the circuit to the PCB board surface where connectors or components are placed to provide final function.

If there is a defect at or near this interconnect between the plating and the innerlayer copper, it could lead to failure of a specific circuit (or net). This defect typically causes open circuits, but can be intermittent at high temperatures. Of significant concern is that the functionality may be fine as the PCB is built, but the board may fail in assembly or usage, making it a reliability risk. This latency has put ICDs on the industry’s serious defect list. Moreover, ICDs have increased in frequency over the past five to seven years, making this a higher-priority issue.

The majority of ICDs fall into two categories: debris-based ICDs and copper bond failure ICDs. Debris-based ICDs are caused by material left behind by the hole drilling process. This material is supposed to be removed from the holes, but is not when ICDs are found. Some causes are drill debris residues, drill smear, and particles (glass and inorganic fillers) embedded into the innerlayer copper surface. The increase in this ICD type seems to be related to the increased usage of low-Dk/low-Df materials that use inorganic fillers. These materials generate more drilling debris and are often more chemically resistant than standard FR-4 epoxy materials. This combination of effects makes the drilled holes much more difficult to clean out completely.

Debris-based ICD

Copper bond failure ICDs occur when the copper connection is physically broken. This can be due to high stress during assembly or use, a weak copper bond, or a combination of the two. This failure mode is also design related; in particular, increased PCB thickness, increased hole size, and wave soldering all tend to increase the risk of copper bond ICDs. The rate of this ICD type also seems to have increased, which is related to higher lead-free soldering temperatures and increased board thickness over the past 10 years. Note: This condition also occurs on HDI microvias. The causes are similar but the processing is different.

Copper bond failure ICD

Reliability testing has been run on both types of ICDs. Copper bond ICDs are a significant reliability issue. They show up as assembly failures, and product with this weakness may have an increased tendency for field failures. Drill debris ICDs have not been shown to be a significant reliability issue in several studies, but they are an industry specification failure, so they affect product yield and costs. Well-run IST testing, using a valid coupon structure, has been a very valuable method for determining risk due to ICDs.

ICDs can be prevented by good PCB design and improved PCB processing methods. Debris-type ICDs are a function of drilling parameters and desmearing. Many of the newer materials with fillers do not drill like standard FR-4. Instead of forming a chip during drilling, they break apart into small particles. These particles then tend to coat the drilled hole walls. One factor associated with debris ICDs is drill bit heating: factors that result in hotter drill bits cause more debris formation and residues. Desmearing, which is done to remove drilling residues, often needs to be more aggressive when using these material types. This has been effective at reducing or eliminating debris ICDs.

Copper bond failures are a little more complex. In PCB processing, the key factor is cleaning the innerlayer copper surface so that a strong bond can form. In addition, the electroless copper deposit needs to be in good control, with the correct thickness and grain structure, to have the required strength. Testing and experience show that a good processing focus, along with appropriate reliability testing, can result in consistently robust product.

Design factors also play a big role. As noted above, board thickness and hole size are key factors. These relate to the amount of stress placed upon the interconnect during thermal exposure. Eliminating soldered through-hole connectors is one of the major ways to reduce this issue, as these often contain most of the larger holes. If you need to have thick boards, look into the z-axis CTE and Tg of your material. Lower z-axis CTE values and higher Tg values will result in reduced stress.

With PCB performance requirements constantly on the rise, ICDs will remain an issue. A better understanding of ICDs will help designers reduce the impact that they have on the performance of the board. Better PCB processing practices in drilling, desmearing, and electroless copper selection will improve quality. Implementing best practices—particularly changing connector approaches—will reduce opportunities for ICDs. Finally, this issue is taken seriously by PCB suppliers, many of which are working to combat the sources of ICD failures.

Doug Trobough is the Corporate Director of Application Engineering at Isola Corp. Doug has worked on new material introduction and PCB processing enhancement at Isola for five years. Prior to Isola, Doug had almost 30 years of experience building a wide variety of PCB types and interconnection systems for Tektronix and Merix Corp., in a variety of technical positions, including CTO of Merix Corp.

This essay appears in Circuit Cellar 300 (July 2015).

The Future of Intelligent Robots

Robots have been around for over half a century now, making constant progress in terms of their sophistication and intelligence levels, as well as their conceptual and literal closeness to humans. As they become smarter and more aware, it becomes easier to get closer to them both socially and physically. That leads to a world where robots do things not only for us but also with us.

Not-so-intelligent robots made their debut in factory environments in the late ’50s. Their main role was merely to handle the tasks that humans were either not very good at or that were dangerous for them. Traditionally, these robots have had very limited sensing; they have essentially been blind despite being extremely strong, fast, and repeatable. Considering the likely consequences if humans were to wander freely within close vicinity of these strong, fast, and blind robots, it seemed to be a good idea to isolate them from the environment by placing them in safety cages.

Advances in the fields of sensing and compliant control made it possible to get a bit closer to these robots, again both socially and physically. Researchers have started proposing frameworks that enable human-robot collaborative manipulation and task execution in various scenarios. Bi-manual collaborative manufacturing robots like YuMi by ABB and service robots like HERB by the Personal Robotics Lab of Carnegie Mellon University[1] have started emerging. Various modalities of learning from demonstration and programming by demonstration, such as kinesthetic teaching and imitation, make it very natural to interact with these robots and teach them the skills and tasks we want them to perform, much the way we teach a child. For instance, the Baxter robot by Rethink Robotics heavily utilizes these capabilities and technologies to potentially bring a teachable robot to every small company with basic manufacturing needs.
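
As a toy illustration of what kinesthetic teaching boils down to (a minimal sketch with a hypothetical Arm class, not any vendor's API): the teacher physically guides a compliant, gravity-compensated arm while it records joint positions, and the robot later replays the recorded trajectory.

import time

class Arm:
    """Hypothetical stand-in for a compliant robot arm."""
    def read_joints(self):
        return [0.0, 0.0, 0.0, 0.0]    # would query the joint encoders
    def command_joints(self, q):
        print("moving to", q)          # would drive the joint servos

def record_demo(arm, duration_s=5.0, rate_hz=20.0):
    # While the arm is compliant, a person moves it through the task
    # and we simply sample the joint positions at a fixed rate.
    trajectory, t_end = [], time.time() + duration_s
    while time.time() < t_end:
        trajectory.append(arm.read_joints())
        time.sleep(1.0 / rate_hz)
    return trajectory

def replay(arm, trajectory, rate_hz=20.0):
    # Stiffen the arm and step through the demonstrated waypoints.
    for q in trajectory:
        arm.command_joints(q)
        time.sleep(1.0 / rate_hz)

Real systems generalize the demonstration rather than replaying it verbatim, but this record-and-replay loop is the core of the interaction.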

As robots get smarter, more aware, and safer, it becomes easier to socially accept and trust them as well. This reduces the physical distance between humans and robots even further, leading to assistive robotic technologies that literally “live” side by side with humans 24/7. One such project is the Assistive Dexterous Arm (ADA)[2] that we have been carrying out at the Robotics Institute and the Human-Computer Interaction Institute of Carnegie Mellon University. ADA is a wheelchair-mountable, semi-autonomous manipulator arm that uses the sliding autonomy concept to assist people with disabilities in performing their activities of daily living. Our current focus is on assistive feeding, where the robot is expected to help users eat their meals in a natural and socially acceptable manner. This requires the ability to predict the user’s behaviors and intentions, as well as spatial and social awareness to avoid awkward situations in social eating settings. Safety is also our utmost concern, as the robot has to operate very close to the user’s face and mouth during task execution.
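
To make the sliding autonomy idea concrete, here is a minimal sketch (names and numbers purely illustrative): the arm blends the user's joystick command with the planner's command, shifting weight toward the robot as its confidence in the inferred intent grows.

def blend_command(user_cmd, auto_cmd, confidence):
    # confidence = 0.0 -> pure teleoperation; 1.0 -> full autonomy.
    return [(1.0 - confidence) * u + confidence * a
            for u, a in zip(user_cmd, auto_cmd)]

# The user steers roughly toward a bite of food; the planner is fairly
# confident (0.7) it has identified the target and refines the motion.
user_velocity = [0.10, 0.02, 0.00]       # m/s, from the joystick
planned_velocity = [0.08, 0.05, 0.01]    # m/s, from the motion planner
print(blend_command(user_velocity, planned_velocity, confidence=0.7))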

In addition to assistive manipulators, there have also been giant leaps in the research and development of smart and lightweight exoskeletons that make it possible for paraplegics to walk by themselves. These exoskeletons make use of the same set of technologies, such as compliant control, situational awareness through precise sensing, and even learning from demonstration to capture the walking patterns of a healthy individual.

These technologies, combined with recent developments in neuroscience, have made it possible to get even closer to humans than an assistive manipulator or an exoskeleton, and to literally unite with them through intelligent prosthetics. An intelligent prosthetic limb uses learning algorithms to map the received neural signals to the user’s intentions while the user’s brain constantly adapts to the artificial limb. It also needs to be highly compliant, both to handle the vast variance and uncertainty of the real world and to keep the user safe.
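
As a minimal sketch of that decoding loop (illustrative only; real decoders are far more sophisticated): map a feature vector extracted from the neural signals to a discrete intent with a linear model, and keep nudging the weights online as the user's signals drift.

import numpy as np

class OnlineIntentDecoder:
    """Toy linear decoder: signal features -> intent class, updated online."""
    def __init__(self, n_features, n_intents, lr=0.01):
        self.w = np.zeros((n_intents, n_features))
        self.lr = lr

    def predict(self, features):
        return int(np.argmax(self.w @ features))

    def update(self, features, true_intent):
        # Perceptron-style correction so the decoder tracks the user's
        # adapting neural signals.
        pred = self.predict(features)
        if pred != true_intent:
            self.w[true_intent] += self.lr * features
            self.w[pred] -= self.lr * features

# Usage: classify a window of signal features as grasp / release / rest.
decoder = OnlineIntentDecoder(n_features=8, n_intents=3)
window = np.random.rand(8)               # stand-in for real signal features
decoder.update(window, true_intent=1)    # calibration feedback from the user
print(decoder.predict(window))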

Extrapolating from the aforementioned developments and many others, we can safely say that robots are going to be woven into our lives. Laser technology was unreachable and cutting-edge from an average person’s perspective a couple of decades ago. However, as Rodney Brooks notes in his book Robot: The Future of Flesh and Machines (Penguin Books, 2003), today we do not know exactly how many laser devices we have in our houses, and more importantly, we don’t even care! The same will be true of robots. In the not-so-distant future, we will be enjoying the ride in our autonomous vehicles while nanobots in our bloodstreams deliver drugs and fix problems, comfortable in the knowledge that our older relatives are getting great care from their assistive companion robots.

[1] http://www.cmu.edu/herb-robot/
[2] https://youtu.be/glpCAdKEWAA

Tekin Meriçli, PhD, is a well-rounded roboticist with in-depth expertise in machine intelligence and learning, perception, and manipulation. He is currently a Postdoctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University, where he leads efforts to build intuitive and expressive interfaces for interacting with semi-autonomous robotic systems intended to assist the elderly and people with disabilities. Previously, he was a Postdoctoral Fellow at the National Robotics Engineering Center (NREC) and the Personal Robotics Lab of the Robotics Institute at Carnegie Mellon University. He received his PhD in Computer Science from Bogazici University, Turkey.

This essay appears in Circuit Cellar 298 (May 2015).

Security Agents for Embedded Intrusion Detection

Knowingly or unknowingly, we interact with hundreds of networked embedded devices in our day-to-day lives: mobile devices, household electronics, medical equipment, automobiles, media players, and many more. This increased dependence on networked embedded devices, however, has raised serious security concerns. In the past, the security of embedded systems was not a major concern: these systems formed stand-alone networks of trusted devices with little or no communication to the external world, so an attack could be executed only with direct physical or local access to the internal embedded network or to the device itself. Today, however, almost every embedded device is connected to other devices or to the external world (e.g., the cloud) for advanced monitoring and management capabilities. Enabling networking paves the way for the smarter world we currently live in, but the same capability raises severe security concerns for embedded devices. Recent attacks on embedded device product portfolios presented at the Black Hat and DEF CON conferences have identified remote exploits (e.g., an adversary who abuses the remote connectivity of embedded devices to launch attacks such as privacy leakage, malware insertion, and denial of service) as one of the major attack vectors. A handful of research efforts along the lines of traditional security defenses have been proposed to enhance the security posture of these networked devices. These solutions, however, do not entirely solve the problem, and we therefore argue the need for a lightweight intrusion-defense capability within the embedded device itself.

In particular, we observe that the networking capability of embedded devices can itself be leveraged to provide an in-home secure proxy server that monitors all the network traffic to and from the devices. The proxy server acts as a gateway, performing policy-based operations on all the traffic to and from the interconnected embedded devices inside the household. To do so, the proxy server implements an agent-based computing model in which each embedded device runs a lightweight checker agent that periodically reports the device status back to the server; the server verifies the operation’s integrity and signals the device to continue its normal functionality. A similar approach is proposed in Ang Cui and Salvatore J. Stolfo’s 2011 paper, “Defending Embedded Systems with Software Symbiotes,” where a piece of software called a Symbiote is injected into the device’s firmware and uses a secure checksum-based approach to detect any malicious intrusion into the device.
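
A minimal sketch of this checker-agent exchange follows; the message format, key handling, and names are illustrative assumptions, not a finished protocol. The agent hashes its firmware image and sends an authenticated status report; the proxy verifies the report and compares the digest against a known-good value.

import hashlib
import hmac

def agent_status_report(firmware, device_id, key):
    # The on-device agent: hash the firmware image and MAC the report
    # so the proxy can authenticate its origin.
    digest = hashlib.sha256(firmware).hexdigest()
    tag = hmac.new(key, (device_id + digest).encode(), hashlib.sha256).hexdigest()
    return {"device": device_id, "digest": digest, "tag": tag}

def proxy_verify(report, known_good_digest, key):
    # The proxy server: authenticate the report, then check integrity.
    expected = hmac.new(key, (report["device"] + report["digest"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["tag"]):
        return "reject: report not authentic"
    if report["digest"] != known_good_digest:
        return "quarantine: firmware mismatch"
    return "ok: resume normal operation"

KEY = b"per-device secret"          # provisioned with the firmware upgrade
FIRMWARE = b"\x00" * 1024           # stand-in for the device image
GOOD = hashlib.sha256(FIRMWARE).hexdigest()
print(proxy_verify(agent_status_report(FIRMWARE, "thermostat-01", KEY), GOOD, KEY))

A real deployment would add nonces to prevent replay and attest the agent itself; the point here is only the division of labor, with the heavy lifting on the proxy.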

In contrast to the Symbiote, our checker agents merely forward device status to the server; all the related heavy computation is offloaded to the proxy server, which keeps the approach computationally efficient on the device side. The proposed model incurs only a small computational overhead in gathering and reporting critical device status messages to the server. The communication overhead can also be amortized under most circumstances, as the sensor data from the checker agents can be piggybacked onto the ordinary data messages exchanged between the device and the server. Like the approach described in the aforementioned Cui and Stolfo paper, our model can be easily integrated with legacy embedded devices, as the only modification required is a “firmware upgrade that includes checker agents.”

To complete the picture, we propose an additional layer of security for modern embedded devices by designing an AuditBox, in the spirit of “Pillarbox” by K. Bowers, C. Hart, A. Juels, and N. Triandopoulos. It keeps an obfuscated log of malicious events taking place at the device, which is reported back to the server at predefined time intervals. This enables the server to act accordingly, either revoking the device from the network or restoring it to a safe state. AuditBox will enforce integrity, making it possible to verify whether the logs at the device have been tampered with by an adversary in control of the device, and covertness, hiding from an attacker with access to the device whether the log reports the detection of malicious behavior. To realize these requirements, AuditBox will exploit the concept of forward-secure key generation.
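
The following sketch shows the forward-secure idea in its simplest form (in the spirit of Pillarbox, not the paper's actual construction; names are illustrative): each entry is MACed with the current epoch key, then the key is hashed forward and the old key destroyed, so an attacker who compromises the device later cannot forge or silently rewrite earlier entries.

import hashlib
import hmac

class ForwardSecureLog:
    def __init__(self, seed):
        self.key = hashlib.sha256(seed).digest()
        self.entries = []

    def append(self, event):
        tag = hmac.new(self.key, event, hashlib.sha256).hexdigest()
        self.entries.append((event, tag))
        # Evolve the key one way and discard the old one: entries from
        # earlier epochs can no longer be forged on the device.
        self.key = hashlib.sha256(self.key).digest()

def verify(seed, entries):
    # The server re-derives each epoch key from the shared seed and
    # checks every tag in order.
    key = hashlib.sha256(seed).digest()
    for event, tag in entries:
        expected = hmac.new(key, event, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return False
        key = hashlib.sha256(key).digest()
    return True

log = ForwardSecureLog(b"device-provisioning-secret")
log.append(b"blocked outbound connection to unknown host")
print(verify(b"device-provisioning-secret", log.entries))   # True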

Embedded systems security is of crucial importance, and addressing it is the need of the hour. Along with the advancement of embedded systems technology, we need to put an equal emphasis on security in order for our world to be a truly smarter place.

RESOURCES
K. Bowers, C. Hart, A. Juels, & N. Triandopoulos, “Pillarbox: Combating Next-Generation Malware with Fast Forward-Secure Logging,” in Research in Attacks, Intrusions and Defenses, ser. Lecture Notes in Computer Science, A. Stavrou, H. Bos, and G. Portokalidis (Eds.), Springer, 2014, http://dx.doi.org/10.1007/978-3-319-11379-1_3.

A. Cui & S. J. Stolfo, “Defending Embedded Systems with Software Symbiotes,” in Proceedings of the 14th International Conference on Recent Advances in Intrusion Detection (RAID’11), R. Sommer, D. Balzarotti, and G. Maier (Eds.), Springer-Verlag, 2011, http://dx.doi.org/10.1007/978-3-642-23644-0_19.

Dr. Devu Manikantan Shila is the Principal Investigator for the Cyber Security area within the Embedded Systems and Networks Group at the United Technologies Research Center (UTRC).


Marten van Dijk is an Associate Professor of Electrical and Computer Engineering at the University of Connecticut, with over 10 years of research experience in system security in both academia and industry.


Syed Kamran Haider is pursuing a PhD in Computer Engineering supervised by Marten van Dijk at the University of Connecticut.


This essay appears in Circuit Cellar 297 (April 2015).