The Future of Hardware Design

The future of hardware design is in the cloud. Many companies are already focused on the Internet of Things (IoT) and creating hardware to be interconnected in the cloud. However, can we get to a point where we build hardware itself in the cloud?

Traditional methods of building hardware in the cloud recall the large industry of EDA software packages—board layouts, 3-D circuit assemblies, and chip design. It’s arguable that this industry emphasizes mechanical design, focusing on intricate chip placement, 3-D space, and connections. There are also cloud-based SPICE simulators for electronics—a less-than-user-friendly experience with limited libraries of generic parts. Simulators that do have a larger library also tend to carry a larger cost. Finding exact parts can be a frustrating experience: a SPICE transistor typically does not have a BOM part number, so turning a working design into a real product becomes a sourcing hunt among several vendors’ offerings.

What if I want to create real hardware in the cloud and build a project like those in Circuit Cellar articles? This is where I see the innovation that is changing the future of how we make electronics. We now have cloud platforms that provide the experience of using actual vendor parts and interfacing them with a microcontroller. If you’re actually building a project, you need component lists that include servo motors, IR remotes with buttons, LCDs, buzzers with sound, and accelerometers—definitive parts carried by vendors, not just generic ICs. Ask any design engineer: they have typical parts that they reuse and trust in every design. They need to verify that these parts move and work, so an online platform with these parts allows for a real-world simulation.

An Arduino IDE that allows real-time debugging and stepping through code in the cloud is powerful. Advanced microcontroller IDEs do not include external components in their simulators or environments. A platform that can interconnect a controller with external components in simulation mirrors real life more closely than anything else. As computer processing power continues to rise, similar opportunities may open up for other, more complex MCUs.

Most hardware designers are unaware of the newest cloud offerings or have not worked with a platform enough to evaluate it as a game-changer. But imagine if new electronics makers and existing engineers could learn and innovate in the cloud, for free, without any hardware.

I remember spending considerable time working on circuit boards to learn the hardware “maker” side of electronics. I would typically start with a breadboard to build basic circuits. Afterward, I would migrate the circuit to a protoboard to build a smaller, more robust version that could be soldered together. Several confident projects later, I jumped to designing and producing PCBs, which eventually led to an entirely different level in the semiconductor industry. Once the boards were designed, all the motors, sensors, and external parts could be assembled onto the board for testing.

Traditionally, an assembled PCB was needed to run the hardware design—to test it for reliability, to program it, and to verify it works as desired. Parts could be implemented separately, but in the end, a final assembled design was required for software testing, peripheral integration, and quality testing. Imagine how different this is with a hardware simulation. The quality aspect will always be tied to actual hardware testing, but the design phase is definitely undergoing disruption. A user can simply modify and test until the design works to their liking—failing several online designs along the way without consequence—and then take it straight to a PCB.

With an online simulation platform, aspiring engineers can now have experiences different from my traditional one. They don’t need labs or breadboards to blink LEDs. The cloud equalizes access to technology regardless of background. Hardware designs can flow like software. Instead of sending electronics kits to countries with importation issues, hardware designs can be shared online, where people can toggle buttons and user-test them. Students do not have to buy expensive hardware, batteries, or anything more than a computer.

An online simulation platform also affects the design cycle. Hardware design cycles can be fast when needed, but they are not like software’s. Merging the two sides means thousands of people can access a design and provide feedback overnight, just like a Facebook update. Changes to a design can be made instantly and deployed at the same time—an unheard-of cycle time. That’s software’s contribution to the traditional hardware cycle.

There are other possibilities for hardware simulation on the end-product side of the market. For instance, crowdfunding websites have become popular destinations for funding projects. But should we trust a simple video of a working prototype and buy the hardware ahead of production? Why can’t we play with the real hardware online? With an online simulation of the actual hardware, far less needs to be invested in hardware costs, and in the virtual environment potential customers can experience an end product built on a real electronic design.

Subtle changes tend to build up and then avalanche to make dramatic changes in how industries operate. Seeing the early signs—realizing something should be simpler—allows you to ask questions and determine where market gaps exist. Hardware simulation in the cloud will change the future of electronics design, and it will provide a great platform for showcasing your designs and teaching others about the industry.

John Young is the Product Marketing Manager for Autodesk’s 123D Circuits, focusing on building a free online simulator for electronics. He has a semiconductor background in designing products—from R&D to market launch—for Freescale and Renesas. His passion is finding the right market segment and building new or revamped products. He holds a BSEE from Florida Atlantic University and an MBA from the Thunderbird School of Global Management, and he is pursuing a project management certification from Stanford.

Software-Only Hardware Simulation

Simulating embedded hardware in a Windows environment can significantly reduce development time. In this article, Michael Melkonian provides techniques for the software-only simulation of embedded hardware. He presents a simple example of an RTOS-less embedded system that uses memory-mapped I/O to access a UART-like peripheral to serially poll a slave device. The simulator is capable of detecting bugs and troublesome design flaws.

Melkonian writes:

In this article, I will describe techniques for the software-only simulation of embedded hardware in the Windows/PC environment. Software-only simulation implies an arrangement with which the embedded application, or parts of it, can be compiled and run on the Windows platform (host) talking to the software simulator as opposed to the real hardware. This arrangement doesn’t require any hardware or tools other than a native Windows development toolset such as Microsoft Developer Studio/Visual C++. Importantly, the same source code is compiled and linked for both the host and the target. It’s possible and often necessary to simulate more complex aspects of the embedded target such as interrupts and the RTOS layer. However, I will illustrate the basics of simulating hardware in the Windows environment with an example of an extremely simple hypothetical target system (see Figure 1).

Figure 1: There is a parallel between the embedded target and host environment. Equivalent entities are shown on the same level.

Assuming that the source code of the embedded application is basically the same whether it runs in Windows or the embedded target, the simulation offers several advantages. You have the ability to develop and debug device drivers and the application before the hardware is ready. An extremely powerful test harness can be created on the host platform, where all code changes and additions can be verified prior to running on the actual target. The harness can be used as a part of software validation.

Furthermore, you have the ability to test conditions that may not be easy to test using the real hardware. In the vast majority of cases, debugging tools available on the host are far superior to those offered by cross development tool vendors. You have access to runtime checkers to detect memory leaks, especially for embedded software developed in C++. Lastly, note that where the final system comprises a number of CPUs/boards, simulation has the additional advantage of simulating each target CPU via a single process on a multitasking host.

Before you decide to invest in simulation infrastructure, there are a few things to consider. For instance, when the target hardware is complex, the software simulator becomes a fairly major development task. Also, consider the adequacy of the target development tools. This especially applies to debuggers. The absence, or insufficient capability, of the debugger on the target presents a strong case for simulation. When delivery times are more critical than the budget limitations and extra engineering resources are available, the additional development effort may be justified. The simulator may help to get to the final product faster, but at a higher cost. You should also think about whether or not it’s possible to cleanly separate the application from the hardware access layer.

Remember that when exact timings are a main design concern, the real-time aspects of the target are hard to simulate, so the simulator will not help. Likewise, when the embedded application’s complexity is minor compared to that of the hardware drivers, the simulator may not be justified. However, when a complex application sits on top of fairly simple hardware, the simulator can be extremely useful.

You should also keep in mind that when it’s likely that the software application will be completed before the hardware delivery date, there is a strong case for simulation …

Now let’s focus on what makes embedded software adaptable for simulation. It’s hardly surprising that the following guidelines closely resemble those for writing portable code. First, you need a centralized access mechanism to the hardware (read_hw and write_hw macros). Second, the application code and device driver code must be separated. Third, you must use a thin operating-system interface layer. Finally, avoid using the nonstandard add-ons that some cross-compilers may provide.
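To make the first guideline concrete, here is a minimal sketch of such a centralized access mechanism, assuming a hypothetical UART-like status register; the simulator entry points sim_read_hw() and sim_write_hw() are invented names for illustration, not Melkonian’s actual code:

    /* hw_access.h (sketch): the single point through which all
       hardware access flows */
    #ifdef TARGET_BUILD
      /* Embedded target: real memory-mapped I/O */
      #define read_hw(addr)      (*(volatile unsigned long *)(addr))
      #define write_hw(addr, v)  (*(volatile unsigned long *)(addr) = (v))
    #else
      /* Windows host: every access is routed to the simulator */
      unsigned long sim_read_hw(unsigned long addr);
      void          sim_write_hw(unsigned long addr, unsigned long v);
      #define read_hw(addr)      sim_read_hw((unsigned long)(addr))
      #define write_hw(addr, v)  sim_write_hw((unsigned long)(addr), (v))
    #endif

    /* Driver code below this point is identical in both builds. */
    #define UART_STATUS  0x40001000UL  /* hypothetical UART register */
    #define UART_TX_RDY  0x01UL        /* hypothetical TX-ready bit  */

    int uart_ready(void)
    {
        return (read_hw(UART_STATUS) & UART_TX_RDY) != 0;
    }

Because the application and drivers only ever touch the hardware through these two macros, the same source compiles and links unchanged for host and target, which is exactly the arrangement the article describes.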

Download the entire article: M. Melkonian, “Software-Only Hardware Simulation,” Circuit Cellar 164, 2004.

One Professor and Two Orderly Labs

Professor Wolfgang Matthes has taught microcontroller design, computer architecture, and electronics (both digital and analog) at the University of Applied Sciences in Dortmund, Germany, since 1992. He has developed peripheral subsystems for mainframe computers and conducted research related to special-purpose and universal computer architectures for the past 25 years.

When asked to share a description and images of his workspace with Circuit Cellar, he stressed that there are two labs to consider: the one at the University of Applied Sciences and Arts and the other in his home basement.

Here is what he had to say about the two labs and their equipment:

In both labs, rather conventional equipment is used. My regular duties are essentially concerned with basic student education and hands-on training. Obviously, one does not need top-notch equipment for such comparatively humble purposes.

Student workplaces in the Dortmund lab are equipped for basic training in analog electronics.

In adjacent rooms at the Dortmund lab, students pursue their own projects, working with soldering irons, screwdrivers, drills, and other tools. Hence, these rooms are occasionally called “the blacksmith’s shop.” Two such workstations are shown.

Oscilloscopes, function generators, multimeters, and power supplies are of an intermediate price range. I am fond of analog scopes, because they don’t lie. I wonder why neither well-established suppliers nor entrepreneurs see a business opportunity in offering quality analog scopes, something that could be likened to Rolex watches or Leica analog cameras.

The orderly lab in Matthes’s home is shown here.

Matthes prefers to build mechanically sturdy projects. So his lab is appropriately equipped.

Matthes, whose research interests include advanced computer architecture and embedded systems design, pursues a variety of projects in his workspace. He describes some of what goes on in his lab:

The projects comprise microcontroller hardware and software, analog and digital circuitry, and personal computers.

Personal computer projects are concerned with embedded systems, hardware add-ons, interfaces, and equipment for troubleshooting. For writing software, I prefer PowerBASIC. Those compilers generate executables that run efficiently and have a small footprint. Besides, they allow direct access to the Windows API and switching to Assembler coding, if necessary.

Microcontroller software is done in Assembler and, if required, in C or BASIC (BASCOM). As the programming language of the toughest of the tough, Assembler comes second after wire [i.e., the soldering iron].

My research interests are directed at computer architecture, instruction sets, hardware, and interfaces between hardware and software. To pursue appropriate projects, programming at the machine level is mandatory. In student education, introductory courses begin with the basics of computer architecture and machine-level programming. However, Assembler programming is only taught at a level that is deemed necessary to understand the inner workings of the machine and to write small time-critical routines. The more sophisticated application programming is usually done in C.

Real work at the digital analog computer in Matthes’s home lab: bring-up and debugging of the master controller board. Each of the six microcontrollers is connected to a general-purpose human-interface module.

Additional photos of Matthes’s workspace and his embedded electronics and microcontroller projects are available at his new website.




Gigabit Ethernet Designs

Wurth Electronics Midcom and Lantiq recently announced The Evaluation Kit, a jointly developed demonstration kit. The kit enables users to easily add Ethernet hardware to an application or device and provides all the information necessary to understand the demands of an Ethernet hardware design.

The Evaluation Kit includes an easy-to-use 1-Gbps demonstration board. The credit card-sized (54 mm × 92 mm) board is powered by USB, plugs into PCs, and provides up to 1-Gbps bidirectional data rates.

The Evaluation Kit costs approximately $175.

Wurth Electronics Midcom, Inc.


Client Profile: Integrated Knowledge Systems

Integrated Knowledge Systems’ NavRanger board

Phoenix, AZ

CONTACT: James Donald,

EMBEDDED PRODUCTS: Integrated Knowledge Systems provides hardware and software solutions for autonomous systems.

FEATURED PRODUCT: The NavRanger-OEM is a single-board, high-speed laser ranging system with a nine-axis inertial measurement unit for robotic and scanning applications. The system provides 20,000 distance samples per second with 1-cm resolution and a range of more than 30 m in sunlight when using optics. The NavRanger also includes sufficient serial, analog, and digital I/O for stand-alone robotic or scanning applications.

The NavRanger uses USB, CAN, RS-232, analog, or wireless interfaces for operation with a host computer. Integrated Knowledge Systems can work with you to provide software, optics, and scanning mechanisms to fit your application. Example software and reference designs are available on the company’s website.

EXCLUSIVE OFFER: Enter the code CIRCUIT2014 in the “Special Instructions to Seller” box at checkout and Integrated Knowledge Systems will take $20 off your first order.


Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

Client Profile: ImageCraft Creations, Inc.

CorStarter prototyping board

2625 Middlefield Road, #685,
Palo Alto, CA 94306

CONTACT: Richard Man,

EMBEDDED PRODUCTS: ImageCraft Version 8 C compilers with an IDE for Atmel AVR and Cortex-M devices are full-featured toolsets backed by strong support.

CorStarter-STM32 is a complete C hardware and software kit for STM32 Cortex-M3 devices. The $99 kit includes a JTAG pod for programming and debugging.

ImageCraft products offer excellent features and support within budget requirements. ImageCraft compiler toolsets are used by professionals who demand excellent code quality, full features, and diligent, timely support.

The small, fast compilers provide helpful informational messages and include an IDE with an application builder (Atmel AVR) and debugger (Cortex-M), whole-program code compression technology, and MISRA safety checks. ImageCraft offers two editions that cost $249 and $499.

The demo is fully functional for 45 days, so it is easy to test it yourself.

EXCLUSIVE OFFER: For a limited time, ImageCraft is offering Circuit Cellar readers $40 off the Standard and PRO versions of its Atmel AVR and Cortex-M compiler toolsets. To take advantage of this offer, please visit


Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

Innovation Space: A Workspace for Prototyping, Programming, and Writing

RobotBASIC co-developer John Blankenship accomplishes a lot in his “cluttered” Vero Beach, FL-based workspace.


John Blankenship in his workspace, where he develops, designs, and writes.

He develops software, designs hardware, packages robot parts for sale, and writes books and magazine articles. Thus, his workspace isn’t always neat and tidy, he explained.

“The walls are covered with shelves filled with numerous books, a wide variety of parts, miscellaneous tools, several pieces of test equipment, and many robot prototypes,” he noted.

“Most people would probably find my space cluttered and confining, but for me it is comforting knowing everything I might need is close at hand.”

Blankenship co-developed RobotBASIC with Samuel Mishal, a friend and talented programmer. The introductory programming language is geared toward high school-level students.

This PCB makes it easy to build a RobotBASIC-compatible robot.

You can read Blankenship’s article, “Using a Simulated Robot to Decrease Development Time,” in the March 2014 edition of Circuit Cellar. He details how implementing a robotic simulation can reduce development time. Here’s an excerpt:

If you have ever built a robot, you know the physical construction and electronic aspects are only the first step. The real work begins when you start programming your creation.

A typical starting point is to develop a library of subroutines that implement basic behaviors. Later, the routines can be combined to create more complex behaviors and eventually full-blown applications. For example, navigational skills (e.g., hugging a wall, following a line, or finding a beacon) can serve as basic building blocks for tasks such as mowing a yard, finding a charging station, or delivering drinks to guests at a party. Developing basic behaviors can be difficult though, especially if they must work for a variety of situations. For instance, a behavior that enables a robot to traverse a hallway to find a specified doorway and pass through it should work properly with different-width hallways and doorways. Furthermore, the robot should at least attempt to autonomously contend with problems arising from the imprecise movements associated with most hobby robots.

Such problems can generally be solved with a closed-loop control system that continually modifies the robot’s movements based on sensor readings. Unfortunately, sensor readings in a real-world environment are often just as flawed as the robot’s movements. For example, stray reflections from ultrasonic or infrared sensors can produce erroneous sensor readings. Even when the sensors are reading correctly, faulty data can be obtained due to unexpected environmental conditions. These types of problems are generally random and are therefore difficult to detect and identify because the offending situations cannot easily be duplicated. A robot simulator can be a valuable tool in such situations.
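As a toy illustration of such a closed loop (a sketch of the general technique, not code from Blankenship’s article; read_range_cm() and set_motors() are hypothetical robot hooks), a wall-following behavior might reject one-off faulty readings with a median-of-three filter before applying a proportional correction:

    /* Illustrative closed-loop wall following with outlier rejection */
    extern int  read_range_cm(void);              /* noisy side sensor */
    extern void set_motors(int left, int right);  /* signed speeds     */

    static int median3(int a, int b, int c)
    {
        if (a > b) { int t = a; a = b; b = t; }
        if (b > c) { int t = b; b = c; c = t; }
        if (a > b) { int t = a; a = b; b = t; }
        return b;   /* the middle value survives a single bad sample */
    }

    void follow_wall_step(int target_cm, int base_speed)
    {
        /* Median of three consecutive samples rejects glitches such
           as stray sensor reflections. */
        int d = median3(read_range_cm(), read_range_cm(), read_range_cm());

        /* Proportional control: steer back toward the target distance. */
        int error = d - target_cm;
        int turn  = error / 2;    /* crude gain of 0.5 */

        set_motors(base_speed + turn, base_speed - turn);
    }

A simulator makes it easy to inject exactly the kind of random faulty readings described above and verify that the filter really rejects them.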

Do you want to share images of your workspace, hackspace, or “circuit cellar”? Send your images and space info to

Q&A: Scott Garman, Technical Evangelist

Scott Garman is more than just a Linux software engineer. He is also heavily involved with the Yocto Project, an open-source collaboration that provides tools for the embedded Linux industry. In 2013, Scott helped Intel launch the MinnowBoard, the company’s first open-hardware SBC. —Nan Price, Associate Editor

Scott Garman

NAN: Describe your current position at Intel. What types of projects have you developed?

SCOTT: I’ve worked at Intel’s Open Source Technology Center for just about four years. I began as an embedded Linux software engineer working on the Yocto Project and within the last year, I moved into a technical evangelism role representing Intel’s involvement with the MinnowBoard.

Before working at Intel, my background was in developing audio products based on embedded Linux for both consumer and industrial markets. I also started my career as a Linux system administrator in academic computing for a particle physics group.

Scott was involved with an Intel MinnowBoard robotics and computer vision demo, which took place at LinuxCon Japan in May 2013.

I’m definitely a generalist when it comes to working with Linux. I tend to bounce around between things that don’t always get the attention they need, whether it is security, developer training, or community outreach.

More specifically, I’ve developed and maintained parallel computing clusters, created sound-level management systems used at concert stadiums, worked on multi-room home audio media servers and touchscreen control systems, dug into the dark areas of the Autotools and embedded Linux build systems, and developed fun conference demos involving robotics and computer vision. I feel very fortunate to be involved with embedded Linux at this point in history—these are very exciting times!

Scott is shown working on an Intel MinnowBoard demo, which was built around an OWI Robotic Arm.

NAN: Can you tell us a little more about your involvement with the Yocto Project (

SCOTT: The Yocto Project is an effort to reduce the amount of fragmentation in the embedded Linux industry. It is centered on the OpenEmbedded build system, which offers a tremendous amount of flexibility in how you can create embedded Linux distros. It gives you the ability to customize nearly every policy of your embedded Linux system, such as which compiler optimizations you want or which binary package format you need to use. Its killer feature is a layer-based architecture that makes it easy to reuse your code to develop embedded applications that can run on multiple hardware platforms by just swapping out the board support package (BSP) layer and issuing a rebuild command.

New releases of the build system come out twice a year, in April and October.

Here, the OWI Robotic Arm is being assembled.

I’ve maintained various user space recipes (i.e., software components) within OpenEmbedded (e.g., sudo, openssh, etc.). I’ve also made various improvements to our emulation environment, which enables you to run QEMU and test your Linux images without having to install them on hardware.

I created the first version of a security tracking system to monitor Common Vulnerabilities and Exposures (CVE) reports that are relevant to recipes we maintain. I also developed training materials for new developers getting started with the Yocto Project, including a very popular introductory screencast, “Getting Started with the Yocto Project—New Developer Screencast Tutorial.”

NAN: Intel recently introduced the MinnowBoard SBC. Describe the board’s components and uses.

SCOTT: The MinnowBoard is based on Intel’s Queens Bay platform, which pairs a Tunnel Creek Atom CPU (the E640 running at 1 GHz) with the Topcliff Platform controller hub. The board has 1 GB of RAM and includes PCI Express, which powers our SATA disk support and gigabit Ethernet. It’s an SBC that’s well suited for embedded applications that can use that extra CPU and especially I/O performance.

Scott doesn’t have a dedicated workbench or garage. He says he tends to just clear off his desk, lay down some cardboard, and work on things such as the Trippy RGB Waves Kit, which is shown.

The MinnowBoard also has the embedded bus standards you’d expect, including GPIO, I2C, SPI, and even CAN (used in automotive applications) support. We have an expansion connector on the board where we route these buses, as well as two lanes of PCI Express for custom high-speed I/O expansion.

There are countless things you can do with MinnowBoard, but I’ve found it is especially well suited for projects where you want to combine embedded hardware with computing applications that benefit from higher performance (e.g., robots that use computer vision, as a central hub for home automation projects, networked video streaming appliances, etc.).

And of course it’s open hardware, which means the schematics, Gerber files, and other design files are available under a Creative Commons license. This makes it attractive for companies that want to customize the board for a commercial product; educational environments, where students can learn how boards like this are designed; or for those who want an open environment to interface their hardware projects.

I created a MinnowBoard embedded Linux board demo involving an OWI Robotic Arm. You can watch a YouTube video to see how it works.

NAN: What compelled Intel to make the MinnowBoard open hardware?

SCOTT: The main motivation for the MinnowBoard was to create an affordable Atom-based development platform for the Yocto Project. We also felt it was a great opportunity to try to release the board’s design as open hardware. It was exciting to be part of this, because the MinnowBoard is the first Atom-based embedded board to be released as open hardware and reach the market in volume.

Open hardware enables our customers to take the design and build on it in ways we couldn’t anticipate. It’s a concept that is gaining traction within Intel, as can be seen with the announcement of Intel’s open-hardware Galileo project.

NAN: What types of personal projects are you working on?

SCOTT: I’ve recently gone on an electronics kit-building binge. Just getting some practice again with my soldering iron with a well-paced project is a meditative and restorative activity for me.

Scott’s Blinky POV Kit is shown. “I don’t know what I’d do without my PanaVise Jr. [vise] and some alligator clips,” he said.

I worked on one project, the Trippy RGB Waves Kit, which includes an RGB LED and is controlled by a microcontroller. It also has an IR sensor that is intended to detect when you wave your hand over it. This can be used to trigger some behavior of the RGB LED (e.g., cycling the colors). Another project, the Blinky POV Kit, is a row of LEDs that can be programmed to create simple text or logos when you wave the device around, using image persistence.

A completed JeeNode v6 Kit that Scott built one weekend.

My current project is to add some wireless sensors around my home, including temperature sensors and a homebrew security system to monitor when doors get opened using 915-MHz JeeNodes. The JeeNode is a microcontroller paired with a low-power RF transceiver, which is useful for home-automation projects and sensor networks. Of course the central server for collating and reporting sensor data will be a MinnowBoard.

NAN: Tell us about your involvement in the Portland, OR, open-source developer community.

SCOTT: Portland has an amazing community of open-source developers. There is an especially strong community of web application developers, but more people are hacking on hardware nowadays, too. It’s a very social community and we have multiple nights per week where you can show up at a bar and hack on things with people.

This photo was taken in the Open Source Bridge hacker lounge, where people socialize and collaborate on projects. Here someone brought a brainwave-control game. The players are wearing electroencephalography (EEG) readers, which are strapped to their heads. The goal of the game is to use biofeedback to move the floating ball to your opponent’s side of the board.

I’d say it’s a novelty if I wasn’t so used to it already—walking into a bar or coffee shop and joining a cluster of friendly people, all with their laptops open. We have coworking spaces, such as Collective Agency, and hackerspaces, such as BrainSilo and Flux (a hackerspace focused on creating a welcoming space for women).

Take a look at Calagator to catch a glimpse of all the open-source and entrepreneurial activity going on in Portland. There are often multiple events going on every night of the week. Calagator itself is a Ruby on Rails application that was frequently developed at the bar gatherings I referred to earlier. We also have technical conferences ranging from the professional OSCON to the more grassroots and intimate Open Source Bridge.

I would unequivocally state that moving to Portland was one of the best things I did for developing a career working with open-source technologies, and in my case, on open-source projects.

TRACE32 Now Supports Xilinx MicroBlaze 8.50.C

The TRACE32 modular hardware and software supports up to 350 different CPUs. The microprocessor development tools now support the latest version of Xilinx’s MicroBlaze, 8.50.c, a soft processor core designed for Xilinx FPGAs. The MicroBlaze core is included with Xilinx’s Vivado Design Edition and ISE Embedded Edition.

The TRACE32 tools have supported MicroBlaze for many years by providing efficient and user-friendly debugging at the C or C++ level using the on-chip JTAG interface. This interface also provides code download, flash programming, and quick access to all internal chip peripherals and registers.
Contact Lauterbach for pricing.

Lauterbach GmbH

Xilinx, Inc.

Client Profile: Lauterbach, Inc.

1111 Main Street #115
Vancouver, WA 98660


FEATURED PRODUCT: The TRACE32-ICD in-circuit debugger supports a range of on-chip debug interfaces. The debugger’s hardware is universal and enables you to connect to different target processors by simply changing the debug cable. The PowerDebug USB 3.0 can be upgraded with the PowerProbe or the PowerIntegrator to a logic analyzer.

PRODUCT FEATURES: The TRACE32-ICD JTAG debugger has a 5,000-KBps download rate. It features easy high-level and Assembler debugging and an interface to all industry-standard compilers. The debugger enables fast download of code to the target, OS-aware debugging, and flash programming. It displays internal and external peripherals at a logical level and includes support for hardware breakpoints and triggers (if supported by the chip), multicore debugging (SMP and AMP), C and C++, and all common NOR and NAND flash devices.

For more information, visit

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Leveraging the common I2C protocol, chains of sensors could be added with incredible ease, as the sketch below illustrates. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
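On a Linux-based SBC such as the Raspberry Pi, reading an I2C sensor takes only a few lines of C through the kernel’s standard i2c-dev interface. Here is a minimal sketch, assuming a hypothetical temperature sensor at address 0x48 on bus 1; the bus path, address, and register number are placeholders for whatever part is actually wired up:

    /* Minimal I2C register read on a Linux SBC (illustrative) */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);      /* bus 1 on many SBCs */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {     /* hypothetical sensor */
            perror("ioctl"); return 1;
        }

        unsigned char reg = 0x00;                 /* temperature register */
        unsigned char val;
        if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
            perror("i2c transfer"); return 1;
        }

        printf("raw temperature byte: 0x%02x\n", val);
        close(fd);
        return 0;
    }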

There are many ways to introduce students to programming robot-computers, but I believe a web-based interface is ideal. By setting up each computer as a web server, students can easily access their robot’s interface directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remotely controlling and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
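The essay suggests Python or PHP for this layer; purely as an illustration of the underlying idea in C (every function name here is hypothetical), the core of it is a thin dispatcher that maps command strings arriving from the web interface onto the wrapped motion functions:

    /* Illustrative dispatcher: web command string -> motion function */
    #include <string.h>

    /* Hypothetical wrappers that hide the PID and odometry details */
    extern void rover_forward(void);
    extern void rover_left(void);
    extern void rover_right(void);

    struct command { const char *name; void (*run)(void); };

    static const struct command commands[] = {
        { "forward", rover_forward },
        { "left",    rover_left    },
        { "right",   rover_right   },
    };

    /* Called by the web layer (e.g., a CGI handler) with the AJAX command */
    int dispatch(const char *cmd)
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(cmd, commands[i].name) == 0) {
                commands[i].run();
                return 0;
            }
        }
        return -1;   /* unknown command: report an error to the browser */
    }

Keeping the name-to-function mapping in one table is also what lets students later pull back a layer: adding a new behavior is just one more entry.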

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize them. This small, superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but they can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

CC281: Overcome Fear of Ethernet on an FPGA

As its name suggests, the appeal of an FPGA is that it is fully programmable. Instead of writing software, you design hardware blocks to quickly do what’s required of a digital design. This also enables you to reprogram an FPGA product in the field to fix problems “on the fly.”

But what if “you” are an individual electronics DIYer rather than an industrial designer? DIYers can find FPGAs daunting.

The December issue of Circuit Cellar should offer reassurance, at least on the topic of “UDP Streaming on an FPGA.” That’s the focus of Steffen Mauch’s article for our Programmable Logic issue (p. 20).

Ethernet on an FPGA has several applications. For example, it can be used to stream measured signals to a computer for analysis or to connect a camera (via Camera Link) to an FPGA to transmit images to a computer.

Nonetheless, Mauch says, “most novices who start to develop FPGA solutions are afraid to use Ethernet or DDR-SDRAM on their boards because they fear the resulting complexity.” Also, DIYers don’t have the necessary IP core licenses, which are costly and often carry restrictions.

Mauch’s UDP monitor project avoids such costs and restrictions by using a free implementation of an Ethernet-streaming device based on a Xilinx Spartan-6 LX FPGA. His article explains how to use OpenCores’s open-source tri-mode MAC implementation and stream UDP packets with VHDL over Ethernet.
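The FPGA side of Mauch’s design is written in VHDL, but the receiving PC needs nothing more than a standard UDP socket. The sketch below (an illustration, not code from the article; port 5000 is an arbitrary placeholder) is a minimal POSIX C listener that counts the datagrams streamed by the board:

    /* Minimal UDP listener for a streaming source (illustrative, POSIX) */
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);        /* placeholder port */

        if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind"); return 1;
        }

        char buf[2048];
        for (unsigned long n = 0; ; n++) {
            ssize_t len = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
            if (len < 0) { perror("recvfrom"); return 1; }
            printf("packet %lu: %zd bytes\n", n, len);
        }
    }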

Mauch is not the only writer offering insights into FPGAs. For more advanced FPGA enthusiasts, columnist Colin O’Flynn discusses hardware co-simulation (HCS), which enables the software simulation of a design to be offloaded to an FPGA. This approach significantly shortens the time needed for adequate simulation of a new product and ensures that a design is actually working in hardware (p. 52).

This Circuit Cellar issue offers a number of interesting topics in addition to programmable logic. For example, you’ll find a comprehensive overview of the latest in memory technologies, advice on choosing a flash file system for your embedded Linux system, a comparison of amplifier classes, and much more.

Mary Wilson

The Adafruit Learning System Releases Bluetooth HID Keyboard Controller

Adafruit’s Bluefruit EZ-Key enables you to create a wireless Bluetooth keyboard controller in an hour. The module acts as a Bluetooth keyboard and is compatible with any Bluetooth-capable device (e.g., Mac, Windows, Linux, iOS, and Android).

You simply power the Bluefruit EZ-Key with 3 to 16 VDC and pair it to a computer, tablet, or smartphone. You can then connect buttons from the 12 input pins. When a button is pressed, it sends a keypress to the computer. The module has been preprogrammed to send the four arrow keys, return, space, “w,” “a,” “s,” “d,” “1,” and “2” by default. Advanced users can use a Future Technology Devices International (FTDI) chip or other serial console cable to reprogram the module’s keys for a human interface device (HID) key report.

Each Bluefruit EZ-Key has a unique identifier, and more than one module can be paired to a single device. The FCC- and CE-certified, RoHS-compliant modules integrate easily into your project.

Pricing for the Bluefruit EZ-Key begins at $19.95. For more information, visit The Adafruit Learning System. Bluefruit EZ-Key tutorials are also available.

Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system, adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?


James Bowman

 JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?


The Gameduino 2 uses a Future Technology Devices International chip to drive its graphics.

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needed an external monitor, and you had to supply your own inputs (e.g., buttons, joysticks, etc.). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?


An Arduino, Ethernet adapter, and a Gameduino

 JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

 NAN: Is it easy or possible to customize the Gameduino 2?

 JAMES: I would have to say no. The PCB itself is entirely surface mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately, the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

 NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

 JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

 JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.


Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear Smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus:

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.

If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK or PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes 3-D position, velocity, and hand acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
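As a rough illustration of such a per-frame feature vector (the layout and dimensions below are invented placeholders, not the actual values used at MIT), velocity and acceleration can be estimated from consecutive positions with simple finite differences:

    /* Illustrative per-frame gesture features (dimensions are made up) */
    #define HOG_BINS 64            /* placeholder shape-descriptor length */

    struct hand_features {
        float pos[3];              /* 3-D hand position               */
        float vel[3];              /* finite-difference velocity      */
        float acc[3];              /* finite-difference acceleration  */
        float hog[HOG_BINS];       /* histogram of oriented gradients */
    };

    /* Fill in dynamics from three consecutive positions sampled dt
       seconds apart (p0 = frame t-2, p1 = frame t-1, p2 = frame t). */
    void fill_dynamics(struct hand_features *f, const float p0[3],
                       const float p1[3], const float p2[3], float dt)
    {
        for (int i = 0; i < 3; i++) {
            f->pos[i] = p2[i];
            f->vel[i] = (p2[i] - p1[i]) / dt;
            f->acc[i] = (p2[i] - 2.0f * p1[i] + p0[i]) / (dt * dt);
        }
    }

Principal component analysis then projects the concatenated vector down to its final, smaller dimension before classification.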

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
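To make the recognition step concrete, here is a minimal sketch of the standard HMM forward algorithm (an illustration of the classic technique, not the MIT group’s code): each gesture class has its own model, the likelihood of the observed sequence is computed under each model, and the best-scoring one wins. For brevity it takes precomputed per-state observation likelihoods rather than raw feature vectors.

    /* Illustrative HMM scoring with the scaled forward algorithm */
    #include <math.h>
    #include <float.h>

    #define N_STATES 4   /* placeholder model size */

    struct hmm {
        double prior[N_STATES];            /* initial state probabilities */
        double trans[N_STATES][N_STATES];  /* state transition matrix     */
    };

    /* obs_lik[t][s] = P(observation at time t | state s), supplied by the
       feature-extraction front end (e.g., a Gaussian over the features) */
    double forward_loglik(const struct hmm *m, int T,
                          const double obs_lik[][N_STATES])
    {
        double alpha[N_STATES], next[N_STATES], loglik = 0.0;

        for (int s = 0; s < N_STATES; s++)
            alpha[s] = m->prior[s] * obs_lik[0][s];

        for (int t = 1; t < T; t++) {
            /* Rescale to avoid underflow; accumulate log of the scale */
            double scale = 0.0;
            for (int s = 0; s < N_STATES; s++) scale += alpha[s];
            loglik += log(scale);
            for (int s = 0; s < N_STATES; s++) alpha[s] /= scale;

            for (int s = 0; s < N_STATES; s++) {
                double sum = 0.0;
                for (int r = 0; r < N_STATES; r++)
                    sum += alpha[r] * m->trans[r][s];
                next[s] = sum * obs_lik[t][s];
            }
            for (int s = 0; s < N_STATES; s++) alpha[s] = next[s];
        }

        double tail = 0.0;
        for (int s = 0; s < N_STATES; s++) tail += alpha[s];
        return loglik + log(tail);
    }

    /* Recognition: evaluate every trained gesture model, take the argmax */
    int classify(const struct hmm *models, int n_models, int T,
                 const double obs_lik[][N_STATES])
    {
        int best = -1;
        double best_ll = -DBL_MAX;
        for (int k = 0; k < n_models; k++) {
            double ll = forward_loglik(&models[k], T, obs_lik);
            if (ll > best_ll) { best_ll = ll; best = k; }
        }
        return best;
    }

Estimating prior, trans, and the observation model from labeled gesture recordings is the training step the excerpt refers to; it is typically done with the Baum-Welch algorithm.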

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.