Software-Only Hardware Simulation

Simulating embedded hardware in a Windows environment can significantly reduce development time. In this article, Michael Melkonian provides techniques for the software-only simulation of embedded hardware. He presents a simple example of an RTOS-less embedded system that uses memory-mapped I/O to access a UART-like peripheral to serially poll a slave device. The simulator is capable of detecting bugs and troublesome design flaws.

Melkonian writes:

In this article, I will describe techniques for the software-only simulation of embedded hardware in the Windows/PC environment. Software-only simulation implies an arrangement with which the embedded application, or parts of it, can be compiled and run on the Windows platform (host) talking to the software simulator as opposed to the real hardware. This arrangement doesn’t require any hardware or tools other than a native Windows development toolset such as Microsoft Developer Studio/Visual C++. Importantly, the same source code is compiled and linked for both the host and the target. It’s possible and often necessary to simulate more complex aspects of the embedded target such as interrupts and the RTOS layer. However, I will illustrate the basics of simulating hardware in the Windows environment with an example of an extremely simple hypothetical target system (see Figure 1).

Figure 1: There is a parallel between the embedded target and host environment. Equivalent entities are shown on the same level.

Assuming that the source code of the embedded application is basically the same whether it runs in Windows or the embedded target, the simulation offers several advantages. You have the ability to develop and debug device drivers and the application before the hardware is ready. An extremely powerful test harness can be created on the host platform, where all code changes and additions can be verified prior to running on the actual target. The harness can be used as a part of software validation.

Furthermore, you have the ability to test conditions that may not be easy to test using the real hardware. In the vast majority of cases, debugging tools available on the host are far superior to those offered by cross development tool vendors. You have access to runtime checkers to detect memory leaks, especially for embedded software developed in C++. Lastly, note that where the final system comprises a number of CPUs/boards, simulation has the additional advantage of simulating each target CPU via a single process on a multitasking host.

Before you decide to invest in simulation infrastructure, there are a few things to consider. For instance, when the target hardware is complex, the software simulator becomes a fairly major development task. Also, consider the adequacy of the target development tools. This especially applies to debuggers. The absence, or insufficient capability, of the debugger on the target presents a strong case for simulation. When delivery times are more critical than the budget limitations and extra engineering resources are available, the additional development effort may be justified. The simulator may help to get to the final product faster, but at a higher cost. You should also think about whether or not it’s possible to cleanly separate the application from the hardware access layer.

Remember that when exact timings are a main design concern, the real-time aspects of the target are hard to simulate, so the simulator will not help. Similarly, when the embedded application’s complexity is minor compared to that of the hardware drivers, the simulator may not be justified. However, when a complex application sits on top of fairly simple hardware, the simulator can be extremely useful.

You should also keep in mind that when it’s likely that the software application will be completed before the hardware delivery date, there is a strong case for simulation …

Now let’s focus on what makes embedded software adaptable for simulation. It’s hardly surprising that the following guidelines closely resemble those for writing portable code. First, you need a centralized mechanism for accessing the hardware (e.g., read_hw and write_hw macros). Second, the application code and device driver code must be separated. Third, you must use a thin operating system interface layer. Finally, avoid the nonstandard add-ons that some cross-compilers may provide.

Download the entire article: M. Melkonian, “Software-Only Hardware Simulation,” Circuit Cellar 164, 2004.

AcqirisMAQS Software Simplifies Multichannel Data Acquisition Systems

Keysight Technologies recently announced a new version of its U1092A AcqirisMAQS Multichannel Acquisition Software. AcqirisMAQS software enables configuration management as well as visualization of data for hundreds of channels from a single console. The client-server architecture supports remote operation, making it possible for the data acquisition system to be distributed over a LAN.

A fully configurable GUI enables you to easily select instruments and measurement channels for the configuration of the acquisition parameters. Acquired data is presented in multiple display windows. Each window is fully configurable, including the selection of the number of plot areas and axes. Additional display functions provide multi-record overlay, persistence, frequency spectrum, and scalar computations on each trace. A special option for triggered single shot experiments adds an advanced configuration manager, digitizer memory protection locking and fail-safe operation.

Keysight’s multichannel digitizers contain all the timing and synchronization technologies needed to create synchronous sampling across tens or hundreds of channels at a time. For example, a reliable triggering and synchronization is essential to correctly recreate the event or process from captured data. More information about simplifying high-speed multichannel acquisition systems is available in the AcqirisMAQS Multichannel Acquisition Software Document Library.

The new version of the AcqirisMAQS multichannel acquisition software—including a 30-day free trial (evaluation mode)—is now available for the Keysight M9709A AXIe 8-bit high-speed digitizer, as well as all other Keysight high-speed digitizers. The free trial provides the Master and Monitoring functionality.

Source: Keysight

Streamlined Touchscreen Design with Application Builder and COMSOL Server

Cypress Semiconductor R&D engineers are creating simulation apps that streamline their touchscreen design processes. To do so, they’re sharing their simulation expertise with colleagues using the Application Builder and COMSOL Server, released with COMSOL Multiphysics simulation software version 5.1.

With the Application Builder, engineers can create ready-to-use simulation applications that can be implemented across departments, including by product development, sales, and customer service. The Application Builder enables simulation experts to build intuitive simulation apps based on their models directly within the COMSOL environment. COMSOL Server lets them share these apps with colleagues and customers around the globe.

To incorporate advances in touchscreen technology into embedded system products, Cypress simulation engineers use COMSOL for research and design initiatives. Their touchscreens are used in phones and MP3 devices, industrial applications, and more.

Source: COMSOL


How to Improve Software Development Predictability

The analytical methods of failure modes effects and criticality analysis (FMECA) and failure modes effects analysis (FMEA) have been around since the 1940s. In recent years, much effort has been spent on bringing hardware related analyses such as FMECA into the realm of software engineering. In “Software FMEA/FMECA,” George Novacek takes a close look at software FMECA (SWFMECA) and its potential for making software development more predictable.

The roots of failure modes effects and criticality analysis (FMECA) and failure modes effects analysis (FMEA) date back to World War II. FMEA is a subset of FMECA in which the criticality assessment has been omitted. Therefore, for simplicity, I’ll be using the terms FMECA and SWFMECA only in this article. FMECA was developed for identification of potential hardware failures and their mitigation to ensure mission success. During the 1950s, FMECA became indispensable for analyses of equipment in critical applications, such as those in the military, aerospace, nuclear, medical, automotive, and other industries.

FMECA is a structured, bottom-up approach considering a failure of each and every component, its impact on the system and how to prevent or mitigate such a failure. FMECA is often combined with fault tree analysis (FTA) or event tree analyses (ETA). The FTA differs from the ETA only in that the former is focused on failures as the top event, the latter on some specific events. Those analyses start with an event and then drill down through the system to their root cause.

In recent years, much effort has been spent on bringing hardware related analyses, such as reliability prediction, FTA, and FMECA into the realm of software engineering. Software failure modes and effects analysis (SWFMEA) and software failure modes, effects, and criticality analysis (SWFMECA) are intended to be software analyses analogous to the hardware ones. In this article I’ll cover SWFMECA as it specifically relates to embedded controllers.

Unlike the classic hardware FMECA based on statistically determined failure rates of hardware components, software analyses assume that the software design is never perfect because it contains faults introduced unintentionally by software developers. It is further assumed that in any complicated software there will always be latent faults, regardless of development techniques, languages, and quality procedures used. This is likely true, but can it be quantified?


SWFMECA should consider the likelihood of latent faults in a product and/or system, which may become patent during operational use and cause the product or the system to fail. The goal is to assess severity of the potential faults, their likelihood of occurrence, and the likelihood of their escaping to the customer. SWFMECA should assess the probability of mistakes being made during the development process, including integration, verification and validation (V&V), and the severity of these faults on the resulting failures. SWFMECA is also intended to determine the faults’ criticality by combining fault likelihood with the consequent failure severity. This should help to determine the risk arising from software in a system. SWFMECA should examine the development process and the product behavior in two separate analyses.

First, Development SWFMECA should address the development, testing and V&V process. This requires understanding of the software development process, the V&V techniques and quality control during that process. It should establish what types of faults may occur when using a particular design technique, programming language and the fault coverage of the verification and validation techniques. Second, Product SWFMECA should analyze the design and its implementation and establish the probability of the failure modes. It must also be based on thorough understanding of the processes as well as the product and its use.

In my opinion, SWFMECA is a bit of a misnomer, bearing little resemblance to the hardware FMECA. Speculating about what faults might be hidden in every line of code or in every activity during software development is hardly realistic. However, there is a resemblance to functional-level FMECA, where system-level effects of failures of functions can be established and addressed accordingly. Establishing the probability of those failures is another matter.

The data needed for such considerations are mostly subjective, their sources esoteric and their reliability debatable. The data are developed statistically, based on history, experience and long term fault data collection. Some data may be available from polling numerous industries, but how applicable they are to a specific developer is difficult to determine. Plausible data may perhaps be developed by long established software developers producing a specific type of software (e.g., Windows applications), but development of embedded controllers with their high mix of hardware/software architectures and relatively low-volume production doesn’t seem to fit the mold.

Engineers understand that hardware has limited life and customers have no problem accepting mean time between failures (MTBF) as a reality. But software does not fail due to age or fatigue. It’s all in the workmanship. I have never seen an embedded software specification requiring software to have some minimum probability of faults. Zero seems always implied.


In the course of SWFMECA preparation, scores for potential faults should be determined: severity, likelihood of occurrence, and potential for escaping to the finished product. The scores, each between 1 and 10, are multiplied to obtain the risk priority number (RPN). An RPN larger than 200 should warrant prevention and mitigation planning. Yet the scores are very much subjective—that is, they depend on the software complexity, the people, and other factors that are impossible to predict accurately. For embedded controllers, determining the RPN appears to be analysis for the sake of analysis.

Statistical analyses are used every day from science to business management. Their usefulness depends on the number of samples and even with an abundance of samples there are no guarantees. SWFMECA can be instrumental for fine-tuning the software development process. In embedded controllers, however, software related failures are addressed by FMECA. SWFMECA alone cannot justify the release of a product.


In embedded controllers, causes of software failures are often hardware related, and exact outcomes are difficult to predict. Software faults need to be addressed by testing and code analyses and, most important, mitigated by the architecture. Redundancy, hardware monitors, and similar measures are time-proven methods.

Software begins as an idea expressed in requirements. Design of the system architecture, including hardware/software partitioning is next, followed by software requirements, usually presented as flow charts, state diagrams, pseudo code, and so forth. High and low levels of design follow, until a code is compiled. Integration and testing come next. This is shown in the ubiquitous chart in Figure 1.

Figure 1: Software development "V" model


During an embedded controller design, I would not consider performing the RPN calculation, just as I would not try to calculate software reliability. I consider those purely statistical calculations to be of little practical use. However, SWFMECA activity with software ETA and FTA based on functions should be performed as a part of the system FMECA. The software review can be to a large degree automated by tools, such as Software Call Tree and many others. Automation notwithstanding, one should always check the results for plausibility.


Software Call Tree tells us how different modules interface and how a fault or an event would propagate through the system. Similarly, Object Relational Diagram shows how objects’ internal states affect each other. And then there are Control Flow Diagram, Entity Relationship Diagram, Data Flow Diagram, McCabe Logical Path, State Transition Diagram, and others. Those tools are not inexpensive, but they do generate data which make it possible to produce high-quality software. However, it is important to plan all the tests and analyses ahead of the time. It is easy to get mired in so many evaluations that the project’s cost and schedule suffer with little benefit to software quality.

The assumed probability of a software fault becomes a moot point. We should never plunge ahead releasing a code just because we’re satisfied that our statistical development model renders what we think is an acceptable probability of a failure. Instead, we must assume that every function may fail for whatever reason and take steps to ensure those failures are mitigated by the system architecture.

System architecture and software analyses can only be started upon determination that the requirements for the system are sufficiently robust. It is not unusual for a customer to insist on beginning development before signing the specification, which is often full of TBDs (i.e., “to be defined”). This may be leaving so many open issues that the design cannot and should not be started in earnest. Besides, development at such a stage is a violation of certification rules and will likely result in exceeding the budget and the schedule. Unfortunately, customers can’t or don’t always want to understand this and their pressure often prevails.

The ongoing desire to introduce software into the hardware paradigm is understandable. It could bring software development into a fully predictable scientific realm. So far it has been resisting those attempts, remaining to a large degree an art. Whether it can ever become a fully deterministic process, in my view, is doubtful. After all, every creative process is an art. But great strides have been made in development of tools, especially those for analyses, helping to make the process increasingly more predictable.

This article appears in Circuit Cellar 297, April 2015.

The Future of Embedded Linux

My first computer was a COSMAC Elf. My first “desktop” was a $6,500 Heathkit H8. An Arduino today costs $3 and has more of nearly everything—except cost and size—and even my kids can program it. I became an embedded software developer without knowing it. When that H8 needed bigger floppy disks, a hard disk, or a network, you wrote the drivers yourself—in assembler if you were lucky and machine code if you were not.

Embedded software today is on the cusp of a revolution. The cost of hardware capable of running Linux continues to decline. A Raspberry Pi (RPi) can be purchased for $25. A BeagleBone Black (BBB) costs $45. An increasing number of designers are building products such as the Cubi, Gumstix, and OLinuXino and seeking to replicate the achievements of the RPi and BBB, which are modeled on the LEGO-like success of Arduino.

These are not “embedded Linux systems.” They are full-blown desktops—less peripherals—that are more powerful than what I owned less than a decade ago. This is a big deal. Hardware is inexpensive, and designs like the BBB and RPi are becoming easily modifiable commodities that can be completed quickly. On the other hand, software is expensive and slow. Time to market is critical. Target markets are increasingly small, with runs of a few thousand units for a specific product and purpose. Consumers are used to computers in everything. They expect computers and assume they will communicate with their smart phones, tablets, and laptops. Each year, consumers expect more.

There are not enough bare metal software developers to hope to meet the demand, and that will not improve. Worse, we can’t move from concept to product with custom software quickly enough to meet market demands. A gigabyte of RAM adds $5 to the cost of a product. The cost of an eight-week delay to value engineer software to work in a few megabytes of RAM instead, on a product that may only ship 5,000 units per year, could make the product unviable.

Products have to be inexpensive, high-quality, and fast. They have to be on the shelves yesterday and tomorrow they will be gone. The bare metal embedded model can’t deliver that, and there are only so many software developers out there with the skills needed to breathe life into completely new hardware.

That is where the joy in embedded development is for me—getting completely new hardware to load its first program. Once I get that first LED to blink everything is downhill from there. But increasingly, my work involves Linux systems integration for embedded systems: getting an embedded Linux system to boot faster, integrating MySQL, and recommending an embedded Linux distribution such as Ubuntu or Debian to a client. When I am lucky, I get to set up a GPIO or write a driver—but frequently these tasks are done by the OEM. Today’s embedded ARMs have everything, including the kitchen sink integrated (probably two).

Modern embedded products are being produced with client-server architectures by developers writing in Ruby, PHP, Java, or Python, using Apache web servers, MySQL databases, and an assortment of web clients communicating over an alphabet soup of protocols to devices they know nothing about. Often, the application developers are working and testing on Linux or even Windows desktops. The time and skills needed to value engineer the software to accommodate small savings in hardware costs do not exist. When clients ask for an embedded software consultant, they are more likely after an embedded IT expert than someone who writes device drivers or develops BSPs.

There will still be a need for those with the skills to write a TCP/IP stack that uses 256 bytes of RAM on an 8-bit processor, but that market, though still growing, will be a shrinking portion of the even faster-growing embedded device market.

The future of embedded technology is more of everything. We’ll require larger and more powerful systems, such as embedded devices running full Linux distributions like Ubuntu (even if they are in systems as simple as a pet treadmill) because it’s the easiest, most affordable solution with a fast time to market.

David Lynch owns DLA Systems. He is a software consultant and an architect, with projects ranging from automated warehouses to embedded OS ports. When he is not working with computers, he is busy attempting to automate his house and coerce his two children away from screens and into the outdoors to help build their home.


COMSOL Multiphysics 5.0 and the Application Builder

COMSOL recently announced the availability of Multiphysics 5.0 and Application Builder, with which “the power and accuracy of COMSOL Multiphysics is now accessible to everyone through the use of applications.”

Image made using COMSOL Multiphysics and is provided courtesy of COMSOL


According to COMSOL, Version 5.0 includes numerous enhancements to the existing Multiphysics software. The COMSOL Multiphysics product suite includes “25 application-specific modules for simulating any physics in the electrical, mechanical, fluid, and chemical disciplines.”

  • Multiphysics – Predefined multiphysics couplings, including Joule Heating with Thermal Expansion; Induction, Microwave, and Laser Heating; Thermal Stress; Thermoelectric and Piezoelectric Effect; and more.
  • Geometry and Mesh – You can create geometry from an imported mesh and call geometry subsequences using a linked subsequence.
  • Optimization and Multipurpose – The Particle Tracing Module includes accumulation of particles, erosion, and etch features. Multianalysis optimization was added as well.
  • Studies and Solvers – Improvements were made for the simulation of CAD assemblies, support for extra dimensions, and the ability to sweep over sets of materials and user-defined functions. Improved probe-while-solving and more.
  • Materials and Functions – Materials can now be copied, pasted, duplicated, dragged, and dropped. Link to Global Materials using a Material Link when the same material is used in multiple components.
  • Mechanical – Model geometrically nonlinear beams, nonlinear elastic materials, and elasticity in joints using the products for modeling structural mechanics.
  • Fluid – Create automatic pipe connections to 3-D flow domains in the Pipe Flow Module. The CFD Module is expanded with two new algebraic turbulence models, as well as turbulent fans and grilles.
  • Electrical – The AC/DC Module, RF Module, and Wave Optics Module now contain a frequency- and material-controlled auto mesh suggestion that offers the easy, one-click meshing of infinite elements and periodic conditions. The Plasma Module now contains interfaces for modeling equilibrium discharges.
  • Chemical – The Chemical Reaction Engineering Module now contains a new Chemistry interface that can be used as a Material node for chemical reactions.

Source: COMSOL


One Professor and Two Orderly Labs

Professor Wolfgang Matthes has taught microcontroller design, computer architecture, and electronics (both digital and analog) at the University of Applied Sciences in Dortmund, Germany, since 1992. He has developed peripheral subsystems for mainframe computers and conducted research related to special-purpose and universal computer architectures for the past 25 years.

When asked to share a description and images of his workspace with Circuit Cellar, he stressed that there are two labs to consider: the one at the University of Applied Sciences and Arts and the other in his home basement.

Here is what he had to say about the two labs and their equipment:

In both labs, rather conventional equipment is used. My regular duties are essentially concerned with basic student education and hands-on training. Obviously, one does not need top-notch equipment for such comparatively humble purposes.

Student workplaces in the Dortmund lab are equipped for basic training in analog electronics.


In adjacent rooms at the Dortmund lab, students pursue their own projects, working with soldering irons, screwdrivers, drills, and other tools. Hence, these rooms are occasionally called “the blacksmith’s shop.” Two such workstations are shown.

Oscilloscopes, function generators, multimeters, and power supplies are of an intermediate price range. I am fond of analog scopes, because they don’t lie. I wonder why neither well-established suppliers nor entrepreneurs see a business opportunity in offering quality analog scopes, something that could be likened to Rolex watches or Leica analog cameras.

The orderly lab in Matthes’s home is shown here.

Matthes prefers to build mechanically sturdy projects, so his lab is appropriately equipped.

Matthes, whose research interests include advanced computer architecture and embedded systems design, pursues a variety of projects in his workspace. He describes some of what goes on in his lab:

The projects comprise microcontroller hardware and software, analog and digital circuitry, and personal computers.

Personal computer projects are concerned with embedded systems, hardware add-ons, interfaces, and equipment for troubleshooting. For writing software, I prefer PowerBASIC. Those compilers generate executables, which run efficiently and show a small footprint. Besides, they allow for directly accessing the Windows API and switching to Assembler coding, if necessary.

Microcontroller software is done in Assembler and, if required, in C or BASIC (BASCOM). As the programming language of the toughest of the tough, Assembler comes second after wire [i.e., the soldering iron].

My research interests are directed at computer architecture, instruction sets, hardware, and interfaces between hardware and software. To pursue appropriate projects, programming at the machine level is mandatory. In student education, introductory courses begin with the basics of computer architecture and machine-level programming. However, Assembler programming is only taught at a level that is deemed necessary to understand the inner workings of the machine and to write small time-critical routines. The more sophisticated application programming is usually done in C.

Real work is shown here at the digital analog computer: bring-up and debugging of the master controller board. Each of the six microcontrollers is connected to a general-purpose human-interface module.

Additional photos of Matthes’s workspace and his embedded electronics and microcontroller projects are available at his new website.




Quartus II Software Arria 10 Edition v14.0

Altera Corp. has released Quartus II software Arria 10 edition v14.0, which is an advanced 20-nm FPGA and SoC design environment. Quartus II software delivers fast compile times and enables high performance for 20-nm FPGA and SoC designs. You can further accelerate Arria 10 FPGA and SoC design cycles by using the range of 20-nm-optimized IP cores included in the latest software release.

Altera’s 20-nm design tools feature advanced algorithms. The Quartus II software Arria 10 edition v14.0 delivers notably faster average compile times. This productivity advantage enables you to shorten design iterations and rapidly close timing on 20-nm designs.

Included in the latest software release is a full complement of 20-nm-optimized IP cores to enable faster design cycles. The IP portfolio includes standard protocol and memory interfaces, as well as DSP and SoC IP cores. Altera also optimized its popular IP cores for Arria 10 FPGAs and SoCs, including 100G Ethernet, 300G Interlaken, Interlaken Look-Aside, and PCI Express Gen3 IP. When implemented in Altera’s Arria 10 FPGAs and SoCs, these IP cores deliver high performance.

The Quartus II software Arria 10 edition v14.0 is available now for download. The software is available as a subscription edition and includes a free 30-day trial. The annual software subscription is $2,995 for a node-locked PC license. Engineering samples of Arria 10 FPGAs are shipping today.

Source: Altera Corp.

Fast Quad IF DAC

The AD9144 is a four-channel, 16-bit, 2.8-GSPS DAC that supports high data rates and ultra-wide signal bandwidth to enable wideband and multiband wireless applications. The DAC features 82-dBc spurious-free dynamic range (SFDR) and a 2.8-GSPS maximum sample rate, which permits multicarrier generation up to the Nyquist frequency.

With –164-dBm/Hz noise spectral density, the AD9144 enables higher dynamic range transmitters to be built. Its low-spurious, low-distortion design techniques provide high-quality synthesis of wideband signals from baseband to high intermediate frequencies. The DAC features a JESD204B eight-lane interface and low inherent latency of fewer than two DAC clock cycles. This simplifies hardware and software system design while permitting multichip synchronization.

The combination of programmable interpolation rate, high sample rates, and low power at 1.5 W provides flexibility when choosing DAC output frequencies. This is especially helpful in meeting four- to six-carrier Global System for Mobile Communications (GSM) transmission specifications and other communications standards. For six-carrier GSM intermodulation distortion (IMD), the AD9144 operates at 77 dBc at 75-MHz IF. Operating with the on-chip phase-locked loop (PLL) at a 30-MHz DAC output frequency, the AD9144 delivers a 76-dB adjacent-channel leakage ratio (ACLR) for four-carrier Wideband Code Division Multiple Access (WCDMA) applications.

The AD9144 includes integrated interpolation filters with selectable interpolation factors. The dual DAC data interface supports word and byte load, enabling users to reduce input pins at lower data rates to save board space, power, and cost.

The DAC is supported by an evaluation board with an FPGA Mezzanine Card (FMC) connector, software, tools, a SPI controller, and reference designs. Analog Devices’ VisualAnalog software package combines a powerful set of simulation and data analysis tools with a user-friendly graphical interface that enables users to customize their input signal and data analysis.

The AD9144BCPZ DAC costs $80. The AD9144-EBZ and AD9144-FMC-EBZ FMC evaluation boards cost $495.

Analog Devices, Inc.

Closed-Loop Module

The LCR Control Module is a fully integrated, closed-loop system designed to eliminate the risk of placing incorrect passive components on PCBs. Combined with the Offline Job Setup and/or Line Setup Control module, the LCR Control Module’s software is integrated with an electrical LCR meter. The system verifies that the measured values (e.g., inductance, capacitance, and resistance) are within the tolerances specified for the component. This electrical verification can be done at any time prior to mounting the reel on the placement machine.

Contact Cogiscan for pricing.

Cogiscan, Inc.

A Trace Tool for Embedded Systems

Tracing tools monitor what is going on in a program’s execution by logging low-level and frequent events. Tracing can thus detect and help debug performance issues in embedded system applications.

In Circuit Cellar’s April issue, Thiadmer Riemersma describes his DIY tracing setup for small embedded systems. His system comprises three parts: a set of macros to include in the source files of a device under test (DUT), a PC workstation viewer that displays retrieved trace data, and a USB dongle that interfaces the DUT with the workstation.

Small embedded devices typically have limited-performance microcontrollers and scarce interfaces, so Riemersma’s tracing system uses only a single I/O pin on the microcontroller.

Designing a serial protocol that keeps data compact is also important. Riemersma, who develops embedded software for the products of his Netherlands-based company, CompuPhase, explains why:

Compactness of the information transferred from the embedded system to the workstation [which decodes and formats the trace information] is important because the I/O interface that is used for the transfer will probably be the bottleneck. Assuming you are transmitting trace messages bit by bit over a single pin using standard wire and 5- or 3.3-V logic levels, the transfer rate may be limited to roughly 100 Kbps.

My proposed trace protocol achieves compactness by sending numbers in binary, rather than as human-readable text. Common phrases can be sent as numeric message IDs. The trace viewer (or trace ‘listener’) can translate these message IDs back to the human-readable strings.

One important part of the system is the hardware interface—the trace dongle. Since many microcontrollers are designed with only those interfaces needed for a specific application, Riemersma says, the first step is typically finding a spare I/O pin that can be used to implement the trace protocol.

In the following article excerpt, Riemersma describes his trace dongle and implementation requiring a single I/O pin on the microcontroller:

Photo 1: This is the trace dongle.

Photo 1 shows the trace dongle. To transmit serial data over a single pin, you need to use an asynchronous protocol. Instead of using a kind of (bit-banged) RS-232, I chose biphase encoding. Biphase encoding has the advantage of being a self-clocking and self-synchronizing protocol. This means that biphase encoding is simple to implement because timing is not critical. The RS-232 protocol requires timing to stay within a 3% error margin; however, biphase encoding is tolerant to timing errors of up to 20% per bit. And, since the protocol resynchronizes on each bit, error accumulation is not an issue.

Figure 1 shows the transitions to transmit an arbitrary binary value in biphase encoding—to be more specific, this variant is biphase mark coding. In the biphase encoding scheme, there is a transition at the start of each bit.

Figure 1: This is an example of a binary value transferred in biphase mark coding.

For a 1 bit there is also a transition halfway through the clock period. With a 0 bit, there is no extra transition. The absolute levels in biphase encoding are irrelevant, only the changes in the output line are important. In the previous example, the transmission starts with the idle state at a high logic level but ends in an idle state at a low logic level.

Listing 1 shows an example implementation to transmit a byte in biphase encoding over a single I/O pin. The listing refers to the trace_delay() and toggle_pin() functions (or macros). These system-dependent functions must be implemented on the DUT. The trace_delay() function should create a short delay, but not shorter than 5 µs (and not longer than 50 ms). The toggle_pin() function must toggle the output pin from low to high or from high to low.

For each bit, the function in Listing 1 inverts the pin and invokes trace_delay() twice. However, if the bit is a 1, it inverts the pin again between the two delay periods. Therefore, a bit’s clock cycle takes two basic “delay periods.”

Listing 1: Transmitting a byte in biphase encoding, based on a function to toggle an I/O pin, is shown.

The biphase encoding signal goes from the DUT to a trace dongle. The dongle decodes the signal and forwards it as serial data from a virtual RS-232 port to the workstation (see Photo 2 and the circuit in Figure 2).

Photo 2: The trace dongle is inserted into a laptop and connected to the DUT.

Figure 2: This trace dongle interprets biphase encoding.

The buffer is there to protect the microcontroller’s input pin from spikes and to translate the DUT’s logic levels to 5-V TTL levels. I wanted the trace dongle to work whether the DUT used 3-, 3.3-, or 5-V logic. I used a buffer with a Schmitt trigger because the “output high” level of a DUT running at 3-V logic, plus any noise picked up by the test cable, would otherwise fall in the undefined region of a 5-V TTL input.

As for the inductor: the USB interface provides 5 V and the electronics run at 5 V, so there isn’t room for a voltage regulator. Since the USB power comes from a PC, I assumed it might not be “clean,” so I used the LC filter to reduce noise on the power line.

The trace dongle uses a Future Technology Devices International (FTDI) FT232RL USB-to-RS-232 converter and a Microchip Technology PIC16F1824 microcontroller. The main reason I chose the FT232RL converter is FTDI’s excellent drivers for multiple OSes. True, your favorite OS already comes with a driver for virtual serial ports, but it is adequate at best. The FTDI drivers offer lower latency and a flexible API. With these drivers, the timestamps displayed in the trace viewers are as accurate as what can be achieved with the USB protocol, typically within 2 ms.

I chose the PIC microcontroller for its low cost and low pin count. I selected the PIC16F1824 because I had used it in an earlier project and I had several on hand. The microcontroller runs on a 12-MHz clock that is provided by the FTDI chip.

The pins to connect to the DUT are a ground and a data line. The data line is terminated at 120 Ω to match the impedance of the wire between the dongle and the DUT.

The cable between the DUT and the trace dongle may be fairly long; therefore signal reflections in the cable should be considered even for relatively low transmission speeds of roughly 250 kHz. That cable is typically just loose wire. The impedance of loose wire varies, but 120 Ω is a good approximate value.

The data line can handle 3-to-5-V logic voltages. Voltages up to 9 V do not harm the dongle because it is protected with a Zener diode (the 9-V limit is due to the selected Zener diode’s maximum power dissipation). The data line has a 10-kΩ to 5-V pull-up, so you can use it on an open-collector output.

The last item of interest in the circuit is a bicolor LED that is simply an indicator for the trace dongle’s status and activity. The LED illuminates red when the dongle is “idle” (i.e., it has been enumerated by the OS). It illuminates green when biphase encoded data is being received.

After the dongle is built, it must be programmed. First, the FT232RL must be programmed (with FTDI’s “FT Prog” program) to provide a 12-MHz clock on Pin C0. The “Product Description” in the USB string descriptors should be set to “tracedongle” so the trace viewers can find the dongle among other FTDI devices (if any). To avoid the dongle being listed as a serial port, I also set the option to only load the FTDI D2XX drivers.

To upload the firmware in the PIC microcontroller, you need a programmer (e.g., Microchip Technology’s PICkit) and a Tag-Connect cable, which eliminates the need for a six-pin header on the PCB, so it saves board space and cost.

The rest of the article provides details of how to create the dongle firmware, how to add trace statements to the DUT software being monitored, and how to use the GUI version of the trace viewer.

The tracing system is complete, but it can be enhanced, Riemersma says. “Future improvements to the tracing system would include the ability to draw graphs (e.g., for task switches or queue capacity) or a way to get higher precision timestamps of received trace packets,” he says.

For Riemersma’s full article, refer to our April issue now available for membership download or single-issue purchase.

Client Profile: Integrated Knowledge Systems

Integrated Knowledge Systems’ NavRanger board

Phoenix, AZ

CONTACT: James Donald,

EMBEDDED PRODUCTS: Integrated Knowledge Systems provides hardware and software solutions for autonomous systems.
FEATURED PRODUCT: The NavRanger-OEM is a single-board, high-speed laser ranging system with a nine-axis inertial measurement unit for robotic and scanning applications. The system provides 20,000 distance samples per second with 1-cm resolution and a range of more than 30 m in sunlight when using optics. The NavRanger also includes sufficient serial, analog, and digital I/O for stand-alone robotic or scanning applications.

The NavRanger uses USB, CAN, RS-232, analog, or wireless interfaces for operation with a host computer. Integrated Knowledge Systems can work with you to provide software, optics, and scanning mechanisms to fit your application. Example software and reference designs are available on the company’s website.

EXCLUSIVE OFFER: Enter the code CIRCUIT2014 in the “Special Instructions to Seller” box at checkout and Integrated Knowledge Systems will take $20 off your first order.


Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

A Quiet Place for Soldering and Software Design

Senior software engineer Carlo Tauraso, of Trieste, Italy, has designed his home workspace to be “a distraction-free area where tools, manuals, and computer are at your fingertips.”

Tauraso, who wrote his first Assembler code in the 1980s for the Sinclair Research ZX Spectrum PC, now works on developing firmware for network devices and microinterfaces for a variety of European companies. Several of his articles and programming courses have been published in Italy, France, Spain, and the US. Three of his articles have appeared in Circuit Cellar since 2008.

Photo 1: This workstation is neatly divided into a soldering/assembling area on the left and a developing/programming area on the right.

Tauraso keeps an orderly and, most importantly, quiet work area that helps him stay focused on his designs.

This is my “magic” designer workspace. It’s not simple to make an environment that’s perfectly suited to you. When I work and study I need silence.

I am a software engineer, so during designing I always divide the work into two main parts: the analysis and the implementation. I decided, therefore, to separate my workspace into two areas: the developing/programming area on the right and the soldering/assembling area on the left (see Photo 1). When I do one or the other activity, I move physically in one of the two areas of the table. Assembling and soldering are manual activities that relax me. On the other hand, programming often is a rather complex activity that requires a lot more concentration.

Photo 2: The marble slab at the right of Tauraso’s assembling/soldering area protects the table surface. The optical inspection camera nearby helps him work with tiny ICs.

The assembling/soldering area is carefully set up to keep all of Tauraso’s tools within easy reach.

I fixed a square marble slab on the table so I can solder without fear of ruining the wood surface (see Photo 2). As you can see, I use a hot-air soldering station and a conventional soldering iron. Today’s ICs are very small, so I also installed a camera for optical inspection (the black cylinder with the blue stripe). On the right, there are 12 outlets, each with its own switch. Everything is ready and at your fingertips!

Photo 3: This developing and programming space, with its three small computers, is called “the little Hydra.”

The workspace’s developing and programming area makes it easy to multitask (see Photo 3).

In the foreground you can see a network of three small computers that I call “the little Hydra” in honor of the object-based OS developed at Carnegie Mellon University in Pittsburgh, PA, during the ’70s. The HYDRA project sought to demonstrate the cost-performance advantages of multiprocessors built from inexpensive minicomputers. Following the same philosophy, I have connected three Mini-ITX motherboards. Here I can test network programming with real hardware—one machine as a server, one as a client, and one as a network sniffer or an attacker—while I develop the Windows front end and the [Microchip Technology] PIC firmware and chat with my girlfriend.

Senior software engineer Tauraso has created a quiet work area with all his tools close at hand.

Circuit Cellar will be publishing Tauraso’s article about a wireless thermal monitoring system based on the ANT+ protocol in an upcoming issue. In the meantime, you can follow Tauraso on Twitter @CarloTauraso.

Traveling With a “Portable Workspace”

As a freelance engineer, Raul Alvarez spends a lot of time on the go. He says that for the last four or five years he has been traveling for work and family reasons, so he never stays in one place long enough to set up a proper workspace. “Whenever I need to move again, I just pack whatever I can: boards, modules, components, cables, and so forth, and then I’m good to go,” he explains.

Alvarez sits at his “current” workstation.

He continued by saying:

In my case, there’s not much of a workspace to show because my workspace is whichever desk I have at hand in a given location. My tools are all the tools that I can fit into my traveling backpack, along with my software tools that are installed in my laptop.

Because in my personal projects I mostly work with microcontroller boards, modular components, and firmware, until now I think it didn’t bother me not having more fancy (and useful) tools such as a bench oscilloscope, a logic analyzer, or a spectrum analyzer. I just try to work with whatever I have at hand because, well, I don’t have much choice.

Given my circumstances, probably the most useful tools I have for debugging embedded hardware and firmware are a good-old UART port, a multimeter, and a bunch of LEDs. For the UART interface I use a Future Technology Devices International FT232-based UART-to-USB interface board and Tera Term serial terminal software.

Currently, I’m working mostly with Microchip Technology PIC and ARM microcontrollers. So for my PIC projects my tiny Microchip Technology PICkit 3 Programmer/Debugger usually saves the day.

Regarding ARM, I generally use some of the new low-cost ARM development boards that include programming/debugging interfaces. I carry an LPC1769 LPCXpresso board, an mbed board, three STMicroelectronics Discovery boards (Cortex-M0, Cortex-M3, and Cortex-M4), my STMicroelectronics STM32 Primer2, three Texas Instruments LaunchPads (the MSP430, the Piccolo, and the Stellaris), and the following Linux boards: two BeagleBones (the gray one and a BeagleBone Black), a Cubieboard, an Odroid-X2, and a Raspberry Pi Model B.

Additionally, I always carry an Arduino UNO, a Digilent chipKIT Max32 Arduino-compatible board (which I mostly use with the MPLAB X IDE and “regular” C language), and a self-made Parallax Propeller microcontroller board. I also have a TP-LINK TL-WR703N 3G Wi-Fi mini router flashed with OpenWrt that enables me to experiment with Wi-Fi and Ethernet and to tinker with its embedded Linux environment. It also provides Internet access via a 3G modem.

Not a bad setup for someone on the go. Alvarez’s “portable workstation” includes ICs, resistors, and capacitors, among other things. He says his most useful tools are a UART port, a multimeter, and some LEDs.

In three or four small boxes I carry a lot of sensors, modules, ICs, resistors, capacitors, crystals, jumper cables, breadboard strips, and some DC-DC converter/regulator boards for supplying power to my circuits. I also carry a small video camera for shooting the video tutorials I publish from time to time on my website. I have installed on my laptop TechSmith’s Camtasia for screen capture and Sony Vegas for editing the final video and audio.

Some IDEs I currently have installed on my laptop include LPCXpresso, Texas Instruments’ Code Composer Studio, IAR Embedded Workbench for Renesas RL78 and 8051, Ride7, Keil µVision for ARM, MPLAB X, and the Arduino IDE, among others. For PC coding I have installed Eclipse, MS Visual Studio, GNAT Programming Studio (I like to tinker with Ada from time to time), Qt Creator, Python IDLE, MATLAB, and Octave. For schematics and PCB design I mostly use CadSoft’s EAGLE, ExpressPCB, DesignSpark PCB, and sometimes KiCad.

Traveling with my portable rig isn’t particularly pleasant for me. I always get delayed at security and customs checkpoints in airports. I get questioned a lot, especially about my circuit boards and prototypes, and I almost always have to buy a new set of screwdrivers after arriving at my destination. Luckily for me, my nomadic lifestyle is about to come to an end, and I will finally be able to settle down in my hometown of Cochabamba, Bolivia. The first two things I’m planning to do are to buy a really big workbench and a decent digital oscilloscope.

Alvarez’s article “The Home Energy Gateway: Remotely Control and Monitor Household Devices” appeared in Circuit Cellar’s February issue. For more information about Alvarez, visit his website or follow him on Twitter @RaulAlvarezT.

Client Profile: ImageCraft Creations, Inc.

CorStarter prototyping board

2625 Middlefield Road, #685,
Palo Alto, CA 94306

CONTACT: Richard Man,

EMBEDDED PRODUCTS: ImageCraft Version 8 C compilers with an IDE for Atmel AVR and Cortex-M devices are full-featured toolsets backed by strong support.

CorStarter-STM32 is a complete C hardware and software kit for STM32 Cortex-M3 devices. The $99 kit includes a JTAG pod for programming and debugging.

ImageCraft products offer excellent features and support at prices that fit within budget requisitions. ImageCraft compiler toolsets are used by professionals who demand excellent code quality, full features, and diligent, timely support.

The small, fast compilers provide helpful informational messages and include an IDE with an application builder (Atmel AVR) and debugger (Cortex-M), whole-program code compression technology, and MISRA safety checks. ImageCraft offers two editions that cost $249 and $499.

The demo is fully functional for 45 days, so it is easy to test it yourself.

EXCLUSIVE OFFER: For a limited time, ImageCraft is offering Circuit Cellar readers $40 off the Standard and PRO versions of its Atmel AVR and Cortex-M compiler toolsets. To take advantage of this offer, please visit
