Implement a Tilt and Interference-Compensated Electronic Compass

Would you like to incorporate an electronic compass in a consumer product you’re designing or a personal device you’re constructing? If so, you’ll do well to understand as much as possible about the differences between various sensors and how certain forms of interference can affect their accuracy.

Mark Pedley of Freescale Semiconductor has an article in Circuit Cellar 265 (August 2012) on these topics. An abridged version of his article follows. Pedley writes:

 Whenever a new high-volume consumer electronics market develops, the semiconductor companies are never far behind, providing excellent components at surprisingly low prices. The market for sensors in consumer products is a recent example. It all started with an accelerometer used to select between portrait and landscape display orientations and then, with the addition of a magnetometer, evolved into the electronic compass (eCompass) used to align street maps to the smartphone’s geographic heading or to enable augmented reality overlays. As a result, high-volume pricing for smartphone accelerometer and magnetometer sensors is now less than $1 each.

A magnetometer sensor alone cannot provide an accurate compass heading for two reasons. First, the magnetic field measured at the magnetometer varies significantly with tilt angle. Second, the magnetometer requires calibrating not only for its own offset but also against spurious magnetic fields resulting from any nearby ferromagnetic components on the circuit board. This article describes how the accelerometer is used to compensate the magnetometer for tilt and includes a simple technique for calibrating the magnetometer.


The accelerometer should be a three-axis device capable of operating in the ±2-g range with a minimum of 10 bits of resolution. The output of a 10-bit accelerometer operating in the ±2-g range will change by 512 counts as the accelerometer is rotated 180° from pointing downward to upward, giving an average sensitivity of one count per 0.35° change in tilt. This is more than adequate for tilt-compensation purposes.

It is important to check the accelerometer datasheet for the “0-g offset accuracy” which is the output when the accelerometer is in 0-g freefall. Since this value is a constant additive error on each accelerometer channel, it adds a bias in the calculated tilt angles, so look for accelerometers where this parameter does not exceed 50 mg.

The magnitude of the earth’s geomagnetic field is typically about 50 µT with a horizontal component that varies over the earth’s surface, from a maximum of about 40 µT down to 0 at the geomagnetic poles. If an eCompass is required to operate in horizontal geomagnetic fields down to 10 µT (in arctic Canada, for example) with a noise jitter of ±3°, then a back-of-the-envelope calculation indicates that a magnetometer with a maximum noise level of 0.5 µT is needed.

Most of my projects have used Freescale’s MMA8451Q Xtrinsic three-axis, 14-bit accelerometer and MAG3110 three-axis magnetometer. The MMA8451Q is supplied in a 3-mm × 3-mm × 1-mm, 16-pin QFN package and provides a 14-bit data output with ±30-mg, 0-g offset accuracy. The MAG3110 magnetometer is supplied in a 2-mm × 2-mm × 0.85 mm, 10-pin DFN package and provides a measurement range of ±1,000 µT with 0.1-µT resolution and a noise level down to 0.25 µT. Both parts operate with a supply voltage between 1.95 V and 3.6 V.

Similar sensors are supplied by Asahi Kasei (AKM), Kionix, STMicroelectronics, and other manufacturers. Your best strategy is to go to the manufacturers’ websites and make a list of those that provide samples in single units or low-volume packs of up to five devices. With a bit of luck, you may be able to get both the accelerometer and magnetometer sensors for free. Add a handful of decoupling capacitors and pull-up resistors and you should be well within the $5 component cost.

Each reader has a preferred microcontroller to read the raw data from the two sensors and implement the eCompass. This article assumes the microcontroller provides an I2C bus to interface to the sensors, supports floating-point operations whether natively or through software emulation libraries, and has a few spare bytes of program and data memory…


Once you’ve selected your sensors, the next step is to design the accelerometer and magnetometer daughterboard with I2C bus connection to the microcontroller. Reference schematics for the MMA8451Q and MAG3110 are provided in the sensor datasheets and reproduced in Figure 1.

Figure 1: Schematics for (a) MMA8451Q and (b) MAG3110 sensors (Source: M. Pedley, Circuit Cellar 265)

Don’t waste any time rotating the accelerometer or magnetometer packages to align their x-, y-, and z-sensing directions to each other since this will be fixed later in software. But do ensure the sensor board will not be mounted in the immediate vicinity of any ferromagnetic materials since these will produce a constant additive magnetic field termed the “hard-iron field.” The most common ferromagnetic materials are iron, steel, ferrite, nickel, and cobalt. Non-ferromagnetic materials are all safe to use (e.g., aluminum, copper, brass, tin, silver, and gold).

The calibration process described later enables the estimation and software subtraction of any hard-iron field, but it’s good practice to minimize hard iron interference at the design stage. Remember, a current trace will create a cylindrical magnetic field that falls off relatively slowly with the inverse of distance, so place the magnetometer as far away from high current traces as possible. A 0.1-A current trace at 10-mm distance will produce a 2-µT magnetic field, four times our 0.5-µT error budget, only reducing to 0.5 µT at a 40-mm distance. More detailed layout guidance is provided in Freescale Semiconductor’s application note AN4247: “Layout Recommendations for PCBs Using a Magnetometer Sensor.”

You’ll be surprised at the number of features implemented in the latest consumer sensors (i.e., freefall detection, high- and low-pass filtering options, automatic portrait and landscape detection, etc.), but disable all these since you simply want the raw accelerometer and magnetometer data. Configure the accelerometer into a ±2-g range and check that you can read the x, y, and z accelerometer and magnetometer data (in units of bit counts) from the sensors’ internal registers at a sampling rate of between 10 Hz and 50 Hz. Smartphones commonly use 10 Hz to minimize power consumption, while anything above 50 Hz is overkill. Check the accelerometer datasheet for the conversion factor between counts and g (4,096 counts per g for the MMA8451Q in ±2-g mode) and use this to scale the x, y, z accelerometer readings into units of g. Do the same for the x, y, z magnetometer data, again taking the conversion factor from the magnetometer datasheet (10 counts per µT for the MAG3110).
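That scaling step might look like this in C (a sketch using the conversion factors quoted above; the struct and function names are my own):

```c
#include <stdint.h>

/* Scale raw sensor counts into physical units using the datasheet
   conversion factors: MMA8451Q, 4,096 counts/g in +/-2-g mode;
   MAG3110, 10 counts/uT. */
#define ACCEL_COUNTS_PER_G  4096.0f
#define MAG_COUNTS_PER_UT     10.0f

typedef struct { float x, y, z; } Vec3;

Vec3 accel_counts_to_g(int16_t gx, int16_t gy, int16_t gz)
{
    Vec3 v = { gx / ACCEL_COUNTS_PER_G, gy / ACCEL_COUNTS_PER_G,
               gz / ACCEL_COUNTS_PER_G };
    return v;
}

Vec3 mag_counts_to_uT(int16_t bx, int16_t by, int16_t bz)
{
    Vec3 v = { bx / MAG_COUNTS_PER_UT, by / MAG_COUNTS_PER_UT,
               bz / MAG_COUNTS_PER_UT };
    return v;
}
```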


The equations and C software in Listing 1 use the “aerospace,” or “x-North y-East z-Down,” coordinate system depicted in Photo 1.

Listing 1: C source code for the tilt-compensated eCompass (Source: M. Pedley, Circuit Cellar 265)

This defines the initial eCompass orientation to be where the x-axis points north, the y-axis points east, and the z-axis points downward. The three orientation angles, roll (ϕ), pitch (θ), and compass heading, or yaw (ψ), are defined as clockwise rotations about the positive x-, y-, and z-axes, respectively. Photo 1 also shows the earth’s gravitational vector, which points downward with a magnitude of 1 g (9.81 m/s²), and the earth’s geomagnetic field vector, which points downward from horizontal (in the northern hemisphere) by the inclination angle δ to give a horizontal component B0cosδ and a downward component B0sinδ.

Photo 1: The aerospace north-east-down coordinate system (Source: M. Pedley, Circuit Cellar 265)

Based on how your eCompass housing will be held, you should be able to assign the compass-pointing direction or x-axis, the downward or z-axis, and the y-axis, which should point to the right to complete a right-handed coordinate system.


You now need to align the sensor data to the aerospace coordinate system. As with all work with magnetometers, this should be performed on a wooden table well away from any laboratory power supplies or steel furniture. Place the eCompass flat and upright so the z-axis points downward and is aligned with gravity. Check that the accelerometer z-axis reads approximately 1 g and the x- and y-axes are near 0. Invert the eCompass so its z-axis points upward and check that the z-axis now reads approximately –1 g. Repeat with the x- and y-axes pointing downward and then upward and check that the x- and y-axis accelerometer readings are near 1 g and –1 g, respectively. It’s not important if the accelerometer readings are a few tens of mg away from the required reading since all you’re doing here is correcting for gross rotations of the sensor packages and the sensor daughterboard in multiples of 90°. Any needed correction will be unique for your board layout and mounting orientation but will be no more complicated than “swap the x- and y-accelerometer channels and negate the z-channel reading.” Code this accelerometer axis mapping into your software and don’t touch it again.
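An axis-mapping shim of the kind described might look like this. This particular swap/negate pattern is hypothetical (it matches the example quoted in the text); the correct pattern is unique to your board layout:

```c
#include <stdint.h>

typedef struct { int16_t x, y, z; } RawVec3;

/* Example axis mapping: swap the x- and y-channels and negate z.
   Determine your own pattern from the 90-degree rotation checks
   described above, code it once, and don't touch it again. */
RawVec3 map_accel_axes(RawVec3 raw)
{
    RawVec3 m;
    m.x = raw.y;            /* board y lies along aerospace x (north) */
    m.y = raw.x;            /* board x lies along aerospace y (east)  */
    m.z = (int16_t)-raw.z;  /* board z points up; aerospace z is down */
    return m;
}
```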

Figure 2 may help explain this visually. The accelerometer sensor measures both gravity and linear acceleration and, in the absence of any linear acceleration (as is the case when sitting on a desk), the magnitude of the accelerometer reading will always equal 1 g, and therefore, lie on the surface of a 1-g sphere, irrespective of the orientation.

Figure 2: Accelerometer axis alignment points (Source: M. Pedley, Circuit Cellar 265)

The six measurements lie on the vertices of an octahedron inscribed within the 1-g sphere and the axis mapping simply rotates and reflects the octahedron as needed until the accelerometer channels are correctly aligned.

The magnetometer axis alignment is similar to that of the accelerometer, but makes use of the geomagnetic field vector. Place the eCompass flat, upright, and pointing northward and then rotate in yaw angle by 270° to the east, south, and finally west. The x-channel magnetometer reading should be a maximum when the eCompass is pointed north and a minimum when pointed south. The y-channel magnetometer reading should be a minimum when the eCompass is pointed east and a maximum when pointed west. The z-channel reading should be approximately constant since the vertical component of the geomagnetic field is constant irrespective of rotation in yaw.

Then invert the eCompass on the desk and repeat the process. As before, the magnetometer x-axis reading should be a maximum when the eCompass is pointed north and a minimum when pointed south. But now, because of the inverted position, the magnetometer y-axis should be a maximum when the eCompass is pointed east and a minimum when pointed west. The magnetometer z-axis reading should still be constant but, in the northern hemisphere, lower than the previous upright readings since the magnetometer z-axis is now aligned against the downward component of the geomagnetic field vector.

Figure 3 shows upright and inverted magnetometer measurements taken in the northern hemisphere with a 270° compass rotation.

Figure 3: The upright (a) and inverted (b) magnetometer measurements (Source: M. Pedley, Circuit Cellar 265)

The maximum and minimum of the x- and y-axis magnetometer measurements occur at the expected angles and the z-axis measurement is less when inverted than when upright. These magnetometer axes are therefore correctly aligned but, as with the accelerometer correction, swap and negate the measurements from your three magnetometer channels as needed until correctly aligned and then lock down this part of your code.

A lot can be learned by closely looking at the measurements in Figure 3. The x- and y-magnetometer measurements lie on a circle with radius of approximately 25 µT enabling us to deduce that the horizontal geomagnetic field is approximately 25 µT. But the measurements are offset from zero by the magnetic “hard iron” interfering field, which results from both permanently magnetized ferromagnetic materials on the circuit board and from a zero-field offset in the magnetometer sensor itself. Consumer sensor manufacturers long ago realized it was pointless to accurately calibrate their magnetometers when their target market is smartphones, each with a different hard-iron interfering field. The magnetometer sensor offset is, therefore, calibrated together with the circuit board hard-iron magnetic field. For now, simply note that the x and y components of the hard iron offset have values of approximately 215 µT and 185 µT. A simple method to determine all three hard-iron components is described later.
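Pedley’s calibration method is in the full article, but the simplest common approach is consistent with Figure 3: the geomagnetic circle is centered on the hard-iron offset, so the midpoint of the minimum and maximum readings seen on each axis as the device is rotated estimates that axis’s offset. A sketch (my illustration, not the article’s listing):

```c
/* Estimate one hard-iron component as the midpoint of the extreme
   magnetometer readings seen on that axis during a full rotation.
   Apply per axis, then subtract the offsets from every reading. */
float hard_iron_offset_uT(float min_uT, float max_uT)
{
    return 0.5f * (min_uT + max_uT);
}
```

For example, x-axis readings swinging between roughly 190 µT and 240 µT would imply the ~215-µT x offset noted above.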

Refer to the complete article for information about calculating the roll and pitch angles and determining the compass heading angle.

Mark Pedley has a Physics degree from Oxford University and now works on sensor fusion algorithms for Freescale Semiconductor in Tempe, Arizona.

DIY Internet-Enabled Home Control System

Why shell out hundreds or thousands of dollars on various home control systems (HCS) when you have the skills and resources to build your own? You can design and implement sophisticated Internet-enabled systems with free tools and some careful planning.

John Breitenbach did just that. He used a microcontroller, free software, and a cloud-based data platform to construct a remote monitoring system for his home’s water heater. The innovative design can email or text status messages and emergency alerts to a smartphone. You can build a similar system to monitor any number of appliances, rooms, or buildings.

An abridged version of Breitenbach’s article, “Internet-Enabled Home Control” (Circuit Cellar 264, July 2012), appears below. (A link to the entire article and an access password are noted at the end of this post.) Breitenbach writes:

Moving from the Northeast to North Carolina, my wife and I were surprised to find that most homes don’t have basements. In the north, the frost line is 36″–48″ below the surface. To prevent frost heave, foundations must be dug at least that deep. So, digging down an extra few feet to create a basement makes sense. Because the frost line is only 15″ in the Raleigh area, builders rarely excavate the additional 8′ to create basements.

The lack of basements means builders must find unique locations for a home’s mechanical systems including the furnace, AC unit, and water heater. I was shocked to find that my home’s water heater is located in the attic, right above one of the bedrooms (see Photo 1).

Photo 1: My home’s water heater is located in our attic. (Photo courtesy of Michael Thomas)

During my high school summers I worked for my uncle’s plumbing business (“Breitenbach Plumbing—We’re the Best, Don’t Call the Rest”) and saw firsthand the damage water can do to a home. Water heaters can cause some dramatic end-of-life plumbing failures, dumping 40 or more gallons of water at once followed by the steady flow of the supply line.

Having cleaned up the mess of a failed water heater in my own basement up north, I haven’t had a good night’s sleep since I discovered the water heater in my North Carolina attic. For peace of mind, especially when traveling, I instrumented my attic so I could be notified immediately if water started to leak. My goal was to use a microcontroller so I could receive push notifications via e-mails or text messages. In addition to emergency messages, status messages sent on a regular basis reassure me the system is running. I also wanted to use a web browser to check the current status at any time.


The attic monitor is based on Renesas Electronics’s YRDKRX62N demonstration kit, which features the RX62N 32-bit microcontroller (see Photo 2). Renesas has given away thousands of these boards to promote the RX, and the boards are also widely available through distributors. The YRDK board has a rich feature set including a graphics display, push buttons, and an SD-card slot, plus Ethernet, USB, and serial ports. An Analog Devices ADT7420 digital I2C temperature sensor also enables you to keep an eye on the attic temperature. I plan to use this for a future addition to the project that compares this temperature to the outside air temperature to control an attic fan.

Photo 2: The completed board, which is based on a Renesas Electronics YRDKRX62N demonstration kit. (Photo courtesy of Michael Thomas)


Commercial water-detection sensors are typically made from two exposed conductive surfaces in close proximity to each other on a nonconductive surface. Think of a single-sided PCB with no solder mask and tinned traces (see Photo 3).

Photo 3: A leak sensor (Photo courtesy of Michael Thomas)

These sensors rely on the water’s conductivity to close the circuit between the two conductors. I chose a sensor based on this type of design for its low cost. But once I received the sensors, I realized I could have saved myself a few bucks by making my own sensor from a couple of wires or a piece of proto-board.

When standing water on the sensor shorts the two contacts, the resistance across the sensor drops to between 400 kΩ and 600 kΩ. The sensor is used as the bottom resistor in a voltage divider with a 1-MΩ resistor up top. The output of the divider is routed to the 12-bit analog inputs on the RX62N microcontroller. Figure 1 shows the sensor interface circuit. When the voltage read by the analog-to-digital converter (ADC) drops below 2 V, it’s time to start bailing. Two sensors are connected: one in the catch pan under the water heater, and a second one just outside the catch pan to detect failures in the small expansion tank.
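The divider behavior can be modeled directly (a sketch using the values from the text; the 3.3-V supply is my assumption, and the 2-V threshold holds for either a 3.3-V or 5-V rail):

```c
/* Leak-detect divider: 1-MOhm resistor on top, the water sensor as
   the bottom leg, node voltage read by the 12-bit ADC. Dry, the
   sensor is effectively open and the node sits near Vcc; wet, it
   drops to 400k-600k and the node falls below the 2-V alarm level. */
#define VCC        3.3     /* assumed supply voltage */
#define R_TOP_OHMS 1.0e6

double divider_voltage(double sensor_ohms)
{
    return VCC * sensor_ohms / (R_TOP_OHMS + sensor_ohms);
}

int leak_detected(double sensor_ohms)
{
    return divider_voltage(sensor_ohms) < 2.0;  /* 1 = start bailing */
}
```

A wet sensor at 500 kΩ puts the node at about 1.1 V (alarm); a dry sensor at, say, 10 MΩ leaves it near 3.0 V (no alarm).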

Figure 1: The sensor interface to the YRDK RX62N board


One of my project goals was to push notifications to my cell phone because Murphy’s Law says water heaters are likely to fail while you’re away for the weekend. Because I wanted to keep the project costs low, I used my home’s broadband connection as the gateway for the attic monitor. The Renesas RX62N microcontroller includes a 100-Mbps Ethernet controller, so I simply plugged in the cable to connect the board to my home network. The open-source µIP stack supplied by Renesas with the YRDK provides the protocol engine needed to talk to the Internet.

There were a couple of complications with using my home network as the attic monitor’s gateway to the world. It is behind a firewall built into my router and, for security reasons, I don’t want to open up ports to the outside world.

My Internet service provider (ISP) occasionally changes the Internet protocol (IP) address associated with my cable modem, so I would never know what address to point my web browser to. I needed a solution that would address both of these problems. Enter Exosite, a company that provides solutions for cloud-based, machine-to-machine (M2M) communications.


Exosite provides a number of software components and services that enable M2M communications via the cloud. This is a different philosophy from supervisory control and data acquisition (SCADA) systems I’ve used in the past. The control systems I’ve worked on over the years typically involve a local host polling the hundreds or thousands of connected sensors and actuators that make up a commercial SCADA system. These systems are generally designed to be monitored locally at a single location. In the case of the attic monitor, my goal was to access a limited number of data points from anywhere, and have the system notify me rather than having to continuously poll. Ideally, I’d only hear from the device when there was a problem.

Exosite is the perfect solution: the company publishes a set of simple application programming interfaces (APIs) using standard web protocols that enable smart devices to push data to their servers in the cloud in real time. Once the data is in the cloud, events, alerts, and scripts can be created to do different things with the data—in my case, to send me an e-mail and SMS text alert if there is anything wrong with my water heater. Connected devices can share data with each other or pull data from public data sources, such as public weather stations. Exosite has an industrial-strength platform for large-scale commercial applications, but it also provides free access for the open-source community: a free account enables me to connect one or two devices to the Exosite platform.

Embedded devices using Exosite are responsible for pushing data to the server and pulling data from it. Devices use simple HTTP requests to accomplish this. This works great in my home setup because the attic monitor can work through my firewall, even when my Internet provider occasionally changes the IP address of my cable modem. Figure 2 shows the network diagram.

Figure 2: The cloud-based network
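On the device side, the push model amounts to building a small HTTP request. The sketch below is my own illustration of the idea; the host, path, and header names are placeholders, not Exosite’s actual API, so consult Exosite’s documentation for the real request format:

```c
#include <stdio.h>

/* Build a plain HTTP POST that pushes one named data point to a
   cloud endpoint. Host, path, and X-Device-Key are hypothetical
   placeholders; returns the request length, or negative on error. */
int build_push_request(char *buf, size_t len,
                       const char *alias, double value, const char *key)
{
    char body[64];
    int blen = snprintf(body, sizeof body, "%s=%.2f", alias, value);
    return snprintf(buf, len,
        "POST /api/push HTTP/1.1\r\n"        /* hypothetical path */
        "Host: m2m.example.com\r\n"          /* hypothetical host */
        "X-Device-Key: %s\r\n"               /* hypothetical header */
        "Content-Type: application/x-www-form-urlencoded\r\n"
        "Content-Length: %d\r\n"
        "\r\n"
        "%s", key, blen, body);
}
```

Because the device opens the connection outbound, this works through the home firewall without any forwarded ports.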


Web-based dashboards hosted on Exosite’s servers can be built and configured to show real-time and historical data from connected devices. Controls, such as switches, can be added to the dashboards to push data back down to the device, enabling remote control of embedded devices. Because the user interface is “in the cloud,” there is no need to store all the user interface (UI) widgets and data in the embedded device, which greatly reduces the storage requirements. Photo 4 shows the dashboard for the attic monitor.

Photo 4: Exosite dashboard for the attic monitor

Events and alerts can be added to the dashboard. These are logical evaluations Exosite’s server performs on the incoming data. Events can be triggered based on simple comparisons (e.g., a data value is too high or too low) or complex combinations of a comparison plus a duration (e.g., a data value remains too high for a period of time). Setting up a leak event for one of the sensors is shown in Photo 5.

Photo 5: Creating an event in Exosite

In this case, the event is triggered when the reported ADC voltage is less than 2 V. An event can also be triggered if Exosite doesn’t receive an update from the device for a set period of time. This last feature can be used as a watchdog to ensure the device is still working.
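The two event types can be modeled locally like this (an illustration of the logic only, not Exosite’s server-side implementation):

```c
/* Event: fires when a value stays below a threshold for a required
   duration. With required_s = 0 this degenerates to the simple
   comparison case; a nonzero duration gives the "too low for a
   period of time" case described above. */
typedef struct {
    double threshold;   /* trigger when value < threshold */
    int    required_s;  /* how long the condition must hold */
    int    held_s;      /* how long it has held so far      */
} Event;

int event_update(Event *e, double value, int dt_s)
{
    if (value < e->threshold)
        e->held_s += dt_s;   /* condition holds: accumulate time */
    else
        e->held_s = 0;       /* condition broken: reset          */
    return e->held_s >= e->required_s;  /* 1 = triggered */
}
```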

When an event is triggered, an alert can optionally be sent via e-mail. This is the final link that enables an embedded device in my attic to contact me anywhere, anytime, to alert me to a problem. Though I have a smartphone that enables me to access my e-mail account, I can also route the alarm message to my wife’s simpler phone through her cellular provider’s e-mail-to-text-message gateway. Most cellular providers offer this service, which works by sending an e-mail to a special address containing the cell phone number. On the Verizon network, the e-mail address is <yourcellularnumber>. Other providers have similar gateways.

The attic monitor periodically sends heartbeat messages to Exosite to let me know it’s still working. It also sends the status of the water sensors and the current temperature in the attic. I can log in to Exosite at any time to see my attic’s real-time status. I have also configured events and alarms that will notify me if a leak is detected or if the temperature gets too hot…

The complete article includes details about the Internet engine, reading the cloud, tips for updating the design, and more. You can read the entire article by typing netenabledcontrol to open the password-protected PDF.

DIY Solar-Powered, Gas-Detecting Mobile Robot

German engineer Jens Altenburg’s solar-powered hidden observing vehicle system (SOPHOCLES) is an innovative gas-detecting mobile robot. When the Texas Instruments MSP430-based mobile robot detects noxious gas, it transmits a notification alert to a PC, Altenburg explains in his article, “SOPHOCLES: A Solar-Powered MSP430 Robot.” The MCU controls an on-board CMOS camera and can wirelessly transmit images to the “Robot Control Center” user interface.

Take a look at the complete SOPHOCLES design. The CMOS camera is located on top of the robot. The radio modem is hidden behind the camera, so only the antenna is visible. A flexible cable connects the camera to the MSP430 microcontroller.

Altenburg writes:

The MSP430 microcontroller controls SOPHOCLES. Why did I need an MSP430? There are lots of other micros, some of which have more power than the MSP430, but the word “power” shows you the right way. SOPHOCLES is the first robot (with the exception of space robots like Sojourner and Lunakhod) that I know of that’s powered by a single lithium battery and a solar cell for long missions.

The SOPHOCLES includes a transceiver, sensors, power supply, motor drivers, and an MSP430. Some block functions (i.e., the motor driver or radio modems) are represented by software modules.

How is this possible? The magic mantra is, “Save power, save power, save power.” In this case, the most important feature of the MSP430 is its low power consumption. It needs less than 1 mA in Operating mode and even less in Sleep mode because the main function of the robot is sleeping (my main function, too). From time to time the robot wakes up, checks the sensor, takes pictures of its surroundings, and then falls back to sleep. Nice job, not only for robots, I think.

The power for the active time comes from the solar cell. High-efficiency cells provide electric energy for a minimum of approximately two minutes of active time per hour. Good lighting conditions (e.g., direct sunlight or a light beam from a lamp) activate the robot permanently. The robot needs only about 25 mA for actions such as driving its wheels, communicating via radio, or taking pictures with its built-in camera. Isn’t that impossible? No! …

The robot has two power sources. One source is a 3-V lithium battery with a 600-mAh capacity. The battery supplies the CPU in Sleep mode, during which all other loads are turned off. The other source of power comes from a solar cell. The solar cell charges a special 2.2-F capacitor. A step-up converter changes the unregulated input voltage into 5-V main power. The LTC3401 changes the voltage with an efficiency of about 96% …

Because of the changing light conditions, a step-up voltage converter is needed for generating stabilized VCC voltage. The LTC3401 is a high-efficiency converter that starts up from an input voltage as low as 1 V.

If the input voltage increases to about 3.5 V (at the capacitor), the robot will wake up, changing into Standby mode. Now the robot can work.

The approximate lifetime with a fully charged capacitor depends on its tasks. With maximum activity, the charge is used up after one or two minutes and then the robot goes into Sleep mode. Under poor conditions (e.g., low light for a long time), the robot has an Emergency mode, during which the robot charges the capacitor from its lithium cell. This gives the robot a chance to leave the bad area or contact the PC…
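The “one or two minutes” figure checks out with a rough energy budget (my own back-of-the-envelope calculation, assuming a 1-V converter cutoff, since the LTC3401 starts from 1 V, and the quoted ~25-mA load on the 5-V rail):

```c
#include <math.h>

/* Energy stored in a capacitor discharged from v_start to v_end:
   E = 0.5 * C * (v_start^2 - v_end^2). For the 2.2-F capacitor
   between the 3.5-V wake-up level and a 1-V cutoff, about 12.4 J. */
double cap_energy_J(double farads, double v_start, double v_end)
{
    return 0.5 * farads * (v_start * v_start - v_end * v_end);
}

/* Active time on that stored energy at a given load power and
   converter efficiency. 12.4 J * 0.96 / (5 V * 25 mA) is roughly
   95 s -- about a minute and a half, matching the article. */
double active_seconds(double farads, double v_start, double v_end,
                      double load_W, double efficiency)
{
    return cap_energy_J(farads, v_start, v_end) * efficiency / load_W;
}
```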

The control software runs on a normal PC, and all you need is a small radio box to get the signals from the robot.

The Robot Control Center serves as an interface to control the robot. Its main feature is to display the transmitted pictures and measurement values of the sensors.

Various buttons and throttles give you full control of the robot when power is available or sunlight hits the solar cells. In addition, it’s easy to make short slide shows from the pictures captured by the robot. Each session can be saved on a disk and played in the Robot Control Center…

The entire article appears in Circuit Cellar 147, 2002. Type “solarrobot” to access the password-protected article.

Q&A: Guido Ottaviani (Roboticist, Author)

Guido Ottaviani designs and creates innovative microcontroller-based robot systems in the city from which remarkable innovations in science and art have originated for centuries.

Guido Ottaviani

By day, the Rome-based designer is a technical manager for a large Italian editorial group. In his spare time he designs robots and interacts with various other “electronics addicts.” In a candid interview published in Circuit Cellar 265 (August 2012), Guido described his fascination with robotics, his preferred microcontrollers, and some of his favorite design projects. Below is an abridged version of the interview.

NAN PRICE: What was the first MCU you worked with? Where were you at the time? Tell us about the project and what you learned.

GUIDO OTTAVIANI: The very first one was not technically an MCU, that was too early. It was in the mid 1980s. I worked on an 8085 CPU-based board with a lot of peripherals, clocked at 470 kHz (less than half a megahertz!) used for a radio set control panel. I was an analog circuits designer in a big electronics company, and I had started studying digital electronics on my own on a Bugbook series of self-instruction books, which were very expensive at that time. When the company needed an assembly programmer to work on this board, I said, “Don’t worry, I know the 8085 CPU very well.” Of course this was not true, but they never complained, because that job was done well and within the scheduled time.

I learned a lot about how to optimize CPU cycles on a slow processor. The program had very little time to switch off the receiver to avoid destroying it before the powerful transmitter started.

Flow charts on paper, a Motorola development system with the program saved on an 8” floppy disk, a very primitive character-based editor, the program burned on an external EPROM and erased with a UV lamp. That was the environment! When, 20 years later, I started again with embedded programming for my hobby, using Microchip Technology’s MPLAB IDE (maybe still version 6.xx) and a Microchip Technology PIC16F84, it looked like paradise to me, even if I had to relearn almost everything.

But, what I’ve learned about code optimization—both for speed and size—is still useful even when I program the many resources on the dsPIC33F series…

NAN: You worked in the field of analog and digital development for several years. Tell us a bit about your background and experiences.

GUIDO: Let me talk about my first day of work, exactly 31 years ago.

Being a radio amateur and electronics fan, I went often to the surplus stores to find some useful components and devices, or just to touch the wonderful receivers or instruments: Bird wattmeters, Collins or Racal receivers, BC 312, BC 603 or BC 1000 military receivers and everything else on the shelves.

The first day of work in the laboratory they told me, “Start learning that instrument.” It was a Hewlett-Packard spectrum analyzer (maybe an HP85-something) that cost more than 10 times my annual gross salary at that time. I still remember the excitement of being able to touch it, for that day and the following days. Working in a company full of these kinds of instruments (the building even had a repair and calibration laboratory with HP employees), with more than a thousand engineers who knew everything from DC to microwaves to learn from, was like living in Eden. The salary was a secondary issue (at that time).

I worked on audio and RF circuits in the HF to UHF bands: active antennas, radiogoniometers, first tests on frequency hopping and spread spectrum, and a first sample of a Motorola 68000-based GPS as big as a microwave oven.

Each instrument had an HPIB (or GPIB or IEEE488) interface to the computer. So I started approaching this new (for me) world of programming an HP9845 computer (with a cost equivalent to 5 years of my salary then) to build up automatic test sets for the circuits I developed. I was very satisfied when I was able to obtain a 10-Hz frequency hopping by a Takeda-Riken frequency synthesizer. It was not easy with such a slow computer, BASIC language, and a bus with long latencies. I had to buffer each string of commands in an array and use some special pre-caching features of the HPIB interface I found in the manual.

Every circuit, even an analog one, was interfaced through digital ports. The boards were full of SN74xx (or SN54xx) ICs, just to perform simple functions such as switching and multiplexing. Here again, the gaps in my knowledge were filled by the accumulated experience of the Bugbook series.

Well: audio, RF, programming, communications, interfacing, digital circuits. What was I still missing for a good background in my coming hobby of robotics? Ah! Embedded programming. But I’ve already mentioned that experience.

After all these design jobs, because my knowledge had spread across many different projects, it was natural to start working as a system engineer, taking care of all aspects of a complex system, including project management.

NAN: You have a long-time interest in robotics and autonomous robot design. When did you first become interested in robots and why?

GUIDO: I’ve loved the very simple robots in toy store windows since I was young, the same ones I have on my website (Pino and Nino). But they were too simple, and making anything even a little more sophisticated required too much electronics at the time.

After a big gap in my electronics activity, I discovered a newly published robotic magazine, with an electronic parts supplement. This enabled me to build a programmable robot based on a Microchip PIC16F84. A new adventure started. I felt much younger. Suddenly, all the electronics-specialized neurons inside my brain, after being asleep for many years, woke up and started running again. Thanks to the Internet (not yet available when I left professional electronics design), I discovered a lot of new things: MCUs, free IDEs running even on a simple computer, free compilers, very cheap programming devices, and lots of documentation freely available. I threw away the last Texas Instruments databook I still had on my bookshelf and started studying again. There were a lot of new things to know, but, with a good background, it was a pleasant (if frantic) study. I’ve also bought some books, but they became old before I finished reading them.

Within a few months, jumping among all the hardware and software versions Microchip released at an astonishing rate, I found Johann Borenstein et al.’s book Where Am I?: Systems and Methods for Mobile Robot Positioning (University of Michigan, 1996). This report and Borenstein’s website taught me a lot about autonomous navigation techniques. David P. Anderson’s “My Robots” webpage inspired all my robots, completed or forthcoming.

I’ve never wanted to make a remote-controlled car, so my robots must navigate autonomously in an unknown environment, reacting to the external stimuli. …

NAN: Robotics is a focal theme in many of the articles you have contributed to Circuit Cellar. One of your article series, “Robot Navigation and Control” (224–225, 2009), was about a navigation control subsystem you built for an autonomous differential steering explorer robot. The first part focused on the robotic platform that drives motors and controls an H-bridge. You then described the software development phase of the project. Is the project still in use? Have you made any updates to it?

The “dsNavCon” system features a Microchip Technology dsPIC30F4012 motor controller and a general-purpose dsPIC30F3013. (Source: G. Ottaviani, CC224)

GUIDO: After I wrote that article series, the project kept evolving until the beginning of this year. There is a new switched-mode power supply and a new audio sensor, the latest version of the dsPIC33-based dsNav board has become commercially available online, and there have been some mechanical changes, improvements to the telemetry console, a lot of modifications to the firmware, and a successful UMBmark calibration.

The goal has been reached. That robot was a lab for experimenting with sensors, solutions, and technologies. Now I’m ready for a further step: outdoors.

NAN: You wrote another robotics-related article in 2010 titled, “A Sensor System for Robotics Applications” (Circuit Cellar 236). Here you describe adding senses—sight, hearing, and touch—to a robotics design. Tell us about the design, which is built around an Arduino Diecimila board. How does the board factor into the design?

An Arduino-based robot platform (Source: G. Ottaviani, CC236)

GUIDO: That was the first time I used an Arduino. I’d always used PICs, and I wanted to test this well-known board. In that case, I needed to interface many I2C and analog sensors, plus an I2C I/O expander. I didn’t want to waste time configuring peripherals, all the sensors had 5-V I/O, and the computing time constraints were not strict. The Arduino fit all of these prerequisites perfectly. The learning curve was very fast: there was already a library for every device I used, and there was no need for voltage-level translation between 3.3 V and 5 V. Everything was easy, fast, and cheap. Why not use it for these kinds of projects?

NAN: You designed an audio sensor for a Rino robotic platform (“Sound Tone Detection with a PSoC Part 1 and Part 2,” Circuit Cellar 256–257, 2011). Why did you design the system? Did you design it for use at work or home? Give us some examples of how you’ve used the sensor.

GUIDO: I already had a sound board based on classic op-amp ICs. I discovered the PSoC devices in a robotic meeting. At that moment, there was a special offer for the PSoC1 programmer and, incidentally, it was close to my birthday. What a perfect gift from my relatives!

This was another excuse to study a completely different programmable device and add something new to my background. The learning curve was not as easy as the Arduino’s; it is really different because… it does a very different job. The new PSoC-based audio board was smaller and simpler, with many more features than the previous one. The original design detected a fixed 4-kHz tone, but now it is easy to change the center frequency, the bandwidth, and the behavior of the board. This confirms once more, if confirmation were needed, that this kind of professional design is now within a hobbyist’s reach. …

NAN: What do you consider to be the “next big thing” in the embedded design industry? Is there a particular technology that you’ve used or seen that will change the way engineers design in the coming months and years?

GUIDO: As often happens, the “big thing” is one of the smallest ones. Many manufacturers are working on micro-, nano-, and picowatt devices. I’ve done a little, though not very extreme, experimenting with my Pendulum project. Using the sleep features of a simple PIC10F22P with some care, I’ve kept the pendulum bob oscillating for a year on a couple of AAA batteries, and it is still oscillating behind me right now.

Because of these kinds of MCUs, we can start thinking seriously about energy harvesting. We can get energy from light, heat, any kind of RF, or movement to build self-powered, autonomous devices. Thanks to smartphones, PDAs, tablets, and other portable devices, MEMS sensors have become smaller and less expensive.

In my opinion, all this technology—together with supercapacitors, solid-state batteries or similar—will spread many small devices everywhere to monitor everything.

The entire interview is published in Circuit Cellar 265 (August 2012).

The Basics of Thermocouples

Whether you’re looking to build a temperature-sensing device or you need to add sensing capabilities to a larger system, you should familiarize yourself with thermocouples and understand how to design thermocouple interfaces. Bob Perrin covered these topics and more in his 1999 Circuit Cellar Online article, “The Basics of Thermocouples.” The article appears below in its entirety.

A mathematician, a physicist, and an engineer were at lunch. The bartender asked the three gentlemen, “What is this pi I hear so much about?”

The mathematician replied, “pi is the ratio of a circle’s circumference to its diameter.”

The physicist answered, “pi is 3.14159265359.”

The engineer looked up, flatly stated, “Oh, pi’s about three,” then promptly went back to doodling on the back of his napkin.

The point is not that engineers are sloppy, careless, or socially inept. The point is that we are eminently practical. We are solvers of problems in a non-ideal world. This means we must be able to apply concepts to real problems and know when certain effects are negligible in our application.

For example, when designing first- or second-order filters, 3 is often a close enough approximation for pi, given the tolerance and temperature dependence of affordable components.

But, before we can run off and make gross approximations, we must understand the physical principles involved in the system we’re designing. One topic that seems to suffer from gross approximations without a firm understanding of the issues involved is temperature measurement with thermocouples.

Thermocouples are simple temperature sensors consisting of two wires made from dissimilar alloys. These devices are simple in construction and easy to use. But, like any electronic component, they require a certain amount of explanation. The intent of this paper is to present and explain how to use thermocouples and how to design thermocouple interfaces.


Figure 1a shows a thermocouple. One junction is designated the hot junction. The other junction is designated as the cold or reference junction. The current developed in the loop is proportional to the difference in temperature between the hot and cold junctions. Thermocouples measure differences in temperature, not absolute temperature.

Figure 1a: Two wires are all that are required to form a thermocouple.

To understand why a current is formed, we must revert to physics. Unfortunately, I’m not a physicist, so this explanation may bend a concept or two, but I’ll proceed nonetheless.

Consider a homogenous metallic wire. If heat is applied at one end, the electrons at that end become more energetic. They absorb energy and move out of their normal energy states and into higher ones. Some will be liberated from their atoms entirely. These newly freed highly energetic electrons move toward the cool end of the wire. As these electrons speed down the wire, they transfer their energy to other atoms. This is how energy (heat) is transferred from the hot end to the cool end of the wire.

As these electrons build up at the cool end of the wire, they experience an electrostatic repulsion. The not-so-energetic electrons at the cool end move toward the hot end of the wire, which is how charge neutrality is maintained in the conductor.

The electrons moving from the cold end toward the hot end move slower than the energetic electrons moving from the hot end move toward the cool end. But, on a macroscopic level, a charge balance is maintained.

When two dissimilar metals are used to form a thermocouple loop, as in Figure 1a, the difference in the two metals’ affinity for electrons enables a current to develop when a temperature differential is set up between the two junctions.

As electrons move from the cold junction to the hot junction, these not-so-energetic electrons are able to move more easily in one metal than in the other. The electrons moving from the hot end to the cold end have already absorbed a lot of energy and are free to move almost equally well in both wires. This is why an electric current develops in the loop.

I may have missed some finer points of the physics, but I think I hit the highlights. If anyone can offer a more in-depth or detailed explanation, please e-mail me. One of the best things about writing for a technical audience is learning from my readers.


If you use thermocouples, you must insert a measurement device in the loop to acquire information about the temperature difference between the hot and cold junctions. Figure 1b shows a typical setup. The thermocouple wires are brought to a terminal block and an electric circuit measures the open circuit voltage.

Figure 1b: To use a thermocouple, you must have a measurement system.

When the thermocouple wires are connected to the terminal block, an additional pair of thermocouples is formed (one at each screw terminal). This is true if the screw-terminals are a different alloy from the thermocouple wires. Figure 1c shows an alternate representation of Figure 1b. Junction 2 and junction 3 are undesired artifacts of the connection to the measurement circuitry. These two junctions are commonly called parasitic thermocouples.

Figure 1c: The act of connecting a measurement system made of copper introduces two parasitic thermocouples.

In a physical circuit, parasitic thermocouples are formed at every solder joint, connector, and even every internal IC bond wire. If it weren’t for something called the Law of Intermediate Metals, these parasitic junctions would cause us endless trouble.

The Law of Intermediate Metals states that a third metal may be inserted into a thermocouple system without affecting the system if, and only if, the junctions with the third metal are kept isothermal (at the same temperature).

In Figure 1c, if junction 2 and junction 3 are at the same temperature, they will have no effect on the current in the loop. The voltage seen by the voltmeter in Figure 1b will be proportional to the difference in temperature between Junction 1 and Junctions 2 and 3.

Junction 1 is the hot junction. The isothermal terminal block is effectively removed electrically from the circuit, so the temperature of the cold junction is the temperature of the terminal block.


Thermocouples produce a voltage (or loop current) that is proportional to the difference in temperature between the hot junction and the reference junction. If you want to know the absolute temperature at the hot junction, you must know the absolute temperature of the reference junction.

There are three ways to find out the temperature of the reference junction. The simplest method is to measure the temperature at the reference junction with a thermistor or semiconductor temperature sensor such as Analog Devices’ TMP03/04. Then, in software, add the measured thermocouple temperature (the difference between the hot junction and the reference junction) to the measured temperature of the reference junction. This calculation will yield the absolute temperature of the hot junction.
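In software, this first method reduces to a single addition. The sketch below is a minimal illustration in Python (the function and variable names are hypothetical, not from the article), assuming the thermocouple’s Seebeck voltage has already been converted to a temperature difference in degrees Celsius:

```python
def hot_junction_temp(thermocouple_delta_c, reference_temp_c):
    """Software cold-junction compensation (method 1).

    thermocouple_delta_c: temperature difference (hot minus reference)
        derived from the thermocouple's Seebeck voltage.
    reference_temp_c: absolute temperature of the reference junction,
        measured with a thermistor or semiconductor sensor.
    """
    return reference_temp_c + thermocouple_delta_c

# Example: the thermocouple reports a 75.0 degC difference and the
# terminal block (reference junction) sits at 23.5 degC.
print(hot_junction_temp(75.0, 23.5))  # 98.5 degC absolute
```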

The second method involves holding the reference junction at a fixed and known temperature. An ice bath, or an ice slushy, is one of the most common methods used in laboratory settings. Figure 2 shows how this is accomplished.

Figure 2: By inserting a short pigtail of Metal A onto the terminal block where Metal B would normally connect, we move the cold junction.

Alternately, we could have omitted the pigtail of Metal A and just immersed the terminal block in the ice. This would work fine, but it would be much messier than the method shown in Figure 2.

Sometimes, the temperature of the cold junction (terminal block) in Figure 1c is allowed to float to ambient. Then ambient is assumed to be “about 25°C,” or some other “close enough” temperature. This method is usually found in systems where knowing the temperature of the hot junction is not overly critical.

The third method used to nail down the cold junction temperature is to use a cold junction compensation IC such as the Analog Devices AD594 or Linear Technology LT1025. This method sort of combines the first two methods.

These ICs have a temperature sensor in them that detects the temperature of the cold junction. This is presumably the same temperature as the circuit board on which the IC is mounted. The IC then produces a voltage that is proportional to the voltage produced by a thermocouple with its hot junction at ambient and its cold junction at 0°C. This voltage is added to the EMF produced by the thermocouple. The net effect is the same as if the cold junction were physically held at 0°C.

The act of knowing (or approximating) the cold junction temperature and taking this information in to account in the overall measurement is referred to as cold junction compensation. The three techniques I discussed are each methods of cold junction compensation.

The ice bath is probably the most accurate method. An ice slushy can maintain a uniformity of about 0.1°C without much difficulty. I’ve read that an ice bath can maintain a uniformity of 0.01°C, but I’ve never been able to achieve that level of uniformity. Ice baths are physically awkward and therefore usually impractical for industrial measurements.

The off-the-shelf cold junction compensation ICs can be expensive and generally are only accurate to a few degrees Celsius, but many systems use these devices.

Using a thermistor, or even the PN junction of a diode or BJT, to measure the cold junction temperature can be fairly inexpensive and quite accurate. The most common difficulty encountered with this approach is calibration. Prudent positioning of the sensor near, or on, the terminal block is important.

If the terminal block is to be used as the cold junction (see Figure 1b), the terminal block must be kept isothermal. In practice, keeping the terminal block truly isothermal is almost impossible, so compromises must be made. This is the stock-in-trade of engineers. Knowing what is isothermal “enough” for your application is the trick.

Lots of money can be wasted on precision electronics if the terminal block’s screw terminals are allowed to develop a significant thermal gradient. This condition generally happens when power components are placed near the terminal blocks. You must pay careful attention to keeping the temperature stable around the terminal blocks.

There are two broad classes of temperature-measurement applications. The first class involves measuring absolute temperature. For example, you may want to know the temperature of the inside of an oven relative to a standard temperature scale (like the Celsius scale). This type of application requires that you know precisely the absolute temperature of the reference junction.

The second type of measurement involves measuring differences in temperature. For example, in a microcalorimeter, you may want to measure the temperature of the system, then start some chemical reaction and measure the temperature as the reaction proceeds. The information of value is the difference between the first measurement and the subsequent ones.

Systems that measure temperature differences are generally easier to construct because control or precise measurement of the reference junction isn’t required. What is required is that the reference junction remain at a constant temperature while the two measurements occur. Whether the reference junction is at 25.0°C or 30.0°C isn’t relevant because the subtraction of consecutive measurements will remove the reference junction temperature from the computed answer.

You can use thermocouples to make precise differential temperature measurements, but you must ensure the terminal block forming the cold junction is “close enough” to isothermal. You must also ensure that the cold junction has enough thermal mass so it will not change temperature over the time you have between measurements.


Thermocouples are given a letter designation that indicates the materials they are fabricated from. This letter designation is called the thermocouple’s “type.” Table 1 shows the common thermocouples available and their usable temperature ranges.

Table 1: There are a wide variety of industry-standard alloy combinations that form standard thermocouples. The most commonly used are J, K, T, and E.

Each thermocouple type will produce a different open-circuit voltage (Seebeck voltage) for a given set of temperature conditions. None of these devices are linear over a full range of temperatures. There are standard tables available that tabulate Seebeck voltages as a function of temperature.[1] There are also standard polynomial models available for thermocouples.

Thermocouples produce a small Seebeck voltage. For example, a type K thermocouple produces about 40 µV per degree Celsius when both junctions are near room temperature. The most sensitive of the thermocouples, type E, produces about 60 µV per degree Celsius when both junctions are near room temperature.

In many applications, the range of temperatures being measured is sufficiently small that the Seebeck voltage is assumed to be linear over the range of interest. This eliminates the need for lookup tables or polynomial computation in the system. Often the loss of absolute accuracy is negligible, but this tradeoff is one the design engineer must weigh carefully.
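As a rough sketch of that linear assumption, the conversion below uses the ~40 µV/°C type K figure quoted above. This is only an approximation over a narrow range; a real design would fall back on the standard tables or polynomials for wide-range accuracy, and the names here are invented for illustration:

```python
# Approximate sensitivity of a type K thermocouple with both
# junctions near room temperature (from the article's figures).
SEEBECK_UV_PER_C = 40.0

def delta_temp_c(seebeck_microvolts):
    """Convert a measured Seebeck voltage (in microvolts) to a
    hot-minus-cold temperature difference, assuming the Seebeck
    voltage is linear over the small range of interest."""
    return seebeck_microvolts / SEEBECK_UV_PER_C

print(delta_temp_c(2000.0))  # a 2-mV reading -> 50.0 degC difference
```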


When designing a thermocouple interface, there are only a few pieces of information you need to know:

  • what type of thermocouple will be used
  • what is the full range of temperatures the hot junction will be exposed to
  • what is the full range of temperatures the cold junction will be exposed to
  • what is the temperature resolution required for your application
  • does your system require galvanic isolation
  • what type of cold junction compensation will be used

If the answer to the last question requires the analog addition of a voltage from a commercial cold junction compensation IC, then the manufacturer of the IC will probably supply you with an adequate reference design. If you plan to do the cold-junction compensation either physically (by an ice bath) or in software (by measuring the cold junction’s temperature with another device), then you must build or buy a data-acquisition system.

Galvanic isolation is an important feature in many industrial applications. Because thermocouples are really just long loops of wire, they will often pick up high levels of common-mode noise. In some applications, the thermocouples may be bonded to equipment that is at line voltage (or higher).

In this case, galvanic isolation is required to keep high-voltage AC out of your data acquisition system. This type of isolation is usually accomplished in one of two ways—using either an opto-isolator or a transformer. Both systems require the thermocouple signal conditioner to allow its ground to float with respect to earth ground. Figure 3a and 3b outlines these schemes.

Figure 3: Galvanic isolation to a few thousand volts is easy (but a little expensive) using opto-isolation (a) and inexpensive (but a bit more challenging) using a VFC and a transformer (b).

Because the focus of this article is on the interface to the thermocouple, I’ll have to leave the details of implementing galvanic isolation to another article.

Given the tiny voltage levels produced by a thermocouple, the designer of the signal-conditioning module should focus carefully on noise rejection. Using the common-mode rejection (CMR) characteristics of a differential amplifier is a good place to start. Figure 4 shows a simple yet effective thermocouple interface.

Figure 4: The common-mode filter and common-mode rejection characteristics pay off in thermocouple amplifiers.

The monolithic instrumentation amplifier (in-amp) is a $2–$5 part (depending on grade and manufacturer). These are usually 8-pin DIP or SOIC devices. In-amps are simple differential amplifiers. The gain is set with a single external resistor. The input impedance of an in-amp is typically 10 gigaohms.

Certainly you can use op-amps, or even discrete parts, to build a signal conditioner. However, all the active components of a monolithic in-amp are on the same die and are kept more or less isothermal, which means an in-amp’s characteristics behave nicely over temperature. Good CMR, controllable gain, small size, and high input impedance make in-amps perfect as the heart of a thermocouple conditioning circuit.

Temperature tends to change relatively slowly. So, if you find your system has noise, you can usually install supplementary low-pass filters. These can be implemented in hardware or software. In many systems, it’s not uncommon to take 128 measurements over 1 s and then average the results. Digital filters are big cost reducers in production systems.
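A block-averaging filter of that kind takes only a few lines in software. The snippet below is a minimal illustration (Python, with hypothetical names and made-up readings), not the article’s own code:

```python
def averaged_reading(samples):
    """Simple digital low-pass filter: average a block of ADC samples.
    Temperature changes slowly, so averaging (say) 128 samples taken
    over one second suppresses uncorrelated noise by roughly sqrt(N)."""
    return sum(samples) / len(samples)

# Noisy ADC readings scattered around 512 counts:
readings = [510, 514, 511, 513, 512, 512, 509, 515]
print(averaged_reading(readings))  # 512.0
```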

Another problem often faced when designing thermocouple circuits is nulling amplifier offset. You can null the amplifier offset in a variety of ways [2], but my favorite is by chopping the input. Figure 5 shows how this process can be accomplished.

Figure 5: An input chopper like a CD4052 is all that is necessary to null signal conditioner offsets.

Thermocouples have such small signal levels that gains on the order of 1,000 V/V are not uncommon, which means an op-amp or in-amp with a voltage offset of even 1 mV will produce an offset at the output on the order of volts.

The chopper in Figure 5 allows the microcontroller to reverse the polarity of the thermocouple. To null the circuit, the microcontroller will take two measurements then subtract them.

First, set the chopper so the ADC measures GAIN (Vsensor + Voffset). Second, set the chopper so the ADC measures GAIN (–Vsensor + Voffset).

Subtract the second measurement from the first and divide by two. The result is GAIN*Vsensor. As you can see, this is exactly the quantity we are interested in. The in-amp’s offset has been removed from the measurement.
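The arithmetic can be sketched as follows (Python, with hypothetical names; the gain, signal, and offset values are made up for illustration):

```python
def nulled_measurement(m_positive, m_negative):
    """Remove amplifier offset by chopping the input.

    m_positive = GAIN * ( Vsensor + Voffset)
    m_negative = GAIN * (-Vsensor + Voffset)
    Subtracting cancels Voffset; dividing by two leaves GAIN*Vsensor.
    """
    return (m_positive - m_negative) / 2.0

GAIN = 1000.0       # in-amp gain, V/V
V_SENSOR_MV = 2.0   # 2-mV thermocouple signal
V_OFFSET_MV = 1.0   # 1-mV amplifier input offset
m1 = GAIN * (V_SENSOR_MV + V_OFFSET_MV)   # chopper "normal":   3000 mV
m2 = GAIN * (-V_SENSOR_MV + V_OFFSET_MV)  # chopper "reversed": -1000 mV
print(nulled_measurement(m1, m2))         # 2000.0 mV = GAIN * Vsensor
```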


In 1821, Thomas J. Seebeck discovered that if a junction of two dissimilar metals is heated, a voltage is produced. This voltage has since been dubbed the Seebeck voltage.

Thermocouples are found in everything from industrial furnaces to medical devices. At first glance, thermocouples may seem fraught with mystery. They are not. After all, how can a device that’s built from two wires and has been around for 180 years be all that tough to figure out?

When designing with thermocouples, just keep these four concepts in mind and the project will go much smoother. First, thermocouples produce a voltage that is proportional to the difference in temperature between the hot junction and the reference junction.

Second, because thermocouples measure relative temperature differences, cold junction compensation is required if the system is to report absolute temperatures. Cold-junction compensation simply means knowing the absolute temperature of the cold junction and adjusting the reported temperature value accordingly.

The third thing to remember is that thermocouples have a small Seebeck voltage coefficient, typically on the order of tens of microvolts per degree Celsius. And last, thermocouples are non-linear across their temperature range. Linearization, if needed, is best done in software.

Armed with these concepts, the circuits in this article, and a bit of time, you should have a good start on being able to design a thermocouple into your next project.

Bob Perrin has designed instrumentation for agronomy, soil physics, and water activity research. He has also designed embedded controllers for a variety of other applications.



[2] B. Perrin, “Practical Analog Design,” Circuit Cellar, #94, May 1998.


AD594, TMP03/04
Analog Devices

Texas Instruments (Burr-Brown Corp.)

Linear Technology

This article was originally published in Circuit Cellar Online in 1999. Posted with permission. Circuit Cellar and Circuit Cellar Online are Elektor International Media publications.


Seven-Controller EtherCAT Orchestra

When I first saw the Intel Industrial Control in Concert demonstration at Design West 2012 in San Jose, CA, I immediately thought of Kurt Vonnegut’s 1952 novel Player Piano. The connection, of course, is that the player piano in the novel and Intel’s Atom-based robotic orchestra both play preprogrammed music without human involvement. But the similarities end there. Vonnegut used the self-playing piano as a metaphor for a mechanized society in which wealthy industrialists replaced human workers with automated machines. In contrast, Intel’s innovative system demonstrated engineering excellence and created a buzz in the already positive atmosphere at the conference.

In “EtherCAT Orchestra” (Circuit Cellar 264, July 2012), Richard Wotiz carefully details the awe-inspiring music machine that’s built around seven embedded systems, each of which is based on Intel’s Atom D525 dual-core microprocessor. He provides information about the system you can’t find on YouTube or hobby tech blogs. Here is the article in its entirety.

EtherCAT Orchestra

I have long been interested in automatically controlled musical instruments. When I was little, I remember being fascinated whenever I ran across a coin-operated electromechanical calliope or a carnival hurdy-gurdy. I could spend all day watching the many levers, wheels, shafts, and other moving parts as it played its tunes over and over. Unfortunately, the mechanical complexity and expertise needed to maintain these machines makes them increasingly rare. But, in our modern world of pocket-sized MP3 players, there’s still nothing like seeing music created in front of you.

I recently attended the Design West conference (formerly the Embedded Systems Conference) in San Jose, CA, and ran across an amazing contraption that reminded me of old carnival music machines. The system was created for Intel as a demonstration of its Atom processor family, and was quite successful at capturing the attention of anyone walking by Intel’s booth (see Photo 1).

Photo 1—This is Intel’s computer-controlled orchestra. It may not look like any musical instrument you’ve ever seen, but it’s quite a thing to watch. The inspiration came from Animusic’s “Pipe Dream,” which appears on the video screen at the top. (Source: R. Wotiz)

The concept is based on Animusic’s music video “Pipe Dream,” which is a captivating computer graphics representation of a futuristic orchestra. The instruments in the video play when virtual balls strike against them. Each ball is launched at a precise time so it will land on an instrument the moment each note is played.

The demonstration, officially known as Intel’s Industrial Control in Concert, uses high-speed pneumatic valves to fire practice paintballs at plastic targets of various shapes and sizes. The balls are made of 0.68”-diameter soft rubber. They put on quite a show bouncing around while a song played. Photo 2 shows one of the pneumatic firing arrays.

Photo 2—This is one of several sets of pneumatic valves. Air is supplied by the many tees below the valves and is sent to the ball-firing nozzles near the top of the photo. The corrugated hoses at the top supply balls to the nozzles. (Source: R. Wotiz)

The valves are the gray boxes lined up along the center. When each one opens, a burst of air is sent up one of the clear hoses to a nozzle to fire a ball. The corrugated black hoses at the top supply the balls to the nozzles. They’re fed by paintball hoppers that are refilled after each performance. Each nozzle fires at a particular target (see Photo 3).

Photo 3—These are the targets at which the nozzles from Photo 2 are aimed. If you look closely, you can see a ball just after it bounced off the illuminated target at the top right. (Source: R. Wotiz)

Each target has an array of LEDs that shows when it’s activated and a piezoelectric sensor that detects a ball’s impact. Unfortunately, slight variations in the pneumatics and the balls themselves mean that not every ball makes it to its intended target. To avoid sounding choppy and incomplete, the musical notes are triggered by a fixed timing sequence rather than the ball impact sensors. Think of it as a form of mechanical lip syncing. There’s a noticeable pop when a ball is fired, so the system sounds something like a cross between a pinball machine and a popcorn popper. You may expect that to detract from the music, but I felt it added to the novelty of the experience.

The control system consists of seven separate embedded systems, all based on Intel’s Atom D525 dual-core microprocessor, on an Ethernet network (see Figure 1).

Figure 1—Each block across the top is an embedded system providing some aspect of the user interface. The real-time interface is handled by the modules at the bottom. They’re controlled by the EtherCAT master at the center. (Source: R. Wotiz)

One of the systems is responsible for the real-time control of the mechanism. It communicates over an Ethernet control automation technology (EtherCAT) bus to several slave units, which provide the I/O interface to the sensors and actuators.


EtherCAT is a fieldbus providing high-speed, real-time control over a conventional 100-Mb/s Ethernet hardware infrastructure. It’s a relatively recent technology, originally developed by Beckhoff Automation GmbH and currently managed by the EtherCAT Technology Group (ETG), which was formed in 2003. You need to be an ETG member to access most of its specification documents, but general information is publicly available. According to the ETG website, membership is currently free to qualified companies. EtherCAT was also made part of international standard IEC 61158, “Industrial Communication Networks—Fieldbus Specifications,” in 2007.

EtherCAT uses standard Ethernet data frames, but instead of each device decoding and processing an individual frame, the devices are arranged in a daisy chain, where a single frame is circulated through all devices in sequence. Any device with an Ethernet port can function as the master, which initiates the frame transmission. The slaves need specialized EtherCAT ports. A two-port slave device receives and starts processing a frame while simultaneously sending it out to the next device (see Figure 2).

Figure 2—Each EtherCAT slave processes incoming data as it sends it out the downstream port. (Source: R. Wotiz)

The last slave in the chain detects that there isn’t a downstream device and sends its frame back to the previous device, where it eventually returns to the originating master. This forms a logical ring by taking advantage of both the outgoing and return paths in the full-duplex network. The last slave can also be directly connected to a second Ethernet port on the master, if one is available, creating a physical ring. This creates redundancy in case there is a break in the network. A slave with three or more ports can be used to form more complex topologies than a simple daisy chain. However, this wouldn’t speed up network operation, since a frame still has to travel through each slave, one at a time, in both directions.

The EtherCAT frame, known as a telegram, can be transmitted in one of two different ways depending on the network configuration. When all devices are on the same subnet, the data is sent as the entire payload of an Ethernet frame, using an EtherType value of 0x88A4 (see Figure 3a).

Figure 3a—An EtherCAT frame uses the standard Ethernet framing format with very little overhead. The payload size shown includes both the EtherCAT telegram and any padding bytes needed to bring the total frame size up to 64 bytes, the minimum size for an Ethernet frame. b—The payload can be encapsulated inside a UDP frame if it needs to pass through a router or switch. (Source: R. Wotiz)

If the telegrams must pass through a router or switch onto a different physical network, they may be encapsulated within a UDP datagram using a destination port number of 0x88A4 (see Figure 3b), though this will affect network performance. Slaves do not have their own Ethernet or IP addresses, so all telegrams will be processed by all slaves on a subnet regardless of which transmission method was used. Each telegram contains one or more EtherCAT datagrams (see Figure 4).

Each datagram includes a block of data and a command indicating what to do with the data. The commands fall into three categories. Write commands copy the data into a slave’s memory, while read commands copy slave data into the datagram as it passes through. Read/write commands do both operations in sequence, first copying data from memory into the outgoing datagram, then moving data that was originally in the datagram into memory. Depending on the addressing mode, the read and write operations of a read/write command can both access the same or different devices. This enables fast propagation of data between slaves.
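The three command categories can be sketched as a toy model (hypothetical Python, not a real EtherCAT stack): a slave’s memory is a byte array, and the function returns the data field that travels on to the next device.

```python
# Toy model of EtherCAT datagram commands (illustrative sketch only).
# A slave's memory is a byte array; the return value is the data field
# that the datagram carries onward to the next device.

def process_datagram(command, slave_memory, offset, data):
    n = len(data)
    if command == "write":
        # Copy the datagram's data into slave memory.
        slave_memory[offset:offset + n] = data
        return data
    if command == "read":
        # Copy slave memory into the outgoing datagram.
        return bytearray(slave_memory[offset:offset + n])
    if command == "readwrite":
        # First read the old contents out, then write the incoming data.
        old = bytearray(slave_memory[offset:offset + n])
        slave_memory[offset:offset + n] = data
        return old
    raise ValueError("unknown command: " + command)

mem = bytearray(8)
process_datagram("write", mem, 0, bytearray(b"\x01\x02"))
out = process_datagram("readwrite", mem, 0, bytearray(b"\xaa\xbb"))
# mem[:2] now holds AA BB, while `out` carries the old 01 02 downstream,
# which is how a read/write command propagates data between slaves.
```

The read/write case shows why a single circulating frame is so efficient: one datagram can pick up data from one device and drop data into another in the same pass.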

Each datagram contains addressing information that specifies which slave device should be accessed and the memory address offset within the slave to be read or written. A 16-bit value for each enables up to 65,535 slaves to be addressed, each with a 65,536-byte address space. The command code specifies which of four addressing modes to use. Position addressing specifies a slave by its physical location on the network: each slave increments the address as it passes the datagram on to the next device, and a slave is selected only when it sees an address value of zero. The master can therefore select a device by setting the address to the negative of the number of devices preceding it on the network. This addressing mode is useful during system startup, before the slaves have been configured with unique addresses. Node addressing specifies a slave by its configured address, which the master sets during the startup process. This mode enables direct access to a particular device’s memory or control registers. Logical addressing takes advantage of one or more fieldbus memory management units (FMMUs) on a slave device. Once configured, an FMMU translates a logical address to any desired physical memory address. This may include the ability to specify individual bits in a data byte, which provides an efficient way to control specific I/O ports or register bits without having to send any more data than needed. Finally, broadcast addressing selects all slaves on the network. For broadcast reads, slaves send out the logical OR of their data with the data from the incoming datagram.
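Position addressing’s auto-increment trick is easy to model (a simplified Python sketch; the function name is mine):

```python
# Simplified model of EtherCAT position addressing: the master sets the
# address field to the negative of the number of slaves preceding the
# target; each slave increments it in passing, and the slave that sees
# zero is the one selected.

def selected_slave(address, num_slaves):
    selected = None
    for position in range(num_slaves):
        if address == 0:
            selected = position  # this slave handles the datagram
        address += 1             # incremented as the datagram passes through
    return selected

# Three slaves precede the fourth device, so an address of -3 selects it:
print(selected_slave(-3, 10))  # → 3
```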

Each time a slave successfully reads or writes data contained in a datagram, it increments the working counter value (see Figure 4).

Figure 4—An EtherCAT telegram consists of a header and one or more datagrams. Each datagram can be addressed to one slave, a particular block of data within a slave, or multiple slaves. A slave can modify the datagram’s Address, C, IRQ, Process data, and WKC fields as it passes the data on to the next device. (Source: R. Wotiz)

This enables the master to confirm that all the slaves it was expecting to communicate with actually handled the data sent to them. If a slave is disconnected, or its configuration changes so it is no longer being addressed as expected, then it will no longer increment the counter. This alerts the master to rescan the network to confirm the presence of all devices and reconfigure them, if necessary. If a slave wants to alert the master of a high-priority event, it can set one or more bits in the IRQ field to request the master to take some predetermined action.


Frames are processed in each slave by a specialized EtherCAT slave controller (ESC), which extracts incoming data and inserts outgoing data into the frame as it passes through. The ESC operates at a high speed, resulting in a typical data delay from the incoming to the outgoing network port of less than 1 μs. The operating speed is often dominated by how fast the master can process the data, rather than the speed of the network itself. For a system that runs a process feedback loop, the master has to receive data from the previous cycle and process it before sending out data for the next cycle. The minimum cycle time T_CYC is given by: T_CYC = T_MP + T_FR + N × T_DLY + 2 × T_CBL + T_J, where T_MP is the master’s processing time, T_FR is the frame transmission time on the network (80 ns per data byte plus 5 μs of frame overhead), N is the total number of slaves, T_DLY is the sum of the forward and return delay times through each slave (typically 600 ns), T_CBL is the cable propagation delay (5 ns per meter for Category 5 Ethernet cable), and T_J is the network jitter (determined by the master).[1]
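As a rough illustration, the cycle-time formula can be evaluated directly. The per-byte, per-slave, and per-meter constants below are the figures quoted above; the master processing time and jitter are illustrative assumptions.

```python
# Direct evaluation of the minimum-cycle-time formula from the text.
# Default constants: 600 ns per slave, 80 ns per data byte, 5 us frame
# overhead, 5 ns per meter of Category 5 cable. All times in microseconds.

def min_cycle_time_us(master_proc_us, frame_bytes, num_slaves, cable_m,
                      jitter_us, slave_delay_us=0.6, byte_time_us=0.08,
                      frame_overhead_us=5.0, cable_ns_per_m=5.0):
    t_fr = frame_bytes * byte_time_us + frame_overhead_us  # frame transmission
    t_cbl = cable_m * cable_ns_per_m / 1000.0              # cable delay in us
    return (master_proc_us + t_fr + num_slaves * slave_delay_us
            + 2 * t_cbl + jitter_us)

# A 100-byte frame, 10 slaves, 20 m of cable, 50 us of master
# processing, and 1 us of jitter:
print(round(min_cycle_time_us(50, 100, 10, 20, 1), 2))  # → 70.2
```

Note how the master’s 50 μs dwarfs the network terms, matching the article’s point that master processing usually dominates.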

A slave’s internal processing time may overlap some or all of these time windows, depending on how its I/O is synchronized. The network may be slowed if the slave needs more time than the total cycle time computed above. A maximum-length telegram containing 1,486 bytes of process data can be communicated to a network of 1,000 slaves in less than 1 ms, not including processing time.

Synchronization is an important aspect of any fieldbus. EtherCAT uses a distributed clock (DC) with a resolution of 1 ns located in the ESC on each slave. The master can configure the slaves to take a snapshot of their individual DC values when a particular frame is sent. Each slave captures the value when the frame is received by the ESC in both the outbound and returning directions. The master then reads these values and computes the propagation delays between each device. It also computes the clock offsets between the slaves and its reference clock, then uses these values to update each slave’s DC to match the reference. The process can be repeated at regular intervals to compensate for clock drift. This results in an absolute clock error of less than 1 μs between devices.
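The delay and offset arithmetic can be sketched as follows (a simplified model with made-up timestamps; real ESCs capture these values in hardware):

```python
# Simplified distributed-clock arithmetic. ref_tx/ref_rx are the
# reference-clock times when the sync frame left and returned at the
# master; slave_out/slave_ret are the slave's local timestamps on the
# outbound and return passes. All values in nanoseconds.

def clock_correction(ref_tx_ns, ref_rx_ns, slave_out_ns, slave_ret_ns):
    round_trip = ref_rx_ns - ref_tx_ns           # as seen by the master
    beyond_slave = slave_ret_ns - slave_out_ns   # time spent past this slave
    delay = (round_trip - beyond_slave) // 2     # one-way propagation delay
    # Offset: what the slave's clock read vs. what the reference clock
    # read at the moment the frame arrived on the outbound pass.
    offset = slave_out_ns - (ref_tx_ns + delay)
    return delay, offset

delay, offset = clock_correction(ref_tx_ns=1_000, ref_rx_ns=3_000,
                                 slave_out_ns=51_400, slave_ret_ns=52_800)
print(delay, offset)  # → 300 50100
```

The master would then write a correction of −50,100 ns to this slave’s DC, and repeat the measurement periodically to track drift.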


The orchestra’s EtherCAT network is built around a set of modules from National Instruments. The virtual conductor is an application running under LabVIEW Real-Time on a CompactRIO controller, which functions as the master device. It communicates with four slaves containing a mix of digital and analog I/O and three slaves consisting of servo motor drives. Both the master and the I/O slaves contain a FPGA to implement any custom local processing that’s necessary to keep the data flowing. The system runs at a cycle time of 1 ms, which provides enough timing resolution to keep the balls properly flying.

I hope you’ve enjoyed learning about EtherCAT—as well as the fascinating musical device it’s used in—as much as I have.

Author’s note: I would like to thank Marc Christenson of SISU Devices, creator of this amazing device, for his help in providing information on the design.


[1] National Instruments Corp., “Benchmarks for the NI 9144 EtherCAT Slave Chassis,”


Animusic, LLC,

Beckhoff Automation GmbH, “ET1100 EtherCAT Slave Controller Hardware Data Sheet, Version 1.8”, 2010,

EtherCAT Technology Group, “The Ethernet Fieldbus”, 2009,

Intel, Atom microprocessor, www/us/en/processors/atom/atom-processor.html.


Atom D525 dual-core microprocessor

Intel Corp.

LabVIEW Real-Time modules, CompactRIO controller, and EtherCAT devices

National Instruments Corp.

Circuit Cellar 264 is now on newsstands, and it’s available at the CC-Webshop.

Q&A: Lawrence Foltzer (Communications Engineer)

In the U.S., a common gift to give someone when he or she finishes school or completes a course of career training is Dr. Seuss’s book, Oh, the Places You’ll Go! I thought of the book’s title when I first read our May interview with engineer Lawrence Foltzer. After finishing electronics training in the U.S. Navy, Foltzer found himself working in such diverse locations as a destroyer in the Mediterranean Sea, IBM’s Watson Research Center in Yorktown Heights, NY, and Optilink, DSC, Alcatel, and Turin Networks in Petaluma, CA. Simply put: his electronics training has taken him to many interesting places!

Foltzer’s interests include fiber optic communication, telecommunications, direct digital synthesis, and robot navigation. He wrote four articles for Circuit Cellar between June 1993 and March 2012.

Lawrence Foltzer presented these frequency-domain test instruments in Circuit Cellar 254 (September 2011). An Analog Devices AD9834-based RFG is on the left. An AD5930-based SFG is on the right. The ICSP interface used to program a Microchip Technology PIC16F627A microcontroller is provided by a dangling RJ connector socket. (Source: L. Foltzer, CC254)

Below is an abridged version of the interview now available in Circuit Cellar 262 (May 2012).

NAN: You spent 30 years working in the fiber optics communication industry. How did that come about? Have you always had an interest specifically in fiber optic technology?

LARRY: My career has taken me many interesting places, working with an amazing group of people, on the cusp of many technologies. I got my first electronics training in the Navy, both operating and maintaining the various anti-submarine warfare systems including the active sonar system; Gertrude, the underwater telephone; and two fire-control electromechanical computers for hedgehog and torpedo targeting. I spent two of my four years in the Navy in schools.

When I got out of the Navy in 1964, I managed to land a job with IBM. I’d applied for a job maintaining computers, but IBM sent me to the Thomas J. Watson Research Center in Yorktown Heights, NY. They gave me several tests on two different visits before hiring me. I was one of four out of forty who got a job. Mine was working in John B. Gunn’s group, preparing Gunn-oscillator samples and assisting the physicists in the group in performing both microwave and high-speed pulsed measurements.

One of my sample preparation duties was the application of AuGeNi ohmic contacts on GaAs samples. Ohmic contacts were essential to the proper operation of the Gunn effect, which is a bulk semiconductor phenomenon. Other labs at the research center were also working with GaAs for other devices: the LED, injection laser diode, and Hall-effect sensors to name a few. It turned out that the evaporated AuGeNi contact used on the Gunn devices was superior to the plated AuSnIn contact, so I soon found myself making pulsed diode lasers driven at 40,000 A per square centimeter. A year later I transferred to Gaithersburg, MD, to IBM-FSD, where I was responsible for transferring laser diode technology to the group that made battlefield laser illuminators and optical radars. We used flexible light guides to bring the output from many lasers together to increase beam brightness.

As the Vietnam war came to an end, IBM closed down the Laser and Quantum Electronics (LQE) group I was in, but at the same time I received a job offer to join Comsat Labs, Clarksburg, MD, from an engineer for whom I had built Gunn devices for phased array studies. So it was back to the world of microwaves for a few years, where I worked on the satellite qualification of tunnel (Esaki) diodes, IMPATT diodes, step-recovery diodes, and GaAs FETs.

About a year after joining Comsat Labs, the former head of the now-defunct IBM-LQE group, Bill Culver, called on me to help him prove to the army that a “single-fiber,” over-the-hill guided missile could replace the TOW missile and save soldiers’ lives from the target tank’s counterfire.

NAN: Tell us about some of your early projects and the types of technologies you used and worked on during that time.

LARRY: So, in 1973-ish, Bill Culver, Gordon Gould (Laser Inventor), and I formed Optelecom, Inc. In those days, when one spoke of fiber optics, one meant fiber bundles. Single fibers were seen as too unreliable, so hundreds of fibers were bundled together so that a loss of tens of fibers only caused a loss of a few percent of the injected light. Furthermore, bundles presented a large cross section to the primitive light sources of the day, which helped increase transmission distances.

Bill remembered seeing one of C. L. Stong’s Amateur Scientist columns in Scientific American about a beam balance based on a silica fiber suspension. In that column, Stong had shown that silica fibers could be made with tensile strengths 20 times that of steel. So a week later, Bill and I had constructed a fiber drawing apparatus in my basement and we drew the first few meters of fiber of the approximately 350 km of fiber we made in my home until we captured our first army contract and opened an office in Gaithersburg, MD.

Our first fibers were for mechanical-strength development. Optical losses measured hundreds of dB/km in those days. But our plastic-clad silica (PCS) fiber losses pretty much tracked those of Corning, Bell Labs, and ITT-EOPD (Electro-Optics Products Division). Pretty soon we were making 8-dB/km fibers up to 6 km in length. I left Optelecom when follow-on contracts with the army slowed; but by that time we had demonstrated missile payout of 4 km of signal-carrying fiber at speeds of 600 ft/s, and slower-speed runs from fixed-wing and Helo RPVs. The first video games were born!

At Optelecom I also worked with Gordon Gould on a CO2 laser-based secure communications system. A ground-based laser interrogated a Stark-effect based modulator and retro-reflector that returned a video signal to the ground station. I designed and developed all of that system’s electronics.

Government funding for our fiber payout work diminished, so I joined ITT-EOPD in 1976. In those days, if you needed a connector or a splice, or a pigtailed LED, laser or detector, you made it yourself; and I was good with my hands. So, in addition to running programs to develop fused fiber couplers, etc., I was also in charge of the group that built the emitters and detectors needed to support the transmission systems group.

NAN: You participated in Motorola’s IEEE-802 MAC subcommittee on token-passing access control methods. Tell us about that experience.

NAN: How long have you been designing MCU-based systems? Tell us about your first MCU-based design.

LARRY: I was in Motorola’s strategic marketing department (SMD) when the Apple II first came on the scene. Some of the folks in the SMD were the developers of the RadioShack Color Computer. Long story short, I quickly became a fan of the MC6809 CPU, and wrote some pretty fancy code for the day that rotated 3-D objects, as well as a more animated version of Space Invaders. I developed a menu-driven EPROM programmer that could program all of the EPROMs then available and then some. My company, Computer Accessories of AZ, advertised in Rainbow magazine until the PC savaged the market. I sold about 1,200 programmers and a few other products before closing up shop.

NAN: Circuit Cellar has published four of your articles about design projects. Your first article, “Long-Range Infrared Communications,” was published in 1993 (Circuit Cellar 35). Which advances in IR technology have most impressed and excited you since then?

LARRY: Vertical-cavity surface-emitting lasers (VCSELs). The Japanese were the first to realize their potential, but did not participate in their early development. Honeywell Optoelectronics was the first to offer 850-nm VCSELs commercially. I think I bought my first VCSELs from Hamilton Avnet in the late 1980s for $6 a pop. But 850 nm is excluded from telecom (Bellcore) use, so companies like Cielo and Picolight went to work on long-wavelength parts. I worked with Cielo on 1310-nm VCSEL array technology while at Turin Networks, and actually succeeded in adding VCSEL transmitter and array-receiver optics to several optical line cards. It was my hope that VCSELs would find their way into the fiber-to-the-home (FTTH) systems of the future, delivering 1 Gbps or more for 33% of what it costs today.

Circuit Cellar 262 (May 2012) is now on newsstands.

Issue 262: Advances in Measurement & Sensor Tech

As I walked the convention center floor at the 2012 Design West conference in San Jose, CA, it quickly became clear that measurement and sensor technologies are at the forefront of embedded innovation. For instance, at the Terasic Technologies booth, I spoke with Allen Houng, Terasic’s Strategic Marketing Manager, about the VisualSonic Studio project developed by students from National Taiwan University. The innovative design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS sensor (D5M)—used interactive tokens to control computer-generated music. Sensor technology figured prominently in the design. It was just one of many exciting projects on display.

In this issue, we feature articles on a variety of measurement- and sensor-related embedded design projects. I encourage you to try similar projects and share your results with our editors.

Starting on page 14, Petre Tzvetanov Petrov describes a multilevel audible logical probe design. Petrov states that when working with digital systems “it is good to have a logical probe with at least four levels in order to more rapidly find the node in the circuit where things are going wrong.” His low-cost audible logical probe indicates four input levels, and there’s an audible tone for each input level.

Matt Oppenheim explains how to use touch sensors to trigger audio tags on electronic devices (p. 20). His design is intended to help visually impaired users. But you can use a few capacitive-touch sensors with an Android device to create the application of your choice.

The portable touch-sensor assembly. The touch-sensor boards are mounted on the back of a digital radio, connected to an IOIO board and a Nexus One smartphone. The Android interface is displayed on the phone. (Source: M. Oppenheim)

Two daisy-chained Microchip Technology mTouch boards with a battery board providing the power and LED boards showing the channel status. (Source: M. Oppenheim)

Read the interview with Lawrence Foltzer on page 30 for a little inspiration. Interestingly, one of his first MCU-based projects was a sonar sensor.

The impetus for Kyle Gilpin’s “menU” design was a microprocessor-based sensor system he installed in his car to display and control a variety of different sensors (p. 34).

The design used to test the menU system on the mbed processor was intentionally as simple as possible. Four buttons drive the menu system and an alphanumeric LCD is used to display the menu. Alternatively, one can use the mbed’s USB-to-serial port to connect with a terminal emulator running on a PC to both display and control the menu system. (Source: K. Gilpin)

The current menU system enables Gilpin to navigate through a hierarchical set of menu items while both observing and modifying the parameters of an embedded design.

The menU system is generic enough to be compiled for most desktop PCs running Windows, OS X, or Linux using the Qt development framework. This screenshot demonstrates the GUI for the menU system. The menu itself is displayed in a separate terminal window. The GUI has four simulated LEDs and one simulated photocell, all of which correspond to the hardware available on the mbed processor development platform. (Source: K. Gilpin)

The final measurement- and sensor-related article in this issue is columnist Richard Wotiz’s “Camera Image Stabilization” (p. 46). Wotiz details various IS techniques.

Our other columnists cover accelerated testing (George Novacek, p. 60), energy harvesting (George Martin, p. 64), and SNAP engine versatility (Jeff Bachiochi, p. 68).

Lastly, I’m excited to announce that we have a new columnist, Patrick Schaumont, whose article “One-Time Passwords from Your Watch” starts on page 52.

The Texas Instruments eZ430 Chronos watch displays a unique code that enables logging into Google’s Gmail. The code is derived from the current time and a secret value embedded in the watch. (Source: P. Schaumont)

Schaumont is an Associate Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. His interests include embedded security, covering hardware, firmware, and software. Welcome, Patrick!

Circuit Cellar 262 (May 2012) is now available.

Wireless Data Control for Remote Sensor Monitoring

Circuit Cellar has published dozens of interesting articles about handy wireless applications over the years. And now we have another innovative project to report about. Circuit Cellar author Robert Bowen contacted us recently with a link to information about his iFarm-II controller data acquisition system.

The iFarm-II controller data acquisition system (Source: R. Bowen)

The design features two main components: Bowen’s “iFarm-Remote” and “iFarm-Base” controllers, which work together as an accurate remote wireless data acquisition system. The former has six digital inputs (for monitoring relay or switch contacts) and six digital outputs (for energizing a relay’s coil). The latter is a stand-alone wireless, Internet-ready controller. Its LCD screen displays sensor readings from the iFarm-Remote controller. When you connect the base to the Internet, you can monitor data readings via a browser. In addition, you can have the base e-mail you notifications pertaining to the sensor input channels.

You can connect the system to the Internet for remote monitoring. The Network Settings Page enables you to configure the iFarm-Base controller for your network. (Source: R. Bowen)

Bowen writes:

The iFarm-II Controller is a wireless data acquisition system used to remotely monitor temperature and humidity conditions in a remote location. The iFarm consists of two controllers, the iFarm-Remote and the iFarm-Base. The iFarm-Remote is located in the remote location with various sensors connected (it supports sensors that output ±10 VDC). The iFarm-Remote also provides the user with six digital inputs and six digital outputs. The digital inputs may be used to detect switch closures, while the digital outputs may be used to energize a relay coil. The iFarm-Base supports either a 2.4-GHz or 900-MHz RF module.

The iFarm-Base controller is responsible for sending commands to the iFarm-Remote controller to acquire the sensor and digital input status readings. These readings may be viewed locally on the iFarm-Base controller’s LCD or remotely via an Internet connection using your favorite web browser. Alarm conditions can be set on the iFarm-Base controller. An active upper- or lower-limit condition will notify the user through either an e-mail or a text message sent directly to the user. Alternatively, the user may view and control the iFarm-Remote controller via a web browser. The iFarm-Base controller’s web server is designed to support viewing pages from a PC, laptop, iPhone, iPod touch, BlackBerry, or any mobile device/telephone that has a Wi-Fi Internet connection.—Robert Bowen

iFarm-Host/Remote PCB Prototype (Source: R. Bowen)

Robert Bowen is a senior field service engineer for MTS Systems Corp., where he designs automated calibration equipment and develops testing methods for customers involved in the material and simulation testing fields. Circuit Cellar has published three of his articles since 2001:

FPGA-Based VisualSonic Design Project

The VisualSonic Studio project on display at Design West last week was as innovative as it was fun to watch in operation. The design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS Sensor (D5M)—used interactive tokens to control computer-generated music.

The VisualSonic Studio project at Design West 2012 in San Jose, CA (Photo: Circuit Cellar)

I spoke with Allen Houng, Strategic Marketing Manager for Terasic, about the project developed by students from National Taiwan University. He described the overall design, and let me see the Altera kit and Terasic sensor installation.

A view of the kit and sensor (Photo: Circuit Cellar)

Houng also showed me the design in action. To operate the sound system, you simply move the tokens to create the sound effects of your choosing. Below is a video of the project in operation (Source: Terasic’s YouTube channel).

Design West Update: Intel’s Computer-Controlled Orchestra

It wasn’t the Blue Man Group making music by shooting small rubber balls at pipes, xylophones, vibraphones, cymbals, and various other sound-making instruments at Design West in San Jose, CA, this week. It was Intel and its collaborator Sisu Devices.

Intel's "Industrial Controller in Concert" at Design West, San Jose

The innovative Industrial Controller in Concert system on display featured seven Atom processors, four operating systems, 36 paintball hoppers, 2,300 rubber balls, a video camera for motion sensing, a digital synthesizer, a multi-touch display, and more. PVC tubes connect the various instruments.

Intel's "Industrial Controller in Concert" features seven Atom processors and 2,300 rubber balls.

Once running, the $160,000 system played a 2,372-note song and captivated the Design West audience. The nearby photo shows the system on the conference floor.

Click here to learn more and watch a video of the computer-controlled orchestra in action.

Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

Microsoft announced on March 8 the availability of Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and the Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with the Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that are between two distance thresholds from the camera. As the depth image already provides the distance measurement, all the pixels of the subject will fall within a small range of distances, while other objects in the scene will be outside of it. But I wanted to have just the contour line of a person and not all the pixels that belong to that person’s body. OpenCV is a powerful computer vision library. I used it for my project because of its blobs function, which extracts the contours of the different isolated objects in a scene. As my image would only contain one object—the person standing in front of the camera—the blobs function would return the exact list of coordinates of the contour of the person, which was what I needed. Please note that this function performs heavy image processing made easy for the user. It provides not just one, but a list of all the different objects that have been detected in the image. It can also specify whether holes inside a blob are permitted, as well as the minimum and maximum areas of detected blobs. But for my project, I am only interested in the biggest blob returned, which will be the one with index zero, as blobs are stored in decreasing order of area in the array returned by the blobs function.

Though it is not a fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise in the edges of the contour. Filtering noise out of a bitmap can easily be accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing certain points. A clever algorithm can do this by removing those points that are close enough to the overall contour line. One such algorithm is the Douglas-Peucker recursive contour simplification algorithm. The algorithm starts with the line connecting the two endpoints and selects the in-between point whose orthogonal distance from that line is largest, provided the distance exceeds a given threshold (if no point exceeds the threshold, none is selected). The process is repeated recursively on each resulting segment to build the list of accepted points: those that contribute the most to the overall contour for a user-provided threshold. The larger the threshold, the rougher the resulting contour will be.
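For the curious, here is a minimal pure-Python sketch of Douglas-Peucker as described (helper names are mine, not from Sanchez’s code):

```python
# Minimal Douglas-Peucker contour simplification. Points are (x, y)
# tuples; epsilon is the orthogonal-distance threshold described above.
import math

def perpendicular_distance(p, a, b):
    """Orthogonal distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / length

def douglas_peucker(points, epsilon):
    if len(points) < 3:
        return points
    # Find the in-between point farthest from the end-to-end line.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]  # nothing in between survives
    # Keep the farthest point and recurse on both halves.
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # drop the duplicated split point

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(contour, 1.0))  # → [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

Notice how the nearly collinear run from (3, 5) to (7, 9) collapses to its two endpoints, exactly the behavior Sanchez wanted for noisy silhouette edges.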

By simplifying the contour, the human silhouettes now look better and the noise is gone, but they look a bit synthetic. The last step was to perform a cubic-spline interpolation, so the contour becomes a set of curves between the original points of the simplified contour. It may seem a bit twisted to simplify first and then add points back via spline interpolation, but this creates a more visually pleasant and curvy result, which was my goal.
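As an illustration of that final step, the sketch below uses a Catmull-Rom spline, a cubic-spline variant that passes through its control points, standing in for whatever interpolation Sanchez actually used. The simplified contour points become control points, and each segment is filled in with interpolated points.

```python
# Catmull-Rom smoothing sketch (illustrative stand-in for the article's
# cubic-spline step): each output point lies on a cubic curve that
# passes through the simplified contour's points.

def catmull_rom(p0, p1, p2, p3, steps=8):
    """Interpolated points from p1 toward p2 (p2 itself excluded)."""
    out = []
    for i in range(steps):
        t = i / steps
        point = tuple(
            0.5 * (2 * b + (-a + c) * t
                   + (2 * a - 5 * b + 4 * c - d) * t ** 2
                   + (-a + 3 * b - 3 * c + d) * t ** 3)
            for a, b, c, d in zip(p0, p1, p2, p3))
        out.append(point)
    return out

def smooth(points, steps=8):
    """Smooth an open contour, duplicating the endpoints as phantoms."""
    pts = [points[0]] + list(points) + [points[-1]]
    curve = []
    for i in range(len(pts) - 3):
        curve.extend(catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], steps))
    curve.append(points[-1])  # finish at the final original point
    return curve

simplified = [(0, 0), (2, 3), (5, 3), (7, 0)]
curve = smooth(simplified)
print(len(curve))  # 3 segments × 8 steps + 1 final point = 25
```

At t = 0 each segment starts exactly at its control point, so the smoothed curve still passes through every point kept by the simplification step.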


(Source: Miguel Sanchez)
(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

DIY Cap-Touch Amp for Mobile Audio

Why buy an amp for your iPod or MP3 player when you can build your own? With the proper parts and a proven plan of action, you can craft a custom personal audio amp to suit your needs. Plus, hitting the workbench with some chips and PCB is much more exciting than ordering an amp online.

In the April 2012 issue of Circuit Cellar, Coleton Denninger and Jeremy Lichtenfeld write about a capacitive-touch, gain-controlled amplifier they designed while studying at Camosun College in Canada. The design features a Cypress Semiconductor CY8C29466-24PXI PSoC, a Microchip Technology mTouch microcontroller, and a Texas Instruments TPA1517.

Denninger and Lichtenfeld write:

Since every kid and his dog owns an iPod, an MP3 player, or some other type of personal audio device, it made sense to build a personal audio amplifier (see Photo 1). The tough choices were how we were going to make it stand out enough to attract kids who already own high-end electronics and how we were going to do it with a budget of around $40…

The capacitive-touch stage of the personal audio amp (Source: C. Denninger & J. Lichtenfeld)

Our first concern was how we were going to mix and amplify the low-power audio input signals from iPods, microphones, and electric guitars. We decided to have a couple of different inputs, and we wanted stereo and mono outputs. After doing some extensive research, we chose to use the Cypress Semiconductor CY8C29466-24PXI programmable system-on-chip (PSoC). This enabled us to digitally mix and vary the low-power amplification using the programmable gain amplifiers and switched capacitor blocks. It also came in a convenient 28-pin DIP package that followed our design guidelines. Not only was it perfect for our design, but the product and developer online support forums for all of Cypress’s products were very helpful.
Let’s face it: mechanical switches and pots are fast becoming obsolete in the world of consumer electronics (not to mention costly when compared to other alternatives). This is why we decided to use capacitive-touch sensing to control the low-power gain. Why turn a potentiometer or push a switch when your finger comes pre-equipped with conductive electrolytes? We accomplished this capacitive touch using Microchip Technology’s mTouch Sensing Solutions series of 8-bit microcontrollers. …


The audio mixer flowchart

Who doesn’t like a little bit of a light show? We used the same aforementioned PIC, but implemented it as a voltage unit meter. This meter averaged out our output signal level and indicated via LEDs the peaks in the music played. Essentially, while you listen to your favorite beats, the amplifier will beat with you! …
This amp needed to have a bit of kick when it came to the output. We’re not talking about eardrum-bursting power, but we wanted to have decent quality with enough power to fill an average-sized room with sound. We decided to go with a Class AB audio amplifier—the TPA1517 from Texas Instruments (TI) to be exact. The TPA1517 is a stereo audio-power amplifier that contains two identical amplifiers capable of delivering 6 W per channel of continuous average power into a 4-Ω load. This quality chip is easy to implement. And at only a couple of bucks, it’s an affordable choice!
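The VU-meter behavior the authors describe (average the output level and light LEDs in step with the music) can be sketched in a few lines. The LED count, full-scale value, and function name below are illustrative assumptions, not the authors’ PIC firmware.

```python
def vu_leds(samples, num_leds=8, full_scale=32768):
    """Map the average rectified level of a signed 16-bit audio buffer
    to a number of lit LEDs (0..num_leds)."""
    # Average rectified amplitude of the buffer.
    avg = sum(abs(s) for s in samples) / len(samples)
    # Normalize to 0..1 and scale to the LED bar.
    level = min(avg / full_scale, 1.0)
    return round(level * num_leds)
```

Called once per audio buffer, the returned count drives the LED bar, so louder passages light more LEDs and silence lights none.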


The power amplification stage of the personal audio amp (Source: C. Denninger & J. Lichtenfeld)

The complete article—with a schematic, diagrams, and code—will appear in Circuit Cellar 261 (April 2012).


Aerial Robot Demonstration Wows at TEDTalk

In a TEDTalk Thursday, engineer Vijay Kumar presented an exciting innovation in the field of unmanned aerial vehicle (UAV) technology. He detailed how a team of UPenn engineers retrofitted compact aerial robots with embedded technologies that enable them to swarm and operate as a team to take on a variety of remarkable tasks. A swarm can complete construction projects, orchestrate a nine-instrument piece of music, and much more.

The 0.1-lb aerial robot Kumar presented on stage—built by UPenn students Alex Kushleyev and Daniel Mellinger—consumed approximately 15 W, he said. The 8-inch design—which can operate outdoors or indoors without GPS—featured onboard accelerometers, gyros, and processors.

“An on-board processor essentially looks at what motions need to be executed, and combines these motions, and figures out what commands to send to the motors 600 times a second,” Kumar said.

Watch the video for the entire talk and demonstration. Nine aerial robots play six instruments starting at the 14:49 mark.

Zero-Power Sensor (ZPS) Network

Recently, we featured two notable projects built around Echelon’s Pyxos technology: one about solid-state lighting solutions and one about a radiant floor heating zone controller. Here we present another innovative project: a zero-power sensor (ZPS) network on polymer.

The Zero Power Switch (Source: Wolfgang Richter, Faranak M.Zadeh)

The ZPS system, which was developed by Wolfgang Richter and Faranak M. Zadeh of Ident Technology AG, doesn’t require a battery or RF energy for operation. The sensors, developed on polymer foils, are fed by an alternating electrical field at 200 kHz. A Pyxos network enables you to transmit the wireless sensor data to various devices.

In their documentation, Wolfgang Richter and Faranak M. Zadeh write:

“The developed wireless zero-power sensors (ZPS) do not need power, a battery, or radio-frequency (RF) energy in order to operate. The system is realized on polymer foils in a printing process and/or additional silicon and is very eco-friendly in production and use. The sensors are fed by an electrical alternating field with a frequency of 200 kHz at up to 5 m distance. The ZPS sensors can be mounted anywhere they are needed, e.g., on the body, in a room, a machine, or a car. One ZPS server can work for a number of ZPS sensor clients and can be connected to any net to communicate with network intelligence and other servers. By modulating the electric field, the ZPS sensors can transmit a type of “sensor = o.k.” signal command. ZPS sensors can also be carried by humans (or animals) for vital-signs monitoring, so they are ideal for wireless monitoring systems (e.g., “aging at home”). The ZPS system is wireless, powerless, and cordless, and works simultaneously, so it is a self-organized system …

The wireless Skinplex zero-power sensor network is a simply structured but reliably functioning multiple-sensor system that combines classical physics, as taught by Kirchhoff, with the latest advances in (smart) sensor technology. It works with a virtually unlimited number of sensor nodes in inertial space, without a protocol, and without batteries, cables, or connectors. A chip no bigger than a particle of dust will be fabricated this year with the assistance of Cottbus University and Prof. Wegner. The system is ideal for communicating via PYXOS/Echelon with other instances and servers.

A Pyxos network helps bring wireless ZPS sensor data over distances to external instances, nets, and servers. With the advanced Echelon technology, even the AC power line (PL) can be used.

As most of a ZPS server is realized in software, it can easily be programmed into a Pyxos network device, a very cost-saving effect! Applications range from machine controls, smart-office solutions, and smart homes up to homes for the elderly and medical facilities, as well as anywhere else where a power line (PL) exists.”

Inside the ZPS project (Source: Wolfgang Richter, Faranak M.Zadeh)

For more information about Pyxos technology, visit

This project, as well as others, was promoted by Circuit Cellar based on a 2007 agreement with Echelon.