Seven-Controller EtherCAT Orchestra

When I first saw the Intel Industrial Control in Concert demonstration at Design West 2012 in San Jose, CA, I immediately thought of Kurt Vonnegut's 1952 novel Player Piano. The connection, of course, is that the player piano in the novel and Intel's Atom-based robotic orchestra both play preprogrammed music without human involvement. But the similarities end there. Vonnegut used the self-playing piano as a metaphor for a mechanized society in which wealthy industrialists replaced human workers with automated machines. In contrast, Intel's innovative system demonstrated engineering excellence and created a buzz in the already positive atmosphere at the conference.

In “EtherCAT Orchestra” (Circuit Cellar 264, July 2012), Richard Wotiz carefully details the awe-inspiring music machine that’s built around seven embedded systems, each of which is based on Intel’s Atom D525 dual-core microprocessor. He provides information about the system you can’t find on YouTube or hobby tech blogs. Here is the article in its entirety.

EtherCAT Orchestra

I have long been interested in automatically controlled musical instruments. When I was little, I remember being fascinated whenever I ran across a coin-operated electromechanical calliope or a carnival hurdy-gurdy. I could spend all day watching the many levers, wheels, shafts, and other moving parts as they played their tunes over and over. Unfortunately, the mechanical complexity and expertise needed to maintain these machines make them increasingly rare. But, in our modern world of pocket-sized MP3 players, there's still nothing like seeing music created in front of you.

I recently attended the Design West conference (formerly the Embedded Systems Conference) in San Jose, CA, and ran across an amazing contraption that reminded me of old carnival music machines. The system was created for Intel as a demonstration of its Atom processor family, and was quite successful at capturing the attention of anyone walking by Intel’s booth (see Photo 1).

Photo 1—This is Intel’s computer-controlled orchestra. It may not look like any musical instrument you’ve ever seen, but it’s quite a thing to watch. The inspiration came from Animusic’s “Pipe Dream,” which appears on the video screen at the top. (Source: R. Wotiz)

The concept is based on Animusic’s music video “Pipe Dream,” which is a captivating computer graphics representation of a futuristic orchestra. The instruments in the video play when virtual balls strike against them. Each ball is launched at a precise time so it will land on an instrument the moment each note is played.

The demonstration, officially known as Intel's Industrial Control in Concert, uses high-speed pneumatic valves to fire practice paintballs at plastic targets of various shapes and sizes. The 0.68″-diameter balls are made of soft rubber, and they put on quite a show bouncing around while a song plays. Photo 2 shows one of the pneumatic firing arrays.

Photo 2—This is one of several sets of pneumatic valves. Air is supplied by the many tees below the valves and is sent to the ball-firing nozzles near the top of the photo. The corrugated hoses at the top supply balls to the nozzles. (Source: R. Wotiz)

The valves are the gray boxes lined up along the center. When each one opens, a burst of air is sent up one of the clear hoses to a nozzle to fire a ball. The corrugated black hoses at the top supply the balls to the nozzles. They’re fed by paintball hoppers that are refilled after each performance. Each nozzle fires at a particular target (see Photo 3).

Photo 3—These are the targets at which the nozzles from Photo 2 are aimed. If you look closely, you can see a ball just after it bounced off the illuminated target at the top right. (Source: R. Wotiz)

Each target has an array of LEDs that shows when it’s activated and a piezoelectric sensor that detects a ball’s impact. Unfortunately, slight variations in the pneumatics and the balls themselves mean that not every ball makes it to its intended target. To avoid sounding choppy and incomplete, the musical notes are triggered by a fixed timing sequence rather than the ball impact sensors. Think of it as a form of mechanical lip syncing. There’s a noticeable pop when a ball is fired, so the system sounds something like a cross between a pinball machine and a popcorn popper. You may expect that to detract from the music, but I felt it added to the novelty of the experience.

The control system consists of seven separate embedded systems, all based on Intel’s Atom D525 dual-core microprocessor, on an Ethernet network (see Figure 1).

Figure 1—Each block across the top is an embedded system providing some aspect of the user interface. The real-time interface is handled by the modules at the bottom. They’re controlled by the EtherCAT master at the center. (Source: R. Wotiz)

One of the systems is responsible for the real-time control of the mechanism. It communicates over an Ethernet control automation technology (EtherCAT) bus to several slave units, which provide the I/O interface to the sensors and actuators.

EtherCAT

EtherCAT is a fieldbus providing high-speed, real-time control over a conventional 100-Mb/s Ethernet hardware infrastructure. It’s a relatively recent technology, originally developed by Beckhoff Automation GmbH and currently managed by the EtherCAT Technology Group (ETG), which was formed in 2003. You need to be an ETG member to access most of its specification documents, but general overview information is publicly available. According to information on the ETG website, membership is currently free to qualified companies. EtherCAT was also made part of the international standard IEC 61158, “Industrial Communication Networks—Fieldbus Specifications,” in 2007.

EtherCAT uses standard Ethernet data frames, but instead of each device decoding and processing an individual frame, the devices are arranged in a daisy chain, where a single frame is circulated through all devices in sequence. Any device with an Ethernet port can function as the master, which initiates the frame transmission. The slaves need specialized EtherCAT ports. A two-port slave device receives and starts processing a frame while simultaneously sending it out to the next device (see Figure 2).

Figure 2—Each EtherCAT slave processes incoming data as it sends it out the downstream port. (Source: R. Wotiz)

The last slave in the chain detects that there isn’t a downstream device and sends its frame back to the previous device, where it eventually returns to the originating master. This forms a logical ring by taking advantage of both the outgoing and return paths in the full-duplex network. The last slave can also be directly connected to a second Ethernet port on the master, if one is available, creating a physical ring. This creates redundancy in case there is a break in the network. A slave with three or more ports can be used to form more complex topologies than a simple daisy chain. However, this wouldn’t speed up network operation, since a frame still has to travel through each slave, one at a time, in both directions.

The EtherCAT frame, known as a telegram, can be transmitted in one of two different ways depending on the network configuration. When all devices are on the same subnet, the data is sent as the entire payload of an Ethernet frame, using an EtherType value of 0x88A4 (see Figure 3a).

Figure 3a—An EtherCAT frame uses the standard Ethernet framing format with very little overhead. The payload size shown includes both the EtherCAT telegram and any padding bytes needed to bring the total frame size up to 64 bytes, the minimum size for an Ethernet frame. b—The payload can be encapsulated inside a UDP frame if it needs to pass through a router or switch. (Source: R. Wotiz)

If the telegrams must pass through a router or switch onto a different physical network, they may be encapsulated within a UDP datagram using a destination port number of 0x88A4 (see Figure 3b), though this will affect network performance. Slaves do not have their own Ethernet or IP addresses, so all telegrams will be processed by all slaves on a subnet regardless of which transmission method was used. Each telegram contains one or more EtherCAT datagrams (see Figure 4).

Each datagram includes a block of data and a command indicating what to do with the data. The commands fall into three categories. Write commands copy the data into a slave’s memory, while read commands copy slave data into the datagram as it passes through. Read/write commands do both operations in sequence, first copying data from memory into the outgoing datagram, then moving data that was originally in the datagram into memory. Depending on the addressing mode, the read and write operations of a read/write command can both access the same or different devices. This enables fast propagation of data between slaves.

Each datagram contains addressing information that specifies which slave device should be accessed and the memory address offset within the slave to be read or written. A 16-bit value for each enables up to 65,535 slaves to be addressed, with a 65,536-byte address space for each one. The command code specifies which of four different addressing modes to use. Position addressing specifies a slave by its physical location on the network: each slave increments the address as it passes the datagram on to the next device, and a slave is selected only if the address value it receives is zero. This enables the master to select a device by setting the address value to the negative of the number of devices in the network preceding the desired device. This addressing mode is useful during system startup, before the slaves have been configured with unique addresses. Node addressing specifies a slave by its configured address, which the master sets during the startup process. This mode enables direct access to a particular device’s memory or control registers. Logical addressing takes advantage of one or more fieldbus memory management units (FMMUs) on a slave device. Once configured, an FMMU translates a logical address to any desired physical memory address. This may include the ability to specify individual bits in a data byte, which provides an efficient way to control specific I/O ports or register bits without having to send any more data than needed. Finally, broadcast addressing selects all slaves on the network. For broadcast reads, slaves send out the logical OR of their own data with the data from the incoming datagram.
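To make the layout concrete, here is a minimal Python sketch (mine, not from the article) that packs a single datagram following the publicly documented format: a 1-byte command, a 1-byte index, a 4-byte address, a 2-byte length/flags field, a 2-byte IRQ field, the payload, and the 16-bit working counter discussed next. The helper names are my own.

```python
import struct

# EtherCAT datagram command codes (from the published specification). The
# prefix encodes the addressing mode: A = auto-increment (position),
# F = fixed/configured (node), B = broadcast, L = logical.
APRD, APWR, APRW = 0x01, 0x02, 0x03   # position-addressed read/write/read-write
FPRD, FPWR, FPRW = 0x04, 0x05, 0x06   # node-addressed
BRD, BWR = 0x07, 0x08                 # broadcast
LRD, LWR, LRW = 0x0A, 0x0B, 0x0C      # logical

def build_datagram(cmd, index, address, data, last=True):
    """Pack one datagram: 10-byte header, payload, 16-bit working counter."""
    length = len(data) & 0x07FF          # 11-bit length field
    if not last:
        length |= 0x8000                 # "more datagrams follow" flag
    header = struct.pack("<BBIHH", cmd, index, address, length, 0)  # IRQ = 0
    return header + bytes(data) + struct.pack("<H", 0)  # WKC starts at zero

# Position addressing: to reach the slave with two devices ahead of it, the
# master sends -2; each slave increments the value, and only the slave that
# sees zero responds. The high 16 bits carry the memory offset (here 0x0000).
address = ((-2) & 0xFFFF) | (0x0000 << 16)
print(build_datagram(APRD, index=0, address=address, data=bytes(2)).hex())
```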

Each time a slave successfully reads or writes data contained in a datagram, it increments the working counter value (see Figure 4).

Figure 4—An EtherCAT telegram consists of a header and one or more datagrams. Each datagram can be addressed to one slave, a particular block of data within a slave, or multiple slaves. A slave can modify the datagram’s Address, C, IRQ, Process data, and WKC fields as it passes the data on to the next device. (Source: R. Wotiz)

This enables the master to confirm that all the slaves it was expecting to communicate with actually handled the data sent to them. If a slave is disconnected, or its configuration changes so it is no longer being addressed as expected, then it will no longer increment the counter. This alerts the master to rescan the network to confirm the presence of all devices and reconfigure them, if necessary. If a slave wants to alert the master of a high-priority event, it can set one or more bits in the IRQ field to request the master to take some predetermined action.

TIMING

Frames are processed in each slave by a specialized EtherCAT slave controller (ESC), which extracts incoming data and inserts outgoing data into the frame as it passes through. The ESC operates at a high speed, resulting in a typical data delay from the incoming to the outgoing network port of less than 1 μs. The operating speed is often dominated by how fast the master can process the data, rather than the speed of the network itself. For a system that runs a process feedback loop, the master has to receive data from the previous cycle and process it before sending out data for the next cycle. The minimum cycle time is given by:

T_CYC = T_MP + T_FR + N × T_DLY + 2 × T_CBL + T_J

where T_MP = master’s processing time, T_FR = frame transmission time on the network (80 ns per data byte + 5 μs frame overhead), N = total number of slaves, T_DLY = sum of the forward and return delay times through each slave (typically 600 ns), T_CBL = cable propagation delay (5 ns per meter for Category 5 Ethernet cable), and T_J = network jitter (determined by the master).[1]
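As a quick sanity check, the short Python sketch below (my own, not from the article) evaluates the formula with the benchmark figures just quoted; the master processing time is a hypothetical value.

```python
def min_cycle_time_us(n_slaves, process_bytes, t_master_us, cable_m, jitter_us=1.0):
    """T_CYC = T_MP + T_FR + N x T_DLY + 2 x T_CBL + T_J, in microseconds."""
    t_fr = 0.080 * process_bytes + 5.0   # 80 ns/byte + 5 us frame overhead
    t_dly = 0.600 * n_slaves             # 600 ns forward+return delay per slave
    t_cbl = 0.005 * cable_m              # 5 ns/m cable propagation delay
    return t_master_us + t_fr + t_dly + 2 * t_cbl + jitter_us

# 1,000 slaves moving 1,486 bytes of process data over 100 m of cable, with a
# hypothetical 100-us master processing time: about 826 us, under 1 ms.
print(f"{min_cycle_time_us(1000, 1486, t_master_us=100.0, cable_m=100.0):.1f} us")
```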

A slave’s internal processing time may overlap some or all of these time windows, depending on how its I/O is synchronized. The network may be slowed if the slave needs more time than the total cycle time computed above. A maximum-length telegram containing 1,486 bytes of process data can be communicated to a network of 1,000 slaves in less than 1 ms, not including processing time.

Synchronization is an important aspect of any fieldbus. EtherCAT uses a distributed clock (DC) with a resolution of 1 ns located in the ESC on each slave. The master can configure the slaves to take a snapshot of their individual DC values when a particular frame is sent. Each slave captures the value when the frame is received by the ESC in both the outbound and returning directions. The master then reads these values and computes the propagation delays between each device. It also computes the clock offsets between the slaves and its reference clock, then uses these values to update each slave’s DC to match the reference. The process can be repeated at regular intervals to compensate for clock drift. This results in an absolute clock error of less than 1 μs between devices.
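The offset-cancellation trick is easy to miss, so here is a simplified Python illustration (not Beckhoff's actual algorithm): because each slave stamps both passes of the frame with the same local clock, its unknown clock offset drops out of its round-trip span, and differencing neighboring spans yields the per-hop delay.

```python
def link_delays(t_fwd, t_ret):
    """Estimate the one-way delay between consecutive slaves from each slave's
    local timestamps of the measurement frame on the outbound (t_fwd) and
    returning (t_ret) passes. Both stamps at a given slave come from the same
    local clock, so that slave's offset cancels in the round-trip span;
    comparing neighboring spans isolates the delay of the hop between them."""
    spans = [r - f for f, r in zip(t_fwd, t_ret)]
    return [(spans[i] - spans[i + 1]) / 2 for i in range(len(spans) - 1)]

# Hypothetical nanosecond timestamps for a three-slave chain with a 650-ns
# hop delay (wire plus forwarding) in each direction:
print(link_delays(t_fwd=[0, 650, 1300], t_ret=[2600, 1950, 1300]))  # [650.0, 650.0]
```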

MUSICAL NETWORKS

The orchestra’s EtherCAT network is built around a set of modules from National Instruments. The virtual conductor is an application running under LabVIEW Real-Time on a CompactRIO controller, which functions as the master device. It communicates with four slaves containing a mix of digital and analog I/O and three slaves consisting of servo motor drives. Both the master and the I/O slaves contain an FPGA to implement any custom local processing that’s necessary to keep the data flowing. The system runs at a cycle time of 1 ms, which provides enough timing resolution to keep the balls flying properly.

I hope you’ve enjoyed learning about EtherCAT—as well as the fascinating musical device it’s used in—as much as I have.

Author’s note: I would like to thank Marc Christenson of SISU Devices, creator of this amazing device, for his help in providing information on the design.

REFERENCE

[1] National Instruments Corp., “Benchmarks for the NI 9144 EtherCAT Slave Chassis,” http://zone.ni.com/devzone/cda/tut/p/id/10596.

RESOURCES

Animusic, LLC, www.animusic.com.

Beckhoff Automation GmbH, “ET1100 EtherCAT Slave Controller Hardware Data Sheet, Version 1.8”, 2010, www.beckhoff.com/english/download/ethercat_development_products.htm.

EtherCAT Technology Group, “The Ethernet Fieldbus”, 2009, www.ethercat.org/pdf/english/ETG_Brochure_EN.pdf.

Intel, Atom microprocessor, www.intel.com/content/ www/us/en/processors/atom/atom-processor.html.

SOURCES

Atom D525 dual-core microprocessor

Intel Corp.

www.intel.com

LabVIEW Real-Time modules, CompactRIO controller, and EtherCAT devices

National Instruments Corp.

www.ni.com

Circuit Cellar 264 is now on newsstands, and it’s available at the CC-Webshop.

Q&A: Lawrence Foltzer (Communications Engineer)

In the U.S., a common gift to give someone when he or she finishes school or completes a course of career training is Dr. Seuss’s book, Oh, the Places You’ll Go! I thought of the book’s title when I first read our May interview with engineer Lawrence Foltzer. After finishing electronics training in the U.S. Navy, Foltzer found himself working in such diverse locations as a destroyer in the Mediterranean Sea, IBM’s Watson Research Center in Yorktown Heights, NY, and Optilink, DSC, Alcatel, and Turin Networks in Petaluma, CA. Simply put: his electronics training has taken him to many interesting places!

Foltzer’s interests include fiber optic communication, telecommunications, direct digital synthesis, and robot navigation. He wrote four articles for Circuit Cellar between June 1993 and March 2012.

Lawrence Foltzer presented these frequency-domain test instruments in Circuit Cellar 254 (September 2011). An Analog Devices AD9834-based RFG is on the left. An AD5930-based SFG is on the right. The ICSP interface used to program a Microchip Technology PIC16F627A microcontroller is provided by a dangling RJ connector socket. (Source: L. Foltzer, CC254)

Below is an abridged version of the interview now available in Circuit Cellar 262 (May 2012).

NAN: You spent 30 years working in the fiber optics communication industry. How did that come about? Have you always had an interest specifically in fiber optic technology?

LARRY: My career has taken me many interesting places, working with an amazing group of people, on the cusp of many technologies. I got my first electronics training in the Navy, both operating and maintaining the various anti-submarine warfare systems including the active sonar system; Gertrude, the underwater telephone; and two fire-control electromechanical computers for hedgehog and torpedo targeting. I spent two of my four years in the Navy in schools.

When I got out of the Navy in 1964, I managed to land a job with IBM. I’d applied for a job maintaining computers, but IBM sent me to the Thomas J. Watson Research Center in Yorktown Heights, NY. They gave me several tests on two different visits before hiring me. I was one of four out of forty who got a job. Mine was working in John B. Gunn’s group, preparing Gunn-oscillator samples and assisting the physicists in the group in performing both microwave and high-speed pulsed measurements.

One of my sample preparation duties was the application of AuGeNi ohmic contacts on GaAs samples. Ohmic contacts were essential to the proper operation of the Gunn effect, which is a bulk semiconductor phenomenon. Other labs at the research center were also working with GaAs for other devices: the LED, the injection laser diode, and Hall-effect sensors, to name a few. It turned out that the evaporated AuGeNi contact used on the Gunn devices was superior to the plated AuSnIn contact, so I soon found myself making pulsed diode lasers driven at 40,000 A per square centimeter. A year later I transferred to Gaithersburg, MD, to IBM-FSD, where I was responsible for transferring laser diode technology to the group that made battlefield laser illuminators and optical radars. We used flexible light guides to bring the output from many lasers together to increase beam brightness.

As the Vietnam War came to an end, IBM closed down the Laser and Quantum Electronics (LQE) group I was in, but at the same time I received a job offer to join Comsat Labs in Clarksburg, MD, from an engineer for whom I had built Gunn devices for phased-array studies. So it was back to the world of microwaves for a few years, where I worked on the satellite qualification of tunnel (Esaki) diodes, IMPATT diodes, step-recovery diodes, and GaAs FETs.

About a year after joining Comsat Labs, the former head of the now-defunct IBM-LQE group, Bill Culver, called on me to help him prove to the Army that a “single-fiber,” over-the-hill guided missile could replace the TOW missile and save soldiers’ lives from the target tanks’ counterfire.

NAN: Tell us about some of your early projects and the types of technologies you used and worked on during that time.

LARRY: So, in 1973-ish, Bill Culver, Gordon Gould (Laser Inventor), and I formed Optelecom, Inc. In those days, when one spoke of fiber optics, one meant fiber bundles. Single fibers were seen as too unreliable, so hundreds of fibers were bundled together so that a loss of tens of fibers only caused a loss of a few percent of the injected light. Furthermore, bundles presented a large cross section to the primitive light sources of the day, which helped increase transmission distances.

Bill remembered seeing one of C. L. Stong’s Amateur Scientist columns in Scientific American about a beam balance based on a silica fiber suspension. In that column, Stong had shown that silica fibers could be made with tensile strengths 20 times that of steel. So a week later, Bill and I had constructed a fiber-drawing apparatus in my basement, and we drew the first few meters of the approximately 350 km of fiber we made in my home until we captured our first Army contract and opened an office in Gaithersburg, MD.

Our first fibers were for mechanical-strength development. Optical losses measured hundreds of dB/km in those days. But our plastic-clad silica (PCS) fiber losses pretty much tracked those of Corning, Bell Labs, and ITT-EOPD (Electro-Optics Products Division). Pretty soon we were making 8-dB/km fibers up to 6 km in length. I left Optelecom when follow-on contracts with the Army slowed; but by that time we had demonstrated missile payout of 4 km of signal-carrying fiber at speeds of 600 ft/s, and slower-speed runs from fixed-wing and helicopter RPVs. The first video games were born!

At Optelecom I also worked with Gordon Gould on a CO2 laser-based secure communications system. A ground-based laser interrogated a Stark-effect based modulator and retro-reflector that returned a video signal to the ground station. I designed and developed all of that system’s electronics.

Government funding for our fiber payout work diminished, so I joined ITT-EOPD in 1976. In those days, if you needed a connector or a splice, or a pigtailed LED, laser or detector, you made it yourself; and I was good with my hands. So, in addition to running programs to develop fused fiber couplers, etc., I was also in charge of the group that built the emitters and detectors needed to support the transmission systems group.

NAN: You participated in Motorola’s IEEE-802 MAC subcommittee on token-passing access control methods. Tell us about that experience.

NAN: How long have you been designing MCU-based systems? Tell us about your first MCU-based design.

LARRY: I was in Motorola’s strategic marketing department (SMD) when the Apple II first came on the scene. Some of the folks in the SMD were the developers of the RadioShack Color Computer. Long story short, I quickly became a fan of the MC6809 CPU, and wrote some pretty fancy code for the day that rotated 3-D objects, as well as a more animated version of Space Invaders. I developed a menu-driven EPROM programmer that could program all of the EPROMs then available and then some. My company, Computer Accessories of AZ, advertised in Rainbow magazine until the PC savaged the market. I sold about 1,200 programmers and a few other products before closing up shop.

NAN: Circuit Cellar has published four of your articles about design projects. Your first article, “Long-Range Infrared Communications,” was published in 1993 (Circuit Cellar 35). Which advances in IR technology have most impressed and excited you since then?

LARRY: Vertical cavity surface-emitting lasers (VCSEL). The Japanese were the first to realize their potential, but did not participate in their early development. Honeywell Optoelectronics was the first to offer 850-nm VCSELs commercially. I think I bought my first VCSELs from Hamilton Avnet in the late 1980s for $6 a pop. But 850 nm is excluded from Telecom (Bellcore), so companies like Cielo and Picolight went to work on long wavelength parts. I worked with Cielo on 1310-nm VCSEL array technology while at Turin Networks, and actually succeeded in adding VCSEL transmitter and array receiver optics to several optical line cards. It was my hope that VCSELs would find their way into the fiber to the home (FTTH) systems of the future, delivering 1 Gbps or more for 33% of what it costs today.

Circuit Cellar 262 (May 2012) is now on newsstands.

Issue 262: Advances in Measurement & Sensor Tech

As I walked the convention center floor at the 2012 Design West conference in San Jose, CA, it quickly became clear that measurement and sensor technologies are at the forefront of embedded innovation. For instance, at the Terasic Technologies booth, I spoke with Allen Houng, Terasic’s Strategic Marketing Manager, about the VisualSonic Studio project developed by students from National Taiwan University. The innovative design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS sensor (D5M)—used interactive tokens to control computer-generated music. Sensor technology figured prominently in the design. It was just one of many exciting projects on display.

In this issue, we feature articles on a variety of measurement- and sensor-related embedded design projects. I encourage you to try similar projects and share your results with our editors.

Starting on page 14, Petre Tzvetanov Petrov describes a multilevel audible logical probe design. Petrov states that when working with digital systems “it is good to have a logical probe with at least four levels in order to more rapidly find the node in the circuit where things are going wrong.” His low-cost audible logical probe indicates four input levels, and there’s an audible tone for each input level.

Matt Oppenheim explains how to use touch sensors to trigger audio tags on electronic devices (p. 20). His design is intended to help visually impaired users. But you can use a few capacitive-touch sensors with an Android device to create the application of your choice.

The portable touch-sensor assembly. The touch-sensor boards are mounted on the back of a digital radio, connected to an IOIO board and a Nexus One smartphone. The Android interface is displayed on the phone. (Source: M. Oppenheim)

Two daisy-chained Microchip Technology mTouch boards with a battery board providing the power and LED boards showing the channel status. (Source: M. Oppenheim)

Read the interview with Lawrence Foltzer on page 30 for a little inspiration. Interestingly, one of his first MCU-based projects was a sonar sensor.

The impetus for Kyle Gilpin’s “menU” design was a microprocessor-based sensor system he installed in his car to display and control a variety of different sensors (p. 34).

The design used to test the menU system on the mbed processor was intentionally as simple as possible. Four buttons drive the menu system and an alphanumeric LCD is used to display the menu. Alternatively, one can use the mbed’s USB-to-serial port to connect with a terminal emulator running on a PC to both display and control the menu system. (Source: K. Gilpin)

The current menU system enables Gilpin to navigate through a hierarchical set of menu items while both observing and modifying the parameters of an embedded design.

The menU system is generic enough to be compiled for most desktop PCs running Windows, OS X, or Linux using the Qt development framework. This screenshot demonstrates the GUI for the menU system. The menu itself is displayed in a separate terminal window. The GUI has four simulated LEDs and one simulated photocell, all of which correspond to the hardware available on the mbed processor development platform. (Source: K. Gilpin)

The final measurement-and-sensor-related article in this issue is columnist Richard Wotiz’s “Camera Image Stabilization” (p. 46). Wotiz details various IS techniques.

Our other columnists cover accelerated testing (George Novacek, p. 60), energy harvesting (George Martin, p. 64), and SNAP engine versatility (Jeff Bachiochi, p. 68).

Lastly, I’m excited to announce that we have a new columnist, Patrick Schaumont, whose article “One-Time Passwords from Your Watch” starts on page 52.

The Texas Instruments eZ430 Chronos watch displays a unique code that enables logging into Google’s Gmail. The code is derived from the current time and a secret value embedded in the watch. (Source: P. Schaumont)

Schaumont is an Associate Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. His interests include embedded security, covering hardware, firmware, and software. Welcome, Patrick!

Circuit Cellar 262 (May 2012) is now available.

Wireless Data Control for Remote Sensor Monitoring

Circuit Cellar has published dozens of interesting articles about handy wireless applications over the years. And now we have another innovative project to report on. Circuit Cellar author Robert Bowen contacted us recently with a link to information about his iFarm-II controller data acquisition system.

The iFarm-II controller data acquisition system (Source: R. Bowen)

The design features two main components. Bowen’s “iFarm-Remote” and “iFarm-Base” controllers work together as an accurate remote wireless data acquisition system. The former has six digital inputs (for monitoring relay or switch contacts) and six digital outputs (for energizing a relay’s coil). The latter is a stand-alone, wireless, Internet-ready controller. Its LCD screen displays sensor readings from the iFarm-Remote controller. When you connect the base to the Internet, you can monitor data readings via a browser. In addition, you can have the base e-mail you notifications pertaining to the sensor input channels.

You can connect the system to the Internet for remote monitoring. The Network Settings Page enables you to configure the iFarm-Base controller for your network. (Source: R. Bowen)

Bowen writes:

The iFarm-II Controller is a wireless data acquisition system used to remotely monitor temperature and humidity conditions in a remote location. The iFarm consists of two controllers: the iFarm-Remote and the iFarm-Base. The iFarm-Remote is located in a remote location with various sensors (it supports sensors that output ±10 VDC) connected. The iFarm-Remote also provides the user with six digital inputs and six digital outputs. The digital inputs may be used to detect switch closures, while the digital outputs may be used to energize a relay coil. The iFarm-Base supports either a 2.4-GHz or 900-MHz RF module.

The iFarm-Base controller is responsible for sending commands to the iFarm-Remote controller to acquire the sensor and digital input status readings. These readings may be viewed locally on the iFarm-Base controller’s LCD or remotely via an Internet connection using your favorite web browser. Alarm conditions can be set on the iFarm-Base controller. An active upper- or lower-limit condition will notify the user through either an e-mail or a text message sent directly to the user. Alternatively, the user may view and control the iFarm-Remote controller via a web browser. The iFarm-Base controller’s web server is designed to support viewing pages from a PC, laptop, iPhone, iPod touch, BlackBerry, or any mobile device/telephone with a Wi-Fi Internet connection.—Robert Bowen, http://wireless.xtreemhost.com/

iFarm-Host/Remote PCB Prototype (Source: R. Bowen)

Robert Bowen is a senior field service engineer for MTS Systems Corp., where he designs automated calibration equipment and develops testing methods for customers involved in the material and simulation testing fields. Circuit Cellar has published three of his articles since 2001.

FPGA-Based VisualSonic Design Project

The VisualSonic Studio project on display at Design West last week was as innovative as it was fun to watch in operation. The design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS Sensor (D5M)—used interactive tokens to control computer-generated music.

The VisualSonic Studio project at Design West 2012 in San Jose, CA (Photo: Circuit Cellar)

I spoke with Allen Houng, Strategic Marketing Manager for Terasic, about the project developed by students from National Taiwan University. He described the overall design, and let me see the Altera kit and Terasic sensor installation.

A view of the kit and sensor (Photo: Circuit Cellar)

Houng also showed me the design in action. To operate the sound system, you simply move the tokens to create the sound effects of your choosing. Below is a video of the project in operation (Source: Terasic’s YouTube channel).

Design West Update: Intel’s Computer-Controlled Orchestra

It wasn’t the Blue Man Group making music by shooting small rubber balls at pipes, xylophones, vibraphones, cymbals, and various other sound-making instruments at Design West in San Jose, CA, this week. It was Intel and its collaborator Sisu Devices.

Intel's "Industrial Controller in Concert" at Design West, San Jose

The innovative Industrial Controller in Concert system on display featured seven Atom processors, four operating systems, 36 paintball hoppers, 2,300 rubber balls, a video camera for motion sensing, a digital synthesizer, a multi-touch display, and more. PVC tubes connect the various instruments.

Intel's "Industrial Controller in Concert" features seven Atom processors and 2,300 rubber balls

Once running, the $160,000 system played a 2,372-note song and captivated the Design West audience. The nearby photo shows the system on the conference floor.

Click here to learn more and watch a video of the computer-controlled orchestra in action.

Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

Microsoft announced on March 8 the availability of Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and the Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article about a Kinect-based project, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that are between two distance thresholds from the camera. As the depth image already provides the distance measurement, all the pixels of the subject will fall within a narrow range of distances, while other objects in the scene will be outside of this small range. But I wanted to have just the contour line of a person and not all the pixels that belong to that person’s body. OpenCV is a powerful computer vision library. I used it for my project because of its blobs function, which extracts the contours of the different isolated objects in a scene. As my image would only contain one object—the person standing in front of the camera—the blobs function would return the exact list of coordinates of the contour of the person, which was what I needed. Please note that this function performs heavy image processing, made easy for the user. It provides not just one, but a list of all the different objects that have been detected in the image. It can also specify whether holes inside a blob are permitted, and the minimum and maximum areas of detected blobs. But for my project, I am only interested in the biggest blob returned, which will be the one with index zero, as blobs are stored in decreasing order of area in the array returned by the blobs function.

Though it is not a fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise in the edges of the contour. Filtering a bitmap can easily be accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing certain points. A clever algorithm can do this by removing those points that are close enough to the overall contour line. One such algorithm is the Douglas-Peucker recursive contour-simplification algorithm. The algorithm starts with the two endpoints and accepts one point in between whose orthogonal distance from the line connecting the first two points is larger than a given threshold. Only the point with the largest distance is selected (or none, if the threshold is not met). The process is repeated recursively, as new points are added, to create the list of accepted points (those that contribute the most to the general contour given a user-provided threshold). The larger the threshold, the rougher the resulting contour will be.

By simplifying a contour, the human silhouettes now look better and the noise is gone, but they look a bit synthetic. The last step I took was to perform a cubic-spline interpolation so the contour becomes a set of curves between the different original points of the simplified contour. It seems a bit twisted to simplify first and then add back more points with the spline interpolation, but this way it creates a more visually pleasing and curvy result, which was my goal.
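For readers who want to experiment, here is a rough Python sketch of the pipeline Sanchez describes. It is an approximation, not his code: OpenCV's findContours and approxPolyDP stand in for the blobs function and the Douglas-Peucker step he mentions, and the depth thresholds are hypothetical values.

```python
import cv2
import numpy as np

def silhouette(depth_mm, near=800, far=1200, epsilon_px=5.0):
    """Isolate pixels between two depth thresholds, keep the largest region,
    and simplify its outline with Douglas-Peucker (cv2.approxPolyDP)."""
    mask = cv2.inRange(depth_mm, near, far)          # binary mask of the subject
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x signature
    if not contours:
        return None
    person = max(contours, key=cv2.contourArea)      # biggest blob = the person
    # Drop contour points closer than epsilon_px to the overall outline.
    simplified = cv2.approxPolyDP(person, epsilon_px, closed=True)
    # (A cubic-spline pass over these points, as described above, would then
    # smooth the simplified outline into curves.)
    return simplified

# Example with a synthetic 480x640 depth frame (a block at ~1 m from the camera):
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[100:400, 250:390] = 1000
print(silhouette(frame).shape)  # a rectangle simplifies to just a few points
```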

 

(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

DIY Cap-Touch Amp for Mobile Audio

Why buy an amp for your iPod or MP3 player when you can build your own? With the proper parts and a proven plan of action, you can craft a custom personal audio amp to suit your needs. Plus, hitting the workbench with some chips and PCB is much more exciting than ordering an amp online.

In the April 2012 issue of Circuit Cellar, Coleton Denninger and Jeremy Lichtenfeld write about a capacitive-touch, gain-controlled amplifier they designed while studying at Camosun College in Canada. The design features a Cypress Semiconductor CY8C29466-24PXI PSoC, a Microchip Technology mTouch microcontroller, and a Texas Instruments TPA1517.

Denninger and Lichtenfeld write:

Since every kid and his dog owns an iPod, an MP3 player, or some other type of personal audio device, it made sense to build a personal audio amplifier (see Photo 1). The tough choices were how we were going to make it stand out enough to attract kids who already own high-end electronics and how we were going to do it with a budget of around $40…

The capacitive-touch stage of the personal audio amp (Source: C. Denninger & J. Lichtenfeld)

Our first concern was how we were going to mix and amplify the low-power audio input signals from iPods, microphones, and electric guitars. We decided to have a couple of different inputs, and we wanted stereo and mono outputs. After doing some extensive research, we chose to use the Cypress Semiconductor CY8C29466-24PXI programmable system-on-chip (PSoC). This enabled us to digitally mix and vary the low-power amplification using the programmable gain amplifiers and switched capacitor blocks. It also came in a convenient 28-pin DIP package that followed our design guidelines. Not only was it perfect for our design, but the product and developer online support forums for all of Cypress’s products were very helpful.
Let’s face it: mechanical switches and pots are fast becoming obsolete in the world of consumer electronics (not to mention costly when compared to other alternatives). This is why we decided to use capacitive-touch sensing to control the low-power gain. Why turn a potentiometer or push a switch when your finger comes pre-equipped with conductive electrolytes? We accomplished this capacitive-touch sensing using Microchip Technology’s mTouch Sensing Solutions series of 8-bit microcontrollers. …

 

The audio mixer flowchart

Who doesn’t like a little bit of a light show? We used the same aforementioned PIC, but implemented it as a volume unit (VU) meter. This meter averaged out our output signal level and indicated via LEDs the peaks in the music played. Essentially, while you listen to your favorite beats, the amplifier will beat with you! …
This amp needed to have a bit of kick when it came to the output. We’re not talking about eardrum-bursting power, but we wanted to have decent quality with enough power to fill an average-sized room with sound. We decided to go with a Class AB audio amplifier—the TPA1517 from Texas Instruments (TI) to be exact. The TPA1517 is a stereo audio-power amplifier that contains two identical amplifiers capable of delivering 6 W per channel of continuous average power into a 4-Ω load. This quality chip is easy to implement. And at only a couple of bucks, it’s an affordable choice!

 

The power amplification stage of the personal audio amp (Source: C. Denninger & J. Lichtenfeld)

The complete article—with a schematic, diagrams, and code—will appear in Circuit Cellar 261 (April 2012).

Aerial Robot Demonstration Wows at TED Talk

In a TED Talk on Thursday, engineer Vijay Kumar presented an exciting innovation in the field of unmanned aerial vehicle (UAV) technology. He detailed how a team of UPenn engineers retrofitted compact aerial robots with embedded technologies that enable them to swarm and operate as a team to take on a variety of remarkable tasks. A swarm can complete construction projects, orchestrate a nine-instrument piece of music, and much more.

The 0.1-lb aerial robot Kumar presented on stage—built by UPenn students Alex Kushleyev and Daniel Mellinger—consumed approximately 15 W, he said. The 8-inch design—which can operate outdoors or indoors without GPS—featured onboard accelerometers, gyros, and processors.

“An on-board processor essentially looks at what motions need to be executed, and combines these motions, and figures out what commands to send to the motors 600 times a second,” Kumar said.

Watch the video for the entire talk and demonstration. Nine aerial robots play six instruments at the 14:49 mark.

Zero-Power Sensor (ZPS) Network

Recently, we featured two notable projects based on Echelon’s Pyxos technology: one about solid-state lighting solutions and one about a radiant floor heating zone controller. Here we present another innovative project: a zero-power sensor (ZPS) network on polymer.

The Zero Power Switch (Source: Wolfgang Richter, Faranak M.Zadeh)

The ZPS system—which was developed by Wolfgang Richter and Faranak M. Zadeh of Ident Technology AG—doesn’t require battery or RF energy for operation. The sensors, developed on polymer foils, are fed by an electrical alternating field with a 200-kHz frequency. A Pyxos network enables you to transmit wireless sensor data to various devices.

In their documentation, Wolfgang Richter and Faranak M. Zadeh write:

“The developed wireless zero-power sensors (ZPS) do not need power, a battery, or radio-frequency (RF) energy in order to operate. The system is realized on polymer foils in a printing process and/or additional silicon and is very eco-friendly in production and use. The sensors are fed by an electrical alternating field with a frequency of 200 kHz at up to 5-m distance. The ZPS sensors can be mounted anywhere they are needed, e.g., on the body, in a room, a machine, or a car. One ZPS server can work with a number of ZPS-sensor clients and can be connected to any net to communicate with network intelligence and other servers. By modulating the electric field, the ZPS sensors can transmit a type of “sensor = o.k.” command. ZPS sensors can also be carried by humans (or animals) for vital-signs monitoring, so they are ideal for wireless monitoring systems (e.g., “aging at home”). The ZPS system is wireless, powerless, and cordless, and works simultaneously, so it is a self-organized system …

The wireless Skinplex zero power sensor network is a very simply structured but surely functioning multiple sensor system that combines classical physics as taught by Kirchhoff with the latest advances in (smart) sensor technology. It works with a virtually unlimited number of sensor nodes in inertial space, without a protocol, and without batteries, cables and connectors. A chip not bigger than a particle of dust will be fabricated this year with the assistance of Cottbus University and Prof. Wegner. The system is ideal to communicate via PYXOS/Echelon to other instances and servers.

Pyxos networking helps to bring wireless ZPS sensor data over distances to external instances, nets, and servers. With the advanced Echelon technology, even the AC power line (PL) can be used.

As most of a ZPS server is realized in software, it can easily be programmed into a Pyxos network device, a very cost-saving effect! Applications range from machine controls, smart-office solutions, and smart homes up to homes for the elderly and medical facilities, as well as everywhere else where power line (PL) exists.”

Inside the ZPS project (Source: Wolfgang Richter, Faranak M.Zadeh)

For more information about Pyxos technology, visit www.echelon.com.

This project, as well as others, was promoted by Circuit Cellar based on a 2007 agreement with Echelon.

Robot Nav with Acoustic Delay Triangulation

Building a robot is a rite of passage for electronics engineers. And thus this magazine has published dozens of robotics-related articles over the years.

In the March issue, we present a particularly informative article on robot navigation. Larry Foltzer tackles the topic of robot positioning with acoustic delay triangulation. It’s more of a theoretical piece than a project article, but we’re confident you’ll find it intriguing and useful.

Here’s an excerpt from Foltzer’s article:

“I decided to explore what it takes, algorithmically speaking, to make a robot that is capable of discovering its position on a playing field and figuring out how to maneuver to another position within the defined field of play. Later on, I will build a minimalist platform to test the algorithms’ performance.

In the interest of hardware simplicity, my goal is to use as few sensors as possible. I will use ultrasonic sensors to determine range to ultrasonic beacons located at the corners of the playing field and wheel-rotation sensors to measure distance traversed, if wheel-rotation rate times time proves to be unreliable.

From a software point of view, the machine must be able to determine robot position on a defined playing field, determine robot position relative to the target’s position, determine robot orientation or heading, calculate robot course change to approach target position, and periodically update current position and distance to the target. Because of my familiarity with Microchip Technology’s 8-bit microcontrollers and instruction sets, the PIC16F627A is my choice for the microcontrollers (mostly because I have them in my inventory).

To date, the four goals listed—in terms of algorithm development and code—are complete and are the main subjects of this article. Going forward, focus must now shift to the hardware side, including software integration to test beyond pure simulation.

SENSOR TECHNOLOGY & THE PLAYING FIELD
A brief survey of ultrasonic ranging sensors indicates that most commercially available units have a range capability of 20’ or less. This is for a sensor type that detects the echo of its own emission. However, in this case, the robot’s sensor will not have to detect its own echoes, but will instead receive the response to its query from an addressable beacon that acts like an active mirror. For navigation purposes, these mirrors are located at three of the four corners of the playing field. By using active mirrors or beacons, received signal strength will be significantly greater than in the usual echo ranging situation. Further, the use of the active mirror approach to ranging should enable expansion of the effective width of the sensor’s beam to increase the sensor’s effective field of view, reducing cost and complexity.

Taking the former into account, I decided the size of the playing field will be 16’ on a side and subdivided into 3” squares forming an (S × S) = (64 × 64) = (2^6 × 2^6) unit grid. I selected this size to simplify the binary arithmetic used in the calculations. For the purpose of illustration here, the target is considered to be at the center of the playing field, but it could very well be anywhere within the defined boundaries of the playing field.

Figure 1: Square playing field (Source: Larry Foltzer, CC260)

ECHOES TO POSITION VECTORS
Referring to Figure 1, the corners of the square playing field are labeled in clockwise order from A to D. Ultrasonic sonar transceiver beacons/active mirrors are placed at three of the corners of the playing field, at the corners marked A, B, and D.”

The issue in which this article appears will be available here in the coming days.

Solid-State Lighting Solutions Project

Electronics system control, “green design,” and energy efficiency are important topics in industry and academia. Here we look at a project from San Jose-based Echelon Corp.’s 2007 “Control Without Limits” design competition. Designers were challenged to implement Pyxos technology in innovative systems that reduced energy consumption. Daryl Soderman and Dale Stepps (of INTELTECH Corp.) took First Prize for their Solid State Lighting Solutions project.

The Pyxos chip is on the board (Source: Echelon & Inteltech)

So, how does it work? Using the Pyxos FT network protocol, this alternative lighting project is a cost-effective, energy-efficient solution that’s well suited for use in residential, commercial, or public buildings. You can easily embed the LED lighting and control system—which features SSL lighting, a user interface, motion detectors, and light sensors—in an existing network. In addition, you can control up to five zones in a building by using the system’s fully programmable, ESD-proof touchpad.

Another view of the Pyxos chip on the board (Source: Echelon & Inteltech)

For more information about Pyxos technology, visit www.echelon.com.

This winning project, as well as others, was promoted by Circuit Cellar based on a 2007 agreement with Echelon.

 

 

 

Improved Radiation Meter Webinar

Want to learn about Elektor’s improved radiation meter? On February 16, Elektor technical editor Thijs Beckers will host a webinar at element14 about the radiation meter, which is a DIY system that can measure alpha, beta, and gamma radiation.

(Improved Radiation Meter – Source: Elektor.com)

According to Elektor, all that’s required to measure radiation is “a simple PIN photodiode and a suitable preamplifier circuit.” The system features “an optimized preamplifier and a microcontroller-based counter. The microcontroller takes care of measuring time and pulse rate, displaying the result in counts per minute. The device we describe can be used with different sensors to measure gamma and alpha radiation. It is particularly suitable for long-term measurements and for examining weakly radioactive samples.”

It’s free to register at www.element14.com/community/events/3185.

Start Time: 2/16/12, 9:00 AM CST
End Time: 2/16/12, 10:00 AM CST
Location: Online event

Elektor International Media is the parent company of Circuit Cellar.