Seven-Controller EtherCAT Orchestra

When I first saw the Intel Industrial Control in Concert demonstration at Design West 2012 in San Jose, CA, I immediately thought of Kurt Vonnegut's 1952 novel Player Piano. The connection, of course, is that the player piano in the novel and Intel's Atom-based robotic orchestra both play preprogrammed music without human involvement. But the similarities end there. Vonnegut used the self-playing autopiano as a metaphor for a mechanized society in which wealthy industrialists replaced human workers with automated machines. In contrast, Intel's innovative system demonstrated engineering excellence and created a buzz in the already positive atmosphere at the conference.

In “EtherCAT Orchestra” (Circuit Cellar 264, July 2012), Richard Wotiz carefully details the awe-inspiring music machine that’s built around seven embedded systems, each of which is based on Intel’s Atom D525 dual-core microprocessor. He provides information about the system you can’t find on YouTube or hobby tech blogs. Here is the article in its entirety.

EtherCAT Orchestra

I have long been interested in automatically controlled musical instruments. When I was little, I remember being fascinated whenever I ran across a coin-operated electromechanical calliope or a carnival hurdy-gurdy. I could spend all day watching the many levers, wheels, shafts, and other moving parts as it played its tunes over and over. Unfortunately, the mechanical complexity of these machines and the expertise needed to maintain them make them increasingly rare. But, in our modern world of pocket-sized MP3 players, there's still nothing like seeing music created in front of you.

I recently attended the Design West conference (formerly the Embedded Systems Conference) in San Jose, CA, and ran across an amazing contraption that reminded me of old carnival music machines. The system was created for Intel as a demonstration of its Atom processor family, and was quite successful at capturing the attention of anyone walking by Intel’s booth (see Photo 1).

Photo 1—This is Intel’s computer-controlled orchestra. It may not look like any musical instrument you’ve ever seen, but it’s quite a thing to watch. The inspiration came from Animusic’s “Pipe Dream,” which appears on the video screen at the top. (Source: R. Wotiz)

The concept is based on Animusic’s music video “Pipe Dream,” which is a captivating computer graphics representation of a futuristic orchestra. The instruments in the video play when virtual balls strike against them. Each ball is launched at a precise time so it will land on an instrument the moment each note is played.

The demonstration, officially known as Intel's Industrial Control in Concert, uses high-speed pneumatic valves to fire practice paintballs at plastic targets of various shapes and sizes. The balls are made of 0.68”-diameter soft rubber, and they put on quite a show bouncing around while a song plays. Photo 2 shows one of the pneumatic firing arrays.

Photo 2—This is one of several sets of pneumatic valves. Air is supplied by the many tees below the valves and is sent to the ball-firing nozzles near the top of the photo. The corrugated hoses at the top supply balls to the nozzles. (Source: R. Wotiz)

The valves are the gray boxes lined up along the center. When each one opens, a burst of air is sent up one of the clear hoses to a nozzle to fire a ball. The corrugated black hoses at the top supply the balls to the nozzles. They’re fed by paintball hoppers that are refilled after each performance. Each nozzle fires at a particular target (see Photo 3).

Photo 3—These are the targets at which the nozzles from Photo 2 are aimed. If you look closely, you can see a ball just after it bounced off the illuminated target at the top right. (Source: R. Wotiz)

Each target has an array of LEDs that shows when it's activated and a piezoelectric sensor that detects a ball's impact. Unfortunately, slight variations in the pneumatics and the balls themselves mean that not every ball makes it to its intended target. To keep the music from sounding choppy and incomplete, the notes are triggered by a fixed timing sequence rather than by the ball-impact sensors. Think of it as a form of mechanical lip syncing. There's a noticeable pop when a ball is fired, so the system sounds something like a cross between a pinball machine and a popcorn popper. You may expect that to detract from the music, but I felt it added to the novelty of the experience.

The control system consists of seven separate embedded systems, all based on Intel’s Atom D525 dual-core microprocessor, on an Ethernet network (see Figure 1).

Figure 1—Each block across the top is an embedded system providing some aspect of the user interface. The real-time interface is handled by the modules at the bottom. They're controlled by the EtherCAT master at the center. (Source: R. Wotiz)

One of the systems is responsible for the real-time control of the mechanism. It communicates over an Ethernet control automation technology (EtherCAT) bus to several slave units, which provide the I/O interface to the sensors and actuators.

EtherCAT

EtherCAT is a fieldbus providing high-speed, real-time control over a conventional 100 Mb/s Ethernet hardware infrastructure. It's a relatively recent technology, originally developed by Beckhoff Automation GmbH and currently managed by the EtherCAT Technology Group (ETG), which was formed in 2003. You need to be an ETG member to access most of its specification documents, but general information is publicly available. According to the ETG website, membership is currently free to qualified companies. EtherCAT was also made part of the international standard IEC 61158, “Industrial Communication Networks—Fieldbus Specifications,” in 2007.

EtherCAT uses standard Ethernet data frames, but instead of each device decoding and processing an individual frame, the devices are arranged in a daisy chain, where a single frame is circulated through all devices in sequence. Any device with an Ethernet port can function as the master, which initiates the frame transmission. The slaves need specialized EtherCAT ports. A two-port slave device receives and starts processing a frame while simultaneously sending it out to the next device (see Figure 2).

Figure 2—Each EtherCAT slave processes incoming data as it sends it out the downstream port. (Source: R. Wotiz)

The last slave in the chain detects that there isn’t a downstream device and sends its frame back to the previous device, where it eventually returns to the originating master. This forms a logical ring by taking advantage of both the outgoing and return paths in the full-duplex network. The last slave can also be directly connected to a second Ethernet port on the master, if one is available, creating a physical ring. This creates redundancy in case there is a break in the network. A slave with three or more ports can be used to form more complex topologies than a simple daisy chain. However, this wouldn’t speed up network operation, since a frame still has to travel through each slave, one at a time, in both directions.

The EtherCAT frame, known as a telegram, can be transmitted in one of two different ways depending on the network configuration. When all devices are on the same subnet, the data is sent as the entire payload of an Ethernet frame, using an EtherType value of 0x88A4 (see Figure 3a).

Figure 3a—An EtherCAT frame uses the standard Ethernet framing format with very little overhead. The payload size shown includes both the EtherCAT telegram and any padding bytes needed to bring the total frame size up to 64 bytes, the minimum size for an Ethernet frame. b—The payload can be encapsulated inside a UDP frame if it needs to pass through a router or switch. (Source: R. Wotiz)

If the telegrams must pass through a router or switch onto a different physical network, they may be encapsulated within a UDP datagram using a destination port number of 0x88A4 (see Figure 3b), though this will affect network performance. Slaves do not have their own Ethernet or IP addresses, so all telegrams will be processed by all slaves on a subnet regardless of which transmission method was used. Each telegram contains one or more EtherCAT datagrams (see Figure 4).
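Wotiz's description maps naturally onto a few packed C structures. Here is a minimal sketch of the framing he describes; the 0x88A4 value comes straight from the article, while the 11-bit length/4-bit type split in the telegram header is an assumption based on publicly available ETG documentation and should be verified against the spec.

```c
#include <stdint.h>

#define ETHERTYPE_ETHERCAT 0x88A4u   /* also used as the UDP destination port */

#pragma pack(push, 1)
typedef struct {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t ethertype;       /* 0x88A4 when the telegram rides directly in an Ethernet frame */
} eth_header_t;

typedef struct {
    uint16_t len_type;        /* bits 0-10: telegram length in bytes,
                                 bits 12-15: type (1 = EtherCAT datagrams), assumed layout */
    /* one or more EtherCAT datagrams follow, then padding up to the
       64-byte minimum Ethernet frame size if needed */
} ecat_telegram_header_t;
#pragma pack(pop)
```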

Each datagram includes a block of data and a command indicating what to do with the data. The commands fall into three categories. Write commands copy the data into a slave’s memory, while read commands copy slave data into the datagram as it passes through. Read/write commands do both operations in sequence, first copying data from memory into the outgoing datagram, then moving data that was originally in the datagram into memory. Depending on the addressing mode, the read and write operations of a read/write command can both access the same or different devices. This enables fast propagation of data between slaves.

Each datagram contains addressing information that specifies which slave device should be accessed and the memory address offset within the slave to be read or written. A 16-bit value for each enables up to 65,535 slaves to be addressed, with a 65,536-byte address space for each one. The command code specifies which of four different addressing modes to use. Position addressing specifies a slave by its physical location on the network. A slave is selected only if the address value is zero. It increments the address as it passes the datagram on to the next device. This enables the master to select a device by setting the address value to the negative of the number of devices in the network preceding the desired device. This addressing mode is useful during system startup before the slaves are configured with unique addresses. Node addressing specifies a slave by its configured address, which the master will set during the startup process. This mode enables direct access to a particular device’s memory or control registers. Logical addressing takes advantage of one or more fieldbus memory management units (FMMUs) on a slave device. Once configured, a FMMU will translate a logical address to any desired physical memory address. This may include the ability to specify individual bits in a data byte, which provides an efficient way to control specific I/O ports or register bits without having to send any more data than needed. Finally, broadcast addressing selects all slaves on the network. For broadcast reads, slaves send out the logical OR of their data with the data from the incoming datagram.
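To make the commands and addressing modes concrete, here is a hedged sketch of a datagram header in the same style. The numeric command codes (APRD/APWR/APRW for position addressing, FPRD/FPWR/FPRW for node addressing, BRD/BWR/BRW for broadcast, LRD/LWR/LRW for logical addressing) follow commonly published EtherCAT documentation and should be confirmed against the ETG specification.

```c
#include <stdint.h>

/* Command codes as commonly published for EtherCAT (assumed; confirm against
   the ETG spec). Each addressing mode has read, write, and read/write forms. */
typedef enum {
    ECAT_CMD_NOP  = 0,
    ECAT_CMD_APRD = 1,  ECAT_CMD_APWR = 2,  ECAT_CMD_APRW = 3,  /* position (auto-increment) */
    ECAT_CMD_FPRD = 4,  ECAT_CMD_FPWR = 5,  ECAT_CMD_FPRW = 6,  /* node (configured address) */
    ECAT_CMD_BRD  = 7,  ECAT_CMD_BWR  = 8,  ECAT_CMD_BRW  = 9,  /* broadcast                 */
    ECAT_CMD_LRD  = 10, ECAT_CMD_LWR  = 11, ECAT_CMD_LRW  = 12  /* logical (via FMMU)        */
} ecat_cmd_t;

#pragma pack(push, 1)
typedef struct {
    uint8_t  cmd;        /* one of ecat_cmd_t */
    uint8_t  idx;        /* index the master uses to match returning datagrams */
    uint32_t address;    /* 16-bit position or node address plus 16-bit offset,
                            or a single 32-bit logical address */
    uint16_t len_flags;  /* bits 0-10: process-data length; upper bits: flags */
    uint16_t irq;        /* event-request bits a slave can set */
    /* process data (len bytes) follows, then a 16-bit working counter (WKC) */
} ecat_datagram_header_t;
#pragma pack(pop)
```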

Each time a slave successfully reads or writes data contained in a datagram, it increments the working counter value (see Figure 4).

Figure 4—An EtherCAT telegram consists of a header and one or more datagrams. Each datagram can be addressed to one slave, a particular block of data within a slave, or multiple slaves. A slave can modify the datagram’s Address, C, IRQ, Process data, and WKC fields as it passes the data on to the next device. (Source: R. Wotiz)

This enables the master to confirm that all the slaves it was expecting to communicate with actually handled the data sent to them. If a slave is disconnected, or its configuration changes so it is no longer being addressed as expected, then it will no longer increment the counter. This alerts the master to rescan the network to confirm the presence of all devices and reconfigure them, if necessary. If a slave wants to alert the master of a high-priority event, it can set one or more bits in the IRQ field to request the master to take some predetermined action.
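In master code, that check is little more than a comparison. The sketch below assumes a hypothetical helper that rescans and reconfigures the bus; the expected count is whatever the master computed when it built the datagram.

```c
#include <stdint.h>

extern void rescan_and_reconfigure_network(void);   /* hypothetical helper */

/* Sketch of the master-side working-counter check described above.
   expected_wkc is the total the master computed when it built the datagram,
   based on how many slaves should read and/or write it. */
void check_working_counter(uint16_t received_wkc, uint16_t expected_wkc)
{
    if (received_wkc != expected_wkc) {
        /* A slave is missing or no longer configured as expected:
           rescan the network and reconfigure before resuming. */
        rescan_and_reconfigure_network();
    }
}
```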

TIMING

Frames are processed in each slave by a specialized EtherCAT slave controller (ESC), which extracts incoming data and inserts outgoing data into the frame as it passes through. The ESC operates at a high speed, resulting in a typical data delay from the incoming to the outgoing network port of less than 1 μs. The operating speed is often dominated by how fast the master can process the data, rather than the speed of the network itself. For a system that runs a process feedback loop, the master has to receive data from the previous cycle and process it before sending out data for the next cycle. The minimum cycle time T_CYC is given by:

T_CYC = T_MP + T_FR + N × T_DLY + 2 × T_CBL + T_J

where T_MP is the master's processing time, T_FR is the frame transmission time on the network (80 ns per data byte plus 5 μs of frame overhead), N is the total number of slaves, T_DLY is the sum of the forward and return delay times through each slave (typically 600 ns), T_CBL is the cable propagation delay (5 ns per meter for Category 5 Ethernet cable), and T_J is the network jitter (determined by the master).[1]

A slave’s internal processing time may overlap some or all of these time windows, depending on how its I/O is synchronized. The network may be slowed if the slave needs more time than the total cycle time computed above. A maximum-length telegram containing 1,486 bytes of process data can be communicated to a network of 1,000 slaves in less than 1 ms, not including processing time.
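As a quick sanity check of the formula and the 1-ms figure above, the following sketch plugs in the reference's per-byte, per-slave, and per-meter constants. The frame overhead, cable length, and jitter values are illustrative assumptions, and the master's processing time is left out, as in the article's example.

```c
#include <stdio.h>

/* Plug the article's constants into T_CYC = T_MP + T_FR + N*T_DLY + 2*T_CBL + T_J.
   The 80 ns/byte, 5 us overhead, 600 ns/slave, and 5 ns/m figures come from the
   reference; frame overhead, cable length, and jitter are illustrative guesses. */
int main(void)
{
    double t_mp        = 0.0;                      /* master processing time (excluded here)  */
    double frame_bytes = 1486.0 + 42.0;            /* max process data + assumed header bytes */
    double t_fr        = frame_bytes * 80e-9 + 5e-6;
    int    n           = 1000;                     /* slaves on the network                   */
    double t_dly       = 600e-9;                   /* forward + return delay per slave        */
    double t_cbl       = 100.0 * 5e-9;             /* assume 100 m of Category 5 cable        */
    double t_j         = 1e-6;                     /* assumed network jitter                  */

    double t_cyc = t_mp + t_fr + n * t_dly + 2.0 * t_cbl + t_j;
    printf("estimated minimum cycle time: %.0f us\n", t_cyc * 1e6);   /* roughly 730 us */
    return 0;
}
```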

Synchronization is an important aspect of any fieldbus. EtherCAT uses a distributed clock (DC) with a resolution of 1 ns located in the ESC on each slave. The master can configure the slaves to take a snapshot of their individual DC values when a particular frame is sent. Each slave captures the value when the frame is received by the ESC in both the outbound and returning directions. The master then reads these values and computes the propagation delays between each device. It also computes the clock offsets between the slaves and its reference clock, then uses these values to update each slave’s DC to match the reference. The process can be repeated at regular intervals to compensate for clock drift. This results in an absolute clock error of less than 1 μs between devices.
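The arithmetic the master performs can be sketched for a simple daisy chain as follows. This is a simplification of the real ESC procedure, which works from per-port receive-time registers and handles branching topologies; all names here are hypothetical.

```c
#include <stdint.h>

/* Simplified distributed-clock arithmetic for a plain daisy chain.
   o = slave-local timestamp when the measurement frame passed on the way out,
   r = slave-local timestamp when it passed on the way back. Index 0 is the
   slave whose clock serves as the reference. */
typedef struct {
    int64_t o;   /* outbound capture, ns */
    int64_t r;   /* return capture, ns   */
} dc_capture_t;

/* One-way propagation delay from the reference slave to slave i: half the
   difference between the two devices' round-trip residence times. */
int64_t prop_delay_ns(const dc_capture_t *c, int i)
{
    return ((c[0].r - c[0].o) - (c[i].r - c[i].o)) / 2;
}

/* Offset to add to slave i's clock so it matches the reference: in reference
   time the outbound frame reached slave i at c[0].o + prop_delay, while
   slave i's own clock read c[i].o at that moment. */
int64_t clock_offset_ns(const dc_capture_t *c, int i)
{
    return (c[0].o + prop_delay_ns(c, i)) - c[i].o;
}
```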

MUSICAL NETWORKS

The orchestra's EtherCAT network is built around a set of modules from National Instruments. The virtual conductor is an application running under LabVIEW Real-Time on a CompactRIO controller, which functions as the master device. It communicates with four slaves containing a mix of digital and analog I/O and three slaves consisting of servo motor drives. Both the master and the I/O slaves contain an FPGA to implement any custom local processing that's necessary to keep the data flowing. The system runs at a cycle time of 1 ms, which provides enough timing resolution to keep the balls flying on cue.

I hope you’ve enjoyed learning about EtherCAT—as well as the fascinating musical device it’s used in—as much as I have.

Author’s note: I would like to thank Marc Christenson of SISU Devices, creator of this amazing device, for his help in providing information on the design.

REFERENCE

[1] National Instruments Corp., “Benchmarks for the NI 9144 EtherCAT Slave Chassis,” http://zone.ni.com/devzone/cda/tut/p/id/10596.

RESOURCES

Animusic, LLC, www.animusic.com.

Beckhoff Automation GmbH, “ET1100 EtherCAT Slave Controller Hardware Data Sheet, Version 1.8”, 2010, www.beckhoff.com/english/download/ethercat_development_products.htm.

EtherCAT Technology Group, “The Ethernet Fieldbus”, 2009, www.ethercat.org/pdf/english/ETG_Brochure_EN.pdf.

Intel, Atom microprocessor, www.intel.com/content/www/us/en/processors/atom/atom-processor.html.

SOURCES

Atom D525 dual-core microprocessor

Intel Corp.

www.intel.com

LabVIEW Real-Time modules, CompactRIO controller, and EtherCAT devices

National Instruments Corp.

www.ni.com

Circuit Cellar 264 is now on newsstands, and it’s available at the CC-Webshop.

Issue 264: A Case for the DIY Electronics Fix

Most of today’s expensive electronics systems are engineered to be left alone—meaning, the manufacturer doesn’t want you opening, servicing, or tweaking the products on your own. But that doesn’t mean intelligent, inquisitive engineers shouldn’t give modern electronics gadgets a good hack. The rewards tend to outweigh the drawbacks. As Steve Ciarcia argues in Circuit Cellar 264 (July), you stand to learn a lot by looking inside electronics systems, especially broken ones. Even if you can’t fix them, you can pull out the components and use them in future projects.

In “Fix It or Toss It?” he writes:

No prophetic diatribes or deep philosophical insights this month. Just the musings of an old guy who apparently doesn’t know when to throw in the towel. Let me explain.

I have a friend with a couple LCD monitors he purchased about two years ago. Perhaps due to continuous duty operation (only interrupted by automatic “Sleep mode”), both were now exhibiting some flakiness, particularly when powering up from “sleep.” More importantly, if power was completely shut down, as in a power failure, they wouldn’t come back on at all without manual intervention. He asked if they could be repaired or must they be replaced.

Since I remembered something about a few manufacturers who’d had a bunch of motherboard problems a while back due to bad electrolytic capacitors, I suspected a power supply problem. Of course, agreeing to look into the problem and figuring out how to get inside the monitors was a whole different issue. Practically all of today’s electronics are not meant to be opened or serviced internally at all. Fortunately, my sledgehammer disassembly techniques weren’t so bad that I couldn’t reassemble them. In the process, I found several bulging and leaking capacitors on the power supply board. After replacing the capacitors, the monitors came right up with no problems.

Power supplies just seem to have it out for me. Recently, I had a wireless router stop working and, after a little diagnosing, I determined that its power supply (an external wall-wart) had failed. While hardly worth my time, I was curious, so I cracked open the sealed case to see just how complicated it was. Sure enough, replacing one scorched electrolytic capacitor and gluing the case back together put me back in business.

All this got me thinking about the relative value of various electronic devices. What is the replace/repair decision line? These $200 high-tech electronic LCD monitors failed because of $3 worth of old-tech components that I was fortunately able to fix. The time it took to do the repair has some value, but so does the time it takes to shop for and purchase a replacement. There must be better monitors these days for the same price. Should I have told him to toss them and use the opportunity to upgrade?

It's interesting to consider the type of person who repairs stuff like this (being an EE with a fully equipped lab doesn't hurt either). I mean, I do it primarily because I like knowing how things work. Okay, so I'm getting a little carried away after fixing a couple burnt capacitors, but there's still an incredible sense of satisfaction in being able to put something back together and having it work. Since I was a kid, dissecting circuits and equipment has helped me understand the design choices that were made, and my curiosity naturally led me to engineering.

Now, I recognize that people like me who repair their own electronics for curiosity or adventure are very much in the minority. So, what about the average person with a failed piece of $200 electronics? For them, the only goal is getting the functionality back as soon as possible. Do they go to a repair service where it takes longer and involves a couple trips? Worse yet, some things just can’t be repaired, and the bad news then is having both the repair “inspection” cost and the replacement cost. I’m guessing that in 99% of typical cases, the no-brainer decision is to toss the failed unit and buy a new one—without ever giving me a chance to tear it apart and play with it.

Let’s face it. Taking modern equipment apart to make even simple repairs is next to impossible. The manufacturers use every trick in the design book to minimize the cost of the goods. This means leaving out features that might make end-user repair easier. Cases that snap together (once)—or worse, are heat-welded together—are cheaper than cases with screws or latches. Most board electronics are custom-labeled surface mount devices, everything uses custom connectors, and the short cabling between boards has no slack to swing out subassemblies for access, and so forth. You couldn’t even fit a scope probe inside most of this stuff if you tried. Sure, some manufacturers do still put component reference designators in the silkscreen, but I suspect it’s so they can repair subassemblies on their production line before final assembly, not make it easier for me to poke around.

Anyway, like I said, there’s no prophetic conclusion to be drawn from all of this. I fix stuff because I enjoy the challenge and I usually learn something from it. Even if I can’t repair the item, I usually keep some of the useful components and/or subassemblies for experimenter one-off projects or proof-of-concept prototypes. You never know when something in the junk box might prove useful.

Circuit Cellar 264 (July 2012) is now available on newsstands and at the Circuit Cellar Webshop.

Issue 264: EQ Answers

The answers to the Circuit Cellar 264 (July) Engineering Quotient are now available. The problems and answers are listed below, along with a schematic.

Problem 1a—Is it possible to transmit on-off (DC) signals between two pieces of equipment in both directions simultaneously on the same wire, in much the same way that telephones do for audio?

Source: D. Tweed, CC264

Answer 1a—Why not? Hybrids work just as well at DC as they do for audio; you just need a receiver with balanced inputs, like an RS-422 buffer:

All resistors are the same value (e.g., 4,700 Ω) and the transmit driver needs to be a voltage source (low impedance).

If the transmitter switches between, say, 0 V and 5 V, the opposite receiver will see a voltage differential of 0 V and 2.5 V, respectively, while the local receiver will just see 0 V.

For long lines, you’ll probably want to use lower resistances and you’ll want to limit the slew rate of the transmitter so that the receiver doesn’t produce glitches on the transitions of the local transmitter.

If the RS-422 receiver is replaced with an op-amp differential amplifier with a gain of 2, then any analog voltage transmitted by one end will be reproduced at the other end.
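One way to see why the local transmit signal cancels is to model the resistive hybrid numerically. The sketch below assumes ideal voltage-source drivers, equal resistors, and a lossless wire, which reproduces the 0-V and 2.5-V figures above.

```c
#include <stdio.h>

/* Idealized model of the resistive hybrid: equal resistors, stiff voltage-source
   drivers, lossless wire. The shared wire sits at the average of the two drive
   voltages; each receiver compares that node against half of its own transmit
   voltage, so its own signal cancels and only half the far signal remains. */
static double rx_differential(double v_local_tx, double v_far_tx)
{
    double wire = (v_local_tx + v_far_tx) / 2.0;   /* shared line node      */
    double ref  = v_local_tx / 2.0;                /* local divider tap     */
    return wire - ref;                             /* equals v_far_tx / 2.0 */
}

int main(void)
{
    printf("%.2f V\n", rx_differential(0.0, 5.0)); /* far end high: 2.50 V  */
    printf("%.2f V\n", rx_differential(5.0, 5.0)); /* both high:    2.50 V  */
    printf("%.2f V\n", rx_differential(5.0, 0.0)); /* far end low:  0.00 V  */
    return 0;
}
```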

Problem 1b—But doesn’t a true hybrid use transformers, or at least some tricky transformer simulation with op amps to ensure the transmitted signal does not appear on the receive port?

Answer 1b—No. A hybrid is just a bridge circuit, with one arm of the bridge replaced by the line and the termination at the far end. The transmit signal is applied to two opposite corners of the bridge and the receive signal is taken from the other two corners.

In order to provide the Tx/Rx isolation, the bridge must be balanced, which, in the example above, means that the lower resistor on each side must match the impedance of the line/far end combination. For DC and short lines, a simple resistor suffices. At audio frequencies and with the long unshielded twisted pairs used in telephony, a more complex matching impedance is required.

Transformers are used only because they're the easiest way (and the only passive way) to get a balanced drive and/or receive signal — the transmit driver and receiver cannot share a ground. In order to mass-produce phones that were dirt cheap, yet simple and reliable, the phone company figured out how to use a multi-winding transformer to provide both the isolation and the balanced/unbalanced conversion in both directions, usually with a single resistor and capacitor to provide the line matching. As noted, modern electronic phones use active electronics to achieve the same things.

As always, the theory is simple, but the practical implementations can get complicated.

Problem 2a—The conventional way to calculate the magnitude (length) of a vector is to take the square root of the sum of the squares of its components. On small processors, this can be somewhat difficult (especially the square root operation), and various approximations are used instead.

One approximation that works surprisingly well for 2-D vectors and complex numbers is to take the absolute values of the two components, compare them, then add 1/3 of the smaller to the larger.

What is the maximum error using this method?

Answer 2a—If we restrict the discussion to unit vectors at various angles A, the x component is cos(A) and the y component is sin(A), and the correct magnitude is 1.

Furthermore, let’s concentrate on angles between 0 and 45° — then we know that both cos(A) and sin(A) are positive and that cos(A) > sin(A). (The absolute value and compare operations provide the symmetry that covers the rest of the unit circle.) The approximation then gives the result

Magnitude = cos(A) + sin(A)/3

Graphing this expression shows that it is most negative (0.943) at 45° and most positive (1.054) at approximately 18.4° (the actual angle is given by atan(1/3) — can you show why?). The peak error is therefore –5.7%, +5.4%.
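If you want to reproduce those numbers, a brute-force scan over the first octant is enough. This is a quick numerical check rather than anything from the original answer.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Scan the first octant (symmetry covers the rest) and record the worst-case
   error of the "larger + smaller/3" estimate against true unit vectors. */
int main(void)
{
    double worst_lo = 0.0, worst_hi = 0.0;

    for (double a = 0.0; a <= 45.0; a += 0.01) {
        double x   = cos(a * M_PI / 180.0);        /* larger component    */
        double y   = sin(a * M_PI / 180.0);        /* smaller component   */
        double err = (x + y / 3.0) - 1.0;          /* true magnitude is 1 */
        if (err < worst_lo) worst_lo = err;
        if (err > worst_hi) worst_hi = err;
    }
    printf("error range: %+.1f%% to %+.1f%%\n", 100.0 * worst_lo, 100.0 * worst_hi);
    return 0;
}
```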

Problem 2b—Is there a similar formula that gives even better results?

Answer 2b—Yes. One more multiplication operation gives a result that has less than 4% error:

Magnitude = 0.960433 × max(|x|, |y|) + 0.397826 × min(|x|, |y|)

This function is most negative at 0° and 45°, and most positive at 22.5°. The error is ± 3.96%. This form is well-suited to DSPs that have multiply-accumulate units. The two constants can be expressed as 62943/65536 and 26072/65536, respectively.
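Here is one way the fixed-point form might look on a small processor, assuming 16-bit signed components and using the two constants given above; the rounding term is an added refinement, not part of the original answer.

```c
#include <stdint.h>
#include <stdlib.h>

/* Fixed-point form of the refined estimate, using the 16-bit constants above
   (62943/65536 and 26072/65536). One compare, two multiplies, and one shift;
   a good fit for a multiply-accumulate unit. */
uint16_t mag_approx(int16_t x, int16_t y)
{
    uint32_t ax = (uint32_t)abs(x);
    uint32_t ay = (uint32_t)abs(y);
    uint32_t hi = (ax > ay) ? ax : ay;
    uint32_t lo = (ax > ay) ? ay : ax;

    /* +32768 rounds instead of truncating; the sum fits comfortably in 32 bits */
    return (uint16_t)((62943u * hi + 26072u * lo + 32768u) >> 16);
}
```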

Contributor: David Tweed

Q&A: Ayse Kivilcim Coskun (Engineer, BU)

Ayse Kivilcim Coskun's research on 3-D stacked systems has gained recognition in academia, and it could change the way electrical engineers and chip manufacturers think about energy efficiency for years to come. In a recent interview, the Boston University engineering professor briefed us on her work and explained how she came to focus on the topics of green computing and 3-D systems.

Boston University professor Ayse Kivilcim Coskun

The following is an excerpt from an interview that appears in Circuit Cellar 264 (July 2012), which is currently on newsstands.

NAN: When did you first become interested in computer engineering?

AYSE: I've been interested in electronics since high school and in science and physics since I was little. My undergraduate major was microelectronics engineering. I actually did not start studying computer engineering officially until graduate school at the University of California, San Diego. However, during my undergraduate education, I started taking programming, operating systems, logic design, and computer architecture classes, which piqued my interest in the area.

NAN: Tell us about your teaching position at the Electrical and Computer Engineering Department at Boston University (BU).

AYSE: I have been an assistant professor at BU for almost three years. I teach Introduction to Software Engineering to undergraduates and Introduction to Embedded Systems to graduate students. I enjoy that both courses develop computational thinking as well as hands-on implementation skills. It's great to see the students learning to build systems and having fun while they learn.

NAN: As an engineering professor, you have some insight into what excites future engineers. What “hot topics” currently interest your students?

AYSE: Programming and software design in general are certainly attracting a lot of interest. Our introductory software engineering class is attracting a growing number of students across the College of Engineering every year. DSP, image processing, and security are also hot topics among the students. Our engineering students are very keen on seeing a working system at the end of their class projects. Some project examples from my embedded systems class include embedded low-power gaming consoles, autonomous toy vehicles, and embedded systems focusing on healthcare or security applications …

NAN: How did you come to focus on energy efficiency and thermal challenges?

AYSE: Energy efficiency has been a hot topic for embedded systems for several decades, mainly due to battery-life restrictions. With the growth of computing resources at all levels—from embedded to large-scale computers, and following the move to data centers and the cloud—energy efficiency is now a major bottleneck for any computing system. The focus on energy efficiency and temperature management among the academic community was increasing when I started my PhD. I got especially interested in thermally induced problems, as I also had some background on fault tolerance and reliability topics. I thought it would be interesting to leverage job scheduling to improve thermal behavior, and my advisor liked the idea too. Temperature-aware job scheduling in multiprocessor systems was the first energy-efficiency-related project I worked on.

NAN: In May 2011, you were awarded the A. Richard Newton Graduate Scholarship at the Design Automation Conference (DAC) for a joint project, “3-D Systems for Low-Power High-Performance Computing.” Tell us about the project and how you became involved.

AYSE: My vision is that 3-D stacked systems—where multiple dies are stacked together into a single chip—can provide significant benefits in energy efficiency. However, there are design, modeling, and management challenges that need to be addressed in order to simultaneously achieve energy efficiency and reliability. For example, stacking enables putting DRAM and processor cores together on a single 3-D chip. This means we can cut down the memory access latency, which is the main performance bottleneck for a lot of applications today. This gain in performance could be leveraged to run processors at a lower speed or use simpler cores, which would enable low-power, high-performance computing. Or we can use the reduction in memory latency to boost performance of single-chip multicore systems. Higher performance, however, means higher power and temperature. Thermal challenges are already pressing concerns for 3-D design, as cooling these systems is difficult. The project focuses on simultaneously analyzing performance, power, and temperature and using this analysis to design system management methods that maximize performance under power or thermal constraints.

I started researching 3-D systems during a summer internship at the Swiss Federal Institute of Technology (EPFL) in the last year of my PhD. Now, the area is maturing and there are even some 3-D prototype systems being designed. I think it is an exciting time for 3-D research, as we'll start seeing a larger pool of commercial 3-D stacked chips in a few years. The A. Richard Newton scholarship enabled us to do the preliminary research and collect results. Following the scholarship, I also received a National Science Foundation (NSF) CAREER award for designing innovative strategies for modeling and management of 3-D stacked systems.

The entire interview appears in Circuit Cellar 264  (July 2012).

Electronics Engineering Crossword (Issue 264)

The answers to Circuit Cellar's July Electronics Engineering crossword puzzle are now available.

Issue 264 crossword answers

Across

3.     IONIZATION—Occurs when an atom or molecule gains either a positive or negative charge

4.     ANDROIDPHONE—In “Audio-Enhanced Touch Sensors” (Circuit Cellar, May 2012), Matt Oppenheim said one of the stumbling blocks of using this for data collection is that it will try to recharge itself whenever you connect it to a USB port. [two words]

6.     FOLTZER—Circuit Cellar interviewee who participated in Motorola’s IEEE-802 MAC subcommittee on token-passing access control methods. [two words]

13.   COORDINATEDUNIVERSALTIME—A method of keeping the world in sync [three words]

14.   CICCHINELLI—Circuit Cellar published his book about a commonly used computer programming language in 2010

17.   HACKSPACE—i.e., “a circuit cellar”

18.   CHIP—A basic component of an electronic device

19.   VOLTAGEREFERENCE—National Semiconductor’s LM385 series is an example of an adjustable one. [three words]

Down

1.     DOPPLEREFFECT—A phenomenon that occurs when a vehicle sounding a siren approaches, passes, and recedes from an observer [two words]

2.     WAVEFORMGENERATOR—A device that produces electronic signals [two words]

5.     ANGSTROM—Equals 1/10,000,000,000 m

7.     ISOTHERMALPROCESS— ΔT = 0 [two words]

8.     COMPARATOR—A device that compares two voltages or currents and switches its output to indicate which is larger

9.     NSPE—Organization formed in 1934 by bridge engineer David Steinman

10.   EMI—Acronym; common cause of electronic data corruption and subject of Novacek’s December 2011 Circuit Cellar article [two words]

11.   PIEZOELECTRICITY—Occurs when crystals acquire a charge after being compressed, twisted, or distorted (e.g., quartz)

12.   WIDLAR—American electrical engineer (1937–1991); IC pioneer

15.   LEDDRIVER—Circuitry that regulates or provides power to a light source [two words]

16.   JOULE—Symbolized by 10th letter of the alphabet

20.   RTOS—Hint: acronym. Unscramble the following: IETEORGSEPSMNYMRLTIAEAT