Q&A: Andrew Godbehere, Imaginative Engineering

Engineers are inherently imaginative. I recently spoke with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley, about how his ideas become realities, his design process, and his dream project. —Nan Price, Associate Editor

Andrew Godbehere

NAN: You are currently working toward your Electrical Engineering PhD at the University of California, Berkeley. Can you describe any of the electronics projects you’ve worked on?

ANDREW: In my final project at Cornell University, I worked with a friend of mine, Nathan Ward, to make wearable wireless accelerometers and find some way to translate a dancer’s movement into music, in a project we called CUMotive. The computational core was an Atmel ATmega644V connected to an Atmel AT86RF230 802.15.4 wireless transceiver. We designed the PCBs, including the transmission line to feed the ceramic chip antenna. Everything was hand-soldered, though I recommend using an oven instead. We used Kionix KXP74 tri-axis accelerometers, which we encased in a lot of hot glue to create easy-to-handle boards and to shield them from static.

This is the central control belt-pack to be worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

The dancer had four accelerometers connected to a belt pack with an Atmel chip and transceiver. On the receiver side, a musical instrument digital interface (MIDI) communicated with a synthesizer. (Design details are available at http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2007/njw23_abg34/index.htm.)

I was excited about designing PCBs for 802.15.4 radios and making them work. I was also enthusiastic about trying to figure out how to make some sort of music with the product. We programmed several possibilities, one of which was a sort of theremin; another was a sort of drum kit. I found that this was the more difficult part—not just the making, but the making sense.

When I got to Berkeley, my work switched to the theoretical. I tried to learn everything I could about robotic systems and how to make sense of them and their movements.

NAN: Describe the real-time machine vision-tracking algorithm and integrated vision system you developed for the “Are We There Yet?” installation.

ANDREW: I’ve always been interested in using electronics and robotics for art. Having a designated emphasis in New Media on my degree, I was fortunate enough to be invited to help a professor on a fascinating project.

This view of the Yud Gallery is from the installed camera with three visitors present. Note the specular reflections on the floor. They moved throughout the day with the sun. This movement needed to be discerned from a visitor’s typical movement.

For the “Are We There Yet?” installation, we used a PointGrey FireFlyMV camera with a wide-angle lens. The camera was situated a couple hundred feet away from the control computer, so we used a USB-to-Ethernet range extender to communicate with the camera.

We installed a color camera in a gallery in the Contemporary Jewish Museum in San Francisco, CA. We used Meyer Sound speakers with a high-end controller system, which enabled us to “position” sound in the space and to sweep audio tracks around at (the computer’s programmed) will. The Meyer Sound D-Mitri platform was controlled by the computer with Open Sound Control (OSC).

This view of the Yud Gallery is from the perspective of the computer running the analysis. This is a probabilistic view, where the brightness of each pixel represents the “belief” that the pixel is part of an interesting foreground object, such as a pedestrian. Note the hot spots corresponding nicely with the locations of the visitors in the image above.

The hard work was to then program the computer to discern humans from floors, furniture, shadows, sunbeams, and cloud reflections. The gallery had many skylights, which made the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful to determine whether detected color-perturbance was human or not.

Once complete, the experience of the installation was beautiful, enchanting, and maybe a little spooky. The audio tracks were all questions (e.g., “Are we there yet?”) and they were always spoken near you, as if addressed to you. They responded to your movement in a way that felt to me like dancing with a ghost. You can watch videos about the installation at www.are-we-there-yet.org.

The “Are We There Yet?” project opens itself up to possible use as an embedded system. I’ve been told that the software I wrote works on iOS devices; the start-up behind Romo (www.kickstarter.com/projects/peterseid/romo-the-smartphone-robot-for-everyone) was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud-tracking system using a Raspberry Pi and a reasonable webcam.

I may even create an automatic cloud-tracking system. Watching clouds is a leisurely pastime we think of as the mark of a dreamer, but I think computers could be capable of that kind of abstraction, too.

NAN: Some of the projects you’ve contributed to focus on switched linear systems, hybrid systems, wearable interfaces, and computation and control. Tell us about the projects and your research process.

ANDREW: I think my research is all driven by imagination. I try to imagine a world that could be, a world that I think would be nice, or better, or important. Once I have an idea that captivates my imagination in this way, I have no choice but to try to realize the idea and to seek out the knowledge necessary to do so.

For the wearable wireless accelerometers, it began with the thought: Wouldn’t it be cool if dance and music were inherently connected the way we try to make it seem when we’re dancing? From that thought, the designs started. I thought: The project has to be wireless and low power, it needs accelerometers to measure movement, it needs a reasonable processor to handle the data, it needs MIDI output, and so forth.

My switched linear systems research came about in a different way. As I was in class learning about theories regarding stabilization of hybrid systems, I thought: Why would we do it this complicated way, when I have this reasonably simple intuition that seems to solve the problem? I happened to see the problem a different way as my intuition was trying to grapple with a new concept. That naive accident ended up as a publication, “Stabilization of Planar Switched Linear Systems Using Polar Coordinates,” which I presented in 2010 at Hybrid Systems: Computation and Control (HSCC) in Stockholm, Sweden.

NAN: How did you become interested in electronics?

ANDREW: I always thought things that moved seemingly of their own volition were cool and inherently attention-grabbing. I would think: Did it really just do that? How is that possible?

Andrew worked on this project when computers still had parallel ports. a—This photo shows manually etched PCB traces for a digital EKG (the attempted EEG) with 8-bit LED optoisolation. The rainbow cable connects to a computer’s parallel port. The interface code was written in C++ and ran on DOS. b—The EKG circuitry and digitizer are shown on the left. The 8-bit parallel computer interface is on the right. Connecting the two boards is an array of coupled LEDs and phototransistors, encased in heat shrink tubing to shield against outside light.

Electric rally-car tracks and radio-controlled cars were a favorite of mine. I hadn’t really thought about working with electronics or computers until middle school. Before that, I was all about paleontology. Then, I saw an episode of Scientific American Frontiers, which featured Alan Alda excitedly interviewing RoboCup contestants. Watching RoboCup [a soccer game involving robotic players], I was absolutely enchanted.

While my childhood electronic toys moved and somehow acted as their own entities, they were puppets to my intentions. Watching RoboCup, I knew these robots were somehow making their own decisions on-the-fly, magically making beautiful passes and goals not as puppets, but as something more majestic. I didn’t know about the technical blood, sweat, and tears that went into it all, so I could have these romantic fantasies of what it was, but I was hooked from that moment.

That spurred me to apply to a specialized science and engineering high school program. It was there that I was fortunate enough to attend a fabulous electronics class (taught by David Peins), where I learned the basics of electronics, the joy of tinkering, and even PCB design and assembly (drilling included). I loved everything involved. Even before I became academically invested in the field, I fell in love with the manual craft of making a circuit.

NAN: Tell us about your first design.

ANDREW: Once I’d learned something about designing and making circuits, I jumped in whole-hog, to a comical degree. My very first project without any course direction was an electroencephalograph!

I wanted to make stuff move on my computer with my brain, the obvious first step. I started with a rough design and worked on tweaking parameters and finding components.

In retrospect, I think that first attempt was actually an electromyograph that read the movements of my eye muscles. And it definitely was an electrocardiograph. Success!

Someone suggested that it might not be a good idea to have a power supply hooked up in any reasonably direct path with your brain. So, in my second attempt, I digitized the signal on the brain side and hooked it up to eight white LEDs. On the other side, I had eight phototransistors coupled with the LEDs and covered with heat-shrink tubing to keep out outside light. That part worked, and I was excited about it, even though I was having some trouble properly tuning the op-amps in that version.

NAN: Describe your “dream project.”

ANDREW: Augmented reality goggles. I’m dead serious about that, too. If given enough time and money, I would start making them.

I would use some emerging organic light-emitting diode (OLED) technology. I’m eyeing the start-up MicroOLED (www.microoled.net) for its low-power “near-to-eye” display technologies. They aren’t available yet, but I’m hopeful they will be soon. I’d probably hook that up to a Raspberry Pi SBC, which is small enough to be worn reasonably comfortably.

Small, high-resolution cameras have proliferated with modern cell phones, which could easily be mounted into the sides of goggles, driving each OLED display independently. Then, it’s just a matter of creativity for how to use your newfound vision! The OpenCV computer vision library offers a great starting point for applications such as face detection, image segmentation, and tracking.

Google Glass is starting to get some notice as a sort of “heads-up” display, but in my opinion, it doesn’t go nearly far enough. Here’s the craziest part—please bear with me—I’m willing to give up directly viewing the world with my natural eyes: I would wear full field-of-vision goggles whose high-resolution OLED displays show stereoscopic views from two high-resolution smartphone-style cameras. (At least until the technology gets better, as described in Rainbows End by Vernor Vinge.) I think, for this version, all the components are just now becoming available.

Augmented reality goggles would do a number of things for vision and human-computer interaction (HCI). First, 3-D overlays in the real world would be possible.

Crude example: I’m really terrible with faces and names, but computers are now great with that, so why not get a little help and overlay nametags on people when I want? Another fascinating thing for me is that this concept of vision abstracts the body from the eyes. So, you could theoretically connect to the feed from any stereoscopic cameras around (e.g., on an airplane, in the Grand Canyon, or on the back of some wild animal), or you could even switch points of view with your friend!

Perhaps reality goggles are not commercially viable now, but I would unabashedly use them for myself. I dream about them, so why not make them?

Issue 280: EQ Answers

PROBLEM 1
What is the key difference between the following two C functions?

#include <stdbool.h>

#define VOLTS_FULL_SCALE 5.000
#define KPA_PER_VOLT 100.0
#define KPA_THRESHOLD 200.0

/* adc_reading is a value between 0 and 1
 */
bool test_pressure (float adc_reading)
{
  float voltage = adc_reading * VOLTS_FULL_SCALE;
  float pressure = voltage * KPA_PER_VOLT;

  return pressure > KPA_THRESHOLD;
}

bool test_pressure2 (float adc_reading)
{
  float voltage_threshold = KPA_THRESHOLD / KPA_PER_VOLT;
  float adc_threshold = voltage_threshold / VOLTS_FULL_SCALE;

  return adc_reading > adc_threshold;
}

ANSWER 1
The first function, test_pressure(), converts the ADC reading to engineering units before making the threshold comparison. This is a direct, obvious way to implement such a function.

The second function, test_pressure2(), converts the threshold value to an equivalent ADC reading, so that the two can be compared directly.

The key difference is in performance. The first function requires that arithmetic be done on each reading before making the comparison. However, the calculations in the second function can all be performed at compile time, which means that the only run-time operation is the comparison itself.
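To make that concrete, here is a sketch (added for illustration, not part of the original answer) of what the compiler effectively reduces test_pressure2() to after constant folding, since (KPA_THRESHOLD / KPA_PER_VOLT) / VOLTS_FULL_SCALE = (200.0 / 100.0) / 5.000 = 0.4:

bool test_pressure2_folded (float adc_reading)
{
  /* (200.0 / 100.0) / 5.000 folds to the constant 0.4 at build time */
  return adc_reading > 0.4f;
}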

PROBLEM 2
How many NAND gates would it take to implement the following translation table? There are five inputs and eight outputs. You may consider an inverter to be a one-input NAND gate.

Inputs      Outputs
A B C D E   F G H I J K L M
1 1 1 1 1   0 0 0 0 1 1 1 1
0 1 1 1 1   0 0 0 0 0 0 1 1
0 0 1 1 1   0 0 0 0 0 0 0 0
0 0 0 1 1   1 0 0 0 0 0 1 1
0 0 0 0 1   1 0 0 0 1 1 1 1


ANSWER 2
First of all, note that there are really only four inputs and three unique outputs for this function, since input E is always 1 and outputs GHI are always 0. The only real outputs are F, plus the groups JK and LM.

Since the other 27 input combinations haven’t been specified, we can take the output values associated with all of them as “don’t care.”

The output F is simply the inversion of input C.

The output JK is high only when A is high or D is low.

The output LM is high except when B is low and C is high.

Therefore, the entire function can be realized with a total of five gates:

[Figure: the five-gate realization: F = NAND(C); J = K = NAND(NAND(A), D); L = M = NAND(NAND(B), C), where each one-input NAND acts as an inverter]
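For readers who want to check the result, here is a short C program (a sketch added for this write-up, not from the original answer) that evaluates the five-gate realization against the five specified rows of the table. Since G, H, and I are always 0, K mirrors J, and M mirrors L, only F, J, and L need to be computed:

#include <stdio.h>

static int nand2(int a, int b) { return !(a && b); }
static int inv(int a) { return !a; } /* one-input NAND */

int main(void)
{
  /* Each row: {A, B, C, D, expected F, expected J/K, expected L/M} */
  const int rows[5][7] = {
    {1,1,1,1, 0,1,1},
    {0,1,1,1, 0,0,1},
    {0,0,1,1, 0,0,0},
    {0,0,0,1, 1,0,1},
    {0,0,0,0, 1,1,1},
  };

  for (int i = 0; i < 5; i++) {
    int A = rows[i][0], B = rows[i][1], C = rows[i][2], D = rows[i][3];
    int F = inv(C);           /* gate 1: F is the inversion of C */
    int J = nand2(inv(A), D); /* gates 2-3: high when A is high or D is low */
    int L = nand2(inv(B), C); /* gates 4-5: high except when B is low and C is high */
    printf("row %d: F=%d (want %d), JK=%d (want %d), LM=%d (want %d)\n",
           i + 1, F, rows[i][4], J, rows[i][5], L, rows[i][6]);
  }
  return 0;
}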

PROBLEM 3
Quick history quiz: Which three companies collaborated to create the initial standard for the Ethernet LAN?

ANSWER 3
The original 10-Mbps Ethernet standard was jointly developed by Digital Equipment Corp. (DEC), Intel, and Xerox. It was released in November 1980, and was commonly referred to as “DIX Ethernet.”

PROBLEM 4
What was the name of the wireless network protocol on which Ethernet was based? Where was it developed?

ANSWER 4
The carrier-sense multiple access with collision detection (CSMA/CD) scheme that Ethernet uses was based on a packet radio protocol developed at the University of Hawaii. It was known as the “ALOHA protocol.”

Electrical Engineering Crossword (Issue 281)

The answers to Circuit Cellar’s December electronics engineering crossword puzzle are now available.


Across

4. VENNDIAGRAM—Represents many relation possibilities [two words]
5. PERSISTRON—Produces a persistent display
9. CODOMAIN—A set that includes attainable values
12. HOMOPOLAR—Electrically symmetrical
13. TRUTHTABLE—Determines a complicated statement’s validity [two words]
17. POWERCAPPING—Controls either the instant or the average power consumption [two words]
18. MAGNETRON—The first form, invented in 1920, was a split-anode type
19. MAGNETICFLUX—Φ
20. TURINGCOMPLETE—The Z3 functional program-controlled computer, for example [two words]

Down

1. CHAOSCOMPUTERCLUB—Well-known European hacker association [three words]
2. LOGICLEVEL—When binary, it is high and low [two words]
3. LINEARINTERPOLATION—A simple, but inaccurate, way to convert A/D values into engineering units [two words]
6. SYNCHRONOUSCIRCUIT—A clock signal ensures this device’s parts are in parallel [two words]
7. BOARDBRINGUP—Design validation process [three words]
8. HORNERSRULE—An algorithm for any polynomial order [two words]
10. MEALY—This machine’s current state and inputs dictate its output values
11. SQUAREWAVE—It is produced by a binary logic device [two words]
14. THEREMIN—Its electronic signals can be amplified and sent to a loudspeaker
15. ABAMPERE—10 A
16. SCOPEPROBE—Connects test equipment to a DUT [two words]

CC281: Overcome Fear of Ethernet on an FPGA

As its name suggests, the appeal of an FPGA is that it is fully programmable. Instead of writing software, you design hardware blocks to quickly do what’s required of a digital design. This also enables you to reprogram an FPGA product in the field to fix problems “on the fly.”

But what if “you” are an individual electronics DIYer rather than an industrial designer? DIYers can find FPGAs daunting.

The December issue of Circuit Cellar should offer reassurance, at least on the topic of “UDP Streaming on an FPGA.” That’s the focus of Steffen Mauch’s article for our Programmable Logic issue (p. 20).

Ethernet on an FPGA has several applications. For example, it can be used to stream measured signals to a computer for analysis or to connect a camera (via Camera Link) to an FPGA to transmit images to a computer.

Nonetheless, Mauch says, “most novices who start to develop FPGA solutions are afraid to use Ethernet or DDR-SDRAM on their boards because they fear the resulting complexity.” Also, DIYers don’t have the necessary IP core licenses, which are costly and often carry restrictions.

Mauch’s UDP monitor project avoids such costs and restrictions by using a free implementation of an Ethernet-streaming device based on a Xilinx Spartan-6 LX FPGA. His article explains how to use OpenCores’s open-source tri-mode MAC implementation and stream UDP packets with VHDL over Ethernet.

Mauch is not the only writer offering insights into FPGAs. For more advanced FPGA enthusiasts, columnist Colin O’Flynn discusses hardware co-simulation (HCS), which enables the software simulation of a design to be offloaded to an FPGA. This approach significantly shortens the time needed for adequate simulation of a new product and ensures that a design is actually working in hardware (p. 52).

This Circuit Cellar issue offers a number of interesting topics in addition to programmable logic. For example, you’ll find a comprehensive overview of the latest in memory technologies, advice on choosing a flash file system for your embedded Linux system, a comparison of amplifier classes, and much more.

Mary Wilson
editor@circuitcellar.com

DSP vs. RISC Processors (EE Tip #110)

There are a few fundamental differences between DSP and RISC processors. One difference has to do with arithmetic. In the analog domain, saturation, or clipping, isn’t recommended, but it comes with the territory when, for example, an op-amp is driven hard by a large input signal. In the digital domain, saturation should likewise be avoided because it distorts the signal being analyzed. But some saturation is better than overflow or wrap-around. Generally speaking, a RISC processor will not saturate, but a DSP will. This is an important feature if you want to do signal processing.

Let’s take a look at an example. Consider a 16-bit processor working with unsigned numbers. The minimum value that can be represented is 0 (0x0000), and the maximum is 65535 (0xFFFF). Compute:

out = 2 × x

where x is an input value (or an intermediate value in a series of calculations). With a generic processor, you’re in trouble when x is greater than 32767.

If x = 33000 (0x80E8), the result is out = 66000 (0x101D0). Because this value can’t be represented with 16 bits, the processor will truncate it:

out = 2 × 33000 = 464 (0x01D0)

From that point on, all the calculations will be off. On the other end, a DSP (or an arithmetic unit with saturation) will saturate the value to its maximum (or minimum) capability:

out = 2 × 33000 = 65535 (0xFFFF)

In the first case, looking at out, you might wrongly conclude that x is a small value. With saturation, out is still incorrect, but it at least shows that the input is a large number. Trends in the signal can still be tracked with saturation. If the saturation isn’t severe (affecting only a few samples), the signal might even be demodulated correctly.
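To see the two behaviors side by side, here is a minimal C sketch (added for illustration, assuming the 16-bit unsigned multiply-by-2 from the example above; the saturating version emulates in software what a DSP’s arithmetic unit does in hardware):

#include <stdint.h>
#include <stdio.h>

/* Wrap-around: the result is silently truncated to 16 bits. */
static uint16_t double_wrap(uint16_t x)
{
  return (uint16_t)(x * 2u);
}

/* Saturation: the result is clamped to the 16-bit maximum. */
static uint16_t double_sat(uint16_t x)
{
  uint32_t out = (uint32_t)x * 2u;
  return (out > 0xFFFFu) ? 0xFFFFu : (uint16_t)out;
}

int main(void)
{
  uint16_t x = 33000; /* 0x80E8 */
  printf("wrap: %u\n", double_wrap(x));    /* prints 464   (0x01D0) */
  printf("saturate: %u\n", double_sat(x)); /* prints 65535 (0xFFFF) */
  return 0;
}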

Generic RISC processors like the NXP (Philips) LPC2138 don’t have a saturation function, so it’s important to ensure that the input values or the size of the variable are scaled correctly to prevent overflow. This problem can be avoided with a thorough simulation process.—Circuit Cellar 190, Bernard Debbasch, “ARM-Based Modern Answering Machine,” 2006.
