Q&A: Andrew Godbehere, Imaginative Engineering

Engineers are inherently imaginative. I recently spoke with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley, about how his ideas become realities, his design process, and his dream project. —Nan Price, Associate Editor

Andrew Godbehere

NAN: You are currently working toward your Electrical Engineering PhD at the University of California, Berkeley. Can you describe any of the electronics projects you’ve worked on?

ANDREW: In my final project at Cornell University, I worked with a friend of mine, Nathan Ward, to make wearable wireless accelerometers and find some way to translate a dancer’s movement into music, in a project we called CUMotive. The computational core was an Atmel ATmega644V connected to an Atmel AT86RF230 802.15.4 wireless transceiver. We designed the PCBs, including the transmission line to feed the ceramic chip antenna. Everything was hand-soldered, though I recommend using an oven instead. We used Kionix KXP74 tri-axis accelerometers, which we encased in a lot of hot glue to create easy-to-handle boards and to shield them from static.

This is the central control belt-pack to be worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

The dancer had four accelerometers connected to a belt pack with an Atmel chip and transceiver. On the receiver side, a musical instrument digital interface (MIDI) communicated with a synthesizer. (Design details are available at http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2007/njw23_abg34/index.htm.)

I was excited about designing PCBs for 802.15.4 radios and making them work. I was also enthusiastic about trying to figure out how to make some sort of music with the product. We programmed several possibilities, one of which was a sort of theremin; another was a sort of drum kit. I found that this was the even more difficult part—not just the making, but the making sense.
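For readers who want to play with the same idea, here is a minimal Python sketch of the movement-to-MIDI mapping. It is not the CUMotive firmware (which ran in C on the ATmega644V); the sample readings, note range, and thresholds are arbitrary choices for illustration.

```python
import math

# Hypothetical raw readings from one tri-axis accelerometer (signed g-values).
# The real CUMotive firmware ran in C on an ATmega644V; this only sketches the mapping idea.
samples = [(0.1, -0.2, 1.0), (0.8, 0.5, 1.3), (2.4, -1.9, 0.7)]

def accel_to_midi(ax, ay, az, base_note=48, span=24):
    """Map acceleration magnitude to a MIDI note-on message (status, note, velocity)."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)   # overall movement energy
    excess = max(0.0, magnitude - 1.0)                   # subtract roughly 1 g of gravity
    note = base_note + min(span, int(excess * 8))        # harder moves -> higher notes
    velocity = min(127, int(40 + excess * 40))           # harder moves -> louder notes
    return (0x90, note, velocity)                        # 0x90 = note-on, channel 1

for ax, ay, az in samples:
    status, note, vel = accel_to_midi(ax, ay, az)
    print(f"MIDI note-on: note={note} velocity={vel}")
    # A real setup would write these three bytes to a serial MIDI output.
```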

When I got to Berkeley, my work switched to the theoretical. I tried to learn everything I could about robotic systems and how to make sense of them and their movements.

NAN: Describe the real-time machine vision-tracking algorithm and integrated vision system you developed for the “Are We There Yet?” installation.

ANDREW: I’ve always been interested in using electronics and robotics for art. Having a designated emphasis in New Media on my degree, I was fortunate enough to be invited to help a professor on a fascinating project.

This view of the Yud Gallery is from the installed camera with three visitors present. Note the specular reflections on the floor. They moved throughout the day with the sun. This movement needed to be discerned from a visitor’s typical movement.

For the “Are We There Yet?” installation, we used a PointGrey FireFlyMV camera with a wide-angle lens. The camera was situated a couple hundred feet away from the control computer, so we used a USB-to-Ethernet range extender to communicate with the camera.

We installed a color camera in a gallery in the Contemporary Jewish Museum in San Francisco, CA. We used Meyer Sound speakers with a high-end controller system, which enabled us to “position” sound in the space and to sweep audio tracks around at (the computer’s programmed) will. The Meyer Sound D-Mitri platform was controlled by the computer with Open Sound Control (OSC).
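For context, Open Sound Control messages are simple address-plus-arguments packets, typically sent over UDP. The snippet below is a rough sketch of what sweeping a sound source from the control computer might look like, assuming the python-osc package; the IP address, port, and OSC address pattern are placeholders, since the actual D-Mitri address space isn’t described here.

```python
# Minimal sketch of sending OSC messages from a control computer, assuming the
# python-osc package is installed. The address pattern and arguments below are
# placeholders; the actual D-Mitri address space is not described in the article.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)   # hypothetical controller IP and port

# Sweep a sound object across the room by updating its x/y position a few times.
for step in range(10):
    x = step / 9.0          # 0.0 .. 1.0 across the gallery
    y = 0.5                 # stay centered on the other axis
    client.send_message("/spacemap/object/1/xy", [x, y])  # placeholder address
```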

This view of the Yud Gallery is from the perspective of the computer running the analysis. This is a probabilistic view, where the brightness of each pixel represents the “belief” that the pixel is part of an interesting foreground object, such as a pedestrian. Note the hot spots corresponding nicely with the locations of the visitors in the image above.

The hard work was then to program the computer to discern humans from floors, furniture, shadows, sunbeams, and cloud reflections. The gallery had many skylights, which made the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful to determine whether detected color-perturbance was human or not.

Once complete, the experience of the installation was beautiful, enchanting, and maybe a little spooky. The audio tracks were all questions (e.g., “Are we there yet?”) and they were always spoken near you, as if addressed to you. They responded to your movement in a way that felt to me like dancing with a ghost. You can watch videos about the installation at www.are-we-there-yet.org.

The “Are We There Yet?” project opens itself up to possible use as an embedded system. I’ve been told that the software I wrote works on iOS devices by the start-up company Romo (www.kickstarter.com/projects/peterseid/romo-the-smartphone-robot-for-everyone), which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud-tracking system using a Raspberry Pi and a reasonable webcam.
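As a starting point for such a Raspberry Pi experiment, the sketch below uses OpenCV’s stock MOG2 background subtractor to flag and box moving foreground objects from a webcam. It is a generic stand-in for illustration only, not the algorithm used in the installation, and it assumes the opencv-python package and a camera at index 0.

```python
# Rough stand-in for the kind of foreground "belief" tracking described above,
# using OpenCV's stock MOG2 background subtractor. Assumes opencv-python and a
# webcam at index 0.
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # brighter = more likely foreground
    mask = cv2.medianBlur(mask, 5)                       # knock down single-pixel noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 1500:                    # ignore small flickers and shadows
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```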

I may create an automatic cloud-tracking system. I think computers could be capable of this capacity for abstraction, even though we think of the leisurely pastime of watching clouds as the mark of a dreamer.

NAN: Some of the projects you’ve contributed to focus on switched linear systems, hybrid systems, wearable interfaces, and computation and control. Tell us about the projects and your research process.

ANDREW: I think my research is all driven by imagination. I try to imagine a world that could be, a world that I think would be nice, or better, or important. Once I have an idea that captivates my imagination in this way, I have no choice but to try to realize the idea and to seek out the knowledge necessary to do so.

For the wearable wireless accelerometers, it began with the thought: Wouldn’t it be cool if dance and music were inherently connected the way we try to make it seem when we’re dancing? From that thought, the designs started. I thought: The project has to be wireless and low power, it needs accelerometers to measure movement, it needs a reasonable processor to handle the data, it needs MIDI output, and so forth.

My switched linear systems research came about in a different way. As I was in class learning about theories regarding stabilization of hybrid systems, I thought: Why would we do it this complicated way, when I have this reasonably simple intuition that seems to solve the problem? I happened to see the problem a different way as my intuition was trying to grapple with a new concept. That naive accident ended up as a publication, “Stabilization of Planar Switched Linear Systems Using Polar Coordinates,” which I presented in 2010 at Hybrid Systems: Computation and Control (HSCC) in Stockholm, Sweden.

NAN: How did you become interested in electronics?

ANDREW: I always thought things that moved seemingly of their own volition were cool and inherently attention-grabbing. I would think: Did it really just do that? How is that possible?

Andrew worked on this project when computers still had parallel ports. a—This photo shows manually etched PCB traces for a digital EKG (the attempted EEG) with 8-bit LED optoisolation. The rainbow cable connects to a computer’s parallel port. The interface code was written in C++ and ran on DOS. b—The EKG circuitry and digitizer are shown on the left. The 8-bit parallel computer interface is on the right. Connecting the two boards is an array of coupled LEDs and phototransistors, encased in heat shrink tubing to shield against outside light.

Electric rally-car tracks and radio-controlled cars were a favorite of mine. I hadn’t really thought about working with electronics or computers until middle school. Before that, I was all about paleontology. Then, I saw an episode of Scientific American Frontiers, which featured Alan Alda excitedly interviewing RoboCup contestants. Watching RoboCup [a soccer game involving robotic players], I was absolutely enchanted.

While my childhood electronic toys moved and somehow acted as their own entities, they were puppets to my intentions. Watching RoboCup, I knew these robots were somehow making their own decisions on-the-fly, magically making beautiful passes and goals not as puppets, but as something more majestic. I didn’t know about the technical blood, sweat, and tears that went into it all, so I could have these romantic fantasies of what it was, but I was hooked from that moment.

That spurred me to apply to a specialized science and engineering high school program. It was there that I was fortunate enough to attend a fabulous electronics class (taught by David Peins), where I learned the basics of electronics, the joy of tinkering, and even PCB design and assembly (drilling included). I loved everything involved. Even before I became academically invested in the field, I fell in love with the manual craft of making a circuit.

NAN: Tell us about your first design.

ANDREW: Once I’d learned something about designing and making circuits, I jumped in whole-hog, to a comical degree. My very first project without any course direction was an electroencephalograph!

I wanted to make stuff move on my computer with my brain, the obvious first step. I started with a rough design and worked on tweaking parameters and finding components.

In retrospect, I think that first attempt was actually an electromyograph that read the movements of my eye muscles. And it definitely was an electrocardiograph. Success!

Someone suggested that it might not be a good idea to have a power supply hooked up in any reasonably direct path with your brain. So, in my second attempt, I tried something new: I digitized the signal on the brain side and hooked it up to eight white LEDs. On the other side, I had eight phototransistors coupled with the LEDs and covered with heat-shrink tubing to keep out outside light. That part worked, and I was excited about it, even though I was having some trouble properly tuning the op-amps in that version.

NAN: Describe your “dream project.”

ANDREW: Augmented reality goggles. I’m dead serious about that, too. If given enough time and money, I would start making them.

I would use some emerging organic light-emitting diode (OLED) technology. I’m eyeing the start-up MicroOLED (www.microoled.net) for its low-power “near-to-eye” display technologies. They aren’t available yet, but I’m hopeful they will be soon. I’d probably hook that up to a Raspberry Pi SBC, which is small enough to be worn reasonably comfortably.

Small, high-resolution cameras have proliferated with modern cell phones, which could easily be mounted into the sides of goggles, driving each OLED display independently. Then, it’s just a matter of creativity for how to use your newfound vision! The OpenCV computer vision library offers a great starting point for applications such as face detection, image segmentation, and tracking.

Google Glass is starting to get some notice as a sort of “heads-up” display, but in my opinion, it doesn’t go nearly far enough. Here’s the craziest part—please bear with me—I’m willing to give up directly viewing the world with my natural eyes. I would be willing to wear full field-of-vision goggles with high-resolution OLED displays showing stereoscopic views from two high-resolution smartphone-style cameras. (At least until the technology gets better, as described in Rainbows End by Vernor Vinge.) I think, for this version, all the components are just now becoming available.

Augmented reality goggles would do a number of things for vision and human-computer interaction (HCI). First, 3-D overlays in the real world would be possible.

Crude example: I’m really terrible with faces and names, but computers are now great with that, so why not get a little help and overlay nametags on people when I want? Another fascinating thing for me is that this concept of vision abstracts the body from the eyes. So, you could theoretically connect to the feed from any stereoscopic cameras around (e.g., on an airplane, in the Grand Canyon, or on the back of some wild animal), or you could even switch points of view with your friend!
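As a toy illustration of the nametag idea, the sketch below runs OpenCV’s bundled Haar cascade face detector on a webcam feed and draws a placeholder label over each detected face. Real goggles would need recognition (matching a face to a name), not just detection; the label string here is a stand-in.

```python
# Toy illustration of the "nametag overlay" idea using OpenCV's bundled Haar
# cascade for face detection. Detection only finds faces; the displayed "name"
# is a placeholder string, not a recognition result.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, "name goes here", (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
    cv2.imshow("overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```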

Perhaps reality goggles are not commercially viable now, but I would unabashedly use them for myself. I dream about them, so why not make them?

Member Profile: Walter O. Krawec

Walter O. Krawec

Upstate New York

Research Assistant and PhD Student, Stevens Institute of Technology

Walter has been reading Circuit Cellar since he got his first issue in 1999. Free copies were available at the Trinity College Fire Fighting Robot Contest, which was his first experience with robotics. Circuit Cellar was the first magazine for which he wrote an article (“An HC11 File Manager,” two-part series, issues 129 and 130, 2001).

Walter’s interests include robotics, among other things. He is particularly interested in developmental and evolutionary robotics (where the robot’s strategies, controllers, and so forth are evolved instead of programmed directly).

Walter is enjoying his Raspberry Pi. “What a remarkable product! I think it’s great that I can take my AI software, which I’ve been writing on a PC, copy it to the Raspberry Pi, compile it with GCC, then off it goes with little or no modification!”

Walter is designing a new programming language and interpreter (for Windows/Mac/Linux, including the Raspberry Pi) that uses a simulated quantum computer to drive a robot. “What better way to learn the basics of quantum computing than by building a robot around one?” The first version of this language is available on his website (walterkrawec.org). He has plans to release an improved version.

Walter said he is amazed with the power of the latest embedded technology, for example the Raspberry Pi. “For less than $40 you have a perfect controller for a robot that can handle incredibly complex programs. Slap on one of those USB battery packs and you have a fully mobile robot,” he said. He used a Pololu Maestro to interface the motors and analog sensors. “It all works and it does everything I need.” However, he added, “If you want to build any of this yourself by hand it can be much harder, especially since most of the cool stuff is surface mount, making it difficult to get started.”

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Leveraging the common I2C protocol, adding chains of these sensors would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
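As a sketch of how simple that sensor chain can be in code, the snippet below polls one I2C device from a Raspberry Pi using the smbus2 package. The device address and register are placeholders to be replaced with values from the sensor’s datasheet.

```python
# Minimal sketch of polling one sensor on the Raspberry Pi's I2C bus with the
# smbus2 package. The address and register below are placeholders; substitute
# the values from your sensor's datasheet.
from smbus2 import SMBus
import time

SENSOR_ADDR = 0x48   # placeholder 7-bit I2C address
DATA_REG = 0x00      # placeholder register holding the reading

with SMBus(1) as bus:                      # bus 1 is the header I2C port on most Pi models
    for _ in range(5):
        raw = bus.read_word_data(SENSOR_ADDR, DATA_REG)
        print(f"raw sensor value: {raw}")
        time.sleep(1)
```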

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
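A minimal sketch of that layering, using Python’s Flask framework, might look like the following. The drive() function is a stub standing in for the real serial/I2C motor calls (and any PID or odometry wrapping); each route is the kind of endpoint an AJAX call from the browser would hit.

```python
# Sketch of the web-to-motor layering described above, using Flask. The drive()
# function is a stub standing in for real serial/I2C calls to a motor controller;
# an AJAX request to /forward, /left, or /right triggers one movement.
from flask import Flask, jsonify

app = Flask(__name__)

def drive(left_speed, right_speed, duration_s):
    # Placeholder: a real rover would send these values to its motor driver
    # and could wrap PID/odometry control here.
    print(f"motors: left={left_speed} right={right_speed} for {duration_s}s")

@app.route("/forward")
def forward():
    drive(0.5, 0.5, 1.0)
    return jsonify(status="ok", action="forward")

@app.route("/left")
def left():
    drive(-0.3, 0.3, 0.5)
    return jsonify(status="ok", action="left")

@app.route("/right")
def right():
    drive(0.3, -0.3, 0.5)
    return jsonify(status="ok", action="right")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # reachable from any browser on the network
```

Pointing a browser (or an XMLHttpRequest) at http://robot-address:8080/forward would then trigger one canned movement.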

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize the user interface. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve, and as the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads up to 1 A each at up to 24 V.
The RIO includes a 32-bit ARM Cortex-M4 microcontroller that processes and buffers the I/O and creates seamless communication with the Raspberry Pi. The RIO processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with an RS-232 transceiver, and a CAN bus interface.
The RIO is available in two versions. The RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.

Electrical Engineering and Artistic Expression

I think we’re on the verge of the next artistic renaissance. This time, instead of magnificent architecture, beautifully painted portraits, and the rise of humanism, I think engineering (specifically electrical engineering) will begin to define exciting new forms of artistic expression.

Cornell University graduate and electrical engineer Jeremy Blum, in a 2011 blog post

Regular Circuit Cellar readers will recognize Jeremy Blum as our November issue interview subject. Blum’s post sums up a philosophy that seems to be shared by some other recent EE graduates or aspiring electrical engineers. They view their work as art, or at least they like to occasionally work in art.

For example, Circuit Cellar’s January issue will feature an interview with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley. He has intertwined engineering and art more than once.

This is the central control belt pack worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

When he was a Cornell student, he collaborated with Nathan Ward on a final project to translate a dancer’s movement into music. They created a central control belt pack for the dancer, which connected to four wearable wireless accelerometers to measure the dancer’s movements. Inside the belt pack, an Atmel ATmega644V connected to an AT86RF230 wireless transceiver interfaced with a musical instrument digital interface (MIDI) and synthesizer.

When Godbehere graduated from Cornell and headed to UC Berkeley, his focus shifted to theoretical topics and robotic systems. But he jumped at a professor’s invitation to become involved in the “Are We There Yet?” art installation in 2011 at the Contemporary Jewish Museum in San Francisco.

During the four-month exhibit, visitors entered a nearly empty gallery to encounter recorded questions emanating from numerous floor speakers. A camera followed each visitor’s moves and robotic algorithms enabled it to determine which floor speaker to activate. The questions heard could range from “What Is My Purpose?” to “What’s Up Doc?”

How a visitor moved through the interactive installation triggered the combination of questions he or she heard.

Video documentary of “Are We There Yet?” 

Godbehere was the computer vision system engineer working with artists Gil Gershoni and Ken Goldberg, who is also a robotics and new media professor at UC Berkeley.

“We installed a color camera in a beautiful gallery in the Contemporary Jewish Museum… and a set of speakers with a high-end controller system from Meyer Sound that enabled us to ‘position’ sound in the space and to sweep audio tracks around at (the computer’s programmed) will,” Godbehere says. “The Meyer Sound System is the D-Mitri control system, controlled by the computer with Open Sound Control (OSC).

“The hard work was then to program the computer to discern humans from floors, furniture, shadows, sunbeams, and reflections of clouds. The gallery had many skylights, making the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful in determining if detected color-perturbance was human or not.”

Behind the technology of “Are We There Yet?”

Can such art also have “practical” consumer applications? Godbehere says there are elements that can be used as an embedded system.

“I’ve been told that the software I wrote works on iOS devices by the startup company Romo, which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud tracking system using a Raspberry Pi and a reasonable webcam.”

If you’re interested in learning more about Godbehere’s engineering and artistic work, be sure to check out the January issue of Circuit Cellar.

And if you have an opinion on electrical engineering and art, please post your comments below.

MIT’s Self-Assembling Robots

Calling it a low-tech solution to a high-tech challenge, MIT researchers have received a lot of attention recently for their modular system of self-assembling robot cubes. The video of the so-called M-Blocks in action, which MIT posted earlier this month on YouTube, has also become high profile. A recent tally has the video at nearly 1.5 million views and counting.


The text accompanying the video explains how the cubes are able to move around and climb over each other, jump into the air, and roll across surfaces as they connect in a variety of configurations. And they do all this without any external moving parts. Instead, each M-Block contains a flywheel that can reach speeds of 20,000 rpm. When the flywheel brakes, it imparts angular momentum to the cube. Precisely placed magnets on every face and edge of each M-Block enable any two cubes to attach to each other.
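To get a feel for the numbers, here is a back-of-the-envelope calculation of the angular momentum such a flywheel stores. Only the 20,000-rpm figure comes from the article; the flywheel mass and radius below are assumed values chosen purely for illustration.

```python
# Back-of-the-envelope look at the flywheel trick, with assumed numbers (only
# the 20,000-rpm figure is from the article). Treating the flywheel as a solid
# disk, the stored angular momentum L = I * omega is what braking transfers to
# the cube.
import math

rpm = 20_000                      # from the article
omega = rpm * 2 * math.pi / 60    # rad/s

mass = 0.05                       # kg, assumed flywheel mass
radius = 0.02                     # m, assumed flywheel radius
inertia = 0.5 * mass * radius**2  # solid-disk moment of inertia, kg*m^2

angular_momentum = inertia * omega
print(f"omega = {omega:.0f} rad/s")
print(f"I = {inertia:.2e} kg*m^2")
print(f"L = {angular_momentum:.3f} kg*m^2/s transferred when the flywheel brakes")
```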

The simple design holds short- and long-term promise. According to an October 4 article by Larry Hardesty of the MIT News Office, it is hoped that the blocks can be miniaturized someday, perhaps into swarming microbots that can self-assemble with a purpose. Even at their current size, further development of the M-Blocks might lead to “armies of mobile cubes” that can help repair bridges and buildings in emergencies, raise scaffolding, reconfigure into heavy equipment or furniture as needed, or head into environments hostile to humans to diagnose and repair problems, the article suggests.

While it may not rise to “cooperative group behavior,”  the ability of one cube to drag another and influence its alignment is impressive. What could 100 or more of these robots accomplish as MIT researchers continue to develop algorithms to control them?

A prototype of the new modular robot, with its interior and flywheel exposed. (Photo: M. Scott Brauer)

Q&A: Jeremy Blum, Electrical Engineer, Entrepreneur, Author

Jeremy Blum

Jeremy Blum, 23, has always been a self-proclaimed tinkerer. From Legos to 3-D printers, he has enjoyed learning about engineering both in and out of the classroom. A recent Cornell University College of Engineering graduate, Jeremy has written a book, started his own company, and traveled far to teach children about engineering and sustainable design. Jeremy, who lives in San Francisco, CA, is now working on Google’s Project Glass.—Nan Price, Associate Editor

NAN: When did you start working with electronics?

JEREMY: I’ve been tinkering, in some form or another, ever since I figured out how to use my opposable thumbs. Admittedly, it wasn’t electronics from the outset. As with most engineers, I started with Legos. I quickly progressed to woodworking and I constructed several pieces of furniture over the course of a few years. It was only around the start of my high school career that I realized the extent to which I could express my creativity with electronics and software. I thrust myself into the (expensive) hobby of computer building and even built an online community around it. I financed my hobby through my two companies, which offered computer repair services and video production services. After working exclusively with computer hardware for a few years, I began to dive deeper into analog circuits, robotics, microcontrollers, and more.

NAN: Tell us about some of your early, pre-college projects.

JEREMY: My most complex early project was the novel prosthetic hand I developed in high school. The project was a finalist in the prestigious Intel Science Talent Search. I also did a variety of robotics and custom-computer builds. The summer before starting college, my friends and I built a robot capable of playing “Guitar Hero” with nearly 100% accuracy. That was my first foray into circuit board design and parallel programming. My most ridiculous computer project was a mineral oil-cooled computer. We submerged an entire computer in a fish tank filled with mineral oil (it was actually a lot of baby oil, but they are basically the same thing).

DeepNote Guitar Hero Robot

Mineral Oil-Cooled Computer

NAN: You’re a recent Cornell University College of Engineering graduate. While you were there, you co-founded Cornell’s PopShop. Tell us about the workspace. Can you describe some PopShop projects?

Cornell University’s PopShop

JEREMY: I recently received my Master’s degree in Electrical and Computer Engineering from Cornell University, where I previously received my BS in the same field. During my time at Cornell, my peers and I took it upon ourselves to completely retool the entrepreneurial climate at Cornell. The PopShop, a co-working space that we formed a few steps off Cornell’s main campus, was our primary means of doing this. We wanted to create a collaborative space where students could come to explore their own ideas, learn what other entrepreneurial students were working on, and get involved themselves.

The PopShop is open to all Cornell students. I frequently hosted events there designed to get more students inspired about pursuing their own ideas. Common occurrences included peer office hours, hack-a-thons, speed networking sessions, 3-D printing workshops, and guest talks from seasoned venture capitalists.

Student startups that work (or have worked) out of the PopShop co-working space include clothing companies, financing companies, hardware startups, and more. Some specific companies include Rosie, SPLAT, LibeTech (mine), SUNN (also mine), Bora Wear, Yorango, Party Headphones, and CoVenture.

NAN: Give us a little background information about Cornell University Sustainable Design (CUSD). Why did you start the group? What types of CUSD projects were you involved with?

JEREMY: When I first arrived at Cornell my freshman year, I knew right away that I wanted to join a research lab, and that I wanted to join a project team (knowing that I learn best in hands-on environments instead of in the classroom). I joined the Cornell Solar Decathlon Team, a very large group of mostly engineers and architects who were building a solar-powered home to enter in the biennial Solar Decathlon competition orchestrated by the Department of Energy.

By the end of my freshman year, I was the youngest team leader in the organization.  After competing in the 2009 decathlon, I took over as chief director of the team and worked with my peers to re-form the organization into Cornell University Sustainable Design (CUSD), with the goal of building a more interdisciplinary team, with far-reaching impacts.


Under my leadership, CUSD built a passive schoolhouse in South Africa (which has received numerous international awards), constructed a sustainable community in Nicaragua, has been the only student group tasked with consulting on sustainable design constraints for Cornell’s new Tech Campus in New York City, partnered with nonprofits to build affordable homes in upstate New York, has taught workshops in museums and schools, contributed to the design of new sustainable buildings on Cornell’s Ithaca campus, and led a cross-country bus tour to teach engineering and sustainability concepts at K–12 schools across America. The group is now composed of students from more than 25 different majors with dozens of advisors and several simultaneous projects. The new team leaders are making it better every day. My current startup, SUNN, spun out of an EPA grant that CUSD won.

NAN: You spent two years working at MakerBot Industries, where you designed electronics for a 3-D printer and a 3-D scanner. Any highlights from working on those projects?

JEREMY: I had a tremendous opportunity to learn and grow while at MakerBot. When I joined, I was one of about two dozen total employees. Though I switched back and forth between consulting and full-time/part-time roles while class was in session, by the time I stopped working with MakerBot (in January 2013), the company had grown to more than 200 people. It was very exciting to be a part of that.

I designed all of the electronics for the original MakerBot Replicator. This constituted a complete redesign from the previous electronics that had been used on the second generation MakerBot 3-D printer. The knowledge I gained from doing this (e.g., PCB design, part sourcing, DFM, etc.) drastically outweighed much of what I had learned in school up to that point. I can’t say much about the 3-D scanner (the MakerBot Digitizer), as it has been announced, but not released (yet).

The last project I worked on before leaving MakerBot was designing the first working prototype of the Digitizer electronics and firmware. These components comprised the demo that was unveiled at SXSW this past April. This was a great opportunity to apply lessons learned from working on the Replicator electronics and find ways in which my personal design process and testing techniques could be improved. I frequently use my MakerBot printers to produce custom mechanical enclosures that complement the open-source electronics projects I’ve released.

NAN: Tell us about your company, Blum Idea Labs. What types of projects are you working on?

JEREMY: Blum Idea Labs is the entity I use to brand all my content and consulting services. I primarily use it as an outlet to facilitate working with educational organizations. For example, the St. Louis Hacker Scouts, the African TAHMO Sensor Workshop, and several other international organizations use a “Blum Idea Labs Arduino curriculum.” Most of my open-source projects, including my tutorials, are licensed via Blum Idea Labs. You can find all of them on my blog (www.jeremyblum.com/blog). I occasionally offer private design consulting through Blum Idea Labs, though I obviously can’t discuss work I do for clients.

NAN: Tell us about the blog you write for element14.

JEREMY: I generally use my personal blog to write about projects that I’ve been working on. However, when I want to talk about more general engineering topics (e.g., sustainability, engineering education, etc.), I post them on my element14 blog. I have a great working relationship with element14. It has sponsored the production of all my Arduino Tutorials and also provided complete parts kits for my book. We cross-promote each other’s content in a mutually beneficial fashion that also ensures that the community gets better access to useful engineering content.

NAN: You recently wrote Exploring Arduino: Tools and Techniques for Engineering Wizardry. Do you consider this book introductory or is it written for the more experienced engineer?

JEREMY: As with all the video and written content that I produce on my website and on YouTube, I tried really hard to make this book useful and accessible to both engineering veterans and newbies. The book builds on itself and provides tons of optional excerpts that dive into greater technical detail for those who truly want to grasp the physics and programming concepts behind what I teach in the book. I’ve already had readers ranging from teenagers to senior citizens comment on the applicability of the book to their varying degrees of expertise. The Amazon reviews tell a similar story. I supplemented the book with a lot of free digital content including videos, part descriptions, and open-source code on the book website.

NAN: What can readers expect to learn from the book?

JEREMY: I wrote the book to serve as an engineering introduction and as an idea toolbox for those wanting to dive into concepts in electrical engineering, computer science, and human-computer interaction design. Though Exploring Arduino uses the Arduino as a platform to experiment with these concepts, readers can expect to come away from the book with new skills that can be applied to a variety of platforms, projects, and ideas. This is not a recipe book. The projects readers will undertake throughout the book are designed to teach important concepts in addition to traditional programming syntax and engineering theories.

NAN: I see you’ve spent some time introducing engineering concepts to children and teaching them about sustainable engineering and renewable energy. Tell us about those experiences. Any highlights?

JEREMY: The way I see it, there are two ways in which engineers can make the world a better place: they can design new products and technologies that solve global problems or they can teach others the skills they need to assist in the development of solutions to global problems. I try hard to do both, though the latter enables me to have a greater impact, because I am able to multiply my impact by the number of students I teach. I’ve taught workshops, written curriculums, produced videos, written books, and corresponded directly with thousands of students all around the world with the goal of transferring sufficient knowledge for these students to go out and make a difference.

Here are some highlights from my teaching work:


I taught BlueStamp Engineering, a summer program for high school students in NYC in the summer of 2012. I also guest-lectured at the program in 2011 and 2013.

I co-organized a cross-country bus tour where we taught sustainability concepts to school children across the country.

I was invited to speak at Techkriti 2013 in Kanpur, India. I had the opportunity to meet many students from IIT Kanpur who already followed my videos and used my tutorials to build their own projects.

Blum Idea Labs partnered with the St. Louis Hacker Scouts to construct a curriculum for teaching electronics to the students. Though I wasn’t there in person, I did welcome them all to the program with a personalized video.

Through CUSD, I organized multiple visits to the Brooklyn Children’s Zone, where my team and I taught students about sustainable architecture and engineering.

Again with CUSD, we visited the Intrepid museum to teach sustainable energy concepts using potato batteries.


NAN: Speaking of promoting engineering to children, what types of technologies do you think will be important in the near future?

JEREMY: I think technologies that make invention more widely accessible are going to be extremely important in the coming years. Cheaper tools, prototyping platforms such as the Arduino and the Raspberry Pi, 3-D printers, laser cutters, and open developer platforms (e.g., Android) are making it easier than ever for any person to become an inventor or an engineer.  Every year, I see younger and younger students learning to use these technologies, which makes me very optimistic about the things we’ll be able to do as a society.

3-D Printed Robotics Innovation: A Low-Cost Solution for Prosthetic Hands

UK-based inventor and roboticist Joel Gibbard used a 3-D printer to design and build a prosthetic robotic hand. He founded the Open Hand Project with the goal of making the prosthetic hands available for amputees.


 NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

 JOEL: I was born in Bristol, UK, and grew up in that area. Bristol is a fantastic place for robotics in the UK, so I couldn’t have had a better place to start from. There’s a lot to engage children here, like the highly popular @Bristol science museum. I studied for a degree in Robotics at the University of Plymouth, which encourages a very practical approach to engineering. Right from the first year we were working with electronics, robotics, and writing code.

 NAN: When did you first start working with robotics?

JOEL: The first robots I ever made used the Lego MINDSTORMS NXT robotics kits. I was very lucky because these were just starting to come out when I was about 6 or 7 years old. I think from ages three to 15 every single birthday or Christmas present was a new Lego set. To this day, I still think Lego is the best tool for rapid prototyping in the early stages of an idea.

 NAN: Tell us about your first design/some of your early projects. Do you have any photos or diagrams?

JOEL: The earliest project I remember working on with my father was a full-scale model of the space shuttle complete with robotic arm and fully motorized launch pad. When on the launch pad, it was almost my height. I think my father took having kids as an opportunity to get back into making things. We also made a Saturn V rocket, the Sydney Harbour Bridge, and Concorde. One of my first robots was a Lego Technic creation. It had tracks, a double-barreled gun on one arm, a pincer on the other, and a submarine on the back, just in case. I think I was about eight years old when I made it.

 NAN: You originally developed the Dextrus robotic hand while you were at the University of Plymouth. Why did you design the system? How has its development progressed since the original concept?


Joel keeps an ongoing design sketchbook.

JOEL: I have a sketchbook of around 10 to 20 inventions that are options for the next thing I want to make. This grows faster than it shrinks. One day I was thinking about what to make next and the thought occurred to me that if I were to lose my hand, I wouldn’t be able to make anything. So it made the most sense to design a hand to have just in case. Once I have that, heaven forbid I ever need it, I could use it to make a better hand, and so forth, until I have a robot hand that is as good as a human hand. It sounds ridiculous, but that was enough motivation for me to make the first one.


This is an early version of the Dextrus hand.

After posting the project on YouTube, I received comments from people asking to have the designs to make their own, which wasn’t really possible, since it was such a one-off prototype. But I thought it was a good idea. Why not make an open-source hand? After that, I looked more into prostheses and discovered that this is really necessary and people want it.

 NAN: The Dextrus incorporates 3-D printed parts. How does the 3-D printing factor in your design? Does it make each hand customizable?

 JOEL: 3-D printing is essential to the design. Many of the parts have cavities inside them, which wouldn’t be possible to make using injection molding. One would have to make the parts in two halves then glue them together, which creates weak points. With 3-D printing, each part is one solid piece with cavities for the tendons to slide through.

Customization is a great area to explore in the future. It’s quite easy to modify things like the length and shape of the fingers while maintaining the functionality of the hand. In the not-too-distant future, I could envisage an amputee 3-D scanning their remaining hand and sending the scan to me. I could then reverse it and match their Dextrus hand (approximately) to the dimensions of their other hand.


The 3-D printed Dextrus hand.

 NAN: There are three types of Dextrus robotic hands: The Dextrus, the Dextrus EMG, and the Dextrus Research. Can you describe the differences?

 JOEL: They have the same basic design and components. The Dextrus and Dextrus EMG are exactly the same, but the EMG comes with all of the extras that enable someone to use it as a myoelectric prosthesis. The Dextrus Research has a number of differences that result in a more robust (but more expensive and heavier) hand. It has steel ball bearings instead of nylon bushes and is printed with denser plastic. It also comes with everything you need to use it straight out of the box (e.g., a power supply).

 NAN: You founded the Open Hand Project as a result of your work on the Dextrus robotic hand. Describe the project and its purpose.

 JOEL: The aim of the Open Hand Project is to make advanced prosthetic hands more accessible to amputees. It has the potential to revolutionize the prosthetics industry by trivializing the cost of prosthetics (to insurance companies). I also hope that it will help to advance prosthetic hands. If the hardware is much less expensive, we can start to focus on the human robot interface. At the moment, it uses electromyographical signals, which sound advanced but are actually 50-year-old technology and don’t give complex functionality like individual finger movement. If the hardware is inexpensive, then money can instead be spent on operations to tap into the nervous system and then the hand can literally be a direct replacement for the human hand. You’ll think about moving your hand and the robotic hand will do exactly what you’re thinking. If done correctly, you’ll also be able to feel with it. We’re talking Luke Skywalker Star Wars tech. It exists now, but is not yet fully tested and proven.

 NAN: Prior to venturing out on your own, you were an Applications Engineer at National Instruments (NI). Although you are no longer working for the company, it is backing the Open Hand Project by providing test and measurement equipment. How did NI become involved in the project?

 JOEL: National Instruments has been great since I’ve left the company. I explained what I wanted to do, and it was fully supportive. To get the equipment, all I had to do was ask! It really does live up to its reputation of being one of the best places to work. I hope that I’ll be able to repay them with business in the future. If I’m successful, then I’ll be able to buy equipment for future projects.

 NAN: Why did you decide to use crowdfunding for this project?

 JOEL: I wanted to keep everything open source for this project. Investors don’t want to fund an open-source project. You have no leverage to make money and your ideas will be taken and used by other people (which is encouraged). For this reason, only people who are genuinely interested in the vision of the project will want to invest, and that’s just not something that will make a company money. Crowdfunding is perfect, because people appreciate how this can help people and they’re willing to contribute to that.

I believe that everyone should have access to public health care and that your level of care should not be dependent on the size of your wallet. Making prosthetics open source will be a step in the right direction, but this model does not have to be limited to prosthetics. Take the drug industry, for example. Drug companies work off patents; they have to patent their drugs to make back the millions of dollars they spend developing them, and they end up charging $1,000 for a pill that costs $0.01 to make in order to cover all of their costs. If the research were publicly funded and open source, innovation in this industry would be dramatically accelerated. Once drugs were developed, they could be sold more cheaply; if the sale of the drugs were government regulated, the price could be controlled and the money could go back into funding more developments.

 NAN: What’s next for the Dextrus?

JOEL: There are a few directions I’d like this project to go in. First and foremost is the development of low-cost robotic prostheses for adults. After this, I’d like to look into partial amputations and finger prostheses. I’d also like to try to miniaturize the hand so that children can use it as well. Before any of this can happen, I’ll need to reach my crowdfunding goal on Indiegogo!


CC279: Working with RobotBASIC

In Circuit Cellar’s October issue, columnist Jeff Bachiochi introduces readers to RobotBASIC, a free robot control programming language that you can use to control real or simulated robots, and provides a detailed explanation of how to use it.

Photo 1: This army of robots all use the RobotBASIC Robot Operating System (RROS). Note the large robot has an arm located just above the wheels that is controlled by a second RROS. It uses an on-board laptop running a RobotBASIC (RB) application. The small robots are all controlled via a Bluetooth link from an external PC running an RB application.

“About five years ago, John Blankenship and Samuel Mishal coauthored Robot Programmer’s Bonanza, a book explaining the freely available RobotBASIC IDE they offer. RobotBASIC (RB) is a powerful language that enables you to use standard BASIC syntax (or a modified C-style syntax, i.e., ++, +=, !=, and &&) to quickly write a program to control and simulate a robot with many types of sensors,” Bachiochi says. “This is a great tool to teach programming.”

RB, with more than 800 commands and functions, can also be a tool for non-robotic applications such as tackling tough engineering problems or creating animated simulations, Bachiochi says.

It’s likely that anyone who starts out simulating with RobotBASIC will eventually want to control real robot hardware.

“There is no need to worry,” Bachiochi says. “RB was written to make use of a PC’s I/O. The parallel port is a good source for digital I/O and the serial port is well suited for external communication. The same commands used for robot movement in the simulator can alternatively be sent to a serial port, establishing a sort of serial robot command protocol. But tethered robots aren’t so cool, and many robots are too small to tote around a PC as their ‘great and powerful Oz.’

“Luckily, much has changed since RB’s original concepts were put into practice,” he adds. “We all know what has happened to these PC ports. They’ve fallen under the USB’s mighty power. RB doesn’t care whether it is talking with a serial port or a USB virtual serial port. USB offers inexpensive Bluetooth dongles and can create wireless serial communication to external devices.”

Bachiochi also discusses the RobotBASIC Robot Operating System (RROS), created to support RB’s serial robot command protocol. The module is available from RB’s website.

“The RROS is a preprogrammed module that can receive communication from RB, interpret commands, and directly interface to hardware,” Bachiochi says. “The module is a Pololu Baby Orangutan robot controller, consisting of an Atmel ATmega328P microcontroller and a Pololu TB6612 dual motor driver carrier in a DIP24 form factor. You can use the module (which comes preprogrammed with the RROS) to build robots like those shown in Photo 1.”
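From the PC side, talking to a preprogrammed robot controller over a Bluetooth virtual serial port can be as simple as writing a few bytes. The sketch below uses the pyserial package; the port name and the command bytes are placeholders chosen for illustration, not the actual RROS protocol.

```python
# Sketch of sending movement commands from a PC over a (Bluetooth) virtual
# serial port, in the spirit of the serial robot command protocol described
# above. Assumes the pyserial package; the port name and command bytes are
# placeholders, not the actual RROS protocol.
import serial
import time

port = serial.Serial("/dev/rfcomm0", 9600, timeout=1)   # e.g. "COM5" on Windows

def send_command(opcode, value):
    """Send a two-byte [opcode, value] pair and read back a one-byte reply."""
    port.write(bytes([opcode, value]))
    return port.read(1)

FORWARD = 0x01   # placeholder opcode
TURN = 0x02      # placeholder opcode

send_command(FORWARD, 50)   # hypothetical: drive forward 50 units
time.sleep(1.0)
send_command(TURN, 90)      # hypothetical: turn 90 degrees
port.close()
```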

Bachiochi’s look at RB and RROS is a two-part series. In Part 2, appearing in Circuit Cellar’s November issue, Bachiochi will explain how to translate between RROS and the iRobot Create Open Interface.

So, if you want to explore a programming language that can take you from simulated to real-world robotics control, check out the October and November issues.


CC278: Evolving Neural Networks in Robotics

Are you curious about how an evolving neural network helps a robot learn about itself and its environment?


A neural network with two inputs, one output, and three hidden neurons.

In the September issue of Circuit Cellar, Walter O. Krawec begins a two-part series that describes an evolving neural network (ENN) he uses in robot development experiments, explains how short-term memory (STM) evaluates a network’s conditions and how to add data to STM, and discusses how an ENN uses a robot’s minimalistic “instincts” and “reflexes” to guide a robot’s evolution.

Krawec, who has been building robots since 1999, is a research assistant and PhD student in Computer Science at the Stevens Institute of Technology in Hoboken, N.J. The work presented in his two-part series is based on a paper published in the proceedings of the 13th International Artificial Life Conference in 2012.

The overall goal of Krawec’s experiments in developmental robotics is to enable a robot to learn on its own without human intervention. “An ENN is used to accomplish this,” he says. “This network will be capable of growing and learning in real time as the robot operates.”

In his series, Krawec presents an architecture he says “enables a robot to ‘grow’ from a naive individual with no knowledge of itself (i.e., no notion of what its sensors are reporting or what its outputs actually do) to one that can operate in an environment.”

“This architecture will consist of an evolving neural network (ENN), a short-term memory (STM), and simple instincts and reflexes.

“Despite a minimal set of instincts, which provide penalties and rewards for certain actions (e.g., crashing into a wall), the robots described in this article sometimes develop complicated and unexpected behaviors. Such behaviors range from following walls (despite the robots’ binary proximity sensors) to games of ‘follow the leader.’…

“This article explores basic artificial neural network (ANN) concepts and outlines the ENN I’m using in this project. This is a neural network that, over time, learns not only by adjusting synaptic weights but also by growing new neurons and new connections (generally resulting in a recurrent neural network). Finally, I’ll discuss the STM system and how it is used to evaluate a network’s fitness.”
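To make the growth idea concrete, here is a minimal, self-contained sketch of a network that can both nudge its weights and splice in new hidden neurons. It illustrates the general concept only; it is not Krawec’s ENN, and it omits the STM and fitness evaluation entirely.

```python
# Minimal illustration of the "grow as well as reweight" idea, not Krawec's
# actual ENN. The network is a dict of weighted connections; evolution either
# perturbs one weight or splices a new hidden neuron into an existing link.
import math
import random

class TinyENN:
    def __init__(self, n_inputs, n_outputs):
        self.outputs = [f"out{i}" for i in range(n_outputs)]
        self.neurons = [f"in{i}" for i in range(n_inputs)] + self.outputs
        # Start fully connected input -> output with small random weights.
        self.conns = {(i, o): random.uniform(-1, 1)
                      for i in self.neurons if i.startswith("in")
                      for o in self.outputs}

    def activate(self, inputs):
        values = {f"in{i}": v for i, v in enumerate(inputs)}
        for n in self.neurons:                     # hidden neurons come before outputs
            if n not in values:
                total = sum(w * values.get(src, 0.0)
                            for (src, dst), w in self.conns.items() if dst == n)
                values[n] = math.tanh(total)
        return [values[o] for o in self.outputs]

    def mutate(self):
        if random.random() < 0.7 and self.conns:   # usually: nudge one weight
            key = random.choice(list(self.conns))
            self.conns[key] += random.gauss(0, 0.3)
        else:                                      # sometimes: grow a new neuron
            new = f"hid{len(self.neurons)}"
            src, dst = random.choice(list(self.conns))
            self.neurons.insert(len(self.neurons) - len(self.outputs), new)
            self.conns[(src, new)] = random.uniform(-1, 1)
            self.conns[(new, dst)] = random.uniform(-1, 1)

net = TinyENN(n_inputs=2, n_outputs=1)
for _ in range(20):
    net.mutate()
print("output for [0.5, -0.2]:", net.activate([0.5, -0.2]))
```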

The second article in Krawec’s series appears in Circuit Cellar’s October issue.

“In Part 2, I’ll examine the reflex and instinct system, which feeds reward information to an ENN and the ‘decision path’ system, which rewards or penalizes chains of actions,” Krawec says. “Finally, I’ll discuss experiments conducted to demonstrate this architecture in a simulated environment. In particular, I’ll describe some interesting behaviors that robots have developed in trial runs.”

For more, check out Krawec’s articles on “Experiments in Developmental Robotics” in the September and October issues. You will also find information and videos about his work with robots on his website.


AAR Arduino Autonomous Mobile Robot

The AAR Arduino Robot is a small autonomous mobile robot designed for those new to robotics and for experienced Arduino designers. The robot is well suited for hobbyists and school projects. Designed in the Arduino open-source prototyping platform, the robot is easy to program and run.

The AAR, which is delivered fully assembled, comes with a comprehensive CD that includes all the software needed to write, compile, and upload programs to your robot. It also includes a firmware and hardware self-test. For wireless control, the robot features optional Bluetooth technology and a 433-MHz RF link.

The AAR robot’s features include an Atmel ATmega328P 8-bit AVR-RISC processor with a 16-MHz clock, Arduino open-source software, two independently controlled 3-VDC motors, an I2C bus, 14 digital I/Os on the processor, eight analog input lines, USB interface programming, an on-board odometer sensor on both wheels, a line tracker sensor, and an ISP connector for bootloader programming.

The AAR’s many example programs help you get your robot up and running. With many expansion kits available, your creativity is unlimited.

Contact Global Specialties for pricing.

Global Specialties

Microcontroller-Based, Cube-Solving Robot

Cube Solver in Action

Canadian Nelson Epp has earned degrees in physics and electrical engineering. But as a child, he was stumped by the Rubik’s Cube puzzle. So, as an adult, he built a Rubik’s Cube-solving robot that uses a Parallax Propeller microcontroller and a 52-move algorithm to solve the 3-D puzzle.

Designing and completing the robot wasn’t easy. Epp says he originally used a “gripper”-type robot that was “a complete disaster.” Then he experimented with different algorithms, “human memorizable ones,” before settling on a solution method developed by mathematician Morwen Thistlethwaite. (The algorithm is based on the mathematical concepts of a group, a subgroup, and generator and coset representatives.)

Epp also developed a version of his Rubik’s Cube solver that used neural networks to analyze the cube’s colors, but that approach worked only half the time.

So, considering the time he had to spend on project trial and error (and his obligations to work, family, and pets), it took about six years to complete the robot. He writes about the results in the September issue of Circuit Cellar magazine. 

Here, he describes some of the choices he made in hardware components.

“The cube solver hardware uses two external power supplies: 5 VDC for the servomotors and 12 VDC for the remaining circuits. The 12-VDC power supply feeds a Texas Instruments (TI) UA78M33 and a UA78M05 linear regulator. The UA78M05 regulator powers an Electronics123 C3088 camera board. The UA78M33 regulator powers a Maxim Integrated MAX3232 ECPE RS-232 transceiver, a Microchip Technology 24LC256 CMOS serial EEPROM, remote reset circuitry, the Propeller, an SD/MMC card, the camera board’s digital output circuitry, and an ECS ECS-300C-160 oscillator. The images at right show my cube solver and circuit board.
“The ECS-300C-160 is a self-contained dual-output oscillator that can produce clock signals that are binary fractions of the 16-MHz base signal. My application uses the 8- and 16-MHz clock taps. The Propeller is clocked with the 8-MHz signal and then internally multiplied up to 64 MHz. The 16-MHz signal is fed to the camera.

“I used a MAX3232 transceiver to communicate to the host’s RS-232 port. The Propeller’s serial input pin and serial output pin are only required at startup. After the Propeller starts up, these pins can be used to exchange commands with the host. The Propeller also has pins for serial communication to an EEPROM, which are used during power up when a host is not sending a program.

“The cube-solving algorithm uses the coset representative file stored on an SD card, which is read by the Propeller via a SparkFun Electronics Breakout Board for SD-MMC cards. The Propeller interface to the SD card consists of a chip select, data in, data out, data clock, and power. The chip select is fixed into the active state. The three lines associated with data are wired to the Propeller.

“The Propeller uses a camera to determine the cube’s starting permutation. The C3088 uses an Electronics123 OV6630 color sensor module. I chose the camera because its data format and clocking speed were within the range of the Propeller’s capabilities. The C3088 has jumpers for external or internal clocking.”

To read more about Epp’s design journey—and outcomes—check out Circuit Cellar’s September issue. And click here for a video of his robot at work.


CC 276: MCU-Based Prosthetic Arm with Kinect

In its July issue, Circuit Cellar presents a project that combines the technology behind Microsoft’s Kinect gaming device with a prototype prosthetic arm.

The project team and authors of the article include Jung Soo Kim, an undergraduate student in Biomedical Engineering at Ryerson University in Toronto, Canada; Nika Zolfaghari, a master’s student at Ryerson; and Dr. James Andrew Smith, who specializes in Biomedical Engineering at Ryerson.

“We designed an inexpensive, adaptable platform for prototype prosthetics and their testing systems,” the team says. “These systems use Microsoft’s Kinect for Xbox, a motion sensing device, to track a healthy human arm’s instantaneous movement, replicate the exact movement, and test a prosthetic prototype’s response.”

“Kelvin James was one of the first to embed a microprocessor in a prosthetic limb in the mid-1980s…,” they add. “With the maker movement and advances in embedded electronics, mechanical T-slot systems, and consumer-grade sensor systems, these applications now have more intuitive designs. Integrating Xbox provides a platform to test prosthetic devices’ control algorithms. Xbox also enables prosthetic arm end users to naturally train their arms.”

They elaborate on their choices in building the four main hardware components of their design, which include actuators, electronics, sensors, and mechanical support:

“Robotis Dynamixel motors combine power-dense neodymium motors from Maxon Motors with local angle sensing and high gear ratio transmission, all in a compact case. Atmel’s on-board 8-bit ATmega8 microcontroller, which is similar to the standard Arduino, has high (17-to-50-ms) latency. Instead, we used a 16-bit Freescale Semiconductor MC9S12 microcontroller on an Arduino-form-factor board. It was bulkier, but it was ideal for prototyping. The Xbox system provided high-level sensing. Finally, we used Twintec’s MicroRAX 10-mm profile T-slot aluminum to speed the mechanical prototyping.”

The team’s goal was to design a prosthetic arm that is markedly different from others currently available. “We began by building a working prototype of a smooth-moving prosthetic arm,” they say in their article.

“We developed four-quadrant-capable, H-bridge-driven motors and proportional-derivative (PD) controllers at the prosthetic’s joints, running on an MC9S12 microcontroller. Monitoring the prosthetic’s angular position provided us with an analytic comparison of the programmed and outputted results.”
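A proportional-derivative joint controller of the kind the team describes boils down to a few lines of fixed-period control code. The following generic C sketch is only an illustration; the gains, sample period, and the read_joint_angle()/set_motor_duty() hardware hooks are placeholders, not the team’s MC9S12 firmware:

    /* Generic proportional-derivative (PD) joint controller sketch.     */
    /* read_joint_angle() and set_motor_duty() are hypothetical hardware */
    /* access functions; KP, KD, and DT are placeholder tuning values.   */
    #define KP 2.0f      /* proportional gain                 */
    #define KD 0.1f      /* derivative gain                   */
    #define DT 0.01f     /* control period in seconds (100 Hz) */

    extern float read_joint_angle(void);        /* sensor input, radians     */
    extern void  set_motor_duty(float duty);    /* -1.0 .. +1.0 to H-bridge  */

    void pd_joint_step(float target_angle) {
        static float prev_error = 0.0f;
        float error = target_angle - read_joint_angle();
        float derivative = (error - prev_error) / DT;
        float command = KP * error + KD * derivative;

        /* Clamp to the H-bridge's usable range. */
        if (command > 1.0f)  command = 1.0f;
        if (command < -1.0f) command = -1.0f;

        set_motor_duty(command);
        prev_error = error;
    }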

A Technological Arts Esduino microcontroller board is at the heart of the prosthetic arm design.

The team concludes that its project illustrates how to combine off-the-shelf Arduino-compatible parts, aluminum T-slots, servomotors, and a Kinect into an adaptable prosthetic arm.

But more broadly, they say, it’s a project that supports the argument that  “more natural ways of training and tuning prostheses” can be achieved because the Kinect “enables potential end users to manipulate their prostheses without requiring complicated scripting or programming methods.”

For more on this interesting idea, check out the July issue of Circuit Cellar. And for a video from an earlier Circuit Cellar post about this project, click here.


Electrostatic Cleaning Robot Project

How do you clean a clean-energy generating system? With a microcontroller (and a few other parts, of course). An excellent example is US designer Scott Potter’s award-winning Renesas RL78 microcontroller-based Electrostatic Cleaning Robot system, which cleans the heliostats (i.e., solar-tracking mirrors) used in solar energy-harvesting systems. Renesas and Circuit Cellar magazine announced this week at DevCon 2012 in Garden Grove, CA, that Potter’s design won First Prize in the RL78 Green Energy Challenge.

This image depicts two Electrostatic Cleaning Robots set up on two heliostats. (Source: S. Potter)

The nearby image depicts two Electrostatic Cleaning Robots set up vertically in order to clean the two heliostats in a horizontal left-to-right (and vice versa) fashion.

The Electrostatic Cleaning Robot in place to clean

Potter’s design can quickly clean heliostats in Concentrating Solar Power (CSP) plants. The heliostats must be clean in order to maximize steam production, which generates power.

The robot cleaner prototype

Built around an RL78 microcontroller, the Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high-voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Object-oriented C++ software, developed with the IAR Embedded Workbench and the RL78 Demonstration Kit, controls the device.

IAR Embedded Workbench IDE

The RL78 microcontroller uses the following resources for system control (a minimal sketch of how they fit together follows the list):

• 20 Digital I/Os used as system control lines

• 1 ADC monitors solar cell voltage

• 1 Interval timer provides controller time tick

• Timer array unit: 4 timers capture the width of sensor pulses

• Watchdog timer for system reliability

• Low voltage detection for reliable operation in intermittent solar conditions

• RTC used in diagnostic logs

• 1 UART used for diagnostics

• Flash memory for storing diagnostic logs
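As a rough illustration of how these resources typically work together (a generic sketch, not Potter’s code), the interval timer can set a tick flag, the main loop can run the cleaning logic on each tick, skip it when the low-voltage condition is detected, and refresh the watchdog only when a pass completes; all function names below are placeholders:

    /* Generic tick-driven control loop sketch; tick_flag would be set by */
    /* the interval-timer interrupt, and the other functions are          */
    /* placeholders for the robot's real drivers.                         */
    #include <stdbool.h>

    volatile bool tick_flag;                  /* set in interval-timer ISR */

    extern void refresh_watchdog(void);
    extern unsigned int read_solar_cell_mv(void);   /* via the ADC channel */
    extern void run_cleaning_state_machine(void);
    extern void log_diagnostics(void);

    #define MIN_OPERATING_MV 4500u            /* placeholder threshold     */

    int main(void) {
        for (;;) {
            if (!tick_flag)
                continue;                     /* wait for next timer tick  */
            tick_flag = false;

            if (read_solar_cell_mv() < MIN_OPERATING_MV) {
                log_diagnostics();            /* record the brown-out event */
            } else {
                run_cleaning_state_machine();
            }
            refresh_watchdog();               /* only when the loop is healthy */
        }
    }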

The complete project (description, schematics, diagrams, and code) is now available on the Challenge website.


Autonomous Mobile Robot (Part 2): Software & Operation

I designed a microcontroller-based mobile robot that can cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. In the first part of this series, I introduced my TOMBOT robot’s hardware. Now I’ll describe its software and how to achieve autonomous robot behavior.

Autonomous Behavior Model Overview
The TOMBOT is a minimalist system with just enough components to demonstrate some simple autonomous behaviors: Cruise, Escape, Avoid, and Home (see Figure 1). All of the behaviors require the left and right servos for maneuverability. In general, “Cruise” simply keeps the robot in motion in the absence of any stimulus. “Escape” uses the bumper to sense a collision and then performs a 180° spin with reverse. “Avoid” uses continuous forward-looking IR sensors to veer left or right when approaching a close obstacle. Finally, “Home” uses the front optical photocells to guide the robot toward a strong, highly directional light source.

Figure 1: High-level autonomous behavior flow

Figure 2 shows more details. The diagram captures the interaction of the TOMBOT hardware and software. On the left side of the diagram are the sensors, power sources, and command override (the XBee radio command input). All analog sensor inputs and bumper switches are sampled automatically every 100 ms during the Microchip Technology PIC32 Timer 1 interrupt. The left and right bumper switches are debounced using 100 ms as the timer increment. The analog sensor inputs are digitized using the PIC32’s 10-bit ADC, with each sensor assigned its own ADC channel. The collected data is averaged in some cases and then made available to the different behaviors. Any processing beyond averaging is done within the behavior itself.

Figure 2: Detailed TOMBOT autonomous model
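A hypothetical sketch of that sampling scheme follows; adc_read() and the switch-read functions stand in for the PIC32 peripheral-library calls, and the channel names mirror the five sensors described later:

    /* Illustrative 100-ms sampling routine, not the actual TOMBOT code.  */
    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SENSORS 5
    enum { RIGHT_IR, LEFT_IR, BATTERY, LEFT_PHOTO, RIGHT_PHOTO };

    volatile uint16_t sensor_raw[NUM_SENSORS];   /* shared with behaviors   */
    volatile bool bumper_left, bumper_right;     /* debounced switch states */

    extern uint16_t adc_read(int channel);       /* 10-bit conversion       */
    extern bool read_left_switch(void);
    extern bool read_right_switch(void);

    /* Called every 100 ms from the Timer 1 interrupt service routine. */
    void timer1_tick(void) {
        static bool last_left, last_right;
        int ch;

        for (ch = 0; ch < NUM_SENSORS; ch++)
            sensor_raw[ch] = adc_read(ch);

        /* A switch change is accepted only when two consecutive 100-ms */
        /* samples agree, which debounces the bumper contacts.          */
        bool l = read_left_switch();
        bool r = read_right_switch();
        if (l == last_left)  bumper_left = l;
        if (r == last_right) bumper_right = r;
        last_left = l;
        last_right = r;
    }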

All behaviors are implemented as state machines. If a behavior requests motor control, the request is arbitrated against all other behaviors before any motor action is taken. Escape has the highest priority (the power behavior is not yet implemented) and its state machine dominates all other behaviors. If escape is not active, avoid dominates whenever its IR detectors sense an object less than 8″ in front of the TOMBOT. If neither escape nor avoid is active, home takes over the steering to track a light source immediately in front of the TOMBOT. Finally, cruise assumes command and temporarily drives the TOMBOT forward.

A command received from the XBee RF module can stop and start autonomous operation remotely, which is very handy for system debugging. The values of all sensors and the battery voltage can be viewed on the graphics display via remote command, with LEDs and a buzzer announcing remote-command acceptance and execution.

Currently, the green LED signals that the TOMBOT is ready to accept a command, and the red LED indicates that the TOMBOT is executing a command. The buzzer sounds when the remote command has been completed, coincident with the red LED turning off.

With behavior programming, there are many considerations. Successful autonomous operation requires calibration of the photocells, IR sensors, and servos. The good news is that each behavior can be isolated (selectively comment out what is not needed before compiling), so that phenomena can be examined one at a time and the proper calibrations made. We will discuss this as we get a little deeper into the library API, but in general, behavior modeling itself does not require precise models and is fairly robust under less-than-ideal conditions.
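One common way to comment behaviors in and out is with preprocessor switches; the flags and function names in this small sketch are hypothetical, not the library’s actual symbols:

    /* Hypothetical behavior-selection switches: comment out a define and */
    /* recompile to isolate a single behavior during calibration.         */
    #define ENABLE_CRUISE
    #define ENABLE_AVOID
    /* #define ENABLE_ESCAPE */
    /* #define ENABLE_HOME   */

    extern void cruise(void), avoid(void), escape(void), home_behavior(void);
    extern void arbiter(void);

    void run_behaviors(void) {
    #ifdef ENABLE_ESCAPE
        escape();
    #endif
    #ifdef ENABLE_AVOID
        avoid();
    #endif
    #ifdef ENABLE_HOME
        home_behavior();
    #endif
    #ifdef ENABLE_CRUISE
        cruise();
    #endif
        arbiter();      /* resolve priorities and drive the wheels */
    }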

TOMBOT Software Library
The TOMBOT robot library is modular. Some experience with C programming is required to use it (see Figure 3).

Figure 3: TOMBOT Library

The entire library is written using Microchip’s PIC32 C compiler. Both the compiler and Microchip’s MPLAB 8.xx IDE are available as free downloads at www.microchip.com. Figure 3 shows the overall library structure. At the highest level, the library has three main sections: Motor, I/O, and Behavior. We will cover each of these areas in some detail.

TOMBOT Motor Library
All functions controlling the servos’ (left and right wheel) operation are contained in this part of the library (see Listing 1, Motor.h). The Microchip PIC32 peripheral library is also used. Motor initialization is required before any other library functions are called. Initialization starts both the left and right servos in the idle position, using the PIC32 PWM peripherals OC3 and OC4 and the dual Timer34 (32 bits) for period setting. C define statements set the pulse period and duty cycle for both the left and right wheels. These defines produce a PWM pulse that varies from 1 to 2 ms for different-speed CCW rotation over a 20-ms period, and from 1.5 ms down to 1 ms for CW rotation.

Listing 1: All functions controlling the servos are in this part of the library.

V_LEFT and V_RIGHT (velocity left and right) use the PIC32 peripheral library function to set the duty cycle. The other motor functions, in turn, use V_LEFT and V_RIGHT together with the define statements; the FORWARD and BACKWARD functions are examples (see Listing 2).

Listing 2: Motor function code examples

In the idle setting, both PWM outputs are set to the center position, which should cause the servos not to turn. A servo calibration process is required to ensure that the center position does not produce any rotation. Each servo has a set screw that can be adjusted with a small Phillips screwdriver so that the idle setting results in no spin.
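As an illustration of the define-plus-wrapper pattern described above (hypothetical values and stand-in output-compare calls, not the published Motor.h), the pulse widths become constants, V_LEFT/V_RIGHT set the duty cycles, and FORWARD/BACKWARD simply pair them:

    /* Illustrative motor wrappers; set_oc3_duty()/set_oc4_duty() stand   */
    /* in for the PIC32 output-compare peripheral-library calls, and the  */
    /* pulse-width values are placeholders.                               */
    #define SERVO_PERIOD_MS    20     /* PWM frame period                  */
    #define PULSE_IDLE_US    1500     /* placeholder center (no rotation)  */
    #define PULSE_FULL_CCW_US 2000    /* placeholder full speed, one way   */
    #define PULSE_FULL_CW_US  1000    /* placeholder full speed, other way */

    extern void set_oc3_duty(unsigned int pulse_us);   /* left wheel  */
    extern void set_oc4_duty(unsigned int pulse_us);   /* right wheel */

    void V_LEFT(unsigned int pulse_us)  { set_oc3_duty(pulse_us); }
    void V_RIGHT(unsigned int pulse_us) { set_oc4_duty(pulse_us); }

    /* On a differential-drive robot the wheels face opposite ways, so a  */
    /* "forward" command typically pairs opposite rotation directions.    */
    void FORWARD(void)  { V_LEFT(PULSE_FULL_CCW_US); V_RIGHT(PULSE_FULL_CW_US); }
    void BACKWARD(void) { V_LEFT(PULSE_FULL_CW_US);  V_RIGHT(PULSE_FULL_CCW_US); }
    void IDLE(void)     { V_LEFT(PULSE_IDLE_US);     V_RIGHT(PULSE_IDLE_US); }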

TOMBOT I/O Library

This is a collection of different low-level library functions. Let’s examine them file by file, starting with the timer (see Listing 3). It uses the Timer45 combination (a full 32 bits) as a precision timer for the behaviors. C define statements set the different time values. The routine is not interrupt-driven at this time; it simply waits for the timer to expire and then returns.

Listing 3: Low-level library functions
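The blocking wait described above amounts to restarting the 32-bit timer and spinning until the programmed count is reached. A generic sketch, with placeholder names and tick rate:

    /* Generic busy-wait timer sketch; timer45_reset()/timer45_count()   */
    /* are placeholders for the 32-bit Timer45 access functions.         */
    #include <stdint.h>

    #define TICKS_PER_MS 40000u      /* placeholder for the configured clock */

    extern void     timer45_reset(void);
    extern uint32_t timer45_count(void);

    /* Blocks the caller; the robot is "blind" while waiting (see the    */
    /* Early Findings section).                                          */
    void wait_ms(uint32_t ms) {
        uint32_t target = ms * TICKS_PER_MS;
        timer45_reset();
        while (timer45_count() < target)
            ;                        /* spin until timeout */
    }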

The next I/O library function is the ADC. There are a total of five analog inputs, all defined below. Each sensor definition corresponds to an integer (a 32-bit number) designating the specific input channel to which that sensor is connected. The five are: Right IR, Left IR, Battery, Left Photocell, and Right Photocell.

The initialization function sets up the ADC peripheral for the specified channel. The read function performs a 10-bit ADC conversion and returns the result. To facilitate operation across the five sensors, the SCAN_SENSORS function performs an initialization and conversion of each sensor in turn. The results are placed in global memory, where the behavior functions can access them. SCAN_SENSORS also performs a running average of the last eight samples of the left and right photocells (see Listing 4).

Listing 4: SCAN_SENSORS also performs a running average of the last eight samples
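In sketch form (stand-in ADC calls, hypothetical array names), a scan with an eight-sample running average on the photocells might look like this:

    /* Illustrative sensor scan; adc_init_channel()/adc_read() stand in  */
    /* for the PIC32 ADC peripheral-library calls.                        */
    #include <stdint.h>

    #define NUM_SENSORS 5
    #define AVG_SAMPLES 8
    enum { RIGHT_IR, LEFT_IR, BATTERY, LEFT_PHOTO, RIGHT_PHOTO };

    extern void     adc_init_channel(int channel);
    extern uint16_t adc_read(int channel);

    uint16_t sensor_value[NUM_SENSORS];        /* globals used by behaviors */

    void SCAN_SENSORS(void) {
        static uint16_t photo_hist[2][AVG_SAMPLES];
        static int idx;
        uint32_t sum_l = 0, sum_r = 0;
        int ch, i;

        for (ch = 0; ch < NUM_SENSORS; ch++) {
            adc_init_channel(ch);
            sensor_value[ch] = adc_read(ch);   /* 10-bit result */
        }

        /* Keep the last eight photocell samples and average them. */
        photo_hist[0][idx] = sensor_value[LEFT_PHOTO];
        photo_hist[1][idx] = sensor_value[RIGHT_PHOTO];
        idx = (idx + 1) % AVG_SAMPLES;
        for (i = 0; i < AVG_SAMPLES; i++) {
            sum_l += photo_hist[0][i];
            sum_r += photo_hist[1][i];
        }
        sensor_value[LEFT_PHOTO]  = sum_l / AVG_SAMPLES;
        sensor_value[RIGHT_PHOTO] = sum_r / AVG_SAMPLES;
    }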

The next I/O library function is Graphics (see Listing 5). The TOMBOT uses a 102 × 64 monochrome graphics display module that has both red and green LED backlights. There are also red and green LEDs on the module that are independently controlled. The module is driven by the PIC32 SPI2 interface and has several control lines: CS (chip select) and A0 (command/data).

Listing 5: The Graphics I/O library function

The Graphics display relies on an 8 × 8 font, stored as a project file, for character generation. The library also contains cursor-position macros, functions to write characters or text strings, and functions to draw 32 × 32 bitmaps. The graphic primitives are shown for initialization, module control, and writing to the module. The library writes to a RAM Vmap memory area; the screen is then updated from this RAM area using the dumpVmap function. The LED and backlight controls are also included in the graphics library.
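The Vmap-then-dump approach is a standard pattern for small monochrome displays. The following generic sketch illustrates it; the page organization, spi2_write() transfer function, and font table are assumptions rather than the module’s actual driver:

    /* Generic off-screen buffer ("Vmap") sketch for a 102 x 64 monochrome */
    /* display; spi2_write() and font8x8[] are placeholders.               */
    #include <stdint.h>
    #include <string.h>

    #define DISP_W 102
    #define DISP_H 64
    #define DISP_PAGES (DISP_H / 8)            /* 8 vertical pixels per byte */

    static uint8_t vmap[DISP_PAGES][DISP_W];   /* RAM copy of the screen     */

    extern const uint8_t font8x8[][8];         /* 8 x 8 character patterns   */
    extern void spi2_write(uint8_t page, const uint8_t *data, int len);

    void glcd_clear(void) { memset(vmap, 0, sizeof(vmap)); }

    void glcd_putc(int col, int page, char c) {
        int i;
        for (i = 0; i < 8 && col + i < DISP_W; i++)
            vmap[page][col + i] = font8x8[(unsigned char)c][i];
    }

    /* Push the whole RAM image to the module, one page at a time. */
    void dumpVmap(void) {
        int p;
        for (p = 0; p < DISP_PAGES; p++)
            spi2_write(p, vmap[p], DISP_W);
    }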

The next part of the I/O library is delay (see Listing 6). It is just a series of different software delays that can be used by other library functions. They were included only because of legacy use with the graphics library.

Listing 6: Series of different software delays

The next I/O library function is UART-XBee (see Listing 7). This is the serial driver used to configure and transfer data through the XBee radio on the robot side. The library is fairly straightforward: it has an initialization function that sets up UART1B for 9600 bps, 8N1, transmit and receive.

Listing 7: XBee library functions

Transmission is done one character at a time. Reception is handled by an interrupt service routine, which retrieves the received character and sets a semaphore flag. For this communication, I use a SparkFun XBee dongle configured through USB as a COM port and then run HyperTerminal or an equivalent application on a PC. The XBee’s default settings are all that is required (see Photo 1).

Photo 1: XBee PC to TOMBOT communications
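The receive side reduces to a tiny interrupt handler that stores the character and raises a flag for the behaviors to poll. A minimal sketch, with placeholder driver calls:

    /* Illustrative UART receive handling; uart1b_read_byte(),            */
    /* uart1b_write_byte(), and the ISR hook are placeholders for the     */
    /* PIC32 UART driver and interrupt vector.                            */
    #include <stdint.h>
    #include <stdbool.h>

    volatile uint8_t rx_char;        /* last character from the XBee       */
    volatile bool    rx_flag;        /* semaphore: set by ISR, cleared by  */
                                     /* the Remote behavior                */

    extern uint8_t uart1b_read_byte(void);
    extern void    uart1b_write_byte(uint8_t b);   /* blocking, one char  */

    /* Called from the UART1B receive interrupt. */
    void uart1b_rx_isr(void) {
        rx_char = uart1b_read_byte();
        rx_flag = true;
    }

    /* Transmission is done one character at a time. */
    void xbee_send_string(const char *s) {
        while (*s)
            uart1b_write_byte((uint8_t)*s++);
    }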

The next I/O library function is the buzzer (see Listing 8). It uses a simple digital output (Port F, bit 1) to control the buzzer. The functions initialize buzzer control and turn the buzzer on and off.

Listing 8: The functions initialize buzzer control

TOMBOT Behavior Library
The Behavior library is the heart of the autonomous TOMBOT and where integrated behavior happens. All of these behaviors require the left and right servos for autonomous maneuverability. Each behavior is a finite state machine that interacts with the environment (every 0.1 s). Each behavior has a designated priority for wheel operation; these priorities are resolved by the arbiter for final wheel activation. Listing 9 shows the API for the entire Behavior library.

Listing 9: The API for the entire behavior library

Let’s briefly cover the specifics.

  • “Cruise” just keeps the robot in motion in lieu of any stimulus.
  • “Escape” uses the bumper to sense a collision and then performs a 180° spin with reverse.
  • “Avoid” uses continuous forward-looking IR sensors to veer left or right when approaching a close obstacle.
  • “Home” uses the front optical photocells to guide the robot toward a strong, highly directional light source.
  • “Remote operation” allows for the TOMBOT to respond to the PC via XBee communications to enter/exit autonomous mode, report status, or execute a predetermined motion scenario (i.e., Spin X times, run back and forth X times, etc.).
  • “Dump” is an internal function that is used within Remote.
  • “Arbiter” is an internal function that is an intrinsic part of the behavior library that resolves different behavior priorities for wheel activation.

Here’s an example of the main function invoking different behaviors through the API (see Listing 10). Note that this is part of a main loop. Behaviors can be called within a main loop or “stacked up.” You can remove or stack up behaviors as you choose (simply comment out what you don’t need and recompile). Keep in mind that remote is the way for a remote operator to control operation or view status.

Listing 10: TOMBOT API Example
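In outline, stacking behaviors in the main loop looks something like the following sketch (hypothetical function names, not the published listing); any behavior line can be commented out to isolate the others:

    /* Illustrative main loop: behaviors are simply called in sequence    */
    /* every pass, and the arbiter turns their requests into wheel action. */
    extern void remote(void);     /* start/stop autonomous mode, debug dumps */
    extern void escape(void);
    extern void avoid(void);
    extern void home_behavior(void);
    extern void cruise(void);
    extern void arbiter(void);
    extern int  autonomous_enabled;

    int main(void) {
        for (;;) {
            remote();                     /* always listen to the XBee link */
            if (autonomous_enabled) {
                escape();                 /* comment out any behavior to    */
                avoid();                  /* isolate it during calibration  */
                home_behavior();
                cruise();
                arbiter();                /* resolve priorities, drive wheels */
            }
        }
    }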

Let’s now examine the detailed state machine associated with each behavior to gain a better understanding of behavior operation (see Listing 11).

Listing 11: The TOMBOT’s arbiter

The TOMBOT’s arbiter is simple: it is a fixed-priority arbiter. During escape or avoid, it abdicates to those behaviors and lets them resolve motor control internally. Home and cruise motor-control requests are handled directly by the arbiter (see Listing 12).
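A fixed-priority arbiter of this kind can be expressed in a few lines. The sketch below is illustrative only (the activity checks and request types are assumed names): it stands aside while escape or avoid is active and otherwise passes home or cruise requests to the wheels:

    /* Illustrative fixed-priority arbiter. escape_active()/avoid_active() */
    /* and the request/drive functions are placeholders.                   */
    #include <stdbool.h>

    typedef enum { REQ_NONE, REQ_FORWARD, REQ_LEFT, REQ_RIGHT } WheelRequest;

    extern bool escape_active(void);            /* highest priority          */
    extern bool avoid_active(void);
    extern WheelRequest home_request(void);     /* lower priority            */
    extern WheelRequest cruise_request(void);   /* lowest priority           */
    extern void drive(WheelRequest r);          /* actually moves the wheels */

    void arbiter(void) {
        /* Escape and avoid steer the wheels internally, so the arbiter */
        /* simply stands aside while either of them is active.          */
        if (escape_active() || avoid_active())
            return;

        WheelRequest r = home_request();
        if (r == REQ_NONE)
            r = cruise_request();
        drive(r);
    }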

Listing 12: Home behavior

Home is still being debugged and is not yet final. The goal of Home is to steer the robot toward a strong light source when it is not engaged in higher-priority behaviors.

The Cruise behavior sets the motors to forward operation for one second if no other higher-priority behavior is active (see Listing 13).

Listing 13: Cruise behavior

The Escape behavior tests the bumper switch state to determine whether a bump has been detected (see Listing 14). Once a bump is detected, it runs through a series of states: an immediate backup, then a turn-around, and finally motion away from the obstacle.

Listing 14: Escape behavior
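A generic sketch of such an escape state machine, stepped on the 0.1-s behavior tick (the durations, motion helpers, and bumper flags are placeholders):

    /* Illustrative escape state machine, stepped every 0.1 s.            */
    #include <stdbool.h>

    extern volatile bool bumper_left, bumper_right;
    extern void BACKWARD(void);
    extern void SPIN_180(void);
    extern void FORWARD(void);

    static enum { ESC_IDLE, ESC_BACKUP, ESC_SPIN, ESC_RUN } esc_state;
    static int esc_ticks;
    static bool esc_is_active;

    bool escape_active(void) { return esc_is_active; }

    void escape(void) {
        switch (esc_state) {
        case ESC_IDLE:
            if (bumper_left || bumper_right) {
                esc_is_active = true;
                esc_ticks = 0;
                BACKWARD();
                esc_state = ESC_BACKUP;
            }
            break;
        case ESC_BACKUP:                 /* back up for ~1 s (placeholder)  */
            if (++esc_ticks >= 10) { esc_ticks = 0; SPIN_180(); esc_state = ESC_SPIN; }
            break;
        case ESC_SPIN:                   /* turn around for ~1 s            */
            if (++esc_ticks >= 10) { esc_ticks = 0; FORWARD(); esc_state = ESC_RUN; }
            break;
        case ESC_RUN:                    /* move away, then release control */
            if (++esc_ticks >= 10) { esc_is_active = false; esc_state = ESC_IDLE; }
            break;
        }
    }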

This function responds to the remote “C” (capture) command. It formats the left and right IR, left and right photocell, and battery readings in floating-point format and dumps them to the graphics display (see Listing 15).

Listing 15: The dump function

This behavior uses the IR sensors to determine whether an object is within 8″ of the front of the TOMBOT (see Listing 16).

Listing 16: Avoid behavior

If both sensors detect a target within 8″, the robot simply turns around and moves away (much like escape). If only the right sensor detects an object in range, the robot spins away from the right side; if only the left sensor does, it spins away from the left side (see Listing 17).
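In sketch form, the avoid logic compares each IR reading against a threshold corresponding to roughly 8″ and picks a turn direction; the threshold value and helper functions below are placeholders:

    /* Illustrative avoid behavior; IR_THRESHOLD and the turn helpers are */
    /* placeholders, and sensor_value[] comes from the sensor scan.       */
    #include <stdbool.h>
    #include <stdint.h>

    enum { RIGHT_IR, LEFT_IR, BATTERY, LEFT_PHOTO, RIGHT_PHOTO };
    extern uint16_t sensor_value[];

    #define IR_THRESHOLD 300u     /* placeholder ADC count for ~8 inches */

    extern void SPIN_180(void);
    extern void SPIN_LEFT(void);
    extern void SPIN_RIGHT(void);

    static bool avoid_is_active;
    bool avoid_active(void) { return avoid_is_active; }

    void avoid(void) {
        bool left  = sensor_value[LEFT_IR]  > IR_THRESHOLD;
        bool right = sensor_value[RIGHT_IR] > IR_THRESHOLD;

        avoid_is_active = (left || right);
        if (left && right)
            SPIN_180();           /* obstacle dead ahead: turn around    */
        else if (right)
            SPIN_LEFT();          /* obstacle on the right: veer left    */
        else if (left)
            SPIN_RIGHT();         /* obstacle on the left: veer right    */
    }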

Listing 17: Remote part 1

Remote behavior is fairly comprehensive (see Listing 18). There are 14 different cases, each driven by a different character received over the XBee radio. Once a character is received, the red LED is turned on. Once the behavior is complete, the red LED is turned off and the buzzer is sounded.

Listing 18: Remote part 2

The first case toggles autonomous mode on and off. The other 13 are prescribed actions. Seven of the 13 were written to demonstrate the TOMBOT’s mobile agility with multiple spins and back-and-forth runs. The final six are standard single-step debug commands such as stop, backward, and capture; capture dumps all sensor output to the display screen (see Table 1).

Table 1: TOMBOT remote commands
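The remote behavior is essentially a switch on the received character. This compressed sketch shows the dispatch pattern only; the command letters, helpers, and LED/buzzer calls are hypothetical, and most of the 14 cases are omitted:

    /* Illustrative remote-command dispatch; all names are placeholders.  */
    #include <stdbool.h>
    #include <stdint.h>

    extern volatile uint8_t rx_char;
    extern volatile bool    rx_flag;
    extern int  autonomous_enabled;
    extern void red_led(bool on);
    extern void buzzer_beep(void);
    extern void FORWARD(void);
    extern void BACKWARD(void);
    extern void STOP(void);
    extern void dump_sensors_to_display(void);

    void remote(void) {
        if (!rx_flag)
            return;
        rx_flag = false;
        red_led(true);                         /* command accepted         */

        switch (rx_char) {
        case 'A': autonomous_enabled = !autonomous_enabled; break;
        case 'F': FORWARD();  break;
        case 'B': BACKWARD(); break;
        case 'S': STOP();     break;
        case 'C': dump_sensors_to_display(); break;   /* capture */
        default:  break;                       /* remaining demo cases omitted */
        }

        red_led(false);                        /* command complete         */
        buzzer_beep();
    }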

Early Findings & Implementation
Implementation always presents choices. In my case, I was interested in rapid development, so I chose non-interrupt code with a linear flow for easy debugging. This amounts to “blocking code.” Blocking code is used throughout the behavior implementation and makes the robot unresponsive while blocking occurs. All of the blocking happens in the timeout functions; during those intervals the robot is “blind” to outside environmental conditions. Using a real-time operating system (e.g., FreeRTOS) to eliminate this problem is recommended.

The TOMBOT also uses photocells for homing. These sensitive devices have varying responses and need to be calibrated to ensure correct operation. A photocell calibration routine is included in the baseline and should be run prior to operation.


The TOMBOT was successfully demoed to a large first-grade class in southern California as part of a Science, Technology, Engineering and Mathematics (STEM) program. The main behaviors were limited to Remote, Avoid, and Escape. With autonomous operation off, the robot demonstrated mobility and maneuverability. With autonomous operation on, the robot could interact with a student to demo avoid and escape behavior.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC, area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.