Q&A: Robotics Mentor and Champion

Peter Matteson, a Senior Project Engineer at Pratt & Whitney in East Hartford, CT, has a passion for robotics. We recently discussed how he became involved with mentoring a high school robotics team, the types of robots the team designs, and the team’s success.—Nan Price, Associate Editor

 

NAN: You mentor a FIRST (For Inspiration and Recognition of Science and Technology) robotics team for a local high school. How did you become involved?

Peter Matteson

PETER: I became involved in FIRST in late 2002, when one of my fraternity brothers, whom I worked with at the time, mentioned that FIRST was looking for new mentors to help the team the company sponsored. I was working at what was then known as UTC Power (sold off to ClearEdge Power Systems last year), which had sponsored Team 177 Bobcat Robotics since 1995.

After my first year mentoring the kids and experiencing the competition, I got hooked. I loved the competition and strategy of solving a new game each year and designing and building a robot. I enjoyed working with the kids, teaching them how to design and build mechanisms and strategize the games.

The FIRST team’s 2010 robot is shown.

A robot’s articulating drive train is tested on an obstacle (bump) at the 2010 competition.

NAN: What types of robots has your team built?

A temporary control board was used to test the drive base at the 2010 competition.

PETER: Every robot we make is purpose-built for the specific game played the year we build it. The robots have varied from arm robots with a 15’ reach, to catapults that launch a 40” diameter ball, to Frisbee throwers, to Nerf ball shooters.

Their drive trains have ranged from 4 × 4 to 6 × 6 to articulating 8 × 8, and their speeds from 6 to 16 fps.

NAN: What types of products do you use to build the robots? Do you have any favorites?

PETER: We use a variant of the Texas Instruments (TI) cRIO electronics kit for the controller, as is required per the FIRST competition rules. The motors and motor controllers we use are also mandated to a few choices. We prefer VEX Robotics VEXPro Victors, but we also design with the TI Jaguar motor controllers. For the last few years, we used a SparkFun CMUcam webcam for the vision system. We build with Grayhill encoders, various inexpensive limit switches, and gyro chips.

The team designed a prototype minibot.

For pneumatics, we use compressors from Thomas and VIAIR. Our cylinders are primarily from Bimba, but we also use Parker and SMC. For valves we use SMC and Festo. We usually design with Clippard plastic or stainless accumulator tanks. Our gears and transmissions come from AndyMark, VEX Robotics VEXPro, and BaneBots.

The AndyMark shifter transmissions were a mainstay of ours until last year when we tried the VEXPro transmissions for the first time. Over the years, we have utilized many of the planetary transmissions from AndyMark, VEX Robotics, and BaneBots. We have had good experience with all the manufacturers. BaneBots had a shaky start, but it has vastly improved its products.

We have many other odds and ends we’ve discovered over the years for specific needs of the games. Those are a little harder to describe because they tend to be very specific, but urethane belting is useful in many ways.

NAN: Has your team won any competitions?

Peter’s FIRST team is pictured at the 2009 championship at the Georgia Dome in Atlanta, GA. (Peter is standing fourth from the right.)

PETER: My team is considered one of the most successful in FIRST. We have won four regional-level competitions, and we have always shined at the championship level, where the 400 teams that qualify from nine-plus countries vie for the title.

In my years on the team, we have won the championship twice (2007 and 2010), been the championship finalist once (2011), won our division and made the final four a total of six times (2006–2011), and been division finalists in 2004.

A FIRST team member works on a robot “in the pits” at the 2011 Hartford, CT, regional competition.

Team 177 was the only team to make the final four more than three years in a row, setting the bar at six consecutive trips. It was also the only team to make seven total trips to the final four, the first coming in 2001.

NAN: What is your current occupation?

PETER: I am a Senior Project Engineer at Pratt & Whitney. I oversee and direct a team of engineers designing components for commercial aircraft propulsion systems.

NAN: How and when did you become interested in robotics?

PETER: I have been interested in robotics for as long as I can remember. The tipping point was probably when I took an industrial robotics course in college. That was when I really developed a curiosity about what I could do with robots.

The industrial robotics course started with programming robots for basic tasks. We had a welding robot that we taught the weld path; it determined on its own how to move between points.

We also worked with programming a robot to install light bulbs and then determine if the bulbs were working properly.

In addition to practical labs such as those, we also had to design the optimal robot for painting a car and figure out how to program it. We basically had to come up with a proposal for how to design and build the robot from scratch.

This robot from the 2008 competition holds a 40” diameter ball for size reference.

NAN: What advice do you have for engineers or students who are designing robots or robotic systems?

PETER: My advice is to clearly set your requirements at the beginning of the project and then do some research into how other people have accomplished them. Use that inspiration as a stepping-off point. From there, you need to build a prototype. I like to use wood, cardboard, and other materials to build prototypes. After this you can iterate to improve your design until it performs exactly as expected.

Ohio-Based “Design Dungeon”

“Steve Ciarcia had a ‘Circuit Cellar.’ I have a ‘Design Dungeon,’” Steve Lubbers says about his Dayton, OH-based workspace.

“An understanding wife and a spare room in the house allocated a nice place for a workshop. Too bad the engineer doesn’t keep it nice and tidy! I am amazed by the nice clean workspaces that have previously been published! So for those of you who need a visit from FEMA, don’t feel bad. Take a look at my mess.”

Steve Lubbers describes his workbench as a “work in progress.”

The workspace is a creative mess that has produced dozens of projects for Circuit Cellar contests. From the desk to the floor to the closet, the space is stocked with equipment and projects in various stages.

Lubbers writes:

The doorway is marked “The Dungeon.” The first iteration of The Dungeon was in my parents’ basement. When I bought a house, the workshop and the sign moved to my new home.

The door is a requirement when company comes to visit. Once you step inside, you will see why. The organizational plan seems to be a pile for everything, and everything in a pile. Each new project seems to reduce the amount of available floor space.


Lubbers’s organization plan is simple: “A pile for everything, and everything in a pile.”

“High-tech computing” is accomplished on a PDP-11/23. This boat anchor still runs to heat the room, but my iPod has more computing abilities! My nieces and nephews don’t really believe in 8” disks, but I have proof.

The desk (messy of course) holds a laptop computer and a ham radio transceiver. Several of my Circuit Cellar projects have been related to amateur radio. A short list of my ham projects includes a CW keyer, an antenna controller, and a PSK-31 (digital communications) interface.


Is there a desk under there?

My workbench has a bit of clear space for my latest project, and fragments of previous projects are in attendance. The skull in the back right is wearing the prototype for my Honorable Mention in the Texas Instruments Design Stellaris 2010 contest. It’s a hands-free USB mouse. The red tube was the fourth-place winner in the microMedic 2013 National Contest.

Front and center is the prototype for my March 2014 Circuit Cellar article on robotics. Test equipment is a mix of old and new. Most of the newer equipment has been funded by past Circuit Cellar contests and articles.


“My wife allows my Hero Jr. robot to visit the living room. He is housebroken after all,” Lubbers says.

The closet is a “graveyard” for all of the contest kits I have received, models I would like to build, and other contraptions the wife doesn’t allow to invade the rest of the house. (She is pretty considerate because you will find my Hero Jr. robot in the living room.)

At one time, The Dungeon served as my home office. For about five years I had the ideal “down the hall” commute. A stocked lab helped justify my ability to work from home.

When management pulled the plug on working remotely, the lab got put to work developing about a dozen projects for Circuit Cellar contests. There has been a dry spell since my last contest entry, so these days I am helping develop the software for the ham radio Satellite FOX-1. My little “CubeSat” will operate as a ham radio transponder and a platform for university experiments when it launches in late 2014. Since I will probably never go to space myself, the next best thing is launching my code into orbit. It’s a good thing that FOX-1 is smaller than a basketball. If it was bigger, it might not fit on my workbench!

Lubbers’s article about building a swarm of robots will appear in Circuit Cellar’s March issue. To learn more about Lubbers, read our 2013 interview.

Q&A: Hacker, Roboticist, and Website Host

Dean “Dino” Segovis is a self-taught hardware hacker and maker from Pinehurst, NC. In 2011, he developed the Hack A Week website, where he challenges himself to create and post weekly DIY projects. Dino and I recently talked about some of his favorite projects and products. —Nan Price, Associate Editor

 

NAN: You have been posting a weekly project on your website, Hack A Week, for almost three years. Why did you decide to create the website?

Dean "Dino" Segovis at his workbench

Dean “Dino” Segovis at his workbench

DINO: One day on the Hack A Day website I saw a post that caught my attention. It was seeking a person to fill a potential position as a weekly project builder and video blogger. It was offering a salary of $35,000 a year, which was pretty slim considering you had to live in Santa Monica, CA. I thought, “I could do that, but not for $35,000 a year.”

That day I decided I was going to challenge myself to come up with a project and video each week and see if I could do it for at least one year. I came up with a simple domain name, www.hackaweek.com, bought it, and put up a website within 24 h.

My first project was a 555 timer-based project that I posted on April 1, 2011, on my YouTube channel, “Hack A Week TV.” I made it through the first year and just kept going. I currently have more than 3.2 million video views and more than 19,000 subscribers from all over the world.

NAN: Hack A Week features quite a few robotics projects. How are the robots built? Do you have a favorite?

Dino’s very first toy robot hack was the Rumble robot. The robot featured an Arduino that sent PWM to the on-board H-bridge in the toy to control the motors for tank steering. A single PING))) sensor helped with navigation.

The Rumble robot

DINO: I usually use an Arduino as the robot’s controller and Roomba gear motors for locomotion. I have built a few others based on existing wheeled motorized toys and I’ve made a few with the Parallax Propeller chip.

My “go-to” sensor is usually the Parallax PING))) ultrasonic sensor. It’s easy to connect and work with and the code is straightforward. I also use bump sensors, which are just simple contact switches, because they mimic the way some insects navigate.

Nature is a great designer and much can be learned from observing it. I like to keep my engineering simple because it’s robust and easy to repair. The more you complicate a design, the more it can do. But it also becomes more likely that something will fail. Failure is not a bad thing if it leads to a better design that overcomes the failure. Good design is a balance of these things. This is why I leave my failures and mistakes in my videos to show how I arrive at the end result through some trial and error.

My favorite robot would be “Photon: The Video and Photo Robot” that I built for the 2013 North Carolina Maker Faire. It’s my masterpiece robot…so far.

NAN: Tell us a little more about Photon. Did you encounter any challenges while developing the robot?

Photon awaits with cameras rolling, ready to go forth and record images.

DINO: The idea for Photon first came to me in February 2013. I had been playing with the Emic 2 text-to-speech module from Parallax and I thought it would be fun to use it to give a robot speech capability. From there the idea grew to include cameras that would record and stream to the Internet what the robot saw and then give the robot the ability to navigate through the crowd at Maker Faire.

I got a late start on the project and ended up burning the midnight oil to get it finished in time. One of the bigger challenges was in designing a motorized base that would reliably move Photon across a cement floor.

The problem was in dealing with elevation changes on the floor covering. What if Photon encountered a rug or an extension cord?

I wanted to drive it with two gear motors salvaged from a Roomba 4000 vacuum robot to enable tank-style steering. A large round base with a caster at the front and rear worked well, but it could only tolerate a small change in surface elevation. I ended up using that design and made sure the robot stayed away from anything that might get it in trouble.

The next challenge was giving Photon some sensors so it could navigate and stay away from obstacles. I used one PING))) sensor mounted on its head and turned the entire torso into a four-zone bump sensor, along with a ring around the base. The ring pushed on a series of 42 momentary contact switches connected together in four zones. All these sensors were connected to an Arduino running some simple code that turned Photon away from obstacles it encountered. Power was supplied by a motorcycle battery mounted on the base inside the torso.
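
The decision logic Dino describes is compact enough to sketch. The following is a minimal, illustrative version in Python rather than the Arduino code the robot actually ran; the zone bitmask layout and the 40-cm threshold are assumptions for illustration, not values from the project.

```python
def steer(distance_cm, bump_zones):
    """Pick a drive command from one sensor snapshot.

    bump_zones is a 4-bit mask for the four torso/ring zones
    (front-left, front-right, rear-left, rear-right); distance_cm is
    the PING))) range straight ahead. The masks and the threshold are
    illustrative guesses, not Photon's actual values.
    """
    FRONT_LEFT, FRONT_RIGHT = 0b0001, 0b0010
    if bump_zones & FRONT_LEFT:
        return "spin-right"        # contact on the left: rotate away from it
    if bump_zones & FRONT_RIGHT:
        return "spin-left"
    if distance_cm < 40:           # sonar sees an obstacle closing in
        return "spin-right"
    return "forward"

# No contact, but something 25 cm ahead: turn away before touching it
assert steer(25, 0b0000) == "spin-right"
```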

The head held two video cameras, two smartphones in camera mode, and one GoPro camera. One video camera and the GoPro were recording in HD; the other video camera was recording in time-lapse mode. The two smartphones streamed live video, one via 4G to a Ustream channel and the other via Wi-Fi. The Ustream worked great, but the Wi-Fi failed due to interference.

Photon’s voice came from the Emic 2 connected to another Arduino that sent it lines of text to speak. The audio was amplified by a small 0.5-W LM386 amplifier driving a 4” speaker. An array of blue LEDs mounted on the head lit up when Photon spoke, their brightness modulated by the audio signal. The speech was just a lot of lines of text running in a timed loop.
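
The Emic 2’s serial protocol is pleasantly simple: the module prints a “:” prompt when it is ready, and an “S” command followed by text and a newline makes it speak. Photon drove it from an Arduino, but the same exchange from a PC or Raspberry Pi might look like this minimal pyserial sketch; the port name is an assumption.

```python
import serial

def emic2_say(port, text):
    """Speak one line of text through a Parallax Emic 2 (9600 baud, 8N1)."""
    with serial.Serial(port, 9600, timeout=10) as emic:
        emic.write(b"\n")                # nudge the module; it answers ':' when ready
        emic.read_until(b":")
        emic.write(b"S" + text.encode("ascii") + b"\n")  # 'S' = speak command
        emic.read_until(b":")            # ':' returns once speech has finished

emic2_say("/dev/ttyUSB0", "Are we there yet?")  # port name is hypothetical
```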

Photon’s brain includes two Arduinos and an LM386 0.5-W audio amplifier with a sound-to-voltage circuit added to drive the mouth LED array. Photon’s voice comes from a Parallax Emic 2 text-to-speech module.

Connecting all of these things together was very challenging. Each component needed a regulated power supply, which I built using LM317T voltage regulators. The entire current draw with motors running was about 1.5 A. The battery lasted about 1.5 h before needing a recharge. I had an extra battery so I could just swap them out during the quick charge cycle and keep downtime to a minimum.
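
For reference, an LM317’s output is set by two resistors around its 1.25-V internal reference:

$$V_{\text{out}} = 1.25\,\mathrm{V}\left(1 + \frac{R_2}{R_1}\right) + I_{\mathrm{ADJ}}R_2$$

The adjust-pin current is typically around 50 µA, so the last term is usually negligible; with R1 = 240 Ω and R2 = 720 Ω, for example, the regulator delivers about 5 V.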

I finished the robot around 11:00 PM the night before the event. It was a hit! The videos Photon recorded are fascinating to watch. The look of wonder on people’s faces, the kids jumping up to see themselves in the monitors, the smiles, and the interaction are all very interesting.

NAN: Many of your Hack A Week projects include Parallax products. Why Parallax?

DINO: Parallax is a great electronics company that caters to the DIY hobbyist. It has a large knowledge base on its website as well as a great forum with lots of people willing to help and share their projects.

About a year ago Parallax approached me with an offer to supply me with a product in exchange for featuring it in my video projects on Hack A Week. Since I already used and liked the product, it was a perfect offer. I’ll be posting more Parallax-based projects throughout the year and showcasing a few of them on the ELEV-8 quadcopter as a test platform.

NAN: Let’s change topics. You built an Electronic Fuel Injector Tester, which is featured on HomemadeTools.net. Can you explain how the 555 timer chips are used in the tester?

DINO: 555 timers are great! They can be used in so many projects in so many ways. They’re easy to understand and use and require only a minimum of external components to operate and configure.

The 555 can run in two basic modes: monostable and astable.

Dino keeps this fuel injector tester in his tool box at work. He’s a European auto technician by day.

An astable circuit produces a square wave. This is a digital waveform with sharp transitions between low (0 V) and high (+V). The durations of the low and high states may be different. The circuit is called astable because it is not stable in any state: the output is continually changing between “low” and “high.”

A monostable circuit produces a single output pulse when triggered. It is called a monostable because it is stable in just one state: “output low.” The “output high” state is temporary.
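
Both modes follow standard timing relations. With the usual two-resistor, one-capacitor configuration:

$$t_{\text{pulse}} = 1.1\,R\,C \quad\text{(monostable)}$$

$$t_{\text{high}} = 0.693\,(R_1 + R_2)\,C,\qquad t_{\text{low}} = 0.693\,R_2\,C,\qquad f \approx \frac{1.44}{(R_1 + 2R_2)\,C} \quad\text{(astable)}$$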

The injector tester, which is a monostable circuit, is triggered by pressing the momentary contact switch. The single output pulse turns on an astable circuit that outputs a square-wave pulse train routed to an N-channel MOSFET. The MOSFET switches on and off, applying 12 V to the injector. A flyback diode protects the MOSFET from the electrical pulse that comes from the injector coil when the power is turned off and the field collapses. It’s a simple circuit that can drive any injector up to 5 A.

This is a homebrew PCB for Dino’s fuel injector tester. Two 555s drive a MOSFET that switches the injector.

NAN: You’ve been “DIYing” for quite some time. How and when did your interest begin?

DINO: It all started in 1973 when I was 13 years old. I used to watch a TV show on PBS called ZOOM, which was produced by WGBH in Boston. Each week they had a DIY project they called a “Zoom-Do,” and one week the project was a crystal radio. I ordered the Zoom-Do instruction card and set out to build one. I got everything put together but it didn’t work! I checked and rechecked everything, but it just wouldn’t work.

I later realized why. The instructions said to use a “cat’s whisker,” which I later found out was a thin piece of wire. I used a real cat’s whisker clipped from my cat! Anyway, that project sparked something inside me (pun intended). I was hooked! I started going house to house asking people if they had any broken or unwanted radios or TVs I could have so I could learn about electronics, and I got tons of free stuff to mess with.

My mom and dad were pretty cool about letting me experiment with it all. I was taking apart TV sets, radios, and tape recorders in my room and actually fixing a few of them. I was in love with electronics. I had an intuition for understanding it. I eventually found some ham radio guys who were great mentors and I learned a lot of good basic electronics from them.

NAN: Is there a particular electronics engineer, programmer, or designer who has inspired the work you do today?

DINO: Forrest Mims was a great inspiration in my early 20s. I got a big boost from his “Engineer’s Notebooks.” The simple way he explained things and his use of graph paper to draw circuit designs really made learning about electronics easy and fun. I still use graph paper to draw my schematics during the design phase and for planning when building a prototype on perf board. I’m not interested in any of the software schematic programs because most of my projects are simple and easy to draw. I like my pencil-and-paper approach.

NAN: What was the last electronics-design related product you purchased and what type of project did you use it with?

DINO: An Arduino Uno. I used two of these in the Photon robot.

NAN: What new technologies excite you and why?

DINO: Organic light-emitting diodes (OLEDs). They’ll totally change the way we manufacture and use digital displays.

I envision a day when you can go buy your big-screen TV that you’ll bring home in a cardboard tube, unroll it, and place it on the wall. The processor and power supply will reside on the floor, out of the way, and a single cable will go to the panel. The power consumption will be a fraction of today’s LCD or plasma displays and they’ll be featherweight by comparison. They’ll be used to display advertising on curved surfaces anywhere you like. Cell phone displays will be curved and flexible.

How about a panoramic set of virtual reality goggles or a curved display in a flight simulator? Once the technology gets out of the “early adopter” phase, prices will come down and you’ll own that huge TV for a fraction of what you pay now. One day we might even go to a movie and view it on a super-huge OLED panorama screen.

NAN: Final question. If you had a full year and a good budget to work on any design project you wanted, what would you build?

DINO: There’s a project I’ve wanted to build for some time now: A flight simulator based on the one used in Google Earth. I would use a PC to run the simulator and build a full-on seat-inside enclosure with all the controls you would have in a jet airplane. There are a lot of keyboard shortcuts for a Google flight simulator that could be triggered by switches connected to various controls (e.g., rudder pedals, flaps, landing gear, trim tabs, throttle, etc.). I would use the Arduino Leonardo as the controller for the peripheral switches because it can emulate a USB keyboard. Just program it, plug it into a USB port along with a joystick, build a multi-panel display (or use that OLED display I dream of), and go fly!

Google Earth’s flight simulator also lets you fly over the surface of Mars! Not only would this be fun to build and fly, it would also be a great educational tool. It’s definitely on the Hack A Week project list!

Editor’s Note: This article also appears in Circuit Cellar’s upcoming March issue, which focuses on robotics. The March issue will soon be available for membership download or single-issue purchase.

 

A Visit to the World Maker Faire in New York

If you missed the World Maker Faire in New York City, you can pick up Circuit Cellar’s February issue for highlights of the innovative projects and hackers represented there. Veteran electronics DIYer and magazine columnist Jeff Bachiochi is the perfect guide.

“The World Maker Faire is part science fair and part country fair,” Bachiochi says. “Makers are DIYers. The maker movement empowers everyone to build, repair, remake, hack, and adapt all things. The Maker Faire shares the experiences of makers who have been involved in this important process… Social media keeps us in constant contact and can educate, but it can’t replace the feeling you can get from hands-on live interaction with people and the things they have created.

Photo 1: This pole-climbing robot is easy to deploy at a moment’s notice. There is no need for a ladder to get emergency communication antennas up high where they can be most effective.

“It should be noted that not all Maker Faire exhibitors are directly involved with technology. Some non-technological projects on display included the ‘Art Car’ from Pittsburgh, which is an annual revival of an old clunker turned into a drivable art show on wheels. There was also the life-size ‘Mouse Trap’ game, which was quite the contraption and just plain fun, especially if you grew up playing the original game.”

Bachiochi’s article introduces you to a wide variety of innovators, hackers, and hackerspaces.

“The 721st Mechanized Contest Battalion (MCB) is an amateur radio club from Warren County, NJ, that combines amateur (ham) radio with electronics, engineering, mechanics, building, and making,” Bachiochi says. “The club came to the Maker Faire to demonstrate its Emergency Antenna Platform System (E-APS) robot. The robot, which is designed for First Responder Organizations, will turn any parking lot lamppost into an instant antenna tower (see Photo 1).”

The keen and growing interest in 3-D printing as a design tool was evident at the Maker Faire.

“Working by day as an analog/mixed-signal IC design engineer for Cortina Systems in Canada, Andrew Plumb needed a distraction. In the evenings, Plumb uses a MakerBot 3-D printer to create 3-D designs of plastic, like thousands of others experimenting with 3-D printing,” Bachiochi says. “Plumb was not satisfied with simply printing plastic widgets. In fact, he showed me a few of his projects, which include printing plastic onto paper and cloth (see Photo 2).”

Photo 2: Andrew Plumb showed me some unique ideas he was experimenting with using one of his 3-D printers. By printing the structural frame directly on tissue paper, ultra-light parts are practically ready to fly.

Also in the 3-D arena, Bachiochi encountered some innovative new products.

“It was just a matter of time until someone introduced a personal scanner to create digital files of 3-D objects. The MakerBot Digitizer Desktop 3-D Scanner is the first I’ve seen (see Photo 3),” Bachiochi says. “It uses a laser, a turntable, and a CMOS camera to pick off 3-D points and output a STL file. The scanner will create a 3-D image from an object up to 8″ in height and width. There is no third axis scanning, so you must plan your model’s orientation to achieve the best results. Priced less than most 3-D printers, this will be a hot item for 3-D printing enthusiasts.”

Bachiochi’s article includes a lengthy section about “other interesting stuff” and people at the Maker Faire, including the Public Laboratory for Open Technology and Science (Public Lab), a community that uses inexpensive DIY techniques to investigate environmental concerns.

Photo 3: The MakerBot Digitizer Desktop 3-D Scanner is the first production scanner I’ve seen that will directly provide files compatible with the 3-D printing process. This is a long-awaited addition to MakerBot’s line of 3-D printers. (Photo credit: Spencer Higgins)

“For instance, the New York chapter featured two spectrometers, a you-fold-it cardboard version and a near-infrared USB camera-based kit,” Bachiochi says. “This community of educators, technologists, scientists, and community organizers believes they can promote action, intervention, and awareness through a participatory research model in which you can play a part.”

At this family-friendly event, Bachiochi met a family that “creates” together.

“Asheville, NC-based Beatty Robotics is not your average robotics company,” Bachiochi says. “The Beatty team is a family that likes to share fun robotic projects with friends, family, and other roboticists around the world. The team consists of Dad (Robert) and daughters Camille ‘Lunamoth’ and Genevieve ‘Julajay.’ The girls have been mentored in electronics, software programming, and workshop machining. They do some unbelievable work (see Photo 4). Everyone has a hand in designing, building, and programming their fleet of robots. The Hall of Science is home to one of their robots, the Mars Rover.”

There is much more in Bachiochi’s five-page look at the Maker Faire, including resources for finding and participating in a hackerspace community. The February issue including Bachiochi’s articles is available for membership download or single-issue purchase.

Photo 4: Beatty Robotics is a family of makers that produces some incredible models. Young Camille Beatty handles the soldering, but is also well-versed in machining and other areas of expertise.

Q&A: Andrew Godbehere, Imaginative Engineering

Engineers are inherently imaginative. I recently spoke with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley, about how his ideas become realities, his design process, and his dream project. —Nan Price, Associate Editor

Andrew Godbehere

NAN: You are currently working toward your Electrical Engineering PhD at the University of California, Berkeley. Can you describe any of the electronics projects you’ve worked on?

ANDREW: In my final project at Cornell University, I worked with a friend of mine, Nathan Ward, to make wearable wireless accelerometers and find some way to translate a dancer’s movement into music, in a project we called CUMotive. The computational core was an Atmel ATmega644V connected to an Atmel AT86RF230 802.15.4 wireless transceiver. We designed the PCBs, including the transmission line to feed the ceramic chip antenna. Everything was hand-soldered, though I recommend using an oven instead. We used Kionix KXP74 tri-axis accelerometers, which we encased in a lot of hot glue to create easy-to-handle boards and to shield them from static.

This is the central control belt-pack to be worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

The dancer had four accelerometers connected to a belt pack with an Atmel chip and transceiver. On the receiver side, a musical instrument digital interface (MIDI) communicated with a synthesizer. (Design details are available at http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2007/njw23_abg34/index.htm.)
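
The motion-to-music mapping on the receiver side boils down to turning acceleration into MIDI note-on messages. Here is a hedged sketch of one such mapping in Python using the mido library; the threshold, note scale, and velocity math are invented for illustration and are not CUMotive’s actual mapping.

```python
import math
import mido  # needs a backend such as python-rtmidi installed

def accel_to_note(ax, ay, az, threshold=1.5):
    """Map one accelerometer sample (in g) to a MIDI note-on, or None.

    The threshold, scale, and velocity formula are illustrative guesses,
    not the mapping CUMotive actually used.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude < threshold:
        return None                    # too gentle a move: stay silent
    pentatonic = [60, 62, 65, 67, 70]  # harder moves pick higher notes
    idx = min(int((magnitude - threshold) * 2), len(pentatonic) - 1)
    velocity = min(int(magnitude * 30), 127)
    return mido.Message("note_on", note=pentatonic[idx], velocity=velocity)

out = mido.open_output()               # default MIDI output port
msg = accel_to_note(0.1, 2.2, 0.3)
if msg:
    out.send(msg)
```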

I was excited about designing PCBs for 802.15.4 radios and making them work. I was also enthusiastic about trying to figure out how to make some sort of music with the product. We programmed several possibilities, one of which was a sort of theremin; another was a sort of drum kit. I found that this was the even more difficult part—not just the making, but the making sense.

When I got to Berkeley, my work switched to the theoretical. I tried to learn everything I could about robotic systems and how to make sense of them and their movements.

NAN: Describe the real-time machine vision-tracking algorithm and integrated vision system you developed for the “Are We There Yet?” installation.

ANDREW: I’ve always been interested in using electronics and robotics for art. Having a designated emphasis in New Media on my degree, I was fortunate enough to be invited to help a professor on a fascinating project.

This view of the Yud Gallery is from the installed camera with three visitors present. Note the specular reflections on the floor. They moved throughout the day with the sun. This movement needed to be discerned from a visitor’s typical movement.

For the “Are We There Yet?” installation, we used a PointGrey FireFlyMV camera with a wide-angle lens. The camera was situated a couple hundred feet away from the control computer, so we used a USB-to-Ethernet range extender to communicate with the camera.

We installed a color camera in a gallery in the Contemporary Jewish Museum in San Francisco, CA. We used Meyer Sound speakers with a high-end controller system, which enabled us to “position” sound in the space and to sweep audio tracks around at (the computer’s programmed) will. The Meyer Sound D-Mitri platform was controlled by the computer with Open Sound Control (OSC).

This view of the Yud Gallery is from the perspective of the computer running the analysis. This is a probabilistic view, where the brightness of each pixel represents the “belief” that the pixel is part of an interesting foreground object, such as a pedestrian. Note the hot spots corresponding nicely with the locations of the visitors in the image above.

The hard work was to then program the computer to discern humans from floors, furniture, shadows, sunbeams, and cloud reflections. The gallery had many skylights, which made the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful to determine whether detected color-perturbance was human or not.

Once complete, the experience of the installation was beautiful, enchanting, and maybe a little spooky. The audio tracks were all questions (e.g., “Are we there yet?”) and they were always spoken near you, as if addressed to you. They responded to your movement in a way that felt to me like dancing with a ghost. You can watch videos about the installation at www.are-we-there-yet.org.

The “Are We There Yet?” project opens itself up to possible use as an embedded system. I’ve been told that the software I wrote works on iOS devices by the start-up company Romo (www.kickstarter.com/projects/peterseid/romo-the-smartphone-robot-for-everyone), which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud-tracking system using a Raspberry Pi and a reasonable webcam.
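
For anyone tempted by that suggestion, OpenCV’s stock background subtractors make a workable starting point. The sketch below is a far simpler stand-in for the lighting-adaptive tracker Godbehere describes, but it will find and box moving people from a webcam on a Raspberry Pi or PC:

```python
import cv2

# MOG2 is a stock OpenCV background model, a simpler stand-in for the
# adaptive tracker described above, but enough to find moving people.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
cap = cv2.VideoCapture(0)                     # first attached webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # per-pixel foreground "belief"
    # MOG2 marks shadows as gray (127); thresholding keeps solid foreground
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 API
    for c in contours:
        if cv2.contourArea(c) > 500:          # ignore specks and flicker
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```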

I may create an automatic cloud-tracking system to watch clouds. I think computers could be capable of this capacity for abstraction, even though we think of the leisurely pastime as the mark of a dreamer.

NAN: Some of the projects you’ve contributed to focus on switched linear systems, hybrid systems, wearable interfaces, and computation and control. Tell us about the projects and your research process.

ANDREW: I think my research is all driven by imagination. I try to imagine a world that could be, a world that I think would be nice, or better, or important. Once I have an idea that captivates my imagination in this way, I have no choice but to try to realize the idea and to seek out the knowledge necessary to do so.

For the wearable wireless accelerometers, it began with the thought: Wouldn’t it be cool if dance and music were inherently connected the way we try to make it seem when we’re dancing? From that thought, the designs started. I thought: The project has to be wireless and low power, it needs accelerometers to measure movement, it needs a reasonable processor to handle the data, it needs MIDI output, and so forth.

My switched linear systems research came about in a different way. As I was in class learning about theories regarding stabilization of hybrid systems, I thought: Why would we do it this complicated way, when I have this reasonably simple intuition that seems to solve the problem? I happened to see the problem a different way as my intuition was trying to grapple with a new concept. That naive accident ended up as a publication, “Stabilization of Planar Switched Linear Systems Using Polar Coordinates,” which I presented in 2010 at Hybrid Systems: Computation and Control (HSCC) in Stockholm, Sweden.

NAN: How did you become interested in electronics?

ANDREW: I always thought things that moved seemingly of their own volition were cool and inherently attention-grabbing. I would think: Did it really just do that? How is that possible?

Andrew worked on this project when computers still had parallel ports. a—This photo shows manually etched PCB traces for a digital EKG (the attempted EEG) with 8-bit LED optoisolation. The rainbow cable connects to a computer’s parallel port. The interface code was written in C++ and ran on DOS. b—The EKG circuitry and digitizer are shown on the left. The 8-bit parallel computer interface is on the right. Connecting the two boards is an array of coupled LEDs and phototransistors, encased in heat shrink tubing to shield against outside light.

Electric rally-car tracks and radio-controlled cars were a favorite of mine. I hadn’t really thought about working with electronics or computers until middle school. Before that, I was all about paleontology. Then, I saw an episode of Scientific American Frontiers, which featured Alan Alda excitedly interviewing RoboCup contestants. Watching RoboCup [a soccer game involving robotic players], I was absolutely enchanted.

While my childhood electronic toys moved and somehow acted as their own entities, they were puppets to my intentions. Watching RoboCup, I knew these robots were somehow making their own decisions on-the-fly, magically making beautiful passes and goals not as puppets, but as something more majestic. I didn’t know about the technical blood, sweat, and tears that went into it all, so I could have these romantic fantasies of what it was, but I was hooked from that moment.

That spurred me to apply to a specialized science and engineering high school program. It was there that I was fortunate enough to attend a fabulous electronics class (taught by David Peins), where I learned the basics of electronics, the joy of tinkering, and even PCB design and assembly (drilling included). I loved everything involved. Even before I became academically invested in the field, I fell in love with the manual craft of making a circuit.

NAN: Tell us about your first design.

ANDREW: Once I’d learned something about designing and making circuits, I jumped in whole-hog, to a comical degree. My very first project without any course direction was an electroencephalograph!

I wanted to make stuff move on my computer with my brain, the obvious first step. I started with a rough design and worked on tweaking parameters and finding components.

In retrospect, I think that first attempt was actually an electromyograph that read the movements of my eye muscles. And it definitely was an electrocardiograph. Success!

Someone suggested that it might not be a good idea to have a power supply hooked up in any reasonably direct path with your brain. So, in my second attempt, I digitized the signal on the brain side and hooked it up to eight white LEDs. On the other side, I had eight phototransistors coupled with the LEDs and covered with heat-shrink tubing to keep out outside light. That part worked, and I was excited about it, even though I was having some trouble properly tuning the op-amps in that version.

NAN: Describe your “dream project.”

ANDREW: Augmented reality goggles. I’m dead serious about that, too. If given enough time and money, I would start making them.

I would use some emerging organic light-emitting diode (OLED) technology. I’m eyeing the start-up MicroOLED (www.microoled.net) for its low-power “near-to-eye” display technologies. They aren’t available yet, but I’m hopeful they will be soon. I’d probably hook that up to a Raspberry Pi SBC, which is small enough to be worn reasonably comfortably.

Small, high-resolution cameras have proliferated with modern cell phones, which could easily be mounted into the sides of goggles, driving each OLED display independently. Then, it’s just a matter of creativity for how to use your newfound vision! The OpenCV computer vision library offers a great starting point for applications such as face detection, image segmentation, and tracking.
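
As a taste of that starting point, here is a minimal face-detection sketch using the Haar cascade that ships with the opencv-python package. It is purely illustrative, not anything from Godbehere’s projects; the placeholder label marks where the goggles would composite a real name lookup.

```python
import cv2

# Frontal-face Haar cascade bundled with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # A real overlay would composite a looked-up name here
        cv2.putText(frame, "name", (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("goggles", frame)
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```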

Google Glass is starting to get some notice as a sort of “heads-up” display, but in my opinion, it doesn’t go nearly far enough. Here’s the craziest part—please bear with me—I’m willing to give up directly viewing the world with my natural eyes; I would be willing to have full field-of-vision goggles with high-resolution OLED displays with stereoscopic views from two high-resolution smartphone-style cameras. (At least until the technology gets better, as described in Rainbows End by Vernor Vinge.) I think, for this version, all the components are just now becoming available.

Augmented reality goggles would do a number of things for vision and human-computer interaction (HCI). First, 3-D overlays in the real world would be possible.

Crude example: I’m really terrible with faces and names, but computers are now great with that, so why not get a little help and overlay nametags on people when I want? Another fascinating thing for me is that this concept of vision abstracts the body from the eyes. So, you could theoretically connect to the feed from any stereoscopic cameras around (e.g., on an airplane, in the Grand Canyon, or on the back of some wild animal), or you could even switch points of view with your friend!

Perhaps reality goggles are not commercially viable now, but I would unabashedly use them for myself. I dream about them, so why not make them?

Member Profile: Walter O. Krawec

Walter O. Krawec

LOCATION:
Upstate New York

OCCUPATION:
Research Assistant and PhD Student, Stevens Institute of Technology

MEMBER STATUS:
Walter has been reading Circuit Cellar since he got his first issue in 1999. Free copies were available at the Trinity College Fire Fighting Robot Contest, which was his first experience with robotics. Circuit Cellar was the first magazine for which he wrote an article (“An HC11 File Manager,” two-part series, issues 129 and 130, 2001).

TECH INTERESTS:
Robotics, among other things. He is particularly interested in developmental and evolutionary robotics (where the robot’s strategies, controllers, and so forth are evolved instead of programmed in directly).

RECENT TECH ACQUISITION:
Walter is enjoying his Raspberry Pi. “What a remarkable product! I think it’s great that I can take my AI software, which I’ve been writing on a PC, copy it to the Raspberry Pi, compile it with GCC, then off it goes with little or no modification!”

CURRENT PROJECTS:
Walter is designing a new programming language and interpreter (for Windows/Mac/Linux, including the Raspberry Pi) that uses a simulated quantum computer to drive a robot. “What better way to learn the basics of quantum computing than by building a robot around one?” The first version of this language is available on his website (walterkrawec.org). He has plans to release an improved version.

THOUGHTS ON EMBEDDED TECH:
Walter said he is amazed with the power of the latest embedded technology, for example the Raspberry Pi. “For less than $40 you have a perfect controller for a robot that can handle incredibly complex programs. Slap on one of those USB battery packs and you have a fully mobile robot,” he said. He used a Pololu Maestro to interface the motors and analog sensors. “It all works and it does everything I need.” However, he added, “If you want to build any of this yourself by hand it can be much harder, especially since most of the cool stuff is surface mount, making it difficult to get started.”
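
The Maestro interfacing Walter mentions is refreshingly simple: over the board’s USB serial port, Pololu’s documented “compact protocol” sets a servo or motor channel with a four-byte command. A minimal sketch follows; the port name is an assumption.

```python
import serial

def maestro_set_target(port, channel, microseconds):
    """Set one Pololu Maestro channel via the compact protocol.

    Targets are sent in quarter-microseconds: 0x84, the channel number,
    then the 14-bit target split into two 7-bit bytes, low byte first.
    """
    target = int(microseconds * 4)
    cmd = bytes([0x84, channel, target & 0x7F, (target >> 7) & 0x7F])
    with serial.Serial(port, 9600) as maestro:
        maestro.write(cmd)

maestro_set_target("/dev/ttyACM0", 0, 1500)   # center the servo on channel 0
```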

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Because such sensors can share the common I2C protocol, adding chains of them would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even programming of robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
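
As a concrete sketch of that server-side layer, here is a minimal Flask version (Python being one of the languages the essay suggests). The drive() helper is a hypothetical stand-in for the PID/odometry code that would actually talk to the motors:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def drive(left, right):
    """Hypothetical stand-in for the PID/odometry layer driving the motors."""
    print(f"motor command: left={left:+.1f} right={right:+.1f}")

# Each high-level move a student sees is just a URL the AJAX front end hits
MOVES = {
    "forward": (1.0, 1.0),
    "left":    (-0.5, 0.5),
    "right":   (0.5, -0.5),
    "stop":    (0.0, 0.0),
}

@app.route("/move/<name>")
def move(name):
    if name not in MOVES:
        return jsonify(error="unknown move"), 404
    drive(*MOVES[name])
    return jsonify(ok=True, move=name)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)    # reachable from any browser on the LAN
```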

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize the user interface. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads up to 1 A each at up to 24 V.

The RIO includes a 32-bit ARM Cortex-M4 microcontroller that processes and buffers the I/O and creates seamless communication with the Raspberry Pi. The RIO processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, the RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with an RS-232 transceiver, and a CAN bus interface.

The RIO is available in two versions. The RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.
www.roboteq.com

Electrical Engineering and Artistic Expression

I think we’re on the verge of the next artistic renaissance. This time, instead of magnificent architecture, beautifully painted portraits, and the rise of humanism, I think engineering (specifically electrical engineering) will begin to define exciting new forms of artistic expression.

—Cornell University graduate and electrical engineer Jeremy Blum, in a 2011 blog post

Regular Circuit Cellar readers will recognize Jeremy Blum as our November issue interview subject. Blum’s post sums up a philosophy that seems to be shared by some other recent EE graduates or aspiring electrical engineers. They view their work as art, or at least they like to occasionally work in art.

For example, Circuit Cellar’s January issue will feature an interview with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley. He has intertwined engineering and art more than once.

This is the central control belt pack worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

When he was a Cornell student, he collaborated with Nathan Ward on a final project to translate a dancer’s movement into music. They created a central control belt pack for the dancer, which connected to four wearable wireless accelerometers to measure the dancer’s movements. Inside the belt pack, an ATmega644V connected to an Atmel AT86RF230 wireless transceiver interfaced with a musical instrument digital interface (MIDI) and synthesizer.

When Godbehere graduated from Cornell and headed to UC Berkeley, his focus shifted to theoretical topics and robotic systems. But he jumped at a professor’s invitation to become involved in the “Are We There Yet?” art installation in 2011 at the Contemporary Jewish Museum in San Francisco.

During the four-month exhibit, visitors entered a nearly empty gallery to encounter recorded questions emanating from numerous floor speakers. A camera followed each visitor’s moves and robotic algorithms enabled it to determine which floor speaker to activate. The questions heard could range from “What Is My Purpose?” to “What’s Up Doc?”

How a visitor moved through the interactive installation triggered the combination of questions he or she heard.

Video documentary of “Are We There Yet?” 

Godbehere was the computer vision system engineer working with artists Gil Gershoni and Ken Goldberg, who is also a robotics and new media professor at UC Berkeley.

“We installed a color camera in a beautiful gallery in the Contemporary Jewish Museum… and a set of speakers with a high-end controller system from Meyer Sound that enabled us to ‘position’ sound in the space and to sweep audio tracks around at (the computer’s programmed) will,” Godbehere says. “The Meyer Sound System is the D-Mitri control system, controlled by the computer with Open Sound Control (OSC).

“The hard work was then to program the computer to discern humans from floors, furniture, shadows, sunbeams, and reflections of clouds. The gallery had many skylights, making the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful in determining if detected color-perturbance was human or not.”

Behind the technology of “Are We There Yet?”

Can such art also have “practical” consumer applications? Godbehere says there are elements that can be used as an embedded system.

“I’ve been told that the software I wrote works on iOS devices by the startup company Romo, which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud tracking system using a Raspberry Pi and a reasonable webcam.”

If you’re interested in learning more about Godbehere’s engineering and artistic work, be sure to check out the January issue of Circuit Cellar.

And if you have an opinion on electrical engineering and art, please post your comments below.

MIT’s Self-Assembling Robots

Calling it a low-tech solution to a high-tech challenge, MIT researchers have received a lot of attention recently for their modular system of self-assembling robot cubes. The video of the so-called M-Blocks in action, which MIT posted earlier this month on YouTube, has also become high profile. A recent tally has the video at nearly 1.5 million views and counting.

 

The text accompanying the video explains how the cubes are able to move around and climb over each other, jump into the air, and roll across surfaces as they connect in a variety of configurations. And they do all this without any external moving parts. Instead, each M-Block contains a flywheel that can reach speeds of 20,000 rpm. When the flywheel brakes, it imparts angular momentum to the cube. Precisely placed magnets on every face and edge of each M-Block enable any two cubes to attach to each other.
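
The mechanism is an application of conservation of angular momentum. As a back-of-the-envelope sketch (the flywheel inertia and braking time here are assumptions, not MIT’s published figures):

$$L = I_f\,\omega_f, \qquad \tau_{\text{brake}} \approx \frac{I_f\,\omega_f}{\Delta t} > m g\,\frac{a}{2}$$

A cube of mass $m$ and edge length $a$ pivots over an edge once the braking torque $\tau_{\text{brake}}$ exceeds the gravitational restoring torque $m g\,\frac{a}{2}$ about that edge. At 20,000 rpm, $\omega_f \approx 2{,}094$ rad/s, so even a small rotor braked within milliseconds can clear that threshold.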

The simple design holds both short- and long-term promise. According to an October 4 article by Larry Hardesty of the MIT News Office, the researchers hope the blocks can someday be miniaturized, perhaps into swarming microbots that self-assemble with a purpose. Even at their current size, further development of the M-Blocks might lead to “armies of mobile cubes” that could help repair bridges and buildings in emergencies, raise scaffolding, reconfigure into heavy equipment or furniture as needed, or head into environments hostile to humans to diagnose and repair problems, the article suggests.

While it may not rise to “cooperative group behavior,”  the ability of one cube to drag another and influence its alignment is impressive. What could 100 or more of these robots accomplish as MIT researchers continue to develop algorithms to control them?

A prototype of the new modular robot, with its interior and flywheel exposed.
(Photo: M. Scott Brauer)

Q&A: Jeremy Blum, Electrical Engineer, Entrepreneur, Author

Jeremy Blum

Jeremy Blum, 23, has always been a self-proclaimed tinkerer. From Legos to 3-D printers, he has enjoyed learning about engineering both in and out of the classroom. A recent Cornell University College of Engineering graduate, Jeremy has written a book, started his own company, and traveled far to teach children about engineering and sustainable design. Jeremy, who lives in San Francisco, CA, is now working on Google’s Project Glass.—Nan Price, Associate Editor

NAN: When did you start working with electronics?

JEREMY: I’ve been tinkering, in some form or another, ever since I figured out how to use my opposable thumbs. Admittedly, it wasn’t electronics from the outset. As with most engineers, I started with Legos. I quickly progressed to woodworking and constructed several pieces of furniture over the course of a few years. It was only around the start of my high school career that I realized the extent to which I could express my creativity with electronics and software. I thrust myself into the (expensive) hobby of computer building and even built an online community around it. I financed my hobby through my two companies, which offered computer repair and video production services. After working exclusively with computer hardware for a few years, I began to dive deeper into analog circuits, robotics, microcontrollers, and more.

NAN: Tell us about some of your early, pre-college projects.

JEREMY: My most complex early project was the novel prosthetic hand I developed in high school. The project was a finalist in the prestigious Intel Science Talent Search. I also did a variety of robotics and custom-computer builds. The summer before starting college, my friends and I built a robot capable of playing “Guitar Hero” with nearly 100% accuracy. That was my first foray into circuit board design and parallel programming. My most ridiculous computer project was a mineral oil-cooled computer. We submerged an entire computer in a fish tank filled with mineral oil (it was actually a lot of baby oil, but they are basically the same thing).

DeepNote Guitar Hero Robot

Mineral Oil-Cooled Computer

NAN: You’re a recent Cornell University College of Engineering graduate. While you were there, you co-founded Cornell’s PopShop. Tell us about the workspace. Can you describe some PopShop projects?

Cornell University’s PopShop

JEREMY: I recently received my Master’s degree in Electrical and Computer Engineering from Cornell University, where I previously received my BS in the same field. During my time at Cornell, my peers and I took it upon ourselves to completely retool the entrepreneurial climate at Cornell. The PopShop, a co-working space that we formed a few steps off Cornell’s main campus, was our primary means of doing this. We wanted to create a collaborative space where students could come to explore their own ideas, learn what other entrepreneurial students were working on, and get involved themselves.

The PopShop is open to all Cornell students. I frequently hosted events there designed to get more students inspired about pursuing their own ideas. Common occurrences included peer office hours, hack-a-thons, speed networking sessions, 3-D printing workshops, and guest talks from seasoned venture capitalists.

Student startups that work (or have worked) out of the PopShop co-working space include clothing companies, financing companies, hardware startups, and more. Some specific companies include Rosie, SPLAT, LibeTech (mine), SUNN (also mine), Bora Wear, Yorango, Party Headphones, and CoVenture.

NAN: Give us a little background information about Cornell University Sustainable Design (CUSD). Why did you start the group? What types of CUSD projects were you involved with?

JEREMY: When I first arrived at Cornell my freshman year, I knew right away that I wanted to join a research lab and a project team (knowing that I learn best in hands-on environments instead of in the classroom). I joined the Cornell Solar Decathlon Team, a very large group of mostly engineers and architects who were building a solar-powered home to enter in the biennial Solar Decathlon competition orchestrated by the Department of Energy.

By the end of my freshman year, I was the youngest team leader in the organization.  After competing in the 2009 decathlon, I took over as chief director of the team and worked with my peers to re-form the organization into Cornell University Sustainable Design (CUSD), with the goal of building a more interdisciplinary team, with far-reaching impacts.

Under my leadership, CUSD built a passive schoolhouse in South Africa (which has received numerous international awards), constructed a sustainable community in Nicaragua, served as the only student group consulted on sustainable design constraints for Cornell’s new Tech Campus in New York City, partnered with nonprofits to build affordable homes in upstate New York, taught workshops in museums and schools, contributed to the design of new sustainable buildings on Cornell’s Ithaca campus, and led a cross-country bus tour to teach engineering and sustainability concepts at K–12 schools across America. The group now comprises students from more than 25 majors, with dozens of advisors and several simultaneous projects. The new team leaders are making it better every day. My current startup, SUNN, spun out of an EPA grant that CUSD won.

NAN: You spent two years working at MakerBot Industries, where you designed electronics for a 3-D printer and a 3-D scanner. Any highlights from working on those projects?

JEREMY: I had a tremendous opportunity to learn and grow while at MakerBot. When I joined, I was one of about two dozen total employees. Though I switched back and forth between consulting and full-time/part-time roles while class was in session, by the time I stopped working with MakerBot (in January 2013), the company had grown to more than 200 people. It was very exciting to be a part of that.

I designed all of the electronics for the original MakerBot Replicator. This constituted a complete redesign from the previous electronics that had been used on the second generation MakerBot 3-D printer. The knowledge I gained from doing this (e.g., PCB design, part sourcing, DFM, etc.) drastically outweighed much of what I had learned in school up to that point. I can’t say much about the 3-D scanner (the MakerBot Digitizer), as it has been announced, but not released (yet).

The last project I worked on before leaving MakerBot was designing the first working prototype of the Digitizer electronics and firmware. These components comprised the demo that was unveiled at SXSW this past April. This was a great opportunity to apply lessons learned from working on the Replicator electronics and find ways in which my personal design process and testing techniques could be improved. I frequently use my MakerBot printers to produce custom mechanical enclosures that complement the open-source electronics projects I’ve released.

NAN: Tell us about your company, Blum Idea Labs. What types of projects are you working on?

JEREMY: Blum Idea Labs is the entity I use to brand all my content and consulting services. I primarily use it as an outlet to facilitate working with educational organizations. For example, the St. Louis Hacker Scouts, the African TAHMO Sensor Workshop, and several other international organizations use a “Blum Idea Labs Arduino curriculum.” Most of my open-source projects, including my tutorials, are licensed via Blum Idea Labs. You can find all of them on my blog (www.jeremyblum.com/blog). I occasionally offer private design consulting through Blum Idea Labs, though I obviously can’t discuss work I do for clients.

NAN: Tell us about the blog you write for element14.

JEREMY: I generally use my personal blog to write about projects that I’ve personally been working on. However, when I want to talk about more general engineering topics (e.g., sustainability and engineering education), I post them on my element14 blog. I have a great working relationship with element14. It has sponsored the production of all my Arduino tutorials and also provided complete parts kits for my book. We cross-promote each other’s content in a mutually beneficial fashion that also ensures the community gets better access to useful engineering content.

NAN: You recently wrote Exploring Arduino: Tools and Techniques for Engineering Wizardry. Do you consider this book introductory or is it written for the more experienced engineer?

JEREMY: As with all the video and written content that I produce on my website and on YouTube, I tried really hard to make this book useful and accessible to both engineering veterans and newbies. The book builds on itself and provides tons of optional excerpts that dive into greater technical detail for those who truly want to grasp the physics and programming concepts behind what I teach in the book. I’ve already had readers ranging from teenagers to senior citizens comment on the applicability of the book to their varying degrees of expertise. The Amazon reviews tell a similar story. I supplemented the book with a lot of free digital content including videos, part descriptions, and open-source code on the book website.

NAN: What can readers expect to learn from the book?

JEREMY: I wrote the book to serve as an engineering introduction and as an idea toolbox for those wanting to dive into concepts in electrical engineering, computer science, and human-computer interaction design. Though Exploring Arduino uses the Arduino as a platform to experiment with these concepts, readers can expect to come away from the book with new skills that can be applied to a variety of platforms, projects, and ideas. This is not a recipe book. The projects readers will undertake throughout the book are designed to teach important concepts in addition to traditional programming syntax and engineering theories.

NAN: I see you’ve spent some time introducing engineering concepts to children and teaching them about sustainable engineering and renewable energy. Tell us about those experiences. Any highlights?

JEREMY: The way I see it, there are two ways in which engineers can make the world a better place: they can design new products and technologies that solve global problems or they can teach others the skills they need to assist in the development of solutions to global problems. I try hard to do both, though the latter enables me to have a greater impact, because I am able to multiply my impact by the number of students I teach. I’ve taught workshops, written curriculums, produced videos, written books, and corresponded directly with thousands of students all around the world with the goal of transferring sufficient knowledge for these students to go out and make a difference.

Here are some highlights from my teaching work:

I taught BlueStamp Engineering, a summer program for high school students in NYC in the summer of 2012. I also guest-lectured at the program in 2011 and 2013.

I co-organized a cross-country bus tour where we taught sustainability concepts to school children across the country.

I was invited to speak at Techkriti 2013 in Kanpur, India. I had the opportunity to meet many students from IIT Kanpur who already followed my videos and used my tutorials to build their own projects.

Blum Idea Labs partnered with the St. Louis Hacker Scouts to construct a curriculum for teaching electronics to the students. Though I wasn’t there in person, I did welcome them all to the program with a personalized video.

Through CUSD, I organized multiple visits to the Brooklyn Children’s Zone, where my team and I taught students about sustainable architecture and engineering.

Again with CUSD, we visited the Intrepid Museum to teach sustainable energy concepts using potato batteries.

NAN: Speaking of promoting engineering to children, what types of technologies do you think will be important in the near future?

JEREMY: I think technologies that make invention more widely accessible are going to be extremely important in the coming years. Cheaper tools, prototyping platforms such as the Arduino and the Raspberry Pi, 3-D printers, laser cutters, and open developer platforms (e.g., Android) are making it easier than ever for any person to become an inventor or an engineer.  Every year, I see younger and younger students learning to use these technologies, which makes me very optimistic about the things we’ll be able to do as a society.

3-D Printed Robotics Innovation: A Low-Cost Solution for Prosthetic Hands

UK-based inventor and roboticist Joel Gibbard used a 3-D printer to design and build a prosthetic robotic hand. He founded the Open Hand Project with the goal of making prosthetic hands readily available to amputees.

 

 NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

JOEL: I was born in Bristol, UK, and grew up in that area. Bristol is a fantastic place for robotics in the UK, so I couldn’t have had a better place to start from. There’s a lot to engage children here, like the highly popular @Bristol science museum. I studied for a degree in Robotics at the University of Plymouth, which encourages a very practical approach to engineering. Right from the first year, we were working with electronics and robotics and writing code.

 NAN: When did you first start working with robotics?

JOEL: The first robots I ever made used the Lego MINDSTORMS NXT robotics kits. I was very lucky because these were just starting to come out when I was about 6 or 7 years old. I think from ages three to 15 every single birthday or Christmas present was a new Lego set. To this day, I still think Lego is the best tool for rapid prototyping in the early stages of an idea.

 NAN: Tell us about your first design/some of your early projects. Do you have any photos or diagrams?

JOEL: The earliest project I remember working on with my father was a full-scale model of the space shuttle, complete with robotic arm and fully motorized launch pad. On the launch pad, it was almost my height. I think my father took having kids as an opportunity to get back into making things. We also made a Saturn V rocket, the Sydney Harbour Bridge, and Concorde. One of my first robots was a Lego Technic creation. It had tracks, a double-barreled gun on one arm, a pincer on the other, and a submarine on the back, just in case. I think I was about eight years old when I made it.

 NAN: You originally developed the Dextrus robotic hand while you were at the University of Plymouth. Why did you design the system? How has its development progressed since the original concept?

Joel keeps an ongoing design sketchbook.

JOEL: I have a sketchbook of around 10 to 20 inventions that are options for the next thing I want to make. This grows faster than it shrinks. One day I was thinking about what to make next, and the thought occurred to me that if I were to lose my hand, I wouldn’t be able to make anything. So it made the most sense to design a hand to have just in case. Once I had that, heaven forbid I ever needed it, I could use it to make a better hand, and so forth, until I had a robot hand as good as a human hand. It sounds ridiculous, but that was enough motivation for me to make the first one.

This is an early version of the Dextrus hand.

After posting the project on YouTube, I received comments from people asking to have the designs to make their own, which wasn’t really possible, since it was such a one-off prototype. But I thought it was a good idea. Why not make an open-source hand? After that, I looked more into prostheses and discovered that this is really necessary and people want it.

NAN: The Dextrus incorporates 3-D printed parts. How does 3-D printing factor into your design? Does it make each hand customizable?

JOEL: 3-D printing is essential to the design. Many of the parts have cavities inside them, which wouldn’t be possible to make using injection molding. One would have to make the parts in two halves and then glue them together, which creates weak points. With 3-D printing, each part is one solid piece with cavities for the tendons to slide through.

Customization is a great area to explore in the future. It’s quite easy to modify things like the length and shape of the fingers while maintaining the functionality of the hand. In the not-too-distant future, I could envisage an amputee 3-D scanning their remaining hand and sending the scan to me. I could then reverse it and match their Dextrus hand (approximately) to the dimensions of their other hand.

The 3-D printed Dextrus hand.

NAN: There are three types of Dextrus robotic hands: the Dextrus, the Dextrus EMG, and the Dextrus Research. Can you describe the differences?

 JOEL: They have the same basic design and components. The Dextrus and Dextrus EMG are exactly the same, but the EMG comes with all of the extras that enable someone to use it as a myoelectric prosthesis. The Dextrus Research has a number of differences that result in a more robust (but more expensive and heavier) hand. It has steel ball bearings instead of nylon bushes and is printed with denser plastic. It also comes with everything you need to use it straight out of the box (e.g., a power supply).

 NAN: You founded the Open Hand Project as a result of your work on the Dextrus robotic hand. Describe the project and its purpose.

JOEL: The aim of the Open Hand Project is to make advanced prosthetic hands more accessible to amputees. It has the potential to revolutionize the prosthetics industry by trivializing the cost of prosthetics (to insurance companies). I also hope that it will help to advance prosthetic hands. If the hardware is much less expensive, we can start to focus on the human-robot interface. At the moment, it uses electromyographic signals, which sound advanced but are actually 50-year-old technology and don’t provide complex functionality like individual finger movement. If the hardware is inexpensive, then money can instead be spent on operations to tap into the nervous system, and the hand can literally be a direct replacement for the human hand. You’ll think about moving your hand and the robotic hand will do exactly what you’re thinking. If done correctly, you’ll also be able to feel with it. We’re talking Luke Skywalker Star Wars tech. It exists now, but it is not yet fully tested and proven.
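
To make concrete what a one-channel EMG interface does, here is an illustrative Python sketch (the thresholds and filter constant are arbitrary assumptions, and this is not the Dextrus firmware): it rectifies and smooths the muscle signal into an envelope, then issues a single open/close toggle when the envelope crosses a threshold.

# One-channel EMG trigger sketch (illustrative -- not the Dextrus firmware).

def emg_to_commands(samples, threshold=0.3, alpha=0.05):
    """Yield 'toggle_grip' each time the smoothed EMG envelope crosses threshold."""
    envelope = 0.0
    above = False
    for s in samples:
        envelope = (1 - alpha) * envelope + alpha * abs(s)  # rectify + low-pass
        if envelope > threshold and not above:
            above = True
            yield "toggle_grip"           # one degree of freedom: open <-> close
        elif envelope < threshold * 0.8:  # hysteresis to avoid chattering
            above = False

# Fake data: quiet baseline, a burst of muscle activity, then quiet again.
fake_emg = [0.01] * 50 + [0.8, -0.9, 0.7, -0.8] * 25 + [0.01] * 50
for cmd in emg_to_commands(fake_emg):
    print(cmd)  # prints "toggle_grip" once, at the start of the contraction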

 NAN: Prior to venturing out on your own, you were an Applications Engineer at National Instruments (NI). Although you are no longer working for the company, it is backing the Open Hand Project by providing test and measurement equipment. How did NI become involved in the project?

JOEL: National Instruments has been great since I left the company. I explained what I wanted to do, and it was fully supportive. To get the equipment, all I had to do was ask! It really does live up to its reputation of being one of the best places to work. I hope that I’ll be able to repay them with business in the future. If I’m successful, then I’ll be able to buy equipment for future projects.

 NAN: Why did you decide to use crowdfunding for this project?

 JOEL: I wanted to keep everything open source for this project. Investors don’t want to fund an open-source project. You have no leverage to make money and your ideas will be taken and used by other people (which is encouraged). For this reason, only people who are genuinely interested in the vision of the project will want to invest, and that’s just not something that will make a company money. Crowdfunding is perfect, because people appreciate how this can help people and they’re willing to contribute to that.

I believe that everyone should have access to public health care and that your level of care should not depend on the size of your wallet. Making prosthetics open source is a step in the right direction, but this model does not have to be limited to prosthetics. Take the drug industry, for example. Drug companies work off patents: they have to patent their drugs to recoup the millions of dollars spent developing them, and they end up charging $1,000 for a pill that costs $0.01 to make in order to cover all of their costs. If the research were publicly funded and open source, innovation in the industry would be dramatically accelerated, and once drugs were developed, they could be sold more cheaply. If the sale of the drugs were government regulated, the price could be controlled and the money could go back into funding further development.

 NAN: What’s next for the Dextrus?

JOEL: There are a few directions I’d like this project to go in. First and foremost is the development of low-cost robotic prostheses for adults. After this, I’d like to look into partial amputations and finger prostheses. I’d also like to try to miniaturize the hand so that children can use it as well. Before any of this can happen, I’ll need to reach my crowdfunding goal on Indiegogo!

 

CC279: Working with RobotBASIC

In Circuit Cellar’s October issue, columnist Jeff Bachiochi introduces readers to RobotBASIC, a free robot control programming language you can use to control real or simulated robots, and provides a detailed explanation of how to use it.

Photo 1: This army of robots all use the RobotBASIC Robot Operating System (RROS). Note the large robot has an arm located just above the wheels that is controlled by a second RROS. It uses an on-board laptop running a RobotBASIC (RB) application. The small robots are all controlled via a Bluetooth link from an external PC running an RB application.

“About five years ago, John Blankenship and Samuel Mishal coauthored Robot Programmer’s Bonanza, a book explaining the freely available RobotBASIC IDE they offer. RobotBASIC (RB) is a powerful language that enables you to use standard BASIC syntax (or a modified C-style syntax, i.e., ++, +=, !=, and &&) to quickly write a program to control and simulate a robot with many types of sensors,” Bachiochi says. “This is a great tool to teach programming.”

RB, with more than 800 commands and functions, can also be a tool for non-robotic applications such as tackling tough engineering problems or creating animated simulations, Bachiochi says.

It’s likely that anyone who starts out simulating with RobotBASIC will eventually want to control real robot hardware.

“There is no need to worry,” Bachiochi says. “RB was written to make use of a PC’s I/O. The parallel port is a good source for digital I/O, and the serial port is well suited for external communication. The same commands used for robot movement in the simulator can alternatively be sent to a serial port, establishing a sort of serial robot command protocol. But tethered robots aren’t so cool, and many robots are too small to tote around a PC as their ‘great and powerful Oz.’

“Luckily, much has changed since RB’s original concepts were put into practice,” he adds. “We all know what has happened to these PC ports. They’ve fallen under the USB’s mighty power. RB doesn’t care whether it is talking with a serial port or a USB virtual serial port. USB offers inexpensive Bluetooth dongles and can create wireless serial communication to external devices.”
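
To a host program, such a virtual serial link looks the same whether a USB cable or a Bluetooth dongle sits behind it. The fragment below is a generic Python/pySerial sketch of that idea; the port name and command bytes are placeholders, not the actual RROS protocol:

# Generic serial-command sketch (placeholder bytes -- not the actual RROS protocol).
import serial

# A USB virtual serial port and a Bluetooth SPP link open identically.
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

ser.write(bytes([0x01, 50]))  # hypothetical "drive forward 50 units" command
reply = ser.read(1)           # hypothetical one-byte acknowledgment
print("robot replied:", reply)
ser.close()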

Bachiochi also discusses the RobotBASIC Robot Operating System (RROS), created to support RB’s serial robot command protocol. The module is available from RB’s website.

“The RROS is a preprogrammed module that can receive communication from RB, interpret commands, and directly interface to hardware,” Bachiochi says. “The module is a Pololu Baby Orangutan robot controller, consisting of an Atmel ATmega328P microcontroller and a Pololu TB6612 dual motor driver carrier in a DIP24 form factor. You can use the module (which comes preprogrammed with the RROS) to build robots like those shown in Photo 1.”

Bachiochi’s look at RB and RROS is a two-part series. In Part 2, appearing in Circuit Cellar’s November issue, Bachiochi will explain how to translate between RROS and the iRobot Create Open Interface.

So, if you want to explore a programming language that can take you from simulated to real-world robotics control, check out the October and November issues.

 

CC278: Evolving Neural Networks in Robotics

Are you curious about how an evolving neural network (ENN) helps a robot learn about itself and its environment?

A neural network with two inputs, one output, and three hidden neurons.

In the September issue of Circuit Cellar, Walter O. Krawec begins a two-part series that describes an ENN he uses in robot development experiments, explains how short-term memory (STM) evaluates a network’s conditions and how to add data to STM, and discusses how an ENN uses a robot’s minimalistic “instincts” and “reflexes” to guide a robot’s evolution.

Krawec, who has been building robots since 1999, is a research assistant and PhD student in Computer Science at the Stevens Institute of Technology in Hoboken, N.J. The work presented in his two-part series is based on a paper published in the proceedings of the 13th International Artificial Life Conference in 2012.

The overall goal of Krawec’s experiments in developmental robotics is to enable a robot to learn on its own without human intervention. “An ENN is used to accomplish this,” he says. “This network will be capable of growing and learning in real time as the robot operates.”

In his series, Krawec presents an architecture he says “enables a robot to ‘grow’ from a naive individual with no knowledge of itself (i.e., no notion of what its sensors are reporting or what its outputs actually do) to one that can operate in an environment.”

“This architecture will consist of an evolving neural network (ENN), a short-term memory (STM), and simple instincts and reflexes.

“Despite a minimal set of instincts, which provide penalties and rewards for certain actions (e.g., crashing into a wall), the robots described in this article sometimes develop complicated and unexpected behaviors. Such behaviors range from following walls (despite the robots’ binary proximity sensors) to games of ‘follow the leader.’…

“This article explores basic artificial neural network (ANN) concepts and outlines the ENN I’m using in this project. This is a neural network that, over time, learns not only by adjusting synaptic weights but also by growing new neurons and new connections (generally resulting in a recurrent neural network). Finally, I’ll discuss the STM system and how it is used to evaluate a network’s fitness.”
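
To give a flavor of what “growing” a network can mean, here is a generic toy sketch in Python. It illustrates the general idea only and is not Krawec’s architecture: each mutation either perturbs a synaptic weight or splices a brand-new neuron into an existing connection.

# Toy "growing" neural network (generic illustration -- not Krawec's ENN).
import math
import random

class TinyNet:
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.n = n_in + n_out  # neuron count: inputs first, then outputs
        # Connections stored as {(src, dst): weight}, initially inputs -> outputs.
        self.w = {(i, n_in + o): random.uniform(-1, 1)
                  for i in range(n_in) for o in range(n_out)}

    def mutate(self):
        if random.random() < 0.8:  # usually: perturb an existing weight
            key = random.choice(list(self.w))
            self.w[key] += random.gauss(0, 0.3)
        else:                      # occasionally: grow a new neuron
            src, dst = random.choice(list(self.w))
            old = self.w.pop((src, dst))   # split this connection in two
            new = self.n
            self.n += 1
            self.w[(src, new)] = 1.0       # src feeds the new neuron...
            self.w[(new, dst)] = old       # ...which inherits the old weight

    def forward(self, inputs):
        # Single-pass update in index order; once the net grows, feedback
        # paths can appear, echoing the recurrent networks in the article.
        act = {i: v for i, v in enumerate(inputs)}
        for (src, dst), wt in sorted(self.w.items()):
            act[dst] = act.get(dst, 0.0) + math.tanh(act.get(src, 0.0)) * wt
        return [act.get(self.n_in + o, 0.0) for o in range(self.n_out)]

net = TinyNet(2, 1)
for _ in range(20):
    net.mutate()
print(net.forward([0.5, -0.25]), "using", net.n, "neurons")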

The second article in Krawec’s series appears in Circuit Cellar’s October issue.

“In Part 2, I’ll examine the reflex and instinct system, which feeds reward information to an ENN and the ‘decision path’ system, which rewards or penalizes chains of actions,” Krawec says. “Finally, I’ll discuss experiments conducted to demonstrate this architecture in a simulated environment. In particular, I’ll describe some interesting behaviors that robots have developed in trial runs.”

For more, check out Krawec’s articles on “Experiments in Developmental Robotics” in the September and October issues. You will also find information and videos about his work with robots on his website.

 

AAR Arduino Autonomous Mobile Robot

The AAR Arduino Robot is a small autonomous mobile robot designed both for those new to robotics and for experienced Arduino designers. The robot is well suited for hobbyists and school projects. Built on the Arduino open-source prototyping platform, the robot is easy to program and run.

The AAR, which is delivered fully assembled, comes with a comprehensive CD that includes all the software needed to write, compile, and upload programs to your robot, along with firmware and hardware self tests. For wireless control, the robot offers optional Bluetooth technology and a 433-MHz RF module.

The AAR robot’s features include an Atmel ATmega328P 8-bit AVR RISC processor with a 16-MHz clock, Arduino open-source software, two independently controlled 3-VDC motors, an I2C bus, 14 digital I/O lines on the processor, eight analog input lines, USB interface programming, on-board odometer sensors on both wheels, a line-tracker sensor, and an ISP connector for bootloader programming.

The AAR’s many example programs help you get your robot up and running. With many expansion kits available, your creativity is unlimited.

Contact Global Specialties for pricing.

Global Specialties
http://globalspecialties.com