Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Thanks to the common I2C protocol, adding chains of these sensors would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
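For a sense of how little code sits between a student and that hardware, here is a minimal sketch of reading one byte from an I2C sensor on a Raspberry Pi. It assumes the smbus2 Python library; the bus number, device address, and register are placeholders rather than any particular sensor.

```python
# Minimal sketch: read one byte from an I2C sensor chained on a Raspberry Pi bus.
# The device address (0x48) and register (0x00) are placeholders for whatever
# sensor is actually connected.
from smbus2 import SMBus

with SMBus(1) as bus:                     # I2C bus 1 is exposed on the Pi's header
    raw = bus.read_byte_data(0x48, 0x00)  # (device address, register)
    print("sensor reading:", raw)
```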

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remotely controlling and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
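As a concrete sketch of that idea, the Python snippet below wraps hypothetical movement commands behind HTTP endpoints that a browser-side AJAX call could hit. It assumes the Flask and pySerial libraries and a made-up serial command protocol; it is an illustration, not any specific robot's actual interface.

```python
# Minimal sketch: expose simple movement verbs over HTTP so a browser-side
# AJAX call can drive the robot. The serial port and command strings are
# placeholders, not a real robot protocol.
import serial                      # pySerial, assumed link to a motor controller
from flask import Flask, jsonify

app = Flask(__name__)
link = serial.Serial("/dev/ttyAMA0", 9600, timeout=1)   # assumed wiring

def send_command(cmd):
    """Keep the low-level serial protocol in one place."""
    link.write((cmd + "\n").encode())

@app.route("/move/<direction>")
def move(direction):
    # Map simple verbs onto commands the rover firmware would interpret.
    if direction not in ("forward", "left", "right", "stop"):
        return jsonify(ok=False, error="unknown direction"), 400
    send_command(direction.upper())
    return jsonify(ok=True, moved=direction)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)   # reachable from any browser on the network
```

A student's browser-side JavaScript would then only need something like `fetch("/move/forward")` to string movements together, which is exactly the layered entry point described next.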

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize them. This small, superficial step can give students insight into what they can do, spurring them ahead into the next phase.
Students can start as end users of this robotic framework, but they can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve, and as the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads up to 1 A each at up to 24 V.
The RIO includes a 32-bit ARM Cortex M4 microcontroller that processes and buffers the I/O and creates a seamless communication with the Raspberry Pi. The RIO processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with a RS-232 transceiver, and a CAN bus interface.
The RIO is available in two versions. The RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.
www.roboteq.com

Electrical Engineering and Artistic Expression

I think we’re on the verge of the next artistic renaissance. This time, instead of magnificent architecture, beautifully painted portraits, and the rise of humanism, I think engineering (specifically electrical engineering) will begin to define exciting new forms of artistic expression.

Cornell University graduate and electrical engineer Jeremy Blum, in a 2011 blog post

Regular Circuit Cellar readers will recognize Jeremy Blum as our November issue interview subject. Blum’s post sums up a philosophy that seems to be shared by some other recent EE graduates or aspiring electrical engineers. They view their work as art, or at least they like to occasionally work in art.

For example, Circuit Cellar’s January issue will feature an interview with Andrew Godbehere, an Electrical Engineering PhD candidate at the University of California, Berkeley. He has intertwined engineering and art more than once.

This is the central control belt pack worn by a dancer for CUMotive, the wearable accelerometer project. An Atmel ATmega644V and an AT86RF230 were used inside to interface to a synthesizer. The plastic enclosure has holes for the belt to attach to a dancer. Wires connect to accelerometers, which are worn on the dancer’s limbs.

When he was a Cornell student, he collaborated with Nathan Ward on a final project to translate a dancer’s movement into music. They created a central control belt pack for the dancer, which connected to four wearable wireless accelerometers to measure the dancer’s movements. Inside the belt pack, an ATmega644V connected to an Atmel AT86RF230 wireless transceiver interfaced with a musical instrument digital interface (MIDI) and a synthesizer.

When Godbehere graduated from Cornell and headed to UC Berkeley, his focus shifted to theoretical topics and robotic systems. But he jumped at a professor’s invitation to become involved in the “Are We There Yet?” art installation in 2011 at the Contemporary Jewish Museum in San Francisco.

During the four-month exhibit, visitors entered a nearly empty gallery to encounter recorded questions emanating from numerous floor speakers. A camera followed each visitor’s moves and robotic algorithms enabled it to determine which floor speaker to activate. The questions heard could range from “What Is My Purpose?” to “What’s Up Doc?”

How a visitor moved through the interactive installation triggered the combination of questions he or she heard.

Video documentary of “Are We There Yet?” 

Godbehere was the computer vision system engineer working with artists Gil Gershoni and Ken Goldberg, who is also a robotics and new media professor at UC Berkeley.

“We installed a color camera in a beautiful gallery in the Contemporary Jewish Museum… and a set of speakers with a high-end controller system from Meyer Sound that enabled us to ‘position’ sound in the space and to sweep audio tracks around at (the computer’s programmed) will,” Godbehere says. “The Meyer Sound System is the D-Mitri control system, controlled by the computer with Open Sound Control (OSC).

“The hard work was then to program the computer to discern humans from floors, furniture, shadows, sunbeams, and reflections of clouds. The gallery had many skylights, making the lighting very dynamic. Then, I programmed the computer to keep track of people as they moved and found that this dynamic information was itself useful in determining if detected color-perturbance was human or not.”

Behind the technology of “Are We There Yet?”

Can such art also have “practical” consumer applications? Godbehere says there are elements that can be used as an embedded system.

“I’ve been told that the software I wrote works on iOS devices by the startup company Romo, which was evaluating my vision-tracking code for use in its cute iPhone rover. Further, I’d say that if someone were interested, they could create a similar pedestrian, auto, pet, or cloud tracking system using a Raspberry Pi and a reasonable webcam.”
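To give a flavor of what such a Raspberry Pi and webcam tracker might look like at its simplest, here is a sketch using OpenCV background subtraction. It is not Godbehere's code; the camera index, thresholds, and minimum contour area are arbitrary placeholders.

```python
# Rough sketch of a webcam motion tracker of the kind described above,
# using OpenCV background subtraction. Thresholds and the camera index
# are placeholders, and no attempt is made to handle skylights or shadows
# as robustly as the installation did.
import cv2

cap = cv2.VideoCapture(0)                        # default USB webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)               # knock down speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 2000:            # ignore small perturbances
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```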

If you’re interested in learning more about Godbehere’s engineering and artistic work, be sure to check out the January issue of Circuit Cellar.

And if you have an opinion on electrical engineering and art, please post your comments below.

MIT’s Self-Assembling Robots

Calling it a low-tech solution to a high-tech challenge, MIT researchers have received a lot of attention recently for their modular system of self-assembling robot cubes. The video of the so-called M-Blocks in action, which MIT posted earlier this month on YouTube, has also become high profile. A recent tally has the video at nearly 1.5 million views and counting.

 

The text accompanying the video explains how the cubes are able to move around and climb over each other,  jump into the air, and roll across surfaces as they connect in a variety of configurations. And they do all this without any external moving parts. Instead, each M-Block contains a flywheel that can reach speeds of 20,000 rpm. When the flywheel brakes, it imparts angular momentum to the cube.  Precisely placed magnets on every face and edge of each M-Block enable any two cubes to attach to each other.

The simple design holds short- and long-term promise.  According  to an October 4 article by Larry Hardesty of the MIT News Office, it is hoped that the blocks can be miniaturized someday, perhaps to swarming microbots that can self-assemble with a purpose. Even at their current size, further development of the M-Blocks might lead to “armies of mobile cubes” that can help repair bridges and buildings in emergencies, raise scaffolding, reconfigure into heavy equipment or furniture as needed, or head in to environments hostile to humans to diagnose and repair problems, the article suggests.

While it may not rise to “cooperative group behavior,”  the ability of one cube to drag another and influence its alignment is impressive. What could 100 or more of these robots accomplish as MIT researchers continue to develop algorithms to control them?

A prototype of the new modular robot, with its interior and flywheel exposed. (Photo: M. Scott Brauer)

Q&A: Jeremy Blum, Electrical Engineer, Entrepreneur, Author

Jeremy Blum

Jeremy Blum, 23, has always been a self-proclaimed tinkerer. From Legos to 3-D printers, he has enjoyed learning about engineering both in and out of the classroom. A recent Cornell University College of Engineering graduate, Jeremy has written a book, started his own company, and traveled far to teach children about engineering and sustainable design. Jeremy, who lives in San Francisco, CA, is now working on Google’s Project Glass.—Nan Price, Associate Editor

NAN: When did you start working with electronics?

JEREMY: I’ve been tinkering, in some form or another, ever since I figured out how to use my opposable thumbs. Admittedly, it wasn’t electronics from the outset. As with most engineers, I started with Legos. I quickly progressed to woodworking and I constructed several pieces of furniture over the course of a few years. It was only around the start of my high school career that I realized the extent to which I could express my creativity with electronics and software. I thrust myself into the (expensive) hobby of computer building and even built an online community around it. I financed my hobby through my two companies, which offered computer repair services and video production services. After working exclusively with computer hardware for a few years, I began to dive deeper into analog circuits, robotics, microcontrollers, and more.

NAN: Tell us about some of your early, pre-college projects.

JEREMY: My most complex early project was the novel prosthetic hand I developed in high school. The project was a finalist in the prestigious Intel Science Talent Search. I also did a variety of robotics and custom-computer builds. The summer before starting college, my friends and I built a robot capable of playing “Guitar Hero” with nearly 100% accuracy. That was my first foray into circuit board design and parallel programming. My most ridiculous computer project was a mineral oil-cooled computer. We submerged an entire computer in a fish tank filled with mineral oil (it was actually a lot of baby oil, but they are basically the same thing).

DeepNote Guitar Hero Robot

Mineral Oil-Cooled Computer

NAN: You’re a recent Cornell University College of Engineering graduate. While you were there, you co-founded Cornell’s PopShop. Tell us about the workspace. Can you describe some PopShop projects?

Cornell University’s PopShop

JEREMY: I recently received my Master’s degree in Electrical and Computer Engineering from Cornell University, where I previously received my BS in the same field. During my time at Cornell, my peers and I took it upon ourselves to completely retool the entrepreneurial climate at Cornell. The PopShop, a co-working space that we formed a few steps off Cornell’s main campus, was our primary means of doing this. We wanted to create a collaborative space where students could come to explore their own ideas, learn what other entrepreneurial students were working on, and get involved themselves.

The PopShop is open to all Cornell students. I frequently hosted events there designed to get more students inspired about pursuing their own ideas. Common occurrences included peer office hours, hack-a-thons, speed networking sessions, 3-D printing workshops, and guest talks from seasoned venture capitalists.

Student startups that work (or have worked) out of the PopShop co-working space include clothing companies, financing companies, hardware startups, and more. Some specific companies include Rosie, SPLAT, LibeTech (mine), SUNN (also mine), Bora Wear, Yorango, Party Headphones, and CoVenture.

NAN: Give us a little background information about Cornell University Sustainable Design (CUSD). Why did you start the group? What types of CUSD projects were you involved with?

JEREMY: When I first arrived at Cornell my freshman year, I knew right away that I wanted to join a research lab, and that I wanted to join a project team (knowing that I learn best in hands-on environments instead of in the classroom). I joined the Cornell Solar Decathlon Team, a very large group of mostly engineers and architects who were building a solar-powered home to enter in the biennial Solar Decathlon competition orchestrated by the Department of Energy.

By the end of my freshman year, I was the youngest team leader in the organization.  After competing in the 2009 decathlon, I took over as chief director of the team and worked with my peers to re-form the organization into Cornell University Sustainable Design (CUSD), with the goal of building a more interdisciplinary team, with far-reaching impacts.


Under my leadership, CUSD built a passive schoolhouse in South Africa (which has received numerous international awards), constructed a sustainable community in Nicaragua, served as the only student group tasked with consulting on sustainable design constraints for Cornell’s new Tech Campus in New York City, partnered with nonprofits to build affordable homes in upstate New York, taught workshops in museums and schools, contributed to the design of new sustainable buildings on Cornell’s Ithaca campus, and led a cross-country bus tour to teach engineering and sustainability concepts at K–12 schools across America. The group now comprises students from more than 25 different majors with dozens of advisors and several simultaneous projects. The new team leaders are making it better every day. My current startup, SUNN, spun out of an EPA grant that CUSD won.

NAN: You spent two years working at MakerBot Industries, where you designed electronics for a 3-D printer and a 3-D scanner. Any highlights from working on those projects?

JEREMY: I had a tremendous opportunity to learn and grow while at MakerBot. When I joined, I was one of about two dozen total employees. Though I switched back and forth between consulting and full-time/part-time roles while class was in session, by the time I stopped working with MakerBot (in January 2013), the company had grown to more than 200 people. It was very exciting to be a part of that.

I designed all of the electronics for the original MakerBot Replicator. This constituted a complete redesign from the previous electronics that had been used on the second generation MakerBot 3-D printer. The knowledge I gained from doing this (e.g., PCB design, part sourcing, DFM, etc.) drastically outweighed much of what I had learned in school up to that point. I can’t say much about the 3-D scanner (the MakerBot Digitizer), as it has been announced, but not released (yet).

The last project I worked on before leaving MakerBot was designing the first working prototype of the Digitizer electronics and firmware. These components comprised the demo that was unveiled at SXSW this past April. This was a great opportunity to apply lessons learned from working on the Replicator electronics and find ways in which my personal design process and testing techniques could be improved. I frequently use my MakerBot printers to produce custom mechanical enclosures that complement the open-source electronics projects I’ve released.

NAN: Tell us about your company, Blum Idea Labs. What types of projects are you working on?

JEREMY: Blum Idea Labs is the entity I use to brand all my content and consulting services. I primarily use it as an outlet to facilitate working with educational organizations. For example, the St. Louis Hacker Scouts, the African TAHMO Sensor Workshop, and several other international organizations use a “Blum Idea Labs Arduino curriculum.” Most of my open-source projects, including my tutorials, are licensed via Blum Idea Labs. You can find all of them on my blog (www.jeremyblum.com/blog). I occasionally offer private design consulting through Blum Idea Labs, though I obviously can’t discuss work I do for clients.

NAN: Tell us about the blog you write for element14.

JEREMY: I generally use my personal blog to write about projects that I’ve personally been working on. However, when I want to talk about more general engineering topics (e.g., sustainability, engineering education, etc.), I post them on my element14 blog. I have a great working relationship with element14. It has sponsored the production of all my Arduino Tutorials and also provided complete parts kits for my book. We cross-promote each other’s content in a mutually beneficial fashion that also ensures that the community gets better access to useful engineering content.

NAN: You recently wrote Exploring Arduino: Tools and Techniques for Engineering Wizardry. Do you consider this book introductory or is it written for the more experienced engineer?

JEREMY: As with all the video and written content that I produce on my website and on YouTube, I tried really hard to make this book useful and accessible to both engineering veterans and newbies. The book builds on itself and provides tons of optional excerpts that dive into greater technical detail for those who truly want to grasp the physics and programming concepts behind what I teach in the book. I’ve already had readers ranging from teenagers to senior citizens comment on the applicability of the book to their varying degrees of expertise. The Amazon reviews tell a similar story. I supplemented the book with a lot of free digital content including videos, part descriptions, and open-source code on the book website.

NAN: What can readers expect to learn from the book?

JEREMY: I wrote the book to serve as an engineering introduction and as an idea toolbox for those wanting to dive into concepts in electrical engineering, computer science, and human-computer interaction design. Though Exploring Arduino uses the Arduino as a platform to experiment with these concepts, readers can expect to come away from the book with new skills that can be applied to a variety of platforms, projects, and ideas. This is not a recipe book. The projects readers will undertake throughout the book are designed to teach important concepts in addition to traditional programming syntax and engineering theories.

NAN: I see you’ve spent some time introducing engineering concepts to children and teaching them about sustainable engineering and renewable energy. Tell us about those experiences. Any highlights?

JEREMY: The way I see it, there are two ways in which engineers can make the world a better place: they can design new products and technologies that solve global problems or they can teach others the skills they need to assist in the development of solutions to global problems. I try hard to do both, though the latter enables me to have a greater impact, because I am able to multiply my impact by the number of students I teach. I’ve taught workshops, written curriculums, produced videos, written books, and corresponded directly with thousands of students all around the world with the goal of transferring sufficient knowledge for these students to go out and make a difference.

Here are some highlights from my teaching work:


I taught BlueStamp Engineering, a summer program for high school students in NYC in the summer of 2012. I also guest-lectured at the program in 2011 and 2013.

I co-organized a cross-country bus tour where we taught sustainability concepts to school children across the country.

I was invited to speak at Techkriti 2013 in Kanpur, India. I had the opportunity to meet many students from IIT Kanpur who already followed my videos and used my tutorials to build their own projects.

Blum Idea Labs partnered with the St. Louis Hacker Scouts to construct a curriculum for teaching electronics to the students. Though I wasn’t there in person, I did welcome them all to the program with a personalized video.

Through CUSD, I organized multiple visits to the Brooklyn Children’s Zone, where my team and I taught students about sustainable architecture and engineering.

Again with CUSD, we visited the Intrepid museum to teach sustainable energy concepts using potato batteries.


NAN: Speaking of promoting engineering to children, what types of technologies do you think will be important in the near future?

JEREMY: I think technologies that make invention more widely accessible are going to be extremely important in the coming years. Cheaper tools, prototyping platforms such as the Arduino and the Raspberry Pi, 3-D printers, laser cutters, and open developer platforms (e.g., Android) are making it easier than ever for any person to become an inventor or an engineer.  Every year, I see younger and younger students learning to use these technologies, which makes me very optimistic about the things we’ll be able to do as a society.

3-D Printed Robotics Innovation: A Low-Cost Solution for Prosthetic Hands

UK-based inventor and roboticist Joel Gibbard used a 3-D printer to design and build a prosthetic robotic hand. He founded the Open Hand Project with the goal of making these prosthetic hands available to amputees.

 

 NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

 JOEL: I was born in Bristol, UK, and grew up in that area. Bristol is a fantastic place for robotics in the UK, so I couldn’t have had a better place to start from. There’s a lot to engage children here, like the highly popular @Bristol science museum. I studied for a degree in Robotics at the University of Plymouth, which encourages a very practical approach to engineering. Right from the first year we were working with electronics, robotics, and writing code.

 NAN: When did you first start working with robotics?

 JOEL: The first robots I ever made were using the Lego MINDSTORMS NXT robotics kits. I was very lucky because these were just starting to come out when I was about 6 or 7 years old. I think from ages three to 15 every single birthday or Christmas present was a new Lego set. To this day, I still think Lego is the best tool for rapid prototyping in the early stages of an idea.

 NAN: Tell us about your first design/some of your early projects. Do you have any photos or diagrams?

 JOEL: The earliest project I remember working on with my father was a full-scale model of the space shuttle complete with robotic arm and fully motorized launch pad. When on the launch pad it was almost my height. I think my father took having kids as an opportunity to get back into making things. We also made a Saturn 5 rocket, Sydney Harbour Bridge and Concorde. One of my first robots was a Lego Technic creation. It had tracks, a double-barreled gun on one arm, a pincer on the other, and a submarine on the back, just in case. I think I was about eight years old when I made it.

 NAN: You originally developed the Dextrus robotic hand while you were at the University of Plymouth. Why did you design the system? How has its development progressed since the original concept?


Joel keeps an ongoing design sketchbook.

JOEL: I have a sketchbook of around 10 to 20 inventions that are options for the next thing I want to make. This grows faster than it shrinks. One day I was thinking about what to make next, and the thought occurred to me that if I were to lose my hand, I wouldn’t be able to make anything. So it made the most sense to design a hand to have just in case. Once I have that (heaven forbid I should ever need it), I could use it to make a better hand, and so forth, until I have a robot hand that is as good as a human hand. It sounds ridiculous, but that was enough motivation for me to make the first one.


This is an early version of the Dextrus hand.

After posting the project on YouTube, I received comments from people asking to have the designs to make their own, which wasn’t really possible, since it was such a one-off prototype. But I thought it was a good idea. Why not make an open-source hand? After that, I looked more into prostheses and discovered that this is really necessary and people want it.

 NAN: The Dextrus incorporates 3-D printed parts. How does the 3-D printing factor in your design? Does it make each hand customizable?

 JOEL: 3-D printing is essential to the design. Many of the parts have cavities inside them, which wouldn’t be possible to make using injection molding. One would have to make the parts in two halves then glue them together, which creates weak points. With 3-D printing, each part is one solid piece with cavities for the tendons to slide through.

Customization is a great area to explore in the future. It’s quite easy to modify things like the length and shape of the fingers while maintaining the functionality of the hand. In the not-too-distant future, I could envisage an amputee 3-D scanning their remaining hand and sending the scan to me. I could then reverse it and match their Dextrus hand (approximately) to the dimensions of their other hand.


The 3-D printed Dextrus hand.

 NAN: There are three types of Dextrus robotic hands: The Dextrus, the Dextrus EMG, and the Dextrus Research. Can you describe the differences?

 JOEL: They have the same basic design and components. The Dextrus and Dextrus EMG are exactly the same, but the EMG comes with all of the extras that enable someone to use it as a myoelectric prosthesis. The Dextrus Research has a number of differences that result in a more robust (but more expensive and heavier) hand. It has steel ball bearings instead of nylon bushes and is printed with denser plastic. It also comes with everything you need to use it straight out of the box (e.g., a power supply).

 NAN: You founded the Open Hand Project as a result of your work on the Dextrus robotic hand. Describe the project and its purpose.

JOEL: The aim of the Open Hand Project is to make advanced prosthetic hands more accessible to amputees. It has the potential to revolutionize the prosthetics industry by trivializing the cost of prosthetics (to insurance companies). I also hope that it will help to advance prosthetic hands. If the hardware is much less expensive, we can start to focus on the human-robot interface. At the moment, it uses electromyographical signals, which sound advanced but are actually 50-year-old technology and don’t give complex functionality like individual finger movement. If the hardware is inexpensive, then money can instead be spent on operations to tap into the nervous system and then the hand can literally be a direct replacement for the human hand. You’ll think about moving your hand and the robotic hand will do exactly what you’re thinking. If done correctly, you’ll also be able to feel with it. We’re talking Luke Skywalker Star Wars tech. It exists now, but is not yet fully tested and proven.

 NAN: Prior to venturing out on your own, you were an Applications Engineer at National Instruments (NI). Although you are no longer working for the company, it is backing the Open Hand Project by providing test and measurement equipment. How did NI become involved in the project?

 JOEL: National Instruments has been great since I’ve left the company. I explained what I wanted to do, and it was fully supportive. To get the equipment, all I had to do was ask! It really does live up to its reputation of being one of the best places to work. I hope that I’ll be able to repay them with business in the future. If I’m successful, then I’ll be able to buy equipment for future projects.

 NAN: Why did you decide to use crowdfunding for this project?

 JOEL: I wanted to keep everything open source for this project. Investors don’t want to fund an open-source project. You have no leverage to make money and your ideas will be taken and used by other people (which is encouraged). For this reason, only people who are genuinely interested in the vision of the project will want to invest, and that’s just not something that will make a company money. Crowdfunding is perfect, because people appreciate how this can help people and they’re willing to contribute to that.

I believe that everyone should have access to public health care and that your level of care should not be dependent on the size of your wallet. Making prosthetics open source will be a step in the right direction, but this model does not have to be limited to prosthetics. Take the drug industry, for example. Drug companies work off patents; they have to patent their drugs in order to make back the millions of dollars they spend developing them, and they end up charging $1,000 for a pill that costs them $0.01 to make in order to cover all of their costs. If the research were publicly funded and open source, innovation in this industry would be dramatically accelerated, and once drugs were developed, they could be sold more cheaply. If the sale of the drugs were government regulated, the price could be controlled and the money could go back into funding more developments.

 NAN: What’s next for the Dextrus?

JOEL: There are a few directions I’d like this project to go in. First and foremost is the development of low-cost robotic prostheses for adults. After this, I’d like to look into partial amputations and finger prostheses. I’d also like to try and miniaturize the hand so that children can use it as well. Before any of this can happen, I’ll need to reach my crowdfunding goal on Indiegogo!

 

CC279: Working with RobotBASIC

In Circuit Cellar’s October issue, columnist Jeff Bachiochi introduces readers to RobotBASIC, a free robot control programming language that you can use to control real or simulated robots, and provides a detailed explanation of how to use it.

Photo 1: This army of robots all use the RobotBASIC Robot Operating System (RROS). Note the large robot has an arm located just above the wheels that is controlled by a second RROS. It uses an on-board laptop running a RobotBASIC (RB) application. The small robots are all controlled via a Bluetooth link from an external PC running an RB application.

“About five years ago, John Blankenship and Samuel Mishal coauthored Robot Programmer’s Bonanza, a book explaining the freely available RobotBASIC IDE they offer. RobotBASIC (RB) is a powerful language that enables you to use standard BASIC syntax (or a modified C-style syntax, i.e., ++, +=, !=, and &&) to quickly write a program to control and simulate a robot with many types of sensors,” Bachiochi says. “This is a great tool to teach programming.”

RB, with more than 800 commands and functions, can also be a tool for non-robotic applications such as tackling tough engineering problems or creating animated simulations, Bachiochi says.

It’s likely that anyone who starts out simulating with RobotBASIC will eventually want to control real robot hardware.

“There is no need to worry,” Bachiochi says. “RB was written to make use of a PC’s I/O. The parallel port is a good source for digital I/O and the serial port is well suited for external communication. The same commands used for robot movement in the simulator can alternatively be sent to a serial port, establishing a sort of serial robot command protocol. But, tethered robots aren’t so cool, and many robots are too small to tote around a PC as their ‘great and powerful Oz.’

“Luckily, much has changed since RB’s original concepts were put into practice,” he adds. “We all know what has happened to these PC ports. They’ve fallen under the USB’s mighty power. RB doesn’t care whether it is talking with a serial port or a USB virtual serial port. USB offers inexpensive Bluetooth dongles and can create wireless serial communication to external devices.”
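The article does not spell out the RROS byte format, so the snippet below only illustrates the general idea of a serial robot command protocol: a PC-side program talking to a robot over a Bluetooth virtual COM port. It assumes the pySerial library, and the port name, baud rate, and command strings are placeholders, not the actual RROS commands.

```python
# Illustrative only: send made-up movement commands over a Bluetooth virtual
# serial port, the way a PC-side program hands simulator-style commands to
# real hardware. Port name, baud rate, and commands are assumptions.
import time
import serial

robot = serial.Serial("COM5", 9600, timeout=1)   # Bluetooth dongle's virtual COM port

def send(cmd):
    robot.write((cmd + "\r").encode())
    return robot.readline().decode(errors="ignore").strip()   # optional acknowledgment

send("FORWARD 20")      # hypothetical: drive forward 20 units
time.sleep(1.0)
send("TURN 90")         # hypothetical: rotate 90 degrees
send("STOP")
robot.close()
```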

Bachiochi also discusses the RobotBASIC Robot Operating System (RROS), created to support RB’s serial robot command protocol. The module is available from RB’s website.

“The RROS is a preprogrammed module that can receive communication from RB, interpret commands, and directly interface to hardware,” Bachiochi says. “The module is a Pololu Baby Orangutan robot controller, consisting of an Atmel ATmega328P microcontroller and a Pololu TB6612 dual motor driver carrier in a DIP24 form factor. You can use the module (which comes preprogrammed with the RROS) to build robots like those shown in Photo 1.”

Bachiochi’s look at RB and RROS is a two-part series. In Part 2, appearing in Circuit Cellar’s November issue, Bachiochi will explain how to translate between RROS and the iRobot Create Open Interface.

So, if you want to explore a programming language that can take you from simulated to real-world robotics control, check out the October and November issues.

 

CC278: Evolving Neural Networks in Robotics

Are you curious about how an evolving neural network helps a robot learn about itself and its environment?


A neural network with two inputs, one output, and three hidden neurons.

In the September issue of Circuit Cellar, Walter O. Krawec begins a two-part series that describes an evolving neural network (ENN) he uses in robot development experiments, explains how short-term memory (STM) evaluates a network’s conditions and how to add data to STM, and discusses how an ENN uses a robot’s minimalistic “instincts” and “reflexes” to guide a robot’s evolution.

Krawec, who has been building robots since 1999, is a research assistant and PhD student in Computer Science at the Stevens Institute of Technology in Hoboken, N.J. The work presented in his two-part series is based on a paper published in the proceedings of the 13th International Artificial Life Conference in 2012.

The overall goal of Krawec’s experiments in developmental robotics is to enable a robot to learn on its own without human intervention. “An ENN is used to accomplish this,” he says. “This network will be capable of growing and learning in real time as the robot operates.”

In his series, Krawec presents an architecture he says “enables a robot to ‘grow’ from a naive individual with no knowledge of itself (i.e., no notion of what its sensors are reporting or what its outputs actually do) to one that can operate in an environment.”

“This architecture will consist of an evolving neural network (ENN), a short-term memory (STM), and simple instincts and reflexes.

“Despite a minimal set of instincts, which provide penalties and rewards for certain actions (e.g., crashing into a wall), the robots described in this article sometimes develop complicated and unexpected behaviors. Such behaviors range from following walls (despite the robots’ binary proximity sensors) to games of ‘follow the leader.’…

“This article explores basic artificial neural network (ANN) concepts and outlines the ENN I’m using in this project. This is a neural network that, over time, learns not only by adjusting synaptic weights but also by growing new neurons and new connections (generally resulting in a recurrent neural network). Finally, I’ll discuss the STM system and how it is used to evaluate a network’s fitness.”
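For readers new to the topic, the fixed two-input, three-hidden-neuron, one-output network shown in the figure above can be written in a few lines. The sketch below is only that plain feedforward baseline; Krawec's ENN additionally grows new neurons and connections over time, which this example does not attempt to show, and the weights here are random placeholders.

```python
# A fixed 2-input, 3-hidden-neuron, 1-output feedforward network like the one
# in the figure, as a plain baseline. The evolutionary growth that defines an
# ENN (adding neurons and connections) is deliberately not shown here.
import numpy as np

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 2))   # 3 hidden neurons, 2 inputs each
b_hidden = np.zeros(3)
w_out = rng.normal(size=(1, 3))      # 1 output neuron fed by the 3 hidden neurons
b_out = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs):
    """inputs: 2 sensor values; returns 1 output (e.g., a motor drive level)."""
    hidden = sigmoid(w_hidden @ inputs + b_hidden)
    return sigmoid(w_out @ hidden + b_out)

print(forward(np.array([0.2, 0.9])))   # e.g., left/right proximity readings
```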

The second article in Krawec’s series appears in Circuit Cellar’s October issue.

“In Part 2, I’ll examine the reflex and instinct system, which feeds reward information to an ENN and the ‘decision path’ system, which rewards or penalizes chains of actions,” Krawec says. “Finally, I’ll discuss experiments conducted to demonstrate this architecture in a simulated environment. In particular, I’ll describe some interesting behaviors that robots have developed in trial runs.”

For more, check out Krawec’s articles on “Experiments in Developmental Robotics” in the September and October issues. You will also find information and videos about his work with robots on his website.

 

AAR Arduino Autonomous Mobile Robot

The AAR Arduino Robot is a small autonomous mobile robot designed for those new to robotics and for experienced Arduino designers. The robot is well suited for hobbyists and school projects. Designed in the Arduino open-source prototyping platform, the robot is easy to program and run.

The AAR, which is delivered fully assembled, comes with a comprehensive CD that includes all the software needed to write, compile, and upload programs to your robot. It also includes a firmware and hardware self-test. For wireless control, the robot features optional Bluetooth technology and a 433-MHz RF module.

The AAR robot’s features include an Atmel ATmega328P 8-bit AVR-RISC processor with a 16-MHz clock, Arduino open-source software, two independently controlled 3-VDC motors, an I2C bus, 14 digital I/Os on the processor, eight analog input lines, USB interface programming, an on-board odometer sensor on both wheels, a line tracker sensor, and an ISP connector for bootloader programming.

The AAR’s many example programs help you get your robot up and running. With many expansion kits available, your creativity is unlimited.

Contact Global Specialties for pricing.

Global Specialties
http://globalspecialties.com

Microcontroller-Based, Cube-Solving Robot

Canadian Nelson Epp has earned degrees in physics and electrical engineering. But as a child, he was stumped by the Rubik’s Cube puzzle. So, as an adult, he built a Rubik’s Cube-solving robot that uses a Parallax Propeller microcontroller and a 52-move algorithm to solve the 3-D puzzle.

Designing and completing the robot wasn’t easy. Epp says he originally used a “gripper”-type robot that was “a complete disaster.” Then he experimented with different algorithms (“human memorizable” ones) before settling on a solution method developed by mathematician Morwen Thistlethwaite. (The algorithm is based on the mathematical concepts of a group, a subgroup, and generator and coset representatives.)

Epp also developed a version of his Rubik’s Cube solver that used neural networks to analyze the cube’s colors, but that worked only half the time.

So, considering the time he had to spend on project trial and error (and his obligations to work, family, and pets), it took about six years to complete the robot. He writes about the results in the September issue of Circuit Cellar magazine. 

Here, he describes some of the choices he made in hardware components.

“The cube solver hardware uses two external power supplies: 5 VDC for the servomotors and 12 VDC for the remaining circuits. The 12-VDC power supply feeds a Texas Instruments (TI) UA78M33 and a UA78M05 linear regulator. The UA78M05 regulator powers an Electronics123 C3088 camera board. The UA78M33 regulator powers a Maxim Integrated MAX3232 ECPE RS-232 transceiver, a Microchip Technology 24LC256 CMOS serial EEPROM, remote reset circuitry, the Propeller, a SD/MMC card, the camera board’s digital output circuitry, and an ECS ECS-300C-160 oscillator. The images at right show my cube solver and circuit board.
“The ECS-300C-160 is a self-contained dual-output oscillator that can produce clock signals that are binary fractions of the 16-MHz base signal. My application uses the 8- and 16-MHz clock taps. The Propeller is clocked with the 8-MHz signal and then internally multiplied up to 64 MHz. The 16-MHz signal is fed to the camera.

“I used a MAX3232 transceiver to communicate to the host’s RS-232 port. The Propeller’s serial input pin and serial output pin are only required at startup. After the Propeller starts up, these pins can be used to exchange commands with the host. The Propeller also has pins for serial communication to an EEPROM, which are used during power up when a host is not sending a program.

“The cube-solving algorithm uses the coset representative file stored on an SD card, which is read by the Propeller via a SparkFun Electronics Breakout Board for SD-MMC cards. The Propeller interface to the SD card consists of a chip select, data in, data out, data clock, and power. The chip select is fixed into the active state. The three lines associated with data are wired to the Propeller.

“The Propeller uses a camera to determine the cube’s starting permutation. The C3088 uses an Electronics123 OV6630 color sensor module. I chose the camera because its data format and clocking speed were within the range of the Propeller’s capabilities. The C3088 has jumpers for external or internal clocking.”

To read more about Epp’s design journey—and outcomes—check out Circuit Cellar’s September issue. And click here for a video of his robot at work.

 

CC 276: MCU-Based Prosthetic Arm with Kinect

In its July issue, Circuit Cellar presents a project that combines the technology behind Microsoft’s Kinect gaming device with a prototype prosthetic arm.

The project team and authors of the article include Jung Soo Kim, an undergraduate student in Biomedical Engineering at Ryerson University in Toronto, Canada; Nika Zolfaghari, a master’s student at Ryerson; and Dr. James Andrew Smith, who specializes in Biomedical Engineering at Ryerson.

“We designed an inexpensive, adaptable platform for prototype prosthetics and their testing systems,” the team says. “These systems use Microsoft’s Kinect for Xbox, a motion sensing device, to track a healthy human arm’s instantaneous movement, replicate the exact movement, and test a prosthetic prototype’s response.”

“Kelvin James was one of the first to embed a microprocessor in a prosthetic limb in the mid-1980s…,” they add. “With the maker movement and advances in embedded electronics, mechanical T-slot systems, and consumer-grade sensor systems, these applications now have more intuitive designs. Integrating Xbox provides a platform to test prosthetic devices’ control algorithms. Xbox also enables prosthetic arm end users to naturally train their arms.”

They elaborate on their choices in building the four main hardware components of their design, which include actuators, electronics, sensors, and mechanical support:

“Robotis Dynamixel motors combine power-dense neodymium motors from Maxon Motors with local angle sensing and high gear ratio transmission, all in a compact case. Atmel’s on-board 8-bit ATmega8 microcontroller, which is similar to the standard Arduino, has high (17-to-50-ms) latency. Instead, we used a 16-bit Freescale Semiconductor MC9S12 microcontroller on an Arduino-form-factor board. It was bulkier, but it was ideal for prototyping. The Xbox system provided high-level sensing. Finally, we used Twintec’s MicroRAX 10-mm profile T-slot aluminum to speed the mechanical prototyping.”

The team’s goal was to design a  prosthetic arm that is markedly different from others currently available. “We began by building a working prototype of a smooth-moving prosthetic arm,” they say in their article.

“We developed four-quadrant-capable, H-bridge-driven motors and proportional-derivative (PD) controllers at the prosthetic’s joints to run on an MC9S12 microcontroller. Monitoring the prosthetic’s angular position provided us with an analytic comparison of the programmed and outputted results.”
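For readers unfamiliar with the term, a PD joint controller is only a few lines of logic. The sketch below is a generic illustration in Python, not the team's MC9S12 firmware; the gains, sample period, and the crude stand-in for the joint dynamics are all assumptions.

```python
# Generic proportional-derivative (PD) joint controller of the kind described
# above, shown in Python for clarity rather than as MC9S12 C code. Gains,
# sample time, and the toy plant model are placeholders.
KP = 4.0          # proportional gain
KD = 0.5          # derivative gain
DT = 0.01         # assumed 10-ms control loop

def pd_step(target_angle, measured_angle, prev_error):
    """One control-loop iteration: returns (motor command, error for next step)."""
    error = target_angle - measured_angle
    derivative = (error - prev_error) / DT
    command = KP * error + KD * derivative
    return command, error

# Example: command a joint from 0 toward 30 degrees against a toy plant model.
angle, velocity, prev_err = 0.0, 0.0, 0.0
for _ in range(200):
    cmd, prev_err = pd_step(30.0, angle, prev_err)
    velocity += cmd * DT          # crude stand-in for motor plus joint dynamics
    angle += velocity * DT
print(round(angle, 2))            # joint angle after 2 s of simulated control
```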

A Technological Arts Esduino microcontroller board is at the heart of the prosthetic arm design.

The team concludes that its project illustrates how to combine off-the-shelf Arduino-compatible parts, aluminum T-slots, servomotors, and a Kinect into an adaptable prosthetic arm.

But more broadly, they say, it’s a project that supports the argument that  “more natural ways of training and tuning prostheses” can be achieved because the Kinect “enables potential end users to manipulate their prostheses without requiring complicated scripting or programming methods.”

For more on this interesting idea, check out the July issue of Circuit Cellar. And for a video from an earlier Circuit Cellar post about this project, click here.

 

Electrostatic Cleaning Robot Project

How do you clean a clean-energy generating system? With a microcontroller (and a few other parts, of course). An excellent example is US designer Scott Potter’s award-winning, Renesas RL78 microcontroller-based Electrostatic Cleaning Robot system that cleans heliostats (i.e., solar-tracking mirrors) used in solar energy-harvesting systems. Renesas and Circuit Cellar magazine announced this week at DevCon 2012 in Garden Grove, CA, that Potter’s design won First Prize in the RL78 Green Energy Challenge.

This image depicts two Electrostatic Cleaning Robots set up on two heliostats. (Source: S. Potter)

The nearby image depicts two Electrostatic Cleaning Robots set up vertically in order to clean the two heliostats in a horizontal left-to-right (and vice versa) fashion.

The Electrostatic Cleaning Robot in place to clean

Potter’s design can quickly clean heliostats in Concentrating Solar Power (CSP) plants. The heliostats must be clean in order to maximize steam production, which generates power.

The robot cleaner prototype

Built around an RL78 microcontroller, the Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high-voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Object-oriented C++ software, developed with the IAR Embedded Workbench and the RL78 Demonstration Kit, controls the device.

IAR Embedded Workbench IDE

The RL78 microcontroller uses the following for system control:

• 20 digital I/Os used as system control lines
• 1 ADC monitors solar cell voltage
• 1 interval timer provides the controller time tick
• Timer array unit: 4 timers capture the width of sensor pulses
• Watchdog timer for system reliability
• Low-voltage detection for reliable operation in intermittent solar conditions
• RTC used in diagnostic logs
• 1 UART used for diagnostics
• Flash memory for storing diagnostic logs

The complete project (description, schematics, diagrams, and code) is now available on the Challenge website.

 

Autonomous Mobile Robot (Part 2): Software & Operation

I designed a microcontroller-based mobile robot that can cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. In the first part of this series, I introduced my TOMBOT robot’s hardware. Now I’ll describe its software and how to achieve autonomous robot behavior.

Autonomous Behavior Model Overview
The TOMBOT is a minimalist system with just enough components to demonstrate some simple autonomous behaviors: Cruise, Escape, Avoid, and Home (see Figure 1). All the behaviors require the left and right servos for maneuverability. In general, “Cruise” just keeps the robot in motion in lieu of any stimulus. “Escape” uses the bumper to sense a collision and then performs a 180° spin with reverse. “Avoid” makes use of continuous forward-looking IR sensors to veer left or right upon approaching a close obstacle. Finally, “Home” utilizes the front optical photocells to provide robot self-guidance toward a strong, highly directional light source.

Figure 1: High-level autonomous behavior flow

Figure 2 shows more details. The diagram captures the interaction of the TOMBOT hardware and software. On the left side of the diagram are the sensors, power sources, and command override (the XBee radio command input). All analog sensor inputs and bumper switches are sampled automatically (every 100 ms) during the Microchip Technology PIC32 Timer 1 interrupt. The left and right bumper switches undergo debounce using 100 ms as a timer increment. The analog sensor inputs are digitized using the PIC32’s 10-bit ADC. Each sensor is assigned its own ADC channel input. The collected data is averaged in some cases and then made available for use by the different behaviors. Processing other than averaging is done within the behavior itself.
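The sampling scheme is easy to see in miniature. The sketch below only mirrors the logic just described (a 100-ms tick that debounces a bumper switch and keeps a running average of photocell samples); the real implementation is C inside the PIC32 Timer 1 interrupt, and the two-tick debounce threshold and variable names here are illustrative assumptions.

```python
# Illustrative only: the actual TOMBOT code runs in a PIC32 Timer 1 interrupt
# in C. This mirrors the described logic: on every 100-ms tick, debounce a
# bumper switch and keep a running average of recent photocell samples.
from collections import deque

DEBOUNCE_TICKS = 2                 # assumed: bumper must agree for 2 ticks (200 ms)
photo_left = deque(maxlen=8)       # last eight samples for averaging
bumper_count = 0
bumper_state = False

def on_100ms_tick(raw_bumper, raw_photo_left):
    """Called once per 100-ms tick with that tick's raw samples."""
    global bumper_count, bumper_state
    # Debounce: only change state after DEBOUNCE_TICKS consistent reads.
    if raw_bumper != bumper_state:
        bumper_count += 1
        if bumper_count >= DEBOUNCE_TICKS:
            bumper_state = raw_bumper
            bumper_count = 0
    else:
        bumper_count = 0
    # Running average of the photocell samples for the behaviors to read.
    photo_left.append(raw_photo_left)
    return bumper_state, sum(photo_left) / len(photo_left)
```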

Figure 2: Detailed TOMBOT autonomous model

All behaviors are implemented as state machines. If a behavior requests motor control, it will be internally arbitrated against all other behaviors before motor action is taken. Escape has the highest priority (the power behavior is not yet implemented) and its state machine will dominate over all the other behaviors. If escape is not active, then avoid will dominate whenever its IR detectors sense an object less than 8″ in front of the TOMBOT. If escape and avoid are not active, then home will take over the robot’s steering to track a light source that is immediately in front of the TOMBOT. Finally, cruise assumes command and temporarily takes the TOMBOT in a forward direction.
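That priority scheme (Escape over Avoid over Home over Cruise) fits in a few lines. The sketch below only mirrors the arbitration logic in Python; the TOMBOT library itself is C, and the request format and function name here are placeholders.

```python
# Illustrative only: the TOMBOT arbiter is written in C. Each behavior's state
# machine returns either None (no request this tick) or a (left, right) wheel
# request; the highest-priority requester wins and drives the servos.
def arbitrate(escape_req, avoid_req, home_req, cruise_req):
    """Return the wheel command of the highest-priority active behavior."""
    for request in (escape_req, avoid_req, home_req, cruise_req):  # priority order
        if request is not None:
            return request
    return (0, 0)   # nothing active: stop

# Example tick: avoid sees an obstacle and home sees a light; avoid wins.
print(arbitrate(None, (-50, 50), (60, 60), (40, 40)))   # -> (-50, 50)
```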

A received command from the XBee RF module can stop and start autonomous operation remotely. This is very handy for system debugging. The complete values of all sensors and the battery power can be viewed on the graphics display using a remote command, with the LEDs and buzzer announcing remote command acceptance and execution.

Currently, the green LED is used to signal that the TOMBOT is ready to accept a command. Red is used to indicate that the TOMBOT is executing a command. The buzzer indicates that the remote command has been completed, coincident with the red LED turning on.

With behavior programming, there are a lot of considerations. For successful autonomous operation, calibration of the photocells, IR sensors, and servos is required. The good news is that each of these behaviors can be isolated (selectively comment out what is not needed prior to compile time), so that phenomena can be studied separately and the proper calibrations made. We will discuss this as we get a little bit deeper into the library API, but in general, the behavior model itself does not require accurate calibration and is fairly robust under less-than-ideal conditions.

TOMBOT Software Library
The TOMBOT robot library is modular. Some experience with C programming is required to use it (see Figure 3).

Figure 3: TOMBOT Library

The entire library is written using Microchip’s PIC32 C compiler. Both the compiler and Microchip’s MPLAB 8.xx IDE are available as free downloads at www.microchip.com. The overall library structure is shown in Figure 3. At the highest level, the library has three main sections: Motor, I/O, and Behavior. We will cover these areas in some detail.

TOMBOT Motor Library
All functions controlling the servos’ (left and right wheel) operation are contained in this part of the library (see Listing 1, Motor.h). In addition, the Microchip PIC32 peripheral library is also used. Motor initialization is required before any other library functions. Motor initialization starts up both the left and right servos in the idle position using the PIC32 PWM peripherals OC3 and OC4 and the dual Timer34 (32 bits) for period setting. C define statements are used to set the pulse period and duty cycle for both the left and right wheels. Within the 20-ms period, these defines vary the pulse width from 1.5 ms up to 2 ms for different-speed CCW rotation and from 1.5 ms down to 1 ms for CW rotation.

Listing 1: All functions controlling the servos are in this part of the library.

V_LEFT and V_RIGHT (velocity left and right) use the PIC32 peripheral library function to set the duty cycle. The other motor functions, in turn, use V_LEFT and V_RIGHT with the define statements. See the FORWARD and BACKWARD functions as an example (Listing 2).

Listing 2: Motor function code examples

In the idle setting, both PWM outputs are set to the 1.5-ms center position, which should cause the servos not to turn. A servo calibration process is required to ensure the center position does not result in any rotation. The servos have a set screw that can be used to adjust the motor idle to no-spin activity with a small Phillips screwdriver.
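As a quick sanity check on those numbers, the pulse widths above translate into small duty cycles within the 20-ms period. The short calculation below is just that arithmetic; it is not part of the library.

```python
# Duty cycles implied by the servo pulse widths above: pulse width / 20-ms period.
# The 1.5-ms value is the idle (no-rotation) center position.
PERIOD_MS = 20.0
for pulse_ms in (1.0, 1.5, 2.0):
    duty = pulse_ms / PERIOD_MS * 100.0
    print(f"{pulse_ms:.1f} ms pulse -> {duty:.1f}% duty cycle")
# 1.0 ms -> 5.0%, 1.5 ms -> 7.5%, 2.0 ms -> 10.0%
```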

TOMBOT I/O Library

This is a collection of different low-level library functions. Let’s deal with these by examining their files and describing the function set, starting with the timer (see Listing 3). It uses the Timer45 combination (a full 32 bits) as a precision timer for behaviors. C define statements set the different time values. The routine is not interrupt-driven at this time and simply waits on the timer timeout to return.

Listing 3: Low-level library functions

The next I/O library function is ADC. There are a total of five analog inputs, all defined below. Each sensor definition corresponds to an integer (a 32-bit number) designating the specific input channel to which a sensor is connected. The five are: Right IR, Left IR, Battery, Left Photo Cell, and Right Photo Cell.

The initialization function configures the ADC peripheral for a specific channel, and the read function performs a 10-bit conversion and returns the result. To facilitate operation across the five sensors, the SCAN_SENSORS function initializes and converts each sensor in turn and places the results in global memory, where the behavior functions can access them. SCAN_SENSORS also maintains a running average of the last eight samples of the left and right photocells (see Listing 4).

Listing 4: SCAN_SENSORS also maintains a running average of the last eight samples
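
To make the running-average idea concrete, here is a minimal sketch (not the actual listing). The adc_read() helper is hypothetical; it stands in for the per-channel initialization-and-conversion sequence.

enum { RIGHT_IR, LEFT_IR, BATTERY, LEFT_PHOTO, RIGHT_PHOTO, NUM_SENSORS };

extern unsigned int adc_read(unsigned int channel);   /* hypothetical 10-bit conversion */

unsigned int sensor[NUM_SENSORS];        /* globals read by the behavior functions */

#define AVG_DEPTH 8
static unsigned int photo_hist[2][AVG_DEPTH];
static unsigned int hist_idx;

void SCAN_SENSORS(void)
{
    unsigned int ch, i, sum;

    for (ch = 0; ch < NUM_SENSORS; ch++)
        sensor[ch] = adc_read(ch);       /* raw result, 0..1023 */

    /* Running average of the last eight photocell samples */
    photo_hist[0][hist_idx] = sensor[LEFT_PHOTO];
    photo_hist[1][hist_idx] = sensor[RIGHT_PHOTO];
    hist_idx = (hist_idx + 1) % AVG_DEPTH;

    for (i = 0, sum = 0; i < AVG_DEPTH; i++) sum += photo_hist[0][i];
    sensor[LEFT_PHOTO] = sum / AVG_DEPTH;

    for (i = 0, sum = 0; i < AVG_DEPTH; i++) sum += photo_hist[1][i];
    sensor[RIGHT_PHOTO] = sum / AVG_DEPTH;
}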

The next I/O library function is Graphics (see Listing 5). TOMBOT uses a 102 × 64 monochrome graphics display module that has both red and green LED backlights, plus independently controlled red and green LEDs on the module itself. The module is driven by the PIC32 SPI2 interface and has several control lines: CS (chip select) and A0 (command/data).

Listing 5: The Graphics I/O library function

The graphics display relies on an 8 × 8 font, stored as a project file, for character generation. The library also provides cursor-position macros, functions to write characters or text strings, and functions to draw 32 × 32 bitmaps. The graphics primitives shown cover initialization, module control, and writing to the module. Drawing functions write to a RAM Vmap memory area, and the screen is then updated from that RAM area using the dumpVmap function. The LED and backlight controls are also included in the graphics library.
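
The draw-to-RAM, then dump-to-screen idea can be sketched as follows. This is illustrative only: the buffer layout, font table, command bytes, and spi2_* helpers are assumptions, not the module’s actual protocol.

#define SCREEN_W     102
#define SCREEN_PAGES 8                         /* 64 rows / 8 rows per byte-wide page */

static unsigned char Vmap[SCREEN_PAGES][SCREEN_W];    /* RAM copy of the screen */

extern const unsigned char font8x8[][8];       /* 8 x 8 font stored with the project  */
extern void spi2_write_cmd(unsigned char c);   /* hypothetical: A0 low, write a byte  */
extern void spi2_write_data(unsigned char d);  /* hypothetical: A0 high, write a byte */

void put_char(unsigned int page, unsigned int col, char c)
{
    unsigned int i;
    for (i = 0; i < 8 && (col + i) < SCREEN_W; i++)
        Vmap[page][col + i] = font8x8[(unsigned char)c][i];   /* draw into RAM only */
}

void dumpVmap(void)                            /* push the RAM image to the module */
{
    unsigned int page, col;
    for (page = 0; page < SCREEN_PAGES; page++) {
        spi2_write_cmd(0xB0 | page);           /* typical "set page address" command */
        spi2_write_cmd(0x10);                  /* column address, high nibble        */
        spi2_write_cmd(0x00);                  /* column address, low nibble         */
        for (col = 0; col < SCREEN_W; col++)
            spi2_write_data(Vmap[page][col]);
    }
}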

The next part of the I/O library is delay (see Listing 6). It is just a series of software delays that can be used by other library functions; they were included only because of legacy use within the graphics library.

Listing 6: Series of different software delays

The next I/O library function is UART-XBEE (see Listing 7). This is the serial driver that configures and transfers data through the XBee radio on the robot side. The library is fairly straightforward: an initialization function sets up UART1B for 9,600 bps, 8N1, transmit and receive.

Listing 7: XBee library functions

Transmission is done one character at a time. Reception is handled by an interrupt service routine, which retrieves the received character and sets a semaphore flag. For this communication, I use a SparkFun XBee dongle, configured through USB as a COM port, and run HyperTerminal or an equivalent application on the PC. The XBee’s default settings are all that is required (see Photo 1).
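
The receive path can be sketched like this. The uart_* helpers and the ISR hookup are placeholders; the real driver works directly with the UART registers and the compiler’s ISR macro.

extern char uart_getc_hw(void);     /* hypothetical: read the UART receive register        */
extern void uart_putc_hw(char c);   /* hypothetical: wait for space, write the TX register */

volatile char         xbee_rx_char;
volatile unsigned int xbee_rx_flag;     /* semaphore: set by the ISR, cleared by the consumer */

/* Body of the UART receive ISR (registered with the compiler's ISR macro
   in the real library): grab the character and raise the flag. */
void xbee_rx_service(void)
{
    xbee_rx_char = uart_getc_hw();
    xbee_rx_flag = 1;
}

void xbee_putc(char c)              /* transmit one character at a time */
{
    uart_putc_hw(c);
}

int xbee_poll(char *c)              /* called from the Remote behavior  */
{
    if (!xbee_rx_flag) return 0;
    *c = xbee_rx_char;
    xbee_rx_flag = 0;
    return 1;
}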

Photo 1: XBee PC to TOMBOT communications

The next I/O library function is the buzzer (see Listing 8). It uses a simple digital output (Port F bit 1) to control the buzzer. The functions initialize the buzzer output and then turn the buzzer on and off.

Listing 8: The functions initialize buzzer control
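
Because the buzzer driver is just one digital output, a sketch of it is short. The register bit names are the standard PIC32 Port F definitions; this is a sketch, not the actual listing.

#include <p32xxxx.h>    /* PIC32 device registers (<xc.h> with newer tools) */

void buzzer_init(void)
{
    TRISFbits.TRISF1 = 0;    /* Port F bit 1 as an output */
    LATFbits.LATF1   = 0;    /* buzzer off                */
}

void buzzer_on(void)  { LATFbits.LATF1 = 1; }
void buzzer_off(void) { LATFbits.LATF1 = 0; }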

TOMBOT Behavior Library
The Behavior library is the heart of the autonomous TOMBOT; it is where the integrated behavior happens. All of the behaviors use the left and right servos for autonomous maneuverability. Each behavior is a finite state machine that interacts with the environment every 0.1 s. Each behavior also has a designated priority for wheel operation, and these priorities are resolved by the arbiter for final wheel activation. Listing 9 shows the API for the entire Behavior library.

Listing 9: The API for the entire behavior library

Let’s briefly cover the specifics.

  • “Cruise” just keeps the robot in motion in the absence of any stimulus.
  • “Escape” uses the bumper to sense a collision, then backs up and spins 180°.
  • “Avoid” uses the continuous forward-looking IR sensors to veer left or right when approaching a close obstacle.
  • “Home” uses the front photocells to guide the robot toward a strong, highly directional light source.
  • “Remote operation” allows the TOMBOT to respond to the PC via XBee communications to enter/exit autonomous mode, report status, or execute a predetermined motion scenario (e.g., spin X times, run back and forth X times).
  • “Dump” is an internal function that is used within Remote.
  • “Arbiter” is an internal function that is an intrinsic part of the behavior library that resolves different behavior priorities for wheel activation.

Here’s an example of the main function invoking different behaviors through the API (see Listing 10). Note that this is part of a main loop: behaviors can be called within the loop, or “stacked up.” You can remove or stack behaviors as you choose (simply comment out what you don’t need and recompile). Keep in mind that Remote is the way a remote operator controls operation or views status.

Listing 10: TOMBOT API Example
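
Listing 10 itself is not reproduced here, but a stacked-behavior main loop follows this general shape. The function names below mirror the behavior names in the text; the actual API spelling and initialization calls may differ.

/* Illustrative names only; see the actual library API (Listing 9) */
extern void motor_init(void), xbee_init(void), graphics_init(void), buzzer_init(void);
extern void SCAN_SENSORS(void), timer_wait_ms(unsigned int);
extern void remote(void), cruise(void), home(void), avoid(void), escape(void), arbiter(void);
extern volatile int autonomous;

int main(void)
{
    motor_init();
    xbee_init();
    graphics_init();
    buzzer_init();

    while (1) {
        SCAN_SENSORS();        /* refresh the sensor globals each pass     */

        remote();              /* PC commands: enter/exit autonomous mode  */
        if (autonomous) {      /* comment out any behavior you don't need  */
            cruise();          /* lowest priority: keep moving             */
            home();            /* steer toward a strong light source       */
            avoid();           /* veer away from nearby IR obstacles       */
            escape();          /* highest priority: bumper collision       */
        }
        arbiter();             /* resolve requests and drive the wheels    */

        timer_wait_ms(100);    /* ~0.1-s behavior cycle                    */
    }
    return 0;
}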

Let’s now examine the detailed state machine associated with each behavior to gain a better understanding of behavior operation (see Listing 11).

Listing 11:The TOMBOT’s arbiter

The TOMBOT’s arbiter is simple: it is a fixed-priority arbiter. If either Escape or Avoid is active, it abdicates to those behaviors and lets them resolve motor control internally. Home and Cruise motor-control requests are handled directly by the arbiter (see Listing 12).
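
In code form, a fixed arbiter of this kind might look like the following sketch; the request flags and motion calls are illustrative names, not the library’s own.

enum wheel_req { REQ_NONE, REQ_FORWARD, REQ_LEFT, REQ_RIGHT };

extern int escape_active, avoid_active;       /* those behaviors drive the wheels themselves */
extern enum wheel_req home_req, cruise_req;   /* requests posted each behavior pass          */
extern void FORWARD(void), SPIN_LEFT(void), SPIN_RIGHT(void);

void arbiter(void)
{
    if (escape_active || avoid_active)
        return;                               /* abdicate to the higher-priority behaviors */

    if (home_req == REQ_LEFT)           SPIN_LEFT();
    else if (home_req == REQ_RIGHT)     SPIN_RIGHT();
    else if (home_req == REQ_FORWARD)   FORWARD();
    else if (cruise_req == REQ_FORWARD) FORWARD();

    home_req = cruise_req = REQ_NONE;         /* requests are consumed every pass */
}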

Listing 12: Home behavior

Home is still being debugged and is not yet final. The goal of the Home behavior is to steer the robot toward a strong light source when it is not engaged in higher-priority behaviors.

The Cruise behavior sets the motors to forward operation for one second if no higher-priority behaviors are active (see Listing 13).

Listing 13: Cruise behavior

The Escape behavior tests the bumper switch state to determine whether a bump has occurred (see Listing 14). Once a bump is detected, it runs through a series of states: an immediate backup, then a turn, and then moving away from the obstacle.

Listing 14: Escape behavior
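
As a rough sketch of that state sequence (not the listing itself), with placeholder durations standing in for the tuned values:

enum esc_state { ESC_IDLE, ESC_BACKUP, ESC_SPIN };

extern int  bumper_hit(void);                 /* hypothetical: either bumper switch closed */
extern void BACKWARD(void), SPIN_LEFT(void), MOTOR_IDLE(void);
extern void timer_wait_ms(unsigned int);

int escape_active;

void escape(void)
{
    static enum esc_state state = ESC_IDLE;

    switch (state) {
    case ESC_IDLE:
        if (bumper_hit()) { escape_active = 1; state = ESC_BACKUP; }
        break;
    case ESC_BACKUP:
        BACKWARD();
        timer_wait_ms(500);                   /* immediate backup (placeholder time)  */
        state = ESC_SPIN;
        break;
    case ESC_SPIN:
        SPIN_LEFT();
        timer_wait_ms(1000);                  /* roughly a 180-degree spin, then done */
        MOTOR_IDLE();
        escape_active = 0;
        state = ESC_IDLE;
        break;
    }
}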

The dump function responds to the remote “C” (capture) command. It formats the left and right IR readings, the left and right photocell readings, and the battery voltage in floating-point form and dumps them to the graphics display (see Listing 15).

Listing 15: The dump function

The Avoid behavior uses the IR sensors to determine whether an object is within 8″ of the front of the TOMBOT (see Listing 16).

Listing 16: Avoid behavior

If both sensors detect a target within 8″, the robot simply turns around and moves away (much like Escape). If only the right sensor detects an object in range, the robot spins away from the right side; if only the left sensor does, it spins away from the left side (see Listing 17).
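
A sketch of that decision logic follows. The threshold and timings are placeholders, and the sensor indices refer to the globals filled by SCAN_SENSORS.

#define IR_NEAR 500                      /* placeholder ADC threshold for roughly 8 inches */

enum { RIGHT_IR, LEFT_IR };              /* channel indices, as in the I/O library sketch  */

extern unsigned int sensor[];            /* filled by SCAN_SENSORS()                       */
extern void BACKWARD(void), SPIN_LEFT(void), SPIN_RIGHT(void);
extern void timer_wait_ms(unsigned int);

int avoid_active;

void avoid(void)
{
    int left_near  = sensor[LEFT_IR]  > IR_NEAR;   /* Sharp IR output rises as an */
    int right_near = sensor[RIGHT_IR] > IR_NEAR;   /* obstacle gets closer        */

    avoid_active = left_near || right_near;
    if (!avoid_active)
        return;

    if (left_near && right_near) {       /* blocked ahead: turn around, like Escape */
        BACKWARD();    timer_wait_ms(500);
        SPIN_LEFT();   timer_wait_ms(1000);
    } else if (right_near) {             /* obstacle on the right: spin away left   */
        SPIN_LEFT();   timer_wait_ms(300);
    } else {                             /* obstacle on the left: spin away right   */
        SPIN_RIGHT();  timer_wait_ms(300);
    }
    avoid_active = 0;
}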

Listing 17: Remote part 1

The Remote behavior is fairly comprehensive (see Listing 18). There are 14 different cases, each driven by a different character received over the XBee radio. Once a character is received, the red LED is turned on; once the behavior completes, the red LED is turned off and the buzzer is sounded.

Listing 18: Remote part 2

The first case toggles autonomous mode on and off. The other 13 are prescribed actions: seven demonstrate the TOMBOT’s mobility and agility with multiple spins and back-and-forth runs, while the remaining six are standard single-step debug commands such as stop, backward, and capture. Capture dumps all sensor output to the display screen (see Table 1).
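
The dispatch structure can be sketched as a switch on the received character. The command letters below are illustrative (only the capture/dump command is named in the text), and the helpers are the sketches shown earlier.

extern int  xbee_poll(char *c);
extern void red_led(int on);             /* hypothetical LED helper */
extern void buzzer_on(void), buzzer_off(void);
extern void dump(void), BACKWARD(void), MOTOR_IDLE(void);
extern void timer_wait_ms(unsigned int);

volatile int autonomous;

void remote(void)
{
    char cmd;

    if (!xbee_poll(&cmd))
        return;                          /* nothing received this pass */

    red_led(1);                          /* command accepted           */
    switch (cmd) {
    case 'A': autonomous = !autonomous; break;   /* toggle autonomous mode     */
    case 'C': dump();                   break;   /* capture sensors to display */
    case 'S': MOTOR_IDLE();             break;   /* stop                       */
    case 'B': BACKWARD();               break;   /* single-step backward       */
    /* ...the remaining demo-motion cases go here... */
    default:  break;
    }
    red_led(0);                          /* command complete           */
    buzzer_on();
    timer_wait_ms(100);
    buzzer_off();
}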

Table 1: TOMBOT remote commands

Early Findings & Implementation
Implementation always presents choices. In my case, I was interested in rapid development, so I chose non-interrupt code with a linear flow for easy debugging. This amounts to “blocking code.” Blocking code is used throughout the behavior implementation and makes the robot nonresponsive while a block is in progress: during the timeout functions, the robot is “blind” to outside environmental conditions. Using a real-time operating system (e.g., FreeRTOS) to eliminate this problem is recommended.
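
Short of moving to an RTOS, one way to reduce blocking is to replace each busy-wait with a deadline check that the behavior polls on every pass, as in this small sketch (millis() is a hypothetical free-running millisecond counter):

extern unsigned int millis(void);        /* hypothetical free-running millisecond counter */

/* Returns 1 once 'duration' ms have elapsed since the step was armed.
   The behavior calls this every pass and returns immediately otherwise,
   so the main loop keeps scanning sensors instead of busy-waiting. */
int step_elapsed(unsigned int *start, unsigned int duration)
{
    if (*start == 0)
        *start = millis();               /* arm the deadline on first call */
    if (millis() - *start < duration)
        return 0;                        /* still within this step         */
    *start = 0;                          /* reset for the next trigger     */
    return 1;
}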

The TOMBOT also uses photocells for homing. These sensitive devices vary in their response and need to be calibrated to behave consistently. A photocell calibration step is therefore included in the baseline and performed prior to operation.

TOMBOT Demo

The TOMBOT was successfully demoed to a large first-grade class in southern California as part of a Science, Technology, Engineering and Mathematics (STEM) program. The main behaviors were limited to Remote, Avoid, and Escape. With autonomous operation off, the robot demonstrated mobility and maneuverability. With autonomous operation on, the robot could interact with a student to demonstrate the Avoid and Escape behaviors.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.

DIY Green Energy Design Projects

Ready to start a low-power or energy-monitoring microcontroller-based design project? You’re in luck. We’re featuring eight award-winning, green energy-related designs that will help get your creative juices flowing.

The projects listed below placed at the top of Renesas’s RL78 Green Energy Challenge.

Electrostatic Cleaning Robot: Solar tracking mirrors, called heliostats, are an integral part of Concentrating Solar Power (CSP) plants. They must be kept clean to help maximize the production of steam, which generates power. Using an RL78, the innovative Electrostatic Cleaning Robot provides a reliable cleaning solution that’s powered entirely by photovoltaic cells. The robot traverses the surface of the mirror and uses a high voltage AC electric field to sweep away dust and debris.

Parts and circuitry inside the robot cleaner

Cloud Electrofusion Machine: Using approximately 400 times less energy than commercial electrofusion machines, the Cloud Electrofusion Machine is designed for welding 0.5″ to 2″ polyethylene fittings. The RL78-controlled machine is designed to read a barcode on the fitting which determines fusion parameters and traceability. Along with the barcode data, the system logs GPS location to an SD card, if present, and transmits the data for each fusion to a cloud database for tracking purposes and quality control.

Inside the electrofusion machine (Source: M. Hamilton)

The Sun Chaser: A GPS Reference Station: The Sun Chaser is a well-designed, solar-based energy harvesting system that automatically recalculates the direction of a solar panel to ensure it is always facing the sun. Mounted on a rotating disc, the solar panel’s orientation is calculated using the registered GPS position. With an external compass, the internal accelerometer, a DC motor and stepper motor, you can determine the solar panel’s exact position. The system uses the Renesas RDKRL78G13 evaluation board running the Micrium µC/OS-III real-time kernel.


Water Heater by Solar Concentration: This solar water heater is powered by the RL78 evaluation board and designed to deflect concentrated amounts of sunlight onto a water pipe for continual heating. The deflector, armed with a counterweight for easy tilting, automatically adjusts the angle of reflection for maximum solar energy using the lowest power consumption possible.

RL78-based solar water heater (Source: P. Berquin)

Air Quality Mapper: Want to make sure the air along your daily walking path is clean? The Air Quality Mapper is a portable device designed to track levels of CO2 and CO gasses for constructing “Smog Maps” to determine the healthiest routes. Constructed with an RDKRL78G13, the Mapper receives location data from its GPS module, takes readings of the CO2 and CO concentrations along a specific route and stores the data in an SD card. Using a PC, you can parse the SD card data, plot it, and upload it automatically to an online MySQL database that presents the data in a Google map.

Air quality mapper design (Source: R. Alvarez Torrico)

Wireless Remote Solar-Powered “Meteo Sensor”: You can easily measure meteorological parameters with the “Meteo Sensor.” The RL78 MCU-based design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. Receivers are configured to listen for incoming data on the same radio channel. The design simplifies the way weather data is gathered and eases construction of local measurement networks, while being optimized for low energy usage and long battery life.

The design takes cyclical measurements of temperature, humidity, atmospheric pressure, and supply voltage, and shares them using digital radio transceivers. (Source: G. Kaczmarek)

Portable Power Quality Meter: Monitoring electrical usage is becoming increasingly popular in modern homes. The Portable Power Quality Meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis.

The portable power quality meter uses an RL78 MCU to read power factor, total harmonic distortion, line frequency, voltage, and electrical consumption information and stores the data for analysis. (Source: A. Barbosa)

High-Altitude Low-Cost Experimental Glider (HALO): The “HALO” experimental glider project consists of three main parts. A weather balloon is the carrier section. A glider (the payload of the balloon) is the return section. A ground base section is used for communication and displaying telemetry data (not part of the contest project). Using the REFLEX flight simulator for testing, the glider has its own micro-GPS receiver, sensors, and low-power MCU unit. It can take off, climb to a pre-programmed altitude, and return to a given coordinate.

High-altitude low-cost experimental glider (Source: J. Altenburg)

Autonomous Mobile Robot (Part 1): Overview & Hardware

Welcome to “Robot Boot Camp.” In this two-part article series, I’ll explain what you can do with a basic mobile machine, a few sensors, and behavioral programming techniques. Behavioral programming provides distinct advantages over other programming techniques: it is independent of any environmental model, it is more robust in the face of sensor error, and behaviors can be stacked and run concurrently.

My objectives for my recent robot design were fairly modest. I wanted to build a robot that could cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. I knew that if I could meet these objectives, other more complex behaviors would be possible (e.g., self-docking on low power). There are certainly many commercial robots on the market that could have met my requirements, but I decided that my best bet would be to roll my own. I wanted to keep things simple, and I wanted to fully understand the sensors and controls required for behavioral autonomous operation. The TOMBOT is the fruit of that labor (see Photo 1a). A colleague came up with the name TOMBOT in honor of its inventor, and the name kind of stuck.

Photo 1a—The complete TOMBOT design. b—The graphics display is a nice feature.

In this series of articles, I’ll present lessons learned and describe the hardware/software design process. The series will detail TOMBOT-style robot hardware and assembly, as well as behavior programming techniques using C code. By the end of the series, I’ll have covered a complete behavior programming library and API, which will be available for experimentation.

DESIGN BASICS

The TOMBOT robot is certainly minimal, no frills: two continuous-rotation, variable-speed control servos; two IR (850 nm) analog distance measurement sensors (4- to 30-cm range); two CdS photoconductive cells with good lux response in the visible spectrum; and, finally, a front bumper (switch-activated) for collision detection. The platform is simple: servos and sensors mounted on the left and right sides of two stacked platforms. The bottom platform houses the bumper, batteries, and servos. The top platform houses the sensors and microcontroller electronics. The back part of the bottom platform uses a central skid for balance between the two servos (see Photo 1).

Given my background as a Microchip Developer and Academic Partner, I used a Microchip Technology PIC32 microcontroller, a PICkit 3 programmer/debugger, and a free Microchip IDE and 32-bit compiler for TOMBOT. (Refer to the TOMBOT components list at the end of this article.)

It was a real thrill to design and build a minimal-capability robot that can—with stacked programming behaviors—emulate some “intelligence.” TOMBOT is still a work in progress, but I recently had the privilege of demoing it to a first-grade class in El Segundo, CA, as part of a Science, Technology, Engineering and Mathematics (STEM) initiative. The results were very rewarding, but more on that later.

BEHAVIORAL PROGRAMMING

A control system for a completely autonomous mobile robot must perform many complex information-processing tasks in real time, even for simple applications. The traditional method of building control systems for such robots is to separate the problem into a series of sequential functional components. An alternative approach is behavioral programming. The technique was introduced by Rodney Brooks of the MIT Robotics Lab, and it has been very successful in many commercial robots, such as the popular Roomba vacuuming robot. It has even been adopted for space applications like NASA’s Mars rovers and for military seekers.

Programming a robot according to behavior-based principles makes the program inherently parallel, enabling the robot to attend simultaneously to all hazards it may encounter as well as any serendipitous opportunities that may arise. Each behavior functions independently through sensor registration, perception, and action. In the end, all behavior requests are prioritized and arbitrated before action is taken. By stacking the appropriate behaviors, using arbitrated software techniques, the robot appears to show (broadly speaking) “increasing intelligence.” The TOMBOT modestly achieves this objective using selective compile configurations to emulate a series of robot behaviors (i.e., Cruise, Home, Escape, Avoid, and Low Power). Figure 1 is a simple model illustration of a behavior program.

Figure 1: Behavior program

Joseph Jones’s Robot Programming: A Practical Guide to Behavior-Based Robotics (TAB Electronics, 2003) is a great reference book that helped guide me in this effort. It turns out that Jones was part of the design team for the Roomba product.

Debugging a mobile platform that is executing a series of concurrent behaviors can be a daunting task. So, to make things easier, I implemented complete remote control using a wireless link between the robot and a PC. With this link, I can enable or disable autonomous behavior, retrieve the robot’s sensor status and mode of operation, and head off potential robot hazards. In addition, I implemented some operator feedback using a small graphics display, LEDs, and a simple sound buzzer. Note the TOMBOT’s power-up display in Photo 1b. We take Robot Boot Camp very seriously.

Minimalist System

As you can see in the robot’s block diagram (see Figure 2), the TOMBOT is very much a minimalist system with just enough components to demonstrate autonomous behaviors: Cruise, Escape, Avoid, and Home. All these behaviors require the use of left and right servos for autonomous maneuverability.

Figure 2: The TOMBOT system

The Cruise behavior just keeps the robot in motion in the absence of any stimulus. The Escape behavior uses the bumper to sense a collision, then backs up and spins 180°. The Avoid behavior uses the forward-looking IR sensors to veer left or right when approaching a close obstacle. The Home behavior uses the front photocells to guide the robot toward a strong, highly directional light source. It should all add up to some very distinct “intelligent” operation. Figure 3 depicts the basic sensor and electronic layout.

Figure 3: Basic sensor and electronic layout

TOMBOT Assembly

The TOMBOT uses the low-cost robot platform (ArBot Chassis) and wheel set (X-Wheel assembly) from Budget Robotics (see Figure 4).

Figure 4: The platform and wheel set

A picture is worth a thousand words. Photo 2 shows two views of the TOMBOT prototype.

Photo 2a: The TOMBOT’s Sharp IR sensors, photo assembly, and more. b: The battery pack, right servo, and more.

Photo 2a shows the dual Sharp IR sensors. Just below them is the photocell assembly, a custom board with dual CdS GL5528 photoconductive cells and 2.2-kΩ current-limiting resistors. Below this is a bumper assembly consisting of two SPDT snap-action switches with levers (All Electronics Corp. CAT# SMS-196, left and right) fixed to a custom prefab plastic front bumper. Also shown are the solderless breadboard and the left servo. Photo 2b shows the rechargeable battery pack that resides on the lower base platform and its associated power switch. The electronics stack is visible: the XBee/buzzer and graphics card modules reside on the 32-bit Experimenter, which plugs into a custom carrier board that interconnects through the solderless breadboard to the rest of the system. Finally, note that the right servo is highlighted. The total TOMBOT package is not ideal; but remember, I’m talking about a prototype, and this particular configuration has held up nicely in several field demos.

I used Parallax (Futaba) continuous-rotation servos. They use a three-wire connector (+5 V, GND, and Control).

Figure 5 depicts a second-generation bumper assembly. The same snap-action switches, with extended levers bent and fashioned to fit, interconnect to form the bumper assembly as shown.

Figure 5: Second-generation bumper assembly

TOMBOT Electronics

A 32-bit Micro Experimenter is used as the CPU. This board is based on the high-end Microchip Technology PIC32MX695F512H (64-pin TQFP) with 128 KB of RAM, 512 KB of flash memory, and an 80-MHz clock. I did not want to skimp on this component during the prototype phase. In addition, the 32-bit Experimenter supports a 102 × 64 monochrome graphics card with green/red backlight controls and LEDs. Since a full graphics library was already bundled with this Experimenter graphics card, it also represented good risk reduction during the prototyping phase. Details for both cards are available on the Kiba website.

The Experimenter supports six basic board-level connections to the outside world using the JP1, JP2, JP3, JP4, BOT, and TOP headers. A custom carrier board interfaces to the Experimenter via these connections and provides power and signal connections to the sensors and servos. The custom carrier accepts battery voltage and regulates it to +5 VDC. This +5 V is then further regulated by the Experimenter to its native +3.3-VDC operation. The solderless breadboard supports a resistor network that lets the +3.3-V PIC processor sense the +9-V battery voltage. The breadboard also contains an LM324 quad op-amp that buffers the processor’s +3.3-V logic to the +5-V operation required by the servos. Figure 6 is a detailed schematic diagram of the electronics.

Figure 6: The design’s circuitry

A custom card carrying the XBee radio and buzzer was built that plugs into the Experimenter’s TOP and BOT connections. Photo 3 shows the modules and the carrier board. The robot uses a rechargeable 1,600-mAh battery system (typical of mid-range wireless toys) that provides hours of uninterrupted operation.

Photo 3: The modules and the carrier board

PIC32 On-Chip Peripherals

The TOMBOT uses PWM for the servos, a UART for the XBee, SPI and digital I/O for the LCD, analog input channels for all the sensors, and digital I/O for the buzzer and bumper detection. The key peripheral connections from the Experimenter to the rest of the system are shown in Figure 7.

Figure 7: Peripheral usage

The PIC32 pinouts and their associated Experimenter connections are detailed in Figure 8.

Figure 8: PIC32 peripheral pinouts and EXP32 connectors

The TOMBOT Motion Basics and the PIC32 Output Compare Peripheral

Let’s review the basics of TOMBOT motor control. The robot uses Parallax (Futaba) continuous-rotation servos. With two-wheel control, robot motion is controlled as shown in Table 1.

Table 1: Robot motion

The servos are controlled using a PWM pattern with a 20-ms (50-Hz) period, where the pulse width can vary from 1.0 ms to 2.0 ms. The effect of the different pulse widths on the servos is shown in Figure 9.

Figure 9: Servo PWM control

The PIC32 microcontroller (used in the Experimenter) has five output compare modules (OCx, where x = 1, 2, 3, 4, 5). We use two of them, OC3 and OC4, to generate the PWM that controls servo speed and direction. An OCx module can use 16-bit Timer2 (TMR2), 16-bit Timer3 (TMR3), or the two combined as 32-bit Timer23 as its time base, with the period register (PR) setting the output pulse period. In our case, we use Timer23 with the period set to 20 ms (50 Hz). The OCxRS and OCxR registers are loaded with a value that controls the width of the pulse generated during each output period. This value is compared against the timer during each period cycle: the OCx output starts high, and when a match occurs the OCx logic drives the output low. This repeats on a cycle-by-cycle basis (see Figure 10).

Figure 10: PWM generation
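
As a sketch of that setup using the legacy peripheral-library calls (flag and function names should be verified against your plib headers; the prescale and tick constants assume an 80-MHz peripheral clock with a 1:64 prescale, 0.8 µs per tick):

#include <plib.h>

#define PWM_PERIOD  25000u    /* 20 ms  */
#define PULSE_1MS    1250u    /* 1.0 ms */
#define PULSE_1P5MS  1875u    /* 1.5 ms */
#define PULSE_2MS    2500u    /* 2.0 ms */

void motor_init(void)
{
    /* 32-bit Timer23 provides the 20-ms period time base */
    OpenTimer23(T2_ON | T2_32BIT_MODE_ON | T2_PS_1_64, PWM_PERIOD);

    /* OC3 (left) and OC4 (right) in PWM mode, clocked from Timer23; the
       initial pulse width is loaded into the OCxR/OCxRS compare registers */
    OpenOC3(OC_ON | OC_TIMER2_SRC | OC_TIMER_MODE32 | OC_PWM_FAULT_PIN_DISABLE,
            PULSE_1P5MS, PULSE_1P5MS);
    OpenOC4(OC_ON | OC_TIMER2_SRC | OC_TIMER_MODE32 | OC_PWM_FAULT_PIN_DISABLE,
            PULSE_1P5MS, PULSE_1P5MS);
}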

Next Comes Software

We have set the goals and objectives for our autonomous robot and covered the hardware associated with it. In the next installment, we will describe the software and operation.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.