Adaptive Robotics: An Interview with Henk Kiela

The Adaptive Robotics Lab at Fontys University in Eindhoven, Netherlands, has a high “Q” factor (think “007”). Groups of students are always working on robotics projects. Systems are constantly humming. Robots are continually moving around. Amid the melee, Circuit Cellar interviewed Professor Henk Kiela about the lab, innovations like adaptive robotics, and more.

“Adaptive robotics is the new breed of robots that are going to assist workers on the shopfloor and that will take care of a high variety of routine activities. Relieving them from routine work allows the workers to concentrate on their skills and knowledge and prevent them from getting lost in details. In a car-manufacturing operation you have a lot of robots doing more or less the same job, a top-down controlled robotization. We recognise that the new generation of robots will act more like an assistant for the worker— a flexible workforce that can be configured for different types of activities.”—Henk Kiela

3-D Object Segmentation for Robot Handling

A commercial humanoid service robot needs the capability to perform human-like tasks. One such task in a medical scenario would be providing medicine to a patient: the robot must detect the medicine bottle and move its hand to the object to pick it up. Locating a medicine bottle and picking it up is trivial for a human, but it is a challenging problem for a robot, which must make sense of its environment based on the visual information it receives from a camera. Creating efficient algorithms to identify an object of interest in an image, calculate the location of the robot’s arm in space, and guide the arm to pick up the object is a daunting task. For our senior capstone project at Portland State University, we researched techniques that would enable a humanoid robot to locate and identify a common object (e.g., a medicine bottle) and acquire real-time position information about the robot’s hand in order to guide it to the target object. We used an InMoov open-source, 3-D humanoid robot for this project (see Photo 1).

Photo 1: The InMoov robot built at Portland State University’s robotics lab


In the field of computer vision, there are two dominant approaches to this problem—one using pixel-based 2-D imagery and another using 3-D depth imagery. We chose the 3-D approach because of the availability of state-of-the-art open source algorithms, and because of the recent influx of cheap stereo depth cameras, like the Intel RealSense R200.

Solving this problem further requires a proper combination of hardware and software along with a physical robot to implement the concept. We used an Intel RealSense R200 depth camera to collect 3-D images, and an Intel NUC with a 5th Generation Core i5 to process the 3-D image information. Likewise, for software, we used the open-source Point Cloud Library (PCL) to process 3-D point cloud data.[1] PCL contains several state-of-the-art 3-D segmentation and recognition algorithms, which made it easier for us to compare our design with other works in the same area. Similarly, the information relating to the robot arm and object position computed using our algorithms is published to the robot via the Robot Operating System (ROS). It can then be used by other modules, such as a robot arm controller, to move the robot hand.


Object segmentation is widely applied in computer vision to locate objects in an image.[2] The basic architecture of our package, as well as many others in this field, is a sequence of processing stages—that is, a pipeline. The segmentation pipeline starts with capturing an image from a 3-D depth camera. By the last stage of the pipeline, we have obtained the location and boundary information of the objects of interest, such as the hand of the robot and the nearest grabbable object.

Figure 1: 3-D object segmentation pipeline

The object segmentation pipeline of our design is shown in Figure 1. There are four main stages in our pipeline: downsampling the input raw image, using RANSAC and plane extraction algorithms, using the Euclidean Clustering technique to segment objects, and applying a bounding box to separate objects. Let’s review each one.

The raw clouds coming from the camera have a resolution which is far too high for segmentation to be feasible in real time. The basic technique for solving this problem is called “voxel filtering,” which entails compressing several nearby points into a single point.[3] In other words, all points in some specified cubical region of volume will be combined into a single point. The parameter that controls the size of this volume element is called the “leaf size.” Figure 2 shows an example of applying the voxel filter with several different leaf sizes. As the leaf size increases, the point cloud density decreases proportionally.
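To make the voxel filter concrete, here is a minimal Python sketch of the idea. The function name and data layout are ours for illustration; PCL's actual `VoxelGrid` filter is a C++ class, but it performs the same operation of replacing each occupied voxel with the centroid of its points:

```python
from collections import defaultdict

def voxel_downsample(points, leaf_size):
    """Collapse all points falling in the same cubic voxel into their centroid.

    points: iterable of (x, y, z) tuples; leaf_size: voxel edge length.
    """
    voxels = defaultdict(list)
    for x, y, z in points:
        # Integer voxel coordinates index the cube containing each point.
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        voxels[key].append((x, y, z))
    # Emit one output point per occupied voxel: the centroid of its members.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in voxels.values()]
```

With a larger `leaf_size`, more points fall into each cube, so the output cloud is proportionally sparser, which is the effect shown in Figure 2.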

Figure 2: Down-sampling results for different leaf sizes

Random sample consensus (RANSAC) is a quick method of finding mathematical models. In the case of a plane, the RANSAC method creates a virtual plane that is then rotated and translated throughout the scene, looking for the placement with the most data points that fit the model (i.e., inliers). The two parameters used are the threshold distance and the number of iterations. The greater the threshold, the thicker the plane can be. The more iterations RANSAC is allowed, the greater the probability of finding the plane with the most inliers.

Figure 3: The effects of varying the number of iterations of RANSAC. Notice that the plane on the left (a), which only used 200 iterations, was not correctly identified, while the one on the right (b), with 600 iterations, was correctly identified.

Refer to Figure 3 to see what happens as the number of iterations is changed. The blue points represent the original data. The red points represent the plane inliers. The magenta points represent the noise (i.e., outliers) remaining after a prism extraction. As you can see, the image on the left shows how the plane of the table was not found due to RANSAC not being given enough iterations. The image on the right shows the plane being found, and the objects above the plane are properly segmented from the original data.
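As a rough illustration of how RANSAC trades iterations for confidence, here is a minimal Python sketch of plane fitting. The function name, parameters, and seeding are ours, not PCL's; PCL's `SACSegmentation` module adds many refinements on top of this basic loop:

```python
import random

def ransac_plane(points, threshold, iterations, seed=0):
    """Find the plane with the most inliers among random 3-point candidates.

    Returns (plane, inliers), where plane is (n, d) with unit normal n
    satisfying n . p + d = 0 for points p on the plane.
    """
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        # Hypothesize a plane from three randomly sampled points.
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # Plane normal is the cross product of the two edge vectors.
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm == 0:          # degenerate (collinear) sample; try again
            continue
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        # Points within the threshold distance of the plane are inliers.
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (n, d), inliers
    return best_plane, best_inliers
```

Each iteration hypothesizes a plane from three random points and counts the inliers within the threshold distance, so more iterations raise the chance that at least one hypothesis lands on the dominant plane.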

After RANSAC and plane extraction in the segmentation pipeline, Euclidean Clustering is performed. This process takes the down-sampled point cloud—without the plane and its convex hull—and breaks it into clusters, each of which should correspond to one of the objects on the table.[4] This is accomplished by first creating a kd-tree data structure, which stores the remaining points in the cloud in a way that can be searched efficiently. The cloud points are then iterated over, with a radius search performed for each point. Neighboring points within the threshold radius are added to the current cluster and marked as processed. This continues until every point in the cloud has been marked as processed and assigned to a cluster. After object segmentation and recognition have been performed, the robot knows which object to pick up, but it does not yet know the boundaries of the object.
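The region-growing loop described above can be sketched in Python as follows. PCL's `EuclideanClusterExtraction` uses a kd-tree to make the radius search fast; this illustrative version (function name ours) simply scans all points, which shows the same logic at O(n²) cost:

```python
def euclidean_clusters(points, radius):
    """Group 3-D points into clusters: two points share a cluster when they
    are connected by a chain of neighbors closer than `radius`.
    """
    r2 = radius * radius
    processed = [False] * len(points)
    clusters = []
    for seed in range(len(points)):
        if processed[seed]:
            continue
        # Grow a new cluster outward from this unprocessed seed point.
        queue, cluster = [seed], []
        processed[seed] = True
        while queue:
            i = queue.pop()
            cluster.append(points[i])
            # Brute-force radius search; a kd-tree would do this in O(log n).
            for j, q in enumerate(points):
                if not processed[j] and sum(
                        (points[i][k] - q[k]) ** 2 for k in range(3)) < r2:
                    processed[j] = True
                    queue.append(j)
        clusters.append(cluster)
    return clusters
```

Applied to the tabletop scene, each returned cluster is one candidate object; a bounding box can then be fitted to each cluster to recover the object's extent.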

Saroj Bardewa is pursuing an MS in Electrical and Computer Engineering at Portland State University, where he earned a BS in Computer Engineering in June 2016. His interests include computer architecture, computer vision, machine learning, and robotics.

Sean Hendrickson is a senior studying Computer Engineering at Portland State University. His interests include computer vision and machine learning.

This complete article appears in Circuit Cellar 320 (March 2017).

Scribbler 3 (S3) Hackable Robots

Parallax’s Scribbler 3 (S3) is a fully assembled, preprogrammed, reprogrammable, and hackable robot that’s well suited for students and electronics enthusiasts. You can program the S3 in Parallax’s Graphical User Interface (GUI) software or its BlocklyProp tool. The visual programming support in Google’s Blockly makes learning to program easier than ever.

The S3’s improvements over its predecessor include:

  • Rechargeable lithium ion battery pack
  • Exposed Hacker Port with access to I/O and high-current power connections
  • XBee socket inside for RF networking and future wireless programming
  • Line sensor improvement: easy to follow lines of all types with Blockly
  • Up to 25% faster

The Scribbler 3 robot costs $179.

Source: Parallax

The Future of Robotics Technology

Advancements in technology mean that the dawn of a new era of robotics is upon us. Automation is moving out of the factory and into the real world. As this happens, we will see significant increases in productivity as well as drastic cuts in employment. We have an opportunity to markedly improve the lives of all people. Will we seize it?

For decades, the biggest limitations in robotics were related to computing and perception. Robots couldn’t make sense of their environments and so were fixed to the floor. Their movements were precalculated and repetitive. Now, however, we are beginning to see those limitations fall away, leading to a step-change in the capabilities of robotic systems. Robots now understand their environment with high fidelity, and safely navigate through it.

On the sensing side, we’re seeing multiple order-of-magnitude reductions in the cost of 3-D sensors used for mapping, obstacle avoidance, and task comprehension. Time-of-flight cameras such as those in the Microsoft Kinect or Google Tango devices are edging their way into the mainstream in high volumes. LIDAR sensors commonly used on self-driving cars were typically $60,000 or more just a few years ago. This year at the Consumer Electronics Show (CES), however, two companies, Quanergy and Velodyne, announced new solid-state LIDAR devices that eliminate all moving parts and carry a sub-$500 price point.

Understanding 3-D sensor data is a computationally intensive task, but advancements in general purpose GPU computing have introduced new ways to quickly process the information. Smartphones are pushing the development of small, powerful processors, and we’re seeing companies like NVIDIA shipping low cost GPU/CPU combos such as the X1 that are ideal for many robotics applications.

To make sense of all this data, we’re seeing significant improvements in software for robotics. The open-source Robot Operating System (ROS), for example, is widely used in industry and, at 9 years old, just hit version 2.0. Meanwhile, advances in machine learning mean that computers can now perform many tasks better than humans.

All these advancements mean that robots are moving beyond the factory floor and into the real world. Soon we’ll see a litany of problems being solved by robotics. Amazon already uses robots to lower warehousing costs, and several new companies are looking to solve the last-mile delivery problem. Combined with self-driving cars and trucks, this will mean drastic cost reductions for the logistics industry, with a ripple effect that lowers the cost of all goods.

As volumes go up, we will see cost reductions in expensive mechanical components such as motors and linkages. In five years, most of the patents for metal 3-D printers will expire, which will bring on a wave of competition to lower costs for new manufacturing methods.

While many will benefit greatly from these advances, there are worrying implications for others. Truck driver is the most common job in nearly every state, but within a decade those jobs will see drastic cuts. Delivery companies like Amazon Fresh and Google Shopping Express currently rely on fleets of human drivers, as do taxi services Uber and Lyft. It seems reasonable that those companies will move to automated vehicles.

Meanwhile, there are a great number of unskilled jobs that have already reduced workers to near machines. Fast food restaurants, for example, provide clear cut scripts for workers to follow, eliminating any reliance on human intelligence. It won’t be long before robots are smart enough to do those jobs too. Some people believe new jobs will be created to replace the old ones, but I believe that at some point robots will simply surpass low-skilled workers in capability and become more desirable laborers. It is my deepest hope that long before that happens, we as a society take a serious look at the way we share the collective wealth of our Earth. Robots should not simply replace workers, but eliminate the need for humans to work for survival. Robots can so significantly increase productivity that we can eliminate scarcity for all of life’s necessities. In doing so, we can provide all people with wealth and freedom unseen in human history.

Making that happen is technologically simple, but will require significant changes to the way we think about society. We need many new thinkers to generate ideas, and would do well to explore concepts like basic income and the work of philosophers like Karl Marx and Friedrich Engels, among others. The most revolutionary aspect of the change robotics brings will not be the creation of new wealth, but in how it enables access to the wealth we already have.

Taylor Alexander is a multidisciplinary engineer focused on robotics. He is founder of Flutter Wireless and works as a Software Engineer at a secretive robotics startup in Silicon Valley. When he’s not designing for open source, he’s reading about the social and political implications of robotics and writing for his blog.

This essay appears in Circuit Cellar 308, March 2016.

Brain Control: An Interview with Dr. Max Ortiz Catalan

Dr. Max Ortiz Catalan is Research Director at Integrum AB, a medical device company based in Molndal, Sweden. Wisse Hettinga recently interviewed him about his work in the field of prosthetic design and biomedical systems.

As an electrical engineer, your first focus is to create new technology or to bring a new schematic design to life. Dr. Max Ortiz Catalan is taking this concept much further. His research and work are enabling people to really start a new life!

People without an upper limb often find it difficult to manage tasks due to the limitations of prostheses. Dr. Catalan’s research at Chalmers University of Technology and Sahlgrenska University Hospital in Gothenburg, Sweden, focuses on the use of osseointegrated implants and a direct electronic connection between the nervous system and a prosthetic hand. Patients can control the prosthesis much as you control your own hand, and they are able to sense forces as well. The results are impressive. The first patient received his implant three years ago and is successfully using it today. And more patients will be treated this year. I recently interviewed Dr. Catalan about his work. I trust this interview will inspire seasoned and novice engineers alike.—Wisse Hettinga

HETTINGA: What led you to this field of research?

CATALAN: I was always interested in working on robotics and the medical field. After my bachelor’s in electronics, my first job was in the manufacturing industry, but I soon realized that I was more interested in research and the development of technology. So I left that job to go back to school and do a master’s in Complex Adaptive System. I also took some additional courses in biomedical engineering and then continued working in this field where I did my doctoral work.

HETTINGA: I was surprised you did not mention the word “robot” once in your TEDx presentation (“Bionic Limbs Integrated to Bone, Nerves, and Muscles”). Was that coincidence or on purpose?

CATALAN: That was coincidence; you can call a prosthesis a “robotic device” or “robotic prosthesis.” When you talk about a “robot,” you often see it as an independent entity. In this case, the robotic arm is fully controlled by the human, so it makes more sense to talk about bionics or biomechatronics.

HETTINGA: What will be the next field of research for you?

CATALAN: The next step for us is the restoration of the sense of touch and proprioception via direct nerve stimulation, or “neurostimulation.” We have developed an embedded control system for running all the signal processing and machine learning algorithms, but it also contains a neurostimulation unit that we use to elicit sensations in the patient that are perceived as arising from the missing limb. The patients will start using this system in their daily life this year.

HETTINGA: You are connecting the controls of the prosthesis with nerves. How do you connect a wire to a nerve?

CATALAN: There are a variety of neural interfaces (or electrodes) that can be used to connect with the nerves. The most invasive and selective neural interfaces suffer from long-term instability. In our case, we decided to go for a cuff electrode, which is considered an extra-neural interface since it does not penetrate the blood-nerve barrier and is well tolerated by the body for long periods of time, while also remaining functional.

HETTINGA: Can you explain how the nerve signals are transferred into processable electric signals?

CATALAN: Electricity travels within the body in the form of ions; for control purposes, we are interested in variations in electric potential, or motor action potentials. The electrodes transduce these into electrons so the signals can be amplified by analog electronics and then decoded on the digital side to reproduce motor volition in the prosthesis.

HETTINGA: What is the signal strength?

CATALAN: Nerve signals (ENG) are in the order of microvolts and muscle signals (EMG) in the order of millivolts.

HETTINGA: What technologies are you using to cancel out signal noise?

CATALAN: We use low-noise precision amplifiers and active filtering for the initial signal conditioning, then we can use adaptive filters implemented in software if necessary.

HETTINGA: How do you protect the signals from being disturbed by external sources or EM signals?

CATALAN: Since we are using implanted electrodes, we use the body as shielding, along with the titanium implant and the electronics housing. This shielding becomes part of the amplifier’s reference, so interference is rejected as common-mode noise.

HETTINGA: How are the signals transferred from the nerves to the prosthesis?

CATALAN: The signals from nerves and muscles are transferred via the osseointegrated implant to reach the prosthesis, where they are amplified and processed. In a similar way, signals coming from sensors in the prosthesis are sent into the body to stimulate the neural pathways that used to be connected to the biological sensors in the missing hand. Osseointegration is the key difference between our work and previous approaches.

HETTINGA: What sensor technologies are you using in the prosthetic hand?

CATALAN: At this point it is rather straightforward, with strain gauges and FSRs (force-sensitive resistors), but on research prostheses the motors are normally instrumented as well so we can infer joint angles.

This interview appears in Circuit Cellar 307 (February 2016).

Innovative Product Design: An Interview with Rich Legrand

Rich Legrand founded Charmed Labs in 2002 to develop and sell innovative robotics-related designs, including the Xport Robot Kit, the Qwerk robot controller, the GigaPan robotic camera mount, and the Pixy vision sensor. He recently told us about his background, passion for robotics, and interest in open-source hardware.

CIRCUIT CELLAR: Tell us a bit about your background. When did you first get started with electronics and engineering?

RICH: Back in 1982 when I was 12, one of my older brother’s friends was what they called a “whiz kid.” I would show up uninvited at his place because he was always creating something new, and he didn’t treat me like a snotty-nosed kid (which I was). On one particular afternoon he had disassembled a Big Trak toy (remember those?) and connected it to his Atari 800, so the Atari could control its movements. He wrote a simple BASIC program to read the joystick movements and translate them to Big Trak movements. You could then hit the return key and the Atari would play back the motions you just made. There were relays clicking and LEDs flashing, and the Big Trak did exactly what you told it to do. I had never seen a computer do this before, and I was absolutely amazed. I wanted to learn as much as I could about electronics after that. And I’m still learning, of course.

CIRCUIT CELLAR: You studied electrical engineering at both Rice University and North Carolina State University. Why electrical engineering?

RICH: I think it goes back to when I was 12 and trying to learn more about robotics. With a limited budget, it was largely a question of what I get my hands on. Back then you could go into Radio Shack and buy a handful of 7400 series parts and create something simple, but pretty amazing. Forrest Mims’s books (also available at Radio Shack) were full of inspiring circuit designs. And Steve Ciarcia’s “Circuit Cellar” column in Byte magazine focused on seat-of-the-pants electronics projects you could build yourself. The only tools you needed were a soldering iron, a voltmeter, and a logic probe. I think young people today see a similar landscape where it’s easier to get involved in electrical engineering than say mechanical engineering (although 3-D printing might change this). The Internet is full of source material and the hardware (computers, microcontrollers, power supplies, etc.) is lower-cost and easier to find. The Arduino is a good example of this. It has its own ecosystem from which you can launch practically any project or idea.

CIRCUIT CELLAR: Photography factors in a lot of your work and work history. Is photography a passion of yours?

RICH: I don’t think so, but I enjoy photography. Image processing, image understanding, machine vision—the idea that you can extract useful information from a digital image with a piece of software, an algorithm. It’s a cool idea to me because you can have multiple vision algorithms and effectively have several sensors in one package. Or in the case of Gigapan, being able to create a gigapixel imager from a fairly low-cost point-and-shoot camera, some motors, and customized photo stitching software. I’m a hardware guy at heart, but hardware tends to be expensive. Combining inexpensive hardware with software to create something that’s lower-cost—it sounds like a pretty niche idea, but these are the projects that I seem to fall for over and over again. Working on these projects is what I really enjoy.

CIRCUIT CELLAR: Prior to your current gig at Charmed Labs, you were with Gigapan Systems, which you co-founded. Tell us about how you came to launch Gigapan.

RICH: Gigapan is a robotic camera mount that allows practically anyone with a digital camera to make high-resolution panoramas. The basic idea is that you take a camera with high resolution but narrow field-of-view (high-zoom) to capture a mosaic of pictures that can be later stitched together with software to form a much larger, highly-detailed panorama of the subject, whether it’s the Grand Canyon or the cockpit of the Space Shuttle. This technique is used by the Mars rovers, so it’s not surprising that a NASA engineer (Randy Sargent) first conceived Gigapan. Charmed Labs got a chance to bid on the hardware, and we designed and manufactured the first Gigapan units as part of a public beta program. (The beta was funded by Carnegie Mellon University through donations from NASA and Google.) The beta garnered enough attention to get investors and start a company to focus on Gigapan, which we did. We were on CNN, we were mentioned on Jay Leno. It was a fun and exciting time!

The first Xport was a simple circuit board with flash for program storage and an FPGA for programmable I/O.

CIRCUIT CELLAR: In a 2004 article, “Closed-Loop Motion Control for Mobile Robotics“ (Circuit Cellar 169), you introduced us to your first product, the Xport. How did you come to design the Xport?

RICH: When the Gameboy Advance was announced back in 1999, I thought it was a perfect robot platform. It had a color LCD and a powerful 32-bit processor, it was optimized for battery power, and it was low-cost. The pitch went something like: “For $40 you can buy a cartridge for your Gameboy that allows you to play a game. For $99 you can buy a cartridge with motors and sensors that turns your Gameboy into a robot.” So the Gameboy becomes the “brains” of the robot if you will. I didn’t know what the robot would do exactly, other than be cool and robot-like, and I didn’t know how to land a consumer electronics product on the shelves of Toys “R” Us, so I tackled some of the bigger technical problems instead, like how to turn the Gameboy into an embedded system with the required I/O for robotics. I ordered a Gameboy from Japan through eBay prior to the US release and reverse-engineered the cartridge port. The first “Xport” prototype was working not long after the first Gameboys showed up in US stores, so that was pretty cool. It was a simple circuit board that plugged into the Gameboy’s cartridge port. It had flash for program storage and an FPGA for programmable I/O. The Xport seemed like an interesting product by itself, so I decided to sell it. I quit my job as a software engineer and started Charmed Labs.

CIRCUIT CELLAR: Tell us about the Xport Botball Controller (XBC).

RICH: The Xport turned the Gameboy into an embedded system with lots of I/O, but my real goal was to make a robot. So I added more electronics around the Xport for motor control, sensor inputs, a simple vision system, even Bluetooth. I sold it online for a while before the folks at Botball expressed interest in using it for their robot competition, which is geared for middle school and high school students. Building a robot out of a Gameboy was a compelling idea, especially for kids, and tens of thousands of students used the XBC to learn about engineering—that was really great. I never got the Gameboy robot on the shelves of Toys “R” Us, but it was a really happy ending to the project.

CIRCUIT CELLAR: Charmed Labs collaborated with the Carnegie Mellon CREATE Lab on the Qwerk robot controller. How did you end up collaborating with CMU?

RICH: I met Illah Nourbakhsh who runs the CREATE lab at a robot competition back when he was a grad student. His lab’s Telepresence Robotics Kit (TeRK) was created in part to address the falling rate of computer science graduates in the US. The idea was to create a curriculum that featured robotics to help attract more students to the field. Qwerk was an embedded Linux system that allowed you make a telepresence robot easily. You could literally plug in some motors, a webcam, and a battery, and fire up a web browser and become “telepresent” through the robot. We designed and manufactured Qwerk for a couple years before we licensed it.

The Qwerk

CIRCUIT CELLAR: Pixy is a cool vision sensor for robotics that you can teach to track objects. What was the impetus for that design?

RICH: Pixy is actually the fifth version of the CMUcam. The first CMUcam was invented at Carnegie Mellon by Anthony Rowe back in 2000 when he was a graduate student. I got involved on a bit of a lark. NXP Semiconductors had just announced a processor that looked like a good fit for a low-cost vision sensor, so I sent Anthony a heads-up, that’s all. He was looking for someone to help with the next version of CMUcam, so it was a happy coincidence.

The Pixy vision sensor

CIRCUIT CELLAR: You launched Pixy in 2013 on Kickstarter. Would you recommend Kickstarter to Circuit Cellar readers who are thinking of launching a hardware product?

RICH: Before crowdfunding was a thing, you either had to self-fund or convince a few investors to contribute a decent amount of cash based on the premise that you had a good idea. And the investors typically didn’t have your background or perspective, so it was usually a difficult sell. With crowdfunding, a couple hundred people with similar backgrounds and perspectives contribute $50 (or so) in exchange for becoming the very first customers. It’s an easier path I think, and it’s a great fit for products like Pixy that have a limited but enthusiastic audience. I think of crowdfunding as a cost-effective marketing strategy. Sites like Kickstarter get huge amounts of traffic, and getting your idea in front of such a large audience is usually expensive—cost-prohibitive in my case. It also answers two important questions for hardware makers: Are enough people interested in this thing to make it worthwhile? And if it is worthwhile, how many should I make?

But I really didn’t think many people would be interested in a vision sensor for hobbyist robotics, so when faced with the task of creating a Kickstarter for Pixy, I thought of lots of excuses not to move forward with it. Case in point—if your Kickstarter campaign fails, it’s public Internet knowledge. (Yikes!) But I’m always telling my boys that you learn more from your mistakes than from your successes, so it seemed pretty lame that I was dragging my heels on the Kickstarter thing because I wanted to avoid potential embarrassment. I eventually got the campaign launched, and it was a success, and Pixy got a chance to see the light of day, so that was good. It was a lot of work, and it was psychologically exhausting, but it was really fun to see folks excited about your idea. I’d totally do it again though, and I’d like to crowdfund my next project.

CIRCUIT CELLAR: Can you tell us about one or two of the more interesting projects you’ve seen featuring Pixy?

RICH: Ben Heck used Pixy in a couple of episodes of the Ben Heck Show, where he used Pixy to create a camera that can automatically track what he’s filming. Microsoft used Pixy for a Windows 10 demo that played air hockey. IR-Lock is a small company that launched a successful Kickstarter campaign featuring Pixy as a beacon detector for use in autonomous drones. All of these projects have a high fun factor, which I really enjoy seeing.

CIRCUIT CELLAR: What’s next for Charmed Labs?

RICH: I’ll tell you about one of my crazier ideas. My wife gets on my case every holiday season to hang lights on the house. It wouldn’t be that bad, except our next-door neighbors go all-out. They hang lights on every available surface of their house—think Griswolds from the Christmas Vacation movie. So anything I do to our house looks pretty sad by comparison. I’m competitive. But I had the idea that if I created a computer-controlled light show that’s synchronized to music, it might be a good face-saving technology, a way to possibly one-up the neighbors, because that’s what it’s all about, right? (Ha!) So I’ve been working on an easy-to-set-up and low-cost way to make your own holiday light show. It’s way outside of my robotics wheelhouse. I’m learning about high-voltage electronics and UL requirements, and there’s a decent chance it won’t be cost-competitive, or even work, but my hope is to launch a crowdfunding campaign in the next year or so.

CIRCUIT CELLAR: What are your thoughts on the future of open-source hardware?

RICH: We can probably thank the Arduino folks because before they came along, very few were talking about open hardware. They showed that you can fully open-source a design (including the hardware) and still be successful. Pixy was my first open hardware project and I must admit that I was a little nervous moving forward with it, but open hardware principles have definitely helped us. More people are using Pixy because it’s fully open. If you’re interested in licensing your software or firmware, open hardware is an effective marketing strategy, so I don’t think it’s about “giving it all away” as some might assume. That is, you can still offer closed-source licenses to customers that want to use your software, but not open-source their customizations. I’ve always liked the idea of open vs. proprietary, and I’ve learned plenty from fellow engineers who choose to share instead of lock things down. It’s great for innovation.

On a different robot, a flapping-wing ornithopter, we had this PC104 computer running MATLAB as the controller. It probably weighed about 2 pounds, which forced us to build a huge wingspan, almost 6 feet. We dreamed about adding some machine vision to the platform as well. Having just built a vision-based robot for MIT's MASLAB competition using an FPGA paired with an Arduino, the PC104 solution started to look pretty stupid to me. That was what really got me interested in embedded work. FPGAs and microcontrollers gave you an insane amount of computing power at comparatively minuscule power and weight footprints. And so died the PC104 standard.


This interview appears in Circuit Cellar 305 (December 2015).

Innovations in Mobile Robotics: An Interview with Nick Kohut

Nick Kohut and a lab mate turned their academic interest in mobile robotics into an exciting business—Dash Robotics, which sells a small, insect-like running robot that you can control with a smartphone. We recently asked Nick about advances in running robot technology, the benefits of aerodynamic turning, and his thoughts on the future of robotics.

Nick Kohut (Co-Founder, Dash Robotics)

CIRCUIT CELLAR: When did you become interested in robotics? Can you tell us about your first robotics project?

NICK: I actually first became interested in robotics in 2010, which was my third year of graduate school. I had become an engineer originally because I was really interested in cars, specifically vehicle dynamics. I had just wrapped up my Master’s working on a research project at Cal with Audi, and I needed a new project for my PhD.

I looked around at different labs, and the work being done in Ron Fearing’s robotics lab seemed really interesting—basically vehicles with legs. Believe it or not, I had never done any robotics or even soldered a single joint until that point. I had a steep learning curve in the lab, and my first robotics project was MEDIC, a 4 cm walking robot. It was pretty tough but I learned a lot, and fell in love with the subject.

CIRCUIT CELLAR: Why did you decide to focus your studies on control systems? Whose work inspired you to focus on control systems?

NICK: At the University of Illinois, Prof. Andrew Alleyne was one of my advisors, and I took his intro controls course junior year. I really liked it—controls and dynamics are definitely my favorite subjects; anything that moves keeps my interest. It also had a lot of math, which I was pretty decent at for an engineer, so I did really well in the class. I decided I should study it in grad school. What they don't tell you is that grad school controls is totally different, but I ended up liking that too.

The TAYLRoACH (tail-actuated yaw locomotion roach)

CIRCUIT CELLAR: Tell us about the work you did in the Biomimetics and Dexterous Manipulation Laboratory under Professor Mark Cutkosky.

NICK: I was only in Mark's lab for about seven months. It was a great place to work, but I had founded Dash Robotics in between taking the postdoc position and actually starting the postdoc. Because of that I only worked on one project, and in seven months there's only so much you can do. I was trying to scale up an electroactive polymer (EAP) actuator for use in Honda's Asimo robot. It's an interesting challenge that involves a lot of rapid prototyping, materials research, and solid mechanics. Also quality control, which is hard to do in a lab setting.

CIRCUIT CELLAR: How did you come to use aerodynamic forces to turn running robots? What led you to this field of research?

NICK: This actually started with biologists like Robert Full and Tom Libby studying lizards. Bob and Tom had discovered that when lizards jump they use their tail as a form of attitude control. They had also shown that in a wind tunnel they will use their tail to turn. I was tasked with getting a robot to turn using a tail, which I did with some pretty good success. TaylRoACH (the robot I built in 2012) ended up being the fastest turning legged robot in the world. It could turn 90° in 1/4 of a second.

After I had shown that, I started to wonder what else the tail could do. I tried a lot of things, mostly back-of-the-envelope ideas, like stability on inclines or using it as a "seventh leg" in confined places. A lot of those didn't work out, and someone suggested, half-joking, that I use it as a helicopter blade. It got me thinking: what if you used it as a sail? I ran the numbers in about an hour and realized, man, this might actually work.

CIRCUIT CELLAR: What are the benefits of aerodynamic turning?

NICK: There are a few benefits. One interesting thing is that it will only work at small scales, but that’s probably where you want it. When robots start to get smaller and smaller, you become really limited with what you can do. You can’t add a lot of sensors, actuators, or computing power (though this is changing every day!). So you probably have a very simple robot, maybe only a few actuators. The SailRoACH has six legs, and only three actuators, but can make wide turns, rapid turns, and pretty much everything in between. So it can keep things simple.
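One illustrative way to see the small-scale advantage is a quick scaling argument (our sketch, not Kohut's actual analysis): under geometric similarity, the aerodynamic torque a tail "sail" can generate at a given running speed scales roughly with area times lever arm (~L³), while yaw inertia scales with mass times length squared (~L⁵), so the achievable turning acceleration grows like 1/L² as the robot shrinks.

```python
# Back-of-the-envelope scaling for tail-as-sail turning (illustrative only).
# Assume geometric similarity: tail area ~ L^2, lever arm ~ L, mass ~ L^3.
def yaw_accel_scale(L):
    """Relative angular acceleration from aerodynamic tail torque at scale L."""
    torque = L**2 * L          # drag force on the sail (~area) times lever arm
    inertia = L**3 * L**2      # mass times radius of gyration squared
    return torque / inertia    # ~ 1/L^2

# A robot at 1/10 the size gets ~100x the yaw acceleration, all else equal.
ratio = yaw_accel_scale(0.1) / yaw_accel_scale(1.0)
```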

It also can be used in a research setting to study the dynamics of the robot. If you want to add a constant yaw disturbance to the robot and measure how that affects its running ability, this is a way to do that. This may sound like an esoteric need but it’s how research gets done, and it helps us understand running robots better.

CIRCUIT CELLAR: Tell us about the Millirobot Enabled Diagnostic of Integrated Circuits (MEDIC) project. Why did you start the project and what were the results?

NICK: MEDIC was an interesting project because it was my first robotics project and we were trying to solve a very difficult problem, which was “Can you build a robot to navigate inside a computer motherboard?” We were contracted to work on this with Lockheed Martin, and they supplied the software end of things.

Basically we built this incredibly small robot (~5 cm and 5 g) that had legs and a hull that allowed it to scoot around a motherboard, turn, and climb over basic, short obstacles (like a microchip). I worked on the mechanics and design of the robot, with a lot of help from other lab members on the electronics, and Lockheed provided the software that allowed MEDIC (which we called "Adorable Turtle Bot") to navigate. It actually had a little camera on it, so it would take a picture, send that information to a laptop, the laptop would send back a few instructions ("go forward two steps, then turn left for two steps"), the robot would execute the instructions, take another picture, and repeat the process.
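The control loop Kohut describes is a classic offboard sense-plan-act cycle. A minimal sketch of the laptop side, with entirely hypothetical class and method names (the actual Lockheed software is not public):

```python
# Sketch of the offboard sense-plan-act loop (all names hypothetical).
def run_mission(robot, planner, max_cycles=100):
    """Repeat: get an image, plan a few steps, execute them, until done."""
    for _ in range(max_cycles):
        image = robot.take_picture()      # robot sends a frame to the laptop
        commands = planner.plan(image)    # e.g. ["forward 2", "left 2"]
        if not commands:                  # empty plan means goal reached
            return True
        for cmd in commands:
            robot.execute(cmd)            # robot runs each short instruction
    return False

# Tiny fakes to demonstrate the loop shape.
class FakeRobot:
    def __init__(self): self.log = []
    def take_picture(self): return len(self.log)
    def execute(self, cmd): self.log.append(cmd)

class FakePlanner:
    def plan(self, image):
        return ["forward 2", "left 2"] if image < 4 else []

robot, planner = FakeRobot(), FakePlanner()
done = run_mission(robot, planner)
```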

It was pretty cool because you had this tiny robot doing SLAM and navigating autonomously inside a computer motherboard. Unfortunately it was slower than oatmeal running downhill and didn’t work most of the time, but that’s research. By the end we had some results we were very happy with and wrote two solid publications on it.

You can control Dash robots with a cell phone

CIRCUIT CELLAR: What led you and your co-founders to launch Dash Robotics in 2013? And can you tell us about your current team?

NICK: My co-founder Andrew and I were lab mates in grad school and climbing buddies. We both knew that we wanted to run our own business, because we couldn’t stand working in a cubicle; it’s why we were in grad school in the first place.

This idea of starting a business bounced around for a couple years but we never did anything with it. In the meantime, we had been to various events at schools and museums and saw that people loved the robot, they just went wild for it. Everyone asked to buy it but we always told them, “no, this is just a research tool.” In late 2012 we saw the first beginnings of all these smart toys and thought “well, what we have here is way cooler than that.” So we formed Dash Robotics, Inc. We hadn’t even graduated yet but we got a lot of support from the University and friends and family and were able to make it until February 2015 without taking any venture investment. Now I’m very happy we have that.

The robots are made flat. Simply fold and assemble them.

CIRCUIT CELLAR: Dash’s first robot is a phone-controlled, insect-like running robot. It is shipped “origami”-style for people to assemble themselves. Tell us a bit about the process of planning and designing the robot.

NICK: This is pretty tough to answer in one question. The “origami” style is a process called SCM that was originally developed at UC Berkeley. The design is all done in 2-D and then cut out and folded up to 3-D, so it takes a bit of experience to become good at designing mechanisms using this process. You can’t just build it in 3-D CAD and see what it will look like before making it.

There are some people who are trying to change that, like Dan Aukes from Harvard. Right now we still do it all on intuition and experience. The original robot was developed in 2009, and it saw incremental changes over the next 4 years or so. In 2013 when we founded the company we had a whole new set of requirements for the robot (a research tool and consumer product are vastly different) so we started making a lot of changes. There have probably been at least 50 revisions since 2013—maybe 100. Each time it gets a little better, and we do a lot of testing to make sure we’re on the right track.

CIRCUIT CELLAR: Is DIY, hobby robotics your main focus at Dash Robotics? Do you plan to branch out, perhaps into robot systems for industry, military, or medical applications?

NICK: That’s our main focus right now, along with making a product that kids will love as well. I think there are a lot of potential directions like agriculture, infrastructure inspection, search and rescue, etc. That’s much further down the road though.

CIRCUIT CELLAR: What’s next for Dash Robotics? Where would you like to see the company in 12 months?

NICK: With its products flying off store shelves and a great team in place making it all happen!

CIRCUIT CELLAR: What are your thoughts on the future of robotics?

NICK: This is a great, and of course difficult, question. It also depends on how you define robotics. I think on one end you’re going to see a lot of jobs displaced by self-driving cars and trucks, robotic dishwashers, housecleaners, etc. On the other end AI is going to be able to do a lot of knowledge work now done by lawyers, doctors, and engineers. Both of those advances are going to be a major challenge for society.

If you’re talking about mobile robotics specifically, where a lot of my interest lies, there is a major challenge in actuators and power density. Boston Dynamics builds some amazing machines but the internal combustion engine is loud and dirty, and current lithium batteries are only going to get you so far. Tesla is working very hard on the battery problem, and hopefully its new Gigafactory will bring prices down. If Tesla makes a big advance in battery technology I think you may see a whole new category of mobile robots breaking out.

This interview appears in Circuit Cellar 304 (November 2015).


Matrix Launches Formula AllCode Kickstarter Campaign (sponsored)

Matrix TSL has launched a Kickstarter campaign for its Formula AllCode robotics course, which features a high-specification, Bluetooth-enabled robot. You can program the robot via Python, AppBuilder, Flowcode, MATLAB, LabVIEW, C, and more. It is compatible with Raspberry Pi, Android, iPhone, and Windows devices.


Formula AllCode is a platform for both novice and advanced electronics enthusiasts to learn and test their robotics skills. Participate in the campaign: Formula AllCode

The funds raised from this Kickstarter project will allow Matrix to take the current prototype shown in the project videos to the next level, with a technical specification to beat any other like-for-like robot buggy, and to manufacture 1,000 units for worldwide launch.

By backing the Kickstarter campaign, you are supporting a project that allows users to develop their robotics understanding on a platform of their choice. Whether you're starting out with your first robotics project or you're a fully fledged robotics developer, the Formula AllCode will work for you. The project must be funded by Sunday, September 6, 2015.



  • 2 push-to-make switches
  • 8 IR distance sensors
  • Light sensor
  • Microphone
  • I2C accelerometer/compass
  • 2 line-following sensors
  • Audio gain

  • 8 LEDs (one port)
  • Speaker
  • Expansion port (8-bit)
  • 4 servo outputs
  • E-blocks expansion port

  • Left and right motors
  • Integrated gearbox
  • Integrated encoders

  • Reset switch
  • 16-bit PIC24 microcontroller
  • USB-rechargeable lithium battery
  • 4 × 40-character backlit LCD
  • Micro SD card
  • Integrated Bluetooth
  • Crystal oscillator
  • Micro USB socket
Wireless Data Link

In 2001, while working on a self-contained robot system called "Scout," Tom Dahlin and Donald Krantz developed an interesting wireless data link. Scout is a tubular, wheeled robot; its wireless data link is divided onto separate boards, one for radio control and another containing the RF hardware.

Dahlin and Krantz write:

This article will describe the hardware and software design and implementation of a low-power, wireless RF data link. We will discuss a robotic application in which the RF link facilitates the command and control functions of a tele-operated miniature robot. The RF Monolithics (RFM) TR-3000 chip is the core of the transceiver design. We use a straightforward interface to a PIC controller, so you should be able to use or adapt much of this application for your needs…

Photo 1: The robot measures a little over 4″. Designed for tele-operated remote surveillance, it contains a video camera and transmitter. Scout can hop over obstacles by hoisting its tail spring (shown extended) and quickly releasing it to slap the ground and propel the robot into the air.

The robot, called Scout, is packed in a 38-mm diameter tube, approximately 110 mm long, with coaxial-mounted wheels at each end. The robot is shown in Photo 1. (For additional information, see the "Key Specifications for Scout Robot" sidebar.) Scout carries a miniature video camera and video transmitter, allowing you to tele-operate the robot by sending it steering commands while watching video images sent back from Scout. The video transmitter and data transceiver contained on the robot are separate devices, operating at 915 and 433 MHz, respectively. Also contained on Scout are dual-axis magnetometers (for compass functions) and dual-axis accelerometers (for tilt/inclination measurement).

Figure 1: For the radio processor board, a PIC16F877 provides the horsepower to perform transceiver control, Manchester encoding, and packet formatting.

Scout’s hardware and software were designed to be modular. The wireless data link is physically partitioned onto two separate boards, one containing a PIC processor for radio control, message formatting, and data encoding (see Figure 1). The other board contains the RF hardware, consisting of the RFM TR3000 chip and supporting discrete components. By separating the two boards, we were able to keep the digital noise and trash away from the radio.

Read the full article.

Advances in Haptics Research

Katherine J. Kuchenbecker is an Associate Professor in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, with a secondary appointment in Computer and Information Science. She directs the Penn Haptics Group, which is part of the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory. In this interview, she tells us about her research, which centers on the design and control of haptic interfaces for applications such as robot-assisted surgery, medical simulation, stroke rehabilitation, and personal computing.

Katherine J. Kuchenbecker

CIRCUIT CELLAR: When did you first become interested in haptics and why did you decide to pursue it?

KATHERINE: I chose to become an engineer because I wanted to create technology that helps people. Several topics piqued my interest when I was pursuing my undergraduate degree in mechanical engineering at Stanford, including mechatronics, robotics, automotive engineering, product design, human-computer interaction, and medical devices. I was particularly excited about areas that involve human interaction with technology. Haptics is the perfect combination of these interests because it centers on human interaction with real, remote, or virtual objects, as well as robotic interaction with physical objects.

My first exposure to this field was a “haptic paddle” lab in a Stanford course on system dynamics, but that alone wouldn’t have been enough to make me fall in love with this field. Instead, it was conversations with Günter Niemeyer, the professor who advised me in my PhD at Stanford. I knew I wanted a doctorate so that I could become a faculty member myself, and I was inspired by the work he had done as an engineer at Intuitive Surgical, Inc., the maker of the da Vinci system for robotic surgery. Through my early research with Günter, I realized that it is incredibly satisfying to create computer-controlled electromechanical systems that enable the user to touch virtual objects or control a robot at a distance. I love demonstrating haptic systems because people make such great faces when they feel how the system responds to their movements. Another great benefit of studying haptics is that I get to work on a wide variety of applications that could potentially impact people in the near future: robotic surgery, medical training, stroke rehabilitation, personal robotics, and personal computing, to name a few.

CIRCUIT CELLAR: What is haptography? What are its benefits?

KATHERINE: I coined the term “haptography” (haptic photography) to proclaim an ambitious goal for haptics research: we should be able to capture and reproduce how surfaces feel with the same acuity that we can capture and reproduce how surfaces look.

When I entered the field of haptics in 2002, a lot of great research had been done on methods for letting a user feel a virtual three-dimensional shape through a stylus or thimble. Essentially, the user holds on to a handle attached to the end of a lightweight, back-drivable robot arm; the 3D Systems Touch device is the most recent haptic interface of this type. A computer measures the motion that the person makes and constantly outputs a three-dimensional force vector to give the user the illusion that they are touching the object shown on the screen. I was impressed with the haptics demonstrations I tried back in 2002, but I was also deeply disappointed with how the virtual surfaces felt. Everything was soft, squishy, and indistinct compared to how real objects feel. That’s one of the benefits of being new to a field; you’re not afraid to question the state of the art.

I started working to improve this situation as a doctoral student, helping invent a way to make hard virtual surfaces like wood and metal feel really hard and realistic. The key was understanding that the human haptic perceptual system keys in on transients instead of steady-state forces when judging hardness. I had to write a research statement to apply for faculty positions at the end of 2005, so I wrote all about haptography. Rather than trying to hand-program how various surfaces should feel, I wanted to make it all data driven. The idea is to use motion and force sensors to record everything a person feels when using a tool to touch a real surface. We then analyze the recorded data to make a model of how the surface responds when the tool moves in various ways. As with hardness, high-frequency vibration transients are also really important to human perception of texture, which is a big part of what makes different surfaces feel distinct. Standard haptic interfaces weren’t designed to output high-frequency vibrations, so we typically attach a voice-coil actuator (much like an audio speaker) to the handle, near the user’s fingertips. When the user is touching a virtual surface, we output data-driven tapping transients, friction forces, and texture vibrations to try to fool them into thinking they are touching the real surface from which the model was constructed.
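A drastically simplified sketch of this data-driven idea: pick the recorded vibration snippet whose capture conditions (tool speed and force) best match the user's current motion and route it to the voice coil. The numbers and structure below are illustrative only; the Penn group's actual renderers fit generative models to the recordings rather than replaying raw snippets.

```python
# Toy data-driven texture rendering: choose the recorded vibration snippet
# captured at the speed/force closest to the user's current speed/force.
# Real renderers fit generative models to recordings; this is a sketch.
def nearest_snippet(recordings, speed, force):
    """recordings: list of (speed, force, samples) tuples from real touches."""
    return min(recordings,
               key=lambda r: (r[0] - speed) ** 2 + (r[1] - force) ** 2)[2]

# Hypothetical recordings: (tool speed m/s, normal force N, vibration samples)
recordings = [
    (0.05, 0.5, [0.0, 0.1, -0.1]),   # slow, light stroke
    (0.20, 1.0, [0.0, 0.4, -0.4]),   # fast, firm stroke
]
samples = nearest_snippet(recordings, speed=0.18, force=0.9)
```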

After many years of research by my PhD students Heather Culbertson and Joe Romano, we’ve been able to create the most realistic haptic surfaces in the world. My work in haptography is motivated by a belief that there are myriad applications for highly realistic haptic virtual surfaces.

One exciting use is in recording what doctors and other clinical practitioners feel as they use various tools to care for their patients, such as inserting an epidural needle or examining teeth for decay (more on this below). Haptography would enable us to accurately simulate those interactions so that trainees can practice critical perceptual-motor skills on a computer model instead of on a human patient.

Another application that excites us is adding tactile feedback to online shopping. We’d love to use our technology to let consumers feel the fabrics and surfaces of products they’re considering without having to visit a physical store. Touch-mediated interaction plays an important role in many facets of human life; I hope that my team’s work on haptography will help bring highly realistic touch feedback into the digital domain.

Read Circuit Cellar’s interviews with other engineers, academics, and innovators.

CIRCUIT CELLAR: Which of the Penn Haptics Group’s projects most interest you at this time?

KATHERINE: That's a hard question! I'm excited about all of the projects we are pursuing. There are a few I can't talk about, because we're planning to patent the underlying technology once we confirm that it works as well as we think it does. Two of those that are in the public domain have been fascinating me recently.

Tactile Teleoperation: My lab shares a Willow Garage PR2 (Personal Robot 2) humanoid robot with several of the other faculty in Penn's GRASP Lab. Our PR2's name is Graspy.

This wearable device allows the user to control the motion of the PR2 robot’s hand and also feel what the PR2 is feeling. The haptic feedback is delivered via a geared DC motor and two voice-coil actuators.

While we’ve done lots of fun research to enable this robot to autonomously pick up and set down unknown objects, I’d always dreamed of having a great system for controlling Graspy from a distance. Instead of making the operator use a joystick or a keyboard, we wanted to let him or her control Graspy using natural hand motions and also feel what Graspy was feeling during interactions with objects.

My PhD student Rebecca Pierce recently led the development of a wearable device that accomplishes exactly this goal. It uses a geared DC motor with an optical encoder to actuate and sense a revolute joint that is aligned with the base joint of the operator's index finger. Opening and closing your hand opens and closes the robot's parallel-jaw gripper, and the motor resists the motion of your hand if the robot grabs onto something. We supplement this kinesthetic haptic feedback with tactile feedback delivered to the pads of the user's index finger and thumb. A voice coil actuator mounted in each location moves a platform into and out of contact with the finger to match what the robot's tactile sensors detect. Each voice coil presses with a force proportional to what the corresponding robot finger is feeling, and the voice coils also transmit the high-frequency vibrations (typically caused by collisions) that are sensed by the MEMS-based accelerometer embedded in the robot's hand. We track the movement of this wearable device using a Vicon optical motion tracking system, and Graspy follows the movements of the operator in real time. The operator sees a video of the interaction taking place. We're in the process of having human participants test this teleoperation setup right now, and I'm really excited to learn how the haptic feedback affects the operator's ability to control the robot.
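The per-finger tactile mapping can be summarized as a simple sum (our sketch, with made-up gains, not the lab's actual controller): the commanded voice-coil force equals the robot fingertip's sensed pressure times a gain, plus a scaled copy of the high-frequency accelerometer signal.

```python
# Sketch of per-finger tactile feedback: voice-coil force = scaled fingertip
# pressure plus high-passed accelerometer signal. Gains are made up.
K_PRESSURE = 0.8   # N of voice-coil force per unit of robot tactile reading
K_VIBRATION = 0.1  # scale factor for high-frequency contact transients

def voice_coil_command(tactile_reading, accel_highpassed):
    """Combine steady contact pressure with collision transients."""
    return K_PRESSURE * tactile_reading + K_VIBRATION * accel_highpassed

cmd = voice_coil_command(2.0, 5.0)   # steady grip plus a collision transient
```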

The high-bandwidth MEMS-based accelerometer records the sensations a dentist feels as she probes an extracted human tooth. Feeling these recordings lets dental trainees practice diagnosing dental decay before they treat live patients.

CIRCUIT CELLAR: In your TEDYouth talk, you describe a project in which a dental tool is fitted with an accelerometer to record what a dentist feels and then replay it back for a dental student. Can you tell us a bit about the project?

KATHERINE: This project spun out of my haptography research, which I described above. While we were learning to record and model haptic data from interactions between tools and objects, we realized that the original recordings had value on their own, even before we distilled them into a virtual model of what the person was touching. One day I gave a lab tour to two faculty members from the Penn School of Dental Medicine who were interested in new technologies. I hit it off with Dr. Margrit Maggio, who had great experience in teaching general dentistry skills to dental students. She explained that some dental students really struggled to master some of the tactile judgments needed to practice dentistry, particularly in discerning whether or not a tooth surface is decayed (in popular parlance, whether it has a cavity). A few students and I went over to her lab to test whether our accelerometer-based technology could capture the subtle details of how decayed vs. healthy tooth tissue feels. While the recordings are a little creepy to feel, they are super accurate. We refined our approach and conducted several studies on the potential of this technology to be used in training dental students. The results were really encouraging, once again showing the potential that haptic technology holds for improving clinical training.

CIRCUIT CELLAR: What is the “next big thing” in the field of haptics? Is there a specific area or technology that you think will be a game changer?

KATHERINE: Of course this depends on where you're looking. While cell phones and game controllers have had vibration alerts for a long time, I think we're just starting to see high-quality haptic feedback emerge in consumer products. Haptics can definitely improve the user experience, which will give haptic products a market advantage, but their cost and implementation complexity need to be low enough to keep the product competitive. On the research side, I'm seeing a big move toward tactile feedback and wearable devices. Luckily there are enough interesting open research questions to keep my students and me busy for 30 more years, if not longer!

The complete interview appears in Circuit Cellar 296 (March 2015).

DIY Interactive Robots: An Interview with Erin Kennedy

Erin "RobotGrrl" Kennedy designs award-winning robots. Her RoboBrrd DIY robot-building kit successfully launched in 2012 and was featured in IEEE Spectrum, Forbes, Wired, and on the Discovery Channel. Erin was recognized as one of the 20 Intel Emerging Young Entrepreneurs. In this interview she tells us about her passion for robotics, early designs, and future plans.

CIRCUIT CELLAR: How and when did Erin Kennedy become “RobotGrrl?”

ERIN: I used to play an online game, but didn’t want to use my nickname from there. I was building LEGO robots at the time, so my friend suggested “RobotGrrl.” It sounds like a growl without the “ow.”

CIRCUIT CELLAR: Why and when did you decide to start blogging?

ERIN: I started around 2006 to document my adventures into the world of robotics. I would post updates to my project on there, similar to a log book. It helped me gain a community that would follow my adventures.

CIRCUIT CELLAR: Your RoboBrrd company is based on the success of your RoboBrrd beginner robot-building kit, which was funded by Indiegogo in 2012. How does the robot work? What is included in the kit?

ERIN: RoboBrrd works by using three servos, a laser-cut chassis, and an Arduino derivative for its brain. Two of the servos are used for the robot’s wings and the third one is used for the beak mechanism. To construct the chassis, all you need is glue. The brains are on a custom-designed Arduino derivative, complete with RoboBrrd doodles on the silkscreen.



The first prototype of RoboBrrd was created with pencils and popsicle sticks. Adafruit sent me the electronics and in return I would make weekly videos about building the robot. People seemed to like the robot, so I kept making newer prototypes that would improve on problems and add more to the design.

Eventually I started working on a laser-cut kit version. I won the WyoLum Open Hardware grant and, with the money, I was able to order PCBs I designed for RoboBrrd.

I had enough money for a flight out to California (for RoboGames and Maker Faire Bay Area) where I was an artist in residence at Evil Mad Scientist Laboratories. It was helpful to be able to use their laser cutter right when a new design was ready. Plus, I was able to build a really old and cool Heathkit.

RoboBrrd chassis

Afterward, I worked on the design a little more. SpikenzieLabs helped laser cut it for me and eventually it was all finished. It was such an awesome feeling to finally have a solid design!

In 2012, RoboBrrd launched on Indiegogo and luckily there were enough friends out there who were able to help the project and back it. They were all very enthusiastic about the project. I was really lucky.

Now I am working on a newer version of the 3-D printed RoboBrrd and some iOS applications that use Bluetooth Low Energy (BLE) to communicate with it. The design has come a long way, and it has been fun to learn many new things from RoboBrrd.

CIRCUIT CELLAR: RoboBrrd has had widespread popularity. The robots have been featured on The Discovery Channel, Forbes, MAKE, and WIRED. To what do you attribute your success?

ERIN: The success of RoboBrrd is attributed to everyone who is enthusiastic about it, especially those who have bought a kit or made their own RoboBrrds. It is always fun to see whenever people make modifications to their RoboBrrds.

All I did was make and deliver the kit. It’s all of the “friends of RoboBrrd” who bring their own creative ideas to make it really shine. Also, from the previous question, the readers can see that I had a lot of help along the way.

Having the robots featured on many websites required some luck. You never know if your e-mail pitch is what the journalists are looking for to cover the robot. I was really lucky that websites featured RoboBrrd; it provides it with a little more credibility.

In my opinion, the quirkiness of RoboBrrd helps as well. Sometimes people view it as the "open-source hardware (OSHW) Furby." It's a robotic bird, and it isn't your regular wheeled robot.

CIRCUIT CELLAR: What was the first embedded system you designed? Where were you at the time? What did you learn from the experience?

ERIN: There were systems that I designed using the LEGO Mindstorms RCX 2.0, but my very first design from scratch was a robot called BubbleBoy. The outer appearance looked like a pink snowman. It sat on a green ice cream container and wore a top hat. It was very rudimentary. At the time I was in Grade 11.

Inside the body sphere were two servos. The servos would push/pull on paper clips that were attached to the head. Inside the head there was a DC motor to spin the top hat around. There was also a smaller DC motor inside the body attached to a hula hoop to wiggle it. The electronics were enclosed in the container. The robot used an Arduino Diecimila microcontroller board (limited-edition prototype version) and some transistors to control the motors from battery power. There was also an LCD to display the robot’s current mood and water and food levels. Buttons on each side of the screen incremented the water and food levels.

There’s a 2009 video of me showing BubbleBoy on Fat Man & Circuit Girl. (Jeri Ellsworth co-hosted the webcast.)

There was not as much documentation online about the Arduino and learning electronics as there is now. I gained many skills from this experience.

The biggest thing I learned from BubbleBoy was how to drive DC motors by using transistors. I also learned how to not mount servos. The hot glue on polystyrene was never rigid enough and kept moving. It was a fun project; the hands-on making of a robot character can really help you kick off making bigger projects.

You can read the entire interview in Circuit Cellar 293 (December 2014).

Robotics & Intelligent Gaming

When Alessandro Giacomel discovered Arduino in 2009, he quickly became hooked. Since then, he’s been designing “little robots” around Arduino and blogging about his work and findings. In this interview, Alessandro tells us about his most interesting projects and shares his thoughts on the future of robotics, 3-D printing, and more.

CIRCUIT CELLAR: How long have you been designing embedded systems, and what sparked your interest?

ALESSANDRO: I have been designing embedded systems for about five years. My interest arose from the possibility of building robots. When I was a kid, I found robots extremely fascinating. The ability to make matter do what we decide always seemed to me one of the main powers granted to man.

CIRCUIT CELLAR: Tell us about your first design.

ALESSANDRO: My first embedded system was an Arduino 2009. The availability of a huge range of shields, sensors, and actuators has enabled me to design many applications at an acceptable price for an amateur like me.


Alessandro’s first robot

I started like many people, with a robot on wheels that moves around avoiding obstacles. It’s a standard robot that almost all beginners build. It’s simple because it is built with only a few components and a standard Arduino 2009. The design included servomotors modified to rotate 360°, connected to the wheels to move the robot, and a servomotor to turn a little head carrying an ultrasonic distance sensor. The distance sensor lets the robot know when it is in front of an obstacle and helps it decide the most convenient way to escape.

In its simplicity, this robot enables one to understand the basics for the development of a microcontroller-based robot: the need to have separate power supplies for the motors’ power circuits and for the microcontroller’s logic, the need to have precise sensor reading timing, and the importance of having efficient algorithms to ensure that the robot moves in the desired mode.
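The decision logic of such an avoider robot fits in a few lines. The sketch below is an illustrative Python version of the general approach, not Alessandro’s Arduino code; the distance threshold, scan angles, and function names are all assumptions made for the example:

```python
# Illustrative obstacle-avoidance loop: read the ultrasonic sensor and,
# when an obstacle is close, compare scanned distances to pick the
# clearer direction. All values here are invented for the example.

THRESHOLD_CM = 20  # assumed minimum clearance before turning

def choose_turn(scan):
    """Given distances (cm) measured at head angles (degrees, 0 = far
    left, 180 = far right), turn toward the most free space."""
    best_angle = max(scan, key=scan.get)
    return "left" if best_angle < 90 else "right"

def step(front_cm, scan):
    """One iteration of the control loop: keep going or pick a turn."""
    if front_cm > THRESHOLD_CM:
        return "forward"
    return choose_turn(scan)

# Obstacle 12 cm ahead, with more room toward the right (angle 150).
print(step(12, {30: 25, 90: 12, 150: 80}))  # -> right
```

A real robot would run this loop continuously, replacing the dictionary with fresh sensor readings taken as the head servo sweeps.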

My first robot took me a long time to build, but I developed all of its elements (hardware and software) myself, and this was important because it let me begin to face the real problems that arise when you are building a robot. Today there are many resources on the Internet that enable you to build a robot by simply replicating a set of steps someone else has described. These guides should be used as a source of inspiration, never followed exactly step by step; otherwise, while it is true that in the end you can build a robot, you don’t own the knowledge of what has been done.

My robot evolved with the ability to speak, thanks to a sound module. When I build a robot the goal is always to experiment with a technology and to have fun. My friends have enjoyed seeing the robot turning around, speaking, and telling funny stories.

CIRCUIT CELLAR: Your blog, Robottini, is described as “little robots with Arduino.” What inspired you to begin the blog?

ALESSANDRO: I strongly believe in sharing knowledge and open-source hardware and software. I thought it was normal to try to share what I was designing when I started to build robots. When I started, I had the benefit of what others had made and published on the Internet. I thought about writing a blog in my language, Italian, but I thought also it would be a good exercise for me to try to write in English and, most importantly, this enabled me to reach a much wider audience.

The site description includes the philosophy at the basis of the blog: small robots built using Arduino. I build small robots because I’m an amateur and my house isn’t very big, so I only build robots that I can put in an armoire. I use Arduino because it is a platform developed in Italy, so it was obvious for me to use it, and it is really a great board for a beginner: inexpensive and robust.


Alessandro’s first robot at the Arduino Day 2011 event

The community has developed thousands of applications that can be reused. When I started the blog in 2011, I had already been building small robots for a few years. In the beginning, finding information was much more complicated, and the few shields available were not cheap. So, I always tried to use “poor” materials (e.g., recovered or recycled). Decreasing the cost of implementation and imagining new purposes for the things already available in a normal house seemed like a good way to work.

My achievements documented in the blog are never step-by-step guides to build the robot. I include a list of components to buy, the source code, and sometimes the wiring diagram. But I never provide a complete guide, since I think everyone should try to build their own robot because, once built, the satisfaction is enormous.

Through my blog I am available to help with problems people encounter when they are building robots, but I think it is important to give people the tools to build, rather than providing detailed explanations. Everyone can learn only by fighting the difficulties, without having someone preparing everything perfectly.

CIRCUIT CELLAR: Robottini obviously includes quite a few robotics projects. Why did you build them? Do you have a favorite?

ALESSANDRO: Many times people ask me what is the meaning of the robots I build. The answer that I give them leaves people puzzled. The answer is this: My robots are useless. They are useful only as fun—as a passion. I’m happy when I see my little son, Stefano, who is three years old, watching and laughing at a robot turning around in our house. But this does not mean I don’t follow a branch of research when I build robots.

Initially, I built robots to understand how the driver for the motors works, the sensors, and the problems related to the logic of the robot. Afterward, the first branch of research was the issue of control, how to set the proportional, integral, derivative (PID) control to follow a line or make a robot that is in balance. This has enabled me to address the management of complex sensors, such as the inertial measurement unit (IMU).
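A minimal discrete PID loop of the kind used for line following or balancing can be sketched in a few lines. The gains and the toy plant below are illustrative assumptions, not values from Alessandro’s robots:

```python
# A minimal discrete PID controller. The gains are illustrative and
# untuned; a real robot would tune them against its own dynamics.

class PID:
    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        # Classic PID: proportional + integral + derivative of the error.
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple toy plant toward the setpoint (0.0) from an offset.
pid = PID(kp=1.2, ki=0.1, kd=0.05)
value = 10.0
for _ in range(100):
    value += pid.update(value, dt=0.01) * 0.01
print(round(value, 2))  # closer to the setpoint than where it started
```

On a microcontroller the same `update()` would run at a fixed rate inside the main loop, with the measurement coming from a line sensor or tilt estimate and the output driving the motors.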

To have a robot balance on two wheels, it is important to measure how much the robot is tilting from the vertical. To do this, a cluster of sensors called an IMU is typically used, based on multi-axis combinations of precision gyroscopes, accelerometers, magnetometers, and pressure sensors. In a simpler version, the IMU uses an accelerometer and a gyroscope, and it is mandatory to use both signals to obtain a correct value of the tilt angle from the vertical (this is called sensor fusion).

The most common method is based on the Kalman filter, a mathematical tool that enables you to combine two or more signals to obtain the value of the angle. But it is highly sophisticated, difficult for an amateur to understand, and requires fairly advanced knowledge of mathematics. A rather simple alternative, called the “complementary filter,” has been proposed in recent years.
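A complementary filter is short enough to show in full. The sketch below illustrates the general technique, not code from Alessandro’s post; the blend factor `ALPHA` and the update shape are typical textbook choices, not his values:

```python
import math

# Complementary filter sketch: high-pass the integrated gyro rate
# (accurate short-term, drifts long-term) and low-pass the accelerometer
# angle (noisy short-term, stable long-term). ALPHA is an assumption.

ALPHA = 0.98

def accel_angle(ax, az):
    # Tilt from vertical estimated from the gravity components.
    return math.degrees(math.atan2(ax, az))

def complementary_filter(angle, gyro_rate_dps, accel_deg, dt):
    # Blend the gyro prediction with the accelerometer measurement.
    return ALPHA * (angle + gyro_rate_dps * dt) + (1 - ALPHA) * accel_deg

# Example: robot held still at 5 degrees from vertical, gyro reads zero.
tilt = accel_angle(math.sin(math.radians(5)), math.cos(math.radians(5)))
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, 0.0, tilt, dt=0.01)
print(round(angle, 1))  # settles near 5.0
```

The appeal over the Kalman filter is plain in the code: one line of arithmetic per update, with a single constant to tune.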

One of the studies I performed and posted on my blog compares, in practice, the signals of the two filters to verify whether the complementary filter can approximate the Kalman filter in typical situations that come up in robotics. This post has had a great following, and I’ve been surprised to see that several university-level scientific publications have linked to it. I only wrote the post because I was curious about a simple, almost trivial method, and it has become helpful to researchers and hobbyists. It has been a pleasure for me.

In the last year, I have followed the trend of art and interaction (i.e., the possibility of building something that can somehow marry art with technology). It was the theme of the stall I had at Maker Faire Europe in Rome, Italy, in October 2013. Arduino is an electronic circuit without a heart and without a soul. Can an Arduino be an artist? I’m trying to do something with Arduino that could be “art.” The arts include painting, poetry, music, sculpture, and so on. I’m trying to do something in different fields of art.

My first experiment is the Dadaist Poetry Box, which is a box capable of composing and printing Dadaist poems. It’s made with an Arduino and uses a receipt printer to print the poems. The box uses an algorithm to compose the poems autonomously. You push the button, and here is your Dadaist poem.
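As a sketch of how such an autonomous composition algorithm might work, here is a toy version in the spirit of Tzara’s cut-up recipe: draw words at random from a pool, line by line. The word list, line shape, and function name are invented for this example and are not the actual Poetry Box firmware:

```python
import random

# Toy Dadaist composition: pull words "from a hat" at random.
# Word pool and poem shape are illustrative assumptions.

WORDS = ["moon", "gear", "whisper", "copper", "dream", "circuit",
         "feather", "night", "spark", "river"]

def dadaist_poem(lines=3, words_per_line=4, seed=None):
    """Compose a poem of random words; a seed makes it repeatable."""
    rng = random.Random(seed)
    return "\n".join(
        " ".join(rng.choice(WORDS) for _ in range(words_per_line))
        for _ in range(lines)
    )

print(dadaist_poem(seed=42))
```

On the real box, a button press would trigger composition and the result would go straight to the receipt printer instead of the console.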


Dadaist poetry box design

Normally, the poem is a valuable asset, the result of an intimate moment when the poet transposes on paper the emotions of his soul. It is an inspired act, an act of concentration and transport. It’s not immediate. The poem box instead is trivial, it seems almost “anti-poem.” But it’s not; it’s a Dadaist poem. A user can push the button and have an original poem. I like the machine because it gives everyone something material to take home. In this way, the experience of interaction with the machine goes beyond the moment.

Another of my favorite robots is one that is capable of drawing portraits. I’ve never been good at drawing, and I’ve always been envious of those who can easily use a pencil to make a portrait. So I tried using my technical skills to fill this gap.


Portrait-drawing robot

The search for an algorithm that, starting from a picture, can detect the most important lines of the face was particularly long and difficult. I used the OpenCV open-source libraries for computer vision and image processing, which are very powerful but hard to handle. Installing the libraries is not a simple undertaking, and using them is even more complicated. I used OpenCV for Processing. Processing is an open-source programming language and integrated development environment (IDE) built for the electronic arts, new media art, and visual design communities, with the purpose of teaching the fundamentals of computer programming in a visual context.

Initially, the algorithm found facial lines using standard edge-detection methods. I tried the Canny edge detector, the Sobel edge detector, and all the other main edge-detection algorithms, but none of them proved adequate for drawing a face. Then I changed course and used the Laplacian filter with a threshold. I think I reached a good result because it takes less than 10 minutes to draw a portrait, which enables me to take pictures of people and make their portraits before they lose their patience.
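The Laplacian-with-threshold step can be illustrated with plain NumPy so the idea is visible (in practice OpenCV performs the same convolution much faster). The kernel and threshold below are standard illustrative choices, not the exact parameters used for the portraits:

```python
import numpy as np

# Laplacian edge detection with a threshold: the Laplacian responds
# strongly wherever brightness changes abruptly, and the threshold
# keeps only the strongest responses as drawable lines.

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_edges(gray, threshold):
    """Convolve a grayscale image (2-D array) with the Laplacian kernel
    and keep only responses whose magnitude exceeds the threshold."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i+3, j:j+3] * LAPLACIAN)
    return (np.abs(out) > threshold).astype(np.uint8)

# Tiny image with a vertical brightness step: edges fire at the step.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
print(laplacian_edges(img, threshold=100))
```

The threshold is what makes the output usable for a pen plotter: instead of a grayscale edge map, the robot gets a binary set of lines to trace.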

CIRCUIT CELLAR: What new technologies excite you and why?

ALESSANDRO: I work almost strictly with Arduino microcontrollers. I was excited by the arrival of Linux-embedded mini-PCs (e.g., the Raspberry Pi, the pcDuino, and the BeagleBone Black). Naturally, I’m very intrigued by the new Arduino Tre, which is a mini-PC with Linux joined with an Arduino Leonardo. Combining a PC’s processing power under Linux with the real-time management of sensors and actuators by an Arduino is an interesting road. It offers the possibility of managing the real-time processing of video streams (e.g., through the OpenCV libraries) along with the ability to acquire signals from analog sensors and drive motors. For example, this enables one to have a completely autonomous 3-D printer that performs its own slicing and management. It also opens up new perspectives in robotics and computer vision. The main limitation of today’s embedded systems is their limited processing capacity. The ability to have on the same card a Linux system, with its world of applications and drivers already available, linked to the ability to manage physical devices brings a revolution. And I’m already excited to see the next results.

Read the complete interview in Circuit Cellar 292 (November 2014).

Book: Advanced Control Robotics

When it comes to robotics, the future is now! With the ever-increasing demand for robotics applications—from home control systems to animatronic toys to unmanned planet rovers—it’s an exciting time to be a roboticist. Whether you’re a weekend DIYer, a computer science student, or a professional engineer, you’ll find this book to be a valuable reference tool.

Advanced Control Robotics, by Hanno Sander

It doesn’t matter if you’re building a line-following robot toy or tasked with designing a mobile system for an extraterrestrial exploratory mission: the more you know about advanced robotics technologies, the better you’ll fare at your workbench. Hanno Sander’s Advanced Control Robotics (Elektor/Circuit Cellar, 2014) is intended to help roboticists of various skill levels take their designs to the next level with microcontrollers and the know-how to implement them effectively.

Advanced Control Robotics simplifies the theory and best practices of advanced robot technologies. You’re taught basic embedded design theory and presented with handy code samples, essential schematics, and valuable design tips (from construction to debugging).

Sponsored by Circuit Cellar — Read the Table of Contents for Advanced Control Robotics. Ready to start learning? Purchase a copy of Advanced Control Robotics today!

You will learn about:

  • Control Robotics: robot actions, servos, and stepper motors
  • Embedded Technology: microcontrollers and peripherals
  • Programming Languages: machine level (Assembly), low level (C/BASIC/Spin), and human (12Blocks)
  • Control Structures: functions, state machines, multiprocessors, and events
  • Visual Debugging: LED/speaker/gauges, PC-based development environments, and test instruments
  • Output: sounds and synthesized speech
  • Sensors: compass, encoder, tilt, proximity, artificial markers, and audio
  • Control Loop Algorithms: digital control, PID, and fuzzy logic
  • Communication Technologies: infrared, sound, and XML-RPC over HTTP
  • Projects: line following with vision and pattern tracking
Hanno Sander at Work

About the author: Hanno Sander earned a degree in Computer Science from Stanford University, where he built one of the first hybrid cars, collaborated on a microsatellite, and studied artificial intelligence. He later founded a startup to develop customized information services and then transitioned to product marketing in Silicon Valley with Oracle, Yahoo, and Verity. Today, Hanno’s company, HannoWare, seeks to make sophisticated technology—robots, programming languages, debugging tools, and oscilloscopes—more accessible. Hanno lives in Christchurch, New Zealand, where he enjoys his growing family and focuses on his passion of improving education with technology.

Self-Reconfiguring Robotic Systems & M-Blocks

Self-reconfiguring robots are no longer science fiction. Researchers at MIT are rapidly innovating shape-shifting robotic systems. In the August 2014 issue of Circuit Cellar, MIT researcher Kyle Gilpin presents M-Blocks, which are 50-mm cubic modules capable of controlled self-reconfiguration.

The creation of autonomous machines capable of shape-shifting has been a long-running dream of scientists and engineers. Our enthusiasm for these self-reconfiguring robots is fueled by fantastic science fiction blockbusters, but it stems from the potential that self-reconfiguring robots have to revolutionize our interactions with the world around us.

Source: Kyle Gilpin

Imagine the convenience of a universal toolkit that can produce even the most specialized tool on demand in a matter of minutes. Alternatively, consider a piece of furniture, or an entire room, that could change its configuration to suit the personal preferences of its occupant. Assembly lines could automatically adapt to new products, and construction scaffolding could build itself while workers sleep. At MIT’s Distributed Robotics Lab, we are working to make these dreams into reality through the development of the M-Blocks.

The M-Blocks are a set of 50-mm cubic modules capable of controlled self-reconfiguration. Each M-Block is an autonomous robot that can not only move independently, but can also magnetically bond with other M-Blocks to form larger reconfigurable systems. When part of a group, each module can climb over and around its neighbors. Our goal is that a set of M-Blocks, dispersed randomly across the ground, could locate one another and then independently move to coalesce into a macro-scale object, like a chair. The modules could then reconfigure themselves into a sphere and collectively roll to a new location. If, in the process, the collective encounters an obstacle (e.g., a set of stairs to be ascended), the sphere could morph into an amorphous collection in which the modules climb over one another to surmount the obstacle.  Once they have reached their final destination, the modules could reassemble into a different object, like a desk.

The M-Blocks move and reconfigure by pivoting about their edges using an inertial actuator. The energy for this actuation comes from a 20,000-RPM flywheel contained within each module. Once the motor speed has stabilized, a servomotor-driven, self-tightening band brake decelerates the flywheel to a complete stop in 15 ms. All of the momentum that had accumulated in the flywheel is transferred to the frame of the M-Block. Consequently, the module rolls forward from one face to the next, or, if the flywheel velocity is high enough, it rapidly shoots across the ground or even jumps several body lengths through the air. (Refer to the project video to watch the cubes move.)
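A back-of-the-envelope calculation shows why dumping the flywheel’s momentum so quickly moves the cube. Only the 20,000-RPM speed and the 15-ms braking time come from the text; the flywheel mass, radius, and solid-disc model are invented, illustrative values:

```python
import math

# Rough inertial-actuation estimate. The RPM and braking time are from
# the article; mass, radius, and the solid-disc inertia model are
# illustrative assumptions, not the M-Blocks' actual specifications.

rpm = 20000.0
omega = rpm * 2 * math.pi / 60        # flywheel speed in rad/s
mass = 0.05                           # kg (assumed)
radius = 0.02                         # m (assumed)
inertia = 0.5 * mass * radius**2      # solid disc: I = (1/2) m r^2

momentum = inertia * omega            # angular momentum, kg*m^2/s
dt = 0.015                            # 15-ms braking time from the text
torque = momentum / dt                # average torque dumped into the frame

print(f"angular momentum: {momentum:.4f} kg*m^2/s")
print(f"average braking torque: {torque:.2f} N*m")
```

Even with these modest assumed dimensions, stopping the flywheel in 15 ms applies on the order of a newton-meter to the frame, which is plenty to pivot, roll, or launch a 50-mm cube.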

While the M-Blocks are capable of independent movement, their true potential is only realized when many modules operate as a group. Permanent magnets on the outside of each M-Block serve as un-gendered connectors. In particular, each of the 12 edges holds two cylindrical magnets that are captive, but free to rotate, in a semi-enclosing cage. These magnets are polarized through their radii, not through their long axes, so as they rotate, they can present either magnetic pole. The benefit of this arrangement is that as two modules are brought together, the magnets will automatically rotate to attract. Furthermore, as one and then two additional M-Blocks are added to form a 2 × 2 grid, the magnets will always rotate to realign and accommodate the additional modules.

The same cylindrical magnets that bond neighboring M-Blocks together form excellent pivot axes, about which the modules may roll over and around one another. We have shown that the modules can climb vertically over other modules, move horizontally while cantilevered from one side, traverse while suspended from above, and even jump over gaps. The permanent magnet connectors are completely passive, requiring no control and no planning. Because all of the active components of an M-Block are housed internally, the modules could be hermetically sealed, allowing them to operate in extreme environments where other robotic systems may fail.

While we have made significant progress, many exciting challenges remain. In the current generation of modules, there is only a single flywheel, and it is fixed to the module’s frame, so the modules can only move in one direction along a straight line. We are close to publishing a new design that enables the M-Blocks to move in three dimensions, makes the system more robust, and ensures that the modules’ movements are highly repeatable. We also hope to build new varieties of modules that contain cameras, grippers, and other specialized, task-specific tools. Finally, we are developing algorithms that will allow for the coordinated control of large ensembles of hundreds or thousands of modules. With this continued development, we are optimistic that the M-Blocks will be able to solve a variety of practical challenges that are, as of yet, largely untouched by robotics.

Kyle Gilpin


Kyle Gilpin, PhD, is a Postdoctoral Associate in the Distributed Robotics Lab at the Massachusetts Institute of Technology (MIT) where he is collaborating with Professor Daniela Rus and John Romanishin to develop the M-Blocks. Kyle works to improve communication and control in large distributed robotic systems. Before earning his PhD, Kyle spent two years working as a senior electrical engineer at a biomedical device start-up. In addition to working for MIT, he owns a contract design and consulting business, Crosscut Prototypes. His past projects include developing cellular and Wi-Fi devices, real-time image processing systems, reconfigurable sensor nodes, robots with compliant SMA actuators, integrated production test systems, and ultra-low-power sensors.

Circuit Cellar 289 (August 2014) is now available.

24-Channel Digital I/O Interface for Arduino & Compatibles

SCIDYNE Corp. recently expanded its product line by developing a digital I/O interface for Arduino hardware. The DIO24-ARD makes it easy to connect to solid-state I/O racks, switches, relays, LEDs, and many other commonly used peripheral devices. Target applications include industrial control systems, robotics, IoT, security, and education.

The board provides 24 nonisolated I/O channels across three 8-bit ports. Each channel’s direction can be individually configured as either an input or an output using standard SPI library functions. Outputs are capable of sinking 85 mA at 5 V. External devices attach by means of a 50-position ribbon-cable-style header.

The DIO24-ARD features stack-through connectors with long leads, allowing systems to be built around multiple Arduino shields. It costs $38.

[Source: SCIDYNE Corp.]