Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

Microsoft announced on March 8 the availability of Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and the Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with the Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that lie between two distance thresholds from the camera. Because the depth image already provides the distance measurement, all the pixels belonging to the subject will fall within a narrow range of distances, while other objects in the scene will be outside of it. But I wanted to have just the contour line of a person, not all the pixels that belong to that person's body.

OpenCV is a powerful computer vision library. I used it for my project because of its blobs function, which extracts the contours of the different isolated objects in a scene. As my image would contain only one object (the person standing in front of the camera), the blobs function would return the exact list of coordinates of the person's contour, which was what I needed. Note that this function makes a heavy piece of image processing easy for the user. It provides not just one contour but a list of all the different objects detected in the image. The caller can also specify whether holes inside a blob are permitted, as well as the minimum and maximum areas of detected blobs. For my project, I was interested only in the biggest blob returned, which is the one with index zero, as blobs are stored in decreasing order of area in the array the function returns.
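In code, the windowing-and-blobs step Sanchez describes might look like the sketch below. It substitutes OpenCV's findContours for the blobs function he mentions (the idea is the same: threshold the depth image, then keep the biggest detected object), and the depth window values are purely illustrative:

import cv2
import numpy as np

NEAR_MM, FAR_MM = 800, 1200  # illustrative distance window, in millimeters

def largest_silhouette(depth_mm):
    # Keep only the pixels whose distance falls inside the window.
    mask = ((depth_mm > NEAR_MM) & (depth_mm < FAR_MM)).astype(np.uint8) * 255
    # Extract the outer contour of every isolated object in the mask
    # (OpenCV 4 return signature assumed).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    # The person should be the biggest object in view, so keep the
    # contour enclosing the largest area.
    biggest = max(contours, key=cv2.contourArea)
    return biggest.reshape(-1, 2)  # list of (x, y) contour coordinates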

Though it is not a fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise along the edges of the contour. Filtering a bitmap is easily accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing the points that contribute little to its overall shape. One algorithm that does this is the Douglas-Peucker recursive contour-simplification algorithm. It starts with the two endpoints and accepts the single in-between point whose orthogonal distance from the line connecting those endpoints is largest, provided that distance exceeds a given threshold (if no point exceeds the threshold, none is accepted). The process then repeats recursively on the segments the new point creates, building up the list of accepted points: those that contribute the most to the general contour for a user-provided threshold. The larger the threshold, the rougher the resulting contour will be.
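A compact recursive implementation of that description might look like this (OpenCV packages the same algorithm as cv2.approxPolyDP; the version below simply makes the recursion explicit):

import numpy as np

def douglas_peucker(points, threshold):
    # points: an (N, 2) array of contour coordinates, endpoints included.
    a, b = points[0].astype(float), points[-1].astype(float)
    dx, dy = b - a
    length = np.hypot(dx, dy) + 1e-12
    # Orthogonal distance of every point from the line through a and b.
    dists = np.abs(dx * (points[:, 1] - a[1]) - dy * (points[:, 0] - a[0])) / length
    i = int(np.argmax(dists))
    if dists[i] <= threshold:
        return np.array([a, b])  # no point sticks out far enough
    # Keep the farthest point and recurse on the two halves it creates.
    left = douglas_peucker(points[:i + 1], threshold)
    right = douglas_peucker(points[i:], threshold)
    return np.vstack([left[:-1], right])  # drop the duplicated middle point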

After simplification, the human silhouettes look better and the noise is gone, but they also look a bit synthetic. The last step I performed was a cubic-spline interpolation, so the contour becomes a set of curves passing through the points of the simplified contour. It may seem twisted to simplify first and then add points back with spline interpolation, but doing it in this order produces a more visually pleasing and curvy result, which was my goal.
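The final smoothing pass could be written along these lines, here using SciPy's splprep and splev to fit a closed cubic B-spline through the simplified points (the choice of SciPy is my assumption; the article may use different routines):

import numpy as np
from scipy.interpolate import splprep, splev

def smooth_contour(simplified, samples=400):
    # Fit a closed (periodic) cubic spline that passes through every
    # point of the simplified contour (s=0 disables smoothing).
    tck, _ = splprep([simplified[:, 0], simplified[:, 1]], s=0, per=True)
    # Resample the curve densely to get the final, curvy silhouette.
    u = np.linspace(0.0, 1.0, samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])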


(Source: Miguel Sanchez)
(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

Aerial Robot Demonstration Wows at TEDTalk

In a TEDTalk Thursday, engineer Vijay Kumar presented an exciting innovation in the field of unmanned aerial vehicle (UAV) technology. He detailed how a team of UPenn engineers retrofitted compact aerial robots with embedded technologies that enable them to swarm and operate as a team to take on a variety of remarkable tasks. A swarm can complete construction projects, orchestrate a nine-instrument piece of music, and much more.

The 0.1-lb aerial robot Kumar presented on stage—built by UPenn students Alex Kushleyev and Daniel Mellinger—consumed approximately 15 W, he said. The 8-inch design—which can operate outdoors or indoors without GPS—featured onboard accelerometers, gyros, and processors.

“An on-board processor essentially looks at what motions need to be executed, and combines these motions, and figures out what commands to send to the motors 600 times a second,” Kumar said.

Watch the video for the entire talk and demonstration. Nine aerial robots play six instruments at the 14:49 mark.

Q&A: Hanno Sander on Robotics

I met Hanno Sander in 2008 at the Embedded Systems Conference in San Jose, CA. At the time, Hanno was at the Parallax booth demonstrating a Propeller-based, two-wheeled balancing robot. Several months later, we published an article he wrote about the project in the March 2009 issue. Today, Hanno runs HannoWare and works with school systems to improve youth education by focusing on technological innovation in classrooms.

Hanno Sander at Work

The March issue of Circuit Cellar, which will hit newsstands soon, features an in-depth interview with Hanno. It’s an inspirational story for experienced and novice roboticists alike.

Hanno Sander's Turing machine debugged with ViewPort

Here’s an excerpt from the interview:

HannoWare is my attempt to share my hobbies with others while keeping my kids fed and wife happy. It started with me simply selling software online but is now a business developing and selling software, hardware, and courseware directly and through distributors. I get a kick out of collaborating with top engineers on our projects and love hearing from customers about their success.

Our first product was the ViewPort development environment for the Parallax Propeller, which features traditional tools, such as line-by-line stepping and breakpoints, as well as real-time graphs of variables and pin I/O states to help developers debug their firmware. ViewPort has been used for applications ranging from creating a hobby Turing machine to calibrating a resolver for a 6-MW motor. 12Blocks is a visual programming language for hobby microcontrollers.

The drag-n-drop style of programming with customizable blocks makes it ideal for novice programmers. Like ViewPort, 12Blocks uses rich graphics to help programmers understand what’s going on inside the processor.

The ability to view and edit the underlying source code simplifies the transition to text languages like BASIC and C when appropriate. TBot is the result of an Internet-only collaboration with Chad George, a very talented roboticist. Our goal for the robot was to excel at typical robot challenges in its stock configuration while also allowing users to customize the platform to their needs. A full set of sensors and actuators accomplishes the former, while the metal frame, expansion ports, and software libraries satisfy the latter.

Click here to read the entire interview.


Robot Nav with Acoustic Delay Triangulation

Building a robot is a rite of passage for electronics engineers. And thus this magazine has published dozens of robotics-related articles over the years.

In the March issue, we present a particularly informative article on robot navigation. Larry Foltzer tackles robot positioning with acoustic delay triangulation. It's more of a theoretical piece than a project article, but we're confident you'll find it intriguing and useful.

Here’s an excerpt from Foltzer’s article:

“I decided to explore what it takes, algorithmically speaking, to make a robot that is capable of discovering its position on a playing field and figuring out how to maneuver to another position within the defined field of play. Later on, I will build a minimalist platform to test the algorithms' performance.

In the interest of hardware simplicity, my goal is to use as few sensors as possible. I will use ultrasonic sensors to determine range to ultrasonic beacons located at the corners of the playing field and wheel-rotation sensors to measure distance traversed, if wheel-rotation rate times time proves to be unreliable.

From a software point of view, the machine must be able to determine robot position on a defined playing field, determine robot position relative to the target’s position, determine robot orientation or heading, calculate robot course change to approach target position, and periodically update current position and distance to the target. Because of my familiarity with Microchip Technology’s 8-bit microcontrollers and instruction sets, the PIC16F627A is my choice for the microcontrollers (mostly because I have them in my inventory).

To date, the four goals listed are complete in terms of algorithm development and code, and they are the main subjects of this article. Going forward, the focus must shift to the hardware side, including software integration to test beyond pure simulation.

A brief survey of ultrasonic ranging sensors indicates that most commercially available units have a range capability of 20’ or less. This is for a sensor type that detects the echo of its own emission. However, in this case, the robot’s sensor will not have to detect its own echoes, but will instead receive the response to its query from an addressable beacon that acts like an active mirror. For navigation purposes, these mirrors are located at three of the four corners of the playing field. By using active mirrors or beacons, received signal strength will be significantly greater than in the usual echo ranging situation. Further, the use of the active mirror approach to ranging should enable expansion of the effective width of the sensor’s beam to increase the sensor’s effective field of view, reducing cost and complexity.

Taking the former into account, I decided the size of the playing field will be 16’ on a side and subdivided into 3” squares forming an (S × S) = (64 × 64) = (2^6 × 2^6) unit grid. I selected this size to simplify the binary arithmetic used in the calculations. For the purpose of illustration here, the target is considered to be at the center of the playing field, but it could very well be anywhere within the defined boundaries of the playing field.

Figure 1: Square playing field (Source: Larry Foltzer CC260)

Referring to Figure 1, the corners of the square playing field are labeled in clockwise order from A to D. Ultrasonic sonar transceiver beacons/active mirrors are placed at three of the corners of the playing field, at the corners marked A, B, and D.”
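As a rough editorial sketch of the active-mirror ranging Foltzer describes, the one-way distance falls out of the measured round-trip delay once the beacon's fixed turnaround time is subtracted. Both constants below are placeholders, not values from the article:

SPEED_OF_SOUND = 13504.0    # inches per second in air, approximately
BEACON_TURNAROUND = 0.0005  # hypothetical beacon response latency, in seconds

def range_from_delay(round_trip_s):
    # One-way range = speed * (round trip minus beacon turnaround) / 2.
    return SPEED_OF_SOUND * (round_trip_s - BEACON_TURNAROUND) / 2.0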
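With ranges to the three beacons in hand, position follows from simple trilateration. The corner coordinates below, A = (0, 0), B = (S, 0), and D = (0, S), are assumed for illustration only; subtracting the beacon circle equations pairwise makes each coordinate linear:

S = 192.0  # field side in inches (16 feet)

def trilaterate(r_a, r_b, r_d):
    # Circles: x^2 + y^2 = r_a^2 and (x - S)^2 + y^2 = r_b^2.
    # Subtracting the second from the first eliminates y and isolates x;
    # the A and D circles do the same for y.
    x = (r_a**2 - r_b**2 + S**2) / (2.0 * S)
    y = (r_a**2 - r_d**2 + S**2) / (2.0 * S)
    return x, y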
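Finally, the course-change step from his task list can reduce to two arctangents: a heading from the last two position fixes, a bearing to the target, and the wrapped difference between them (again a sketch, not Foltzer's code):

import math

def course_change(prev, cur, target):
    # Heading from the last two position fixes; bearing to the target.
    heading = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
    bearing = math.atan2(target[1] - cur[1], target[0] - cur[0])
    # Wrap the difference into [-pi, pi) so the turn is always the short way.
    return (bearing - heading + math.pi) % (2.0 * math.pi) - math.pi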

The issue in which this article appears will be available here in the coming days.