“RoboCup” Soccer: Robot Designs Compete in Soccer Matches

Roboticists and soccer fans from around the world converged on Eindhoven, The Netherlands, from April 25–29 for the RoboCup Dutch Open. The event was an engaging combination of sports and electronics engineering.

Soccer action at the RoboCup Dutch Open

Since I have dozens of colleagues based in The Netherlands, I decided to see if someone would provide event coverage for our readers and members. Fortunately, TechTheFuture.com’s Tessel Rensenbrink was available and willing to cover the event. She reports:

Attending the RoboCup Dutch Open is like taking a peek into the future. Teams of fully autonomous robots compete against each other in a soccer game. The matches are as engaging as watching humans compete in sports, and the teams even display distinct characters: the German bots suck at penalties, and the Iranian bots are a rough bunch that frequently body-check their opponents.

The Dutch Open was held in Eindhoven, The Netherlands, from April 25–29. It is part of RoboCup, a worldwide educational initiative that promotes robotics and artificial-intelligence research. The soccer tournaments serve as a test bed for developments in robotics and help raise the interest of the general public. All new discoveries and techniques are shared among the teams to support rapid development.
The ultimate goal is to have a fully autonomous humanoid robot soccer team defeat the winner of the human World Cup by 2050.

In Eindhoven, the competition was between teams from the Middle Size Robot League. The bots are 80 cm (2.6 ft) tall and 50 cm (1.6 ft) in diameter, and they move around on wheels. Each has an arm with small wheels to control the ball and a kicker to shoot. Because the hardware is largely standardized, the development teams have to differentiate themselves with software.

Once the game starts, the developers aren’t allowed to aid or moderate the robots, so the bots carry all the hardware they need to play soccer autonomously. Each is mounted with a camera and a laser scanner to locate the ball and determine distances. A Wi-Fi network allows the robots on a team to communicate with each other and coordinate strategy.

The game is played on a scaled-down version of a human soccer field. Playing time is limited to two 15-minute halves, and each team fields five players. If a robot does not function properly, it may be taken off the field for a reset while the game continues. A human referee officiates, and his or her decisions are communicated to the bots over the Wi-Fi network.

The Dutch Open final pitted home team TechUnited against MRL from Iran. The Dutch bots scored their first goal within minutes of kickoff, to the delight of the predominantly orange-clad audience. Shortly thereafter, a TechUnited bot went renegade and had to be taken off the field for a reset. Even a bot down, the Dutch scored again. When the team increased its lead to 3–0, the match seemed all but won. But MRL came back strong in the second half, putting everyone on the edge of their seats by scoring two goals.

When the referee signaled the end of the game, the score was 3–2 in TechUnited’s favor. By winning the tournament, the Dutch have established themselves as favorites for the RoboCup World Cup, to be held in Mexico in June. Maybe, just maybe, the Dutch will finally bring home a soccer World Cup trophy.

The following video shows a match between The Netherlands and Iran. The Netherlands won 2–1.

TechTheFuture.com is part of the Elektor group. 


Design West Update: Intel’s Computer-Controlled Orchestra

It wasn’t the Blue Man Group making music by shooting small rubber balls at pipes, xylophones, vibraphones, cymbals, and various other sound-making instruments at Design West in San Jose, CA, this week. It was Intel and its collaborator Sisu Devices.

Intel's "Industrial Controller in Concert" at Design West, San Jose

The innovative Industrial Controller in Concert system on display featured seven Atom processors, four operating systems, 36 paintball hoppers, 2,300 rubber balls, a video camera for motion sensing, a digital synthesizer, a multi-touch display, and more. PVC tubes connect the various instruments.

Intel's "Industrial Controller in Concert" features seven Atom processors 2300

Once running, the $160,000 system played a 2,372-note song and captivated the Design West audience. The nearby photo shows the system on the conference floor.

Click here to learn more and watch a video of the computer-controlled orchestra in action.

Build a CNC Panel Cutter Controller

Want a CNC panel cutter and controller for your lab, hackspace, or workspace? James Koehler of Canada built an NXP Semiconductors mbed-based system to control a three-axis milling machine, which he uses to cut panels for electronic equipment. You can customize one yourself.

Panel Cutter Controller (Source: James Koehler)

According to Koehler:

Modern electronic equipment often requires front panels with large cut-outs for LCDs, for meters and, in general, openings more complicated than can be made with a drill. It is tedious to do this by hand and difficult to achieve a nice finished appearance. This controller allows it to be done simply and quickly, and to be replicated exactly.

Koehler’s design is an interesting alternative to a PC program. The self-contained controller enables him to run a milling machine either manually or automatically (following a script) without having to clutter his workspace with a PC. It’s both effective and space-saving!

The Controller Setup (Source: James Koehler)

How does it work? The design controls three stepping motors.

The Complete System (Source: James Koehler)

Inside the controller are a power supply and a PCB, which carries the NXP mbed module plus the necessary interface circuitry and a socket for an SD card.

The Controller (Source: James Koehler)

Koehler explains:

In use, a piece of material for the panel is clamped onto the milling machine table and the cutting tool is moved to a starting position using the rotary encoders. Then the controller is switched to its ‘automatic’ mode and a script on the SD card is followed to cut the panel.

A very simple ‘language’ is used for the script: go to any particular (x, y) position, lift the cutting tool, lower the cutting tool, cut a rectangle of any dimension, cut a circle of any dimension, and so on. More complex instruction sequences, such as those needed to cut the rectangular opening plus four mounting holes for an LCD, are just combinations of those simple instructions, called macros; every new device (meter mounting holes, LCD mounts, etc.) has its own macro. The complete script for a particular panel can be any combination of simple commands plus macros.

The milling machine, a Taig ‘micro mill’ with stepping motors, is shown in Figure 2. In its ‘manual’ mode, the system can be used as a conventional three-axis mill controlled via the rotary encoders. The absolute position of the cutting tool is displayed in units of inches, millimeters, or thousandths of an inch.
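Koehler’s exact script syntax isn’t spelled out in the excerpt, but the command set he describes maps naturally onto a tiny interpreter. Here’s a minimal C++ sketch of the idea; the command names (GOTO, UP, DOWN, RECT, CIRCLE), the file name, and the motion routines are hypothetical stand-ins, not Koehler’s actual format:

```cpp
// Minimal sketch of a panel-cutting script interpreter in the spirit of
// Koehler's description. Command names and syntax are assumptions.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Stand-ins for the routines that would step the three motors.
void moveTo(double x, double y)  { std::cout << "move to " << x << ", " << y << "\n"; }
void toolUp()                    { std::cout << "tool up\n"; }
void toolDown()                  { std::cout << "tool down\n"; }
void cutRect(double w, double h) { std::cout << "cut rect " << w << " x " << h << "\n"; }
void cutCircle(double r)         { std::cout << "cut circle r = " << r << "\n"; }

int main() {
    std::ifstream script("panel.txt");   // the real design reads from the SD card
    std::string line;
    while (std::getline(script, line)) {
        std::istringstream in(line);
        std::string cmd;
        if (!(in >> cmd) || cmd[0] == '#') continue;   // skip blanks and comments
        if      (cmd == "GOTO")   { double x, y; in >> x >> y; moveTo(x, y); }
        else if (cmd == "UP")     { toolUp(); }
        else if (cmd == "DOWN")   { toolDown(); }
        else if (cmd == "RECT")   { double w, h; in >> w >> h; cutRect(w, h); }
        else if (cmd == "CIRCLE") { double r;    in >> r;      cutCircle(r); }
        else { std::cerr << "unknown command: " << cmd << "\n"; }
    }
}
```

A macro such as “LCD cutout plus four mounting holes” would then simply be a named sequence of these primitives that the interpreter expands in place.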

Click here to read Koehler’s project abstract. Click here to read his complete documentation PDF, which includes block diagrams, schematics, and more.

This project won Third Place in the 2010 NXP mbed Design Challenge and is posted as per the terms of the Challenge.



Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

Microsoft announced on March 8 the availability of Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and a Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with the Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that lie between two distance thresholds from the camera. Because the depth image already provides the distance measurement, all the pixels of the subject will fall within a small range of distances, while other objects in the scene will be outside of it. But I wanted to have just the contour line of a person, not all the pixels belonging to that person’s body.

OpenCV is a powerful computer vision library. I used it for my project because of its blobs function, which extracts the contours of the different isolated objects in a scene. As my image would contain only one object (the person standing in front of the camera), the blobs function would return the exact list of coordinates of the contour of the person, which was what I needed. Note that this function is heavy image processing made easy for the user: it provides not just one contour, but a list of all the different objects detected in the image. It can also specify whether holes inside a blob are permitted, as well as the minimum and maximum areas of detected blobs. For my project, though, I am only interested in the biggest blob returned, which will be the one with index zero, as blobs are stored in decreasing order of area in the array returned by the blobs function.
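For readers who want to experiment, the same isolate-then-extract approach can be roughed out with OpenCV’s cv::findContours standing in for the blobs function Sanchez mentions. The depth window below (1.0 m to 1.5 m) is an arbitrary value chosen for illustration:

```cpp
// Isolate pixels inside a depth window, then keep the largest contour,
// which should be the person standing in front of the Kinect.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> largestSilhouette(const cv::Mat& depthMm) {
    // Keep only pixels between 1,000 mm and 1,500 mm from the camera.
    cv::Mat mask;
    cv::inRange(depthMm, cv::Scalar(1000), cv::Scalar(1500), mask);

    // Extract the outer contour of every isolated object in the mask.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    // The subject should be the blob with the largest area.
    size_t best = 0;
    double bestArea = 0;
    for (size_t i = 0; i < contours.size(); ++i) {
        double area = cv::contourArea(contours[i]);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return contours.empty() ? std::vector<cv::Point>() : contours[best];
}
```

Sanchez continues: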

Though it is not the fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise along the edges of the contour. Filtering a bitmap can easily be accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing certain points. A clever algorithm can do this by removing those points that are close enough to the overall contour line. One such algorithm is the Douglas-Peucker recursive contour-simplification algorithm. The algorithm starts with the two endpoints and accepts one point in between whose orthogonal distance from the line connecting the first two points is larger than a given threshold. Only the point with the largest such distance is selected (or none, if the threshold is not met). The process is repeated recursively, as new points are added, to create the list of accepted points (those that contribute the most to the general contour given a user-provided threshold). The larger the threshold, the rougher the resulting contour will be.
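OpenCV provides this algorithm as cv::approxPolyDP, but the recursion Sanchez outlines is compact enough to write out in full. A self-contained sketch:

```cpp
// Douglas-Peucker simplification, following the recursion described above:
// keep the endpoints, find the in-between point farthest from the chord,
// and recurse on both halves only if that distance exceeds the threshold.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Orthogonal distance from p to the line through a and b.
static double perpDist(const Pt& p, const Pt& a, const Pt& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

static void simplify(const std::vector<Pt>& pts, size_t lo, size_t hi,
                     double eps, std::vector<Pt>& out) {
    double maxDist = 0.0;
    size_t idx = lo;
    for (size_t i = lo + 1; i < hi; ++i) {
        double d = perpDist(pts[i], pts[lo], pts[hi]);
        if (d > maxDist) { maxDist = d; idx = i; }
    }
    if (maxDist > eps) {          // keep the farthest point, recurse on both halves
        simplify(pts, lo, idx, eps, out);
        out.push_back(pts[idx]);
        simplify(pts, idx, hi, eps, out);
    }                             // otherwise, every in-between point is dropped
}

std::vector<Pt> douglasPeucker(const std::vector<Pt>& pts, double eps) {
    if (pts.size() < 2) return pts;
    std::vector<Pt> out;
    out.push_back(pts.front());
    simplify(pts, 0, pts.size() - 1, eps, out);
    out.push_back(pts.back());
    return out;
}
```

The larger eps is, the rougher the simplified contour, exactly as Sanchez notes. He continues: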

Simplifying the contour makes the human silhouettes look better and removes the noise, but it leaves them looking a bit synthetic. The last step I took was to perform a cubic-spline interpolation so the contour becomes a set of curves between the points of the simplified contour. It may seem twisted to simplify first and then add points back through spline interpolation, but doing so creates a more visually pleasing, curvy result, which was my goal.
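Sanchez doesn’t say which spline formulation he used, but a Catmull-Rom spline is one common cubic interpolant that passes through every control point, which matches the effect he describes. A minimal sketch under that assumption:

```cpp
// Catmull-Rom evaluation: a cubic segment that passes through p1 and p2,
// shaped by the neighboring points p0 and p3. Sampling t in [0, 1) for each
// consecutive quadruple of contour points yields a smooth closed curve.
#include <vector>

struct Pt { double x, y; };

Pt catmullRom(const Pt& p0, const Pt& p1, const Pt& p2, const Pt& p3, double t) {
    double t2 = t * t, t3 = t2 * t;
    auto blend = [&](double a, double b, double c, double d) {
        return 0.5 * (2.0 * b + (-a + c) * t +
                      (2.0 * a - 5.0 * b + 4.0 * c - d) * t2 +
                      (-a + 3.0 * b - 3.0 * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
}

// Resample a closed, simplified contour with `steps` points per segment.
std::vector<Pt> smoothContour(const std::vector<Pt>& c, int steps) {
    std::vector<Pt> out;
    size_t n = c.size();
    for (size_t i = 0; i < n; ++i)
        for (int s = 0; s < steps; ++s)
            out.push_back(catmullRom(c[(i + n - 1) % n], c[i],
                                     c[(i + 1) % n], c[(i + 2) % n],
                                     double(s) / steps));
    return out;
}
```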


(Source: Miguel Sanchez)
(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

Aerial Robot Demonstration Wows at TEDTalk

In a TEDTalk on Thursday, engineer Vijay Kumar presented an exciting innovation in the field of unmanned aerial vehicle (UAV) technology. He detailed how a team of UPenn engineers retrofitted compact aerial robots with embedded technologies that enable them to swarm and operate as a team to take on a variety of remarkable tasks: a swarm can complete construction projects, orchestrate a nine-instrument piece of music, and much more.

The 0.1-lb aerial robot Kumar presented on stage—built by UPenn students Alex Kushleyev and Daniel Mellinger—consumed approximately 15 W, he said. The 8-inch design—which can operate outdoors or indoors without GPS—featured onboard accelerometers, gyros, and processors.

“An on-board processor essentially looks at what motions need to be executed, and combines these motions, and figures out what commands to send to the motors 600 times a second,” Kumar said.

Watch the video for the entire talk and demonstration. Nine aerial robots play six instruments starting at the 14:49 mark.

Q&A: Hanno Sander on Robotics

I met Hanno Sander in 2008 at the Embedded Systems Conference in San Jose, CA. At the time, Hanno was at the Parallax booth demonstrating a Propeller-based, two-wheeled balancing robot. Several months later, we published an article he wrote about the project in the March 2009 issue. Today, Hanno runs HannoWare and works with school systems to improve youth education by bringing technological innovation into classrooms.

Hanno Sander at Work

The March issue of Circuit Cellar, which will hit newsstands soon, features an in-depth interview with Hanno. It’s an inspirational story for experienced and novice roboticists alike.

Hanno Sander's Turing machine debugged with ViewPort

Here’s an excerpt from the interview:

HannoWare is my attempt to share my hobbies with others while keeping my kids fed and wife happy. It started with me simply selling software online but is now a business developing and selling software, hardware, and courseware directly and through distributors. I get a kick out of collaborating with top engineers on our projects and love hearing from customers about their success.

Our first product was the ViewPort development environment for the Parallax Propeller, which features traditional tools, such as line-by-line stepping and breakpoints, as well as real-time graphs of variables and pin I/O states to help developers debug their firmware. ViewPort has been used for applications ranging from creating a hobby Turing machine to calibrating a resolver for a 6-MW motor. 12Blocks is a visual programming language for hobby microcontrollers.

The drag-n-drop style of programming with customizable blocks makes it ideal for novice programmers. Like ViewPort, 12Blocks uses rich graphics to help programmers understand what’s going on inside the processor.

The ability to view and edit the underlying source code simplifies the transition to text languages like BASIC and C when appropriate. TBot is the result of an Internet-only collaboration with Chad George, a very talented roboticist. Our goal for the robot was to excel at typical robot challenges in its stock configuration while also allowing users to customize the platform to their needs. A full set of sensors and actuators accomplishes the former, while the metal frame, expansion ports, and software libraries satisfy the latter.

Click here to read the entire interview.


Robot Nav with Acoustic Delay Triangulation

Building a robot is a rite of passage for electronics engineers, which is why this magazine has published dozens of robotics-related articles over the years.

In the March issue, we present a particularly informative article on robot navigation. Larry Foltzer tackles robot positioning with acoustic delay triangulation. It’s more of a theoretical piece than a project article, but we’re confident you’ll find it intriguing and useful.

Here’s an excerpt from Foltzer’s article:

“I decided to explore what it takes, algorithmically speaking, to make a robot that is capable of discovering its position on a playing field and figuring out how to maneuver to another position within the defined field of play. Later on, I will build a minimalist platform to test the algorithms’ performance.

In the interest of hardware simplicity, my goal is to use as few sensors as possible. I will use ultrasonic sensors to determine the range to ultrasonic beacons located at the corners of the playing field, and wheel-rotation sensors to measure the distance traversed if wheel-rotation rate multiplied by time proves to be unreliable.

From a software point of view, the machine must be able to determine robot position on a defined playing field, determine robot position relative to the target’s position, determine robot orientation or heading, calculate robot course change to approach target position, and periodically update current position and distance to the target. Because of my familiarity with Microchip Technology’s 8-bit microcontrollers and instruction sets, the PIC16F627A is my choice for the microcontrollers (mostly because I have them in my inventory).

To date, the four goals listed, in terms of algorithm development and code, are complete and are the main subjects of this article. Going forward, the focus must shift to the hardware side, including software integration, to test beyond pure simulation.

SENSOR TECHNOLOGY & THE PLAYING FIELD
A brief survey of ultrasonic ranging sensors indicates that most commercially available units have a range capability of 20’ or less. This is for a sensor type that detects the echo of its own emission. However, in this case, the robot’s sensor will not have to detect its own echoes, but will instead receive the response to its query from an addressable beacon that acts like an active mirror. For navigation purposes, these mirrors are located at three of the four corners of the playing field. By using active mirrors or beacons, received signal strength will be significantly greater than in the usual echo ranging situation. Further, the use of the active mirror approach to ranging should enable expansion of the effective width of the sensor’s beam to increase the sensor’s effective field of view, reducing cost and complexity.

Taking the former into account, I decided the playing field would be 16’ on a side, subdivided into 3” squares to form an (S × S) = (64 × 64) = (2⁶ × 2⁶) unit grid. I selected this size to simplify the binary arithmetic used in the calculations. For the purpose of illustration here, the target is considered to be at the center of the playing field, but it could just as well be anywhere within the defined boundaries.

Figure 1: Square playing field (Source: Larry Foltzer, CC260)

ECHOES TO POSITION VECTORS
Referring to Figure 1, the corners of the square playing field are labeled in clockwise order from A to D. Ultrasonic sonar transceiver beacons/active mirrors are placed at three of the corners of the playing field, at the corners marked A, B, and D.”
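The position math itself comes later in Foltzer’s article, but the geometry the excerpt sets up already pins down the core calculation: with beacons at three corners of a square of side S, subtracting the range equations pairwise cancels the squared terms and yields the coordinates in closed form. Here is a C++ sketch under the assumption that A sits at the origin, B at (S, 0), and D at (0, S); the article defines its own axes, so the corner assignment here is illustrative:

```cpp
// Trilateration on a square field with beacons at A = (0,0), B = (S,0),
// and D = (0,S). From rA^2 = x^2 + y^2, rB^2 = (x-S)^2 + y^2, and
// rD^2 = x^2 + (y-S)^2, pairwise subtraction gives x and y directly.
#include <cstdio>

struct Pos { double x, y; };

Pos locate(double rA, double rB, double rD, double S) {
    return { (rA * rA - rB * rB + S * S) / (2.0 * S),    // x from the A/B pair
             (rA * rA - rD * rD + S * S) / (2.0 * S) };  // y from the A/D pair
}

int main() {
    // 16-ft field; a robot at the center measures sqrt(128) ft to each beacon
    // and should report (8, 8).
    double r = 11.3137;
    Pos p = locate(r, r, r, 16.0);
    std::printf("x = %.2f ft, y = %.2f ft\n", p.x, p.y);
}
```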

The issue in which this article appears will be available here in the coming days.