DIY Solar-Powered, Gas-Detecting Mobile Robot

German engineer Jens Altenburg’s solar-powered hidden observing vehicle system (SOPHOCLES) is an innovative gas-detecting mobile robot. When the Texas Instruments MSP430-based mobile robot detects noxious gas, it transmits a notification alert to a PC, Altenburg explains in his article, “SOPHOCLES: A Solar-Powered MSP430 Robot.” The MCU controls an on-board CMOS camera and can wirelessly transmit images to the “Robot Control Center” user interface.

Take a look at the complete SOPHOCLES design. The CMOS camera is located on top of the robot. The radio modem is hidden behind the camera, so only the antenna is visible. A flexible cable connects the camera to the MSP430 microcontroller.

Altenburg writes:

The MSP430 microcontroller controls SOPHOCLES. Why did I need an MSP430? There are lots of other micros, some of which have more power than the MSP430, but the word “power” shows you the right way. SOPHOCLES is the first robot (with the exception of space robots like Sojourner and Lunokhod) that I know of that’s powered by a single lithium battery and a solar cell for long missions.

The SOPHOCLES includes a transceiver, sensors, power supply, motor drivers, and an MSP430. Some block functions (i.e., the motor driver or radio modems) are represented by software modules.

How is this possible? The magic mantra is, “Save power, save power, save power.” In this case, the most important feature of the MSP430 is its low power consumption. It needs less than 1 mA in Operating mode and even less in Sleep mode because the main function of the robot is sleeping (my main function, too). From time to time the robot wakes up, checks the sensor, takes pictures of its surroundings, and then falls back to sleep. Nice job, not only for robots, I think.

The power for the active time comes from the solar cell. High-efficiency cells provide electric energy for a minimum of approximately two minutes of active time per hour. Good lighting conditions (e.g., direct sunlight or a light beam from a lamp) activate the robot permanently. The robot needs only about 25 mA for actions such as driving its wheels, communicating via radio, or taking pictures with its built-in camera. Isn’t that impossible? No! …

The robot has two power sources. One source is a 3-V lithium battery with a 600-mAh capacity. The battery supplies the CPU in Sleep mode, during which all other loads are turned off. The other source of power comes from a solar cell. The solar cell charges a special 2.2-F capacitor. A step-up converter changes the unregulated input voltage into 5-V main power. The LTC3401 changes the voltage with an efficiency of about 96% …

Because of the changing light conditions, a step-up voltage converter is needed for generating stabilized VCC voltage. The LTC3401 is a high-efficiency converter that starts up from an input voltage as low as 1 V.

If the input voltage increases to about 3.5 V (at the capacitor), the robot will wake up, changing into Standby mode. Now the robot can work.

The approximate lifetime with a fully charged capacitor depends on its tasks. With maximum activity, the charge is used up after one or two minutes and then the robot goes into Sleep mode. Under poor conditions (e.g., low light for a long time), the robot has an Emergency mode, during which the robot charges the capacitor from its lithium cell. Therefore, the robot has a chance to leave the bad area or contact the PC…

The control software runs on a normal PC, and all you need is a small radio box to get the signals from the robot.

The Robot Control Center serves as an interface to control the robot. Its main feature is to display the transmitted pictures and measurement values of the sensors.

Various buttons and throttles give you full control of the robot when power is available or sunlight hits the solar cells. In addition, it’s easy to make short slide shows from the pictures captured by the robot. Each session can be saved on a disk and played in the Robot Control Center…
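
Altenburg’s numbers are easy to sanity-check. Here is a quick energy-balance sketch (my own arithmetic, not from the article; it assumes the 25-mA figure applies to the 5-V rail and ignores capacitor self-discharge) showing why a 2.2-F capacitor charged to the 3.5-V wake-up threshold supports roughly the one to two minutes of activity quoted above:

```c
/* Back-of-the-envelope check of the figures quoted above (my own
 * calculation, not Altenburg's): usable energy in the 2.2-F capacitor
 * between the 3.5-V wake-up threshold and the LTC3401's 1-V minimum
 * input, and the resulting run time at a ~25-mA load on the 5-V rail.
 */
#include <stdio.h>

int main(void)
{
    const double C      = 2.2;                 /* capacitor, farads        */
    const double v_full = 3.5;                 /* wake-up threshold, volts */
    const double v_min  = 1.0;                 /* converter dropout, volts */
    const double p_load = 5.0 * 0.025 / 0.96;  /* 25 mA at 5 V, 96% eff.   */

    double energy  = 0.5 * C * (v_full * v_full - v_min * v_min); /* joules  */
    double runtime = energy / p_load;                             /* seconds */

    printf("Usable energy: %.1f J\n", energy);   /* about 12.4 J */
    printf("Active time:   %.0f s\n", runtime);  /* roughly 95 s */
    return 0;
}
```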

The entire article appears in Circuit Cellar 147 (2002). Type “solarrobot” to access the password-protected article.

Q&A: Guido Ottaviani (Roboticist, Author)

Guido Ottaviani designs and creates innovative microcontroller-based robot systems in the city from which remarkable innovations in science and art have originated for centuries.

Guido Ottaviani

By day, the Rome-based designer is a technical manager for a large Italian editorial group. In his spare time he designs robots and interacts with various other “electronics addicts.” In a candid interview published in Circuit Cellar 265 (August 2012), Guido described his fascination with robotics, his preferred microcontrollers, and some of his favorite design projects. Below is an abridged version of the interview.

NAN PRICE: What was the first MCU you worked with? Where were you at the time? Tell us about the project and what you learned.

GUIDO OTTAVIANI: The very first one was not technically an MCU; that was too early. It was in the mid-1980s. I worked on an 8085 CPU-based board with a lot of peripherals, clocked at 470 kHz (less than half a megahertz!), used for a radio set control panel. I was an analog circuit designer in a big electronics company, and I had started studying digital electronics on my own with the Bugbook series of self-instruction books, which were very expensive at that time. When the company needed an assembly programmer to work on this board, I said, “Don’t worry, I know the 8085 CPU very well.” Of course this was not true, but they never complained, because that job was done well and within the scheduled time.

I learned a lot about how to optimize CPU cycles on a slow processor. The program had very little time to switch off the receiver to avoid destroying it before the powerful transmitter started.

Flow charts on paper, a Motorola development system with the program saved on an 8” floppy disk, a very primitive character-based editor, the program burned on an external EPROM and erased with a UV lamp. That was the environment! When, 20 years later, I started again with embedded programming for my hobby, using Microchip Technology’s MPLAB IDE (maybe still version 6.xx) and a Microchip Technology PIC16F84, it looked like paradise to me, even if I had to relearn almost everything.

But, what I’ve learned about code optimization—both for speed and size—is still useful even when I program the many resources on the dsPIC33F series…

NAN: You worked in the field of analog and digital development for several years. Tell us a bit about your background and experiences.

GUIDO: Let me talk about my first day of work, exactly 31 years ago.

Being a radio amateur and electronics fan, I often went to the surplus stores to find some useful components and devices, or just to touch the wonderful receivers or instruments: Bird wattmeters, Collins or Racal receivers, BC 312, BC 603, or BC 1000 military receivers, and everything else on the shelves.

The first day of work in the laboratory they told me, “Start learning that instrument.” It was a Hewlett-Packard spectrum analyzer (maybe an HP85-something) that cost more than 10 times my annual gross salary at that time. I still remember the excitement of being able to touch it, for that day and the following days. Working in a company full of these kinds of instruments (the building even had a repair and calibration laboratory with HP employees), with more than a thousand engineers who knew everything from DC to microwaves to learn from, was like living in Eden. The salary was a secondary issue (at that time).

I worked on audio and RF circuits in the HF to UHF bands: active antennas, radiogoniometers, first tests on frequency hopping and spread spectrum, and a first sample of a Motorola 68000-based GPS as big as a microwave oven.

Each instrument had an HPIB (or GPIB or IEEE488) interface to the computer. So I started approaching this new (for me) world of programming an HP9845 computer (with a cost equivalent to 5 years of my salary then) to build up automatic test sets for the circuits I developed. I was very satisfied when I was able to obtain 10-Hz frequency hopping with a Takeda-Riken frequency synthesizer. It was not easy with such a slow computer, BASIC language, and a bus with long latencies. I had to buffer each string of commands in an array and use some special pre-caching features of the HPIB interface I found in the manual.

Every circuit, even if it was analog, was interfaced with digital ports. The boards were full of SN74xx (or SN54xx) ICs, just to perform simple functions such as switching, multiplexing, or similar. Here again, my lack of knowledge was filled by the “long history experience” with the Bugbook series.

Well, audio, RF, programming, communications, interfacing, digital circuits. What was I still missing to have a good background for the upcoming hobby of robotics? Ah! Embedded programming. But I’ve already mentioned this experience.

After all these design jobs, because my knowledge started spreading on many different projects, it was natural to start working as a system engineer, taking care of all the aspects of a complex system, including project management.

NAN: You have a long-time interest in robotics and autonomous robot design. When did you first become interested in robots and why?

GUIDO: I’ve loved the very simple robots in the toy store windows since I was young, the same ones I have on my website (Pino and Nino). But they were too simple. Just making something a little bit more sophisticated required too much electronics at that time.

After a big gap in my electronics activity, I discovered a newly published robotic magazine, with an electronic parts supplement. This enabled me to build a programmable robot based on a Microchip PIC16F84. A new adventure started. I felt much younger. Suddenly, all the electronics-specialized neurons inside my brain, after being asleep for many years, woke up and started running again. Thanks to the Internet (not yet available when I left professional electronics design), I discovered a lot of new things: MCUs, free IDEs running even on a simple computer, free compilers, very cheap programming devices, and lots of documentation freely available. I threw away the last Texas Instruments databook I still had on my bookshelf and started studying again. There were a lot of new things to know, but, with a good background, it was a pleasant (if frantic) study. I’ve also bought some books, but they became old before I finished reading them.

Within a few months, jumping among all the hardware and software versions Microchip released at an astonishing rate, I found Johann Borenstein et al.’s book Where Am I?: Systems and Methods for Mobile Robot Positioning (University of Michigan, 1996). This report and Borenstein’s website taught me a lot about autonomous navigation techniques. David P. Anderson’s “My Robots” webpage (www.geology.smu.edu/~dpa-www/myrobots.html) inspired all my robots, completed or forthcoming.

I’ve never wanted to make a remote-controlled car, so my robots must navigate autonomously in an unknown environment, reacting to the external stimuli. …

NAN: Robotics is a focal theme in many of the articles you have contributed to Circuit Cellar. One of your article series, “Robot Navigation and Control” (224–225, 2009), was about a navigation control subsystem you built for an autonomous differential steering explorer robot. The first part focused on the robotic platform that drives motors and controls an H-bridge. You then described the software development phase of the project. Is the project still in use? Have you made any updates to it?

The “dsNavCon” system features a Microchip Technology dsPIC30F4012 motor controller and a general-purpose dsPIC30F3013. (Source: G. Ottaviani, CC224)

GUIDO: After I wrote that article series, the project kept evolving until the beginning of this year. There is a new switched power supply and a new audio sensor, the latest dsPIC33-based version of the dsNav board became commercially available online, there were some mechanical changes, improvements to the telemetry console, a lot of modifications in the firmware, and the UMBmark calibration was performed successfully.

The goal has been reached. That robot was a lab for experimenting with sensors, solutions, and technologies. Now I’m ready for a further step: outdoors.

NAN: You wrote another robotics-related article in 2010 titled, “A Sensor System for Robotics Applications” (Circuit Cellar 236). Here you describe adding senses—sight, hearing, and touch—to a robotics design. Tell us about the design, which is built around an Arduino Diecimila board. How does the board factor into the design?

An Arduino-based robot platform (Source: G. Ottaviani, CC236)

GUIDO: That was the first time I used an Arduino. I’ve always used PICs, and I wanted to test this well-known board. In that case, I needed to interface many I2C and analog sensors, as well as an I2C I/O expander. I didn’t want to waste time configuring peripherals. All the sensors had 5-V I/O. The computing time constraints were not so strict. The Arduino fit perfectly into all of these prerequisites. The learning curve was very fast. There was already a library for every device I used. There was no need for voltage level translation between 3.3 and 5 V. Everything was easy, fast, and cheap. Why not use it for these kinds of projects?

NAN: You designed an audio sensor for a Rino robotic platform (“Sound Tone Detection with a PSoC Part 1 and Part 2,” Circuit Cellar 256–257, 2011). Why did you design the system? Did you design it for use at work or home? Give us some examples of how you’ve used the sensor.

GUIDO: I already had a sound board based on classic op-amp ICs. I discovered the PSoC devices in a robotic meeting. At that moment, there was a special offer for the PSoC1 programmer and, incidentally, it was close to my birthday. What a perfect gift from my relatives!

This was another excuse to study a completely different programmable device and add something new to my background. The learning curve was not as easy as the Arduino one. It is really different because… it does a very different job. The new PSoC-based audio board was smaller and simpler, with many more features than the previous one. The original project was designed to detect a fixed 4-kHz tone, but now it is easy to change the central frequency, the band, and the behavior of the board. This confirms once more, if confirmation were needed, that this kind of professional design is nowadays also available to hobbyists. …

NAN: What do you consider to be the “next big thing” in the embedded design industry? Is there a particular technology that you’ve used or seen that will change the way engineers design in the coming months and years?

GUIDO: As often happens, the “big thing” is one of the smallest ones. Many manufacturers are working on micro-, nano-, and picowatt devices. I’ve done a little, though not very extreme, experimenting with my Pendulum project. Using the sleep features of a simple PIC10F22P with some care, I’ve kept the pendulum bob oscillating for a year on a couple of AAA batteries, and it is still oscillating behind me right now.

Because of this kind of MCU, we can start seriously thinking about energy harvesting. We can get energy from light, heat, any kind of RF, movement or whatever, to have a self-powered, autonomous device. Thanks to smartphones, PDAs, tablets, and other portable devices, the MEMS sensors have become smaller and less expensive.

In my opinion, all this technology—together with supercapacitors, solid-state batteries or similar—will spread many small devices everywhere to monitor everything.

The entire interview is published in Circuit Cellar 265 (August 2012).

Seven-Controller EtherCAT Orchestra

When I first saw the Intel Industrial Control in Concert demonstration at Design West 2012 in San Jose, CA, I immediately thought of Kurt Vonnegut’s 1952 novel Player Piano. The connection, of course, is that the player piano in the novel and Intel’s Atom-based robotic orchestra both play preprogrammed music without human involvement. But the similarities end there. Vonnegut used the self-playing piano as a metaphor for a mechanized society in which wealthy industrialists replaced human workers with automated machines. In contrast, Intel’s innovative system demonstrated engineering excellence and created a buzz in the already positive atmosphere at the conference.

In “EtherCAT Orchestra” (Circuit Cellar 264, July 2012), Richard Wotiz carefully details the awe-inspiring music machine that’s built around seven embedded systems, each of which is based on Intel’s Atom D525 dual-core microprocessor. He provides information about the system you can’t find on YouTube or hobby tech blogs. Here is the article in its entirety.

EtherCAT Orchestra

I have long been interested in automatically controlled musical instruments. When I was little, I remember being fascinated whenever I ran across a coin-operated electromechanical calliope or a carnival hurdy-gurdy. I could spend all day watching the many levers, wheels, shafts, and other moving parts as it played its tunes over and over. Unfortunately, the mechanical complexity and expertise needed to maintain these machines makes them increasingly rare. But, in our modern world of pocket-sized MP3 players, there’s still nothing like seeing music created in front of you.

I recently attended the Design West conference (formerly the Embedded Systems Conference) in San Jose, CA, and ran across an amazing contraption that reminded me of old carnival music machines. The system was created for Intel as a demonstration of its Atom processor family, and was quite successful at capturing the attention of anyone walking by Intel’s booth (see Photo 1).

Photo 1—This is Intel’s computer-controlled orchestra. It may not look like any musical instrument you’ve ever seen, but it’s quite a thing to watch. The inspiration came from Animusic’s “Pipe Dream,” which appears on the video screen at the top. (Source: R. Wotiz)

The concept is based on Animusic’s music video “Pipe Dream,” which is a captivating computer graphics representation of a futuristic orchestra. The instruments in the video play when virtual balls strike against them. Each ball is launched at a precise time so it will land on an instrument the moment each note is played.

The demonstration, officially known as Intel’s Industrial Control in Concert, uses high-speed pneumatic valves to fire practice paintballs at plastic targets of various shapes and sizes. The balls are made of 0.68”-diameter soft rubber. They put on quite a show bouncing around while a song played. Photo 2 shows one of the pneumatic firing arrays.

Photo 2—This is one of several sets of pneumatic valves. Air is supplied by the many tees below the valves and is sent to the ball-firing nozzles near the top of the photo. The corrugated hoses at the top supply balls to the nozzles. (Source: R. Wotiz)

The valves are the gray boxes lined up along the center. When each one opens, a burst of air is sent up one of the clear hoses to a nozzle to fire a ball. The corrugated black hoses at the top supply the balls to the nozzles. They’re fed by paintball hoppers that are refilled after each performance. Each nozzle fires at a particular target (see Photo 3).

Photo 3—These are the targets at which the nozzles from Photo 2 are aimed. If you look closely, you can see a ball just after it bounced off the illuminated target at the top right. (Source: R. Wotiz)

Each target has an array of LEDs that shows when it’s activated and a piezoelectric sensor that detects a ball’s impact. Unfortunately, slight variations in the pneumatics and the balls themselves mean that not every ball makes it to its intended target. To avoid sounding choppy and incomplete, the musical notes are triggered by a fixed timing sequence rather than the ball impact sensors. Think of it as a form of mechanical lip syncing. There’s a noticeable pop when a ball is fired, so the system sounds something like a cross between a pinball machine and a popcorn popper. You may expect that to detract from the music, but I felt it added to the novelty of the experience.

The control system consists of seven separate embedded systems, all based on Intel’s Atom D525 dual-core microprocessor, on an Ethernet network (see Figure 1).

Figure 1—Each block across the top is an embedded system providing some aspect of the user interface. The real-time interface is handled by the modules at the bottom. They’re controlled by the EtherCAT master at the center. (Source: R. Wotiz)

One of the systems is responsible for the real-time control of the mechanism. It communicates over an Ethernet control automation technology (EtherCAT) bus to several slave units, which provide the I/O interface to the sensors and actuators.

EtherCAT

EtherCAT is a fieldbus providing high-speed, real-time control over a conventional 100 Mb/s Ethernet hardware infrastructure. It’s a relatively recent technology, originally developed by Beckhoff Automation GmbH and currently managed by the EtherCAT Technology Group (ETG), which was formed in 2003. You need to be an ETG member to access most of its specification documents, but general information is publicly available. According to information on the ETG website, membership is currently free to qualified companies. EtherCAT was also made part of the international standard IEC 61158, “Industrial Communication Networks—Fieldbus Specifications,” in 2007.

EtherCAT uses standard Ethernet data frames, but instead of each device decoding and processing an individual frame, the devices are arranged in a daisy chain, where a single frame is circulated through all devices in sequence. Any device with an Ethernet port can function as the master, which initiates the frame transmission. The slaves need specialized EtherCAT ports. A two-port slave device receives and starts processing a frame while simultaneously sending it out to the next device (see Figure 2).

Figure 2—Each EtherCAT slave processes incoming data as it sends it out the downstream port. (Source: R. Wotiz)

The last slave in the chain detects that there isn’t a downstream device and sends its frame back to the previous device, where it eventually returns to the originating master. This forms a logical ring by taking advantage of both the outgoing and return paths in the full-duplex network. The last slave can also be directly connected to a second Ethernet port on the master, if one is available, creating a physical ring. This creates redundancy in case there is a break in the network. A slave with three or more ports can be used to form more complex topologies than a simple daisy chain. However, this wouldn’t speed up network operation, since a frame still has to travel through each slave, one at a time, in both directions.

The EtherCAT frame, known as a telegram, can be transmitted in one of two different ways depending on the network configuration. When all devices are on the same subnet, the data is sent as the entire payload of an Ethernet frame, using an EtherType value of 0x88A4 (see Figure 3a).

Figure 3a—An EtherCAT frame uses the standard Ethernet framing format with very little overhead. The payload size shown includes both the EtherCAT telegram and any padding bytes needed to bring the total frame size up to 64 bytes, the minimum size for an Ethernet frame. b—The payload can be encapsulated inside a UDP frame if it needs to pass through a router or switch. (Source: R. Wotiz)

If the telegrams must pass through a router or switch onto a different physical network, they may be encapsulated within a UDP datagram using a destination port number of 0x88A4 (see Figure 3b), though this will affect network performance. Slaves do not have their own Ethernet or IP addresses, so all telegrams will be processed by all slaves on a subnet regardless of which transmission method was used. Each telegram contains one or more EtherCAT datagrams (see Figure 4).

Each datagram includes a block of data and a command indicating what to do with the data. The commands fall into three categories. Write commands copy the data into a slave’s memory, while read commands copy slave data into the datagram as it passes through. Read/write commands do both operations in sequence, first copying data from memory into the outgoing datagram, then moving data that was originally in the datagram into memory. Depending on the addressing mode, the read and write operations of a read/write command can both access the same or different devices. This enables fast propagation of data between slaves.

Each datagram contains addressing information that specifies which slave device should be accessed and the memory address offset within the slave to be read or written. A 16-bit value for each enables up to 65,535 slaves to be addressed, with a 65,536-byte address space for each one. The command code specifies which of four different addressing modes to use.

Position addressing specifies a slave by its physical location on the network. A slave is selected only if the address value is zero, and it increments the address as it passes the datagram on to the next device. This enables the master to select a device by setting the address value to the negative of the number of devices in the network preceding the desired device. This addressing mode is useful during system startup before the slaves are configured with unique addresses. Node addressing specifies a slave by its configured address, which the master will set during the startup process. This mode enables direct access to a particular device’s memory or control registers. Logical addressing takes advantage of one or more fieldbus memory management units (FMMUs) on a slave device. Once configured, an FMMU will translate a logical address to any desired physical memory address. This may include the ability to specify individual bits in a data byte, which provides an efficient way to control specific I/O ports or register bits without having to send any more data than needed. Finally, broadcast addressing selects all slaves on the network. For broadcast reads, slaves send out the logical OR of their data with the data from the incoming datagram.
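
To make the datagram structure concrete, here is a minimal C sketch of the header as I read the public ETG material; the exact field widths and flag positions are my interpretation, so verify them against the specification before relying on them.

```c
/* Rough sketch of an EtherCAT datagram header (illustrative only; check the
 * ETG specification for the authoritative layout).
 */
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  cmd;        /* read, write, or read/write in one of the four
                            addressing modes (position, node, logical,
                            broadcast)                                      */
    uint8_t  idx;        /* index the master uses to match replies          */
    uint32_t address;    /* 16-bit position/node address + 16-bit offset,
                            or a 32-bit logical address                     */
    uint16_t len_flags;  /* bits 0-10: data length; top bits flag a
                            circulating frame and "more datagrams follow"   */
    uint16_t irq;        /* event-request bits a slave can set              */
    /* followed by the process data, then a 16-bit working counter (WKC)    */
} ethercat_datagram_hdr_t;
#pragma pack(pop)
```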

Each time a slave successfully reads or writes data contained in a datagram, it increments the working counter value (see Figure 4).

Figure 4—An EtherCAT telegram consists of a header and one or more datagrams. Each datagram can be addressed to one slave, a particular block of data within a slave, or multiple slaves. A slave can modify the datagram’s Address, C, IRQ, Process data, and WKC fields as it passes the data on to the next device. (Source: R. Wotiz)

This enables the master to confirm that all the slaves it was expecting to communicate with actually handled the data sent to them. If a slave is disconnected, or its configuration changes so it is no longer being addressed as expected, then it will no longer increment the counter. This alerts the master to rescan the network to confirm the presence of all devices and reconfigure them, if necessary. If a slave wants to alert the master of a high-priority event, it can set one or more bits in the IRQ field to request the master to take some predetermined action.

TIMING

Frames are processed in each slave by a specialized EtherCAT slave controller (ESC), which extracts incoming data and inserts outgoing data into the frame as it passes through. The ESC operates at a high speed, resulting in a typical data delay from the incoming to the outgoing network port of less than 1 μs. The operating speed is often dominated by how fast the master can process the data, rather than the speed of the network itself. For a system that runs a process feedback loop, the master has to receive data from the previous cycle and process it before sending out data for the next cycle. The minimum cycle time T_CYC is given by:

T_CYC = T_MP + T_FR + N × T_DLY + 2 × T_CBL + T_J

where T_MP is the master’s processing time, T_FR is the frame transmission time on the network (80 ns per data byte plus 5 μs of frame overhead), N is the total number of slaves, T_DLY is the sum of the forward and return delay times through each slave (typically 600 ns), T_CBL is the cable propagation delay (5 ns per meter for Category 5 Ethernet cable), and T_J is the network jitter (determined by the master).[1]

A slave’s internal processing time may overlap some or all of these time windows, depending on how its I/O is synchronized. The network may be slowed if the slave needs more time than the total cycle time computed above. A maximum-length telegram containing 1,486 bytes of process data can be communicated to a network of 1,000 slaves in less than 1 ms, not including processing time.
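
Plugging numbers into the cycle-time formula makes these claims easy to check. The sketch below is my own arithmetic: the 1,000-slave case assumes roughly 500 m of total cable and sets the master’s processing time and jitter to zero (matching the “not including processing time” caveat), while the second case is a small, orchestra-sized network with figures I chose purely for illustration.

```c
/* Evaluate T_CYC = T_MP + T_FR + N x T_DLY + 2 x T_CBL + T_J for two
 * hypothetical networks. Constants follow the text: 80 ns per data byte
 * plus 5 us of frame overhead, 600 ns per slave, 5 ns per meter of cable.
 */
#include <stdio.h>

static double cycle_time_us(double t_mp_us, int data_bytes, int n_slaves,
                            double cable_m, double t_jitter_us)
{
    double t_fr  = data_bytes * 0.080 + 5.0;  /* frame transmission time    */
    double t_dly = n_slaves * 0.600;          /* forward + return per slave */
    double t_cbl = cable_m * 0.005;           /* cable propagation delay    */
    return t_mp_us + t_fr + t_dly + 2.0 * t_cbl + t_jitter_us;
}

int main(void)
{
    /* Maximum-length telegram (1,486 bytes) on a 1,000-slave network. */
    printf("1,000 slaves: %.0f us\n", cycle_time_us(0.0, 1486, 1000, 500.0, 0.0));

    /* A small network closer to the orchestra: 7 slaves, a short telegram,
       20 m of cable, and a generous 500 us allowed for the master. */
    printf("7 slaves:     %.0f us\n", cycle_time_us(500.0, 128, 7, 20.0, 0.0));
    return 0;
}
```

The first case comes out around 730 μs, consistent with the less-than-1-ms figure above; the second fits comfortably inside the orchestra’s 1-ms cycle described later.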

Synchronization is an important aspect of any fieldbus. EtherCAT uses a distributed clock (DC) with a resolution of 1 ns located in the ESC on each slave. The master can configure the slaves to take a snapshot of their individual DC values when a particular frame is sent. Each slave captures the value when the frame is received by the ESC in both the outbound and returning directions. The master then reads these values and computes the propagation delays between each device. It also computes the clock offsets between the slaves and its reference clock, then uses these values to update each slave’s DC to match the reference. The process can be repeated at regular intervals to compensate for clock drift. This results in an absolute clock error of less than 1 μs between devices.

MUSICAL NETWORKS

The orchestra’s EtherCAT network is built around a set of modules from National Instruments. The virtual conductor is an application running under LabVIEW Real-Time on a CompactRIO controller, which functions as the master device. It communicates with four slaves containing a mix of digital and analog I/O and three slaves consisting of servo motor drives. Both the master and the I/O slaves contain an FPGA to implement any custom local processing that’s necessary to keep the data flowing. The system runs at a cycle time of 1 ms, which provides enough timing resolution to keep the balls flying properly.

I hope you’ve enjoyed learning about EtherCAT—as well as the fascinating musical device it’s used in—as much as I have.

Author’s note: I would like to thank Marc Christenson of SISU Devices, creator of this amazing device, for his help in providing information on the design.

REFERENCE

[1] National Instruments Corp., “Benchmarks for the NI 9144 EtherCAT Slave Chassis,” http://zone.ni.com/devzone/cda/tut/p/id/10596.

RESOURCES

Animusic, LLC, www.animusic.com.

Beckhoff Automation GmbH, “ET1100 EtherCAT Slave Controller Hardware Data Sheet, Version 1.8”, 2010, www.beckhoff.com/english/download/ethercat_development_products.htm.

EtherCAT Technology Group, “The Ethernet Fieldbus”, 2009, www.ethercat.org/pdf/english/ETG_Brochure_EN.pdf.

Intel, Atom microprocessor, www.intel.com/content/www/us/en/processors/atom/atom-processor.html.

SOURCES

Atom D525 dual-core microprocessor

Intel Corp.

www.intel.com

LabVIEW Real-Time modules, CompactRIO controller, and EtherCAT devices

National Instruments Corp.

www.ni.com

Circuit Cellar 264 is now on newsstands, and it’s available at the CC-Webshop.

“RoboCup” Soccer: Robot Designs Compete in Soccer Matches

Roboticists and soccer fans from around the world converged on Eindhoven, The Netherlands, from April 25–29 for the RoboCup Dutch Open. The event was an interesting combination of sports and electronics engineering.

Soccer action at the RoboCup Dutch Open

Since I have dozens of colleagues based in The Netherlands, I decided to see if someone would provide event coverage for our readers and members. Fortunately, TechtheFuture.com’s Tessel Rensenbrink was available and willing to cover the event. She reports:

Attending the RoboCup Dutch Open is like taking a peek into the future. Teams of fully autonomous robots compete with each other in a soccer game. The matches are as engaging as watching humans compete in sports, and the teams even display particular characteristics. The German bots suck at penalties, and the Iranian bots are a rough bunch, frequently body-checking their opponents.

The Dutch Open was held in Eindhoven, The Netherlands, from the 25th to the 29th of April. It is part of RoboCup, a worldwide educational initiative aiming to promote robotics and artificial intelligence research. The soccer tournaments serve as a test bed for developments in robotics and help raise the interest of the general public. All new discoveries and techniques are shared across the teams to support rapid development. The ultimate goal is to have a fully autonomous humanoid robot soccer team defeat the winner of the World Cup of Human Soccer in 2050.

In Eindhoven, the competition was between teams from the Middle Size Robot League. The bots are 80 cm (2.6 ft) high and 50 cm (1.6 ft) in diameter, and they move around on wheels. They have an arm with little wheels to control the ball and a kicker to shoot. Because the hardware is mostly standardized, the development teams have to make the difference with software.

Once the game starts, the developers aren’t allowed to aid or moderate the robots. Therefore, the bots are equipped with all the hardware they need to play soccer autonomously. They’re mounted with a camera and a laser scanner to locate the ball and determine the distance. A Wi-Fi network allows the robots on a team to communicate with each other and coordinate their strategy.

The game is played on a field similar to a scaled-down human soccer field. Playing time is limited to two halves of 15 minutes each. The teams consist of five players. If a robot does not function properly, it may be taken off the field for a reset while the game continues. There is a human referee whose decisions are communicated to the bots over the Wi-Fi network.

The Dutch Open finals were between home team TechUnited and MRL from Iran. The Dutch bots scored their first goal within minutes of the start of the game, to the excitement of the predominantly orange-clad audience. Shortly thereafter, a TechUnited bot went renegade and had to be taken off the field for a reset. But even a bot down, the Dutchies scored again. When the team increased its lead to 3–0, the match seemed all but won. But in the second half, MRL came back strong and had everyone on the edge of their seats by scoring two goals.

When the referee signaled the end of the game, the score was 3–2 for TechUnited. By winning the tournament, the Dutch have established themselves as a favorite for the World Cup held in Mexico in June. Maybe, just maybe, the Dutch will finally bring home a soccer World Cup trophy.

The following video shows a match between The Netherlands and Iran. The Netherlands won 2-1.

TechTheFuture.com is part of the Elektor group. 

 

Design West Update: Intel’s Computer-Controlled Orchestra

It wasn’t the Blue Man Group making music by shooting small rubber balls at pipes, xylophones, vibraphones, cymbals, and various other sound-making instruments at Design West in San Jose, CA, this week. It was Intel and its collaborator Sisu Devices.

Intel's "Industrial Control in Concert" at Design West, San Jose

The innovative Industrial Control in Concert system on display featured seven Atom processors, four operating systems, 36 paintball hoppers, 2,300 rubber balls, a video camera for motion sensing, a digital synthesizer, a multi-touch display, and more. PVC tubes connect the various instruments.

Intel's "Industrial Control in Concert" features seven Atom processors

Once running, the $160,000 system played a 2,372-note song and captivated the Design West audience. The nearby photo shows the system on the conference floor.

Click here to learn more and watch a video of the computer-controlled orchestra in action.

Build a CNC Panel Cutter Controller

Want a CNC panel cutter and controller for your lab, hackspace, or workspace? James Koehler of Canada built an NXP Semiconductors mbed-based system to control a three-axis milling machine, which he uses to cut panels for electronic equipment. You can customize one yourself.

Panel Cutter Controller (Source: James Koehler)

According to Koehler:

Modern electronic equipment often requires front panels with large cut-outs for LCDs, for meters, and, in general, openings more complicated than can be made with a drill. It is tedious to do this by hand and difficult to achieve a nice finished appearance. This controller allows it to be done simply and quickly, and to be replicated exactly.

Koehler’s design is an interesting alternative to a PC program. The self-contained controller enables him to run a milling machine either manually or automatically (following a script) without having to clutter his workspace with a PC. It’s both effective and space-saving!

The Controller Setup (Source: James Koehler)

How does it work? The design controls three stepping motors.

The Complete System (Source: James Koehler)

Inside the controller are a power supply and a PCB, which carries the NXP mbed module plus the necessary interface circuitry and a socket for an SD card.

The Controller (Source: James Koehler)

Koehler explains:

In use, a piece of material for the panel is clamped onto the milling machine table and the cutting tool is moved to a starting position using the rotary encoders. Then the controller is switched to its ‘automatic’ mode and a script on the SD card is followed to cut the panel. A very simple ‘language’ is used for the script: go to any particular (x, y) position, lift the cutting tool, lower the cutting tool, cut a rectangle of any dimension, cut a circle of any dimension, etc. More complex instruction sequences, such as those needed to cut the rectangular opening plus four mounting holes for an LCD, are just combinations, called macros, of those simple instructions; every new device (meter mounting holes, LCD mounts, etc.) will have its own macro. The complete script for a particular panel can be any combination of simple commands plus macros. The milling machine, a Taig ‘micro mill’ with stepping motors, is shown in Figure 2. In its ‘manual’ mode, the system can be used as a conventional three-axis mill controlled via the rotary encoders. The absolute position of the cutting tool is displayed in units of either inches, mm, or thousandths of an inch.
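
Koehler’s exact script syntax isn’t spelled out in this excerpt, so the command names below (GOTO, UP, DOWN, RECT, CIRCLE) are purely hypothetical. The sketch just shows one way a line-oriented panel script could be parsed and dispatched; it is not code from his documentation.

```c
/* Hypothetical line-oriented script interpreter in the spirit of the one
 * Koehler describes. Command names and arguments are invented for
 * illustration; see Koehler's documentation for the real language.
 */
#include <stdio.h>
#include <string.h>

static void move_to(double x, double y)  { printf("move to %.3f, %.3f\n", x, y); }
static void tool_up(void)                { printf("tool up\n"); }
static void tool_down(void)              { printf("tool down\n"); }
static void cut_rect(double w, double h) { printf("cut rect %.3f x %.3f\n", w, h); }
static void cut_circle(double r)         { printf("cut circle r=%.3f\n", r); }

static void run_script_line(const char *line)
{
    double a = 0.0, b = 0.0;

    if (sscanf(line, "GOTO %lf %lf", &a, &b) == 2)       move_to(a, b);
    else if (strncmp(line, "UP", 2) == 0)                tool_up();
    else if (strncmp(line, "DOWN", 4) == 0)              tool_down();
    else if (sscanf(line, "RECT %lf %lf", &a, &b) == 2)  cut_rect(a, b);
    else if (sscanf(line, "CIRCLE %lf", &a) == 1)        cut_circle(a);
    else printf("unknown command: %s\n", line);
}

int main(void)
{
    /* A tiny "panel": move to an origin, drop the tool, cut an LCD opening. */
    const char *demo[] = { "GOTO 1.000 1.000", "DOWN", "RECT 3.800 2.200", "UP" };
    int i, n = sizeof demo / sizeof demo[0];

    for (i = 0; i < n; i++)
        run_script_line(demo[i]);
    return 0;
}
```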

Click here to read Koehler’s project abstract. Click here to read his complete documentation PDF, which includes block diagrams, schematics, and more.

This project won Third Place in the 2010 NXP mbed Design Challenge and is posted as per the terms of the Challenge.

 

 

Robot Design with Microsoft Kinect, RDS 4, & Parallax’s Eddie

Microsoft announced on March 8 the availability of Robotics Developer Studio 4 (RDS 4) software for robotics applications. RDS 4 was designed to work with the Kinect for Windows SDK. To demonstrate the capabilities of RDS 4, the Microsoft robotics team built the Follow Me Robot with a Parallax Eddie robot, a laptop running Windows 7, and a Kinect.

In the following short video, Microsoft software developer Harsha Kikkeri demonstrates Follow Me Robot.

Circuit Cellar readers are already experimenting with the Kinect and developing embedded systems to work with it in interesting ways. In an upcoming article about a Kinect-based project, designer Miguel Sanchez describes an interesting Kinect-based 3-D imaging system.

Sanchez writes:

My project started as a simple enterprise that later became a bit more challenging. The idea of capturing the silhouette of an individual standing in front of the Kinect was based on isolating those points that are between two distance thresholds from the camera. As the depth image already provides the distance measurement, all the pixels of the subject will fall within a narrow range of distances, while other objects in the scene will be outside of this small range. But I wanted to have just the contour line of a person and not all the pixels that belong to that person’s body. OpenCV is a powerful computer vision library. I used it for my project because of its blobs function, which extracts the contours of the different isolated objects in a scene. As my image would only contain one object—the person standing in front of the camera—the blobs function would return the exact list of coordinates of the contour of the person, which was what I needed. Please note that this function is heavy image processing made easy for the user. It provides not just one, but a list of all the different objects that have been detected in the image. It can also specify whether holes inside a blob are permitted, and the minimum and maximum areas of detected blobs. But for my project, I am only interested in the biggest blob returned, which will be the one with index zero, as they are stored in decreasing order of blob area in the array returned by the blobs function.

Though it is not a fault of the blobs function, I quickly realized that I was getting more detail than I needed and that there was a bit of noise in the edges of the contour. Filtering noise out of a bitmap can be easily accomplished with a blur function, but smoothing out a contour did not sound so obvious to me.

A contour line can be simplified by removing certain points. A clever algorithm can do this by removing those points that are close enough to the overall contour line. One of these algorithms is the Douglas-Peucker recursive contour-simplification algorithm. The algorithm starts with the two endpoints and accepts one point in between whose orthogonal distance from the line connecting the first two points is larger than a given threshold. Only the point with the largest distance is selected (or none, if the threshold is not met). The process is repeated recursively, as new points are added, to create the list of accepted points (those that contribute the most to the general contour given a user-provided threshold). The larger the threshold, the rougher the resulting contour will be.

By simplifying a contour, the human silhouettes now look better and the noise is gone, but they look a bit synthetic. The last step I did was to perform a cubic-spline interpolation so the contour becomes a set of curves between the different original points of the simplified contour. It seems a bit twisted to simplify first and then add back more points with the spline interpolation, but this way it creates a more visually pleasant and curvy result, which was my goal.
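
To make the Douglas-Peucker step concrete, here is a small, self-contained C sketch of the recursion Sanchez describes. It is only an illustration written for this post; Sanchez’s own implementation relies on OpenCV’s routines, and the sample contour and threshold below are made up.

```c
/* Minimal Douglas-Peucker contour simplification: keep the endpoints, then
 * recursively keep the point farthest from the current chord whenever that
 * distance exceeds the tolerance.
 */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Perpendicular distance from p to the line through a and b. */
static double perp_dist(Point p, Point a, Point b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return hypot(p.x - a.x, p.y - a.y);
    return fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

/* Mark which points between indices lo and hi should be kept. */
static void simplify(const Point *pts, int lo, int hi, double tol, int *keep)
{
    double dmax = 0.0;
    int i, worst = -1;

    for (i = lo + 1; i < hi; i++) {
        double d = perp_dist(pts[i], pts[lo], pts[hi]);
        if (d > dmax) { dmax = d; worst = i; }
    }
    if (worst >= 0 && dmax > tol) {   /* keep the farthest point and recurse */
        keep[worst] = 1;
        simplify(pts, lo, worst, tol, keep);
        simplify(pts, worst, hi, tol, keep);
    }
}

int main(void)
{
    Point contour[] = { {0,0}, {1,0.1}, {2,-0.1}, {3,5}, {4,6}, {5,7}, {6,8.1}, {7,9} };
    int n = sizeof contour / sizeof contour[0];
    int keep[8] = { 0 };
    int i;

    keep[0] = keep[n - 1] = 1;        /* endpoints are always kept */
    simplify(contour, 0, n - 1, 1.0, keep);

    for (i = 0; i < n; i++)
        if (keep[i])
            printf("(%.1f, %.1f)\n", contour[i].x, contour[i].y);
    return 0;
}
```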

 

(Source: Miguel Sanchez)
(Source: Miguel Sanchez)

The nearby images show aspects of the process Sanchez describes in his article, where an offset between the human figure and the drawn silhouette is apparent.

The entire article is slated to appear in the June or July edition of Circuit Cellar.

Aerial Robot Demonstration Wows at TEDTalk

In a TEDTalk Thursday, engineer Vijay Kumar presented an exciting innovation in the field of unmanned aerial vehicle (UAV) technology. He detailed how a team of UPenn engineers retrofitted compact aerial robots with embedded technologies that enable them to swarm and operate as a team to take on a variety of remarkable tasks. A swarm can complete construction projects, orchestrate a nine-instrument piece of music, and much more.

The 0.1-lb aerial robot Kumar presented on stage—built by UPenn students Alex Kushleyev and Daniel Mellinger—consumed approximately 15 W, he said. The 8-inch design—which can operate outdoors or indoors without GPS—featured onboard accelerometers, gyros, and processors.

“An on-board processor essentially looks at what motions need to be executed, and combines these motions, and figures out what commands to send to the motors 600 times a second,” Kumar said.

Watch the video for the entire talk and demonstration. Nine aerial robots play six instruments at the 14:49 mark.

Issue 260: Creativity in Design

The seed for the interview with Hanno Sander on page 16 (Circuit Cellar March 2012) was sown at the 2008 Embedded Systems Conference in San Jose, CA. Hanno was at the Parallax booth demonstrating his “Dancebot,” which is a Propeller-based, two-wheeled balancing robot he wrote about in Circuit Cellar 224 (March 2009).

Balancing robot design (Source: Hanno Sander CC260)

Since Circuit Cellar is scheduled to publish Hanno’s book Advanced Control Robotics later this year, it made sense to interview him for our annual Robotics issue. As you’ll learn, Hanno is an enthusiastic designer whose intellect and passion for engineering have enabled him to travel the world and make electronics innovation his life’s work. His story should be an inspiration to everyone who reads this magazine.

After you finish the interview, check out the two robotics-related articles featured in this issue.

Square playing field (Source: Larry Foltzer CC260)

First, on page 20, Larry Foltzer tackles the topic of robot navigation with a fascinating article about position determination and acoustic delay triangulation. Next, turn to the back of the issue for Jeff Bachiochi’s article, “Wheel-Free Mobile Robots” (p. 64). Jeff takes you inside a Freescale FSLBOT mechatronics robot. Study the concepts he covers to prepare yourself for your next mobile robot design.

The rest of the issue includes a variety of articles on creative designs and essential engineering topics. I’m sure you’ll agree that the following articles are just as inspirational as they are informative.

Engineer and video game enthusiast Chris Cantrell explains how he built a Propeller-based TV gaming platform (p. 28). He describes how he hacked a joystick and got the classic Space Invaders video game up and running.

Propeller-based gaming platform (Source: Chris Cantrell CC260)

On page 36, Charles Edmondson presents his Rainbow Color Reader. The compact design can identify and announce colors for visually impaired users.

In the February issue, Alexander Pozhitkov introduced the NakedCPU project. This month he wraps up the article series by describing actual experiments and sets the foundation for future research (p. 42).

On page 50, George Novacek helps you prepare for the inevitable: microelectronic component obsolescence. You’ll find his tips invaluable as you move on to new projects.

Interested in the topic of thermal detection? Want to know how IR thermal sensing works? Turn to Richard Wotiz’s article on page 54.

On page 60, columnist George Martin provides his next engineering lesson learned from a real-world project. This month he covers the topic of working with—or without?—printer port connections.

I’d like to wrap up with a note to our staff and long-time readers. This is the 260th issue of Circuit Cellar. That’s quite an achievement! Thank you, colleagues, friends, and readers!

Circuit Cellar Issue 260 March 2012

Q&A: Hanno Sander on Robotics

I met Hanno Sander in 2008 at the Embedded Systems Conference in San Jose, CA. At the time, Hanno was at the Parallax booth demonstrating a Propeller-based, two-wheeled balancing robot. Several months later, we published an article he wrote about the project in Circuit Cellar 224 (March 2009). Today, Hanno runs HannoWare and works with school systems to improve youth education by bringing technological innovation into classrooms.

Hanno Sander at Work

The March issue of Circuit Cellar, which will hit newsstands soon, features an in-depth interview with Hanno. It’s an inspirational story for experienced and novice roboticists alike.

Hanno Sander's Turing machine debugged with ViewPort

Here’s an excerpt from the interview:

HannoWare is my attempt to share my hobbies with others while keeping my kids fed and wife happy. It started with me simply selling software online but is now a business developing and selling software, hardware, and courseware directly and through distributors. I get a kick out of collaborating with top engineers on our projects and love hearing from customers about their success.

Our first product was the ViewPort development environment for the Parallax Propeller, which features both traditional tools like line-by-line stepping and breakpoints as well as real-time graphs of variables and pin I/O states to help developers debug their firmware. ViewPort has been used for applications ranging from creating a hobby Turing machine to calibrating a resolver for a 6-MW motor. 12Blocks is a visual programming language for hobby microcontrollers.

The drag-n-drop style of programming with customizable blocks makes it ideal for novice programmers. Like ViewPort, 12Blocks uses rich graphics to help programmers understand what’s going on inside the processor.

The ability to view and edit the underlying source code simplifies the transition to text languages like BASIC and C when appropriate. TBot is the result of an Internet-only collaboration with Chad George, a very talented roboticist. Our goal for the robot was to excel at typical robot challenges in its stock configuration while also allowing users to customize the platform to their needs. A full set of sensors and actuators accomplishes the former, while the metal frame, expansion ports, and software libraries satisfy the latter.

Click here to read the entire interview.

 

Robot Nav with Acoustic Delay Triangulation

Building a robot is a rite of passage for electronics engineers. And thus this magazine has published dozens of robotics-related articles over the years.

In the March issue, we present a particularly informative article on robot navigation. Larry Foltzer tackles the topic of robot positioning with acoustic delay triangulation. It’s more of a theoretical piece than a project article, but we’re confident you’ll find it intriguing and useful.

Here’s an excerpt from Foltzer’s article:

“I decided to explore what it takes, algorithmically speaking, to make a robot that is capable of discovering its position on a playing field and figuring out how to maneuver to another position within the defined field of play. Later on, I will build a minimalist platform to test the algorithms’ performance.

In the interest of hardware simplicity, my goal is to use as few sensors as possible. I will use ultrasonic sensors to determine range to ultrasonic beacons located at the corners of the playing field and wheel-rotation sensors to measure distance traversed, if wheel-rotation rate times time proves to be unreliable.

From a software point of view, the machine must be able to determine robot position on a defined playing field, determine robot position relative to the target’s position, determine robot orientation or heading, calculate robot course change to approach target position, and periodically update current position and distance to the target. Because of my familiarity with Microchip Technology’s 8-bit microcontrollers and instruction sets, the PIC16F627A is my choice for the microcontrollers (mostly because I have them in my inventory).

To this date, the four goals listed—in terms of algorithm development and code—are complete and are the main subjects of this article. Going forward, focus must now shift to the hardware side, including software integration to test beyond pure simulation.

SENSOR TECHNOLOGY & THE PLAYING FIELD
A brief survey of ultrasonic ranging sensors indicates that most commercially available units have a range capability of 20’ or less. This is for a sensor type that detects the echo of its own emission. However, in this case, the robot’s sensor will not have to detect its own echoes, but will instead receive the response to its query from an addressable beacon that acts like an active mirror. For navigation purposes, these mirrors are located at three of the four corners of the playing field. By using active mirrors or beacons, received signal strength will be significantly greater than in the usual echo ranging situation. Further, the use of the active mirror approach to ranging should enable expansion of the effective width of the sensor’s beam to increase the sensor’s effective field of view, reducing cost and complexity.

Taking the former into account, I decided the size of the playing field will be 16’ on a side and subdivided into 3” squares forming an (S × S) = (64 × 64) = (2^6 × 2^6) unit grid. I selected this size to simplify the binary arithmetic used in the calculations. For the purpose of illustration here, the target is considered to be at the center of the playing field, but it could very well be anywhere within the defined boundaries of the playing field.

Figure 1: Square playing field (Source: Larry Foltzer CC260)

ECHOES TO POSITION VECTORS
Referring to Figure 1, the corners of the square playing field are labeled in clockwise order from A to D. Ultrasonic sonar transceiver beacons/active mirrors are placed at three of the corners of the playing field, at the corners marked A, B, and D.”
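
To make the ranging and triangulation arithmetic concrete, here is a small sketch written for this post (not code from Foltzer’s article). The corner coordinates, the beacon turnaround latency, and the metric units are all my own assumptions.

```c
/* Convert a round-trip acoustic delay to a one-way range, then turn ranges
 * to beacons at three corners of the square field into an (x, y) position.
 * Beacons assumed at A = (0, 0), B = (S, 0), D = (0, S).
 */
#include <stdio.h>

#define SOUND_M_PER_S  343.0    /* speed of sound in air at ~20 degC     */
#define FIELD_SIDE_M   4.88     /* 16-ft playing field, in meters        */
#define TURNAROUND_S   0.0005   /* assumed beacon reply latency, seconds */

/* One-way range from the measured robot-to-beacon round-trip delay. */
static double range_m(double round_trip_s)
{
    return SOUND_M_PER_S * (round_trip_s - TURNAROUND_S) / 2.0;
}

/* Position from ranges to beacons at A, B, and D. */
static void position(double dA, double dB, double dD, double *x, double *y)
{
    double S = FIELD_SIDE_M;

    *x = (dA * dA - dB * dB + S * S) / (2.0 * S);
    *y = (dA * dA - dD * dD + S * S) / (2.0 * S);
}

int main(void)
{
    /* Robot roughly at the center of the field: all three delays are equal. */
    double dA = range_m(0.0206), dB = range_m(0.0206), dD = range_m(0.0206);
    double x, y;

    position(dA, dB, dD, &x, &y);
    printf("ranges %.2f/%.2f/%.2f m -> position (%.2f, %.2f) m\n",
           dA, dB, dD, x, y);
    return 0;
}
```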

The issue in which this article appears will be available here in the coming days.