Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids.—Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive.

We have started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics software for retail. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go to, or whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You wrote several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and SystemC Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.
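
For a flavor of the kind of inverter-and-RC analysis such a course builds on, here is the classic estimate of how long an RC node takes to charge to the switching threshold, t = −RC·ln(1 − Vth/VDD). The component values below are arbitrary examples, not figures from the course:

```python
# Back-of-the-envelope RC delay: time for a node driven through R to
# charge C up to the logic threshold. Values are arbitrary examples.
import math

R = 10e3            # driver resistance, ohms
C = 50e-15          # wire plus gate capacitance, farads
Vdd = 1.0           # supply voltage, volts
Vth = 0.5 * Vdd     # switching threshold

t = -R * C * math.log(1 - Vth / Vdd)
print(f"delay = {t * 1e12:.0f} ps")   # about 347 ps for these values
```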

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grijalva of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukhopadhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security. Internet-enabled devices enable smart grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentations are very warm, happy experiences—everyone is happy. These events are a time to celebrate the field and the many friends I’ve made through my work.

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Leveraging the common I2C protocol, adding chains of these sensors would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
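
To give a sense of how little code an I2C sensor read takes on such a platform, here is a minimal sketch in Python using the common smbus package on a Raspberry Pi. The device address and register are hypothetical placeholders, not a specific part:

```python
# Minimal sketch: read one I2C sensor on a Raspberry Pi with python-smbus.
# The address and register below are hypothetical placeholders.
from smbus import SMBus

bus = SMBus(1)          # /dev/i2c-1 on most Raspberry Pi models
SENSOR_ADDR = 0x48      # hypothetical 7-bit device address
DATA_REG = 0x00         # hypothetical data register

raw = bus.read_word_data(SENSOR_ADDR, DATA_REG)
print(f"raw sensor reading: {raw}")
```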

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
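
As a minimal sketch of that idea, the following assumes Python with the Flask web framework and a rover that accepts single-byte motor commands over a serial link; the route name, command bytes, port, and baud rate are all illustrative, not from the essay:

```python
# Sketch of a web-to-robot bridge: Flask exposes simple movement URLs
# that a browser or AJAX call can hit; pyserial relays the commands.
# Serial port, baud rate, and command bytes are assumptions.
import serial
from flask import Flask

app = Flask(__name__)
link = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

@app.route("/move/<direction>")
def move(direction):
    # Hide the PID/odometry details behind one friendly URL per move.
    cmd = COMMANDS.get(direction)
    if cmd is None:
        return "unknown command", 404
    link.write(cmd)
    return "ok"

if __name__ == "__main__":
    # Bind to all interfaces so phones and tablets on the same
    # network can reach the robot's interface.
    app.run(host="0.0.0.0", port=8000)
```

A student’s browser (or an AJAX call) would then drive the rover by fetching URLs such as http://robot.local:8000/move/forward.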

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize them. This small, superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve, and as the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system, adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

James Bowman

JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?

The Gameduino 2 uses a Future Technology Devices International chip to drive its graphics

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needed an external monitor and you had to supply your own inputs (e.g., buttons and joysticks). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?

An Arduino, Ethernet adapter, and a Gameduino

JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

NAN: Is it easy or possible to customize the Gameduino 2?

JAMES: I would have to say no. The PCB itself is entirely surface-mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot-air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately, the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces.

If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK or PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes 3-D position, velocity, and hand acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
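
A compressed sketch of that per-frame feature construction, assuming scikit-image for the histogram of oriented gradients and scikit-learn for the PCA step (patch sizes and component counts are placeholders, not the team’s actual parameters):

```python
# Per-frame feature: 3-D motion terms + HOG hand-shape descriptor,
# then PCA across all frames. Parameter values are illustrative.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_feature(depth_patch, position, velocity, acceleration):
    # Histogram of oriented gradients: a compact hand-shape descriptor
    # computed on the cropped depth image of the hand.
    shape_desc = hog(depth_patch, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([position, velocity, acceleration, shape_desc])

# Stack one feature vector per time frame, then reduce dimensionality:
# features = np.vstack([frame_feature(p, x, v, a) for p, x, v, a in frames])
# reduced = PCA(n_components=32).fit_transform(features)
```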

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
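
Those two steps map naturally onto off-the-shelf HMM tooling. Here is a hedged sketch using the third-party hmmlearn package, which is my choice for illustration; the article does not name the authors’ actual implementation:

```python
# Two-step gesture classification with HMMs: learn one model per
# gesture from training sequences, then label a new sequence with
# the model that assigns it the highest likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(training_data):
    # training_data: {gesture_name: list of (n_frames, n_features) arrays}
    models = {}
    for name, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(seq) for seq in sequences]
        model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)   # step 1: learn this gesture's model
        models[name] = model
    return models

def classify(models, observed):
    # step 2: the most likely model for the observed feature vectors
    return max(models, key=lambda name: models[name].score(observed))
```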

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Client Profile: Pico Technology

Pico Technology
320 North Glenwood Boulevard
Tyler, TX 75702

Contact: sales@picotech.com

Embedded Products/Services: Pico Technology’s PicoScope 5000 series uses reconfigurable ADC technology to offer a choice of resolutions from 8 to 16 bits. For more information, visit www.picotech.com/picoscope5000.html.

Product information: The new PicoScope 5000 series oscilloscopes have a significantly different architecture. High-resolution ADCs can be applied to the input channels in different series and parallel combinations to boost the sampling rate or the resolution.

In Series mode, the ADCs are interleaved to provide 1 GS/s at 8 bits. In Parallel mode, multiple ADCs are sampled in phase on each channel to increase the resolution and dynamic performance (up to 16 bits).

In addition to their flexible resolution, the oscilloscopes have ultra-deep memory buffers of up to 512 MB to enable long captures at high sampling rates. They also feature advanced software as standard, including serial decoding, mask limit testing, and segmented memory.

The PicoScope 5000 series oscilloscopes are currently available at www.picotech.com.

The two-channel, 60-MHz model with built-in function generator costs $1,153. The four-channel, 200-MHz model with built-in arbitrary waveform generator (AWG) costs $2,803. The pricing includes a set of matched probes, all necessary software, and a five-year warranty.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. Depending on the model, the instruments can generate waveforms up to 5, 10, 25, or 50 MHz for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5 MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com

Solar Array Tracker (Part 1): SunSeeker Hardware

Figure 1: These are the H-bridge motor drivers and sensor input conditioning circuits. Most of the discrete components are required for transient voltage protection from nearby lightning strikes and inductive kickback from the motors.

Graig Pearen, semi-retired and living in Prince George, BC, Canada, spent his career in the telecommunications industry, where he provided equipment maintenance and engineering services. Pearen, who now works part time as a solar energy technician, designed the SunSeeker solar array tracker, which won third place in the 2012 DesignSpark chipKIT challenge.

He writes about his design, as well as changes he has made to his prototypes since his first entry, in Circuit Cellar’s October issue. It is the first part of a two-part series on the SunSeeker; the final installment presents the system’s software and commissioning tests.

In the opening of Part 1, Pearen describes his objectives for the solar array tracker:

When I was designing my solar photovoltaic (PV) system, I wanted my array to track the sun in both axes. After looking at the available commercial equipment specifications and designs published online, I decided to design my own array tracker, the SunSeeker (see Photo 1 and Figure 1).

I had wanted to work with a Microchip Technology PIC processor for a while, so this was my opportunity to have some fun. I based my first prototype on a PIC16F870 microcontroller, but when the microcontroller maxed out, I switched to its big brother, the PIC16F877. Although both prototypes worked well, I wanted to add more features and capabilities. I particularly wanted to add Ethernet access so I could use my home network to communicate with all my systems. I was considering Microchip’s chipKIT Max32 board for the next prototype when Circuit Cellar’s DesignSpark chipKIT contest was announced.

The SunSeeker board, at top, contains all the circuits required to control the solar array’s motion. This board plugs into the Microchip Technology chipKIT MAX32 processor board. The bottom side of the SunSeeker board (green) with the MAX32 board (red) plugged into it is shown at bottom.

I knew the contest would be challenging. In addition to learning about a new processor and prototyping hardware, the contest rules required me to learn a new IDE (MPIDE), a new programming language (C++), and new schematic capture and PCB design software (DesignSpark PCB). I also decided to make this my first surface-mount component design.

My objective for the contest was to replicate the functionality of the previous Assembly language software. I wanted the new design to be a test platform to develop new features and tracking algorithms. Over the next two to three years of development and field testing, I plan for it to evolve into a full-featured “bells-and-whistles” solar array tracker. I added a few enhancements as the software evolved, but I will develop most of the additional features later.

The system tracks, monitors, and adjusts solar photovoltaic (PV) arrays based on weather and atmospheric conditions. It compiles statistics on these conditions and communicates with a local server that enables software algorithm refinement. The SunSeeker logs a broad variety of data.

The SunSeeker measures, displays, and records the duration of the daily sunny, hazy, and cloudy periods; the array temperature; the ambient temperature; daily minimum and maximum temperatures; incident light intensity; and the drive motor current. The data log is indexed by day number: index 0 holds the annual data and indexes 1–366 store the data for each day of the year. Each record is 18 bytes long, for a total of 6,588 bytes per year.
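
The article gives the fields and the 18-byte record size but not the layout, so the following packing is a guess for illustration only; the field widths, order, and units are assumptions, not Pearen’s actual format:

```python
# Illustrative 18-byte daily log record; widths, order, and units are
# assumed, since the article lists only the fields and the total size.
import struct

# 3 unsigned shorts: sunny/hazy/cloudy durations (minutes)
# 4 signed shorts:   array, ambient, daily min, daily max temps (0.1 C)
# 2 unsigned shorts: incident light intensity, drive motor current (mA)
RECORD = struct.Struct("<3H4h2H")
assert RECORD.size == 18     # 9 fields x 2 bytes each

def pack_day(sunny, hazy, cloudy, t_array, t_ambient, t_min, t_max,
             light, motor_ma):
    """Pack one day's statistics into an 18-byte log record."""
    return RECORD.pack(sunny, hazy, cloudy, t_array, t_ambient,
                       t_min, t_max, light, motor_ma)
```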

At midnight each day, the daily statistics are recorded and added to the cumulative totals. The data logs can be downloaded in comma-separated values (CSV) format for permanent record keeping and for use in spreadsheet or database programs.

The SunSeeker has two main parts, a control module and a separate light sensor module, plus the temperature and snow sensors.

The control module is mounted behind the array where it is protected from the heat of direct sunlight exposure. The sensor module is potted in clear UV-proof epoxy and mounted a few centimeters away on the edge of, and in the same plane as, the array. To select an appropriate potting compound, I contacted Epoxies, Etc. and asked for a recommendation. Following the company’s advice, I obtained a small quantity of urethane resin (20-2621RCL) and urethane catalyst (20-2621CCL).

When controlling mechanical devices, it is necessary to monitor for proper operation and detect malfunctions to prevent hardware damage. For example, if the solar array were to become frozen in place during an ice storm, that condition would need to be sensed and acted upon. Diagnostic software watches the motors to detect any hardware fault that may occur. Fault detection is accomplished in several ways. The H-bridges have internal fault detection for overtemperature, undervoltage, and short circuits. The current drawn by the motors is monitored for abnormally high or low values, and the motor drive assemblies’ pulses are counted to show movement and position.

To read more about the DIY SunSeeker solar array tracker, and Pearen’s plans for further refinements, check out the October issue.

Dual-Channel 3G-SDI Video/Audio Capture Card

ADLINK PCIe-2602 Video/Audio Capture Card

The PCIe-2602 is an SDI video/audio capture card that supports all SD/HD/3G-SDI signals and operates at six times the resolution of regular VGA connections. The card also provides lossless, full-color YUV 4:4:4 video for sharp, clean images.

The PCIe-2602 is well suited for medical imaging and intelligent video surveillance and analytics. With up to 12-bit pixel depth, the card provides extreme image clarity and smoother color-to-color transitions that enhance image detail, supporting critical medical imaging applications including picture archiving and communication system (PACS), endoscopy, and broadcasting.

The card’s features include low-latency uncompressed video streaming, CPU offloading, and support for high-quality live viewing and video analytics of real-time image acquisition, as required in casino and defense environments. PCIe-2602 signals can be transmitted over 100 m when combined with a 75-Ω coaxial cable.

The PCIe-2602 is equipped with RS-485 and digital I/O. It accommodates external devices (e.g., PTZ cameras and sensors) and supports Windows 7/XP OSes. The card comes with ADLINK’s ViewCreator Pro utility to enable setup, configuration, testing, and system debugging without any software programming. All ADLINK drivers are compatible with Microsoft DirectShow.

Contact ADLINK for pricing.

ADLINK Technology, Inc.
www.adlinktech.com

Q&A: Jack Ganssle, Electronics Entrepreneur

Jack Ganssle is a well-known engineer, author, lecturer, and consultant. After learning about oscilloscopes, transistors, and capacitors in his father’s engineering lab, Jack went on to write hundreds of articles and several books about embedded development-related topics. He also started and sold three electronics companies, worked on classified government projects, and founded The Ganssle Group, based in Reisterstown, MD. I recently spoke with Jack about some of his career highlights, his current work, and what’s next in the embedded design industry.—Nan Price, Associate Editor

NAN: You’ve been interested in electronics since the age of 9. Give us a little background information. What was your first project?

Jack Ganssle

Jack Ganssle

JACK: My first project was a crystal radio with the inductor wound on the quintessential Quaker oatmeal box! It was really exciting to get AM reception over that. Back then, pretty much no one had FM. AM was it.

Later I learned to repair TVs and made pocket money doing that. Those sets were all vacuum tubes. Usually there was just a bad tube or dried out capacitor. But from there, my friends and I learned to design amplifiers (the Beatles were very hot and everyone was starting a band). For graduation from eighth grade, my dad gave me an old oscilloscope he had built from a kit years earlier.

He was part of a startup when I was in my early teens. We kids were coerced into being the (unpaid) janitors for the place. That was annoying at first. But, we were allowed to keep anything we swept up. The engineering lab’s floor was always covered in resistors, capacitors, transistors, and the like, so my parts collection grew. (ICs existed then, but were rare.)

When I was 16 I got a ham license, built various transmitters, and used WWII surplus receivers. One day an angry letter arrived from the Federal Communications Commission (FCC). They had picked me up on my second harmonic clear across the country. I was really proud of that contact.

But it wasn’t long before some resistor-transistor logic (RTL) digital ICs came my way. Projects included controls for tube transmitters, Estes model rocket telemetry, and even a crude TV camera that used a photomultiplier tube to scan a spiral set of holes in a spinning disk. A couple of us worked on a ham radio moon bounce, but I accidentally shorted out a resistor and my only hydrogen thyratron (sort of a tube version of an SCR) blew up. There was no money for a replacement, so that project died. The transmitter used a little lighthouse tube that had a maximum rating of a couple of watts, but it worked OK when pulsing it for a few microseconds at 1 kW.

Senior year of high school a friend and I hitchhiked from Maryland to Boston to go to a surplus store. I bought a core memory plane that was 13,000 bits in a 6 in2 cube. Long hair didn’t help. We were picked up on the New Jersey Turnpike and strip searched. The cops never believed my explanation that the thing was computer memory.

A few years later, I had a 6501 microprocessor in the glove compartment of my Volkswagen bus (which I lived in for a year while saving for a sailboat). Coming into a sleepy Maine town from Canada, that event was repeated when the border cops searched the bus and found the chip. They didn’t believe in computers on a chip. But the PC was years away and computers were mostly seen in science fiction films.

Freshman year of college, I designed and built a 12-bit computer using hundreds of TTL chips soldered together using phone company wire on vectorboards. For I/O there was an old Model 15 teletype using 5-bit Baudot codes that my software drove via bit banging. The OS, such as it was, lived in a pair of 1702 EPROMs, which each held 256 bytes. The computer worked great! And then the 8008, the first 8-bit microcontroller, came out and the thing was obsolete. I junked it, and now I wish I had saved at least the schematics.

But by then I had been working part-time as an electronics technician for a few years and the company needed to update its analog products to digital. No one knew anything about computers, so they promoted me to engineer. Eventually I ran the digital group there. We designed one of the first floppy disk controllers, insanely high-resolution graphics controllers, and a lot of other products. We also integrated minicomputers (Data General Novas and DEC PDP-11s) into systems with microprocessors. We bought a 5-MB disk drive for a Nova. It cost $5,000 (back when that was a lot of money) and weighed 500 lb. How things have changed.

NAN: Tell us about The Ganssle Group (www.ganssle.com). When and why did you start the company? What types of services do you provide?

JACK: I formed The Ganssle Group in 1997 after 15 years of running an in-circuit emulator company. Working 70 h a week was getting old and I wanted more time with my kids. So my objective was to reverse the usual model. Instead of fitting life around a job, I wanted to fit the job into life.

Goal 1: Four months of vacation a year. It turns out that is elusive, in no small part due to the cool stuff going on around here, but most years we do manage two to three months off. My wife, Marybeth, works with me. She takes care of all of the administrative/travel and the like.

Goal 2: No commute. So we work out of the house (for the first few years, we worked out of the houseboat where we raised two kids).

Now the kids are grown, so there’s a Goal 3: Have as much fun as possible with Marybeth, so when I travel to new or interesting places she often accompanies me. There’s a lot more to life than work. Some of my side projects are available at www.ganssle.com/jack.

I’m not really sure what I do. I write—a lot. Readers are incredibly smart and vocal. The dialogue with them is a highlight of my day. I also give one- and two-day seminars on pretty much every continent (except Antarctica—so far!) about ways to get better firmware done faster. Sometimes I do an expert witness gig. Those are always fascinating as one gets to dig deeply into products and learn about the law. On rare occasions, I’ll do a day or three of consulting if the problem is particularly interesting. And there’s always some experiment I’m working on, which sometimes gets written up as an article.

NAN: Speaking of articles, you’ve written hundreds—including nine for Circuit Cellar magazine—on topics ranging from the history of the embedded systems programming industry, to memory management, to using programmable logic devices (PLDs). You also write a column for Embedded (www.embedded.com) and you are editor of the biweekly newsletter The Embedded Muse. Tell us about the types of projects you enjoy constructing and writing about.

The breadboard is discharging batteries. To the left, a battery is soldered to some coax. Using the waveform generator in the oscilloscope I’m measuring the battery’s reactance (which, it turns out, is entirely capacitive). The IAR tool is profiling current consumption of an evaluation board.

JACK: I have one experiment that’s running right now. For the last four months I’ve been discharging coin cells. It sounds dull, but some microcontroller vendors are making outrageous claims about battery life that are on the surface true but irrelevant in real circuits. This circuit runs a complex profile on the batteries, tossing different loads on for a few milliseconds, and an ARM microcontroller samples the batteries’ voltage (as well as the transistor’s VCE drop) into a log file. That data goes into a spreadsheet for further analysis. I’m making a much bigger version of this now, which will handle far more batteries at a time. I recently gave some preliminary results at a talk in Asilomar, CA, which garnered a lot of interest. More results will be forthcoming soon…I promise!

Another aspect of this is leakage. Does handling a battery leave finger oils that can affect the decades-long life claimed by the vendors? To test this, I built a femtoammeter. A polypropylene capacitor is charged and feeds a super-low bias current op-amp. Another ARM board monitors the op-amp voltage to watch the capacitor discharge as various contaminants are electrically connected to the capacitor. With no contaminants connected, even after 48 h, the cap discharged less than 1 mV. The thing resolves to better than 10 fA. (One fA is a millionth of a nanoamp, or about 6,000 electrons/second).
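
As a rough plausibility check of those numbers (the capacitor value is an assumption; Ganssle doesn’t give it), the discharge arithmetic I = C·dV/dt lands right in the femtoamp range he describes:

```python
# Back-of-the-envelope check: average leakage current from the
# capacitor discharge. The 1 uF value is assumed; the <1 mV over
# 48 h figures come from the text.
C = 1e-6           # polypropylene capacitor, farads (assumed)
dV = 1e-3          # observed discharge, volts
dt = 48 * 3600     # observation window, seconds
I = C * dV / dt    # average leakage current, amps
print(f"{I * 1e15:.1f} fA")   # ~5.8 fA, consistent with <10-fA resolution
```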

In fact, the ADC’s transfer function is a proxy for temperature. We heat the house with wood and you could see a perfect correlation of op-amp output and temperature throughout the day. (It’s lowest in the morning as the fire burns out overnight.)

NAN: You wrote the two-part Circuit Cellar article series, “Writing a Real-Time Operating System” (Issues 7 and 8, 1989), about the Hitachi HD64180 Z80-based embedded microprocessor nearly 25 years ago. Circuit Cellar also featured another HD64180-based article, “Huge Arrays on the HD64180: Taking Advantage of Memory Management” (Issue 16, 1990). What was your fascination with the HD64180? Also, is either of these projects still current? Have you changed any of the design components?

JACK: Gee, I have no idea. I wrote those using Microsoft Works, but the file format has changed and Works can no longer open those articles. Alas, the HD64180 is quite obsolete. It was a grown-up version of the Z80 and very popular in its day.

In 1974, Intel introduced the 8080, which was the first really decent 8-bit microprocessor. But it needed two clocks and three power supplies. The folks at Zilog came out with the Z80 a year later. It could run 8080 code, but had one clock, a single 5-V supply, and it offered additional instructions that massively improved code density. Intel responded with the 8085, but it was really an 8080 in drag. The couple of new instructions added just couldn’t give the Z80 a run for its money. Eventually Zilog came out with the Z180, and Hitachi the 64180 clone, which included on-board peripherals and a memory management unit to address 1 MB using standard Z80 instructions. It was a great idea, but since there was no on-board memory, it couldn’t compete with microcontrollers such as the ancient, and still-going-strong, 8051.

NAN: In addition to writing, you lecture and teach at conferences and symposiums worldwide. Tell us about your one-day “Better Firmware Faster” seminar. How did it begin? What can attendees expect to gain from it?

JACK: I’m completely frustrated with the state of firmware. It’s inevitably late and buggy. While there’s no doubt that crafting firmware is extremely difficult—after all, software is the most complex engineered product ever invented—we can and must do better. It’s astonishing that so few groups keep even the simplest metrics, yet engineering is all about numbers.

The seminar is a fast-paced event that shows developers better ways to get their code to market. It covers process issues, as well as a lot of technology areas unique to embedded systems, such as managing memory and dealing with tough real-time problems.

What can attendees get from it? It varies from very little to a lot. Some groups refuse to change anything, so will always maintain the status quo. Others do better. Some report 40% improvements to the schedule and up to an order of magnitude of reduction in shipped bugs.

NAN: You started three high-tech companies prior to The Ganssle Group. Tell us about your work experience. Any highlights?

JACK: Well, there was one instrument that used infrared light to measure protein in cow poop. Though it was interesting technology, it’s hard to call that a highlight. The design I’m most proud of was my first emulator, which had only 17 ICs and used insanely complex code. Eventually we offered emulators that required hundreds of chips, but those cost $7,000, while the first one sold for $600.

Some of the government work I’ve done was very interesting and used extremely sophisticated electronics. But I can’t talk about those projects. A buddy and I did the White House security system during the Reagan administration. It was fun to work in the basement there, but the bureaucracy was stifling. We lost our White House passes the same day Oliver North did, but he got more press.

NAN: What do you consider to be the “next big thing” in the embedded design industry? Is there a particular technology that you’ve used or seen that will change the way engineers design in the coming months and years?

JACK: Everything is going to change for us over the next five to 10 years. We will have tools that automatically find lots of bugs. Everyone is familiar (and has a love/hate relationship) with lint. But static analyzers can today find lots of runtime bugs. These are currently expensive and frustrating, but they demonstrate that such products can, and will, exist. When the issues are resolved, I expect they’ll be as common as IDEs. Debugging manually is hugely expensive.

Another tool is slowly gaining acceptance: so-called virtualization products (e.g., from Wind River and others). These are not the hypervisors people think about when using the word “virtualization.” Rather, they are complete software models of a target system. You can run all—and I mean all—of your code on the model. The hardware is always late. These tools will permit debugging to start at the beginning of the project. The tools are also expensive and somewhat clumsy, but will get better over time.

A modern smartphone has more than 10 million lines of code. Automobiles often have more. One thing is certain: Firmware will continue to grow in size and complexity. The current techniques we use to develop code will change as well.

Q&A: Peter Lomas – Raspberry Pi: One Year Later, 1 Million Sold

Peter Lomas

Clemens Valens, Editor-in-Chief of Elektor Online and head of Elektor Labs, caught up with Peter Lomas, hardware designer for the Raspberry Pi single-board computer, earlier this year at the Embedded World 2013 trade show in Nuremberg, Germany. This is a longer version of an interview with Lomas published in Elektor’s May 2013 issue. The Lomas interview provided a one-year update on the rapid growth of interest in the Raspberry Pi since Elektor’s April 2012 interview with Eben Upton, one of the founders and trustees of the Raspberry Pi Foundation. The UK-based charitable foundation developed the inexpensive, credit card-sized computer to encourage the study of basic computer science in schools. In early 2012, the Raspberry Pi’s first production batches were arriving. Since then, more than 1 million boards have been sold.

CLEMENS: Raspberry Pi, the phenomenon. It is quite amazing what happened.

PETER: It is, and lots of people keep asking me: why has Raspberry Pi done what it has done, what makes it different? It’s something we’ve really been trying to grasp. The first thing that happened with Raspberry Pi, which I think is important, is that we had one of our very first prototypes on a UK blog for one of the BBC correspondents, Rory Cellan-Jones, and they made a little YouTube video, and that got 600,000 hits. So if you look at it from one aspect, that created a very viral marketing campaign for Raspberry Pi. The other thing, I think, is that the name, Raspberry Pi, was key. And the logo that Paul Beach did for us is absolutely key because it has become iconic.

CLEMENS: Yes, it’s very recognizable.

PETER: Very recognizable. If I show you that, you know exactly what it is, in the electronics circle. So I think the brand has been very important. But you know, we shouldn’t forget the amount of work that Liz Upton’s been doing with the blogs and on our website, keeping people informed about what we’re doing. Then, I think we’ve got the fact we are a charity… that we are focused on the education of computing and electronics and that’s our motive—not actually to make boards and to make money except to fund the foundation.

CLEMENS: I looked at the Raspberry Pi website, and it doesn’t look easy to me. You target education, children, and on the website it’s hard to find what Raspberry Pi exactly is. It’s not really explained. You have to know it. There are several distributions, so you have to know Linux and you have to program in Python.

PETER: Well, that’s true and, in a weird way, that’s part of its success, because you actually have to be active. In order to do something with Pi, you can’t just get it out of a shiny box, put it on the desk and press “on.” You have to do some mental work. You have to figure some things out. Now, I actually think that there’s a bit of a benefit there, because when it actually works, you have some achievement. You’ve done something. Not “we’ve done something.” You’ve done it personally, and there is a gratification from doing it.

CLEMENS: But it’s not the easiest platform.

PETER: No, but with our educational proposition, the whole object now is to package that up in easier-to-use bundles. We can make the SD card boot straight to Scratch (a simple programming language and website developed at the Massachusetts Institute of Technology Media Lab), so Linux becomes temporarily invisible, and there’s a set of worksheets and instructions. But we’re never going to take away, hopefully, the fact that you have to put your wires in, and I do think that is part of the importance and the attraction of it.

CLEMENS: Because of all these layers of complexity and having to program it in English (Python is in English), for the non-English population it is yet another hurdle. That’s why Arduino was so successful; they made the programming really easy. They had cheap hardware but also a way to easily program it.

PETER: There’s no doubt Arduino is a brilliant product. You are right, it enables people to get to what I call “Hello World” very easily. But, in fact, on a Raspberry Pi, after you’ve made those connections and plugged the card in, you can get to an equivalent “Hello World.” But ours is the Scratch cat. Once you’ve moved the Scratch cat, you can go in a few different directions: you can move it some more, or you can use Scratch with an I/O interface to make an LED light up or you can press a button to make the Scratch cat move. There are endless directions you can go. I’ve found, and I think Eben has similarly experienced, that kids just get it. As long as you don’t make it too complicated, the kids just get it. It’s the adults who have more problems.

CLEMENS: I saw that there are at least three different distributions for the boards. So what are the differences between the three? Why isn’t there just one?

PETER: Well, they all offer subtly different features. The whole idea was to make Raspberry Pi as an undergraduate tool. You give it to Cambridge University, hopefully Manchester University, and undergraduates can view the science before they start it. They have the summer. They can work on it, come back, and say: “Look, I did this on this board.” That’s where it all started.

CLEMENS: OK. So, you were already on quite a high level.

PETER: Well, we were on a high level, that’s true, so Scratch wouldn’t have been on the agenda. It was really just Python—that’s actually where the Pi comes from.

What has really happened is that we’ve developed this community and this ecosystem around Pi. So we have to be able to support the, if you like, “different roots” of people wanting to use Pi. Now we’ve got the RISC OS that you can use. And people are even doing bare-metal programming. If we just gave one distribution, I guess we’re closing it up. I fully approve of having different distributions.

CLEMENS: From the website, it’s not clear to me what is different in these distributions. For the first one, it is written: “If you’re just starting out.”

PETER: I think maybe we do need to put some more material in there to explain to people the difference. I have to explain: I’m the hardware guy. I’m the guy who sat there connecting the tracks up, connecting the components up. My expertise with the operating systems, with the distributions that we have, is really limited to the graphical interface because that’s what I use day in, day out.

CLEMENS: Once you have chosen your distribution and you want to control an LED, you have to open a driver or something, I suppose?

PETER: Well, you’ve got the library; you just have to make a library call. Again, it’s not easy. You have to go and find the libraries and you have to download them. Which is where things such as the Pi-Face (add-on board) come in, because that comes with an interactive library that will go onto Scratch. And you’ve got the Gertboard (another extension board) and that comes with the libraries to drive it and some tutorial examples and then you can wind that back to just the bare metal interface on the GPIOs.
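
For illustration only (Lomas doesn’t name a specific library here), the widely used RPi.GPIO Python package reduces that LED exercise to a few lines; the pin number is arbitrary:

```python
# Blink an LED from Python with the RPi.GPIO library.
# GPIO18 is an arbitrary choice; wire the LED (plus resistor) to match.
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)       # use Broadcom pin numbering
GPIO.setup(18, GPIO.OUT)     # GPIO18 drives the LED

for _ in range(5):
    GPIO.output(18, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(18, GPIO.LOW)
    time.sleep(0.5)

GPIO.cleanup()
```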

CLEMENS: So the simplicity is now coming from the add-on boards?

PETER: Some of the add-on boards can make it simpler, where they give you the switches and they give you the LEDs. You don’t need to do any wiring. My view is that I’m trying to make it like an onion: You can start with the surface and you can do something, and then you can peel away the layers. The more interested you get, the more layers you can peel away and the more different directions you can go (in what you do with it). You must have seen the diverse things that can be done.

CLEMENS: I’ve looked at some projects. I was surprised by the number of media centers. That’s how RS Components (which distributes the Raspberry Pi) is promoting the board. Aren’t you disappointed with that? It seems to be, for a lot of people, a cheap platform to do a Linux application on. They just want to have a media center.

PETER: I know exactly what you mean. And I suppose I should be disappointed that some people buy it, they make it into a media center, and that’s all it does. But I think if only 5% or 10% of those people who make it into a media center will think: “Well, that was easy, maybe I’ll get another and see if I can do something else with it,” then it’s a success.

CLEMENS: It would be an enabler.

PETER: Getting the technology in front of people is the first problem. Getting the “Hello World” so they’ve got a sense of achievement is the second problem. Then turning them over from doing that to “Okay, well, what if I try and do this?” is Nirvana. Certainly for the kids that’s crucial, because we’re changing them from doing what they’re told to doing things that they think they might be able to do—and trying it. That makes them into engineers.

CLEMENS: Let’s move on to the board’s hardware.

PETER: Sure.

CLEMENS: So, you chose a Broadcom processor. Because Eben worked at Broadcom?

PETER: He still works within Broadcom. It would be hard for me to argue that that wasn’t an influence on the decision, because Eben said: “Oh look, here’s the bright shiny chip. It can do all the things that we want, why wouldn’t we use it?” The decision we made was to nail our credentials and our reputations to the website by saying it would cost $35—$25 for the basic one. And there was no way on Earth any of us were going to go back on that… We had a spreadsheet, the basic numbers looked plausible, we just had to do a lot of work to chop it down—to hone it, to get it tight so it would actually meet those prices. So, I think if we’d gone another way, like maybe with Samsung, that would have blown the budget.

CLEMENS: Did Broadcom help in any way to make this possible?

PETER: Every semiconductor manufacturer helped the project by making the chips available. The price point of the chips is important, too. I think some of the people who helped us took an educated gamble and gave us good pricing from day one. Because the big problem you get when trying to bootstrap any project is that you don’t know what your volume is going to be. You have to be conservative.

So, initially, we priced for a thousand boards, then quickly for 20,000 boards, but never in our wildest dreams did we think we were going to get to a 200,000-board requirement on launch day and be so tantalizingly close to selling a million after our first year. That has helped in a lot of ways, because obviously it’s driven the price of all the components down. And I’m not going to pretend it doesn’t please the vendors of the components that had faith in us from day one, because they’ve obviously made some money out of it.

We always had the rationale that we had to have a sustainable model in which the foundation, the community buying the boards, and our suppliers were all making a living and could feed themselves. It would have been a total disaster if someone such as Broadcom had said: “Tell you what guys, let’s give you the processors. We’ll give you the first 20,000.” We could then have put all sorts of extra bells and whistles into the design. But once we had sold those 20,000 boards, the price of everything would have gone up by $12. That would’ve been the end of Raspberry Pi.

CLEMENS: If Eben and the others had not worked for Broadcom…

PETER: Would we have used a different chip? Well, I sort of speculated about this, and I went around and had a look and, at the time, for that price point, we couldn’t find anything that would’ve met our requirements as well as that chip. So I was comfortable it was the one that would allow us to get to where we wanted to be, and I think the key crunch for that was the high-definition multimedia interface (HDMI). From a technical point of view, one of the challenges we had was getting the breakout under the BGA, because blind and buried vias on PCBs are very expensive.

CLEMENS: How many layers is the board?

PETER: Six, which is a pretty bog-standard layer count. The only little trick that we used was to put blind vias only on layers one and two—so we had an extra drilling stage—but only one bonding stage. So that added $0.02 onto the cost of the board. But, because the next layer down was a ground plane, it meant that a lot of the connections that come out of the Broadcom processor just go down one layer. And that meant that I could have space underneath to route other things and actually make it all happen.

CLEMENS: Don’t they have guidelines at Broadcom?

PETER: Oh, they do have guidelines! Use blind and buried vias, or vias in pads. Our first prototype was all-singing, all-dancing, but it would have cost $100 to $110 to manufacture. So we got the machete out and started hacking away all the things that we didn’t need. The result is that you’ve got all the functionality that you want. You can get the performance that you want, you can get the compliance, but it’s got nothing extra.

CLEMENS: Have you been thinking about the future of Raspberry Pi?

PETER: Well, yeah… In our industry, you know, Moore’s law guarantees that everything is old hat in two years’ time. So we’re thinking about it, but that’s all we’re doing. We’re trying to improve our educational release. I mean, let’s face it, I’m not going to pretend that the Raspberry Pi is perfect. We only made one modification to the board from design to release, and we’ve only made some minor modifications in the V2 release. Some of that was to fix some anomalies; some of that was also to help our new manufacturing partner, Sony (in Pencoed, Wales), whose process needed some slight changes to the board to make it easier to manufacture.

CLEMENS: About the original idea of Raspberry Pi, the educational thing. I had a look at the forums: there are lots of threads about technical details and quite a lot of questions and topics about start-up problems, but the educational forum is pretty small.

PETER: You’re right. You’re absolutely right. A lot of that work has been going on slowly and carefully in the background. To be completely honest with you, we were caught on the hop by the interest in Raspberry Pi, so I’ve certainly spent the last 12 months making sure that we can deliver the product to our community so that they can develop with it, and perhaps talking only a little bit about our educational goals. But we’re absolutely refocusing on that.

CLEMENS: First, get the hardware into people’s hands and then focus on the education.

PETER: Exactly. And of course, we’ve also released the first computers into schools as teaching tools. We’ve also got Clive, who is a full-time employee helping with the educational deployment. And it’s great that we’ve had all this support (from Google Giving) to get 15,000 kits into schools. I won’t pretend we don’t have a lot of work to do, but think of where we were a year ago, still just trying to launch.

CLEMENS: It all went really fast.

PETER: Oh yes, it’s gone like a rocket!

CLEMENS: Have you personally learned something valuable from it?

PETER: Well, I’ve learned lots of things. I think the most valuable, maybe not a lesson, but a reinforcement of something I already thought, is that education doesn’t just exist in the classroom. It exists all around us. The opportunity to learn and the opportunity to teach exists every day in almost every aspect of what we do. You know, there are people who spend their lives trying to keep every secret, keep everything to themselves. But there are also people who just give. And I’ve met so many people who are just givers. I suppose I’ve learned there is a whole new system of education that goes on outside of the standard curriculum that helps people do what they want to do.

Editor’s Note: Interview by Clemens Valens, Transcription by Joshua Walbey.

RESOURCES

  • Embedded Linux Wiki, “RPi Gertboard,” elinux.org/RPi_Gertboard
  • W. Hettinga, “What Are You Doing? The Raspberry Pi $25 Computer,” Elektor, April 2012.
  • Massachusetts Institute of Technology Media Lab, “Scratch,” scratch.mit.edu
  • University of Manchester School of Computer Science, Projects Using Raspberry Pi, “Pi-Face Digital Interface,” http://pi.cs.man.ac.uk/interface.htm

 

Low-Cost, High-Performance 32-bit Microcontrollers

The PIC32MX3/4 32-bit microcontrollers are available in 64/16-, 256/64-, and 512/128-KB flash/RAM configurations. The microcontrollers are coupled with Microchip Technology’s software and tools for designs in connectivity, graphics, digital audio, and general-purpose embedded control.

The microcontrollers offer large RAM options and high peripheral integration at a low cost. They feature a 28-channel, 10-bit ADC, five UARTs, 105-DMIPS performance, serial peripherals, and support for graphic displays, capacitive touch, connectivity, and digital audio.
The PIC32MX3/4 microcontrollers are supported with general software development tools, including Microchip Technology’s MPLAB X integrated development environment (IDE) and the MPLAB XC32 C/C++ compiler.

Application-specific tools include the Microchip Graphics Display Designer X and the Microchip Graphics Library, which together provide a visual design tool for quick and easy creation of graphical user interface (GUI) screens. The microcontrollers are also supported with a set of Microchip protocol stacks, including TCP/IP, USB Device and Host, Bluetooth, and Wi-Fi. For digital audio applications, Microchip provides software for tasks such as sample rate conversion (SRC), audio codecs (including MP3 and Advanced Audio Coding, or AAC), and software to connect smartphones and other personal electronic devices.

The PIC32MX3/4 family is supported by Microchip’s PIC32 USB Starter Kit III, which costs $59.99, and by the PIC32MX450 100-pin USB plug-in module for the modular Explorer 16 development system, which costs $25. Pricing for the PIC32MX3/4 microcontrollers starts at $2.50 each in 10,000-unit quantities.

Microchip Technology, Inc.
www.microchip.com

Member Profile: Steve Hendrix

Steve Hendrix

Location: Sagamore Hills, OH (located between Cleveland and Akron)

Education: BS, United States Air Force Academy, El Paso County, CO

Occupation: Steve began moonlighting as an engineering consultant in 1979. He has been a full-time consultant since 1992.

Member Status: He says he has been a subscriber since “forever.” He remembers reading the Circuit Cellar columns in Byte magazine.

Technical Interests: Steve enjoys embedded design, from picoamps to kiloamps, from nanovolts to kilovolts, from microhertz to gigahertz, and from nanowatts to kilowatts.
Current Projects: He is working on eight active professional projects. Most of them are built around Microchip Technology’s PIC18 microcontroller family.

Some of Steve’s projects include Texas Instruments Bluetooth processors and span all the previously mentioned ranges in the interfacing hardware. Steve says he is also working on a personal project involving solar photovoltaic power.

Thoughts on the Future of Embedded Technology: Steve thinks of embedded technology as “a delicate balancing act: time spent getting the technology set up vs. time we would spend to do the same job manually; convenience and connectivity vs. privacy; time and power saved vs. energy consumed; time developing the technology vs. its payoffs; and connectedness with people far away vs. with those right around us.” Additionally, he says there are always the traditional three things to balance: “good, fast, cheap—choose two!”

New Products: July 2013

CWAV, Inc. USBee QX

MIXED SIGNAL OSCILLOSCOPE WITH PROTOCOL ANALYZER

The USBee QX is a PC-based mixed-signal oscilloscope (MSO) integrated with a protocol analyzer utilizing USB 3.0 and Wi-Fi technology. The highly integrated, 600-MHz MSO features 24 digital channels and four analog channels.

With its large 896-Msample buffer memory and data compression capability, the USBee QX can capture up to 32 days of traces. It displays serial or parallel protocols in a human-readable format, enabling developers to find and resolve obscure and difficult defects. The MSO includes decoders for popular serial protocols (e.g., RS-232/UARTs, SPI, I2C, CAN, SDIO, Async, 1-Wire, and I2S), which are typically costly add-ons for benchtop oscilloscopes. APIs and Tool Builders integrated into the USBee QX software support any custom protocol.

The USBee QX’s Wi-Fi capability enables you to set up testing in the lab while you are at your desk. The Wi-Fi link also electrically isolates the device under test from the host computer.

The USBee QX costs $2,495.

CWAV, Inc.
www.usbee.com

 


DownStream Technologies FabStream

FREE PCB DESIGN SOFTWARE SUITE

FabStream is an integrated PCB design and manufacturing solution designed for the DIY electronics market, including small businesses, start-ups, engineers, inventors, hobbyists, and other electronics enthusiasts. FabStream consists of free SoloPCB Design software customized to each manufacturing partner in the FabStream network.

The FabStream service works in three easy steps. First, you log onto the FabStream website (www.fabstream.com), select a FabStream manufacturing partner, and download the free design software. Next, you create PCB libraries, schematics, and board layouts. Finally, the software leads you through the process of ordering PCBs online with the manufacturer. You only pay for the PCBs you purchase. Because the service is mostly Internet-based, FabStream can be accessed globally and is available 24/7/365.

FabStream’s free SoloPCB Design software includes commercial-quality schematic capture, PCB layout, and autorouting in one easy-to-use environment. The software is customized to each manufacturing partner: all of the manufacturer’s production capabilities are built into SoloPCB, enabling you to work within that manufacturer’s constraints. Design changes can be made and then verified through an integrated analyzer that uses a quick pass/fail check to compare the modification against the manufacturer’s rules.

SoloPCB does not contain any CAM outputs. Instead, a secure, industry-standard IPC-2581 manufacturing file is automatically extracted, encrypted, and electronically routed to the manufacturer during the ordering process. The IPC-2581 file contains all the design information needed for manufacturing, which eliminates the need to create Gerber and NC drill files.

FabStream is available as a free download. More information can be found at www.fabstream.com.

DownStream Technologies, LLC
www.downstreamtech.com

 


Rohde & Schwarz SMW200A

HIGH-PERFORMANCE VECTOR SIGNAL GENERATOR

The R&S SMW200A high-performance vector signal generator combines flexibility, performance, and intuitive operation to quickly and easily generate complex, high-quality signals for LTE Advanced and next-generation mobile standards. The generator is designed to simplify complex 4G device testing.

With its versatile configuration options, the R&S SMW200A’s range of applications extends from single-path vector signal generation to multichannel multiple-input and multiple-output (MIMO) receiver testing. The vector signal generator provides a baseband generator, an RF generator, and a real-time MIMO fading simulator in a single instrument.

The R&S SMW200A covers the 100-kHz-to-3-GHz (optionally 6-GHz) frequency range and features a 160-MHz I/Q modulation bandwidth with internal baseband. The generator is well suited for verification of 3G and 4G base stations and for aerospace and defense applications.

The R&S SMW200A can be equipped with an optional second RF path for frequencies up to 6 GHz and with a maximum of two baseband and four fading-simulator modules, providing users with two full-featured vector signal generators in a single unit. Fading scenarios, such as 2 × 2 MIMO, 8 × 2 MIMO for TD-LTE, and 2 × 2 MIMO for LTE Advanced carrier aggregation, can be easily simulated.

Higher-order MIMO applications (e.g., 3 × 3 MIMO for WLAN or 4 × 4 MIMO for LTE-FDD) are easily supported by connecting a third and fourth source. The R&S SGS100A is a highly compact RF source that can be controlled directly from the front panel of the R&S SMW200A.

The R&S SMW200A ensures high accuracy in spectral and modulation measurements. The SSB phase noise is –139 dBc (typical) at 1 GHz (20-kHz offset). Help functions are provided for additional ease of use, and presets are provided for all important digital standards and fading scenarios. LTE and UMTS test case wizards simplify complex base station conformance testing in line with the 3GPP specification.

Contact Rohde & Schwarz for pricing.

Rohde & Schwarz
www.corporate.rohde-schwarz.com

 


Texas Instruments CC2538

INTEGRATED ZIGBEE SINGLE-CHIP SOLUTION WITH AN ARM CORTEX-M3 MCU

The Texas Instruments (TI) CC2538 system-on-chip (SoC) is designed to simplify the development of ZigBee wireless connectivity-enabled smart energy infrastructure, home and building automation, and intelligent lighting gateways. The cost-efficient SoC features an ARM Cortex-M3 microcontroller, memory, and hardware accelerators on one piece of silicon. The CC2538 supports the ZigBee PRO, ZigBee Smart Energy, and ZigBee Home Automation and Lighting standards to deliver interoperability with existing and future ZigBee products. The SoC also supports IEEE 802.15.4 and 6LoWPAN IPv6 networking for IP standards-based development.

The CC2538 supports fast digital management and features scalable memory options, from 128 to 512 KB of flash, to support smart energy infrastructure applications. The SoC can sustain a mesh network with hundreds of end nodes using integrated 8-to-32-KB RAM options that are pin-for-pin compatible for maximum flexibility.

The CC2538’s additional benefits include operation at temperatures up to 125°C, optimization for battery-powered applications (drawing only 1.3 µA in Sleep mode), and efficient processing for centralized networks with a reduced bill-of-materials cost, thanks to the integrated ARM Cortex-M3 core.

The CC2538 development kit (CC2538DK) provides a complete development platform for the CC2538, enabling users to evaluate all its functionality without additional layout. It comes with high-performance CC2538 evaluation modules (CC2538EMK) and motherboards with an integrated ARM Cortex-M3 debug probe for software development, plus peripherals, including an LCD, buttons, LEDs, a light sensor, and an accelerometer, for creating demo software. The boards are also compatible with TI’s SmartRF Studio for running RF performance tests. The CC2538 supports current and future Z-Stack releases from TI and over-the-air software downloads for easier upgrades in the field.

The CC2538 is available in an 8-mm × 8-mm QFN56 package and costs $3 in high volumes. The CC2538 is also available through TI’s free sample program. The CC2538DK costs $299.

Texas Instruments, Inc.
www.ti.com

Member Profile: Thomas Struzik

Member Thomas Struzik at his bench.

 

  • Member Name: Thomas Struzik
  • Location: Houston, TX
  • Education: BSEE, Purdue University
  • Occupation: Software architect
  • Member Status: He has been a subscriber since day one. “I’ve got Issue 1 sitting in a box somewhere,” he said. Thomas adds that he was a BYTE magazine subscriber before Circuit Cellar.
  • Technical Interests: Thomas enjoys automation through embedded technology, robotics, low-level programming, and electronic music generation/enhancement.
  • Most Recent Embedded Tech-Related Purchase: He recently bought a CWAV USBee SX Digital Test Pod and an Atmel AVR Dragon.
  • Current and Recent Projects: Thomas is working on designing an isolated USB power supply for his car.
  • Thoughts on the Future of Embedded Technology: Ever-increasing complexity is becoming a stumbling block for the “average” user. “Few people even realize the technology embedded in everyday items,” he said. “How many people know that brand-new LCD TV they’ve got is actually running Linux under the covers? Fortunately, there seems to be a resurgence of ‘need-to-know how stuff works’ with the whole DIY/maker culture. But even that is still a small island compared to the population in general.”

Autonomous Mobile Robot (Part 2): Software & Operation

I designed a microcontroller-based mobile robot that can cruise on its own, avoid obstacles, escape from inadvertent collisions, and track a light source. In the first part of this series, I introduced my TOMBOT robot’s hardware. Now I’ll describe its software and how to achieve autonomous robot behavior.

Autonomous Behavior Model Overview
The TOMBOT is a minimalist system with just enough components to demonstrate some simple autonomous behaviors: Cruise, Escape, Avoid, and Home (see Figure 1). All of the behaviors require the left and right servos for maneuverability. In general, “Cruise” just keeps the robot in motion in the absence of any stimulus. “Escape” uses the bumper to sense a collision and then executes a 180° spin with reverse. “Avoid” uses the continuous forward-looking IR sensors to veer left or right upon approaching a close obstacle. Finally, “Home” uses the front optical photocells to guide the robot toward a strong, highly directional light source.

Figure 1: High-level autonomous behavior flow

Figure 2 shows more details. The diagram captures the interaction of TOMBOT hardware and software. On the left side of the diagram are the sensors, power sources, and command override (the XBee radio command input). All analog sensor inputs and bumper switches are sampled automatically every 100 ms during the Microchip Technology PIC32 Timer 1 interrupt. The left and right bumper switches are debounced using 100 ms as the timer increment. The analog sensor inputs are digitized using the PIC32’s 10-bit ADC, with each sensor assigned its own ADC input channel. The collected data is averaged in some cases and then made available to the different behaviors. Processing other than averaging is done within the behavior itself.

Figure 2: Detailed TOMBOT autonomous model
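
The sampling tick might look something like the following sketch. It assumes plib-style calls, hypothetical bumper pins (RD8/RD9), and a simple two-sample debounce; it is an illustration, not the article’s actual listing.

#include <plib.h>            /* legacy PIC32 peripheral library */
#include <sys/attribs.h>     /* __ISR macro */

extern void SCAN_SENSORS(void);      /* digitizes the five analog inputs */
volatile int bumpL, bumpR;           /* debounced bumper states */

/* Timer 1 is assumed to be configured for a 100-ms period at init time. */
void __ISR(_TIMER_1_VECTOR, ipl2) Timer1Handler(void)
{
    static int lastL, lastR;
    int rawL = PORTDbits.RD8;        /* hypothetical bumper input pins */
    int rawR = PORTDbits.RD9;

    SCAN_SENSORS();                  /* sample and average the sensors */

    /* 100-ms debounce: accept a new switch state only when two
       consecutive samples agree */
    if (rawL == lastL) bumpL = rawL;
    if (rawR == lastR) bumpR = rawR;
    lastL = rawL;
    lastR = rawR;

    mT1ClearIntFlag();               /* acknowledge the Timer 1 interrupt */
}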

All behaviors are implemented as state machines. If a behavior requests motor control, the request is arbitrated against all other behaviors before any motor action is taken. Escape has the highest priority (the power behavior is not yet implemented) and its state machine will dominate all the other behaviors. If Escape is not active, Avoid dominates whenever its IR detectors sense an object less than 8″ in front of the TOMBOT. If neither Escape nor Avoid is active, Home takes over steering to track a light source immediately in front of the TOMBOT. Finally, Cruise assumes command and temporarily takes the TOMBOT in a forward direction.

A command received from the XBee RF module can stop and start autonomous operation remotely, which is very handy for system debugging. The complete values of all sensors and the battery level can be viewed on the graphics display via remote command, with the LEDs and buzzer announcing remote-command acceptance and execution.

Currently, the green LED signals that the TOMBOT is ready to accept a command, and the red LED indicates that the TOMBOT is executing one. The buzzer sounds when the remote command has been completed, coincident with the red LED turning off.

With behavior programming, there are a lot of considerations. Successful autonomous operation requires calibration of the photocells, the IR sensors, and the servos. The good news is that each behavior can be isolated (selectively comment out what is not needed before compiling), so phenomena can be examined one at a time and the proper calibrations made. We will discuss this as we get a little deeper into the library API; in general, though, the behavior model does not require highly accurate calibration and is fairly robust under less-than-ideal conditions.

TOMBOT Software Library
The TOMBOT robot library is modular. Some experience with C programming is required to use it (see Figure 3).

Figure 3: TOMBOT Library

The entire library is written using Microchip’s PIC32 C compiler. Both the compiler and Microchip’s MPLAB 8.xx IDE are available as free downloads at www.microchip.com. The overall library structure is shown in Figure 3. At the highest level, the library has three main sections: Motor, I/O, and Behavior. We’ll cover each area in some detail.

TOMBOT Motor Library
All functions controlling the servos’ (left and right wheel) operation are contained in this part of the library (see Listing 1, Motor.h). The Microchip PIC32 peripheral library is also used. Motor initialization is required before any other library function. It starts up both the left and right servos in the idle position, using the PIC32 PWM peripherals OC3 and OC4 with the dual Timer34 (32 bits) for period setting. C define statements set the pulse period and duty cycle for both wheels. These defines vary the PWM pulse width from 1.5 to 2 ms for different-speed CCW rotation, and from 1.5 ms down to 1 ms for CW rotation, over a 20-ms period.

Listing 1: All functions controlling the servos are in this part of the library.

V_LEFT and V_RIGHT (velocity left and right) use the PIC32 peripheral library functions to set the duty cycle. The other motor functions, in turn, call V_LEFT and V_RIGHT with the define statements; see the FORWARD and BACKWARD functions for an example (see Listing 2).

Listing 2: Motor function code examples
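
As a rough sketch of what these wrappers might look like (the tick values, scaling, and the SPIN helpers are assumptions, not the article’s listing):

#include <plib.h>    /* SetDCOC3PWM()/SetDCOC4PWM() live here */

/* Pulse widths in timer ticks; these values assume 1 tick = 1 µs
   (hypothetical; the actual numbers depend on the timer prescaler). */
#define PULSE_IDLE  1500u   /* 1.5 ms: servo center, no rotation */
#define PULSE_CCW   2000u   /* 2.0 ms: full-speed CCW rotation   */
#define PULSE_CW    1000u   /* 1.0 ms: full-speed CW rotation    */

void V_LEFT(unsigned int ticks)  { SetDCOC3PWM(ticks); } /* left servo, OC3  */
void V_RIGHT(unsigned int ticks) { SetDCOC4PWM(ticks); } /* right servo, OC4 */

/* The servos are mounted mirror-imaged, so driving "forward" spins one
   wheel CCW and the other CW. */
void FORWARD(void)   { V_LEFT(PULSE_CCW);  V_RIGHT(PULSE_CW);   }
void BACKWARD(void)  { V_LEFT(PULSE_CW);   V_RIGHT(PULSE_CCW);  }
void SPIN_CW(void)   { V_LEFT(PULSE_CCW);  V_RIGHT(PULSE_CCW);  } /* wheels oppose */
void SPIN_CCW(void)  { V_LEFT(PULSE_CW);   V_RIGHT(PULSE_CW);   }
void MOTOR_IDLE(void){ V_LEFT(PULSE_IDLE); V_RIGHT(PULSE_IDLE); }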

In the idle setting, with both PWM outputs at the 1.5-ms center position, the servos should not turn. A servo calibration process is required to ensure the center position does not produce any rotation: each servo has a set screw that can be adjusted with a small Phillips screwdriver until motor idle produces no spin.

TOMBOT I/O Library

This is a collection of different low-level library functions. Let’s deal with these by examining their files and describing each function set, starting with the timer (see Listing 3). It uses the Timer45 combination (a full 32 bits) as a precision timer for the behaviors. C define statements set the different time values. The routine is noninterrupt-driven at this time; it simply busy-waits until the timer expires and then returns.

Listing 3: Low-level library functions
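
A minimal sketch of such a blocking wait, assuming plib-style Timer45 calls and a hypothetical tick rate:

#include <plib.h>

#define TICKS_PER_MS 10000u   /* hypothetical: depends on PBCLK and prescaler */

void TIMER_INIT(void)
{
    /* free-running 32-bit timer from the Timer4/5 pair; the flag names are
       from the plib timer header and should be checked against your version */
    OpenTimer45(T4_ON | T4_PS_1_8, 0xFFFFFFFF);
}

void TIMER_WAIT_MS(unsigned int ms)
{
    WriteTimer45(0);                            /* restart the count   */
    while (ReadTimer45() < ms * TICKS_PER_MS)
        ;                                       /* busy-wait: blocking */
}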

The next I/O library function is the ADC. There are a total of five analog inputs, all defined here. Each sensor definition corresponds to an integer (a 32-bit number) designating the specific input channel to which that sensor is connected. The five are: right IR, left IR, battery, left photocell, and right photocell.

The initialization function sets up the ADC peripheral for the specified channel, and the read function performs a 10-bit ADC conversion and returns the result. To facilitate operation across the five sensors, we use the SCAN_SENSORS function, which initializes and converts each sensor in turn. The results are placed in global memory, where the behavior functions can access them. SCAN_SENSORS also maintains a running average of the last eight samples of the left and right photocells (see Listing 4).

Listing 4: SCAN_SENSORS also performs a running average of the last eight samples
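
A sketch of the averaging idea (the channel numbering and the adc_read helper are hypothetical):

enum { RIGHT_IR, LEFT_IR, BATTERY, LEFT_PHOTO, RIGHT_PHOTO, NUM_SENSORS };

extern unsigned int adc_read(int channel);  /* hypothetical: init + convert */

unsigned int reading[NUM_SENSORS];          /* latest 10-bit results       */
unsigned int photoAvgL, photoAvgR;          /* 8-sample running averages   */

void SCAN_SENSORS(void)
{
    static unsigned int histL[8], histR[8];
    static int idx;
    unsigned long sumL = 0, sumR = 0;
    int ch, i;

    for (ch = 0; ch < NUM_SENSORS; ch++)
        reading[ch] = adc_read(ch);         /* digitize each input in turn */

    histL[idx] = reading[LEFT_PHOTO];       /* store the newest samples    */
    histR[idx] = reading[RIGHT_PHOTO];
    idx = (idx + 1) & 7;                    /* circular index, 8 entries   */

    for (i = 0; i < 8; i++) { sumL += histL[i]; sumR += histR[i]; }
    photoAvgL = sumL >> 3;                  /* divide by 8 */
    photoAvgR = sumR >> 3;
}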

The next I/O library function is Graphics (see Listing 5). The TOMBOT uses a 102 × 64 monochrome graphics display module that has both red and green LED backlights, plus red and green LEDs on the module that are independently controlled. The module is driven by the PIC32 SPI2 interface and has several control lines: CS (chip select) and A0 (command/data).

Listing 5: The Graphics I/O library function

The graphics display relies on an 8 × 8 font, stored as a project file, for character generation. The library also includes cursor-position macros, functions to write characters or text strings, and functions to draw 32 × 32 bitmaps. The graphic primitives shown cover initialization, module control, and writing to the module. The library draws into a RAM Vmap memory area; the screen is then updated from this RAM area using the dumpVmap function. The LED and backlight controls are also included within the graphics library.
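
The Vmap scheme might be sketched as follows; the pin assignments and the page-address command byte (typical of small monochrome LCD controllers) are assumptions:

#include <plib.h>

static unsigned char vmap[8][102];     /* 102 x 64 pixels = 8 pages of bytes */

#define CS_LOW()  (LATGCLR = 1 << 9)   /* hypothetical chip-select pin       */
#define CS_HIGH() (LATGSET = 1 << 9)
#define A0_CMD()  (LATGCLR = 1 << 8)   /* A0 low = command byte              */
#define A0_DATA() (LATGSET = 1 << 8)   /* A0 high = display data             */

static void spi2_put(unsigned char b)
{
    SPI2BUF = b;                       /* start the transfer                 */
    while (!SPI2STATbits.SPIRBF)
        ;                              /* wait until it completes            */
    (void)SPI2BUF;                     /* discard the byte clocked back in   */
}

void dumpVmap(void)                    /* copy the RAM Vmap to the screen    */
{
    int page, col;
    for (page = 0; page < 8; page++) {
        CS_LOW();
        A0_CMD();
        spi2_put(0xB0 | page);         /* set page address (typical command) */
        A0_DATA();
        for (col = 0; col < 102; col++)
            spi2_put(vmap[page][col]); /* one byte = 8 vertical pixels       */
        CS_HIGH();
    }
}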

The next part of the I/O library is delay (see Listing 6). It is just a series of different software delays that can be used by other library functions. They were included only because of legacy use within the graphics library.

Listing 6: Series of different software delays

The next I/O library function is UART-XBee (see Listing 7). This is the serial driver that configures and transfers data through the XBee radio on the robot side. The library is fairly straightforward: an initialization function sets up UART1B for 9600-bps, 8N1 operation, transmit and receive.

Listing 7: XBee library functions

Transmission is done one character at a time. Reception is handled by an interrupt service routine, which retrieves the received character and sets a semaphore flag. On the PC side, I use a SparkFun XBee dongle configured through USB as a COM port and run HyperTerminal or an equivalent application. The XBee’s default settings are all that is required (see Photo 1).

Photo 1: XBee PC to TOMBOT communications
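
A sketch of that driver, written here against UART1 with plib-style calls (the vector name, interrupt priority, and pbClk parameter are assumptions to check against your device header):

#include <plib.h>
#include <sys/attribs.h>

volatile char rxChar;       /* last character received from the XBee      */
volatile int  rxFlag;       /* semaphore: set by the ISR, cleared by main */

void XBEE_INIT(unsigned int pbClk)
{
    UARTConfigure(UART1, UART_ENABLE_PINS_TX_RX_ONLY);
    UARTSetLineControl(UART1, UART_DATA_SIZE_8_BITS |
                              UART_PARITY_NONE | UART_STOP_BITS_1);
    UARTSetDataRate(UART1, pbClk, 9600);      /* 9600 bps, 8N1 */
    UARTEnable(UART1, UART_ENABLE_FLAGS(UART_PERIPHERAL | UART_RX | UART_TX));
}

void XBEE_PUTC(char c)                        /* one character at a time */
{
    while (!UARTTransmitterIsReady(UART1))
        ;
    UARTSendDataByte(UART1, c);
}

void __ISR(_UART1_VECTOR, ipl1) XBeeRxHandler(void)
{
    if (UARTReceivedDataIsAvailable(UART1)) {
        rxChar = UARTGetDataByte(UART1);      /* retrieve the character    */
        rxFlag = 1;                           /* signal the main loop      */
    }
    INTClearFlag(INT_SOURCE_UART_RX(UART1));  /* acknowledge the interrupt */
}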

The next I/O library function is the buzzer (see Listing 8). It uses a simple digital output (Port F bit 1) to control the buzzer. The functions initialize the buzzer control pin and then turn the buzzer on and off.

Listing 8: The functions initialize buzzer control
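
On Port F bit 1, that can be as small as the following register-level sketch:

#include <plib.h>   /* pulls in the PIC32 register definitions */

void BUZZER_INIT(void)
{
    TRISFCLR = 1 << 1;    /* make RF1 an output        */
    LATFCLR  = 1 << 1;    /* start with the buzzer off */
}

void BUZZER_ON(void)  { LATFSET = 1 << 1; }   /* drive RF1 high */
void BUZZER_OFF(void) { LATFCLR = 1 << 1; }   /* drive RF1 low  */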

TOMBOT Behavior Library
The Behavior library is the heart of the autonomous TOMBOT; this is where integrated behavior happens. All of the behaviors require the left and right servos for autonomous maneuverability. Each behavior is a finite state machine that interacts with the environment (every 0.1 s), and each has a designated priority relative to wheel operation. These priorities are resolved by the arbiter for final wheel activation. Listing 9 shows the API for the entire Behavior library.

Listing 9: The API for the entire behavior library

Let’s briefly cover the specifics.

  • “Cruise” just keeps the robot in motion in the absence of any stimulus.
  • “Escape” uses the bumper to sense a collision and then executes a 180° spin with reverse.
  • “Avoid” uses the continuous forward-looking IR sensors to veer left or right upon approaching a close obstacle.
  • “Home” uses the front optical photocells to guide the robot toward a strong, highly directional light source.
  • “Remote operation” allows the TOMBOT to respond to the PC via XBee communications to enter/exit autonomous mode, report status, or execute a predetermined motion scenario (i.e., spin X times, run back and forth X times, etc.).
  • “Dump” is an internal function used within Remote.
  • “Arbiter” is an internal function, an intrinsic part of the behavior library, that resolves the different behavior priorities for wheel activation.

Here’s an example of the main function invoking different behaviors through the API (see Listing 10, with a sketch below). Note that this is part of a main loop. Behaviors can be called within a main loop or “stacked up”; you can remove or stack up behaviors as you choose (simply comment out what you don’t need and recompile). Keep in mind that Remote is the way for a remote operator to control operation or view status.

Listing 10: TOMBOT API Example
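
A sketch of such a loop, with the behavior names taken from the text but their exact signatures assumed:

extern void SYSTEM_INIT(void);   /* hypothetical: clocks, motors, I/O init */
extern void REMOTE(void), ESCAPE(void), AVOID(void),
            HOME(void), CRUISE(void), ARBITER(void);
extern int  autonomous;          /* toggled by the Remote behavior         */

int main(void)
{
    SYSTEM_INIT();

    while (1) {                  /* one pass per 100-ms behavior tick      */
        REMOTE();                /* XBee commands; may toggle autonomous   */
        if (autonomous) {
            ESCAPE();            /* highest priority: bumper collisions    */
            AVOID();             /* IR obstacle closer than 8 inches       */
            HOME();              /* steer toward a strong light source     */
            CRUISE();            /* default: keep moving forward           */
            ARBITER();           /* resolve requests, drive the wheels     */
        }
    }
    return 0;
}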

Let’s now examine the detailed state machine associated with each behavior to gain a better understanding of how it operates (see Listing 11).

Listing 11: The TOMBOT’s arbiter

The arbiter is simple for the TOMBOT: it is a fixed-priority arbiter. If either Escape or Avoid is active, it abdicates to those behaviors and lets them resolve motor control internally. Home and Cruise motor-control requests are handled directly by the arbiter (see Listing 12).
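
In sketch form (the flag and helper names here are assumptions):

extern int  escapeActive, avoidActive;   /* set by those state machines */
extern int  homeRequest, cruiseRequest;  /* wheel requests to arbitrate */
extern void FORWARD(void), HOME_STEER(void);

void ARBITER(void)
{
    if (escapeActive || avoidActive)
        return;                 /* Escape/Avoid drive the wheels themselves */

    if (homeRequest)
        HOME_STEER();           /* hypothetical: apply Home's steering      */
    else if (cruiseRequest)
        FORWARD();              /* Cruise just asks for forward motion      */
}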

Listing 12: Home behavior

Home is still being debugged and is not yet final. The goal is for Home to steer the robot toward a strong light source whenever the robot is not engaged in a higher-priority behavior.

The Cruise behavior sets the motors to forward operation for one second if no higher-priority behavior is active (see Listing 13).

Listing 13: Cruise behavior

The Escape behavior tests the bumper switch state to determine whether a bump has been detected (see Listing 14). Once one is detected, it runs through a series of states: an immediate backup, then a turnaround, then a move away from the obstacle.

Listing 14: Escape behavior
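
A sketch of such a state machine, advanced once per 100-ms tick (the state names and durations are assumptions):

enum { ESC_IDLE, ESC_BACKUP, ESC_SPIN, ESC_RUN };

extern volatile int bumpL, bumpR;                 /* debounced bumpers   */
extern void FORWARD(void), BACKWARD(void), SPIN_CW(void);

int escapeActive;                                 /* read by the arbiter */

void ESCAPE(void)                                 /* call every 100 ms   */
{
    static int state = ESC_IDLE, ticks;

    switch (state) {
    case ESC_IDLE:                                /* watch for a collision */
        if (bumpL || bumpR) { state = ESC_BACKUP; ticks = 0; escapeActive = 1; }
        break;
    case ESC_BACKUP:                              /* immediate backup      */
        BACKWARD();
        if (++ticks >= 10) { state = ESC_SPIN; ticks = 0; }  /* ~1 s */
        break;
    case ESC_SPIN:                                /* roughly 180-degree turn */
        SPIN_CW();
        if (++ticks >= 10) { state = ESC_RUN; ticks = 0; }
        break;
    case ESC_RUN:                                 /* move off, then rest  */
        FORWARD();
        if (++ticks >= 10) { state = ESC_IDLE; escapeActive = 0; }
        break;
    }
}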

The Dump function is a response to the remote “C” (capture) command. It formats the left and right IR readings, the left and right photocell readings, and the battery level in floating-point format and dumps them to the graphics display (see Listing 15).

Listing 15: The dump function

The Avoid behavior uses the IR sensors to determine whether an object is within 8″ of the front of the TOMBOT (see Listing 16).

Listing 16: Avoid behavior

If both sensors detect a target within 8″, the robot simply turns around and moves away (much like Escape). If only the right sensor detects an object in range, the robot spins away from the right side; if only the left sensor does, it spins away from the left side (see Listing 17).
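
The decision logic might look like this sketch; IR_NEAR is a hypothetical 10-bit threshold corresponding to roughly 8″:

#define IR_NEAR 512                      /* hypothetical ADC threshold (~8") */

extern unsigned int reading[];           /* filled by SCAN_SENSORS           */
extern void BACKWARD(void), SPIN_CW(void), SPIN_CCW(void);

int avoidActive;                         /* read by the arbiter              */

void AVOID(void)                         /* call every 100 ms                */
{
    int left  = reading[1] > IR_NEAR;    /* LEFT_IR channel                  */
    int right = reading[0] > IR_NEAR;    /* RIGHT_IR channel                 */

    avoidActive = left || right;
    if (left && right)
        BACKWARD();                      /* blocked ahead: back out          */
    else if (right)
        SPIN_CCW();                      /* object on the right: veer left   */
    else if (left)
        SPIN_CW();                       /* object on the left: veer right   */
}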

Listing 17: Remote part 1

The Remote behavior is fairly comprehensive (see Listing 18). There are 14 different cases, each driven by a different character received over the XBee radio. Once a character is received, the red LED is turned on. Once the behavior is complete, the red LED is turned off and the buzzer is sounded.

Listing 18: Remote part 2

The first case toggles autonomous mode on and off. The other 13 are prescribed actions. Seven of the 13 were written to demonstrate the TOMBOT’s mobile agility with multiple spins and back-and-forth runs. The final six are standard single-step debug commands, such as stop, backward, and capture; capture dumps all sensor output to the display screen (see Table 1).

Table 1: TOMBOT remote commands
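
The dispatcher reduces to a switch on the received character, sketched below; apart from “C” (capture), the command letters are placeholders:

extern volatile char rxChar;     /* set by the XBee receive ISR          */
extern volatile int  rxFlag;
extern int  autonomous;
extern void DUMP(void), STOP_MOTORS(void),
            RED_LED(int on), BUZZER_ON(void), BUZZER_OFF(void);

void REMOTE(void)
{
    if (!rxFlag)
        return;                  /* nothing received this tick           */
    rxFlag = 0;
    RED_LED(1);                  /* command accepted and executing       */

    switch (rxChar) {
    case 'A': autonomous = !autonomous; break;  /* toggle autonomous mode */
    case 'C': DUMP(); break;     /* capture: dump sensors to the display */
    case 'S': STOP_MOTORS(); break;
    /* ...the remaining 11 cases: spins, back-and-forth runs, backward,
       and other single-step debug actions */
    default: break;
    }

    RED_LED(0);                  /* done: LED off, buzzer announces      */
    BUZZER_ON();                 /* (a short beep; timing omitted here)  */
    BUZZER_OFF();
}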

Early Findings & Implementation
Implementation always presents choices. In my particular case, I was interested in rapid development, so I chose noninterrupt code with a linear flow for easy debugging. This amounts to “blocking code.” Blocking code is used throughout the behavior implementation and makes the robot nonresponsive while the blocking occurs; all blocking happens in the timeout functions, during which the robot is “blind” to outside environmental conditions. Using a real-time operating system (e.g., FreeRTOS) to eliminate this problem is recommended.

The TOMBOT also uses photocells for homing. These sensitive devices have varying responses and need to be calibrated to ensure correct behavior. A photocell calibration function is included in the baseline and is used prior to operation.

TOMBOT Demo

The TOMBOT was successfully demoed to a large first-grade class in southern California as part of a Science, Technology, Engineering, and Mathematics (STEM) program. The main behaviors were limited to Remote, Avoid, and Escape. With autonomous operation off, the robot demonstrated mobility and maneuverability. With autonomous operation on, the robot could interact with a student to demonstrate the Avoid and Escape behaviors.

Tom Kibalo holds a BSEE from City College of New York and an MSEE from the University of Maryland. He has 39 years of engineering experience with a number of companies in the Washington, DC, area. Tom is an adjunct EE faculty member at a local community college, and he is president of Kibacorp, a Microchip Design Partner.