A Coding Interface for an Evaluation Tool

John Peck, a test engineer at Knowles Electronics in Itasca, IL, has used ASCII interfaces to test equipment since he was a graduate student.

“I love test equipment with open, well-documented, ASCII command sets,” he says. “The plain text commands give a complicated instrument a familiar interface and an easy way to automate measurements.”

So when Peck needed to automate the process of reading his ultrasonic range finder’s voltage output, he wanted an ASCII interface to a voltmeter. He also wanted the meter to convert volts into distance, so he added an Atmel AVR Butterfly microcontroller into the mix (see Photo 1). “I thought it would be easy to give it a plain text interface to a PC,” he says.

Atmel AVR Butterfly

The project became a bit more complex than he expected. But ultimately, Peck says, he came up with “a simple command interface that’s easy to customize and extend. It’s not at the level of a commercial instrument, but it works well for sending a few commands and getting some data back.”

If you would like to learn more about how to send commands from a PC to the AVR Butterfly and the basics of using the credit card-sized, single-board microcontroller to recognize, parse, and execute remote commands, read Peck’s article about his project in Circuit Cellar’s May issue.

In the italicized excerpts below, he describes his hardware connections to the board and the process of receiving remote characters (the first step in receiving remote commands). Other topics you’ll find in the full article include setting up a logging system to understand how commands are being processed, configuring the logger (which is the gatekeeper for messages from other subsystems), recognizing and adding commands to extend the system, and sending calibration values.

Peck programmed his system so that it has room to grow and can accommodate his future plans:

“I built the code with AVR-GCC, using the -Os optimization level. The output of avr-gcc --version is avr-gcc (Gentoo 4.6.3 p1.3, pie-0.5.1) 4.6.3.

“The resulting memory map file shows a 306-byte .data size, a 49-byte .bss size, and a 7.8-KB .text size. I used roughly half of the AVR Butterfly’s flash memory and about a third of its RAM. So there’s at least some space left to do more than just recognizing commands and calibrating voltages.”

“I’d like to work on extending the system to handle more types of arguments (e.g., signed integers and floats). And I’d like to port the system to a different part, one with more than one USART. Then I could have a dedicated logging port and log messages wouldn’t get in the way of other communication. Making well-documented interfaces to my designs would help me with my long-term goal of making them more modular.”

Figure 1: These are the connections needed for Atmel’s AVR Butterfly. Atmel’s AVRISP mkII user’s guide stresses that the programmer must be connected to the PC before the target (AVR Butterfly board).

WORKING WITH THE AVR BUTTERFLY
The AVR Butterfly board includes an Atmel ATmega169 microcontroller and some peripherals. Figure 1 shows the connections I made to it. I only used three wires from the DB9 connector for serial communication with the PC. There isn’t any hardware handshaking. While I could also use this serial channel for programming, I find that using a dedicated programmer makes iterating my code much faster.

A six-pin header soldered to the J403 position enabled me to use Atmel’s AVRISP mkII programmer. Finally, powering the board with an external supply at J401 meant I wouldn’t have to think about the AVR Butterfly’s button cell battery. However, I did need to worry about the minimum power-on reset slope rate. The microcontroller won’t reset at power-on unless the power supply can ramp from about 1 to 3 V at more than 0.1 V/ms. I had to reduce a filter capacitor in my power supply circuit to increase its power-on ramp rate. With that settled, the microcontroller started executing code when I turned on the power supply.

After the hardware was connected, I used the AVR downloader uploader (AVRDUDE) and GNU Make to automate building the code and programming the AVR Butterfly’s flash memory. I modified a makefile template from the WinAVR project to specify my part, programmer, and source files. The template file’s comments helped me understand how to customize the template and comprehend the general build process. Finally, I used Gentoo Linux’s cross-development package to install the AVR GNU Compiler Collection (AVR-GCC) and other cross-compilation tools. I could have added these last pieces “by hand,” but Gentoo conveniently updates the toolchain as new versions are released.

Figure 2: This is the program flow for processing characters received over the Atmel AVR Butterfly’s USART. Sending a command terminator (carriage return) will always result in an empty Receive buffer. This is a good way to ensure there’s no garbage in the buffer before writing to it.

HANDLING INCOMING CHARACTERS
To receive remote commands, you begin by receiving characters, which are sent to the AVR Butterfly via the USART connector shown in Figure 1. Reception of these characters triggers an interrupt service routine (ISR), which handles them according to the flow shown in Figure 2. The first step in this flow is loading the characters into the Receive buffer.

Figure 3: The received character buffer and pointers used to fill it are shown. There is no limit to the size of commands and their arguments, as long as the entire combined string and terminator fit inside the RECEIVE_BUFFER_SIZE.

Figure 3 illustrates the Receive buffer loaded with a combined string. The buffer is accessed with a pointer to its beginning and another pointer to the next index to be written. These pointers are members of the recv_cmd_state_t-type variable recv_cmd_state.

This is just a matter of style; I like to organize a flow’s variables by making them members of their own structure. Naming conventions aside, it’s important to notice that no limitations are imposed on the command or argument size in this first step, provided the total character count stays below the RECEIVE_BUFFER_SIZE limit.
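Peck’s article excerpted here doesn’t print the structure itself, but a minimal sketch of what it might look like follows. Only recv_cmd_state_t, recv_cmd_state, and RECEIVE_BUFFER_SIZE are named in the text; the field names, buffer size, and init routine are illustrative guesses.

```c
#define RECEIVE_BUFFER_SIZE 64  /* illustrative size; the article doesn't give one */

/* Grouping the receive flow's variables in one structure, per Peck's style */
typedef struct {
    char receive_buffer[RECEIVE_BUFFER_SIZE]; /* raw characters from the USART */
    char *rbuffer_begin;      /* points to the beginning of the buffer */
    char *rbuffer_write_ptr;  /* next position to be written */
} recv_cmd_state_t;

recv_cmd_state_t recv_cmd_state;

void recv_cmd_init(void)
{
    recv_cmd_state.rbuffer_begin     = recv_cmd_state.receive_buffer;
    recv_cmd_state.rbuffer_write_ptr = recv_cmd_state.receive_buffer;
}
```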

When a combined string in the Receive buffer is finished with a carriage return, the string is copied over to a second buffer. I call this the “Parse buffer,” since this is where the string will be searched for recognized commands and arguments. This buffer is locked until its contents can be processed, to keep it from being overwritten by new combined strings.

Sending commands faster than they can be processed will generate an error, and combined strings sent to a locked Parse buffer will be dropped. The maximum command-processing frequency will depend on the system clock and other system tasks. Not having larger parse or receive buffers is a limitation that places this project at the hobby level. Extending these buffers to hold more than one command would make the system more robust.
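To make the Figure 2 flow concrete, here is a hedged AVR-GCC sketch of the receive ISR just described, building on the structure above. It assumes the ATmega169’s USART0 vector and register names from avr-libc, reduces the Parse-buffer lock to a simple flag, and is not Peck’s actual code.

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <string.h>

char parse_buffer[RECEIVE_BUFFER_SIZE];    /* combined string awaiting parsing */
volatile uint8_t parse_buffer_locked = 0;  /* set until the main loop has parsed it */

ISR(USART0_RX_vect)
{
    char c = UDR0;  /* reading UDR0 also clears the receive-complete flag */

    if (c == '\r') {
        /* Command terminator: hand the combined string to the Parse buffer */
        *recv_cmd_state.rbuffer_write_ptr = '\0';
        if (!parse_buffer_locked) {
            strcpy(parse_buffer, recv_cmd_state.rbuffer_begin);
            parse_buffer_locked = 1;   /* locked until processed */
        }
        /* else: Parse buffer busy -- drop the command and flag an error */

        /* A terminator always leaves the Receive buffer empty */
        recv_cmd_state.rbuffer_write_ptr = recv_cmd_state.rbuffer_begin;
    } else if ((recv_cmd_state.rbuffer_write_ptr - recv_cmd_state.rbuffer_begin)
               < (RECEIVE_BUFFER_SIZE - 1)) {
        *recv_cmd_state.rbuffer_write_ptr++ = c;  /* load the character */
    }
    /* Characters beyond the buffer limit are discarded here; Figure 2
     * shows an error path for that case as well. */
}
```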

Editor’s Note: If you are interested in other projects utilizing the AVR Butterfly, check out the Talk Zombie, which won “Distinctive Excellence” in the AVR Design Contest 2006 sponsored by Atmel and administered by Circuit Cellar. The ATmega169-based multilingual talking device relates ambient temperature and current time in the form of speech (English, Dutch, or French).

Places for the IoT Inside Your Home

It’s estimated that by the year 2020, more than 30 billion devices worldwide will be wirelessly connected to the IoT. While the IoT has massive implications for government and industry, individual electronics DIYers have long recognized how projects that enable wireless communication between everyday devices can solve or avert big problems for homeowners.

Our February issue focusing on Wireless Communications features two such projects, including Raul Alvarez Torrico’s Home Energy Gateway, which enables users to remotely monitor energy consumption and control household devices (e.g., lights and appliances).

A Digilent chipKIT Max32-based embedded gateway/web server communicates with a single smart power meter and several smart plugs in a home area wireless network. “The user sees a web interface containing the controls to turn on/off the smart plugs and sees the monitored power consumption data that comes from the smart meter in real time,” Torrico says.

While energy use is one common priority for homeowners, another is protecting property from hidden dangers such as undetected water leaks. Devlin Gualtieri wanted a water alarm system that could integrate several wireless units signaling a single receiver. But he didn’t want to buy one designed to work with expensive home alarm systems charging monthly fees.

In this issue, Gualtieri writes about his wireless water alarm network, which has simple hardware including a Microchip Technology PIC12F675 microcontroller and water conductance sensors (i.e., interdigital electrodes) made out of copper wire wrapped around perforated board.

It’s an inexpensive and efficient approach that can be expanded. “Multiple interdigital sensors can be wired in parallel at a single alarm,” Gualtieri says. A single alarm unit can monitor multiple water sources (e.g., a hot water tank, a clothes washer, and a home heating system boiler).

Also in this issue, columnist George Novacek begins a series on wireless data links. His first article addresses the basic principles of radio communications that can be used in control systems.

Other issue highlights include advice on extending flash memory life; using C language in FPGA design; detecting capacitor dielectric absorption; a Georgia Tech researcher’s essay on the future of inkjet-printed circuitry; and an overview of the hackerspaces and enterprising designs represented at the World Maker Faire in New York.

Editor’s Note: Circuit Cellar’s February issue will be available online in mid-to-late January for download by members or single-issue purchase by web shop visitors.

Client Profile: Lauterbach, Inc.

1111 Main Street #115
Vancouver, WA 98660

Contact:
sales_us@lauterbach.com

Featured Product: The TRACE32-ICD in-circuit debugger supports a range of on-chip debug interfaces. The debugger’s hardware is universal and enables you to connect to different target processors by simply changing the debug cable. The PowerDebug USB 3.0 can be upgraded to a logic analyzer with the PowerProbe or the PowerIntegrator.

Product Features: The TRACE32-ICD JTAG debugger has a 5,000-KBps download rate. It features easy high-level and assembler debugging and an interface to all industry-standard compilers. The debugger enables fast download of code to target, OS-aware debugging, and flash programming. It displays internal and external peripherals at a logical level and includes support for hardware breakpoints and triggers (if supported by the chip), multicore debugging (SMP and AMP), C and C++, and all common NOR and NAND flash devices.

For more information, visit www.lauterbach.com/bdmusb3.html.

Turn Your Android Device into an Application Tool

A few years ago, the Android Open Accessory initiative was announced with the aim of making it easier for hardware manufacturers to create accessories that work with every Android device. Future Technology Devices International (FTDI) joined the initiative and last year introduced the FT311D multi-interface Android host IC. The goal was to enable engineers and designers to make effective use of tablets and smartphones with the Android OS, according to Circuit Cellar columnist Jeff Bachiochi.

The FT311D “provides an instant bridge from an Android USB port (B) to peripheral hardware over general-purpose input/output (GPIO), UART, PWM, I2C Master, SPI Slave, or SPI Master interfaces,” Bachiochi says.

In the magazine’s December issue, Bachiochi takes a comprehensive look at the USB Android host IC and how it works. By the end of his article, readers will have learned quite a bit about how to use FTDI’s apps and the FT311D chip to turn an Android device into their own I/O tool.

Bachiochi used the SPI Master demo to read key presses and set LED states on this SPI slave 16-key touch panel.

Here is how Bachiochi describes the FT311D and its advantages:

The FT311D is a full-speed USB host targeted at providing access to peripheral hardware from a USB port on an Android device. While an Android device can be a USB host, many are mobile devices with limited power. For now, these On-The-Go (OTG) ports will be USB devices only (i.e., they can only connect to a USB host as a USB device).

Since the USB host is responsible for supplying power to a USB peripheral device, it would be bad design practice to enable a USB peripheral to drain an Android mobile device’s energy. Consequently, the FT311D takes on the task of USB host, eliminating any draw on the Android device’s battery.

All Android devices from V3.1 (Honeycomb) support the Android Open Accessory Mode (AOAM). The AOAM is the complete reverse of the conventional USB interconnect. This game-changing approach to attaching peripherals enables three key advantages. First, there is no need to develop special drivers for the hardware; second, it is unnecessary to root devices to alter permissions for loading drivers; and third, the peripheral provides the power to use the port, which ensures the mobile device battery is not quickly drained by the external hardware being attached.

Since the FT311D handles the entire USB host protocol, USB-specific firmware programming isn’t required. As the host, the FT311D must inquire whether the connected device supports the AOAM. If so, it will operate as an Open Accessory Mode device with one USB BULK IN endpoint and one USB BULK OUT endpoint (as well as the control endpoint). This is a full-speed (12-Mbps) USB interface enabling data transfer in and out.

The AOAM USB host has a set of string descriptors the Android OS is capable of reading. These strings are associated (by the user) with an Android OS application. The Android OS then uses these strings to automatically start the application when the hardware is connected. The FT311D is configured for one of its multiple interfaces via configuration inputs at power-up. Each configuration supplies the Android device with a unique set of string descriptors, enabling different applications to run depending on the setup.

The FT311D’s configuration determines which of several configuration-specific user interface APIs each application will have access to.
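The FT311D performs this entire accessory handshake in silicon, but it helps to see what the negotiation looks like on the wire. The following libusb-1.0 sketch (ours, not FTDI’s firmware) performs the same Open Accessory handshake from a PC acting as the USB host. The vendor request numbers (51 to 53) and string indices come from Google’s Open Accessory protocol; the device-discovery setup is omitted, and the string values are hypothetical.

```c
#include <libusb-1.0/libusb.h>
#include <stdint.h>
#include <string.h>

static int send_aoa_string(libusb_device_handle *h, uint16_t index, const char *s)
{
    /* Request 52 = SEND_STRING; wIndex selects the string (0=manufacturer,
     * 1=model, 2=description, 3=version, 4=URI, 5=serial number). */
    return libusb_control_transfer(h, 0x40, 52, 0, index,
                                   (unsigned char *)s, strlen(s) + 1, 1000);
}

int start_accessory_mode(libusb_device_handle *h)
{
    unsigned char ver[2];

    /* Request 51 = GET_PROTOCOL: does the device support the AOAM at all? */
    if (libusb_control_transfer(h, 0xC0, 51, 0, 0, ver, sizeof(ver), 1000) < 0)
        return -1;

    /* The Android OS matches these strings to an installed application and
     * starts it automatically, as described above (names are made up). */
    send_aoa_string(h, 0, "Circuit Cellar");     /* manufacturer */
    send_aoa_string(h, 1, "DemoInterface");      /* model        */
    send_aoa_string(h, 2, "FT311D-style demo");  /* description  */
    send_aoa_string(h, 3, "1.0");                /* version      */
    send_aoa_string(h, 4, "http://example.com"); /* URI          */
    send_aoa_string(h, 5, "0001");               /* serial       */

    /* Request 53 = START: the device re-enumerates in accessory mode with
     * one BULK IN and one BULK OUT endpoint (plus the control endpoint). */
    return libusb_control_transfer(h, 0x40, 53, 0, 0, NULL, 0, 1000);
}
```

After the START request, the Android device re-enumerates in accessory mode, and the strings sent above determine which application the OS launches.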

The article goes on to examine the various interfaces in detail and to describe a number of demo projects, including a multimeter.

Many of Bachiochi’s projects use printable ASCII text commands and replies. This enables a serial terminal to become a handy user I/O device. This current probe circuit outputs its measurements in ASCII-printable text.

Multimeters are great tools. They have portability that enables them to be brought to wherever a measurement must be made. An Android device has this same ability. Since applications can be written for these devices, they make a great portable application tool. Until the AOAM’s release, there was no way for these devices to be connected to any external circuitry and used as an effective tool.

I think FTDI has bridged this gap nicely. It provided a great interface chip that can be added to any circuit that will enable an Android device to serve as an effective user I/O device. I’ve used the chip to quickly interface with some technology to discover its potential or just test its abilities. But I’m sure you are already thinking about the other potential uses for this connection.

Bachiochi is curious to hear from readers about their own ideas.

If you think the AOAM has future potential, but you want to know what’s involved with writing Android applications for a specific purpose, send me an e-mail and I’ll add this to my list of future projects!

You can e-mail Bachiochi at jeff.bachiochi@imaginethatnow.com or post your comment here.

 

Small, Self-Contained GNSS Receiver

TM Series GNSS modules are self-contained, high-performance global navigation satellite system (GNSS) receivers designed for navigation, asset tracking, and positioning applications. Based on the MediaTek chipset, the receivers can simultaneously acquire and track several satellite constellations, including the US GPS, Europe’s Galileo, Russia’s GLONASS, and Japan’s QZSS.

The 10-mm × 10-mm receivers are capable of better than 2.5-m position accuracy. Hybrid ephemeris prediction can be used to achieve cold start times of less than 15 s. The receiver can operate down to 3 V and draws a low 20-mA tracking current. To save power, the TM Series GNSS modules have built-in receiver duty cycling that can be configured to turn the receiver off periodically. This feature, combined with the module’s low power consumption, helps maximize battery life in battery-powered systems.

The receiver modules are easy to integrate, since they don’t require software setup or configuration to power up and output position data. The TM Series GNSS receivers use a standard UART serial interface to send and receive NMEA messages in ASCII format. A serial command set can be used to configure optional features. The modules’ UART can be connected directly to a microcontroller or, through a USB or RS-232 converter chip, to a PC.
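Because the interface is plain ASCII NMEA, very little host code is needed to start using the position data. As a generic illustration (not Linx-supplied code), this C routine validates an NMEA sentence before parsing it; the checksum is simply the XOR of every character between the “$” and the “*”:

```c
#include <stdio.h>

/* Validate an NMEA 0183 sentence such as the commonly cited example
 * "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47".
 * The checksum is the XOR of all characters between '$' and '*',
 * given as two hex digits after the '*'.  Returns 1 if valid. */
int nmea_checksum_ok(const char *s)
{
    unsigned char sum = 0;
    unsigned int given;

    if (*s++ != '$')
        return 0;
    while (*s && *s != '*')
        sum ^= (unsigned char)*s++;     /* running XOR over the payload */
    if (*s != '*' || sscanf(s + 1, "%2x", &given) != 1)
        return 0;
    return sum == (unsigned char)given; /* compare with transmitted value */
}
```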

The GPS Master Development System connects a TM Series Evaluation Module to a prototyping board with a color display that shows coordinates, a speedometer, and a compass for mobile evaluation. A USB interface enables simple viewing of satellite data and Internet mapping, as well as custom software application development.
Contact Linx Technologies for pricing.

Linx Technologies
www.linxtechnologies.com

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Leveraging the common I2C protocol, adding chains of these sensors would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize the user interface. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in Computer Engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve, and as the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

I/O Raspberry Pi Expansion Card

The RIO is an I/O expansion card intended for use with the Raspberry Pi SBC. The card stacks on top of a Raspberry Pi to create a powerful embedded control and navigation computer in a small 20-mm × 65-mm × 85-mm footprint. The RIO is well suited for applications requiring real-world interfacing, such as robotics, industrial and home automation, and data acquisition and control.

The RIO adds 13 inputs that can be configured as digital inputs, 0-to-5-V analog inputs with 12-bit resolution, or pulse inputs capable of pulse width, duty cycle, or frequency capture. Eight digital outputs are provided to drive loads up to 1 A each at up to 24 V.

The RIO includes a 32-bit ARM Cortex-M4 microcontroller that processes and buffers the I/O and creates seamless communication with the Raspberry Pi. The RIO processor can be user-programmed with a simple BASIC-like programming language, enabling it to perform logic, conditioning, and other I/O processing in real time. On the Linux side, the RIO comes with drivers and a function library to quickly configure and access the I/O and to exchange data with the Raspberry Pi.

The RIO features several communication interfaces, including an RS-232 serial port to connect to standard serial devices, a TTL serial port to connect to Arduino and other microcontrollers that aren’t equipped with an RS-232 transceiver, and a CAN bus interface.

The RIO is available in two versions: the RIO-BASIC costs $85 and the RIO-AHRS costs $175.

Roboteq, Inc.
www.roboteq.com

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear Smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK or PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes 3-D position, velocity, and hand acceleration as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
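To make the shape-descriptor step concrete, here is a simplified sketch (ours, not the MIT group’s code) of the core of a histogram of oriented gradients over a depth patch: each pixel votes for an orientation bin, weighted by its gradient magnitude. A full HOG implementation would additionally tile the patch into cells and normalize over blocks.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define HOG_BINS 9  /* orientation bins over 0..180 degrees (unsigned gradients) */

/* Compute one orientation histogram for a w x h depth patch. */
void hog_cell(const float *depth, int w, int h, float hist[HOG_BINS])
{
    for (int b = 0; b < HOG_BINS; b++)
        hist[b] = 0.0f;

    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            /* central-difference gradient */
            float gx = depth[y * w + (x + 1)] - depth[y * w + (x - 1)];
            float gy = depth[(y + 1) * w + x] - depth[(y - 1) * w + x];
            float mag = sqrtf(gx * gx + gy * gy);
            float ang = atan2f(gy, gx);          /* -pi..pi */
            if (ang < 0.0f)
                ang += (float)M_PI;              /* fold to 0..pi (unsigned) */
            int bin = (int)(ang / (float)M_PI * HOG_BINS);
            if (bin == HOG_BINS)
                bin = HOG_BINS - 1;
            hist[bin] += mag;                    /* magnitude-weighted vote */
        }
    }
}
```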

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
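As an illustrative sketch of the recognition step (not the authors’ code): each gesture class gets its own trained HMM, and classification picks the model with the highest likelihood of producing the observed feature sequence, computed here with the scaled forward algorithm. The state count and the emission-probability callback are assumptions.

```c
#include <math.h>

#define N_STATES 5  /* hidden states per gesture model (illustrative) */

typedef struct {
    double pi[N_STATES];           /* initial state probabilities    */
    double a[N_STATES][N_STATES];  /* state transition probabilities */
    /* Emission probability of feature vector obs in a given state;
     * for continuous features this is typically a Gaussian mixture. */
    double (*b)(int state, const double *obs);
} hmm_t;

/* Forward algorithm with per-step scaling to avoid underflow.
 * obs[t] is the feature vector at time t (e.g., position, velocity,
 * acceleration, plus the hand-shape descriptor). */
double hmm_log_likelihood(const hmm_t *m, const double *obs[], int T)
{
    double alpha[N_STATES], next[N_STATES];
    double loglik = 0.0;

    for (int t = 0; t < T; t++) {
        double scale = 0.0;
        for (int j = 0; j < N_STATES; j++) {
            double s;
            if (t == 0) {
                s = m->pi[j];
            } else {
                s = 0.0;
                for (int i = 0; i < N_STATES; i++)
                    s += alpha[i] * m->a[i][j];
            }
            next[j] = s * m->b(j, obs[t]);
            scale += next[j];
        }
        for (int j = 0; j < N_STATES; j++)
            alpha[j] = next[j] / scale;  /* rescale to sum to 1 */
        loglik += log(scale);            /* accumulate log-likelihood */
    }
    return loglik;
}

/* Recognition: argmax over the per-gesture models. */
int classify(const hmm_t *models, int n_models, const double *obs[], int T)
{
    int best = 0;
    double best_ll = hmm_log_likelihood(&models[0], obs, T);
    for (int k = 1; k < n_models; k++) {
        double ll = hmm_log_likelihood(&models[k], obs, T);
        if (ll > best_ll) { best_ll = ll; best = k; }
    }
    return best;
}
```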

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. The instruments can generate 5-to-50-MHz waveforms for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5-MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com

Data Acquisition Instrument

The DI-145 USB data acquisition instrument features four ±10-V analog channels and two dedicated digital inputs. The included DATAQ WinDaq data acquisition software (DAS) enables you to display and record data to a PC hard drive in real time. Once recorded, data can be played back, analyzed, or exported to an array of data acquisition and spreadsheet formats.

DATAQ also provides access to the DI-145 data protocol, which enables access to the DI-145 from any Windows, Linux, or Mac OS. In addition, a .NET control is available to Windows users who wish to use a third-party programming language (e.g., Microsoft’s Visual Basic or National Instruments’ LabVIEW) to interface with the DI-145.

The four ±10-V fixed differential channels are protected from transient spikes up to ±150 V peak (±75 V, continuous). A 10-bit ADC provides 19.5-mV resolution across the full-scale measurement range (the 20-V span divided by 1,024 counts). Digital inputs are protected up to ±30 VDC/peak AC. The digital inputs enable you to use a switch closure or TTL signal to remotely insert event marks or record data to disk.

The DI-145 measures 1.53” × 2.625” × 5.5” (3.89 cm × 6.67 cm × 13.97 cm) and weighs 3.6 oz. The data acquisition instrument costs $29 and includes a mini screwdriver, a USB cable, WinDaq/Lite DAS, access to the data protocol, and .NET control.

DATAQ Instruments, Inc.
www.dataq.com

DIY Solar-Powered, Gas-Detecting Mobile Robot

German engineer Jens Altenburg’s solar-powered hidden observing vehicle system (SOPHOCLES) is an innovative gas-detecting mobile robot. When the Texas Instruments MSP430-based mobile robot detects noxious gas, it transmits a notification alert to a PC, Altenburg explains in his article, “SOPHOCLES: A Solar-Powered MSP430 Robot.” The MCU controls an on-board CMOS camera and can wirelessly transmit images to the “Robot Control Center” user interface.

Take a look at the complete SOPHOCLES design. The CMOS camera is located on top of the robot. The radio modem is hidden behind the camera, so only the antenna is visible. A flexible cable connects the camera to the MSP430 microcontroller.

Altenburg writes:

The MSP430 microcontroller controls SOPHOCLES. Why did I need an MSP430? There are lots of other micros, some of which have more power than the MSP430, but the word “power” shows you the right way. SOPHOCLES is the first robot (with the exception of space robots like Sojourner and Lunokhod) that I know of that’s powered by a single lithium battery and a solar cell for long missions.

The SOPHOCLES includes a transceiver, sensors, power supply, motor drivers, and an MSP430. Some block functions (i.e., the motor driver or radio modems) are represented by software modules.

How is this possible? The magic mantra is, “Save power, save power, save power.” In this case, the most important feature of the MSP430 is its low power consumption. It needs less than 1 mA in Operating mode and even less in Sleep mode because the main function of the robot is sleeping (my main function, too). From time to time the robot wakes up, checks the sensor, takes pictures of its surroundings, and then falls back to sleep. Nice job, not only for robots, I think.

The power for the active time comes from the solar cell. High-efficiency cells provide electric energy for a minimum of approximately two minutes of active time per hour. Good lighting conditions (e.g., direct sunlight or a light beam from a lamp) activate the robot permanently. The robot needs only about 25 mA for actions such as driving its wheels, communicating via radio, or taking pictures with its built-in camera. Isn’t that impossible? No! …

The robot has two power sources. One source is a 3-V lithium battery with a 600-mAh capacity. The battery supplies the CPU in Sleep mode, during which all other loads are turned off. The other source of power comes from a solar cell. The solar cell charges a special 2.2-F capacitor. A step-up converter changes the unregulated input voltage into 5-V main power. The LTC3401 changes the voltage with an efficiency of about 96% …

Because of the changing light conditions, a step-up voltage converter is needed for generating stabilized VCC voltage. The LTC3401 is a high-efficiency converter that starts up from an input voltage as low as 1 V.

If the input voltage increases to about 3.5 V (at the capacitor), the robot will wake up, changing into Standby mode. Now the robot can work.

The approximate lifetime with a fully charged capacitor depends on its tasks. With maximum activity, the charge is used up after one or two minutes and then the robot goes into Sleep mode. Under poor conditions (e.g., low light for a long time), the robot has an Emergency mode, during which the robot charges the capacitor from its lithium cell. Therefore, the robot has a chance to leave the bad area or contact the PC…

The control software runs on a normal PC, and all you need is a small radio box to get the signals from the robot.

The Robot Control Center serves as an interface to control the robot. Its main feature is to display the transmitted pictures and measurement values of the sensors.

Various buttons and throttles give you full control of the robot when power is available or sunlight hits the solar cells. In addition, it’s easy to make short slide shows from the pictures captured by the robot. Each session can be saved on a disk and played in the Robot Control Center…
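Altenburg’s one-to-two-minute activity figure is easy to sanity-check from the numbers he gives. Assuming the robot runs from the 2.2-F capacitor between roughly 3.5 V (the wake-up threshold) and 1 V (the LTC3401’s minimum input), and draws about 25 mA at 5 V through the 96%-efficient converter, the stored energy and runtime work out to approximately:

```latex
E = \tfrac{1}{2} C \left(V_1^2 - V_2^2\right)
  = \tfrac{1}{2}\,(2.2~\mathrm{F})\left[(3.5~\mathrm{V})^2 - (1.0~\mathrm{V})^2\right]
  \approx 12.4~\mathrm{J}

t \approx \frac{E}{P_{\mathrm{in}}}
  = \frac{12.4~\mathrm{J}}{(5~\mathrm{V})(0.025~\mathrm{A})/0.96}
  \approx 95~\mathrm{s}
```

That is roughly a minute and a half of full activity per charge, squarely within the “one or two minutes” Altenburg reports.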

The entire article appears in Circuit Cellar 147, 2002. Type “solarrobot” to access the password-protected article.