Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system, adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?


James Bowman

JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?


The Gameduino 2 uses a Future Technology Devices International chip to drive its graphics.

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needed an external monitor, and you had to supply your own inputs (buttons, joysticks, etc.). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?


An Arduino, Ethernet adapter, and a Gameduino

JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

NAN: Is it easy or possible to customize the Gameduino 2?

JAMES: I would have to say no. The PCB itself is entirely surface mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference for directly manipulating virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear Smartwatch and Apple’s rumored iWatch.

This shows the hand tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, that is, the amount of time and energy a person needs to learn to complete a particular task. Instead of the user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK and PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration, as well as the hand shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
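To make the feature-extraction stage of this pipeline concrete, here is a minimal Python sketch of the per-frame descriptor. It assumes a depth-image crop of the tracked hand (a NumPy array) and 3-D hand positions from the tracker; the crop size, HOG parameters, PCA dimension, and the frame_feature helper are illustrative assumptions, not the exact settings used in our system.

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def frame_feature(depth_crop, pos, prev_pos, prev_vel, dt):
    # Hand shape: histogram of oriented gradients over the hand's depth crop.
    shape = hog(depth_crop, orientations=9,
                pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # Kinematics: 3-D position plus finite-difference velocity and acceleration.
    vel = (np.asarray(pos) - np.asarray(prev_pos)) / dt
    acc = (vel - np.asarray(prev_vel)) / dt
    return np.concatenate([pos, vel, acc, shape]), vel

def reduce_features(per_frame_vectors, dim=20):
    # Principal component analysis to shrink the final feature dimension.
    X = np.vstack(per_frame_vectors)   # shape: (n_frames, raw_dim)
    return PCA(n_components=dim).fit_transform(X)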

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3-D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is the hidden Markov model (HMM), which is commonly used to model sequence data and was used earlier in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that could have produced the given observed feature vectors. New developments in the area involve variations on the HMM, such as using a hierarchical HMM for real-time inference or using discriminative training to increase recognition accuracy.
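As a rough sketch of those two steps, the following Python example (using the hmmlearn library) trains one Gaussian HMM per gesture and then classifies a new sequence by picking the model with the highest log-likelihood. The number of states, covariance type, and data layout are illustrative assumptions rather than the specific variants mentioned above.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(training_data):
    # training_data maps a gesture name to a list of training sequences,
    # each a 2-D array of per-frame feature vectors.
    models = {}
    for name, seqs in training_data.items():
        X = np.vstack(seqs)               # concatenate all examples
        lengths = [len(s) for s in seqs]  # sequence boundaries for fitting
        model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)             # step 1: learn a model per gesture
        models[name] = model
    return models

def classify(models, seq):
    # Step 2: the gesture whose model most likely produced the observation.
    return max(models, key=lambda name: models[name].score(seq))

In practice, each call to classify() would be fed a PCA-reduced feature sequence from the extraction step described earlier.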

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods required careful selection of feature vectors; the goal of deep learning is the automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition, and it could be applied to gesture recognition to see whether it further improves accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. The instruments, with maximum frequencies ranging from 5 to 50 MHz depending on the model, are designed for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5 MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com

Solar Array Tracker (Part 1): SunSeeker Hardware

Figure 1: These are the H-bridge motor drivers and sensor input conditioning circuits. Most of the discrete components are required for transient voltage protection from nearby lightning strikes and inductive kickback from the motors.

Graig Pearen, semi-retired and living in Prince George, BC, Canada, spent his career in the telecommunications industry, where he provided equipment maintenance and engineering services. Pearen, who now works part time as a solar energy technician, designed the SunSeeker solar array tracker, which won third place in the 2012 DesignSpark chipKIT challenge.

He writes about his design, as well as changes he has made to the prototypes since his first entry, in Circuit Cellar’s October issue. The article is the first part of a two-part series on the SunSeeker; the second part will present the system’s software and commissioning tests.

In the opening of Part 1, Pearen describes his objectives for the solar array tracker:

When I was designing my solar photovoltaic (PV) system, I wanted my array to track the sun in both axes. After looking at the available commercial equipment specifications and designs published online, I decided to design my own array tracker, the SunSeeker (see Photo 1 and Figure 1).

I had wanted to work with a Microchip Technology PIC processor for a while, so this was my opportunity to have some fun. I based my first prototype on a PIC16F870 microcontroller, but when the microcontroller maxed out, I switched to its big brother, the PIC16F877. Although both prototypes worked well, I wanted to add more features and capabilities. I particularly wanted to add Ethernet access so I could use my home network to communicate with all my systems. I was considering Microchip’s chipKIT Max32 board for the next prototype when Circuit Cellar’s DesignSpark chipKIT contest was announced.

The SunSeeker board, at top, contains all the circuits required to control the solar array’s motion. This board plugs into the Microchip Technology chipKIT MAX32 processor board. The bottom side of the SunSeeker board (green) with the MAX32 board (red) plugged into it is shown at bottom.

I knew the contest would be challenging. In addition to learning about a new processor and prototyping hardware, the contest rules required me to learn a new IDE (MPIDE), programming language (C++), schematic capture, and PCB design software (DesignSpark PCB). I also decided to make this my first surface-mount component design.

My objective for the contest was to replicate the functionality of the previous Assembly language software. I wanted the new design to be a test platform to develop new features and tracking algorithms. Over the next two to three years of development and field testing, I plan for it to evolve into a full-featured “bells-and-whistles” solar array tracker. I added a few enhancements as the software evolved, but I will develop most of the additional features later.

The system tracks, monitors, and adjusts solar photovoltaic (PV) arrays based on weather and atmospheric conditions. It compiles statistics on these conditions and communicates with a local server that enables software algorithm refinement. The SunSeeker logs a broad variety of data.

The SunSeeker measures, displays, and records the duration of the daily sunny, hazy, and cloudy periods; the array temperature; the ambient temperature; daily minimum and maximum temperatures; incident light intensity; and the drive motor current. The data log is indexed by day number: index 0 holds the annual data and indices 1–366 store the data for each day of the year. Each record is 18 bytes long, for a total of 6,588 bytes per year.
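To illustrate how such a compact record might be laid out, the Python sketch below packs nine 16-bit values, one for each quantity listed above, into exactly 18 bytes. The field order, units, and little-endian packing are assumptions made for illustration; the excerpt does not give Pearen’s actual record format.

import struct

# Nine signed 16-bit fields = 18 bytes, matching the stated record length.
RECORD = struct.Struct("<9h")

def pack_day(sunny_min, hazy_min, cloudy_min,
             array_temp, ambient_temp, min_temp, max_temp,
             light_level, motor_ma):
    return RECORD.pack(sunny_min, hazy_min, cloudy_min,
                       array_temp, ambient_temp, min_temp, max_temp,
                       light_level, motor_ma)

assert RECORD.size == 18  # one day's log entry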

At midnight each day, the daily statistics are recorded and added to the cumulative totals. The data logs can be downloaded in comma-separated values (CSV) format for permanent record keeping and for use in spreadsheet or database programs.

The SunSeeker has two main parts, a control module and a separate light sensor module, plus the temperature and snow sensors.

The control module is mounted behind the array where it is protected from the heat of direct sunlight exposure. The sensor module is potted in clear UV-proof epoxy and mounted a few centimeters away on the edge of, and in the same plane as, the array. To select an appropriate potting compound, I contacted Epoxies, Etc. and asked for a recommendation. Following the company’s advice, I obtained a small quantity of urethane resin (20-2621RCL) and urethane catalyst (20-2621CCL).

When controlling mechanical devices, it is necessary to monitor for proper operation and detect malfunctions to prevent hardware damage. For example, if the solar array were to become frozen in place during an ice storm, that condition would need to be sensed and acted upon. Diagnostic software watches the motors to detect any hardware fault that may occur. Fault detection is accomplished in several ways. The H-bridges have internal fault detection for overtemperature, undervoltage, and short circuits. The current drawn by the motors is monitored for abnormally high or low values, and pulses from the motor drive assemblies are counted to confirm movement and position.
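The checks described above reduce to a few lines of logic. The Python sketch below models them for clarity; the threshold values and input names are hypothetical, and on the real tracker the equivalent code would run on the chipKIT MAX32, reading the H-bridge fault output, a motor current sensor, and the drive pulse counters.

def check_motor(h_bridge_fault, current_ma, pulses_counted, move_commanded):
    # h_bridge_fault: the driver's internal flag for overtemperature,
    # undervoltage, or a short circuit.
    faults = []
    if h_bridge_fault:
        faults.append("H-bridge internal fault")
    if move_commanded:
        if current_ma > 2000:      # stalled, e.g., array frozen in place
            faults.append("motor overcurrent")
        elif current_ma < 50:      # open circuit or broken linkage
            faults.append("motor undercurrent")
        if pulses_counted == 0:    # commanded to move but no movement seen
            faults.append("no drive pulses counted")
    return faults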

To read more about the DIY SunSeeker solar array tracker, and Pearen’s plans for further refinements, check out the October issue.

Client Profile: NetBurner, Inc.

NetBurner, Inc.
5405 Morehouse Drive
San Diego, CA 92121

www.netburner.com

Contact: sales@netburner.com

Embedded Products/Services: The NetBurner solution provides hardware, software, and tools to network-enable new and existing products. All components are integrated and fully functional, so you can immediately begin working on your application.

Product Categories:

  • Serial to Ethernet: Modules can be used out of the box with no programming, or you can use a development kit to create your own custom applications. Hardware ranges from a single chip to small modules with many features.
  • Core Modules: Typically used as the core processing module in a design, core modules include the processor, flash, RAM, and on-board network capability. The processor pins are brought out to connectors and include functions such as SPI, I2C, address/data bus, ADC, DAC, UARTs, digital I/O, PWM, and CAN.
  • Development Kits: Development kits can be used to customize any of NetBurner’s Serial-to-Ethernet or Core Modules. Kits include the Eclipse IDE, a C/C++ compiler/linker, a debugger, an RTOS, a TCP/IP stack, and board support packages.

Product Information: The MOD54415 and NANO54415 modules provide a 250-MHz processor, up to 32 MB of flash, 64 MB of DDR, ADC, DAC, eight UARTs, four I2C, three SPI, 1-Wire, a microSD flash socket, five PWM channels, and up to 44 digital I/O lines.

Exclusive Offer: Receive 15% off on select development kits. Promo code: CIRCUITCELLAR


Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers with useful information, special deals, and more.