Embedded Programming: Rummage Around In This Toolbox

Circuit Cellar’s April issue is nothing less than an embedded programming toolbox. Inside you’ll find tips, tools, and online resources to help you do everything from building a simple tracing system that can debug a small embedded system to designing with a complex system-on-a-chip (SoC) that combines programmable logic and high-speed processors.

Article contributor Thiadmer Riemersma describes the three parts of his tracing system: a set of macros to include in the source files of a device under test (DUT), a PC workstation viewer that displays retrieved trace data, and a USB dongle that interfaces the DUT with the workstation (p. 26).

Thiadmer Riemersma’s trace dongle is connected to a laptop and DUT. The dongle decodes the signal and forwards it as serial data from a virtual RS-232 port to the workstation.

Riemersma’s special serial protocol overcomes common challenges of tracing small embedded devices, which typically have limited-performance microcontrollers and scarce interfaces. His system uses a single I/O and keeps it from bottlenecking by sending DUT-to-workstation trace transmissions as compact binary messages. “The trace viewer (or trace “listener”) can translate these message IDs back to the human-readable strings,” he says.
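
To make the idea concrete, here is a minimal sketch of the kind of macro-plus-binary-message scheme the article describes. It is an illustration only, not Riemersma’s code, and trace_putc() stands in for whatever single-I/O driver the DUT uses.

```c
/* Illustrative sketch only -- not Riemersma's macros. Each trace point emits a
 * 1-byte message ID plus a short payload; the PC-side viewer maps IDs back to
 * human-readable strings. trace_putc() stands in for the DUT's single trace I/O. */
#include <stdint.h>
#include <stdio.h>

static void trace_putc(uint8_t b)
{
    putchar(b);                 /* on the DUT this would drive the trace pin/UART */
}

static void trace_msg(uint8_t id, const uint8_t *payload, uint8_t len)
{
    trace_putc(id);             /* compact ID instead of a full format string */
    trace_putc(len);
    for (uint8_t i = 0; i < len; i++)
        trace_putc(payload[i]);
}

#define TRACE_ID_BOOT     0x01  /* hypothetical IDs the viewer knows about */
#define TRACE_ID_ADC_VAL  0x02

#define TRACE0(id)         trace_msg((id), NULL, 0)
#define TRACE16(id, value) do {                                        \
        uint8_t p_[2] = { (uint8_t)((value) >> 8), (uint8_t)(value) }; \
        trace_msg((id), p_, 2);                                        \
    } while (0)

int main(void)
{
    TRACE0(TRACE_ID_BOOT);
    TRACE16(TRACE_ID_ADC_VAL, 0x03FF);
    return 0;
}
```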

But let’s move on from discussing a single I/O to a tool that offers hundreds of I/Os. They’re part of the all-programmable Xilinx Zynq SoC, an example of a device that blends a large FPGA fabric with a powerful processing core. Columnist Colin O’Flynn explores using the Zynq SoC as part of the Avnet ZedBoard development board (p. 46). “Xilinx’s Zynq device has many interesting applications,” O’Flynn concludes. “This is made highly accessible by the ZedBoard and MicroZed boards.”

An Avnet ZedBoard is connected to the OpenADC. The OpenADC provides a moderate-speed ADC (105 Msps), which interfaces to the programmable logic (PL) fabric in Xilinx’s Zynq device via a parallel data bus. The PL fabric then maps itself as a peripheral on the hard-core processing system (PS) in the Zynq device to stream this data into the system DDR memory.

An Avnet ZedBoard is connected to the OpenADC. (Source: C. O’Flynn, Circuit Cellar 285)
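
For readers curious what the PS side of such a data path can look like, here is a minimal sketch of reading PL-captured samples through a memory-mapped register block under Linux. The base address and register layout are assumptions for illustration, not details of O’Flynn’s design.

```c
/* Minimal sketch, assuming Linux on the Zynq PS and a PL capture FIFO exposed as an
 * AXI-mapped register block; the base address and register layout are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PL_FIFO_BASE  0x43C00000u   /* hypothetical AXI base address of the PL block */
#define REG_COUNT     0             /* word offset: samples waiting (assumed)        */
#define REG_DATA      1             /* word offset: pop one sample (assumed)         */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, PL_FIFO_BASE);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    while (regs[REG_COUNT] > 0)                  /* drain whatever the PL has captured */
        printf("%u\n", (unsigned)regs[REG_DATA]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```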

Our embedded programming issue also includes George Novacek’s article on design-level software safety analysis, which helps avert hazards that can damage an embedded controller (p. 39). Bob Japenga discusses specialized file systems essential to Linux and a helpful networking protocol (p. 52).

One of the final steps is mounting the servomotor for rudder control. Thin cords connect the servomotor horn and the rudder. Two metal springs balance mechanical tolerances.

Jens Altenburg’s project

Other issue highlights include projects that are fun as well as instructive. For example, Jens Altenburg added an MCU, GPS, flight simulation, sensors, and more to a compass-controlled glider design he found in a 1930s paperback (p. 32). Columnist Jeff Bachiochi introduces the possibilities of programmable RGB LED strips (p. 66).

Configurable Regulator

The LTM4644 quad output step-down µModule (micromodule) regulator is configurable as a single (16-A), dual (12-A, 4-A, or 8-A, 8-A), triple (8-A, 4-A, 4-A), or quad (4-A each) output regulator. This flexibility enables system designers to rely on one simple and compact µModule regulator for the various voltage and load current requirements of FPGAs, ASICs, and microprocessors as well as other board circuitry. The LTM4644 is ideal for communications, data storage, industrial, transportation, and medical system applications.

The LTM4644 regulator includes DC/DC controllers, power switches, inductors and compensation components. Only eight external ceramic capacitors (1206 or smaller case sizes) and four feedback resistors (0603 case size) are required to regulate four independently adjustable outputs from 0.6 to 5.5 V. Separate input pins enable the four channels to be powered from a common supply rail or different rails from 4 to 14 V.

At an ambient temperature of 55°C, the LTM4644 delivers up to 13 A at 1.5 V from a 12-V input or up to 14 A with 200-LFM airflow. The four channels operate at 90° out-of-phase to minimize input ripple whether at the 1-MHz default switching frequency or synchronized to an external clock between 700 kHz and 1.3 MHz. With the addition of an external bias supply above 4 V, the LTM4644 can regulate from an input supply voltage as low as 2.375 V. The regulator also includes output overvoltage and overcurrent fault protection.

The LTM4644 costs $22.85 each in 1,000-unit quantities.

Linear Technology Corp.
www.linear.com

Places for the IoT Inside Your Home

It’s estimated that by the year 2020, more than 30 billion devices worldwide will be wirelessly connected to the IoT. While the IoT has massive implications for government and industry, individual electronics DIYers have long recognized how projects that enable wireless communication between everyday devices can solve or avert big problems for homeowners.

Our February issue, which focuses on Wireless Communications, features two such projects, including Raul Alvarez Torrico’s Home Energy Gateway, which enables users to remotely monitor energy consumption and control household devices (e.g., lights and appliances).

A Digilent chipKIT Max32-based embedded gateway/web server communicates with a single smart power meter and several smart plugs in a home area wireless network. “The user sees a web interface containing the controls to turn on/off the smart plugs and sees the monitored power consumption data that comes from the smart meter in real time,” Torrico says.
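
As a generic illustration of the kind of request handling such a gateway performs (the command format and radio stub here are hypothetical, not Alvarez Torrico’s code), a short text command from the web front end can be parsed and forwarded to the home area network:

```c
/* Generic sketch of gateway command handling -- hypothetical, not the project's code.
 * A command such as "PLUG 3 ON" from the web front end is parsed and forwarded to
 * the home-area wireless network via a stand-in radio driver. */
#include <stdio.h>
#include <string.h>

static void rf_send_plug_cmd(int plug_id, int on)   /* stand-in for the radio driver */
{
    printf("RF: plug %d -> %s\n", plug_id, on ? "ON" : "OFF");
}

static int handle_command(const char *cmd)
{
    int plug_id;
    char state[8];

    if (sscanf(cmd, "PLUG %d %7s", &plug_id, state) == 2) {
        rf_send_plug_cmd(plug_id, strcmp(state, "ON") == 0);
        return 0;
    }
    return -1;                                       /* unrecognized command */
}

int main(void)
{
    handle_command("PLUG 3 ON");
    handle_command("PLUG 1 OFF");
    return 0;
}
```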

While energy use is one common priority for homeowners, another is protecting property from hidden dangers such as undetected water leaks. Devlin Gualtieri wanted a water alarm system that could integrate several wireless units signaling a single receiver. But he didn’t want to buy one designed to work with expensive home alarm systems charging monthly fees.

In this issue, Gualtieri writes about his wireless water alarm network, which has simple hardware including a Microchip Technology PIC12F675 microcontroller and water conductance sensors (i.e., interdigital electrodes) made out of copper wire wrapped around perforated board.

It’s an inexpensive and efficient approach that can be expanded. “Multiple interdigital sensors can be wired in parallel at a single alarm,” Gualtieri says. A single alarm unit can monitor multiple water sources (e.g., a hot water tank, a clothes washer, and a home heating system boiler).
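
The detection itself can be reduced to a threshold test. The sketch below is an illustration of that idea only, with the ADC read hidden behind a hypothetical read_sensor_adc() stub; it is not Gualtieri’s firmware.

```c
/* Minimal sketch of the detection idea (assumed, not Gualtieri's firmware): water
 * bridging the interdigital electrodes raises conductance, pulling the ADC reading
 * below a dry-level threshold. Sensors wired in parallel share one input. */
#include <stdint.h>
#include <stdio.h>

#define DRY_THRESHOLD 900u      /* hypothetical 10-bit ADC level for "dry" */

static uint16_t read_sensor_adc(void)   /* stand-in for the MCU's ADC read */
{
    return 420;                         /* simulated wet reading for this example */
}

static void sound_alarm(void)
{
    printf("ALARM: water detected\n");
}

int main(void)
{
    if (read_sensor_adc() < DRY_THRESHOLD)   /* any parallel sensor getting wet trips this */
        sound_alarm();
    return 0;
}
```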

Also in this issue, columnist George Novacek begins a series on wireless data links. His first article addresses the basic principles of radio communications that can be used in control systems.

Other issue highlights include advice on extending flash memory life; using C language in FPGA design; detecting capacitor dielectric absorption; a Georgia Tech researcher’s essay on the future of inkjet-printed circuitry; and an overview of the hackerspaces and enterprising designs represented at the World Maker Faire in New York.

Editor’s Note: Circuit Cellar‘s February issue will be available online in mid-to-late January for download by members or single-issue purchase by web shop visitors.

Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids.—Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive.
We have started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics for software. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go to, or whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You wrote several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and System C Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.
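
Two standard first-order relations of the sort such a course builds on (generic results, not quotes from Wolf’s syllabus) are the delay of a gate charging a load capacitance through an effective resistance and the energy drawn from the supply to do so:

```latex
t_{\text{delay}} \approx 0.69\,R\,C, \qquad E_{\text{charge}} = C\,V_{DD}^{2}
```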

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grijalva of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukhopadhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security. Internet-enabled devices enable smart grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentation is a very warm, happy experience. Everyone is happy. These events are a time to celebrate the field and the many friends I’ve made through my work.

CC281: Overcome Fear of Ethernet on an FPGA

As its name suggests, the appeal of an FPGA is that it is fully programmable. Instead of writing software, you design hardware blocks to quickly do what’s required of a digital design. This also enables you to reprogram an FPGA product in the field to fix problems “on the fly.”

But what if “you” are an individual electronics DIYer rather than an industrial designer? DIYers can find FPGAs daunting.

The December issue of Circuit Cellar should offer reassurance, at least on the topic of “UDP Streaming on an FPGA.” That’s the focus of Steffen Mauch’s article for our Programmable Logic issue (p. 20).

Ethernet on an FPGA has several applications. For example, it can be used to stream measured signals to a computer for analysis or to connect a camera (via Camera Link) to an FPGA to transmit images to a computer.

Nonetheless, Mauch says, “most novices who start to develop FPGA solutions are afraid to use Ethernet or DDR-SDRAM on their boards because they fear the resulting complexity.” Also, DIYers don’t have the necessary IP core licenses, which are costly and often carry restrictions.

Mauch’s UDP monitor project avoids such costs and restrictions by using a free implementation of an Ethernet-streaming device based on a Xilinx Spartan-6 LX FPGA. His article explains how to use OpenCores’s open-source tri-mode MAC implementation and stream UDP packets with VHDL over Ethernet.
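
On the receiving end, a PC-side listener for such a stream can be very small. The sketch below uses generic POSIX sockets; the port number is an assumption, not a detail from Mauch’s article.

```c
/* A minimal PC-side UDP listener of the kind you might pair with such a streaming
 * design; generic POSIX sockets, hypothetical port number. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4660);                 /* hypothetical port used by the FPGA */

    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    unsigned char buf[1500];
    for (;;) {                                   /* print the size of each arriving packet */
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
        if (n > 0)
            printf("received %zd-byte UDP packet\n", n);
    }
}
```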

Mauch is not the only writer offering insights into FPGAs. For more advanced FPGA enthusiasts, columnist Colin O’Flynn discusses hardware co-simulation (HCS), which enables the software simulation of a design to be offloaded to an FPGA. This approach significantly shortens the time needed for adequate simulation of a new product and ensures that a design is actually working in hardware (p. 52).

This Circuit Cellar issue offers a number of interesting topics in addition to programmable logic. For example, you’ll find a comprehensive overview of the latest in memory technologies, advice on choosing a flash file system for your embedded Linux system, a comparison of amplifier classes, and much more.

Mary Wilson
editor@circuitcellar.com

Member Profile: Scott Weber

Scott Weber

LOCATION:
Arlington, Texas, USA

MEMBER STATUS:
Scott said he started his Circuit Cellar subscription late in the last century. He chose the magazine because it had the right mix of MCU programming and electronics.

TECH INTERESTS:
He has always enjoyed mixing discrete electronic projects with MCUs. In the early 1980s, he built an MCU board based on an RCA CDP1802 using wire-wrap and programmed it with eight switches and a load button.

Back in the 1990s, Scott purchased a Microchip Technology PICStart Plus. “I was thrilled at how powerful and comprehensive the chip and tools were compared to the i8085 and CDP1802 devices I tinkered with years before,” he said.

RECENT EMBEDDED TECH ACQUISITION:
Scott said he recently treated himself to a brand-new Fluke 77-IV multimeter.

CURRENT PROJECTS:
Scott is building devices that can communicate through USB to MS Windows programs. “I don’t have in mind any specific system to control, it is something to learn and have fun with,” he said. “This means learning not only an embedded USB software framework, but also Microsoft Windows device drivers.”

THOUGHTS ON THE FUTURE OF EMBEDDED TECH:
“Embedded devices are popping up everywhere—in places most people don’t even realize they are being used. It’s fun discovering where they are being applied. It is so much easier to change the microcode of an MCU or FPGA as the unit is coming off the assembly line than it is to rewire a complex circuit design,” Scott said.

“I also like Member Profile Joe Pfeiffer’s final comment in Circuit Cellar 276: Surface-mount and ASIC devices are making a ‘barrier to entry’ for the hobbyist. You can’t breadboard those things! I gotta learn a good way to make my own PCBs!”

Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system by adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

James Bowman

 JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?

The Gameduino 2 uses a Future Technology Devices International (FTDI) FT800 chip to drive its graphics.

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needs an external monitor and you had to supply your own inputs (e.g., buttons, joysticks, etc.). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?

An Arduino, Ethernet adapter, and a Gameduino

 JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

 NAN: Is it easy or possible to customize the Gameduino 2?

 JAMES: I would have to say no. The PCB itself is entirely surface mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately, the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

 NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

 JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

 JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.

 

Programmable Logic Video Lessons

Interested in learning more about programmable logic? You’re in luck. Colin O’Flynn’s first article in his “Programmable Logic in Practice” column appears in Circuit Cellar’s October 2013 issue. To accompany his work, Colin is producing informative videos for you to view after reading his articles.

In the first video, Colin covers the topic of adding the Xilinx ChipScope ILA/VIO core using automatic and manual insertion with ISE.

 

Since 2002, Circuit Cellar has published several of O’Flynn’s articles. O’Flynn is an engineer and lecturer at Dalhousie University in Halifax, Nova Scotia. He earned a master’s in applied science from Dalhousie and pursued further graduate studies in cryptographic systems. Over the years, he has developed a wide variety of skills ranging from electronic assembly (including SMDs) to FPGA design in Verilog and VHDL to high-speed PCB design.

CC279: What’s Ahead in the October Issue

Although we’re still in September, it’s not too early to look forward to the October issue, which is already available online.

The theme of the issue is signal processing, and contributor Devlin Gualtieri offers an interesting take on that topic.

Gualtieri, who writes a science and technology blog, looks at how to improve microprocessor audio.

“We’re immersed in a world of beeps and boops,” Gualtieri says. “Every digital knick-knack we own, from cell phones to microwave ovens, seeks to attract our attention.”

“Many simple microprocessor circuits need to generate one, or several, audio alert signals,” he adds. “The designer usually uses an easily programmed square wave voltage as an output pin that feeds a simple piezoelectric speaker element. It works, but it sounds awful. How can microprocessor audio be improved in some simple ways?”

Gualtieri’s article explains how analog circuitry and sine waves are often a better option than digital circuitry and square waves for audio alert signals.
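
His fix is analog, but the contrast is easy to sketch in firmware terms as well: stepping through a sine lookup table and feeding the values to a PWM compare register or DAC is a common alternative to a raw square wave. The example below is an illustration of that lookup-table idea, not circuitry from the article.

```c
/* Rough digital illustration (Gualtieri's own approach is analog): instead of toggling
 * a pin as a raw square wave, step through a sine lookup table whose values would feed
 * a PWM compare register or DAC at a fixed sample rate. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_LEN 32

int main(void)
{
    uint8_t sine[TABLE_LEN];

    /* Build an 8-bit sine table centered on mid-scale (128). */
    for (int i = 0; i < TABLE_LEN; i++)
        sine[i] = (uint8_t)(128.0 + 127.0 * sin(2.0 * M_PI * i / TABLE_LEN));

    /* On an MCU these values would be written to the PWM/DAC at the sample rate;
     * here we just print one cycle of the waveform. */
    for (int i = 0; i < TABLE_LEN; i++)
        printf("%d\n", sine[i]);

    return 0;
}
```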

Another article that touches on signal processing is columnist Colin O’Flynn’s look at advanced methods of debugging an FPGA design. It’s the debut of his new column, “Programmable Logic in Practice.”

“This first article introduces the use of integrated logic analyzers, which provide an internal view of your running hardware,” O’Flynn says. “My next article will continue this topic and show you how hardware co-simulation enables you to seamlessly split the verification between real hardware interfacing to external devices and simulated hardware on your computer.”

You can find videos and other material that complement Colin’s articles on his website.

Another October issue highlight is a real prize-winner. The issue features the first installment of a two-part series on the SunSeeker Solar Array Tracker, which won third place in the 2012 DesignSpark chipKIT challenge overseen by Circuit Cellar.

The SunSeeker, designed by Canadian Graig Pearen, uses a Microchip Technology chipKIT Max32 and tracks, monitors, and adjusts PV arrays based on weather and sky conditions. It measures PV and air temperature, compiles statistics, and communicates with a local server that enables the SunSeeker to facilitate software algorithm development. Diagnostic software monitors the design’s motors to show both movement and position.

Pearen, semi-retired from the telecommunications industry and a part-time solar technician, is still refining his original design.

“Over the next two to three years of development and field testing, I plan for it to evolve into a full-featured ‘bells-and-whistles’ solar array tracker,” Pearen says. “I added a few enhancements as the software evolved, but I will develop most of the additional features later.”

Walter Krawec, a PhD student studying Computer Science at the Stevens Institute of Technology in Hoboken, NJ, wraps up his two-part series on “Experiments in Developmental Robotics.”

In Part 1, he introduced readers to the basics of artificial neural networks (ANNs) in robots and outlined an architecture for a robot’s evolving neural network, short-term memory system, and simple reflexes and instincts. In Part 2, Krawec discusses the reflex and instinct system that rewards an ENN.

“I’ll also explain the ‘decision path’ system, which rewards/penalizes chains of actions,” he says. “Finally, I’ll describe the experiments we’ve run demonstrating this architecture in a simulated environment.”

Videos of some of Krawec’s robot simulations can be found on his website.

Speaking of robotics, in this issue columnist Jeff Bachiochi introduces readers to the free robot control programming language RobotBASIC and explains how to use it with an integrated simulator for robot communication.

Other columnists also take on a number of very practical subjects. Robert Lacoste explains how inexpensive bipolar junction transistors (BJTs) can be helpful in many designs and outlines how to use one to build an amplifier.

George Novacek, who has found that the cost of battery packs accounts for half the purchase price of his equipment, explains how to build a back-up power source with a lead-acid battery and a charger.

“Building a good battery charger is easy these days because there are many ICs specifically designed for battery chargers,” he says.

Columnist Bob Japenga begins a new series looking at file systems available on Linux for embedded systems.

“Although you could build a Linux system without a file system, most Linux systems will have some sort of file system,” Japenga says. “And there are various types. There are file systems that do not retain their data (volatile) across power outages (i.e., RAM drives). There are nonvolatile read-only file systems that cannot be changed (e.g., CRAMFS). And there are nonvolatile read/write file systems.”

Linux provides all three types of file systems, Japenga says, and his series will address all of them.
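
As a concrete taste of the volatile category, a RAM-backed file system can be created at run time. Below is a minimal Linux sketch using the mount(2) call; the mount point and size are arbitrary choices for illustration.

```c
/* Minimal sketch of the volatile category: mounting a RAM-backed tmpfs at run time
 * under Linux. The mount point and size are arbitrary; the directory must exist and
 * the program must run as root. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("tmpfs", "/mnt/scratch", "tmpfs", 0, "size=1M") != 0) {
        perror("mount");
        return 1;
    }
    printf("tmpfs mounted at /mnt/scratch (its contents vanish at power-off)\n");
    return 0;
}
```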

Finally, the magazine offers some special features, including an interview with Alenka Zajić, who teaches signal processing and electromagnetics at Georgia Institute of Technology’s School of Electrical and Computer Engineering. Also, two North Carolina State University researchers write about advances in 3-D liquid metal printing and possible applications such as electrical wires that can “heal” themselves after being severed.

For more, check out the Circuit Cellar’s October issue.

 

 

New CC Columnist to Focus on Programmable Logic

We’d like to introduce you to Colin O’Flynn, who will begin writing a bimonthly column titled “Programmable Logic In Practice” for Circuit Cellar beginning with our October issue.

Colin at his workbench

You may have already “met.” Since 2002, Circuit Cellar has published five articles from this Canadian electrical engineer, who is also a lecturer at Dalhousie University in Halifax, Nova Scotia, and a product developer.

Colin has been fascinated with embedded electronics since he was a child and his father gave him a few small “learn to solder” kits. Since then, he has constructed many projects, earned his master’s in applied science from Dalhousie, pursued graduate studies in cryptographic systems, and become an engineering consultant. Over the years, he has developed broad skills ranging from electronic assembly (including SMDs), to FPGA design in Verilog and VHDL, to high-speed PCB design.

And he likes to share what he knows, which makes him a good choice for Circuit Cellar.

Binary Explorer

One of his most recent projects was a Binary Explorer Board, which he developed for use in teaching a digital logic course at Dalhousie. It fulfilled his (and his students’) need for a simple programmable logic board with an integrated programmer, several switches and LEDs, and an integrated breadboard. He is working to develop the effective and affordable board into a product.

In the meantime, he is also planning some interesting column topics for Circuit Cellar.

He is interested in a range of possible topics, including circuit board layout for high-speed FPGAs; different methods of configuring an FPGA; design of memory into FPGA circuits; use of tools such as Altera’s OpenCL libraries to design programmable logic using C code; use of vendor-provided and open-source soft-core microcontrollers; design of a PCI-Express interface for your FPGA; and addition of a USB 3.0 interface to your FPGA.

Colin’s LabJack-based battery tester

That’s just a short list reflecting his interest in programmable logic technologies, which have become increasingly popular with engineers and designers.

To learn more about Colin’s interests, check out our February interview with him, his YouTube channel of technical videos, and, of course, his upcoming columns in Circuit Cellar.

 

 

The Future of Data Acquisition Technology

Maurizio Di Paolo Emilio

By Maurizio Di Paolo Emilio

Data acquisition is a necessity, which is why data acquisition systems and software applications are essential tools in a variety of fields. For instance, research scientists rely on data acquisition tools for testing and measuring their laboratory-based projects. Therefore, as a data acquisition system designer, you must have an in-depth understanding of each part of the systems and programs you create.

I mainly design data acquisition software for physics-related experiments and industrial applications. Today’s complicated physics experiments require highly complex data acquisition systems and software that are capable of managing large amounts of information. Many of the systems require high-speed connections and digital recording. And they must be reconfigurable. Signals that are hard to characterize and analyze with a real-time display are evaluated in terms of high frequencies, large dynamic range, and gradual changes.

Data acquisition software typically offers a text-based user interface (TUI), which comprises an ASCII configuration file, and a graphical user interface (GUI), which is generally available with any web browser. Both interfaces enable data acquisition system management and customization, and you don’t need to recompile the sources. This means even inexperienced programmers can have full acquisition control.
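
The configuration-file half of that arrangement can be as simple as key=value lines read at startup. The sketch below illustrates the idea with an invented file name and keys; it is not tied to any particular data acquisition package.

```c
/* Minimal sketch of the ASCII-configuration idea: key=value lines (no spaces around
 * the '=') reconfigure the acquisition without recompiling. File name and keys are
 * invented for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("daq.conf", "r");
    if (!f) { perror("daq.conf"); return 1; }

    long sample_rate = 1000;                     /* defaults, overridden by the file */
    long channels = 1;
    char line[128];

    while (fgets(line, sizeof(line), f)) {
        char key[64], value[64];
        if (line[0] == '#' || sscanf(line, "%63[^=]=%63s", key, value) != 2)
            continue;                            /* skip comments and malformed lines */
        if (strcmp(key, "sample_rate") == 0) sample_rate = strtol(value, NULL, 10);
        if (strcmp(key, "channels") == 0)    channels = strtol(value, NULL, 10);
    }
    fclose(f);

    printf("configured: %ld S/s on %ld channel(s)\n", sample_rate, channels);
    return 0;
}
```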

Well-designed data acquisition and control software should be able to quickly recover from instrumentation failures and power outages without losing any data. Data acquisition software must provide a high-level language for algorithm design. Moreover, it requires data-archiving capability for verifying data integrity.

You have many data acquisition software options. An example is programmable software that uses a language such as C. Other software and data acquisition software packages enable you to design the custom instrumentation suited for specific applications (e.g., National Instruments’s LabVIEW and MathWorks’s MATLAB).

In addition to data acquisition software design, I’ve also been developing embedded data acquisition systems with open-source software to manage user-developed applications. The idea is to have credit-card-sized embedded data acquisition systems managing industrial systems using open-source software written in C. I’m using an ARM processor that will give me the ability to add small boards for specific applications (e.g., a board to manage data transmission via Wi-Fi or GSM).

A data acquisition system’s complexity tends to increase with the number of physical properties it must measure. Resolution and accuracy requirements also affect a system’s complexity. To eliminate cabling and provide for more modularity, you can combine data acquisition capabilities and signal conditioning in one device.

Recent developments in the field of fiber-optic communications have shown longer data acquisition transmission distances can cause errors. Electrical isolation is also an important topic. The goal is to eliminate ground loops (common problems with single-ended measurements) in terms of accuracy and protection from voltage spikes.

During the last year, some new technological developments have proven beneficial to the overall efficacy of data acquisition applications. For instance, advances in USB technology have made data acquisition and storage simpler and more efficient than ever (think “plug and play”). Advances in wireless technology have also made data transmission faster and more secure. This means improved data acquisition system and software technologies will also figure prominently in smartphone design and usage.

If you look to the future, consumer demand for mobile computing systems will only increase, and this will require tablet computers to feature improved data acquisition and storage capabilities. Having the ability to transmit, receive, and store larger amounts of data with tablets will become increasingly important to consumers as time goes on. There are three main things to consider when creating a data acquisition-related application for a tablet. Hardware connectivity: tablets have few control options (e.g., Wi-Fi and Bluetooth). Programming language support: many tablets support Android apps created in Java. Device driver availability: device drivers permit a high-level mode to easily and reliably execute a data acquisition board’s functionality, but C and LabVIEW are not supported by Android or Apple’s iOS. USB, a common DAQ bus, is available on some tablets; otherwise, an adapter is required. In these cases, moving a data acquisition system to a tablet requires extra attention.

For all of the aforementioned reasons, I think field-programmable gate arrays (FPGAs) will figure prominently in the evolution of data acquisition system technology. The flexibility of FPGAs makes them ideal for custom data acquisition systems and embedded applications.

The Future of FPGAs (CC 25th Anniversary Preview)

Field-programmable gate arrays (FPGAs) have been around for more than two decades. What does the future hold for this technology? According to Halifax, Canada-based electrical engineering consultant Colin O’Flynn, current FPGA-related research and recent innovations seem to presage a coming revolution in digital system design, and this could lead to striking fast advances in several fields of engineering.

In the upcoming Circuit Cellar 25th Anniversary Issue—which is slated for publication in early 2013—O’Flynn shares his thoughts on the future of FPGA technology. He writes:

Field-programmable gate arrays (FPGAs) provide a powerful means to design digital systems (see Photo 1). Rather than writing a software program, you can design a number of hardware blocks to perform your tasks at blazing speeds…

Photo 1: Source: C. O’Flynn, CC 25th Anniversary issue

Microcontrollers have long played the peripheral game: the integration of easy-to-use dedicated peripherals onto the same physical chip as your digital core. FPGAs, it would seem, have no use for dedicated logic, since you can just design everything exactly as you desire. But dedicated logic has its advantages.

Beyond technical advantages, such as lower power consumption or smaller area with dedicated cores compared to programmable cores, dedicated cores can also reduce development effort. For example, current technology sees FPGAs with integrated high-end ARM cores, capable of running Linux on the integrated hard-core. Anyone familiar with setting up Linux on an ARM-based microprocessor can use this, without needing to learn about how one develops cores and peripherals on the FPGA itself.

Beyond integrating digital cores to simplify development, you can expect to see the integration of analog peripherals. Looking at the microcontroller market, you can find a variety of tightly integrated SoC devices with analog and digital on a single device. For instance, a variety of radio devices contain a complete RF front-end combined with a digital microcontroller. While current FPGA devices offer very limited analog peripherals (most have none), having an FPGA with an integrated high-speed ADC or DAC would be the making of a highly flexible radio-on-a-chip platform. The high development cost and lack of a current market has meant this remains only an interesting idea. To see where this market comes from, let’s look at some applications for such an FPGA.

Software-Defined Radio
Software-defined radio (SDR) takes a curious approach to receiving radio waves: digitize it all, and let software sort it out. The radio front-end is simple. Typically, the center frequency of interest is just downshifted to the baseband, everything else is filtered out, and a high-speed ADC digitizes it. All the demodulation and decoding can then be done in software. Naturally, this can require some fast sampling speeds. Anything from 20 to 500 MSps is fairly typical for these systems. Dealing with this much data is suited to FPGAs, since one can generate blocks to perform all the different functions that operate simultaneously…
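
As a toy illustration of that “digitize it all” approach (generic DSP, not code from O’Flynn’s article), the first software step is typically to mix the sampled signal down to baseband with a complex oscillator before filtering and decimating:

```c
/* Toy illustration of software downconversion (generic DSP, hypothetical sample rate
 * and tuning frequency): mix a block of real ADC samples down to baseband with a
 * complex oscillator, the step that precedes filtering and decimation in an SDR chain. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N     1024
#define FS    100.0e6          /* hypothetical 100-MSps ADC */
#define F_LO  20.0e6           /* hypothetical tuning frequency */

int main(void)
{
    static double adc[N];
    static double complex baseband[N];

    for (int n = 0; n < N; n++)                 /* fake ADC data: a tone at 20.1 MHz */
        adc[n] = sin(2.0 * M_PI * 20.1e6 * n / FS);

    for (int n = 0; n < N; n++)                 /* complex mix: the tone lands near 100 kHz */
        baseband[n] = adc[n] * cexp(-I * 2.0 * M_PI * F_LO * n / FS);

    printf("first baseband sample: %f %+fi\n", creal(baseband[0]), cimag(baseband[0]));
    return 0;
}
```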

Circuit Cellar’s 25th Anniversary Issue will be available in early 2013. Stay tuned for more updates on the issue’s content.

Q&A: Hai (Helen) Li (Academic, Embedded System Researcher)

Helen Li came to the U.S. from China in 2000 to study for a PhD at Purdue University. Following graduation she worked for Intel, Qualcomm, and Seagate. After about five years of working in industry, she transitioned to academia by taking a position at the Polytechnic Institute of New York University, where she teaches courses such as circuit design (“Introduction to VLSI”), advanced computer architecture (“VLSI System and Architecture Design”), and system-level applications (“Real-Time Embedded System Design”).

Hai (Helen) Li

In a recent interview Li described her background and provided details about her research relating to spin-transfer torque RAM-based memory hierarchy and memristor-based computing architecture.

An abridged version of the interview follows.

NAN: What were some of your most notable experiences working for Intel, Qualcomm, and Seagate?

HELEN: The industrial working experience is very valuable to my whole career. At Seagate, I led a design team on a test chip for emerging memory technologies. Communication and understanding between device engineers and design communities is extremely important. The joint efforts of all the related disciplines (not just one particular area anymore) became necessary. The concept of cross-layer (device/circuit/architecture/system) co-optimization and design continues in my research career.

NAN: In 2009, you transitioned from an engineering career to a career teaching electrical and computer engineering at the Polytechnic Institute of New York University (NYU). What prompted this change?

HELEN: After five years of working at various industrial companies on wireless communication, computer systems, and storage, I realized I am more interested in independent research and teaching. After careful consideration, I decided to return to an academic career and later joined the NYU faculty.

NAN: How long have you been teaching at the Polytechnic Institute of NYU? What courses do you currently teach and what do you enjoy most about teaching?

HELEN: I have been teaching at NYU-Poly since September 2009. My classes cover a wide range of computer engineering, from basic circuit design (“Introduction to VLSI”), to advanced computer architecture (“VLSI System and Architecture Design”), to system-level applications (“Real-Time Embedded System Design”).

Though I have been teaching at NYU-Poly, I will be taking a one-year leave of absence from fall 2012 to summer 2013. During this time, I will continue my research on very large-scale integration (VLSI) and computer engineering at University of Pittsburgh.

I enjoy the interaction and discussions with students. They are very smart and creative. Those discussions always inspire new ideas. I learn so much from students.

Helen and her students are working on developing a 16-Kb STT-RAM test chip.

NAN: You’ve received several grants from institutions including the National Science Foundation and the Air Force Research Lab to fund your embedded systems endeavors. Tell us a little about some of these research projects.

HELEN: The objective of the research for “CAREER: STT-RAM-based Memory Hierarchy and Management in Embedded Systems” is to develop an innovative spin-transfer torque random access memory (STT-RAM)-based memory hierarchy to meet the demands of modern embedded systems for low-power, fast-speed, and high-density on-chip data storage.

This research provides a comprehensive design package for efficiently integrating STT-RAM into modern embedded system designs and offers unparalleled performance and power advantages. System architects and circuit designers can be well bridged and educated by the research innovations. The developed techniques can be directly transferred to industry applications under close collaborations with several industry partners, and directly impact future embedded systems. The activities in the collaboration also include tutorials at the major conferences on the technical aspects of the projects and new course development.

The main goal of the research for “CSR: Small Collaborative Research: Cross-Layer Design Techniques for Robustness of the Next-Generation Nonvolatile Memories” is to develop design technologies that can alleviate the general robustness issues of next-generation nonvolatile memories (NVMs) while maintaining and even improving generic memory specifications (e.g., density, power, and performance). Comprehensive solutions are integrated from architecture, circuit, and device layers for the improvement on the density, cost, and reliability of emerging nonvolatile memories.

The broader impact of the research lies in revealing the importance of applying cross-layer design techniques to resolve the robustness issues of the next-generation NVMs and the attentions to the robust design context.

The research for “Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques” was inspired by memristors, which have recently attracted increased attention since the first real device was discovered by Hewlett-Packard Laboratories (HP Labs) in 2008. Their distinctive memristive characteristic creates great potentials in future computing system design. Our objective is to investigate process-variation aware memristor modeling, design methodology for memristor-based computing architecture, and exploitation of circuit techniques to improve reliability and density.

The scope of this effort is to build an integrated design environment for memristor-based computing architecture, which will provide memristor modeling and design flow to circuit and architecture designers. We will also develop and implement circuit techniques to achieve a more reliable and efficient system.

An electric car model controlled by programmable emerging memories is in the developmental stages.

NAN: What types of projects are you and your students currently working on?

HELEN: Our major efforts are on device modeling, circuit design techniques, and novel architectures for computer systems and embedded systems. We primarily focus on the potentials of emerging devices and leveraging their advantages. Two of our latest projects are a 16-Kb STT-RAM test chip and an electric car model controlled by programmable emerging memories.

The complete interview appears in Circuit Cellar 267 (October 2012).

CC266: Microcontroller-Based Data Management

Regardless of your area of embedded design or programming expertise, you have one thing in common with every electronics designer, programmer, and engineering student across the globe: almost everything you do relates to data. Each workday, you busy yourself with acquiring data, transmitting it, repackaging it, compressing it, securing it, sharing it, storing it, analyzing it, converting it, deleting it, decoding it, quantifying it, graphing it, and more. I could go on, but I won’t. The idea is clear: manipulating and controlling data in its many forms is essential to everything you do.

The ubiquitous importance of data is what makes Circuit Cellar’s Data Acquisition issue one of the most popular each year. And since you’re always seeking innovative ways to obtain, secure, and transmit data, we consider it our duty to deliver you a wide variety of content on these topics. The September 2012 issue (Circuit Cellar 266) features both data acquisition system designs and tips relating to control and data management.

On page 18, Brian Beard explains how he planned and built a microcontroller-based environmental data logger. The system can sense and record relative light intensity, barometric pressure, relative humidity, and more.

a: This is the environmental data logger’s (EDL) circuit board. b: This is the back of the EDL.

Data acquisition has been an important theme for engineering instructor Miguel Sánchez, who since 2005 has published six articles in Circuit Cellar about projects such as a digital video recorder (Circuit Cellar 174), “teleporting” serial communications via the ’Net (Circuit Cellar 193), and a creative DIY image-processing system (Circuit Cellar 263). An informative interview with Miguel begins on page 28.

Turn to page 38 for an informative article about how to build a compact acceleration data acquisition system. Mark Csele covers everything you need to know from basic physics to system design to acceleration testing.

This is the complete portable accelerometer design with the serial download adapter. The adapter is installed only when downloading data to a PC and mates with an eight-pin connector on the PCB. The rear of the unit features three powerful rare-earth magnets that enable it to be attached to a vehicle.

In “Hardware-Accelerated Encryption,” Patrick Schaumont describes a hardware accelerator for data encryption (p. 48). He details the advanced encryption standard (AES) and encourages you to consider working with an FPGA.

This is the embedded processor design flow with FPGA. a: A C program is compiled for a softcore CPU, which is configured in an FPGA. b: To accelerate this C program, it is partitioned into a part for the software CPU, and a part that will be implemented as a hardware accelerator. The softcore CPU is configured together with the hardware accelerator in the FPGA.
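
To make the partitioning idea concrete, here is a compile-only sketch with a hypothetical register map and a placeholder round function (it is not Schaumont’s AES design): the same transform runs in C on the softcore CPU or is handed to a memory-mapped accelerator in the fabric.

```c
/* Compile-only sketch of hardware/software partitioning -- hypothetical register map
 * and placeholder round function, not Schaumont's AES design. */
#include <stdint.h>

#define ACCEL ((volatile uint32_t *)0x40000000u)   /* hypothetical accelerator base */
#define ACCEL_IN    0                              /* word offsets (assumed) */
#define ACCEL_START 1
#define ACCEL_DONE  2
#define ACCEL_OUT   3

uint32_t transform_sw(uint32_t x)                  /* software-only version on the softcore CPU */
{
    return (x << 3) ^ (x >> 5) ^ 0xA5A5A5A5u;      /* placeholder round function */
}

uint32_t transform_hw(uint32_t x)                  /* same operation offloaded to the fabric */
{
    ACCEL[ACCEL_IN] = x;
    ACCEL[ACCEL_START] = 1;
    while (ACCEL[ACCEL_DONE] == 0)
        ;                                          /* busy-wait for the accelerator */
    return ACCEL[ACCEL_OUT];
}
```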

Are you now ready to start a new data acquisition project? If so, read George Novacek’s article “Project Configuration Control” (p. 58), George Martin’s article “Software & Design File Organization” (p. 62), and Jeff Bachiochi’s article “Flowcharting Made Simple” (p. 66) before hitting your workbench. You’ll find their tips on project organization, planning, and implementation useful and immediately applicable.

Lastly, on behalf of the entire Circuit Cellar/Elektor team, I congratulate the winners of the DesignSpark chipKIT Challenge. Turn to page 32 to learn about Dean Boman’s First Prize-winning energy-monitoring system, as well as the other exceptional projects that placed at the top. The complete projects (abstracts, photos, schematic, and code) for all the winning entries are posted on the DesignSpark chipKIT Challenge website.

FPGA-Based VisualSonic Design Project

The VisualSonic Studio project on display at Design West last week was as innovative as it was fun to watch in operation. The design—which included an Altera DE2-115 FPGA development kit and a Terasic 5-megapixel CMOS Sensor (D5M)—used interactive tokens to control computer-generated music.

The VisualSonic Studio project at Design West 2012 in San Jose, CA (Photo: Circuit Cellar)

I spoke with Allen Houng, Strategic Marketing Manager for Terasic, about the project developed by students from National Taiwan University. He described the overall design, and let me see the Altera kit and Terasic sensor installation.

A view of the kit and sensor (Photo: Circuit Cellar)

Houng also showed me the design in action. To operate the sound system, you simply move the tokens to create the sound effects of your choosing. Below is a video of the project in operation (Source: Terasic’s YouTube channel).