CC279: What’s Ahead in the October Issue

Although we’re still in September, it’s not too early to look ahead to the October issue, which is already available online.

The theme of the issue is signal processing, and contributor Devlin Gualtieri offers an interesting take on that topic.

Gualtieri, who writes a science and technology blog, looks at how to improve microprocessor audio.

“We’re immersed in a world of beeps and boops,” Gualtieri says. “Every digital knick-knack we own, from cell phones to microwave ovens, seeks to attract our attention.”

“Many simple microprocessor circuits need to generate one, or several, audio alert signals,” he adds. “The designer usually uses an easily programmed square wave voltage on an output pin that feeds a simple piezoelectric speaker element. It works, but it sounds awful. How can microprocessor audio be improved in some simple ways?”

Gualtieri’s article explains how analog circuitry and sine waves are often a better option than digital circuitry and square waves for audio alert signals.
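
The contrast can be sketched in a few lines of C. This sketch is not from Gualtieri’s article: it steps through a sine lookup table instead of toggling a pin, and the dac_write() routine, table size, and scaling are hypothetical stand-ins for whatever DAC or filtered PWM output a given microcontroller provides.

```c
/* Hedged sketch: generating one cycle of an audio tone from a sine
 * lookup table instead of a raw square wave. dac_write() stands in for
 * whatever DAC or filtered-PWM output a real microcontroller offers. */
#include <math.h>
#include <stdio.h>
#include <stdint.h>

#define TABLE_LEN 32

static uint8_t sine_table[TABLE_LEN];

static void dac_write(uint8_t sample)
{
    /* Placeholder: on real hardware this would load a DAC register
     * or update a PWM duty cycle; here it just prints the value. */
    printf("%u\n", sample);
}

int main(void)
{
    /* Precompute one cycle of a sine wave, scaled to 8 bits. */
    for (int i = 0; i < TABLE_LEN; i++)
        sine_table[i] = (uint8_t)(127.5 + 127.5 *
                        sin(2.0 * 3.14159265358979 * i / TABLE_LEN));

    /* Stepping through the table at a fixed rate sets the tone's pitch;
     * a square wave would simply alternate between 0 and 255 instead. */
    for (int i = 0; i < TABLE_LEN; i++)
        dac_write(sine_table[i]);

    return 0;
}
```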

Another article that touches on signal processing is columnist Colin O’Flynn’s look at advanced methods of debugging an FPGA design. It’s the debut of his new column, Programmable Logic in Practice.

“This first article introduces the use of integrated logic analyzers, which provide an internal view of your running hardware,” O’Flynn says. “My next article will continue this topic and show you how hardware co-simulation enables you to seamlessly split the verification between real hardware interfacing to external devices and simulated hardware on your computer.”

You can find videos and other material that complement Colin’s articles on his website.

Another October issue highlight is a real prize-winner. The issue features the first installment of a two-part series on the SunSeeker Solar Array Tracker, which won third place in the 2012 DesignSpark chipKIT challenge overseen by Circuit Cellar.

The SunSeeker, designed by Canadian Graig Pearen, uses a Microchip Technology chipKIT Max32 and tracks, monitors, and adjusts PV arrays based on weather and sky conditions. It measures PV and air temperature, compiles statistics, and communicates with a local server to facilitate software algorithm development. Diagnostic software monitors the design’s motors to show both movement and position.

Pearen, semi-retired from the telecommunications industry and a part-time solar technician, is still refining his original design.

“Over the next two to three years of development and field testing, I plan for it to evolve into a full-featured ‘bells-and-whistles’ solar array tracker,” Pearen says. “I added a few enhancements as the software evolved, but I will develop most of the additional features later.”

Walter Krawec, a PhD student studying Computer Science at the Stevens Institute of Technology in Hoboken, NJ, wraps up his two-part series on “Experiments in Developmental Robotics.”

In Part 1, he introduced readers to the basics of artificial neural networks (ANNs) in robots and outlined an architecture for a robot’s evolving neural network (ENN), short-term memory system, and simple reflexes and instincts. In Part 2, Krawec discusses the reflex and instinct system that rewards an ENN.

“I’ll also explain the ‘decision path’ system, which rewards/penalizes chains of actions,” he says. “Finally, I’ll describe the experiments we’ve run demonstrating this architecture in a simulated environment.”
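
As a rough, hypothetical illustration of that reward/penalty idea (this is not Krawec’s code), the sketch below credits the most recent action in a stored decision path with the full reward and gives earlier actions progressively smaller shares; the decay factor, path length, and action IDs are arbitrary.

```c
/* Hedged sketch of a "decision path" reward scheme: when an outcome is
 * rewarded or penalized, credit is propagated back along the chain of
 * recent actions, with older actions receiving a smaller share. */
#include <stdio.h>

#define PATH_LEN 5

int main(void)
{
    int    path[PATH_LEN] = {2, 0, 3, 1, 4};  /* recent action IDs      */
    double weight[8]      = {0};              /* per-action weights     */
    double reward         = 1.0;              /* +1 reward, -1 penalty  */
    double decay          = 0.5;              /* arbitrary decay factor */

    /* The most recent action gets the full reward; earlier ones less. */
    double share = reward;
    for (int i = PATH_LEN - 1; i >= 0; i--) {
        weight[path[i]] += share;
        share *= decay;
    }

    for (int a = 0; a < 8; a++)
        printf("action %d weight %.3f\n", a, weight[a]);
    return 0;
}
```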

Videos of some of Krawec’s robot simulations can be found on his website.

Speaking of robotics, in this issue columnist Jeff Bachiochi introduces readers to the free robot control programming language RobotBASIC and explains how to use it with an integrated simulator for robot communication.

Other columnists also take on a number of very practical subjects. Robert Lacoste explains how inexpensive bipolar junction transistors (BJTs) can be helpful in many designs and outlines how to use one to build an amplifier.

George Novacek, who has found that the cost of battery packs accounts for half the purchase price of his equipment, explains how to build a back-up power source with a lead-acid battery and a charger.

“Building a good battery charger is easy these days because there are many ICs specifically designed for battery chargers,” he says.

Columnist Bob Japenga begins a new series looking at file systems available on Linux for embedded systems.

“Although you could build a Linux system without a file system, most Linux systems will have some sort of file system,” Japenga says. “And there are various types. There are file systems that do not retain their data (volatile) across power outages (i.e., RAM drives). There are nonvolatile read-only file systems that cannot be changed (e.g., CRAMFS). And there are nonvolatile read/write file systems.”

Linux provides all three types of file systems, Japenga says, and his series will address all of them.
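
To make the volatile category concrete (this example is not from Japenga’s series), the Linux-only sketch below mounts a tmpfs RAM drive from C; anything written there disappears at power-off. It assumes the mount point /mnt/ramdisk already exists and that the program runs with root privileges.

```c
/* Hedged sketch: mounting a volatile RAM-backed file system (tmpfs)
 * from C on Linux. Mount point and size option are arbitrary. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* tmpfs lives in RAM: its contents are lost across power cycles. */
    if (mount("tmpfs", "/mnt/ramdisk", "tmpfs", 0, "size=4m") != 0) {
        perror("mount tmpfs");
        return 1;
    }
    printf("RAM drive mounted at /mnt/ramdisk (volatile)\n");
    return 0;
}
```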

Finally, the magazine offers some special features, including an interview with Alenka Zajić, who teaches signal processing and electromagnetics at Georgia Institute of Technology’s School of Electrical and Computer Engineering. Also, two North Carolina State University researchers write about advances in 3-D liquid metal printing and possible applications such as electrical wires that can “heal” themselves after being severed.

For more, check out Circuit Cellar’s October issue.

New CC Columnist to Focus on Programmable Logic

We’d like to introduce you to Colin O’Flynn, who will begin writing a bimonthly column titled “Programmable Logic In Practice” for Circuit Cellar beginning with our October issue.

Colin at his workbench

You may have already “met.” Since 2002, Circuit Cellar has published five articles from this Canadian electrical engineer, who is also a lecturer at Dalhousie University in Halifax, Nova Scotia, and a product developer.

Colin has been fascinated with embedded electronics since he was a child and his father gave him a few small “learn to solder” kits. Since then, he has constructed many projects, earned his master’s in applied science from Dalhousie, pursued graduate studies in cryptographic systems, and become an engineering consultant. Over the years, he has developed broad skills ranging from electronic assembly (including SMDs), to FPGA design in Verilog and VHDL, to high-speed PCB design.

And he likes to share what he knows, which makes him a good choice for Circuit Cellar.

Binary Explorer

One of his most recent projects was a Binary Explorer Board, which he developed for use in teaching a digital logic course at Dalhousie. It fulfilled his (and his students’) need for a simple programmable logic board with an integrated programmer, several switches and LEDs, and an integrated breadboard. He is working to develop the effective and affordable board into a product.

In the meantime, he is also planning some interesting column topics for Circuit Cellar.

He is interested in a range of possible topics, including circuit board layout for high-speed FPGAs; different methods of configuring an FPGA; design of memory into FPGA circuits; use of tools such as Altera’s OpenCL libraries to design programmable logic using C code; use of vendor-provided and open-source soft-core microcontrollers; design of a PCI-Express interface for your FPGA; and addition of a USB 3.0 interface to your FPGA.

Colin’s LabJack-based battery tester

That’s just a short list reflecting his interest in programmable logic technologies, which have become increasingly popular with engineers and designers.

To learn more about Colin’s interests, check out our February interview with him, his YouTube channel of technical videos, and, of course, his upcoming columns in Circuit Cellar.

The Future of Data Acquisition Technology

Maurizio Di Paolo Emilio

By Maurizio Di Paolo Emilio

Data acquisition is a necessity, which is why data acquisition systems and software applications are essential tools in a variety of fields. For instance, research scientists rely on data acquisition tools for testing and measuring their laboratory-based projects. Therefore, as a data acquisition system designer, you must have an in-depth understanding of each part of the systems and programs you create.

I mainly design data acquisition software for physics-related experiments and industrial applications. Today’s complicated physics experiments require highly complex data acquisition systems and software that are capable of managing large amounts of information. Many of the systems require high-speed connections and digital recording. And they must be reconfigurable. The signals involved are often hard to characterize and analyze on a real-time display because of their high frequencies, large dynamic range, and gradual changes.

Data acquisition software typically offers both a text-based user interface (TUI) built around an ASCII configuration file and a graphical user interface (GUI) that is generally accessible from any web browser. Both interfaces enable data acquisition system management and customization, and you don’t need to recompile the sources. This means even inexperienced programmers can have full acquisition control.
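
A minimal sketch of that ASCII-configuration idea follows; the file name daq.conf and the keys shown are hypothetical. Acquisition parameters are read at startup from key=value lines, so they can be changed without recompiling the source.

```c
/* Hedged sketch: reading key=value pairs from an ASCII configuration
 * file so acquisition settings can change without recompilation.
 * Assumes lines like "sample_rate=48000" with no surrounding spaces. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    long sample_rate = 1000;   /* defaults, overridden by the file */
    int  channels    = 1;

    FILE *fp = fopen("daq.conf", "r");   /* hypothetical file name */
    if (fp == NULL) {
        perror("daq.conf");
        return 1;
    }

    char line[128];
    while (fgets(line, sizeof line, fp) != NULL) {
        char key[64], value[64];
        if (sscanf(line, "%63[^=]=%63s", key, value) == 2) {
            if (strcmp(key, "sample_rate") == 0)
                sample_rate = atol(value);
            else if (strcmp(key, "channels") == 0)
                channels = atoi(value);
        }
    }
    fclose(fp);

    printf("sample_rate=%ld Hz, channels=%d\n", sample_rate, channels);
    return 0;
}
```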

Well-designed data acquisition and control software should be able to quickly recover from instrumentation failures and power outages without losing any data. Data acquisition software must provide a high-level language for algorithm design. Moreover, it requires data-archiving capability for verifying data integrity.

You have many data acquisition software options. One is programmable software that uses a language such as C. Other data acquisition software packages enable you to design custom instrumentation suited to specific applications (e.g., National Instruments’s LabVIEW and MathWorks’s MATLAB).

In addition to data acquisition software design, I’ve also been developing embedded data acquisition systems with open-source software to manage user-developed applications. The idea is to have credit-card-sized embedded data acquisition systems managing industrial systems using open-source software written in C. I’m using an ARM processor that will give me the ability to add small boards for specific applications (e.g., a board to manage data transmission via Wi-Fi or GSM).

A data acquisition system’s complexity tends to increase with the number of physical properties it must measure. Resolution and accuracy requirements also affect a system’s complexity. To eliminate cabling and provide for more modularity, you can combine data acquisition capabilities and signal conditioning in one device.

Recent developments in the field of fiber-optic communications have shown that longer data acquisition transmission distances can introduce errors. Electrical isolation is also an important topic. The goal is to eliminate ground loops (a common problem with single-ended measurements), both for accuracy and for protection from voltage spikes.

During the last year, some new technological developments have proven beneficial to the overall efficacy of data acquisition applications. For instance, advances in USB technology have made data acquisition and storage simpler and more efficient than ever (think “plug and play”). Advances in wireless technology have also made data transmission faster and more secure. This means improved data acquisition system and software technologies will also figure prominently in smartphone design and usage.

If you look to the future, consumer demand for mobile computing systems will only increase, and this will require tablet computers to feature improved data acquisition and storage capabilities. Having the ability to transmit, receive, and store larger amounts of data with tablets will become increasingly important to consumers as time goes on. There are three main things to consider when creating a data acquisition-related application for a tablet: hardware connectivity (tablets offer few control options, such as Wi-Fi and Bluetooth); programming language support (many tablets support Android apps written in Java, but C and LabVIEW are not supported by Android or Apple’s iOS); and device driver availability (device drivers provide a high-level way to easily and reliably exercise a data acquisition board’s functionality). USB, a common DAQ bus, is available on some tablets; otherwise, an adapter is required. In these instances, moving a data acquisition system to a tablet requires extra attention.

For all of the aforementioned reasons, I think field-programmable gate arrays (FPGAs) will figure prominently in the evolution of data acquisition system technology. The flexibility of FPGAs makes them ideal for custom data acquisition systems and embedded applications.

The Future of FPGAs (CC 25th Anniversary Preview)

Field-programmable gate arrays (FPGAs) have been around for more than two decades. What does the future hold for this technology? According to Halifax, Canada-based electrical engineering consultant Colin O’Flynn, current FPGA-related research and recent innovations seem to presage a coming revolution in digital system design, and this could lead to strikingly fast advances in several fields of engineering.

In the upcoming Circuit Cellar 25th Anniversary Issue—which is slated for publication in early 2013—O’Flynn shares his thoughts on the future of FPGA technology. He writes:

Field-programmable gate arrays (FPGAs) provide a powerful means to design digital systems (see Photo 1). Rather than writing a software program, you can design a number of hardware blocks to perform your tasks at blazing speeds…

Photo 1: Source: C. O’Flynn, CC 25th Anniversary issue

Microcontrollers have long played the peripheral game: the integration of easy-to-use dedicated peripherals onto the same physical chip as your digital core. FPGAs, it would seem, have no use for dedicated logic, since you can just design everything exactly as you desire. But dedicated logic has its advantages.

Beyond technical advantages, such as lower power consumption or smaller area with dedicated cores compared to programmable cores, dedicated cores can also reduce development effort. For example, current technology sees FPGAs with integrated high-end ARM cores, capable of running Linux on the integrated hard-core. Anyone familiar with setting up Linux on an ARM-based microprocessor can use this, without needing to learn about how one develops cores and peripherals on the FPGA itself.

Beyond integrating digital cores to simplify development, you can expect to see the integration of analog peripherals. Looking at the microcontroller market, you can find a variety of tightly integrated SoC devices with analog and digital on a single device. For instance, a variety of radio devices contain a complete RF front-end combined with a digital microcontroller. While current FPGA devices offer very limited analog peripherals (most have none), having an FPGA with an integrated high-speed ADC or DAC would be the making of a highly flexible radio-on-a-chip platform. The high development cost and lack of a current market have meant this remains only an interesting idea. To see where this market comes from, let’s look at some applications for such an FPGA.

Software-Defined Radio
Software-defined radio (SDR) takes a curious approach to receiving radio waves: digitize it all, and let software sort it out. The radio front-end is simple. Typically, the center frequency of interest is just downshifted to the baseband, everything else is filtered out, and a high-speed ADC digitizes it. All the demodulation and decoding can then be done in software. Naturally, this can require some fast sampling speeds. Anything from 20 to 500 MSps is fairly typical for these systems. Dealing with this much data is well suited to FPGAs, since one can generate blocks to perform all the different functions that operate simultaneously…
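
As a rough sketch of that flow (not taken from O’Flynn’s article), the snippet below mixes synthesized “ADC” samples with a numerically controlled oscillator to shift a carrier to baseband, then applies a crude boxcar low-pass filter with decimation; the sample rate, carrier frequency, and filter length are arbitrary.

```c
/* Hedged sketch of the SDR receive chain in software: digitize (here,
 * synthesized samples stand in for an ADC), mix to baseband with a
 * numerically controlled oscillator (NCO), then low-pass filter and
 * decimate. All of the numbers below are arbitrary. */
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define N  4096
#define FS 100000.0   /* pretend ADC sample rate, Hz       */
#define FC 12000.0    /* frequency we want shifted to 0 Hz */

int main(void)
{
    static double rf[N], i_bb[N], q_bb[N];

    /* Stand-in for ADC output: a carrier at FC. */
    for (int n = 0; n < N; n++)
        rf[n] = cos(2.0 * PI * FC * n / FS);

    /* Digital downconversion: multiply by the NCO to form I/Q at baseband. */
    for (int n = 0; n < N; n++) {
        i_bb[n] = rf[n] *  cos(2.0 * PI * FC * n / FS);
        q_bb[n] = rf[n] * -sin(2.0 * PI * FC * n / FS);
    }

    /* Crude low-pass filter: boxcar average over 16 samples, decimate by 16. */
    for (int n = 0; n + 16 <= N; n += 16) {
        double i_avg = 0.0, q_avg = 0.0;
        for (int k = 0; k < 16; k++) {
            i_avg += i_bb[n + k];
            q_avg += q_bb[n + k];
        }
        printf("%f %f\n", i_avg / 16.0, q_avg / 16.0);
    }
    return 0;
}
```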

Circuit Cellar’s 25th Anniversary Issue will be available in early 2013. Stay tuned for more updates on the issue’s content.

Q&A: Hai (Helen) Li (Academic, Embedded System Researcher)

Helen Li came to the U.S. from China in 2000 to study for a PhD at Purdue University. Following graduation she worked for Intel, Qualcomm, and Seagate. After about five years of working in industry, she transitioned to academia by taking a position at the Polytechnic Institute of New York University, where she teaches courses such as circuit design (“Introduction to VLSI”), advanced computer architecture (“VLSI System and Architecture Design”), and system-level applications (“Real-Time Embedded System Design”).

Hai (Helen) Li

In a recent interview Li described her background and provided details about her research relating to spin-transfer torque RAM-based memory hierarchy and memristor-based computing architecture.

An abridged version of the interview follows.

NAN: What were some of your most notable experiences working for Intel, Qualcomm, and Seagate?

HELEN: The industrial working experience has been very valuable to my whole career. At Seagate, I led a design team on a test chip for emerging memory technologies. Communication and understanding between device engineers and design communities is extremely important. The joint efforts of all the related disciplines (not just one particular area anymore) became necessary. The concept of cross-layer (device/circuit/architecture/system) co-optimization and design continues in my research career.

NAN: In 2009, you transitioned from an engineering career to a career teaching electrical and computer engineering at the Polytechnic Institute of New York University (NYU). What prompted this change?

HELEN: After five years of working at various industrial companies on wireless communication, computer systems, and storage, I realized I am more interested in independent research and teaching. After careful consideration, I decided to return to an academic career and later joined the NYU faculty.

NAN: How long have you been teaching at the Polytechnic Institute of NYU? What courses do you currently teach and what do you enjoy most about teaching?

HELEN: I have been teaching at NYU-Poly since September 2009. My classes cover a wide range of computer engineering, from basic circuit design (“Introduction to VLSI”), to advanced computer architecture (“VLSI System and Architecture Design”), to system-level applications (“Real-Time Embedded System Design”).

Though I have been teaching at NYU-Poly, I will be taking a one-year leave of absence from fall 2012 to summer 2013. During this time, I will continue my research on very large-scale integration (VLSI) and computer engineering at the University of Pittsburgh.

I enjoy the interaction and discussions with students. They are very smart and creative. Those discussions always inspire new ideas. I learn so much from students.

Helen and her students are working on developing a 16-Kb STT-RAM test chip.

NAN: You’ve received several grants from institutions including the National Science Foundation and the Air Force Research Lab to fund your embedded systems endeavors. Tell us a little about some of these research projects.

HELEN: The objective of the research for “CAREER: STT-RAM-based Memory Hierarchy and Management in Embedded Systems” is to develop an innovative spin-transfer torque random access memory (STT-RAM)-based memory hierarchy to meet the demands of modern embedded systems for low-power, fast-speed, and high-density on-chip data storage.

This research provides a comprehensive design package for efficiently integrating STT-RAM into modern embedded system designs and offers unparalleled performance and power advantages. The research innovations help bridge and educate system architects and circuit designers. The developed techniques can be directly transferred to industry applications through close collaborations with several industry partners, and directly impact future embedded systems. The activities in the collaboration also include tutorials at major conferences on the technical aspects of the projects and new course development.

The main goal of the research for “CSR: Small Collaborative Research: Cross-Layer Design Techniques for Robustness of the Next-Generation Nonvolatile Memories” is to develop design technologies that can alleviate the general robustness issues of next-generation nonvolatile memories (NVMs) while maintaining and even improving generic memory specifications (e.g., density, power, and performance). Comprehensive solutions are integrated from architecture, circuit, and device layers for the improvement on the density, cost, and reliability of emerging nonvolatile memories.

The broader impact of the research lies in revealing the importance of applying cross-layer design techniques to resolve the robustness issues of next-generation NVMs and in drawing attention to the robust design context.

The research for “Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques” was inspired by memristors, which have recently attracted increased attention since the first real device was discovered by Hewlett-Packard Laboratories (HP Labs) in 2008. Their distinctive memristive characteristic creates great potential in future computing system design. Our objective is to investigate process-variation-aware memristor modeling, design methodology for memristor-based computing architecture, and exploitation of circuit techniques to improve reliability and density.

The scope of this effort is to build an integrated design environment for memristor-based computing architecture, which will provide memristor modeling and design flow to circuit and architecture designers. We will also develop and implement circuit techniques to achieve a more reliable and efficient system.

An electric car model controlled by programmable emerging memories is in the developmental stages.

NAN: What types of projects are you and your students currently working on?

HELEN: Our major efforts are on device modeling, circuit design techniques, and novel architectures for computer systems and embedded systems. We primarily focus on the potentials of emerging devices and leveraging their advantages. Two of our latest projects are a 16-Kb STT-RAM test chip and an electric car model controlled by programmable emerging memories.

The complete interview appears in Circuit Cellar 267 (October 2012).