A Quiet Place for Soldering and Software Design

Senior software engineer Carlo Tauraso, of Trieste, Italy, has designed his home workspace to be “a distraction-free area where tools, manuals, and computer are at your fingertips.”

Tauraso, who wrote his first assembly code in the 1980s for the Sinclair Research ZX Spectrum, now works on developing firmware for network devices and microinterfaces for a variety of European companies. Several of his articles and programming courses have been published in Italy, France, Spain, and the US. Three of his articles have appeared in Circuit Cellar since 2008.

Photo 1: This workstation is neatly divided into a soldering/assembling area on the left and a developing/programming area on the right.

Tauraso keeps an orderly and, most importantly, quiet work area that helps him stay focused on his designs.

This is my “magic” designer workspace. It’s not simple to make an environment that’s perfectly suited to you. When I work and study I need silence.

I am a software engineer, so when designing I always divide the work into two main parts: the analysis and the implementation. I decided, therefore, to separate my workspace into two areas: the developing/programming area on the right and the soldering/assembling area on the left (see Photo 1). When I do one activity or the other, I physically move to the corresponding area of the table. Assembling and soldering are manual activities that relax me. Programming, on the other hand, is often a rather complex activity that requires a lot more concentration.

Photo 2: The marble slab at the right of Tauraso’s assembling/soldering area protects the table surface. The optical inspection camera nearby helps him work with tiny ICs.

The assembling/soldering area is carefully set up to keep all of Tauraso’s tools within easy reach.

I fixed a square marble slab on the table so I can solder without fear of ruining the wood surface (see Photo 2). As you can see, I use a hot-air solder station and a conventional soldering iron. Today’s ICs are very small, so I also installed a camera for optical inspection (the black cylinder with the blue stripe). On the right, there are 12 outlets, each with its own switch. Everything is ready and at your fingertips!

Photo 3: This developing and programming space, with its three small computers, is called “the little Hydra.”

The workspace’s developing and programming area makes it easy to multitask (see Photo 3).

In the foreground you can see a network of three small computers that I call “the little Hydra” in honor of the object-based OS developed at Carnegie Mellon University in Pittsburgh, PA, during the ’70s. The HYDRA project sought to demonstrate the cost-performance advantages of multiprocessors based on an inexpensive minicomputer. I used the same philosophy, so I have connected three Mini-ITX motherboards. Here I can test network programming with real hardware—one machine as a server, one as a client, and one as a network sniffer or an attacker—while I develop a Windows front end and the [Microchip Technology] PIC firmware, all while chatting with my girlfriend.

Senior software engineer Tauraso has created a quiet work area with all his tools close at hand.

Circuit Cellar will be publishing Tauraso’s article about a wireless thermal monitoring system based on the ANT+ protocol in an upcoming issue. In the meantime, you can follow Tauraso on Twitter @CarloTauraso.

Traveling With a “Portable Workspace”

As a freelance engineer, Raul Alvarez spends a lot of time on the go. He says that for the last four or five years he has been traveling for work and family reasons, so he never stays in one place long enough to set up a proper workspace. “Whenever I need to move again, I just pack whatever I can: boards, modules, components, cables, and so forth, and then I’m good to go,” he explains.

Alvarez sits at his “current” workstation.

He continued by saying:

In my case, there’s not much of a workspace to show because my workspace is whichever desk I have at hand in a given location. My tools are all the tools that I can fit into my traveling backpack, along with my software tools that are installed in my laptop.

Because in my personal projects I mostly work with microcontroller boards, modular components, and firmware, I think it hasn’t bothered me until now not to have fancier (and useful) tools such as a bench oscilloscope, a logic analyzer, or a spectrum analyzer. I just try to work with whatever I have at hand because, well, I don’t have much choice.

Given my circumstances, probably the most useful tools I have for debugging embedded hardware and firmware are a good old UART port, a multimeter, and a bunch of LEDs. For the UART interface I use a Future Technology Devices International FT232-based UART-to-USB interface board and the Tera Term serial terminal software.
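
On the PC side, that kind of serial logging takes only a few lines. Here is a minimal sketch, assuming Python with the pyserial package, a 115,200-bps target, and an FT232 that enumerates as /dev/ttyUSB0 (under Windows, where Alvarez runs Tera Term, it would appear as a COM port instead):

```python
# Minimal UART debug logger in the spirit of the FT232-plus-terminal
# setup described above. Port name and baud rate are assumptions;
# adjust them to match the actual adapter and target firmware.
import serial  # pyserial package

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
try:
    while True:
        line = port.readline()  # one debug line from the target MCU
        if line:
            print(line.decode("ascii", errors="replace").rstrip())
finally:
    port.close()
```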

Currently, I’m working mostly with Microchip Technology PIC and ARM microcontrollers. So for my PIC projects my tiny Microchip Technology PICkit 3 Programmer/Debugger usually saves the day.

Regarding ARM, I generally use some of the new low-cost ARM development boards that include programming/debugging interfaces. I carry an LPC1769 LPCXpresso board, an mbed board, three STMicroelectronics Discovery boards (Cortex-M0, Cortex-M3, and Cortex-M4), my STMicroelectronics STM32 Primer2, three Texas Instruments LaunchPads (the MSP430, the Piccolo, and the Stellaris), and the following Linux boards: two BeagleBoard.org BeagleBones (the gray one and a BeagleBone Black), a Cubieboard, an Odroid-X2, and a Raspberry Pi Model B.

Additionally, I always carry an Arduino UNO, a Digilent chipKIT Max32 Arduino-compatible board (which I mostly use with the MPLAB X IDE and “regular” C language), and a self-made Parallax Propeller microcontroller board. I also have a TP-LINK TL-WR703N Wi-Fi 3G mini router flashed with OpenWrt that enables me to experiment with Wi-Fi and Ethernet and to tinker with its embedded Linux environment. It also provides me with Internet access via a 3G modem.

Not a bad setup for someone on the go. Alvarez’s “portable workstation” includes ICs, resistors, and capacitors, among other things. He says his most useful tools are a UART port, a multimeter, and some LEDs.

In three or four small boxes I carry a lot of sensors, modules, ICs, resistors, capacitors, crystals, jumper cables, breadboard strips, and some DC-DC converter/regulator boards for supplying power to my circuits. I also carry a small video camera for shooting my video tutorials, which I publish from time to time at my website (www.raulalvarez.net). I have installed in my laptop TechSmith’s Camtasia for screen capture and Sony Vegas for editing the final video and audio.

Some IDEs that I currently have installed on my laptop are LPCXpresso, Texas Instruments’ Code Composer Studio, IAR EW for Renesas RL78 and 8051, Ride7, Keil uVision for ARM, MPLAB X, and the Arduino IDE, among others. For PC coding I have installed Eclipse, MS Visual Studio, GNAT Programming Studio (I like to tinker with Ada from time to time), Qt Creator, Python IDLE, MATLAB, and Octave. For schematics and PCB design I mostly use CadSoft’s EAGLE, ExpressPCB, DesignSpark PCB, and sometimes KiCad.

Traveling with my portable rig isn’t particularly pleasant. I always get delayed at airport security and customs checkpoints, I get questioned a lot, especially about my circuit boards and prototypes, and I almost always have to buy a new set of screwdrivers after arriving at my destination. Luckily for me, my nomadic lifestyle is about to come to an end, and I will finally be able to settle down in my hometown of Cochabamba, Bolivia. The first two things I’m planning to do are buy a really big workbench and a decent digital oscilloscope.

Alvarez’s article “The Home Energy Gateway: Remotely Control and Monitor Household Devices” appeared in Circuit Cellar’s February issue. For more information about Alvarez, visit his website or follow him on Twitter @RaulAlvarezT.

Client Profile: ImageCraft Creations, Inc.

CorStarter prototyping board

2625 Middlefield Road, #685,
Palo Alto, CA 94306

CONTACT: Richard Man,
richard@imagecraft.com
imagecraft.com

EMBEDDED PRODUCTS: ImageCraft Version 8 C compilers with an IDE for Atmel AVR and Cortex-M devices are full-featured toolsets backed by strong support.

CorStarter-STM32 is a complete C hardware and software kit for STM32 Cortex-M3 devices. The $99 kit includes a JTAG pod for programming and debugging.

ImageCraft products offer excellent features and support at prices that fit within most budgets. ImageCraft compiler toolsets are used by professionals who demand excellent code quality, full features, and diligent, timely support.

The small, fast compilers provide helpful informational messages and include an IDE with an application builder (Atmel AVR) and debugger (Cortex-M), whole-program code compression technology, and MISRA safety checks. ImageCraft offers two editions that cost $249 and $499.

The demo is fully functional for 45 days, so it is easy to test it yourself.

EXCLUSIVE OFFER: For a limited time, ImageCraft is offering Circuit Cellar readers $40 off the Standard and PRO versions of its Atmel AVR and Cortex-M compiler toolsets. To take advantage of this offer, please visit http://imagecraft.com/xyzzy.html.

Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

Innovation Space: A Workspace for Prototyping, Programming, and Writing

RobotBASIC co-developer John Blankenship accomplishes a lot in his “cluttered” Vero Beach, FL-based workspace.

John Blankenship in his workspace, where he develops, designs, and writes.

He develops software, designs hardware, packages robot parts for sale, and writes books and magazine articles. Thus, his workspace isn’t always neat and tidy, he explained.

“The walls are covered with shelves filled with numerous books, a wide variety of parts, miscellaneous tools, several pieces of test equipment, and many robot prototypes,” he noted.

“Most people would probably find my space cluttered and confining, but for me it’s comforting knowing everything I might need is close at hand.”

Blankenship co-developed RobotBASIC with Samuel Mishal, a friend and talented programmer. The introductory programming language is geared toward high school-level students.

This PCB makes it easy to build a RobotBASIC-compatible robot.

You can read Blankenship’s article, “Using a Simulated Robot to Decrease Development Time,” in the March 2014 edition of Circuit Cellar. He details how implementing a robotic simulation can reduce development time. Here’s an excerpt:

If you have ever built a robot, you know the physical construction and electronic aspects are only the first step. The real work begins when you start programming your creation.

A typical starting point is to develop a library of subroutines that implement basic behaviors. Later, the routines can be combined to create more complex behaviors and eventually full-blown applications. For example, navigational skills (e.g., hugging a wall, following a line, or finding a beacon) can serve as basic building blocks for tasks such as mowing a yard, finding a charging station, or delivering drinks to guests at a party. Developing basic behaviors can be difficult, though, especially if they must work for a variety of situations. For instance, a behavior that enables a robot to traverse a hallway to find a specified doorway and pass through it should work properly with different-width hallways and doorways. Furthermore, the robot should at least attempt to autonomously contend with problems arising from the imprecise movements associated with most hobby robots.

Such problems can generally be solved with a closed-loop control system that continually modifies the robot’s movements based on sensor readings. Unfortunately, sensor readings in a real-world environment are often just as flawed as the robot’s movements. For example, stray reflections from ultrasonic or infrared sensors can produce erroneous sensor readings. Even when the sensors are reading correctly, faulty data can be obtained due to unexpected environmental conditions. These types of problems are generally random and are therefore difficult to detect and identify because the offending situations cannot easily be duplicated. A robot simulator can be a valuable tool in such situations.

Do you want to share images of your workspace, hackspace, or “circuit cellar”? Send your images and space info to editor@circuitcellar.com.

Q&A: Scott Garman, Technical Evangelist

Scott Garman is more than just a Linux software engineer. He is also heavily involved with the Yocto Project, an open-source collaboration that provides tools for the embedded Linux industry. In 2013, Scott helped Intel launch the MinnowBoard, the company’s first open-hardware SBC. —Nan Price, Associate Editor

Scott Garman

NAN: Describe your current position at Intel. What types of projects have you developed?

SCOTT: I’ve worked at Intel’s Open Source Technology Center for just about four years. I began as an embedded Linux software engineer working on the Yocto Project, and within the last year I moved into a technical evangelism role representing Intel’s involvement with the MinnowBoard.

Before working at Intel, my background was in developing audio products based on embedded Linux for both consumer and industrial markets. I also started my career as a Linux system administrator in academic computing for a particle physics group.

Scott was involved with an Intel MinnowBoard robotics and computer vision demo, which took place at LinuxCon Japan in May 2013.

I’m definitely a generalist when it comes to working with Linux. I tend to bounce around between things that don’t always get the attention they need, whether it is security, developer training, or community outreach.

More specifically, I’ve developed and maintained parallel computing clusters, created sound-level management systems used at concert stadiums, worked on multi-room home audio media servers and touchscreen control systems, dug into the dark areas of the Autotools and embedded Linux build systems, and developed fun conference demos involving robotics and computer vision. I feel very fortunate to be involved with embedded Linux at this point in history—these are very exciting times!

Scott is shown working on an Intel MinnowBoard demo, which was built around an OWI Robotic Arm.

NAN: Can you tell us a little more about your involvement with the Yocto Project (www.yoctoproject.org)?

SCOTT: The Yocto Project is an effort to reduce the amount of fragmentation in the embedded Linux industry. It is centered on the OpenEmbedded build system, which offers a tremendous amount of flexibility in how you can create embedded Linux distros. It gives you the ability to customize nearly every policy of your embedded Linux system, such as which compiler optimizations you want or which binary package format you need to use. Its killer feature is a layer-based architecture that makes it easy to reuse your code to develop embedded applications that can run on multiple hardware platforms by just swapping out the board support package (BSP) layer and issuing a rebuild command.

New releases of the build system come out twice a year, in April and October.

Here, the OWI Robotic Arm is being assembled.

I’ve maintained various user space recipes (i.e., software components) within OpenEmbedded (e.g., sudo, openssh, etc.). I’ve also made various improvements to our emulation environment, which enables you to run QEMU and test your Linux images without having to install them on hardware.

I created the first version of a security tracking system to monitor Common Vulnerabilities and Exposures (CVE) reports that are relevant to recipes we maintain. I also developed training materials for new developers getting started with the Yocto Project, including a very popular introductory screencast, “Getting Started with the Yocto Project—New Developer Screencast Tutorial.”

NAN: Intel recently introduced the MinnowBoard SBC. Describe the board’s components and uses.

SCOTT: The MinnowBoard is based on Intel’s Queens Bay platform, which pairs a Tunnel Creek Atom CPU (the E640 running at 1 GHz) with the Topcliff Platform controller hub. The board has 1 GB of RAM and includes PCI Express, which powers our SATA disk support and gigabit Ethernet. It’s an SBC that’s well suited for embedded applications that can use that extra CPU and especially I/O performance.

Scott doesn’t have a dedicated workbench or garage. He says he tends to just clear off his desk, lay down some cardboard, and work on things such as the Trippy RGB Waves Kit, which is shown.

The MinnowBoard also has the embedded bus standards you’d expect, including GPIO, I2C, SPI, and even CAN (used in automotive applications) support. We have an expansion connector on the board where we route these buses, as well as two lanes of PCI Express for custom high-speed I/O expansion.

There are countless things you can do with the MinnowBoard, but I’ve found it is especially well suited for projects where you want to combine embedded hardware with computing applications that benefit from higher performance (e.g., robots that use computer vision, central hubs for home-automation projects, or networked video-streaming appliances).

And of course it’s open hardware, which means the schematics, Gerber files, and other design files are available under a Creative Commons license. This makes it attractive for companies that want to customize the board for a commercial product; educational environments, where students can learn how boards like this are designed; or for those who want an open environment to interface their hardware projects.

I created a MinnowBoard embedded Linux board demo involving an OWI Robotic Arm. You can watch a YouTube video to see how it works.

NAN: What compelled Intel to make the MinnowBoard open hardware?

SCOTT: The main motivation for the MinnowBoard was to create an affordable Atom-based development platform for the Yocto Project. We also felt it was a great opportunity to try to release the board’s design as open hardware. It was exciting to be part of this, because the MinnowBoard is the first Atom-based embedded board to be released as open hardware and reach the market in volume.

Open hardware enables our customers to take the design and build on it in ways we couldn’t anticipate. It’s a concept that is gaining traction within Intel, as can be seen with the announcement of Intel’s open-hardware Galileo project.

NAN: What types of personal projects are you working on?

SCOTT: I’ve recently gone on an electronics kit-building binge. Just getting some practice again with my soldering iron with a well-paced project is a meditative and restorative activity for me.

Scott’s Blinky POV Kit is shown. “I don’t know what I’d do without my PanaVise Jr. [vise] and some alligator clips,” he said.

I worked on one project, the Trippy RGB Waves Kit, which includes an RGB LED and is controlled by a microcontroller. It also has an IR sensor that is intended to detect when you wave your hand over it. This can be used to trigger some behavior of the RGB LED (e.g., cycling the colors). Another project, the Blinky POV Kit, is a row of LEDs that can be programmed to create simple text or logos when you wave the device around, using image persistence.

Below is a completed JeeNode v6 Kit that Scott built one weekend.

My current project is to add some wireless sensors around my home, including temperature sensors and a homebrew security system to monitor when doors get opened using 915-MHz JeeNodes. The JeeNode is a microcontroller paired with a low-power RF transceiver, which is useful for home-automation projects and sensor networks. Of course the central server for collating and reporting sensor data will be a MinnowBoard.

NAN: Tell us about your involvement in the Portland, OR, open-source developer community.

SCOTT: Portland has an amazing community of open-source developers. There is an especially strong community of web application developers, but more people are hacking on hardware nowadays, too. It’s a very social community and we have multiple nights per week where you can show up at a bar and hack on things with people.

This photo was taken in the Open Source Bridge hacker lounge, where people socialize and collaborate on projects. Here someone brought a brainwave-control game. The players are wearing electroencephalography (EEG) readers, which are strapped to their heads. The goal of the game is to use biofeedback to move the floating ball to your opponent’s side of the board.

I’d say it’s a novelty if I weren’t so used to it already—walking into a bar or coffee shop and joining a cluster of friendly people, all with their laptops open. We have coworking spaces, such as Collective Agency, and hackerspaces, such as BrainSilo and Flux (a hackerspace focused on creating a welcoming space for women).

Take a look at Calagator to catch a glimpse of all the open-source and entrepreneurial activity going on in Portland. There are often multiple events going on every night of the week. Calagator itself is a Ruby on Rails application that was frequently developed at the bar gatherings I referred to earlier. We also have technical conferences ranging from the professional OSCON to the more grassroots and intimate Open Source Bridge.

I would unequivocally state that moving to Portland was one of the best things I did for developing a career working with open-source technologies, and in my case, on open-source projects.

TRACE32 Now Supports Xilinx MicroBlaze 8.50.C

The TRACE32 modular hardware and software supports up to 350 different CPUs. The microprocessor development tools now support the latest version of Xilinx’s MicroBlaze, 8.50.c, which is a soft processor core designed for Xilinx FPGAs. The MicroBlaze core is included with Xilinx’s Vivado Design Edition and IDS Embedded Edition.

The TRACE32 tools have supported MicroBlaze for many years by providing efficient and user-friendly debugging at the C or C++ level using the on-chip JTAG interface. This interface also provides code download, flash programming, and quick access to all internal chip peripherals and registers.
Contact Lauterbach for pricing.

Lauterbach GmbH
www.lauterbach.com

Xilinx, Inc.
www.xilinx.com

Client Profile: Digi International, Inc.

Contact: Elizabeth Presson
elizabeth.presson@digi.com

Featured Product: The XBee product family (www.digi.com/xbee) is a series of modular products that make adding wireless technology easy and cost-effective. Whether you need a ZigBee module or a fast multipoint solution, 2.4 GHz or long-range 900 MHz—there’s an XBee to meet your specific requirements.

Digi International XBee Cloud Kit

Product information: Digi now offers the XBee Wi-Fi Cloud Kit (www.digi.com/xbeewificloudkit) for those who want to try the XBee Wi-Fi (XB2B-WFUT-001) with seamless cloud connectivity. The Cloud Kit brings the Internet of Things (IoT) to the popular XBee platform. Built around Digi’s new XBee Wi-Fi module, which fully integrates into the Device Cloud by Etherios, the kit is a simple way for anyone with an interest in M2M and the IoT to build a hardware prototype and integrate it into an Internet-based application. This kit is suitable for electronics engineers, software designers, educators, and innovators.

Exclusive Offer: The XBee Wi-Fi Cloud Kit includes an XBee Wi-Fi module; a development board with a variety of sensors and actuators; loose electronic prototyping parts to make circuits of your own; a free subscription to Device Cloud; fully customizable widgets to monitor and control connected devices; an open-source application that enables two-way communication and control with the development board over the Internet; and cables, accessories, and everything needed to connect to the web. The Cloud Kit costs $149.

Small, Self-Contained GNSS Receiver

TM Series GNSS modules are self-contained, high-performance global navigation satellite system (GNSS) receivers designed for navigation, asset tracking, and positioning applications. Based on the MediaTek chipset, the receivers can simultaneously acquire and track several satellite constellations, including the US GPS, Europe’s GALILEO, Russia’s GLONASS, and Japan’s QZSS.

The 10-mm × 10-mm receivers are capable of better than 2.5-m position accuracy. Hybrid ephemeris prediction can be used to achieve cold start times of less than 15 s. The receiver can operate down to 3 V and draws a low tracking current of 20 mA. To save power, the TM Series GNSS modules have built-in receiver duty cycling that can be configured to periodically turn the receiver off. This feature, combined with the module’s low power consumption, helps maximize battery life in battery-powered systems.

The receiver modules are easy to integrate, since they don’t require software setup or configuration to power up and output position data. The TM Series GNSS receivers use a standard UART serial interface to send and receive NMEA messages in ASCII format. A serial command set can be used to configure optional features. Using a USB or RS-232 converter chip, the modules’ UART can be directly connected to a microcontroller or a PC’s UART.
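
As a rough illustration of that interface, the sketch below (in Python, assuming the pyserial package, a /dev/ttyUSB0 port name, and a 9,600-bps rate) reads the module’s ASCII NMEA stream and pulls a position out of standard GGA sentences. It follows the generic NMEA 0183 format rather than any vendor-specific command set:

```python
# Read NMEA sentences from a GNSS module's UART and print position
# fixes from GGA sentences. Port name and baud rate are assumptions.
import serial  # pyserial package

def parse_gga(sentence):
    """Return (lat, lon) strings in NMEA ddmm.mmmm form, or None if no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None
    return fields[2] + fields[3], fields[4] + fields[5]  # lat/N-S, lon/E-W

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if line.startswith("$") and "GGA" in line[:7]:
        fix = parse_gga(line)
        if fix:
            print("lat %s  lon %s" % fix)
```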

The GPS Master Development System connects a TM Series Evaluation Module to a prototyping board with a color display that shows coordinates, a speedometer, and a compass for mobile evaluation. A USB interface enables simple viewing of satellite data and Internet mapping and custom software application development.
Contact Linx Technologies for pricing.

Linx Technologies
www.linxtechnologies.com

Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids. —Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive. We have since started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics software for retailers. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go to, or whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You have written several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and SystemC Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grivalga of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukhopadhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security: Internet-enabled devices enable smart-grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentation is a very warm, happy experience; everyone is happy. These events are a time to celebrate the field and the many friends I’ve made through my work.

Low-Cost SBCs Could Revolutionize Robotics Education

For my entire life, my mother has been a technology trainer for various educational institutions, so it’s probably no surprise that I ended up as an engineer with a passion for STEM education. When I heard about the Raspberry Pi, a diminutive $25 computer, my thoughts immediately turned to creating low-cost mobile computing labs. These labs could be easily and quickly loaded with a variety of programming environments, walking students through a step-by-step curriculum to teach them about computer hardware and software.

However, my time in the robotics field has made me realize that this endeavor could be so much more than a traditional computer lab. By adding actuators and sensors, these low-cost SBCs could become fully fledged robotic platforms. Leveraging the common I2C protocol, adding chains of these sensors would be incredibly easy. The SBCs could even be paired with microcontrollers to add more functionality and introduce students to embedded design.
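
As a sketch of how little code that sensor chaining requires, the following Python fragment reads a short chain of hypothetical I2C devices from a Raspberry Pi using the smbus2 package; the 0x48-range addresses and register 0x00 are placeholders for whatever sensors are actually on the bus:

```python
# Poll a chain of I2C sensors from a Raspberry Pi. The addresses and
# register offset are placeholders, not a specific sensor's datasheet.
from smbus2 import SMBus

with SMBus(1) as bus:  # I2C bus 1 on most Raspberry Pi models
    for address in (0x48, 0x49, 0x4A):  # hypothetical sensor chain
        raw = bus.read_word_data(address, 0x00)
        print("sensor 0x%02X raw reading: %d" % (address, raw))
```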

There are many ways to introduce students to programming robot-computers, but I believe that a web-based interface is ideal. By setting up each computer as a web server, students can easily access the interface for their robot directly through the computer itself, or remotely from any web-enabled device (e.g., a smartphone or tablet). Through a web browser, these devices provide a uniform interface for remote control and even for programming robotic platforms.

A server-side language (e.g., Python or PHP) can handle direct serial/I2C communications with actuators and sensors. It can also wrap more complicated robotic concepts into easily accessible functions. For example, the server-side language could handle PID and odometry control for a small rover, then provide the user functions such as “right,” “left,” and “forward” to move the robot. These functions could be accessed through an AJAX interface directly controlled through a web browser, enabling the robot to perform simple tasks.
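
Here is a minimal sketch of that server-side wrapper using Python and Flask. Only the “forward,” “left,” and “right” command names come from the essay; the motor_driver module and its helpers are hypothetical stand-ins for the PID/odometry layer:

```python
# Expose simple robot movements as URLs that an AJAX front end can call.
from flask import Flask

import motor_driver  # hypothetical low-level serial/I2C motor layer

app = Flask(__name__)

@app.route("/robot/<command>")
def robot(command):
    moves = {
        "forward": lambda: motor_driver.pid_drive(distance_cm=20),
        "left":    lambda: motor_driver.pid_turn(degrees=-90),
        "right":   lambda: motor_driver.pid_turn(degrees=90),
    }
    if command not in moves:
        return "unknown command", 404
    moves[command]()  # closed-loop move executes on the robot
    return "ok"       # the AJAX caller only needs a status

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)  # serve the robot UI from the SBC
```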

This web-based approach is great for an educational environment, as students can systematically pull back programming layers to learn more. Beginning students would be able to string preprogrammed movements together to make the robot perform simple tasks. Each movement could then be dissected into more basic commands, teaching students how to make their own movements by combining, rearranging, and altering these commands.

By adding more complex commands, students can even introduce autonomous behaviors into their robotic platforms. Eventually, students can be given access to the HTML user interfaces and begin to alter and customize them. This small superficial step can give students insight into what they can do, spurring them ahead into the next phase.

Students can start as end users of this robotic framework, but they can eventually graduate to become its developers. By mapping different commands to different functions in the server-side code, students can begin to understand the links between the web interface and the code that runs it.

Kyle Granat

Kyle Granat, who wrote this essay for Circuit Cellar, is a hardware engineer at Trossen Robotics, headquartered in Downers Grove, IL. Kyle graduated from Purdue University with a degree in computer engineering. Kyle, who lives in Valparaiso, IN, specializes in embedded system design and is dedicated to STEM education.

Students will delve deeper into the server-side code, eventually directly controlling actuators and sensors. Once students begin to understand the electronics at a much more basic level, they will be able to improve this robotic infrastructure by adding more features and languages. While the Raspberry Pi is one of today’s more popular SBCs, a variety of SBCs (e.g., the BeagleBone and the pcDuino) lend themselves nicely to building educational robotic platforms. As the cost of these platforms decreases, it becomes even more feasible for advanced students to recreate the experience on many platforms.

We’re already seeing web-based interfaces (e.g., ArduinoPi and WebIOPi) lay down the beginnings of a web-based framework to interact with hardware on SBCs. As these frameworks evolve and the cost of hardware drops even further, I’m confident we’ll see educational robotic platforms built by the open-source community.

Arduino-Based Hand-Held Gaming System

James Bowman, creator of the Gameduino game adapter for microcontrollers, recently upgraded the system, adding a Future Technology Devices International (FTDI) FT800 chip to drive the graphics. Associate Editor Nan Price interviewed James about the system and its capabilities.

NAN: Give us some background. Where do you live? Where did you go to school? What did you study?

James Bowman

JAMES: I live on the California coast in a small farming village between Santa Cruz and San Francisco. I moved here from London 17 years ago. I studied computing at Imperial College London.

NAN: What types of projects did you work on when you were employed by Silicon Graphics, 3dfx Interactive, and NVIDIA?

JAMES: Always software and hardware for GPUs. I began in software, which led me to microcode, which led to hardware. Before you know it you’ve learned Verilog. I was usually working near the boundary of software and hardware, optimizing something for cost, speed, or both.

NAN: How did you come up with the idea for the Gameduino game console?

JAMES: I paid for my college tuition by working as a games programmer for Nintendo and Sega consoles, so I was quite familiar with that world. It seemed a natural fit to try to give the Arduino some eye-catching color graphics. Some quick experiments with a breadboard and an FPGA confirmed that the idea was feasible.

NAN: The Gameduino 2 turns your Arduino into a hand-held modern gaming system. Explain the difference from the first version of Gameduino—what upgrades/additions have been made?

The Gameduino 2 uses a Future Technology Devices International chip to drive its graphics.

JAMES: The original Gameduino had to use an FPGA to generate graphics, because in 2011 there was no such thing as an embedded GPU. It needed an external monitor, and you had to supply your own inputs (e.g., buttons, joysticks, etc.). The Gameduino 2 uses the new Future Technology Devices International (FTDI) FT800 chip, which drives all the graphics. It has a built-in color resistive touchscreen and a three-axis accelerometer. So it is a complete game system—you just add the CPU.

NAN: How does the Arduino factor into the design?

An Arduino, Ethernet adapter, and a Gameduino

JAMES: Arduino is an interesting platform. It is 5 V, believe it or not, so the design needs a level shifter. Also, the Arduino is based on an 8-bit microcontroller, so the software stack needs to be carefully built to provide acceptable performance. The huge advantage of the Arduino is that the programming environment—the IDE, compiler, and downloader—is used and understood by hundreds of thousands of people.

NAN: Is it easy or possible to customize the Gameduino 2?

JAMES: I would have to say no. The PCB itself is entirely surface-mount technology (SMT) and all the ICs are QFNs—they have no accessible pins! This is a long way from the DIP packages of yesterday, where you could change the circuit by cutting tracks and soldering onto the pins.

I needed a microscope and a hot-air station to make the Gameduino 2 prototype. That is a long way from the “kitchen table” tradition of the Arduino. Fortunately, the Arduino’s physical design is very customization-friendly. Other devices can be stacked up, adding networking, hi-fi sound, or other sensor inputs.

NAN: The Gameduino 2 project is on Kickstarter through November 7, 2013. Why did you decide to use Kickstarter crowdfunding for this project?

JAMES: Kickstarter is great for small-scale inventors. The audience it reaches also tends to be interested in novel, clever things. So it’s a wonderful way to launch a small new product.

NAN: What’s next for Gameduino 2? Will the future see a Gameduino 3?

JAMES: Product cycles in the Arduino ecosystem are quite long, fortunately, so a Gameduino 3 is distant. For the Gameduino 2, I’m writing a book, shipping the product, and supporting the developer community, which will hopefully make use of it.

Natural Human-Computer Interaction

Recent innovations in both hardware and software have brought on a new wave of interaction techniques that depart from mice and keyboards. The widespread adoption of smartphones and tablets with capacitive touchscreens shows people’s preference to directly manipulate virtual objects with their hands.

Going beyond touch-only interaction, the Microsoft Kinect sensor enables users to play games with their entire body. More recently, Leap Motion’s new compact sensor, consisting of two cameras and three infrared LEDs, has opened up the possibility of accurate fingertip tracking. With Project Glass, Google is pioneering new technology in the wearable human-computer interface. Other new additions to wearable technology include Samsung’s Galaxy Gear Smartwatch and Apple’s rumored iWatch.

This shows the hand-tracking result from Kinect data. The red regions are our tracking results and the green lines are the skeleton-tracking results from the Kinect SDK (based on data from the ChAirGest corpus: https://project.eia-fr.ch/chairgest/Pages/Overview.aspx).

A natural interface reduces the learning curve, or the amount of time and energy a person requires to complete a particular task. Instead of a user learning to communicate with a machine through a programming language, the machine is now learning to understand the user.

Hardware advancements have led to our clunky computer boxes becoming miniaturized, stylish sci-fi-like phones and watches. Along with these shrinking computers come ever-smaller sensors that enable a once keyboard-constrained computer to listen, see, and feel. These developments pave the way to natural human-computer interfaces. If sensors are like eyes and ears, software would be analogous to our brains.

Understanding human speech and gestures in real time is a challenging task for natural human-computer interaction. At a higher level, both speech and gesture recognition require similar processing pipelines that include data streaming from sensors, feature extraction, and pattern recognition of a time series of feature vectors. One of the main differences between the two is feature representation because speech involves audio data while gestures involve video data.

For gesture recognition, the first main step is locating the user’s hand. Popular libraries for doing this include Microsoft’s Kinect SDK or PrimeSense’s NITE library. However, these libraries only give the coordinates of the hands as points, so the actual hand shapes cannot be evaluated.

Fingertip tracking using a Kinect sensor. The green dots are the tracked fingertips.

Our team at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory has developed methods that use a combination of skin-color and motion detection to compute a probability map of gesture salience location. The gesture salience computation takes into consideration the amount of movement and the closeness of movement to the observer (i.e., the sensor).

We can use the probability map to find the most likely area of the gesturing hands. For each time frame, after extracting the depth data for the entire hand, we compute a histogram of oriented gradients to represent the hand shape as a more compact feature descriptor. The final feature vector for a time frame includes the hand’s 3-D position, velocity, and acceleration, as well as the hand-shape descriptor. We also apply principal component analysis to reduce the feature vector’s final dimension.
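
The sketch below shows the shape of that per-frame pipeline in Python with NumPy and scikit-learn. The 64-element descriptor length and the 30-dimension PCA target are illustrative assumptions, not the values used in our system:

```python
# Assemble per-frame gesture features (3-D position, velocity,
# acceleration, plus a HOG-style hand-shape descriptor) and reduce
# their dimensionality with PCA. Sizes here are placeholders.
import numpy as np
from sklearn.decomposition import PCA

def frame_feature(position, velocity, acceleration, hog_descriptor):
    # position/velocity/acceleration: length-3 arrays; hog: length-64
    return np.concatenate([position, velocity, acceleration, hog_descriptor])

# Stack one feature vector per frame, then project to fewer dimensions.
frames = np.stack([frame_feature(np.random.rand(3), np.random.rand(3),
                                 np.random.rand(3), np.random.rand(64))
                   for _ in range(200)])
reduced = PCA(n_components=30).fit_transform(frames)
print(frames.shape, "->", reduced.shape)  # (200, 73) -> (200, 30)
```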

A 3-D model of pointing gestures using a Kinect sensor. The top left video shows background subtraction, arm segmentation, and fingertip tracking. The top right video shows the raw depth-mapped data. The bottom left video shows the 3D model with the white plane as the tabletop, the green line as the arm, and the small red dot as the fingertip.

The next step in the gesture-recognition pipeline is to classify the feature vector sequence into different gestures. Many machine-learning methods have been used to solve this problem. A popular one is called the hidden Markov model (HMM), which is commonly used to model sequence data. It was earlier used in speech recognition with great success.

There are two steps in gesture classification. First, we need to obtain training data to learn the models for different gestures. Then, during recognition, we find the most likely model that can produce the given observed feature vectors. New developments in the area involve some variations in the HMM, such as using hierarchical HMM for real-time inference or using discriminative training to increase the recognition accuracy.
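
A minimal sketch of those two steps, assuming the Python hmmlearn package: one Gaussian HMM is trained per gesture, and recognition picks the model with the highest log-likelihood for an observed sequence. The state count and data shapes are placeholders:

```python
# Train one HMM per gesture class, then classify a new feature-vector
# sequence by maximum log-likelihood across the trained models.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train(sequences, n_states=5):
    """sequences: list of (n_frames, n_features) arrays for one gesture."""
    X = np.concatenate(sequences)
    lengths = [len(seq) for seq in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag")
    return model.fit(X, lengths)

def classify(models, sequence):
    """models: dict of label -> trained HMM. Returns the best label."""
    return max(models, key=lambda label: models[label].score(sequence))
```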

Ying Yin

Ying Yin is a PhD candidate and a Research Assistant at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory. Originally from Suzhou, China, Ying received her BASc in Computer Engineering from the University of British Columbia in Vancouver, Canada, in 2008 and an MS in Computer Science from MIT in 2010. Her research focuses on applying machine learning and computer vision methods to multimodal human-computer interaction. Ying is also interested in web and mobile application development. She has won awards in web and mobile programming competitions at MIT.

Currently, the newest development in speech recognition at the industry scale is a method called deep learning. Earlier machine-learning methods require careful selection of feature vectors. The goal of deep learning is automatic discovery of powerful features from raw input data. So far, it has shown promising results in speech recognition. It can possibly be applied to gesture recognition to see whether it can further improve accuracy.

As component form factors shrink, sensor resolutions grow, and recognition algorithms become more accurate, natural human-computer interaction will become more and more ubiquitous in our everyday life.

Client Profile: Pico Technology

Pico Technology
320 North Glenwood Boulevard
Tyler, TX 75702

Contact: sales@picotech.com

Embedded Products/Services: Pico Technology’s PicoScope 5000 series uses reconfigurable ADC technology to offer a choice of resolutions from 8 to 16 bits. For more information, visit www.picotech.com/picoscope5000.html.

Product information: The new PicoScope 5000 series oscilloscopes have a significantly different architecture. High-resolution ADCs can be applied to the input channels in different series and parallel combinations to boost the sampling rate or the resolution.

In Series mode, the ADCs are interleaved to provide 1 GS/s at 8 bits. In Parallel mode, multiple ADCs are sampled in phase on each channel to increase the resolution and dynamic performance (up to 16 bits).

In addition to their flexible resolution, the oscilloscopes have ultra-deep memory buffers of up to 512 MB to enable long captures at high sampling rates. Advanced software also comes standard, including serial decoding, mask limit testing, and segmented memory.

The PicoScope 5000 series oscilloscopes are currently available at www.picotech.com.

The two-channel, 60-MHz model with a built-in function generator costs $1,153. The four-channel, 200-MHz model with a built-in arbitrary waveform generator (AWG) costs $2,803. The pricing includes a set of matched probes, all necessary software, and a five-year warranty.

Dual-Channel Waveform Generators

B&K Precision 4053 Waveform Generator

The 4050 Series is a new line of four dual-channel function/arbitrary waveform generators. The instruments can generate 5-to-50-MHz waveforms for applications requiring stable and precise sine, square, triangle, and pulse waveforms with modulation and arbitrary waveform capabilities.

All models provide a main output voltage that can vary from 0 to 10 VPP into 50 Ω and a secondary output that can vary from 0 to 3 VPP into 50 Ω. The generators feature a 3.5” color LCD, a rotary control knob, and a numeric keypad with dedicated waveform keys and output buttons.

The 4050 Series provides users with 48 built-in arbitrary waveforms. Using the included waveform editing software via the standard USB interface on the rear, users can create and load up to 10 custom 16-kpt waveforms. For general-purpose interface bus (GPIB) connectivity, an optional USB-to-GPIB adapter is available.

The generators offer a variety of modulation schemes for modulated signal applications including amplitude and frequency modulation (AM/FM), double sideband amplitude modulation (DSB-AM), amplitude and frequency shift keying (ASK/FSK), phase modulation (PM), and pulse-width modulation (PWM). Additional standard features include a linear and logarithmic sweep function, a built-in counter, sync output, a trigger I/O terminal, and a USB host port on the front panel to save and recall instrument settings and waveforms. A standard external 10-MHz reference clock input is provided to synchronize the instrument to another generator.

The 4052 (5-MHz) costs $499, the 4053 (10 MHz) costs $599, the 4054 (25 MHz) costs $850, and the 4055 (50 MHz) costs $1,050. Note: B&K Precision is offering 10% off MSRP through November 30, 2013. See website for details.

B&K Precision Corp.
www.bkprecision.com

Solar Array Tracker (Part 1): SunSeeker Hardware

Figure 1: These are the H-bridge motor drivers and sensor input conditioning circuits. Most of the discrete components are required for transient voltage protection from nearby lightning strikes and inductive kickback from the motors.

Graig Pearen, semi-retired and living in Prince George, BC, Canada, spent his career in the telecommunications industry where he provided equipment maintenance and engineering services. Pearen, who now works part time as a solar energy technician, designed the SunSeeker Solar Array tracker, which won third place in the 2012 DesignSpark chipKit challenge.

He writes about his design, as well as changes he has made to the prototypes since his first entry, in Circuit Cellar’s October issue. It is the first installment of a two-part series on the SunSeeker; the second installment will present the system’s software and commissioning tests.

In the opening of Part 1, Pearen describes his objectives for the solar array tracker:

When I was designing my solar photovoltaic (PV) system, I wanted my array to track the sun in both axes. After looking at the available commercial equipment specifications and designs published online, I decided to design my own array tracker, the SunSeeker (see Photo 1 and Figure 1).

I had wanted to work with a Microchip Technology PIC processor for a while, so this was my opportunity to have some fun. I based my first prototype on a PIC16F870 microcontroller, but when the microcontroller maxed out, I switched to its big brother, the PIC16F877. Although both prototypes worked well, I wanted to add more features and capabilities. I particularly wanted to add Ethernet access so I could use my home network to communicate with all my systems. I was considering Microchip’s chipKIT Max32 board for the next prototype when Circuit Cellar’s DesignSpark chipKIT contest was announced.

The SunSeeker board, at top, contains all the circuits required to control the solar array’s motion. This board plugs into the Microchip Technology chipKIT MAX32 processor board. The bottom side of the SunSeeker board (green) with the MAX32 board (red) plugged into it is shown at bottom.

I knew the contest would be challenging. In addition to learning about a new processor and prototyping hardware, the contest rules required me to learn a new IDE (MPIDE), programming language (C++), schematic capture, and PCB design software (DesignSpark PCB). I also decided to make this my first surface-mount component design.

My objective for the contest was to replicate the functionality of the previous Assembly language software. I wanted the new design to be a test platform to develop new features and tracking algorithms. Over the next two to three years of development and field testing, I plan for it to evolve into a full-featured “bells-and-whistles” solar array tracker. I added a few enhancements as the software evolved, but I will develop most of the additional features later.

The system tracks, monitors, and adjusts solar photovoltaic (PV) arrays based on weather and atmospheric conditions. It compiles statistics on these conditions and communicates with a local server that enables software algorithm refinement. The SunSeeker logs a broad variety of data.

The SunSeeker measures, displays, and records the duration of the daily sunny, hazy, and cloudy periods; the array temperature; the ambient temperature; daily minimum and maximum temperatures; incident light intensity; and the drive motor current. The data log is indexed by day number. Index 0 holds the annual data, and indexes 1–366 store the data for each day of the year. Each record is 18 bytes long, for a total of 6,588 bytes per year.
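
As an illustration of how compact such a record is, the Python sketch below packs one hypothetical 18-byte daily record with the struct module. The field choice and layout are assumptions for illustration; the article specifies only the 18-byte record size and the day-number index:

```python
# Pack one SunSeeker-style daily log record into 18 bytes.
import struct

# < little-endian, no padding:
# H sunny minutes, H hazy minutes, H cloudy minutes,
# h array temp x10, h ambient temp x10, h min temp x10, h max temp x10,
# H light intensity, H motor current (mA)
RECORD = struct.Struct("<HHHhhhhHH")
assert RECORD.size == 18

record = RECORD.pack(312, 45, 210, 255, 188, 102, 261, 4095, 750)
print(len(record), "bytes")  # 18 bytes; 366 such records = 6,588 bytes
```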

At midnight each day, the daily statistics are recorded and added to the cumulative totals. The data logs can be downloaded in comma-separated values (CSV) format for permanent record keeping and for use in spreadsheet or database programs.

The SunSeeker has two main parts, a control module and a separate light sensor module, plus the temperature and snow sensors.

The control module is mounted behind the array where it is protected from the heat of direct sunlight exposure. The sensor module is potted in clear UV-proof epoxy and mounted a few centimeters away on the edge of, and in the same plane as, the array. To select an appropriate potting compound, I contacted Epoxies, Etc. and asked for a recommendation. Following the company’s advice, I obtained a small quantity of urethane resin (20-2621RCL) and urethane catalyst (20-2621CCL).

When controlling mechanical devices, it is necessary to monitor for proper operation and to detect malfunctions before they cause hardware damage. For example, if the solar array were to become frozen in place during an ice storm, that condition would need to be sensed and acted upon. Diagnostic software watches the motors to detect any hardware fault that may occur. Fault detection is accomplished in several ways: the H-bridges have internal fault detection for overtemperature, undervoltage, and shorted circuits; the current drawn by the motors is monitored for abnormally high or low values; and the motor drive assemblies’ pulses are counted to confirm movement and position.
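
A simplified sketch of those checks in Python (the real SunSeeker firmware runs on the chipKIT, and these thresholds and reader callbacks are hypothetical):

```python
# Check a tracker motor for overcurrent, undercurrent, and stalled-drive
# conditions using current readings and drive-pulse counts.
LOW_MA, HIGH_MA = 50, 1500  # placeholder stall/open-circuit limits

def check_motor(read_current_ma, read_pulse_count, last_count, moving):
    current = read_current_ma()
    if current > HIGH_MA:
        return "fault: overcurrent (array may be stalled or frozen)"
    if moving and current < LOW_MA:
        return "fault: undercurrent (possible open winding or wiring)"
    if moving and read_pulse_count() == last_count:
        return "fault: no movement detected despite drive"
    return "ok"
```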

To read more about the DIY SunSeeker solar array tracker, and Pearen’s plans for further refinements, check out the October issue.