Issue 263: Privately Funded Engineering

The public vs. private funding debate endures in the United States and Europe. Everything from energy generation (e.g., oil) to social welfare programs is debated daily by government committees, discussed in corporate boardrooms, and argued over at lunch tables from Los Angeles to Brussels and beyond.

One particularly interesting discussion pertains to the role of the public and private sectors in space flight and exploration, which comprises fields such as aerospace design, embedded electronics, and robotics. In Circuit Cellar June 2012, Steve Ciarcia weighs in on this debate and makes a thought-provoking argument for the benefits of privately funded engineering endeavors. In “Google LUNAR X Prize” he writes:

This is certainly an exciting time to be an engineer. We have seen the success NASA has had with robotic exploration, especially on nearby planets such as Mars. Regardless of what comes from NASA in the future, however, thanks to advances in robotics and launch vehicles, “space” will soon become the province of private enterprise, not just government. Very soon, commercial space flight will become a reality.

The Google Lunar X PRIZE provides a focal point for these efforts. Google is offering a $20 million prize to the first team to complete a robotic mission to the moon. The basic goal is to put a lander on the surface of the moon, have it travel at least 500 m once it’s there, and send back high-definition pictures and video of what it finds. There’s a $5 million second prize, and also $5 million in bonus prizes for completing additional tasks such as landing near the site of a previous NASA mission, discovering water ice, traveling more than 5,000 m while on the surface, or surviving the 328-hour lunar night.

When the Lunar X PRIZE registration closed in December 2010, a global assortment of 33 separate teams had registered to compete. Seven of those teams have subsequently dropped out, but there are still 26 active teams, including 11 from the U.S. The first launch is expected sometime in 2013, and there’s plenty of time before the competition ends December 31, 2015. Some teams are even planning multiple launches to improve their chances of winning.

It’s interesting to browse through the team information and see the vast diversity in the approaches they’re taking. This is the part that is most exciting from an engineering point of view. Some teams are building their own launch systems, while others are planning to contract with existing government or commercial services, such as SpaceX. There’s a huge amount of variety among the landers, too: some will roll, some will walk, and some will fly across the moon in order to cover the required distance. Each one takes a different approach to dealing with the difficult terrain on the moon, and issues such as the raw temperature extremes between blazing sunlight and black space.

This sort of diversity is a powerful driver for future development. Each approach will have its strengths and weaknesses, and there will certainly be some spectacular failures. Subsequent missions will draw on the successful parts of each prior one. Contrast this with NASA’s traditional approach of putting all its effort into a single design that has to succeed.

It’s also interesting to consider the economics of this sort of competition. The prize doesn’t come close to the full investment required to succeed. Indeed, Google is quite up front about the fact that it probably covers only about 40%, based on other recent high-tech competitions such as the Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge and the Ansari X PRIZE. This means the teams need to raise most of their money in the private sector, which keeps them focused on technologies that are commercially viable.

I have long been a fan of “hard” science fiction, as typified by writers such as Larry Niven, Arthur C. Clarke, and Michael Crichton. To me, hard science fiction means you posit a minimal set of necessary technologies, such as faster-than-light (FTL) space travel or self-aware computers/robots, and then explore the implications of that universe without introducing new “magic” whenever your story gets stuck. Larry Niven’s “Known Space” universe, for example, includes extensive near-future exploration of the solar system by private entrepreneurs. With the type of competition fostered by the Google Lunar X PRIZE, I see those days as being just around the corner.

The competition among these teams, and the commercial companies that arise from them, will be good for society as a whole. For one thing, we’ll finally see the true cost of getting to space, as opposed to the massive amounts of money we’ve been pouring into NASA to achieve its goals. As a public agency, NASA has many operational constraints, and as a result, it tends to be ultra-conservative in terms of risk taking. Policies that dictate incorporating backups for the backups certainly make a space mission more expensive than the alternative.

Despite these remarks, I don’t mean to sound overly negative about NASA. It has had many spectacular successes, starting with the Mercury, Gemini, and Apollo manned space programs, as well as robotic exploration of the solar system by the likes of Pioneer and Voyager and, more recently, the remarkable longevity of the Mars rovers, Spirit and Opportunity. There have been many beneficial spin-offs of the space program, and we have all benefited in some way. We wouldn’t be where we are today without the U.S. space program. But the future is yet to be written. There are striking differences between a publicly run space program and the emerging free-market, privately funded endeavors. We would do well to recognize the opportunities and the potential benefits.

Circuit Cellar 263 (June 2012) is now available on newsstands.

Issue 262: Full-Featured SBCs at Your Fingertips

Fact 1: Easy-to-use, full-featured SBCs are popping up everywhere. Fact 2: Open-source software is becoming more commonplace each day. (Even Microsoft Corp. has begun taking open source seriously.) Conclusion: It’s an opportune time to be an electronics innovator.

In Circuit Cellar May 2012, Steve Ciarcia surveys some of the more affordable 32-bit hardware options at your disposal. In “Power to the People” he writes:

While last month I may have implied that 8 bits is enough to control the world, there are significant things happening in high-end, 32-bit embedded processors that might actually bring that about. There are quite a few new low-cost, system-on-chip-based single-board computers (SBCs) specifically designed to compete with or augment the smartphone and pad computer market. These and other full-featured budget SBCs are something you should definitely keep on your radar.

These devices typically have a high-end, 32-bit processor, such as an ARM Cortex-A8 running at 400 MHz to 1,000 MHz, coupled with a GPU core (and sometimes a separate DSP core) and 128 MB to 512 MB of DDR SDRAM. The boards boot a full-up desktop operating system (OS)—such as Linux or Android (and soon Windows 8)—and often contain enough graphics horsepower for full-frame-rate HD video and gaming.

Texas Instruments made a significant splash a few years ago with the introduction of the BeagleBoard SBC (beagleboard.org, $149 at the time), which paired its OMAP3530 chip with 256 MB of flash memory and 128 MB of SDRAM and ran Angstrom Linux on a high-resolution HDMI monitor. That board has since been superseded by the BeagleBoard-xM (1,000 MHz and 512 MB) at the same price and supplemented by the BeagleBone board. Selling for just $89, the BeagleBone includes a 600-MHz AM3517 processor, 256 MB of SDRAM, a 2-GB microSD card, and Ethernet (something the original BeagleBoard lacked).

All of the software for these boards is open source, and a significant community of developers has grown up around them. In particular, a lot of effort has been put into software infrastructure, with a number of OSes now ported to many of these boards, along with languages (both compiled and interpreted) and application frameworks, such as XBMC for multimedia and home-theater applications.

Another SBC that has been generating a lot of buzz lately is the Raspberry Pi board (raspberrypi.org), mainly because the “B” version is priced at just $35. The Raspberry Pi is based on a Broadcom chip, which is unexpected: Broadcom has traditionally given hardware documentation and software drivers only to major customers, like set-top box manufacturers, not to an open-source marketplace. Apparently, the only proprietary piece of software for the Raspberry Pi board will be the driver/firmware for the GPU core. Unfortunately, as I write this, a few manufacturing issues linger, and the Raspberry Pi has yet to ship.

Both the concept and the size of an “SBC” are evolving as well. In addition to the bare development boards, a number of interesting second-level products based on these chips have begun to appear. Take a look at designsomething.org. Two projects in particular are the Pandora handheld and Always Innovating’s HDMI Dongle. The former is a pocket-sized computer that flips open to reveal an 800 × 480 touchscreen and an alphanumeric keypad with gaming controls. Besides the obvious applications as a video viewer, gaming platform, and “super PDA,” I see huge opportunities for this box as a user interface for things like USB-based test instruments.

The Always Innovating HDMI Dongle is amazing for how much functionality they’ve crammed into a small package: it’s no bigger than a USB thumb drive (it also needs a USB socket for power), but it can turn any TV with an HDMI input jack and USB socket into a fully functional, Android-based computer with 1080p HD video playback, games, and Wi-Fi-based Internet access. These dongles might easily become distributed home theater nodes, delivering high-quality video and audio to multiple rooms from a common file server; or, one of the other low-cost SBCs might become the brain of a robot that can see and understand the world around it using open-source computer vision (OpenCV).

While it makes an old hardware guy like me feel less useful, it’s clear that the hardware—or, more specifically, the necessity to always design unique hardware—is no longer the bottleneck when it comes to powerful embedded applications. In a turnaround from decades ago, the ball is now clearly in the court of the software developers.

The applications for these boards and “thumb-thingies” are endless. They have the hardware muscle to handle anything a smartphone or pad computer can do, for much less money. A lot of work has already been done on the OS and middleware layers. We just need to dive in and create the applications! Then it becomes a simple matter of programming. Of course, you know how much I personally look forward to that.

Circuit Cellar 262 (May 2012) is on newsstands now. Click here for a free preview of the issue.

Issue 261: The Deeply Embedded 8-Bit MCU

The 8-bit debate continues. Last week at Design West in San Jose, CA, the topic came up more than once, and I reported on Microchip Technology’s expanded 8-bit PIC16F(LF)178X midrange core MCU family.

Over the years, Circuit Cellar has published several articles on the topic. Back in Circuit Cellar 8 (1989), Tom Cantrell addressed it in “HD647180X: A New 8-Bit Microcontroller (Embedded Controllers Get Respect).” He returned to the subject in 2010 in Circuit Cellar 143 with “Live for Today: The 8-Bit MCU Still Matters.” This month, in an editorial titled “8-Bit Control Is Dead – No Way!” (Circuit Cellar 261), Steve Ciarcia weighs in.

For years, tech pundits have been predicting the end of 8-bit micros. Apparently, with the prices of 16- and 32-bit MCUs constantly dropping, and presuming you always want your application to do more, there is no reason not to replace a less powerful MCU with a more powerful one, right? In my opinion, that was a false assumption then, and it still is today.

We can’t look at this as a zero-sum game. Yes, 32-bitters open up all kinds of new opportunities for embedded processing, especially in the area of network-connected personal entertainment and information devices. But this doesn’t mean they’re a better fit for the low-end control and text-based applications that the 8-bitters have occupied for so long. The boundaries are certainly “fuzzy,” but consider how we generally categorize MCUs.

At the low end, we have the 8-bit controllers, which typically have 8-bit data and registers along with 16-bit address paths. This is a sweet spot for all kinds of control and text-based functions that simply don’t need to handle more than 64 KB of data at a time. The price/performance of the 8-bit chip should win this fight every time.

In the midrange, we have the 16-bit MCUs and lower-end 16-bit DSP chips. These chips can do a bunch more because they handle 16-bit data and have at least 24-bit address paths. There is often a hardware multiplier as well, which makes this class of chip ideal for many types of signal processing and audio applications.

At the high end, there are the 32-bit MCUs/MPUs (and higher-end DSPs), which have 32-bit data and address paths. These are the chips that have the power to drive an interactive graphical user interface and process video signals in real time.

It’s clear that chip manufacturers believe in the future of all three classes of MCU; just look at the innovations they continue to introduce at all levels. Fundamentally, as silicon improves in terms of transistor density, more memory fits onto a smaller chip, and there’s more room for on-chip peripherals. Clock and power management has also become more flexible than ever. The lower-end and midrange MCUs are all available with some combination of hardware timers (e.g., PWM, pulse capture, and motor control), communications interfaces (UART, SPI, I2C, CAN, USB, etc.), and analog interfaces (e.g., ADCs, DACs, and touch sensing). Some include hardware controllers for multiplexed LCDs or Ethernet interfaces.

At the higher end, in addition to all of that, we see options like on-chip SDRAM controllers, SD memory and I/O controllers, Ethernet MACs (and sometimes PHYs), mass storage (ATAPI, SATA), and video support, including in some cases a separate GPU core. In short, that’s everything you need to run a full-up operating system like Windows, Mac OS, or Android.

Probably the greatest result of across-the-board lower MCU costs is that we will be seeing multiple chips where just one was used before. This has been the situation in automobiles for years, where reliability has increased with lots of “smart” control modules all networked together. Certainly, this makes sense in a $30,000 car, but the concept is moving down the cost spectrum as well. Take your typical household washing machine or dryer, which has a motor or two and a control panel. Instead of one chip handling all of the control functions and user-interface I/O, there will be one (or two) motor-controller chips with a communications interface (SPI, I2C, CAN, etc.) and a second chip with a communications interface along with an LCD controller and touch-sensor support.

If the system designers are forward-thinking when they define the protocol by which these subsystems communicate, they’ll end up with intelligent building blocks (e.g., “smart motor,” “smart valve,” “smart sensor”) that can be easily reused in other products, keeping manufacturing costs low. The modules themselves will be reliable and energy-efficient, contributing substantially to end-user satisfaction and low recurring costs. The key is to make each module just smart enough without going overboard on processing power or overloading it with a top-heavy protocol.

And that’s where the lowly 8-bit MCU shines. A smart valve that just needs to sit on a LIN or 1-Wire bus, operate a solenoid, and verify that it opened or closed doesn’t need a lot of CPU cycles or 32-bit addressing to do the job. One of the tiny 8-bitters in a six- or eight-pin package will do nicely, and might even cost less than manufacturing and testing the dedicated wiring harness needed to do the job the traditional way. There’s no way a 16-bit or 32-bit MCU makes sense in this context. But more importantly, these lowly control tasks aren’t going to go away. In fact, I think you’ll be seeing a lot more of them, and they’ll all need MCUs. So, although it will be less visible, the 8-bit MCU will still be deeply embedded in increasingly subtle, but important, parts of your life, working hard so you don’t have to.
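To make that concrete, here’s a minimal sketch in C of what such a smart valve’s firmware logic might look like. Everything here (the command set, the status flags, the hardware stubs) is hypothetical and invented for illustration; a real module would tie them to its actual bus interface and port registers.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical one-byte command set for a "smart valve" bus slave. */
#define CMD_OPEN    0x01
#define CMD_CLOSE   0x02
#define CMD_STATUS  0x03

#define STS_OK      0x80  /* reply flag: command completed */
#define STS_FAULT   0x40  /* reply flag: valve didn't reach position */

/* Hardware stubs -- on real silicon these would touch port registers. */
static bool valve_is_open;
static void solenoid_drive(bool energize) { valve_is_open = energize; }
static bool limit_switch_open(void)       { return valve_is_open; }
static bool limit_switch_closed(void)     { return !valve_is_open; }

/* Process one command byte from the bus and return one reply byte.
   This is the entire "application" -- small enough for a six-pin 8-bitter. */
uint8_t valve_handle_command(uint8_t cmd)
{
    switch (cmd) {
    case CMD_OPEN:
        solenoid_drive(true);
        return limit_switch_open() ? (STS_OK | CMD_OPEN)
                                   : (STS_FAULT | CMD_OPEN);
    case CMD_CLOSE:
        solenoid_drive(false);
        return limit_switch_closed() ? (STS_OK | CMD_CLOSE)
                                     : (STS_FAULT | CMD_CLOSE);
    case CMD_STATUS:
        return limit_switch_open() ? (STS_OK | CMD_OPEN)
                                   : (STS_OK | CMD_CLOSE);
    default:
        return STS_FAULT;  /* unknown command */
    }
}
```

Each transaction is a single byte in each direction, which is about as far from a top-heavy protocol as you can get.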

 

Issue 260: Embedded Control Languages

Choosing a programming language is an essential part of any serious embedded design project. But the task can be daunting. When should you use a processor-specific language? Why not just use C?

In the March issue of Circuit Cellar, Steve Ciarcia reviews a handful of programming languages and the types of processors—and projects—for which they are intended.

Here’s Steve’s take:

Let’s talk about languages—specifically, embedded control languages. Everyone has their favorite, typically whatever they learned first, but when you get right down to it, all languages offer the same basic features.

First of all, you need to be able to specify a sequence of steps and then select between two (or more) alternative sequences—the if-then-else construct. You also need to be able to repeat a sequence, or loop, and exit that loop when a condition is met. Finally, you want to be able to invoke a sequence from multiple places within other sequences—a function call.
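As a quick illustration, here is what those three building blocks look like in C; the function and data are invented for the example.

```c
#include <stdio.h>

/* A reusable sequence invoked from elsewhere: the "call" construct. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo)      /* selection: if-then-else */
        return lo;
    if (value > hi)
        return hi;
    return value;
}

int main(void)
{
    int readings[] = { -5, 42, 300, -1 };

    for (int i = 0; i < 4; i++) {       /* repetition: a loop */
        if (readings[i] < 0 && i > 0)   /* exit when a condition is met */
            break;
        printf("clamped: %d\n", clamp(readings[i], 0, 255));
    }
    return 0;
}
```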

Assembly language is the lowest-level language you can use on most machines. Its statements bear a one-to-one relationship with the instructions the hardware executes. If-then-else and loop-exit constructs are implemented using conditional and unconditional branch instructions, and there’s usually a hardware stack that facilitates subroutine call and return. This is both a blessing and a curse—it enables you to become familiar with the hardware and use it most effectively, but it also forces you to deal with the hardware at all times.
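To see that mapping, consider how a compiler (or an assembly programmer) might lower a simple C loop onto branch instructions. The pseudo-assembly in the comments is generic and purely illustrative; it doesn’t correspond to any particular instruction set.

```c
/* Sum n bytes.  The comments show one plausible lowering to branch
   instructions (generic pseudo-assembly, not a real ISA). */
int sum_bytes(const unsigned char *p, int n)
{
    int sum = 0;        /*        move    sum, #0                  */
    while (n > 0) {     /* top:   compare n, #0                    */
                        /*        branch_if_le  done               */
        sum += *p++;    /*        load tmp, [p]; add sum, tmp;     */
                        /*        increment p                      */
        n--;            /*        decrement n                      */
    }                   /*        branch  top                      */
    return sum;         /* done:  return sum                       */
}
```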

Very early in the development of computers, the concept of a high-level language (HLL) was developed to reduce this hardware focus. By creating a set of abstract computer operations that aren’t necessarily tied to a particular processor, an HLL frees programmers from a specific hardware architecture and enables them to focus on actual algorithm development. The compiler and library writers took these abstractions and mapped them efficiently onto the hardware. HLLs opened up programming to “non-hardware” people whose first interest was the application problem and its solution.

Today, there are literally hundreds of computer languages (see http://en.wikipedia.org/wiki/List_of_programming_languages). Some of them are completely general-purpose, while others are very domain-specific. Two languages have been implemented on virtually every microprocessor ever invented: C and BASIC. (There’s no way I can mention them all, so I’ll just touch on some popular embedded ones.) Of the two, C is by far the more popular for embedded apps, since it runs more efficiently on most hardware. Many people would argue that C isn’t a “true” HLL, but even so, it’s a huge step up from assembly language in terms of productivity.

There have been some niche languages intended for small systems. For example, there’s what you might call a family of reverse-Polish notation (RPN) languages: Forth, PostScript, and (does anyone remember?) a tiny interpreted language called Mouse. These never caught on in any big way, except for PostScript, which is almost universally available these days on printers as a page-description language. But it’s a full programming language in its own right—just ask Don Lancaster!

Along the way, there have been a few processor-specific languages invented. For example, there’s JAL—Just Another Language—which is optimized for 8-bit Microchip PIC processors, and Spin, which is designed to support the parallel-processing features of the Parallax Propeller chip.

Once you start getting into larger 16- and 32-bit chips, the set of available tools expands. Many of these processors have C/C++ toolchains based on the GNU Compiler Collection (GCC), which means you can use any number of the languages in the collection on these processors, including Fortran, Java, and Ada.

The designers of some embedded systems want their end users to be able to program the system. To this end, the concept of an “extension language” was developed. Two notable examples are Tcl and Lua. These provide a standard notation for control constructs (branching, looping, and function calls) wrapped around application-specific primitive operations implemented by the system designer.
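Here is a minimal sketch of the idea using Lua’s standard C API (link against liblua to build). The set_led() primitive is a hypothetical name invented for the example; a real device would toggle a port pin instead of printing.

```c
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* Application-specific primitive supplied by the system designer.
   set_led() is hypothetical; real firmware would drive hardware here. */
static int l_set_led(lua_State *L)
{
    int on = lua_toboolean(L, 1);
    printf("LED %s\n", on ? "on" : "off");  /* stand-in for hardware I/O */
    return 0;                               /* number of Lua return values */
}

int main(void)
{
    lua_State *L = luaL_newstate();          /* create the interpreter   */
    luaL_openlibs(L);                        /* load standard libraries  */
    lua_register(L, "set_led", l_set_led);   /* expose the primitive     */

    /* The "end-user program": standard Lua control constructs wrapped
       around the designer's primitive operation. */
    if (luaL_dostring(L, "for i = 1, 3 do set_led(true); set_led(false) end"))
        fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```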

Once you start getting into systems that are large enough to require an operating system (real-time or otherwise), many of the available choices support the POSIX API. This means you can use any of the mainstream scripting languages—shell scripts, Perl, Python, Ruby, PHP, and so on—either internally or exposed to the end user.

And finally, there’s the web-based user interface. Even relatively simple embedded applications can have sophisticated GUIs by pushing much of the programming out to the browser itself by means of ECMAScript (JavaScript) or Java. The embedded app just needs to implement a basic HTTP server with storage for the various resources needed by the user interface running in the browser. There are all kinds of toolkits and frameworks out there to help you build such a system.
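As a minimal sketch of that pattern, here is a single-connection HTTP server written against plain POSIX sockets. It serves one page whose behavior lives entirely in browser-side JavaScript; the page content and port number are invented for the example, and error handling and concurrency are omitted for brevity.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* One static page; the "GUI logic" runs in the browser's JavaScript. */
static const char page[] =
    "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nConnection: close\r\n\r\n"
    "<html><body><h1 id='t'>Embedded UI</h1>"
    "<script>"
    "var n = 0;"
    "setInterval(function () {"   /* the browser does the ongoing work */
    "  document.getElementById('t').textContent = 'tick ' + (++n);"
    "}, 1000);"
    "</script></body></html>";

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);      /* arbitrary example port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 4);

    for (;;) {                               /* serve one request at a time */
        int cli = accept(srv, NULL, NULL);
        char req[1024];
        read(cli, req, sizeof(req));         /* ignore the request details  */
        write(cli, page, sizeof(page) - 1);  /* always reply with the page  */
        close(cli);
    }
}
```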

I’ll stop now. The point is, even in the world of embedded computer applications, there’s a wide variety of tools available, and picking the right set of tools for the job at hand is part of the system design process. Taking a higher-level view, this brief survey might give you an idea of which tools are worth the effort of learning, based on where your interests lie or what your applications demand.