Onward & Upward: A History of Circuit Cellar

At the end of our conversations, longtime Circuit Cellar columnist Ed Nisley always says, “Onward and upward.” To this day, I’m not quite sure what that means, but it seems like a useful exit line. Of course, leaving a conversation and leaving a career are two completely different things. Both involve some strategy. With a conversation, one expects you’ll talk later and not everything has to be resolved by the conversation’s end. With a career, there is more finality. You want to know you have accomplished some goals, left the world a better place, and placed your legacy in the hands of people who will properly transition it.

An early Ciarcia project

These days, I’m not sure whether to laugh or cringe when I get an e-mail or meet a Circuit Cellar reader who starts a conversation by saying they have been reading my stuff and following me since BYTE magazine. Certainly, I take it as a compliment, but it also means we are both over the proverbial hill. True, the BYTE days and the seeds that generated Circuit Cellar magazine began 35 years ago. That’s a long time for any of us.

When you read the 25th Anniversary issue, you’ll find my article describing the history of how this all started. I’d like to say I had a grand plan from the very beginning, but my career path had a far simpler strategy: to create a product that would be in demand for a long time, to stay under the radar (away from lawyers and competitive vultures), and to find good people with similar beliefs who would help me accomplish these goals.

I’d like to say I intuitively knew what to do as a boss, but remember, I was trained as an engineer, not an MBA. A wise person once told me there were two ways to learn things in life: through trial and error or through someone telling you. I just took to heart a business article I read in college and religiously applied it to my career path. It said the majority of small businesses fail for one of four reasons: Too little business, too much business, insufficient capital, or no plan for succession. Since I wasn’t having much fun in corporate America back then (five jobs in five years), succeeding in business had more of a “do or die” imperative than the average job.

Let me warn any budding entrepreneurs that these four events test your gambling tactics more than your business acumen. In my case, Ciarcia’s Circuit Cellar was the product 30 years ago, along with the supporting manufacturing company. It grew quickly and afforded certain luxuries (e.g., Porsches, BMWs, Ferraris, etc.) typically necessary in our culture to designate achievement. Too little business was not an issue.

The “too much business” event happened right after the introduction of the IBM PC. Circuit Cellar was the third company in the country to market an IBM PC clone. I thought it was a good idea. Everybody who couldn’t get a real IBM PC started banging on our door for an MPX-16. We got $1 million in orders in just a few weeks! What was I supposed to do? Certainly not what 99% of you would have done—I stopped taking orders!

Remember, I didn’t want to work for anybody and I don’t like doing “reports.” Delivering thousands of PCs might have made us into another Apple, but it also meant using lots of outside money, no more BYTE magazine, and no more fun monthly projects. It really meant venture capitalists and lawyers, ugh. Was it the right decision? You decide. Circuit Cellar is still here, and every early PC clone maker from back then is gone.

In 1988 we started Circuit Cellar magazine. While our money came from manufacturing projects and kits, we knew the real product was Circuit Cellar itself. It was time to launch the magazine as a unique product. Back in 1988, it typically cost about $2 million for a big publisher to start a magazine like Circuit Cellar. We pulled that off without any outside sources of funding.

Finally, there comes the toughest decision for any entrepreneur—when to hang it up. I have to admit, I wasn’t quite sure about this one. It’s not because I planned to hang in until the bitter end. It was because I didn’t immediately see any company that would appreciate Circuit Cellar enough to properly continue it. Over the years, the four major U.S. technical trade publishers had sniffed around Circuit Cellar with acquisition in mind. I never got a good feeling about them, and I’m sure they knew I wasn’t going to be a happy indentured servant in any deal they proposed.

Why it takes a European publisher to appreciate an American magazine and its readers, I’ll never know. From day one, I felt Elektor would treat Circuit Cellar properly. It’s been three years since that transition, and I feel I made the correct decision. The collective benefits of being part of a larger publishing company will prolong Circuit Cellar’s existence and enable it to expand into new markets I was too complacent to tackle. The loyal Circuit Cellar employees deserve a career path beyond my short-term ambitions, and now they have it.

As for me, I plan on spending time stringing more wires for my HCS and I’m ecstatic about having zero responsibilities anymore. I’m around if needed, but plan on taking a four-wheel drive out to the beach to find me. So, until then, I’ll just close with “onward and upward,” and see where that takes me.

Team-Based Engineering

On August 6, 2012, NASA’s Curiosity rover successfully landed in Gale Crater on Mars after traveling a daunting 352 million miles. It was a triumphant moment for the scores of Curiosity team members who had spent years engineering the mission. And it has become the archetypal example of the benefit of team-based, multidisciplinary engineering.

This is an image of the Mars Hand Lens Imager (MAHLI) located on Curiosity’s arm. (Source: NASA/JPL-Caltech/MSSS)

In Circuit Cellar 267 (October 2012), Steve Ciarcia uses the Curiosity effort as a point of departure for expounding the importance of teamwork and intelligent project management. He argues that engineering endeavors of all sorts and sizes require the extraordinary focus and collaboration of multiple specialists all working toward a common goal.

Several weeks ago, I was following the successful landing of the Curiosity rover on Mars, which got me reminiscing about the importance of teamwork on large engineering projects. Obviously, a large project requires a significant number of people due to the sheer amount of work. But, more importantly, a project’s various tasks require a balanced mix of skills for successful and timely completion.

Naturally, you want engineers working in areas where they have the skills and confidence to succeed. That’s when they’ll do their best work. At a basic level, all engineers share a distinctive trait: the ability to make something you want from technology and materials. This is the best definition of “engineer” I have heard. But, as I said last month, different engineers have different interests, skills, and experience. Some engineers are good at understanding the subtleties of how a large system’s components interact, while others are good at low-level details (e.g., analog circuit design, mechanical design, or software programming). Diversity of skills among team members is important and can greatly strengthen a team.

At some point, we all look to ascend the corporate ladder and, for most companies, that involves engineers taking on management responsibilities. Actively encouraging engineers to work in areas outside their comfort zones fosters greater diversity in problem-solving approaches. Further, inspiring engineers to seek responsibility and expand their comfort zones can make them better engineers for the long term. While this is mainly true about engineers who are employees, it also applies to any contractor or consultant involved in a long-term company relationship.

Some engineers can jump into just about any area and do well. However, it is rarely in the interest of the project or good team dynamics to follow that impulse. Those engineers need to enable the specialists to do the work they’re best at and only jump into situations where they can do the most good. In other words, when team management is your primary task, engineer or not, you need to take on a mentoring role, often teaching rather than doing.

Communication among team members is also key. There must be enough of it, but not too much. I have seen teams schedule so many meetings there isn’t any time left for individuals to make progress on their assigned tasks. Meetings need to be short, to the point, and involve only those people who have a vested interest in the information being exchanged. This can range from two engineers conversing in a hallway to a large project-wide meeting that keeps everyone in sync on a project’s overall goals and status. But beware, it can be difficult to keep big meetings from getting diverted into the minutiae of a particular problem. This needs to be avoided at all costs.

Even when the schedule is tight, overstaffing usually does more harm than good. The math suggests that project completion time should go down by the inverse of the number of team members. However, this ignores the overhead of more communication among team members, which goes up by the square of the number of participants. If there are too many team members, they may start getting in each other’s way and have less sense of ownership in what they’re doing. Basically, there are some tasks that take a fixed amount of time. As the saying goes, nine women can’t make a baby in one month!
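To put rough numbers on that intuition, here is a back-of-the-envelope sketch in C. The figures (24 person-months of real work, half a person-month of coordination cost per pair of people) are invented purely for illustration; the point is only the shape of the curve:

    #include <stdio.h>

    int main(void)
    {
        /* Invented figures for illustration only. */
        const double work_months = 24.0;   /* the "real" engineering effort       */
        const double per_pair    = 0.5;    /* coordination cost per pairwise link */

        for (int n = 1; n <= 16; n++) {
            double pairs  = n * (n - 1) / 2.0;             /* links grow roughly as n^2 */
            double effort = work_months + per_pair * pairs;
            printf("%2d people: %5.2f calendar months\n", n, effort / n);
        }
        return 0;
    }

With these made-up numbers, the schedule stops improving at around ten people and then starts creeping back up. That, in miniature, is the nine-women-one-month problem.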

Motivating the team is another key factor that should be a priority shared by the team and technical management. No matter how large or small a team member’s assigned tasks, if he feels he has the responsibility and the recognition for getting that task done, he’ll be more engaged and motivated to do well. Moving team members from task to task destroys any sense of ownership. Granted, every project has the occasional fire that needs to be extinguished—all hands on deck—but, if a project is constantly in that state, then it’s pretty much doomed to fail.

Regardless of whether you are engineering a Mars rover at NASA or creating the next great social media “widget” at a venture-capital funded start-up, the dynamics of successful project management have an established methodology. Design engineers are creative and it is important to give them the flexibility to unleash their creativity—but keep it within bounds. Most projects are time and cost sensitive. Earning your next step up the corporate ladder comes only from ensuring the project is completed within budget and in a timely fashion.

Circuit Cellar 267 (October 2012) is now available.

Issue 266: An Engineer’s Communication Protocol

Electrical engineers and embedded programmers can expect to work several different jobs over the course of their careers. In the mid- to late-20th century, an engineer could expect to find a job with a large company, work it for 25 or 30 years, and then retire with a pension. But today things are different. For instance, over a 20-year period, the average engineer or programmer who reads Circuit Cellar might work for a handful of different corporations, start a business, work on contract projects, and even bill hours as a consultant. Others will move between industry and academia, serve as managers, and hold positions on corporate boards.

To excel during the course of a long tech career in the 21st century, you’ll need to continuously hone your communication skills much like you do your hardware and software abilities. You must practice self-awareness in order to assess your amiability, approachability, and listening skills. And you should endeavor to keep your communication skills up to snuff by staying on top of advances in social media and business-standard communication protocols. While some jobs will require you to work long hours alone, the success of others will require you to check your ego at the door and let your client have his or her say. It won’t be easy. But the sooner you start focusing on strengthening your communication skills, the better off you’ll be. As Steve Ciarcia notes in “Managing Expectations” (Circuit Cellar 266), your success will be based on “the art of managing expectations in the eyes of others.” Ciarcia writes:

I have a theory. People are a lot more comfortable when they can predict the future, or at least if they think they can. Look at all the resources we put into forecasting the weather or economic conditions, despite the fact that we know these are complex, chaotic systems whose sensitivity to initial conditions makes any long-term predictions less dependable. This applies on a personal level, too. We have developed protocols that help us interact with each other. We say “hello” when we pick up the phone. We shake hands when we meet for the first time. These protocols (i.e., “social customs”) help us control the process of learning about each other—what we need and what we can provide in a relationship.

Communication “protocol” is particularly important in the relationship between an engineer and his client. There is a huge amount of diversity in such a relationship. Unstated assumptions can lead to enormous gaps in expectations resulting in disappointment, frustration, anger, or even legal action in extreme cases.

Despite the fact that human resource types tend to treat engineers as interchangeable cogs in a machine, individual engineers may have distinctly different talents. Some have extensive expertise in a particular technology. Others have more general system-level design skills along with an ability to pick up the finer points of new technologies “on the fly.” Some are good at communicating with clients and developing system concepts from vague requirements, while others need to dig into the minutiae of functional specifications before defining low-level implementation details.

As an engineer, it is important to recognize where your talents lie in this broad spectrum of possibilities, and to be honest about them when describing yourself to coworkers and potential clients. Be especially careful with people who are going to represent you to others, such as headhunters and engineering services brokers. Resist the urge to “inflate” your capabilities. They’ll be doing that on your behalf, and you don’t want to compound the problem.

Similarly, engineering services customers come in all shapes and sizes. Some only have a vague product idea they want to develop, while others may have a specific description of what needs to be solved. Some small companies will want you to manage the entire product development process, while larger ones have management systems (i.e., bureaucracies) and will expect you to work within established procedures. Some will want you to work onsite using their equipment, while others will expect you to have your own workspace, support infrastructure, elaborate test equipment, and so forth.

In any case, from the customer’s point of view, there are risks to using outside engineering services. How much are they going to have to spend? What are the chances of success at that level of expenditure? Unless there are unusually large, nonrecurring engineering (NRE) charges associated with the project, labor will be the customer’s biggest expense. The obvious question is: How much time is it going to take? These are questions that are sometimes difficult to answer at a project’s inception, especially if the requirements are poorly defined. It may become necessary to guide the customer through a process of discovery that delineates individual project steps in terms of cost and accomplishment for each step. These early iterations could include things like a feasibility study or a detailed functional specification.

Generally, the customer is going to ask for a fixed-price arrangement, but beware. As the engineer, this means you are assuming all the risk. If the schedule slips or problems crop up, you are the one who will take the loss. Fixed-price contracts are a tough balancing act. Invariably, they involve padding time estimates to balance the risk-benefit ratio, but not so much that you price yourself out of the job in the first place. A better consulting situation is a time and materials contract that puts more of the risk back on the customer and provides flexibility for unforeseen glitches. Knowledgeable customers should understand and be okay with this.

The point is, you need to be willing to take the lead and let the customer know what is happening now and every step of the way. That way, they don’t get surprised, particularly in a negative way. Since we can’t assume every consulting customer is reading my editorial, it’s up to you to explain these issues. Do it right, and you’ll have a positive foundation on which to build your relationship. And, even though I have been directing my remarks primarily to independent consultants and contractors, as an engineer, you are providing your services to others. Even as a full-time employee in a company where your only “customers” are other departments (i.e., manufacturing or testing), these principles still apply. While your present salary is a given, its future progress and longevity are all about the art of managing expectations in the eyes of others.

Circuit Cellar 266 (September 2012) is now available.

Issue 265: Design with End Users in Mind

Whether you’re building or programming microcontroller-based systems, you should always keep your end users and their needs in mind. That means restraining any urges to stuff a project with superfluous functionality and parts. In “What Were They Thinking?” (Circuit Cellar 265), Steve Ciarcia argues that over-engineered electronics products are typically more annoying than impressive. Ciarcia writes:

I’ve mentioned this before, but the original BMW iDrive on my 2002 745iL was a prime example of overzealous design. Back then, somebody had the bright idea of condensing nearly all the switches and knobs formerly found on typical car dashboards down to a large knob, called iDrive, on the center console. Combined with a fancy graphics LCD, the joystick provided the driver with 3-D motion control for selecting specific subsystems and individual functions within that subsystem. The bad news was that zooming into and backing out of various control functions was so complicated it was a real driving hazard.

I’m guessing the iDrive designers got caught up in the process of creating a slick design and completely forgot about the basic reason you’re in the car in the first place—to drive, not to run a computer! Let’s face it, when you’re driving, it’s more expedient to reach for a single-function control you can locate out of the corner of your eye. The world was not ready to deal with a 3-D joystick, a complex decision tree on an LCD, and a dozen hand motions to tell a computer to accomplish the same thing. With the original iDrive, you either left the same Sirius station on forever with the hot air blowing in your face or learned to use the voice-response system to at least jump over some of the “slick” functionality and get directly to the heater or radio LCD screens.

It took until I got my 2010 X5 for them to get it right. (My three BMWs in between had iterative improvements.) The solution? They put most of the buttons back! Yeah, like most other cars these days, it still has a joystick knob and an LCD that controls individual settings, but a “real” button takes you directly to the right subsystem.

Circuit Cellar Project Editor Dave Tweed related another example to me while I was talking to him about this editorial idea. He told me about the Kawasaki intelligent proximity activation start system (KIPASS), which is an ignition system for some of Kawasaki’s high-end motorcycles that’s based on a “proximity fob” (RFID). If the physical key is in the ignition and you bring the fob within a few feet of the bike, you can start it (by turning and pushing the key). Sounds nice, right?

You can leave the key in the ignition all the time. When you walk away with the fob in your pocket, the key is captured by an electric latch so it can’t be stolen. All’s well and good. But it seems that most riders are in the habit of stopping the engine by using the “kill switch” instead of turning off the key. That’s fine until you realize that the headlight is only controlled by the ignition key—the light stays on until you turn the key to the “off” position. On a normal bike, you would do this anyway in order to pocket the key before walking away. But with the KIPASS fob, you don’t need the key and lots of bikers simply stop the engine and walk away—leaving the headlight on and killing the battery.

Here’s where it gets fun. With a dead battery, you can’t start the bike except by jumping it using another vehicle. The Allen wrench you need to open the panel you have to remove to get at the battery is in the tool kit under the seat, which is locked by the ignition key. In a real Catch-22, the ignition key is still “captured” by KIPASS even without power, and you can’t remove it to unlock the seat! The only solution is to find a friend or a tow truck with both jumper cables and the correct Allen wrench.

Home theater equipment is another irritation. Forgetting for now that most functions can be accessed only from the remote control, you’d think the few buttons they do have would work correctly. For some reason, I’m finding the built-in buttons are a lot less responsive than the remote. I suspect the interrupt handling software for the IR receiver has a much higher priority than the hardwired switches. My Blu-ray player, in particular, virtually never responds to the physical change-disc button on the front, and I almost always have to use the button on the remote. Even powering up the unit takes two to three firm panel button presses but only one click on the remote. My HDTV is similar. There’s a manual button to change the input selection, and it always takes two to three presses before it actually starts to switch inputs. On the IR remote, it is instant. Grrrr.

The point is, when designing a new piece of equipment, don’t get caught up demonstrating as much technology as possible to the exclusion of all other considerations. How hard can it be to design a piece of equipment to respond as well to a few hardwired buttons as it does to the IR remote? Similarly, just because you have the computing power and software expertise, sometimes it’s counterproductive to put all functional control only in the remote control or to put every function into one multi-option/multitasking joystick. As a designer, you have a responsibility to reduce hardware costs by eliminating excess manual controls, but you should always take the time to put yourself in the place of the end user who has to deal with your choices. More importantly, think about what happens after the dog eats the remote.
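Steve’s hunch about firmware priorities is easy to picture in code. The following sketch is purely illustrative (no vendor’s actual firmware), and every name in it, such as ir_isr(), read_panel_gpio(), queue_event(), and do_housekeeping(), is hypothetical. It shows how an interrupt-serviced IR receiver gets handled on the very next pass through the main loop, while a polled front-panel button has to survive a long debounce count that slow housekeeping work stretches out even further:

    #include <stdint.h>

    volatile uint8_t ir_code;           /* written by the IR receiver interrupt */
    volatile uint8_t ir_pending;

    void ir_isr(uint8_t decoded)        /* hypothetical: called from the IR interrupt */
    {
        ir_code = decoded;
        ir_pending = 1;                 /* picked up on the very next loop pass */
    }

    extern uint8_t read_panel_gpio(void);   /* hypothetical hardware-access stubs */
    extern void    queue_event(uint8_t ev);
    extern void    do_housekeeping(void);   /* disc servo, OSD, network chores... */

    int main(void)
    {
        uint8_t debounce = 0;

        for (;;) {
            if (ir_pending) {               /* remote: serviced immediately */
                ir_pending = 0;
                queue_event(ir_code);
            }

            if (read_panel_gpio()) {        /* panel button: merely polled */
                if (++debounce >= 200) {    /* long debounce threshold...  */
                    queue_event(0x10);      /* arbitrary "panel button" event code */
                    debounce = 0;
                }
            } else {
                debounce = 0;
            }

            do_housekeeping();              /* ...and slow chores between polls make
                                               that threshold even harder to reach */
        }
    }

Nothing here is hard to get right; it just takes a designer who cares as much about the panel button as about the remote.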

Circuit Cellar August 2012 is now available.

Issue 264: A Case for the DIY Electronics Fix

Most of today’s expensive electronics systems are engineered to be left alone—meaning, the manufacturer doesn’t want you opening, servicing, or tweaking the products on your own. But that doesn’t mean intelligent, inquisitive engineers shouldn’t give modern electronics gadgets a good hack. The rewards tend to outweigh the drawbacks. As Steve Ciarcia argues in Circuit Cellar 264 (July), you stand to learn a lot by looking inside electronics systems, especially broken ones. Even if you can’t fix them, you can pull out the components and use them in future projects.

In “Fix It or Toss It?” he writes:

No prophetic diatribes or deep philosophical insights this month. Just the musings of an old guy who apparently doesn’t know when to throw in the towel. Let me explain.

I have a friend with a couple of LCD monitors he purchased about two years ago. Perhaps due to continuous duty operation (only interrupted by automatic “Sleep mode”), both were now exhibiting some flakiness, particularly when powering up from “sleep.” More importantly, if power was completely shut down, as in a power failure, they wouldn’t come back on at all without manual intervention. He asked whether they could be repaired or would have to be replaced.

Since I remembered something about a few manufacturers who’d had a bunch of motherboard problems a while back due to bad electrolytic capacitors, I suspected a power supply problem. Of course, agreeing to look into the problem and figuring out how to get inside the monitors was a whole different issue. Practically all of today’s electronics are not meant to be opened or serviced internally at all. Fortunately, my sledgehammer disassembly techniques weren’t so bad that I couldn’t reassemble them. In the process, I found several bulging and leaking capacitors on the power supply board. After replacing the capacitors, the monitors came right up with no problems.

Power supplies just seem to have it out for me. Recently, I had a wireless router stop working and, after a little diagnosing, I determined that its power supply (an external wall-wart) had failed. While hardly worth my time, I was curious, so I cracked open the sealed case to see just how complicated it was. Sure enough, replacing one scorched electrolytic capacitor and gluing the case back together put me back in business.

All this got me thinking about the relative value of various electronic devices. What is the replace/repair decision line? These $200 high-tech electronic LCD monitors failed because of $3 worth of old-tech components that I was fortunately able to fix. The time spent on the repair has some value, but it also takes time to shop for and purchase a replacement. There must be better monitors these days for the same price. Should I have told him to toss them and use the opportunity to upgrade?

It’s interesting to consider the type of person who repairs stuff like this (being an EE with a fully equipped lab doesn’t hurt either). I mean, I do it primarily because I like knowing how things work. Okay, so I’m getting a little carried away after fixing a couple burnt capacitors, but there’s still an incredible sense of satisfaction in being able to put something back together and having it work. Since I was a kid, dissecting circuits and equipment helped me understand the design choices that were made, and my curiosity naturally led me to engineering.

Now, I recognize that people like me who repair their own electronics for curiosity or adventure are very much in the minority. So, what about the average person with a failed piece of $200 electronics? For them, the only goal is getting the functionality back as soon as possible. Do they go to a repair service where it takes longer and involves a couple trips? Worse yet, some things just can’t be repaired, and the bad news then is having both the repair “inspection” cost and the replacement cost. I’m guessing that in 99% of typical cases, the no-brainer decision is to toss the failed unit and buy a new one—without ever giving me a chance to tear it apart and play with it.

Let’s face it. Taking modern equipment apart to make even simple repairs is next to impossible. The manufacturers use every trick in the design book to minimize the cost of the goods. This means leaving out features that might make end-user repair easier. Cases that snap together (once)—or worse, are heat-welded together—are cheaper than cases with screws or latches. Most board electronics are custom-labeled surface-mount devices, everything uses custom connectors, the short cabling between boards has no slack to swing out subassemblies for access, and so forth. You couldn’t even fit a scope probe inside most of this stuff if you tried. Sure, some manufacturers do still put component reference designators in the silkscreen, but I suspect it’s so they can repair subassemblies on their production line before final assembly, not make it easier for me to poke around.

Anyway, like I said, there’s no prophetic conclusion to be drawn from all of this. I fix stuff because I enjoy the challenge and I usually learn something from it. Even if I can’t repair the item, I usually keep some of the useful components and/or subassemblies for experimenter one-off projects or proof-of-concept prototypes. You never know when something in the junk box might prove useful.

Circuit Cellar 264 (July 2012) is now available on newsstands and at the Circuit Cellar Webshop.

Issue 263: Privately Funded Engineering

The public vs. private funding debate endures in the United States and Europe. Everything from energy generation (e.g., oil) to social welfare programs is debated daily by government committees, discussed in corporate board rooms, and argued over at lunch tables from Los Angeles to Brussels and beyond.

One particularly interesting discussion pertains to the role of the public and private sectors in space flight and exploration, which comprises fields such as aerospace design, embedded electronics, and robotics. In Circuit Cellar June 2012, Steve Ciarcia weighs in on this debate and makes a thought-provoking argument for the benefits of privately funded engineering endeavors. In “Google Lunar X PRIZE” he writes:

This is certainly an exciting time to be an engineer. We have seen the success NASA has had with robotic exploration, especially on nearby planets such as Mars. Not everything will be coming from NASA in the future, however: thanks to the advances in robotics and launch vehicles, “space” will soon become the province of private enterprise and not just government. Very soon, commercial space flight will become a reality.

The Google Lunar X PRIZE provides a focal point for these efforts. Google is offering a $20 million prize to the first team to complete a robotic mission to the moon. The basic goal is to put a lander on the surface of the moon, have it travel at least 500 m once it’s there, and send back high-definition pictures and video of what it finds. There’s a $5 million second prize, and also $5 million in bonus prizes for completing additional tasks such as landing near the site of a previous NASA mission, discovering water ice, traveling more than 5,000 m while on the surface, or surviving the 328-hour lunar night.

When the Lunar X PRIZE registration closed in December 2010, a global assortment of 33 separate teams had registered to compete. Seven of those teams have subsequently dropped out, but there are still 26 active teams, including 11 from the U.S. The first launch is expected sometime in 2013, and there’s plenty of time before the competition ends December 31, 2015. Some teams are even planning multiple launches to improve their chances of winning.

It’s interesting to browse through the team information and see the vast diversity in the approaches they’re taking. This is the part that is most exciting from an engineering point of view. Some teams are building their own launch systems, while others are planning to contract with existing government or commercial services, such as SpaceX. There’s a huge amount of variety among the landers, too: some will roll, some will walk, and some will fly across the moon in order to cover the required distance. Each one takes a different approach to dealing with the difficult terrain on the moon, and issues such as the raw temperature extremes between blazing sunlight and black space.

This sort of diversity is a powerful driver for future development. Each approach will have its strengths and weaknesses, and there will certainly be some spectacular failures. Subsequent missions will draw on the successful parts of each prior one. Contrast this to the approach NASA has tended to take of putting all its effort into a single design that had to succeed.

It’s also interesting to consider the economics of this sort of competition. The prize doesn’t really approach the full investment required to succeed. Indeed, Google is quite up front about the fact that it probably only covers about 40%, based on other recent high-tech competitions such as the Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge and the Ansari X PRIZE. This means the teams need to raise most of their money in the private sector, which keeps them focused on technologies that are commercially viable.

I have long been a fan of “hard” science fiction, as typified by writers such as Larry Niven, Arthur C. Clarke, and Michael Crichton. To me, hard science fiction means you posit a minimal set of necessary technologies, such as faster than light (FTL) space travel or self-aware computers/robots, and then explore the implications of that universe without introducing new “magic” whenever your story gets stuck. In particular, Larry Niven’s “Known Space” universe—particularly in the near future—includes extensive exploration of the solar system by private entrepreneurs. With the type of competition fostered by the Google Lunar X PRIZE, I see those days as being just around the corner.

The competition among these teams, and the commercial companies that arise from them, will be good for society as a whole. For one thing, we’ll finally see the true cost of getting to space, as opposed to the massive amounts of money we’ve been pouring into NASA to achieve its goals. As a public agency, NASA has many operational constraints, and as a result, it tends to be ultra-conservative in terms of risk taking. Policies that dictate incorporating backups for the backups certainly make a space mission more expensive than the alternative.

Despite these remarks, however, I don’t mean to sound overly negative about NASA at all. It has had many spectacular successes, starting with the Mercury, Gemini, and Apollo manned space programs, as well as robotic exploration of the solar system with the likes of Pioneer and Voyager, and more recently with the remarkable longevity of the Mars rovers, Spirit and Opportunity. There have been many beneficial spin-offs of the space program and we have all benefited in some way. We wouldn’t be where we are today without the U.S. space program. But the future is yet to be written. There are striking differences between a publicly run space program and the emerging free-market privately funded endeavors. We would do well to recognize the opportunities and the potential benefits.

Circuit Cellar 263 (June 2012) is now available on newsstands.

Issue 262: Full-Featured SBCs at Your Fingertips

Fact 1: Easy-to-use, full-featured SBCs are popping up everywhere. Fact 2: Open-source software is becoming more commonplace each day. (Even Microsoft Corp. has begun taking open source seriously.) Conclusion: It’s an opportune time to be an electronics innovator.

In Circuit Cellar May 2012, Steve Ciarcia surveys some of the more affordable, 32-bit hardware options at your disposal. In “Power to the People” he writes:

While last month I may have implied that 8 bits is enough to control the world, there are significant things happening in high-end, 32-bit embedded processors that might really produce that inevitability. There are quite a few new system-on-chip-based, low-cost, single-board computers (SBCs) specifically designed to compete with or augment the smartphone and pad computer market. These and other full-feature budget SBCs are something you should definitely keep on your radar.

These devices typically have a high-end, 32-bit processor, such as ARM Cortex-A8, running 400 MHz to 1,000 MHz, coupled with a GPU core (and sometimes a separate DSP core) along with 128 MB to 512 MB of DDR SDRAM. These boards typically boot a full-up desktop operating system (OS)—such as Linux or Android (and soon Windows 8)—and often contain enough graphics horsepower for full-frame rate HD video and gaming.

Texas Instruments made a significant splash a few years ago with the introduction of the BeagleBoard SBC (beagleboard.org, $149 at the time) with their OMAP3530 chip along with 256 MB of flash memory and 128 MB of SDRAM running Angstrom Linux on a high-resolution HDMI monitor. That board has since been superseded by the BeagleBoard-xM (1,000 MHz and 512 MB) at the same price and supplemented by the BeagleBone board. Selling for just $89, BeagleBone includes a 600-MHz AM3517 processor, 256 MB of SDRAM, a 2-GB microSD card, and Ethernet (something the original BeagleBoard lacked).

All of the software for these boards is open source, and a significant community of developers has grown up around them. In particular, a lot of effort has been put into software infrastructure, with a number of OSes now ported to many of these boards, along with languages (both compiled and interpreted) and application frameworks, such as XBMC for multimedia and home-theater applications.

Another SBC that has been generating a lot of buzz lately is the Raspberry Pi board (raspberrypi.org), mainly because the “B” version is priced at just $35. Raspberry Pi is based on a Broadcom chip, which is unexpected. Broadcom traditionally only gave hardware documentation and software drivers to major customers, like set-top box manufacturers, not to an open-source marketplace. Apparently, the only proprietary piece of software for the Raspberry Pi board will be the driver/firmware for the GPU core. Unfortunately, as I write this, there are a few lingering manufacturing issues, and Raspberry Pi still awaits shipping.

Both the concept and size of an “SBC” are evolving as well. In addition to the bare development boards, a number of interesting second-level products based on these chips have begun to appear. Take a look at designsomething.org. A couple of projects in particular are Pandora’s Pandora Handheld and Always Innovating’s HDMI Dongle. The former is a pocket-sized computer that flips open to reveal an 800 × 480 touchscreen and an alphanumeric keypad with gaming controls. Besides the obvious applications as a video viewer, gaming platform, and “super PDA,” I see huge opportunities for this box as a user interface for things like USB-based test instruments.

The Always Innovating HDMI Dongle is amazing for how much functionality they’ve crammed into a small package: it’s no bigger than a USB thumb drive (it also needs a USB socket for power), but it can turn any TV with an HDMI input jack and USB socket into a fully functional, Android-based computer with 1080p HD video playback, games, and Wi-Fi-based Internet access. These dongles might easily become distributed home theater nodes, delivering high-quality video and audio to multiple rooms from a common file server; or, one of the other low-cost SBCs might become the brain of a robot that can see and understand the world around it using open-source computer vision (OpenCV).

While it makes an old hardware guy like me feel less useful, it’s clear that the hardware—or, more specifically, the necessity to always design unique hardware—is no longer the bottleneck when it comes to powerful embedded applications. In a turnaround from decades ago, the ball is now clearly in the court of the software developers.

The applications for these boards and “thumb-thingies” are endless. Basically, they have the hardware muscle to handle anything that a smartphone or pad computer can do for much less. A lot of work has already been done on the OS and middleware layers. We just need to dive in and create the applications! Then it basically becomes a simple matter of programming. Of course, you know how much I personally look forward to that.

Circuit Cellar 262 (May 2012) is on newsstands now. Click here for a free preview of the issue.

Issue 261: The Deeply Embedded 8-Bit MCU

The 8-bit debate continues. Last week at Design West in San Jose, CA, the topic came up more than once, and I reported on Microchip Technology’s expanded 8-bit PIC16F(LF)178X midrange core MCU family.

Over the years, Circuit Cellar has published several articles on the topic. Back in Circuit Cellar 8 (1989), Tom Cantrell addressed it in an article titled “HD647180X: A New 8-Bit Microcontroller - Embedded Controllers Get Respect.” In 2010 in Circuit Cellar 143 he tackled the topic again in an article titled “Live for Today: The 8-Bit MCU Still Matters.” This month, in an editorial titled “8-Bit Control Is Dead – No Way!” (Circuit Cellar 261), Steve Ciarcia weighs in on the long-debated topic.

For years tech pundits have been predicting the end of 8-bit micros. Apparently, with the prices of 16- and 32-bit MCUs constantly dropping, and presuming you always want your application to do more stuff, there is no reason not to replace a less powerful MCU, right? In my opinion, it was a false assumption then, and it still is today.

We can’t look at this as a zero-sum game. Yes, 32-bitters open up all kinds of new opportunities for embedded processing, especially in the area of network-connected personal entertainment and information devices. But this doesn’t mean they’re a better fit in the low-end control and text-based applications that the 8-bitters have occupied for so long. The boundaries are certainly “fuzzy,” but consider how we tend to generally categorize MCUs.

At the low end, we have the 8-bit controllers which typically have 8-bit data and registers along with 16-bit address paths. This is a sweet spot for all kinds of control and text-based functions that simply don’t need to handle more than 64 KB of data at a time. The price/performance of the 8-bit chip should win this fight every time.

In the midrange, we have the 16-bit MCUs and lower-end 16-bit DSP chips. These chips can do a bunch more because they handle 16-bit data and have at least 24-bit address paths. There is often a hardware multiplier as well, which makes this class of chip ideal for many types of signal processing and audio applications.

At the high end, there is the 32-bit MCU/MPU (and higher-end DSPs) that have 32-bit data and address paths. These are the chips that have the power to drive an interactive graphical user interface and process video signals in real time.

It’s clear that chip manufacturers believe in the future of all three classes of MCU; just look at the innovations they continue to introduce at all levels. Fundamentally, as the silicon improves in terms of transistor density, more memory fits onto a smaller chip, and there’s more room for on-chip peripherals. Also, clock and power management has become a lot more flexible than ever before. The lower-end and midrange MCUs are all available with some combination of hardware timers (e.g., PWM, pulse capture, and motor control), communications (e.g., UART, SPI, I2C, CAN, USB, etc.), and analog interface (e.g., ADC, DAC, and touch). Some include hardware controllers for multiplexed LCDs or Ethernet interfaces.

At the higher end, in addition to all of that, we also see options like on-chip SDRAM controllers, SD memory and I/O controllers, Ethernet MAC (and sometimes PHY), mass storage (ATAPI, SATA) and video support, including in some cases a separate GPU core. Basically, everything you need to run a full-up operating system like Windows, MacOS, or Android.

Probably the greatest result of across-the-board lower MCU costs is that we will be seeing multiple chips where just one was used before. This has been the situation with automobiles for years, where reliability has increased with lots of “smart”-control modules all networked together. Certainly, this makes sense in a $30,000 car, but the concept is moving down the cost spectrum as well. Take your typical household washing machine or dryer that has a motor or two and a control panel. Instead of one chip handling all of the control functions and user interface I/O, there will be one (or two) motor controller chips with a communications interface (e.g., SPI, I2C, CAN, etc.) and a second chip with a communications interface along with an LCD controller and touch sensor support.

If the system designers are forward-thinking when they define the protocol by which these subsystems communicate, they’ll end up with intelligent building blocks (e.g., “smart motor,” “smart valve,” “smart sensor”) that can be easily reused in other products, keeping manufacturing costs low. The modules themselves will be reliable and energy-efficient, contributing substantially to end-user satisfaction and low recurring costs. The key is to make each module just smart enough without going overboard on processing power or overloading it with a top-heavy protocol.

And, that’s where the lowly 8-bit MCU shines. A smart valve that just needs to sit on a LIN or 1-wire bus, operate a solenoid, and verify that it opened or closed doesn’t need a lot of CPU cycles or 32-bit addressing to do the job. One of the tiny 8-bitters in a six- or eight-pin package will do nicely, and might even cost less than the manufacturing cost and testing of the dedicated wiring harness needed to do the job in the traditional way. There’s no way a 16-bit or 32-bit MCU makes sense in this context. But more importantly, these lowly control tasks aren’t going to go away. In fact, I think you’ll be seeing a lot more of them and they’ll all need MCUs. So, although it will be less visible, the 8-bit MCU will still be deeply embedded in increasingly subtle, but important, parts of your life, working hard so you don’t have to.
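To make that concrete, here is a minimal sketch of what the entire firmware for such a smart valve might look like. It is written in C with hypothetical hardware-abstraction calls (bus_read_byte(), bus_write_byte(), solenoid_set(), limit_switch_closed(), delay_ms()) and invented command codes; no particular MCU, bus, or protocol is implied:

    #include <stdint.h>

    /* Invented command and status codes for this illustration. */
    #define CMD_OPEN     0x01
    #define CMD_CLOSE    0x02
    #define CMD_STATUS   0x03
    #define STAT_OPEN    0x01
    #define STAT_CLOSED  0x02
    #define STAT_FAULT   0xEE
    #define STAT_UNKNOWN 0xFF

    /* Hypothetical hardware-abstraction routines a real part would map onto
       its UART/LIN peripheral, a solenoid-driver pin, and a feedback switch. */
    extern uint8_t bus_read_byte(void);        /* blocks until a command arrives */
    extern void    bus_write_byte(uint8_t b);
    extern void    solenoid_set(uint8_t energized);
    extern uint8_t limit_switch_closed(void);  /* 1 = valve fully closed */
    extern void    delay_ms(uint16_t ms);

    int main(void)
    {
        for (;;) {
            uint8_t cmd = bus_read_byte();

            switch (cmd) {
            case CMD_OPEN:
                solenoid_set(1);
                delay_ms(50);                  /* give the valve time to move */
                bus_write_byte(limit_switch_closed() ? STAT_FAULT : STAT_OPEN);
                break;
            case CMD_CLOSE:
                solenoid_set(0);
                delay_ms(50);
                bus_write_byte(limit_switch_closed() ? STAT_CLOSED : STAT_FAULT);
                break;
            case CMD_STATUS:
                bus_write_byte(limit_switch_closed() ? STAT_CLOSED : STAT_OPEN);
                break;
            default:
                bus_write_byte(STAT_UNKNOWN);
                break;
            }
        }
    }

The whole job fits comfortably in a six- or eight-pin 8-bit part with program memory to spare; nothing in it comes close to needing a wider data path or a bigger address space.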


Issue 260: Embedded Control Languages

Choosing a programming language is an essential part of any serious embedded design project. But the task can be daunting. When should you use a processor-specific language? Why not just use C?

In the March issue of Circuit Cellar, Steve Ciarcia reviews a handful of programming languages and the types of processors—and projects—for which they are intended.

Here’s Steve’s take:

Let’s talk about languages—specifically, embedded control languages. Everyone has their favorite, typically whatever they learned first, but when you get right down to it, all languages offer the same basic features.

First of all, you need to be able to specify a sequence of steps and then select between two (or more) alternative sequences—the if-then-else construct. You also need to be able to repeat a sequence, or loop, and exit that loop when a condition is met. Finally, you want to be able to invoke a sequence from multiple places within other sequences—a call function.
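For illustration, all three of those constructs fit in a few lines of C (the ADC scaling and the limit value below are arbitrary):

    #include <stdio.h>

    /* A sequence we can invoke from multiple places: the "call" construct. */
    static int to_millivolts(int raw)
    {
        return raw * 5000 / 1024;            /* e.g., scale a 10-bit ADC reading */
    }

    int main(void)
    {
        int samples[] = { 12, 207, 480, 733, 1010 };
        int count = sizeof samples / sizeof samples[0];

        for (int i = 0; i < count; i++) {    /* repeat a sequence: the loop */
            int mv = to_millivolts(samples[i]);

            if (mv > 3000) {                 /* select between alternatives: if-then-else */
                printf("sample %d: %d mV, over limit, stopping\n", i, mv);
                break;                       /* exit the loop when the condition is met */
            } else {
                printf("sample %d: %d mV\n", i, mv);
            }
        }
        return 0;
    }

Every language discussed below offers some spelling of exactly these three ideas; the differences lie in how far each one abstracts them from the hardware.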

Assembly language is the lowest-level language you can use on most machines. Its statements bear a one-to-one relationship with the instructions the hardware executes. If-then-else and loop-exit constructs are implemented using conditional and unconditional branch instructions, and there’s usually a hardware stack that facilitates subroutine call and return. This is both a blessing and a curse—it enables you to become familiar with the hardware and use it most effectively, but it also forces you to deal with the hardware at all times.

Very early on in the development of computers, the concept of a high-level language (HLL) was developed to reduce this hardware focus. By creating a set of abstract computer operations that aren’t necessarily tied to a particular processor, the HLL frees the programmer from a specific hardware architecture and enables him to focus on the actual algorithm development. The compiler and library writers took these abstractions and efficiently mapped them onto the hardware. HLL opened up programming to “non-hardware” people whose first interest was the application problem and its solution.

Today, there are literally hundreds of computer languages (see http://en.wikipedia.org/wiki/List_of_programming_languages). Some of them are completely general-purpose, while others are very domain-specific. Two languages have been implemented on virtually every microprocessor ever invented: C and BASIC. (There’s no way I can mention them all, so I’ll just touch on some popular embedded ones.) Of the two, C is by far the more popular for embedded apps, since it runs more efficiently on most hardware. Many people would argue that C isn’t a “true” HLL; but even still, it’s a huge step up from Assembly language in terms of productivity.

There have been some niche languages intended for small systems. For example, there’s what you might call a family of reverse-Polish notation (RPN) languages: Forth, PostScript, and does anyone remember a tiny interpreted language called Mouse? These never caught on in any big way, except for PostScript, which is almost universally available these days on printers as a page-description language. But it’s a full programming language in its own right—just ask Don Lancaster!

Along the way, there have been a few processor-specific languages invented. For example, there’s JAL—just another language—which is optimized for 8-bit Microchip PIC processors, and Spin, which is designed to support the parallel-processing features of the Parallax Propeller chip.

Once you start getting into larger 16- and 32-bit chips, the set of available tools expands. Many of these processors have C/C++ toolchains based on the GNU Compiler Collection (GCC). However, this means you can really use any number of languages in the collection on these processors, including Fortran, Java, and Ada.

The designers of some embedded systems want to include the ability for the system to be programmed by their end users. To this end, the concept of an “extension language” was developed. Two notable examples are Tcl and Lua. These provide a standard notation for control constructs (branching, looping, and function calls) wrapped around application-specific primitive operations implemented by the system designer.
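As a concrete sketch of the extension-language idea, here is a minimal C host that embeds Lua and exposes one application-specific primitive to user scripts. The set_led() primitive (and the idea that it would touch real hardware) is invented for this example; the Lua C API calls themselves are the standard ones:

    #include <stdio.h>
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* The application-specific primitive the system designer provides.
       A real product would drive hardware here; this sketch just prints. */
    static int set_led(lua_State *L)
    {
        int channel = (int)luaL_checkinteger(L, 1);
        int on      = lua_toboolean(L, 2);
        printf("LED %d -> %s\n", channel, on ? "on" : "off");
        return 0;                            /* number of values returned to the script */
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);                    /* load the standard Lua libraries */
        lua_register(L, "set_led", set_led); /* expose the primitive by name */

        /* A "user" script: ordinary Lua control constructs wrapped around
           the primitive we just registered. */
        const char *script = "for ch = 1, 4 do set_led(ch, ch % 2 == 1) end";

        if (luaL_dostring(L, script) != 0)
            fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));

        lua_close(L);
        return 0;
    }

On a desktop this builds with something like cc host.c -llua (library names vary by distribution), and the same pattern scales down to any embedded target that can afford Lua’s modest footprint.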

Once you start getting into systems that are large enough to require an operating system (real-time or otherwise), many of the available choices support the POSIX API. This means you can use any of the mainstream scripting languages—such as shell scripts, Perl, Python, Ruby, PHP, etc.—either internally or exposed to the end user.

And finally, there’s the web-based user interface. Even relatively simple embedded applications can have sophisticated GUIs by pushing much of the programming out to the browser itself by means of ECMAScript (JavaScript) or Java. The embedded app just needs to implement a basic HTTP server with storage for the different resources needed by the user interface running on the browser. There are all kinds of toolkits and frameworks out there that help you build such a system.
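The “basic HTTP server” half of that arrangement really can be basic. The sketch below uses POSIX sockets in C to serve a single hard-coded status page on port 8080; error handling and request parsing are omitted, and the page contents are invented for the example. Roughly speaking, this is all the embedded side needs before the browser-side toolkit takes over:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

        struct sockaddr_in addr = { 0 };
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 4);

        const char *page =
            "HTTP/1.0 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "\r\n"
            "<html><body><h1>Pump status: RUNNING</h1></body></html>";

        for (;;) {
            int client = accept(srv, NULL, NULL);
            char request[1024];
            read(client, request, sizeof request);   /* request is ignored in this sketch */
            write(client, page, strlen(page));
            close(client);
        }
    }

Point a browser at port 8080 and the “user interface” is whatever markup and JavaScript you choose to hand back; the embedded side never has to grow beyond serving files and a few status or command URLs.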

I’ll stop now. The point is, even in the world of embedded computer applications, there’s a wide variety of tools available, and picking the right set of tools for the job at hand is part of the system design process. Taking a higher-level view, this brief survey might give you an idea of what kinds of tools you would want to put some effort into learning, based on where your interests lie or the application dictates.