A Serene Workspace for Board Evaluation and Writing

Electronics engineer, entrepreneur, and author Jack Ganssle recently sent us information about his Finksburg, MD, workspace:

I’m in a very rural area and I value the quietness and the view out of the window over my desk. However, there are more farmers than engineers here so there’s not much of a high-tech community! I work out of the house and share an office with my wife, who handles all of my travel and administrative matters. My corner is both lab space and desk. Some of the equipment changes fairly rapidly as vendors send in gear for reviews and evaluation.

Ganssle’s desk is home to ever-changing equipment. His Agilent Technologies MSO-X-3054A mixed-signal oscilloscope is a mainstay.

The centerpiece, though, is my Agilent Technologies MSO-X-3054A mixed-signal oscilloscope. It’s 500 MHz, 4 GSps, and includes four analog channels and 16 digital channels, as well as a waveform generator and protocol analyzer. I capture a lot of oscilloscope traces for articles and talks, and the USB interface sure makes that easy. That’s pretty common on oscilloscopes now, but being an old-timer I remember struggling with a Polaroid scope camera.

The oscilloscope’s waveform generator has a somewhat slow (20-ns) rise time when making pulses, so the little circuit attached to it sharpens this to 700 ps, which is much more useful for my work. The photo also shows a Siglent SDS1102CML oscilloscope on the bench that I’m currently evaluating. It’s amazing how much capability gets packed into these inexpensive instruments.

The place is actually packed with oscilloscopes and logic analyzers, but most are tucked away. I don’t know how many of those little USB oscilloscope/logic analyzers vendors have sent for reviews. I’m partial to bench instruments, but do like the fact that the USB instruments are typically quite cheap. Most have so-so analog performance but the digital sampling is generally great.

Only barely visible in the picture, under the bench, is an oscilloscope from 1946 with a 2” CRT that I got on eBay just for fun. It’s a piece of garbage with a very nonlinear timebase, but it’s a lot of fun. The beam is aimed by moving a magnet around! Including the CRT, there are only four tubes. Can you imagine making anything with just four transistors today?

The big signal generator is a Hewlett-Packard 8640B, one of the finest ever made, with astonishing spectral purity and 0.5-dB amplitude flatness across 0.5 MHz to 1 GHz. A couple of digital multimeters and a pair of power supplies are visible as well. The KORAD supply has a USB connection and a serviceable, if clunky, PC application that drives it. Sometimes an experiment needs a slowly changing voltage, which the KORAD manages pretty well.

They’re mostly packed away, but I have a ton of evaluation kits and development boards. A Xilinx MicroZed is shown on the bench. It’s a very cool board that has a pair of Cortex-A9s plus FPGA fabric in a single chip.

I use IDEs and debuggers from, well, everyone: Microchip Technology, IAR Systems, Keil, Segger, you name it. These run on a variety of processors, but, like so many others, I’m using Cortex-M series parts more and more.

My usual lab work is either evaluating boards, products, and instruments, or running experiments that turn into articles. It pains me to see how much engineering is done via superstition today. For example, people pick switch contact debounce times based on hearsay or smoke signals or something. Engineers need data, so I tested about 50 pairs of switches to determine what real bounce characteristics are. The results are on my website. Ditto for watchdog timers and other important issues embedded people deal with.
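
A minimal counter-based debouncer in C illustrates the point. This is a sketch, not Ganssle’s own code: the 20-ms interval and the read_raw_switch() helper are illustrative placeholders, and the interval should come from measured bounce data, which is exactly his argument.

    #include <stdbool.h>
    #include <stdint.h>

    /* Counter-based debouncer: accept a new switch state only after the
     * raw input has been stable for DEBOUNCE_MS consecutive 1-ms samples.
     * DEBOUNCE_MS is a placeholder; derive it from measured bounce data. */
    #define DEBOUNCE_MS 20u

    extern bool read_raw_switch(void);   /* hypothetical raw GPIO read */

    bool debounced_switch(void)          /* call once per 1-ms tick */
    {
        static bool stable_state, last_raw;
        static uint16_t count;

        bool raw = read_raw_switch();
        if (raw == last_raw) {
            if (count < DEBOUNCE_MS)
                count++;
            else
                stable_state = raw;      /* stable long enough: accept it */
        } else {
            count = 0;                   /* input changed: restart the timer */
            last_raw = raw;
        }
        return stable_state;
    }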

Ganssle notes that his other “bench” is his woodworking shop. To learn more about Ganssle, read our 2013 interview.

Q&A: Scott Garman, Technical Evangelist

Scott Garman is more than just a Linux software engineer. He is also heavily involved with the Yocto Project, an open-source collaboration that provides tools for the embedded Linux industry. In 2013, Scott helped Intel launch the MinnowBoard, the company’s first open-hardware SBC. —Nan Price, Associate Editor

Scott Garman

NAN: Describe your current position at Intel. What types of projects have you developed?

SCOTT: I’ve worked at Intel’s Open Source Technology Center for just about four years. I began as an embedded Linux software engineer working on the Yocto Project, and within the last year I moved into a technical evangelism role representing Intel’s involvement with the MinnowBoard.

Before working at Intel, my background was in developing audio products based on embedded Linux for both consumer and industrial markets. I also started my career as a Linux system administrator in academic computing for a particle physics group.

Scott was involved with an Intel MinnowBoard robotics and computer vision demo, which took place at LinuxCon Japan in May 2013.

I’m definitely a generalist when it comes to working with Linux. I tend to bounce around between things that don’t always get the attention they need, whether it is security, developer training, or community outreach.

More specifically, I’ve developed and maintained parallel computing clusters, created sound-level management systems used at concert stadiums, worked on multi-room home audio media servers and touchscreen control systems, dug into the dark areas of the Autotools and embedded Linux build systems, and developed fun conference demos involving robotics and computer vision. I feel very fortunate to be involved with embedded Linux at this point in history—these are very exciting times!

Scott is shown working on an Intel MinnowBoard demo, which was built around an OWI Robotic Arm.

NAN: Can you tell us a little more about your involvement with the Yocto Project (www.yoctoproject.org)?

SCOTT: The Yocto Project is an effort to reduce the amount of fragmentation in the embedded Linux industry. It is centered on the OpenEmbedded build system, which offers a tremendous amount of flexibility in how you can create embedded Linux distros. It gives you the ability to customize nearly every policy of your embedded Linux system, such as which compiler optimizations you want or which binary package format you need to use. Its killer feature is a layer-based architecture that makes it easy to reuse your code to develop embedded applications that can run on multiple hardware platforms by just swapping out the board support package (BSP) layer and issuing a rebuild command.
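
To make the layer swap concrete, here is a minimal sketch; meta-boardA and the machine name are illustrative, not real layers. The BSP layer is one entry in the build’s conf/bblayers.conf, the matching machine is selected in conf/local.conf, and then you rebuild (e.g., bitbake core-image-minimal):

    # conf/bblayers.conf -- the BSP layer is one entry in the layer list
    BBLAYERS += "/path/to/meta-boardA"   # swap in a different BSP layer to retarget

    # conf/local.conf -- select the machine the BSP layer provides
    MACHINE = "boarda"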

New releases of the build system come out twice a year, in April and October.

Here, the OWI Robotic Arm is being assembled.

I’ve maintained various user space recipes (i.e., software components) within OpenEmbedded (e.g., sudo, openssh, etc.). I’ve also made various improvements to our emulation environment, which enables you to run QEMU and test your Linux images without having to install them on hardware.

I created the first version of a security tracking system to monitor Common Vulnerabilities and Exposures (CVE) reports that are relevant to recipes we maintain. I also developed training materials for new developers getting started with the Yocto Project, including a very popular introductory screencast, “Getting Started with the Yocto Project—New Developer Screencast Tutorial.”

NAN: Intel recently introduced the MinnowBoard SBC. Describe the board’s components and uses.

SCOTT: The MinnowBoard is based on Intel’s Queens Bay platform, which pairs a Tunnel Creek Atom CPU (the E640 running at 1 GHz) with the Topcliff Platform controller hub. The board has 1 GB of RAM and includes PCI Express, which powers our SATA disk support and gigabit Ethernet. It’s an SBC that’s well suited for embedded applications that can use the extra CPU and, especially, I/O performance.

Scott doesn’t have a dedicated workbench or garage. He says he tends to just clear off his desk, lay down some cardboard, and work on things such as the Trippy RGB Waves Kit, which is shown.

The MinnowBoard also has the embedded bus standards you’d expect, including GPIO, I2C, SPI, and even CAN (used in automotive applications) support. We have an expansion connector on the board where we route these buses, as well as two lanes of PCI Express for custom high-speed I/O expansion.

There are countless things you can do with the MinnowBoard, but I’ve found it is especially well suited for projects that combine embedded hardware with computing applications that benefit from higher performance (e.g., robots that use computer vision, central hubs for home-automation projects, networked video-streaming appliances, etc.).

And of course it’s open hardware, which means the schematics, Gerber files, and other design files are available under a Creative Commons license. This makes it attractive for companies that want to customize the board for a commercial product; for educational environments, where students can learn how boards like this are designed; or for those who want an open environment for interfacing with their hardware projects.

I created a MinnowBoard embedded Linux board demo involving an OWI Robotic Arm. You can watch a YouTube video to see how it works.

NAN: What compelled Intel to make the MinnowBoard open hardware?

SCOTT: The main motivation for the MinnowBoard was to create an affordable Atom-based development platform for the Yocto Project. We also felt it was a great opportunity to try to release the board’s design as open hardware. It was exciting to be part of this, because the MinnowBoard is the first Atom-based embedded board to be released as open hardware and reach the market in volume.

Open hardware enables our customers to take the design and build on it in ways we couldn’t anticipate. It’s a concept that is gaining traction within Intel, as can be seen with the announcement of Intel’s open-hardware Galileo project.

NAN: What types of personal projects are you working on?

SCOTT: I’ve recently gone on an electronics kit-building binge. Getting some practice with my soldering iron again on a well-paced project is a meditative and restorative activity for me.

Scott’s Blinky POV Kit is shown. “I don’t know what I’d do without my PanaVise Jr. [vise] and some alligator clips,” he said.

I worked on one project, the Trippy RGB Waves Kit, which includes an RGB LED and is controlled by a microcontroller. It also has an IR sensor that is intended to detect when you wave your hand over it. This can be used to trigger some behavior of the RGB LED (e.g., cycling the colors). Another project, the Blinky POV Kit, is a row of LEDs that can be programmed to create simple text or logos when you wave the device around, using persistence of vision.
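
A minimal C sketch of the behavior he describes: a hand-wave over the IR sensor advances the RGB LED to the next color. All three helper functions (ir_hand_detected, rgb_set, delay_ms) are hypothetical stand-ins for the kit’s actual firmware.

    #include <stdbool.h>
    #include <stdint.h>

    extern bool ir_hand_detected(void);                    /* hypothetical IR read   */
    extern void rgb_set(uint8_t r, uint8_t g, uint8_t b);  /* hypothetical LED drive */
    extern void delay_ms(uint16_t ms);                     /* hypothetical delay     */

    static const uint8_t palette[][3] = {
        {255, 0, 0}, {0, 255, 0}, {0, 0, 255},
        {255, 255, 0}, {0, 255, 255}, {255, 0, 255},
    };

    void trippy_loop(void)
    {
        uint8_t i = 0;
        for (;;) {
            if (ir_hand_detected()) {
                i = (uint8_t)((i + 1) % (sizeof palette / sizeof palette[0]));
                rgb_set(palette[i][0], palette[i][1], palette[i][2]);
                delay_ms(250);   /* crude hold-off: one wave, one color step */
            }
        }
    }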

Below is a completed JeeNode v6 Kit Scott built one weekend.

My current project is to add some wireless sensors around my home, including temperature sensors and a homebrew security system to monitor when doors get opened using 915-MHz JeeNodes. The JeeNode is a microcontroller paired with a low-power RF transceiver, which is useful for home-automation projects and sensor networks. Of course the central server for collating and reporting sensor data will be a MinnowBoard.
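
A rough C sketch of what one node’s report might look like. Real JeeNode projects typically build on JeeLib’s RF12 driver; the rf_send() helper and the packet layout here are hypothetical.

    #include <stdint.h>

    struct sensor_report {
        uint8_t node_id;    /* which JeeNode sent this */
        uint8_t door_open;  /* 1 if the monitored door is open */
        int16_t temp_c10;   /* temperature, tenths of a degree Celsius */
    };

    extern void rf_send(const void *buf, uint8_t len);  /* hypothetical radio call */

    void report(uint8_t id, uint8_t door, int16_t t)
    {
        struct sensor_report r = { id, door, t };
        rf_send(&r, sizeof r);  /* the central MinnowBoard server collates these */
    }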

NAN: Tell us about your involvement in the Portland, OR, open-source developer community.

SCOTT: Portland has an amazing community of open-source developers. There is an especially strong community of web application developers, but more people are hacking on hardware nowadays, too. It’s a very social community and we have multiple nights per week where you can show up at a bar and hack on things with people.

This photo was taken in the Open Source Bridge hacker lounge, where people socialize and collaborate on projects. Here someone brought a brainwave-control game. The players are wearing electroencephalography (EEG) readers, which are strapped to their heads. The goal of the game is to use biofeedback to move the floating ball to your opponent’s side of the board.

I’d say it’s a novelty if I weren’t so used to it already—walking into a bar or coffee shop and joining a cluster of friendly people, all with their laptops open. We have coworking spaces, such as Collective Agency, and hackerspaces, such as BrainSilo and Flux (a hackerspace focused on creating a welcoming space for women).

Take a look at Calagator to catch a glimpse of all the open-source and entrepreneurial activity going on in Portland. There are often multiple events going on every night of the week. Calagator itself is a Ruby on Rails application that was frequently developed at the bar gatherings I referred to earlier. We also have technical conferences ranging from the professional OSCON to the more grassroots and intimate Open Source Bridge.

I would unequivocally state that moving to Portland was one of the best things I did for developing a career working with open-source technologies, and in my case, on open-source projects.

System Safety Assessment

System safety assessment provides a standard, generic method to identify, classify, and mitigate hazards. It is an extension of failure mode effects and criticality analysis and fault-tree analysis that is necessary for embedded controller specification.

System safety assessment was originally called “system hazard analysis.” The name change was probably due to system safety assessment’s more positive-sounding connotation.

Columnist George Novacek (gnovacek@nexicom.net), who wrote this article published in Circuit Cellar’s January 2014 issue, is a professional engineer with a degree in Cybernetics and Closed-Loop Control. Now retired, he was most recently president of a multinational manufacturer of embedded control systems for aerospace applications. George wrote 26 feature articles for Circuit Cellar between 1999 and 2004.

I participated in design reviews where failure effect classifications (e.g., hazardous, catastrophic, etc.) had to be expunged from our engineering presentations and replaced with something more positive (e.g., “issues” instead of “problems”), lest we risk the wrath of buyers and program managers.

System safety assessment is in many ways similar to a failure mode effects and criticality analysis (FMECA) and fault-tree analysis (FTA), which I described in “Failure Mode and Criticality Analysis” (Circuit Cellar 270, 2013). However, with safety assessment, all possible system faults—including human error, electrical and mechanical subsystems’ faults, materials, and even manuals—should be analyzed. The impact of their faults and errors on the system safety must also be considered. The system hazard analysis then becomes a basis for subsystems’ specifications.

Fault Identification

Performing FMECA and FTA on your subsystem ensures all its potential faults are detected and identified. The faults’ signatures can be stored in a nonvolatile memory or communicated to a display console, but you cannot choose how the controller should respond to any one of those faults. You need the system hazard analysis to tell you what corrective action to take. The subsystem may have to revert to manual control, switch to another control channel, or enter a degraded performance mode. If you are not the system designer, you have little or no visibility into the faults’ potential impact on system safety.
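
A minimal C sketch of that division of labor: the subsystem detects and logs the fault, while the response comes from the system hazard analysis, captured here as a lookup table. All fault IDs, actions, and helper functions are hypothetical.

    #include <stdint.h>

    typedef enum { ACT_CONTINUE, ACT_DEGRADED, ACT_SWITCH_CHANNEL, ACT_MANUAL } action_t;

    struct fault_entry {
        uint16_t fault_id;  /* from the FMECA/FTA fault identification */
        action_t action;    /* corrective action per the hazard analysis */
    };

    static const struct fault_entry fault_table[] = {
        { 0x0101, ACT_DEGRADED },        /* e.g., one redundant sensor lost */
        { 0x0204, ACT_SWITCH_CHANNEL },  /* e.g., processing channel fault  */
        { 0x0305, ACT_MANUAL },          /* e.g., actuator driver fault     */
    };

    extern void log_fault_nv(uint16_t id);  /* hypothetical nonvolatile logger */
    extern void take_action(action_t a);    /* hypothetical mode switch */

    void handle_fault(uint16_t id)
    {
        action_t a = ACT_MANUAL;  /* safe default for unlisted faults */
        for (unsigned i = 0; i < sizeof fault_table / sizeof fault_table[0]; i++) {
            if (fault_table[i].fault_id == id) {
                a = fault_table[i].action;
                break;
            }
        }
        log_fault_nv(id);  /* store the fault signature */
        take_action(a);
    }
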
For example, an automobile consists of many subsystems (e.g., propulsion, steering, braking, entertainment, etc.). The propulsion subsystem comprises engine, transmission, fuel delivery, and possibly other subsystems. A part of the engine subsystem may include a full-authority digital engine controller (FADEC).

Do you have an electrical engineering tip you’d like to share? Send it to us here and we may publish it as part of our ongoing EE Tips series.

Engine controllers were originally mainly mechanical devices, but with the arrival of the microprocessor, they have become highly sophisticated electronic controllers. Currently, most engines—whether aircraft, marine, automotive, or utility (e.g., portable electrical generator turbines)—are controlled by some sort of a FADEC to achieve the best performance and safety. A FADEC monitors the engine performance and controls the fuel flow via servomotor valves or stepper motors in response to the commanded thrust plus numerous operating conditions (e.g., atmospheric and internal pressures, external and internal engine temperatures in several locations, speed, load, etc.).
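
As a greatly reduced C sketch of such a control loop: sample the commanded thrust and the operating conditions, compute a fuel-flow command, and drive the metering valve. Every function here is a hypothetical placeholder; a real FADEC adds redundancy, limiting, and extensive fault accommodation.

    #define N_CONDITIONS 8  /* pressures, temperatures, speed, load, ... */

    extern double read_thrust_command(void);                     /* hypothetical */
    extern void   read_operating_conditions(double *c, int n);   /* hypothetical */
    extern double fuel_schedule(double demand, const double *c); /* hypothetical */
    extern void   drive_fuel_valve(double flow);    /* servo valve or stepper */

    void fadec_step(void)  /* run at a fixed control rate */
    {
        double cond[N_CONDITIONS];
        double demand = read_thrust_command();

        read_operating_conditions(cond, N_CONDITIONS);
        drive_fuel_valve(fuel_schedule(demand, cond));
    }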

The safety assessment mostly depends on where and how essentially identical systems are being used. A car’s engine failure, for example, may be nothing more than a nuisance with little safety impact, while an aircraft engine failure could be catastrophic. Conversely, an aircraft nosewheel steer-by-wire can be automatically disconnected upon a fault. And, with a little increase of the pilots’ workload, it may be substituted by differential braking or thrust to control the plane on the ground. A similar failure of an automotive steer-by-wire system could be catastrophic for a car barreling down the freeway at 70 mph.

Analysis

System safety analysis comprises the following steps: identify and classify potential hazards and the associated avoidance requirements; translate the safety requirements into engineering requirements; provide design assessment and trade-off support to the ongoing design; assess the design’s compliance with the requirements and document the findings; direct and monitor specialized safety testing; and monitor and review test and field issues for safety trends.

The first step in hazard analysis is to identify and classify all the potential system failures. FMECA and FTA provide the necessary data. Table 1 shows an example and explains how the failure class is determined.

TABLE 1
This table shows the identification and severity classification of all potential system-level failures.
Eliminated: No safety impact.
Negligible: Does not significantly reduce system safety. Required actions are within the operator’s capabilities.
Marginal: Reduces the capability of the system or operators to cope with adverse operating conditions. Can cause major illness, injury, or property damage.
Critical: Significantly reduces the capability of the system or the operator’s ability to cope with adverse conditions to the extent of causing serious or fatal injury to several people.
Catastrophic: Total loss of system control resulting in equipment loss and/or multiple fatalities.

The next step determines each system-level failure’s frequency of occurrence (see Table 2). This data comes from the failure rates calculated in the course of the reliability prediction, which I covered in my two-part article “Product Reliability” (Circuit Cellar 268–269, 2012) and in “Quality and Reliability in Design” (Circuit Cellar 272, 2013).

TABLE 2
Use this information to determine the likelihood of each individual system-level failure.
Frequent: Probability of occurrence per operation is equal to or greater than 1 × 10^-3
Probable: Probability of occurrence per operation is less than 1 × 10^-3 but greater than 1 × 10^-5
Occasional: Probability of occurrence per operation is less than 1 × 10^-5 but greater than 1 × 10^-7
Remote: Probability of occurrence per operation is less than 1 × 10^-7 but greater than 1 × 10^-9
Improbable: Probability of occurrence per operation is less than 1 × 10^-9

Based on the two tables, the predictive risk assessment matrix for every hazardous situation is created (see Table 3). The matrix is a composite of severity and likelihood, which can subsequently be classified as low, medium, serious, or high risk. It is the system designer’s responsibility to evaluate the potential risk—usually with regard to some regulatory requirements—and to specify the maximum hazard levels acceptable for every subsystem. The subsystems’ developers must comply with and satisfy their respective specifications. Electronic controllers in safety-critical applications must present a low risk in the event of a subsystem fault.

TABLE 3
The risk assessment matrix is based on information from Table 1 and Table 2.
Probability / Severity Catastrophic (1) Critical (2) Marginal (3) Negligible (4)
Frequent (A) High High Serious Medium
Probable (B) High High Serious Medium
Occasional (C) High Serious Medium Low
Remote (D) Serious Medium Medium Low
Improbable (E) Medium Medium Medium Low
Eliminated (F) Eliminated
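
If the matrix ends up in controller firmware, it reduces to a simple lookup. Here is a C sketch mirroring Table 3; the enums and function are illustrative, not a standard API, and the “Eliminated” row needs no lookup because eliminated hazards carry no risk.

    typedef enum { SEV_CATASTROPHIC, SEV_CRITICAL, SEV_MARGINAL, SEV_NEGLIGIBLE } severity_t;
    typedef enum { P_FREQUENT, P_PROBABLE, P_OCCASIONAL, P_REMOTE, P_IMPROBABLE } likelihood_t;
    typedef enum { RISK_LOW, RISK_MEDIUM, RISK_SERIOUS, RISK_HIGH } risk_t;

    static const risk_t risk_matrix[5][4] = {
        /*                 Cat.          Crit.         Marg.         Negl.       */
        /* Frequent   */ { RISK_HIGH,    RISK_HIGH,    RISK_SERIOUS, RISK_MEDIUM },
        /* Probable   */ { RISK_HIGH,    RISK_HIGH,    RISK_SERIOUS, RISK_MEDIUM },
        /* Occasional */ { RISK_HIGH,    RISK_SERIOUS, RISK_MEDIUM,  RISK_LOW    },
        /* Remote     */ { RISK_SERIOUS, RISK_MEDIUM,  RISK_MEDIUM,  RISK_LOW    },
        /* Improbable */ { RISK_MEDIUM,  RISK_MEDIUM,  RISK_MEDIUM,  RISK_LOW    },
    };

    risk_t classify(likelihood_t p, severity_t s)
    {
        return risk_matrix[p][s];  /* composite risk per Table 3 */
    }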

The system safety assessment includes both software and hardware. For aircraft systems, the required risk level determines the development and quality assurance processes as anchored in DO-178 Software Considerations in Airborne Systems and Equipment Certification and DO-254 Design Assurance Guidance for Airborne Electronic Hardware.

Some non-aerospace industries also use these two standards; others may have their own. Figure 1 shows a typical system development process to achieve system safety.

The common automobile power steering is, by design, inherently low risk, as it continues to steer even if the hydraulics fail. Similarly, some aircraft controls still use old-fashioned cables but, like the car steering, with power augmentation. If the power fails, you just need more muscle. This is not the case with the more prevalent drive- or fly-by-wire systems.

FIGURE 1: The actions in this system-development process help ensure system safety.

Redundancy

How can the risk be mitigated to at least a 10^-9 probability for catastrophic events? The answer is redundancy. A well-designed electronic control channel can achieve about a 10^-5 probability of a single fault. That’s it. However, the FTA shows that by ANDing two such processing channels, the resulting failure probability will decrease to 10^-10, thus mitigating the risk to an acceptable level. An event with a 10^-9 probability of occurring is, for many systems, acceptable as just about “never happening,” but there are requirements for 10^-14 or even lower probability. Increasing redundancy will enable you to satisfy the specification.
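
In fault-tree terms, ANDing independent channels multiplies their individual failure probabilities, which is where the figures above come from (assuming the channel faults really are independent):

    P_{\text{dual}} = p^{2} = (10^{-5})^{2} = 10^{-10},
    \qquad
    P_{n\ \text{channels}} = p^{n}

By the same arithmetic, a third channel would reach 10^-15, covering even a 10^-14 requirement.
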
Once I saw a controller comprising three independent redundant computers, with each computer also being triple redundant. Increasing safety by redundancy is why there are at least two engines on every commercial passenger-carrying aircraft, two pilots, two independent hydraulic systems, two or more redundant controllers, power supplies, and so forth.

Human Engineering

Human engineering, to use military terminology, is no less important for safety, yet it is sometimes not given sufficient attention. MIL-STD-1472F, the US Department of Defense’s Design Criteria Standard: Human Engineering, spells out many requirements and design constraints that make equipment operation and handling safe. This applies to everything, not just electrical devices.

For example, it defines the minimum size of controls if they may be operated with gloves, the maximum weight of equipment to be located above a certain height, the connectors’ location, and so forth. In my view, every engineer should look at this interesting standard.

Non-military equipment that requires some type of certification (e.g., most electrical appliances) is usually fine in terms of human engineering. Although there may not be a specific standard guiding its design in this respect, experienced certificating examiners will point out many shortcomings. But there are more than enough fancy and expensive products on the market that make you wonder whether the designer ever tried to use the product.

By putting a little thought beyond just the functional design, you can make your product attractive, easy to operate, and safe. It may be as simple as asking a few people who are not involved with your design to use the product before you release it to production.

Client Profile: Digi International, Inc.

Contact: Elizabeth Presson
elizabeth.presson@digi.com

Featured Product: The XBee product family (www.digi.com/xbee) is a series of modular products that make adding wireless technology easy and cost-effective. Whether you need a ZigBee module or a fast multipoint solution, 2.4 GHz or long-range 900 MHz—there’s an XBee to meet your specific requirements.

Digi International XBee Cloud Kit

Product information: Digi now offers the XBee Wi-Fi Cloud Kit (www.digi.com/xbeewificloudkit) for those who want to try the XBee Wi-Fi (XB2B-WFUT-001) with seamless cloud connectivity. The Cloud Kit brings the Internet of Things (IoT) to the popular XBee platform. Built around Digi’s new XBee Wi-Fi module, which fully integrates into the Device Cloud by Etherios, the kit is a simple way for anyone with an interest in M2M and the IoT to build a hardware prototype and integrate it into an Internet-based application. This kit is suitable for electronics engineers, software designers, educators, and innovators.

Exclusive Offer: The XBee Wi-Fi Cloud Kit includes an XBee Wi-Fi module; a development board with a variety of sensors and actuators; loose electronic prototyping parts to make circuits of your own; a free subscription to Device Cloud; fully customizable widgets to monitor and control connected devices; an open-source application that enables two-way communication and control with the development board over the Internet; and cables, accessories, and everything needed to connect to the web. The Cloud Kit costs $149.

Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids. —Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive.
We have started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics software for retail. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go to, or whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You wrote several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and System C Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.
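
For example, the RC charging law that governs those delays is as simple as physics gets; the time to reach the 50% logic threshold gives the familiar 0.69RC figure:

    V(t) = V_{DD}\left(1 - e^{-t/RC}\right),
    \qquad
    t_{50\%} = RC \ln 2 \approx 0.69\,RC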

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grijalva of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukhopadhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security. Internet-enabled devices enable smart-grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentation is a very warm, happy experience. Everyone is happy. They are a time to celebrate the field and the many friends I’ve made through my work.