Reflections on Software Development

Present-day equipment relies on increasingly complex software, creating ever-greater demand for software quality and security. The two attributes, while similar in their effects, are different. Quality software is not necessarily secure, and secure software is not necessarily of good quality. Safe software is both high in quality and secure. That means the software does what it is supposed to do: it prevents hackers and other external causes from modifying it, and should it fail, it does so in a safe, predictable way. Software verification and validation (V&V) reduces issues attributable to defects, that is, to poor quality, but does not currently address misbehavior caused by external effects.

Poor software quality can result in huge material losses and even loss of life. Consider some notorious examples from the past. An F-22 Raptor flight-control error caused the $150 million aircraft to be destroyed. An RAF Chinook engine-controller fault caused a helicopter crash with 29 fatalities. The Therac-25 radiotherapy machine gave patients massive radiation overdoses, causing the deaths of two people. The failure of a General Electric power-grid monitoring system resulted in a 48-hour blackout across eight US states and one Canadian province. Toyota's electronic throttle controller was blamed for the deaths of 89 people.

Clearly, software quality is paramount, yet too often it takes a back seat to time to market and development cost. One essential attribute of quality software is traceability. This means that every requirement can be traced via documentation from the specification down to the particular lines of code and, vice versa, every line of code can be traced up to the specification. The documentation process (not including testing and integration) is illustrated in Figure 1.

FIGURE 1: Simplified software design process documentation. Testing, verification and validation (V&V) and control documents are not shown.

The terminology is that of the DO-178 standard, which is mandatory for aerospace and military software. (Similarly, hardware development is guided by DO-254.) Other software standards may use different terminology, but the intentions are the same. DO-178 defines a document-driven process, for which many tools are available to the designer. Once the hardware-software partitioning has been established, the software requirements define the software architecture and the derived requirements. Derived requirements are those that the customer doesn't include in the specification and might not even be aware of. For instance, turning on an indicator light may take one sentence in the specification, but the decomposition of this simple task might lead to many derived requirements.
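To make the idea concrete, here is a minimal sketch in Python of how one specification requirement can decompose into derived requirements while keeping bidirectional traceability. All the requirement IDs, file names, and function names here are hypothetical, invented purely for illustration:

```python
# Hypothetical traceability data: one customer requirement ("turn on the
# indicator light") decomposes into several derived requirements, each of
# which maps to the code unit that implements it.
spec = {
    "SRS-100": "Illuminate the fault indicator when a fault is latched.",
}
derived = {
    "DRV-100.1": ("SRS-100", "Debounce the fault signal for 50 ms."),
    "DRV-100.2": ("SRS-100", "Drive the LED GPIO at the required current."),
    "DRV-100.3": ("SRS-100", "Verify the LED driver output via read-back."),
}
code_trace = {
    "fault.c:debounce_fault": "DRV-100.1",
    "led.c:led_on": "DRV-100.2",
    "led.c:led_verify": "DRV-100.3",
}

def trace_up(code_unit):
    """Trace a code unit up to its specification requirement."""
    drv = code_trace[code_unit]
    return derived[drv][0]

def trace_down(spec_id):
    """Trace a specification requirement down to implementing code."""
    drvs = [d for d, (s, _) in derived.items() if s == spec_id]
    return sorted(c for c, d in code_trace.items() if d in drvs)

print(trace_up("led.c:led_on"))   # SRS-100
print(trace_down("SRS-100"))
```

Real tool suites maintain exactly this kind of mapping, at full scale and under configuration control, which is why automation pays for itself.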

Safety-Instrumented Functions

While requirements are being developed, test cases must be defined for each and every one of those requirements. Additionally, to increase system safety, so-called Safety-Instrumented Functions (SIFs) should be considered. SIFs are monitors that cause the system to shut down safely if its performance fails to meet previously defined safety limits. This is typically accomplished by redundancy in hardware, software, or both. If you neglect to address such issues at an early development stage, you might end up with an unsafe system and have to redo a lot of work later.
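As an illustration only, a SIF can be thought of as an independent monitor that latches into a safe state when a measured value leaves its allowed band. The limits, the latching behavior, and the safe-state action below are hypothetical choices of mine, not taken from any standard:

```python
class SafetyMonitor:
    """Hypothetical SIF sketch: trips to a safe state when a monitored
    value leaves the previously defined safety limits."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.tripped = False

    def check(self, value):
        # Once tripped, stay in the safe state until a deliberate reset.
        if not self.tripped and not (self.low <= value <= self.high):
            self.tripped = True
            self.enter_safe_state()
        return self.tripped

    def enter_safe_state(self):
        # In a real system: cut actuator power, engage brakes, etc.
        print("SIF trip: shutting down safely")

mon = SafetyMonitor(low=0.0, high=100.0)   # e.g., temperature limits in C
for reading in (25.0, 80.0, 140.0, 60.0):
    mon.check(reading)
print(mon.tripped)   # True: the 140.0 reading latched the trip
```

The latching matters: a SIF that un-trips as soon as the reading drifts back into range would let an intermittent fault keep the system running.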

Quality design also involves bureaucratic chores. Version control and a configuration index must be maintained. The configuration index comprises the list of modules, and their versions, to be compiled for specific versions of the product under development. Without it, the configuration can be lost, and a great deal of development effort with it.

Configuration control and traceability are not just best engineering practice. They should be mandated whenever software is being developed. Some developers believe that software qualification to a specific standard is required only by the aerospace and military industries. Worse, some commercial software developers still subscribe to the so-called iron triangle: “Get to market fast with all the features planned and a high level of quality. But pick only two.”

Engineers in safety-critical industries (such as medical, nuclear, automotive, and manufacturing) work with methods similar to DO-178 to ensure their software performs as expected. Large original equipment manufacturers (OEMs) now demand adherence to software standards: IEC 61508 for industrial controls, IEC 62304 for medical device software, ISO 26262 for automotive, and so forth. The reason is simple: unqualified software can lead to costly product returns and expensive lawsuits.

Software qualification is highly labor intensive and very demanding in terms of resources, time, and money. Luckily, its cost has been coming down thanks to a plethora of automated tools now being offered. Those tools are not inexpensive, but they do pay for themselves quickly. Considering the risk of lawsuits, recalls, brand damage, and other associated costs of software failure, no company can really afford not to go through a qualification process.

Testing

As with hardware, quality must be built into the software, and this means following strict process rules. You can’t expect to test quality into the product at the end. Some companies have tried and the results have been the infamous failures noted above.
Testing embedded controllers often presents a challenge because you need the final hardware before it exists. Nevertheless, if you give testing due consideration as you prepare the software requirements, much can be accomplished by working in virtual or simulated environments. LDRA (www.ldra.com) offers one capable tool suite for this task.
Numerous methods exist for software testing. For example, dynamic code analysis examines the program during its execution, while static analysis looks for vulnerabilities and programming errors without running the code. It has been shown mathematically that 100% test coverage is impossible to achieve. But even if it were, 35% to 40% of defects result from missing logic paths and another 40% from the execution of unique combinations of logic paths. Such defects wouldn't be caught by testing, but they can be mitigated by SIFs.

Much embedded code is still developed in-house (see Figure 2). Is it possible for companies to improve programmers' efficiency in this most labor-intensive task? Once again, the answer lies in automation. Nowadays, many tools come as complete suites providing various analyses, code coverage, coding standards compliance, requirements traceability, code visualization, and so forth. These tools are commonplace among developers of avionics and military software, but they are used less often by commercial developers because of their perceived high cost and steep learning curve.

FIGURE 2: Distribution of embedded software sources. Most is still developed in-house.

With the growth of cloud computing and the Internet of Things (IoT), software security is gaining unprecedented importance. Some security measures can be incorporated in hardware, others in software. Data encryption and password protection are vital parts of it. Unfortunately, some developers still do not treat security as seriously as they should. Security experts warn that numerous IoT developers have failed to learn the lessons of the past and that a “big IoT hack” in the near future is inevitable.

Security Improvements

On a regular basis, the media report on security breaches (e.g., governmental organization hacks, bank hacks, and automobile hacks). What can be done to improve security?

There are several techniques—such as Common Weakness Enumeration (CWE)—that can help improve our chances. However, securing software is likely a far more daunting task than achieving comprehensive V&V test coverage. One successful hack proves that security is weak. But how many unsuccessful hacks by test engineers are needed to establish that security is adequate? Eventually a manager, probably relying on some statistics, will have to decide that enough effort has been spent and the software can be released. Different types of systems require different levels of security, but how is this to be determined? And what about the human factor? Not every test engineer has the necessary talent for code breaking.

History teaches us that no matter how good a lock, a cipher, or a password, someone has eventually broken it. Several security developers have in the past challenged the public to break their “unbreakable” code for a reward, only to see it broken within hours. How responsible is it to keep sensitive data and systems access available in cyberspace just because it is convenient, inexpensive, or fashionable? Have the probability and the consequences of a potential breach always been duly considered?

I have used cloud-based tools, such as the excellent mbed, but would not dream of using them for a sensitive design. I don't store data in the cloud, nor would I consider IoT for any system whose security was vital. I don't believe cyberspace can provide sufficient security for many systems at this time. Ultimately, the responsibility for security is ours. We must judge whether using IoT or the cloud for a given product would be responsible. At present, I see little evidence to convince me that the industry is adequately serious about security. It will surely improve with time, but until it does, I am not about to take unnecessary risks.


George Novacek is a professional engineer with a degree in Cybernetics and Closed-Loop Control. Now retired, he was most recently president of a multinational manufacturer of embedded control systems for aerospace applications. George wrote 26 feature articles for Circuit Cellar between 1999 and 2004. Contact him at gnovacek@nexicom.net with “Circuit Cellar” in the subject line.

Debugging Embedded Systems with Minimal Resources

Debugging an embedded system can be difficult when you’re dealing with either a simple system with few pins or a complex system with nearly every pin in use. Stuart Ball provides some tips to make debugging such systems a little easier.

Debugging a microcontroller system can be difficult. Things don’t work right and it often isn’t even clear why. Was something initialized wrong? Is it a timing issue? Is there conflicting use of shared resources?

Debugging is more complicated when there are limited resources. If all the processor pins are used, what do you connect to? How do you get debug information out of the firmware so you can see what is going on?

This article isn’t about debugging when you have Ethernet, USB, and Bluetooth interfaces available, or when you have a full-speed emulator. This is about debugging when there aren’t many resources available—simple systems with few pins, or more complex systems with nearly every pin already used for something.

Figure 1: This is the schematic for a serial port RS-232 driver. It’s a standardized circuit that plugs into a header on the board to be debugged.

Postmortem vs. Real-Time Debugging

There are two general ways of debugging an embedded system. One is postmortem, looking at the state of the system after it has failed or after it has stopped at a breakpoint. The other is real-time, debugging while the system is running. Each has its own place and its own set of challenges.

Generally, the two methods use different debug techniques. A postmortem debug happens when, say, a motor has stalled, the software can't recover, and you have no idea why. You want to know the system's state and how it got there. Setting breakpoints is a postmortem technique: you stop the system and look at its static state after a particular point in the code is reached.

Real-time or active debugging is more appropriate for looking at timing issues, missed interrupts, cumulative latency issues, and cases where the system just occasionally does something strange but doesn't actually stop. Real-time debug can tell you how the system got into the state that you are trying to analyze with postmortem methods. If you can capture enough information while the system is running, you have a chance to turn a real-time problem into a simpler static postmortem analysis.
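One common way to capture enough information while the system runs is a small in-memory trace ring buffer that you dump after the failure. This Python sketch illustrates the idea; in firmware it would typically be a fixed-size C array written from interrupt handlers, and the event names here are my own examples:

```python
from collections import deque
import time

class TraceBuffer:
    """Fixed-size ring buffer of timestamped debug events. Old entries
    are discarded automatically, so logging never blocks or overflows."""
    def __init__(self, size=64):
        self.events = deque(maxlen=size)

    def log(self, tag, value=None):
        # Cheap enough to call from time-critical code paths.
        self.events.append((time.monotonic(), tag, value))

    def dump(self):
        # Called after the failure: a postmortem view of recent history.
        return list(self.events)

trace = TraceBuffer(size=4)
for i in range(6):
    trace.log("sensor_read", i)
print([v for _, tag, v in trace.dump()])   # [2, 3, 4, 5]: oldest dropped
```

Because the buffer always holds the most recent events, the dump shows what led up to the failure, turning a real-time mystery into a static analysis.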

This article appears in Circuit Cellar 312, July 2016.

Universal Debug Solution

An asynchronous serial port may be the most common debug tool used in the embedded world. Most microcontrollers have at least one serial port built in. The serial port has its limitations: its speed is limited, and it requires level translation to connect to the RS-232 voltage levels of a PC.

In many cases, you might not want to put the RS-232 driver on your board. You don’t want to use the space required by either the IC or the RS-232 connector, especially for something that is only used while debugging. One way I’ve solved this problem is shown in Figure 1, which depicts just a Maxim Integrated (or Texas Instruments) MAX3232 RS-232 driver IC connected to a DE9 connector. The other side connects to a four-pin header. This is connected via a cable to the embedded system to be tested. This allows the embedded system to have just the four-pin connector wired to the microcontroller serial port pins, power, and ground.

You plug in the external circuit when you need to debug and unplug it when you are done. There is nothing special about this circuit; it is exactly the same as you might put on your microcontroller board, except that you don't need the space on your board for it. The circuit takes power from the microcontroller board via pins 1 and 4 of the four-pin Molex connector. The connector indicated is polarized so you can't plug it in backward. I've standardized on this in my embedded systems, at least where the serial port is used for debug or download.

Although I used a connector with 0.1” centers on the interface board, there is nothing to prevent you from using a 2 mm or 0.05” connector, or even a row of pads at the edge of the board being debugged. You just have to make a cable that has the Molex connector at one end and whatever you need to match your embedded board at the other end. You can keep the driver board in your toolbox, put the connector on your embedded system boards, and you have it when you need it.

You can house the board in a plastic project box. In one case, I built one on a narrow piece of perforated project board, and covered the entire thing in heat shrink tubing. It has the right-angle Molex connector on one end, and a short cable with the RS-232 connector on the other end. I keep that one in my desk drawer at work.


The Perfect PCB Prototype

Interested in constructing perfect PCB prototypes? Richard Haendel has the solution for you. In this article, he explains how five simple steps—print, mount, punch, fit, and evaluate—can save you a lot of time and money.

The following article first appeared in Circuit Cellar 156.


Who designs and builds your prototype circuit boards? The other department? Oh. Well, in that case, nice seeing you. Just flip past this article and enjoy the rest of the magazine.

On the other hand, if you’re a do-it-yourself engineer like me, then perhaps my technique for prototyping prototypes will interest you (see Photo 1). It’s so easy, cheap, and obvious, I have trouble believing that no one else has done it before. If you have, please let me know. I’d love to compare notes. The entire process can be described in five words: print, mount, punch, fit, and evaluate.

Photo 1: It doesn’t get any easier than this (or cheaper). Just remember to print, mount, punch, stuff, and evaluate.

PRINT

Your printer must be able to print a full-scale, moderately accurate representation of your PCB layout. I say “moderately accurate,” because, after all, a 10% error on a 0.4″-spaced resistor is only 0.04″. That’s close enough for most through-hole designs. Surface mounting, however, can be a problem. But because I don’t normally do surface mounting, it’s not a problem for me.

I use two printers for development: a color ink-jet and a black and white laser-jet. Both are fairly old, but they still have more than enough accuracy for this purpose. The laser-jet is probably a little better, but not by much.

Your printed layout must show (at minimum) the holes and component layout. You may or may not need to see the traces; it depends on what you’re hoping to accomplish. The traces are superfluous for test fitting (e.g., to make sure that components don’t touch each other); however, if you’re building a full-scale concept model, you’ll need as much detail as is practical. In fact, with a little more effort, you could print the top traces on one sheet and the bottom traces on another, glue them to the foam board on opposite sides (taking care to line up the holes, of course), and make yourself a full-scale PCB model. Cool.

MOUNT

Trim the excess white space from the sheet containing your printed image, because it will just get in the way. Next, cut a piece of foam board slightly larger than your layout. A utility knife and metal ruler work well for this. Peel the backing from the foam board's adhesive side; of course, if you don't have the self-adhesive kind, simply apply dry glue (from a glue stick) to either the board or the paper. After that, carefully position one corner of your image on the foam board and smooth it down. Rub gently but firmly with a soft cloth or paper towel to permanently “seat” the image.

If you get air bubbles or wrinkles, throw it away and start over. Remember, your pattern must be accurate. You can probably make a new one faster than you can fix a damaged one. A little practice goes a long way toward achieving perfect results.

PUNCH

Using a pushpin (or a similar instrument), carefully punch your holes. As you can see in Photo 2, I use metallic pushpins with longer-than-usual shafts. Naturally, the shorter plastic pushpins will work just as well. Thumbtacks, however, are not a good choice; they’re pretty rough on the fingernails.

Photo 2: A small pin is my favorite tool for punching holes in the foam board.

Note that this stage can be tedious, especially if you have a large board with many holes. Take your time. The holes should be centered as accurately as possible. Also, don’t push the pin all the way through; it’s merely intended to puncture the paper front so the component’s pins can penetrate the foam and have it “grab” them. In other words, you want a snug fit so the pieces don’t (easily) fall off the board.

That’s how it works for IC sockets and connectors with short leads (i.e., less than the thickness of the board). However, resistors and other parts with longer leads are a different matter. In this case, you must either trim the leads—which is fine if you’re not planning to reuse the component—or extend the hole to the backside with something like a map pin. That’s what I usually do.

FIT

That’s right. Simply fit (or stuff) your components as you would a real circuit board. Components with short leads should be easy to fit; however, those with longer leads may need persuading. Simply insert the part, grab one lead close to the board’s surface with needle-nose pliers, and gently (but firmly) coax it through the hole. Sometimes this can be a pain, especially with small-gauge component leads (e.g., ceramic capacitors). You may need to enlarge the hole from the front or backside. Remember: practice, practice, practice.

EVALUATE

In other words, use it for whatever purpose you need. Most of the time, I make these models just to test my board design and confirm that all parts will fit before committing to a manufactured prototype. After that, it’s trash. If the design is significant (pronounced “expensive to produce”), then I may make others until I’m confident of perfection.

I must confess, though, most of my models are nowhere near as neat and attractive as the one pictured in this article. Frequently, the images are slapped on a piece of scrap foam, tested, and tossed within 5 min. or less.

SO, WHAT’S IT COST?

Not much. Just the other day, I purchased a 20″ × 30″, 3/16″ thick sheet of white self-adhesive foam board at a local hobby store for $4.99. (The nonadhesive type was about $1 less.) Therefore, the cost is $4.99 divided by 600 square inches, or a mere $0.00832 per square inch—that’s less than a penny. At that rate, this board cost only $0.07.
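The arithmetic checks out; as a quick sanity check in Python (the implied board area in the last step is my own inference, not a figure from the article):

```python
# Cost of the self-adhesive foam board, as given in the article.
sheet_price = 4.99            # dollars for a 20" x 30" sheet
sheet_area = 20 * 30          # 600 square inches

cost_per_sq_in = sheet_price / sheet_area
print(round(cost_per_sq_in, 5))   # 0.00832 dollars: less than a penny

# A board costing $0.07 at this rate works out to roughly 8.4 square
# inches, e.g. about a 2.4" x 3.5" layout (my inference).
board_area = 0.07 / cost_per_sq_in
print(round(board_area, 1))       # 8.4
```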

IS IT WORTH IT?

You bet! I’ve caught numerous board design and layout errors with this technique. I’ve also learned that legends on the silk-screen layer don’t always match the physical part as closely as you may expect. This is good to know when you’re tight on board space and need to fudge a little.

Photo 3: Notice that D1 will not actually touch J2, as the PCB layout program’s silkscreen outline indicates.

I was able to crowd D1 between J2 and J3, because J2 is 0.08″ smaller than its silkscreen outline (see Photo 3). So, even though D1 appears to touch J2, there’s actually 0.04″ between them, which is more than enough for my design.

So, did I lie? Is this not as simple as can be? And cheap! Try it yourself and see.—By Richard Haendel (Circuit Cellar 156)

Minimum Mass Waveform Capture

I can capture repetitive waveforms at 1 Msps using a microcontroller’s on-chip PWM and comparator. The impetus for developing this technique came from my own need to capture repetitive waveforms using the least expensive and lowest part count means possible. I wanted to be able to view the waveforms on an LCD dedicated to the purpose or upload the waveform to a computer for manipulation on a spreadsheet. This waveform capture method adheres to the “minimum mass” product design concept: it doesn’t use anything that is not absolutely essential to obtaining the needed function.

Implementations can be cheap enough to allow capture and analysis in many applications that otherwise could not justify the cost. Such applications include calculating the RMS values or harmonic content of waveforms for power management and equipment maintenance, self-testing audio frequency circuits, the analysis of pulse response for self-tuning servos, signal signature analysis, and remote diagnostics and data gathering.

The approaches using on-chip A/D converters on AVR and PIC controllers reach sample rates of up to nearly 60 kHz. Exotic and pricey high-speed controllers top out around 100 kHz. Such a sampling rate is not really high enough for the sort of applications I had in mind: encoded data, radio control signals, A/D converter waveforms, checking the dynamic range of amplifiers and capturing audio frequency waveforms for filtering, and power calculations. I realized that the comparators in AVR and PIC devices have fast response times (several hundred nanoseconds) and that the pulse width modulation (PWM) circuit could be made fairly responsive. I just needed some way to quickly combine them to sample analog values.

Eventually it became apparent that repetitive sampling was the only way to get high enough voltage and temporal sampling resolution using only these on-chip components. Rather than trying to sample and digitize the waveform in real time as it comes in, this method finds out a little bit about the waveform using the relatively high-speed comparator every time the waveform is repeated; it builds a more detailed picture with each repetition by changing the relatively low-speed PWM voltage each time.


THE METHOD

To capture a waveform, the PWM D/A converter (PWM DAC) is set to its maximum output voltage. Then, using timing loops to generate regularly spaced sampling times (1 µs in Figure 1), the microcontroller looks at the output of the voltage comparator to determine if the incoming voltage is higher than the PWM voltage. At each sampling time, if the PWM voltage is at a higher voltage than that of the incoming waveform, the PWM value is stored in a RAM array location corresponding to that sampling time.

Figure 1—It’s all in the timing. Firmware timing loops set the interval between samples in a burst of waveform samplings that starts with a trigger signal. The green dots represent voltage levels of the sampled signal at the time of sampling.

After all of the sample times have been tested against the PWM voltage, the PWM voltage is decremented. The process is then repeated until the PWM voltage has been reduced to its minimum value (0 V). Each scan of the sample times is started by a trigger signal that's derived from, or in some way related to, the incoming waveform. The finer the voltage resolution, the longer it takes to capture the waveform, because the waveform has to be sampled more times. Note that the settling time for the PWM DAC needs to be longer for finer voltage resolution.
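The capture loop just described can be simulated in a few lines of Python. The waveform, sample count, and 6-bit resolution below are my own illustrative choices, not the article's firmware:

```python
import math

def capture_repetitive(waveform, n_samples, levels):
    """Simulate the repetitive-sampling capture: for each PWM comparison
    level, stepping down from maximum, record the level at every sample
    time where the PWM voltage is still above the incoming signal.  The
    last (lowest) level stored at each time approximates the signal."""
    captured = [levels - 1] * n_samples
    for pwm in range(levels - 1, -1, -1):     # max level down to 0
        for t in range(n_samples):
            if pwm / (levels - 1) > waveform(t):
                captured[t] = pwm             # PWM above signal: store it
    return captured

# One period of a 0..1 V sine, 100 sample times, 6-bit resolution.
sine = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t / 100)
data = capture_repetitive(sine, n_samples=100, levels=64)
err = max(abs(data[t] / 63 - sine(t)) for t in range(100))
print(err < 2 / 63)   # True: reconstruction within about 1 LSB
```

In the real system each pass over the sample times happens on a separate repetition of the waveform, which is why the trigger must line up consistently from one repetition to the next.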

The total capture time (TCAP) equals the number of voltage levels × (trigger latency + scan time + step settling time), where the scan time is the number of samples multiplied by the sampling interval. Trigger latency is the average amount of time the controller waits for a trigger signal. The initial PWM settling time and the step settling time are the times for the PWM filter to charge to its initial value and settle after a 1-LSB step change, respectively. Capturing 100 samples at 1 Msps in a circuit optimized for 6-bit resolution (64 levels) takes approximately 69 ms; however, it takes about 1.3 s to measure the same waveform on a circuit optimized for 8-bit resolution.
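The capture-time formula translates directly into code. The latency and settling figures below are placeholder values of my own, since the article doesn't spell out the exact numbers behind its 69 ms figure; they merely land in the same ballpark:

```python
def total_capture_time(levels, trigger_latency, scan_time, settling):
    """TCAP = voltage levels x (trigger latency + scan time + settling),
    where scan_time is n_samples x sampling interval."""
    return levels * (trigger_latency + scan_time + settling)

# 100 samples at 1 Msps -> each scan lasts 100 us.
scan = 100 * 1e-6

# Hypothetical per-level overheads (not from the article); note the much
# longer settling time needed for the finer 8-bit step.
t6 = total_capture_time(64,  trigger_latency=500e-6, scan_time=scan, settling=400e-6)
t8 = total_capture_time(256, trigger_latency=500e-6, scan_time=scan, settling=4e-3)
print(f"6-bit: {t6 * 1e3:.0f} ms, 8-bit: {t8 * 1e3:.0f} ms")
```

The quadratic-feeling blowup from 6 to 8 bits comes from two compounding factors: four times as many levels, and a longer settling time per level.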

When capturing waveforms with long periods, the total time needed to capture the waveform is dominated by the time it takes the waveform to make the requisite number of repetitions. For shorter periods, the total time is dominated by the settling times for the PWM. Thus, the higher the sampling rate, the more you can speed up the capture cycle by using a faster DAC. A resistor network connected to some port pins could suffice for low-resolution (6-bit) waveform capture. An integrated circuit DAC would be better for higher resolution applications.

The quality of the trigger signal is essential to the fidelity of the captured waveform. The trigger signal must consistently appear at the same time with respect to the captured signal, otherwise severe distortion will result. This means that a noisy trigger signal, such as one derived directly from a noisy input signal, would give poor results. You’ll get the best results with a digital trigger signal taken directly from the source of the signal if such a trigger source is available.

Unsynchronized signals (e.g., noise) are not represented accurately; instead, such signals are underrepresented in the captured waveform. This quality, which results from synchronous sampling, is sometimes a good thing because it can effectively pull a signal out of the noise, which is an important property in applications such as ultra wideband and spread-spectrum signal decoding. But, if you intend to measure noise or jitter, this quality makes the system inappropriate.

Another aspect of sampled data systems is their susceptibility to aliasing. Aliasing is a phenomenon in which a signal appears to occur at a frequency other than that at which it actually occurs. For instance, when a 250-kHz square wave is viewed with a 1-µs sampling interval, it shows up properly as four samples per cycle; however, when it is captured at a 100-µs sampling interval, it appears as 16 samples per cycle, or a 625-Hz signal, which is one four-hundredth the actual frequency.
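A quick way to reason about where a component will alias is to fold its frequency around multiples of the sampling rate. The numbers below are clean examples of my own, separate from the square-wave case in the text:

```python
def alias_frequency(f_signal, f_sample):
    """Fold a signal frequency into the first Nyquist zone [0, fs/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 6 kHz tone sampled at 10 ksps shows up at 4 kHz:
print(alias_frequency(6_000, 10_000))        # 4000
# A 250 kHz tone sampled at 1 Msps is below Nyquist, so it is unchanged:
print(alias_frequency(250_000, 1_000_000))   # 250000
```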

To prevent aliasing, insert an analog filter in the signal path before the comparator’s input. In the example I’ve been focusing on, the Atmel AT90S2313 samples the signal at 1 Msps. The on-chip comparator has a propagation delay of 500 to 700 ns, providing inherent filtering for components of signals above approximately 800 kHz, and thus restricting the range of frequencies above the sampling rate that can be aliased down to frequencies below the sampling rate. To reduce the aliasing of signal components that have a lower frequency than the sampling rate, you’d need an additional external analog filter.

Figure 2—You can work with a bare minimum of parts, because it doesn’t take much to capture repetitive waveforms at 1 Msps and upload them to a terminal program on a PC for display and analysis. The passive components connected to pins 13 and 15 of the microcontroller are in the same basic configuration used for successive approximation A/D conversion; only the firmware is different.

AN IMPLEMENTATION

The simple implementation shown in Figure 2 needs only a microcontroller with a DAC and voltage comparator, and some way to get control signals into the chip and the data back out. The demonstration system, for which firmware is posted on the Circuit Cellar ftp site, assumes an Atmel AT90S2313-10 is connected to level-shifting inverters for the EIA-232 interface, such as a Maxim MAX232 with its 1-µF capacitors, a 10-MHz crystal with load capacitors, a decoupling capacitor, and the PWM low-pass filter connected to pins 13 and 15 of the microcontroller (see Photo 1).

Photo 1—The only components added to the operating Atmel AT90S2313 circuit needed to allow for waveform sampling with less than 1-μs resolution at 1-V full scale are the capacitor and two resistors. Imagine how small the circuit will be using surface-mount components.

It can be controlled by, and dump data to, an ASCII terminal program such as HyperTerminal, at capture rates from 1 µs per sample to 10 ms per sample, at 6-, 7-, and 8-bit resolution, with selectable trigger polarity. An example of a waveform captured with this system and plotted in a spreadsheet program is shown in Figure 3.

Figure 3—This is the capture of a 31.25-kHz sawtooth waveform. The sample rate is set to 1 μs per sample and the voltage resolution is 8 bits.

PWM LOW-PASS FILTER

The DAC uses pulse-width modulation, so it is necessary to have an averaging (low-pass) filter to recover the DC component while filtering out most of the PWM signal’s AC component. The AC component remaining on the filter’s output is referred to as ripple.

The filter is made up of 330- and 82-kΩ resistors and a 0.047-µF capacitor, which form a single-pole RC filter. The two resistors also form a voltage divider to reduce the full-scale voltage from the DAC to 1-V full scale. If you are worried about accuracy, you can replace the 82-kΩ resistor with a fixed resistor and a variable resistor in series to allow for full-scale calibration.

If 5-V full scale is appropriate for your application, you can omit the lower resistor and save a part. The low-pass filter for the PWM output needs to be made with a large enough time constant to keep the ripple to an acceptable level. After the filter time constant is pinned down, the controller must wait long enough after each step change in output voltage for the filter to settle adequately before starting measurements.

The PWM filter can be analyzed as a single resistor driving the capacitor (see Figure 4). Judging from the AT90S2313 datasheet, the output resistance of the PWM pin when operating at 5 V is approximately 28 Ω, which is negligible compared to the 330-kΩ resistor in series with it. The filter model is therefore plenty close if you take the resistance to be the parallel combination of the two resistors (see Figure 4).

Figure 4—The PWM filter is easily analyzed as a single resistor charging the capacitor by replacing the resistors with a single resistor equal to the parallel combination of the two, because that is what it looks like to the capacitor.

The first step is to select the time constant that gives an acceptably low ripple. For my application, I considered speed to be more important than absolute accuracy, so I decided to keep the ripple at 1 LSB. The time constant should be figured for the worst possible PWM signal. The worst case for ripple is when the lowest frequency appears at the filter's input; in the case of the AT90S2313, this occurs when the PWM output runs at a 50% duty cycle. Under this condition, the pulse frequency is about 19.6 kHz and the voltage across the capacitor is 0.5 V. When the pulse is high (the analysis is the same for the time the pulse is low, only the signs change), the difference between the PWM peak voltage (1 V) and the voltage across the capacitor appears across the equivalent resistance, and the current through that resistance charges the capacitor.

Note that 1 LSB of an 8-bit value based on 1-V full scale is 4 mV (1/255 V). Using the formula in Figure 5, the time constant must be approximately 3.2 ms. I chose the components by first selecting the largest capacitor and a pair of large resistors that had the necessary 4:1 resistance ratio while simultaneously giving nearly the correct time constant. The resulting combination gives a divide ratio of 1:5.02 and a time constant of 3.15 ms (67 kΩ × 0.047 µF).
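Working the Figure 5 numbers through (ripple EO = 4 mV, average drive voltage EI = 0.5 V, half-period T = 25.5 µs, linearizing the exponential charge for T much smaller than t):

```latex
E_O \approx E_I\,\frac{T}{t}
\quad\Longrightarrow\quad
t \approx \frac{E_I\,T}{E_O}
= \frac{0.5\ \text{V} \times 25.5\ \mu\text{s}}{4\ \text{mV}}
\approx 3.2\ \text{ms}
```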

Figure 5—A simplified model can be used to predict the relationship between the filter’s time constant and the amount of ripple. The charging current for the capacitor comes from the voltage drop between the 1 V from the output of the resistive divider and the voltage across the capacitor. Note that EO is the voltage change across the capacitor (1 LSB = 4 mV), and EI is the average voltage across the resistance (0.5 V). T is the time that voltage is applied across the circuit (25.5 μs), and t is the time constant of the circuit.

After the filter time constant is known, the settling times can be determined. I decided to have the controller wait for the initial settling of the filter to within 1 LSB of full scale before starting the waveform capture cycle, using the formula in Figure 6. For the settling time between successive steps, I wanted to wait until the voltage had changed by more than 0.5 LSB. Because the step size is 1 LSB, I chose one time constant, or about 3 ms.
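A quick check of the arithmetic, using the Figure 6 formula with ΔV = 4 mV, EI = 1 V, and t = 3.17 ms:

```latex
T = t\,\ln\!\left(\frac{E_I}{\Delta V}\right)
= 3.17\ \text{ms} \times \ln\!\left(\frac{1\ \text{V}}{4\ \text{mV}}\right)
\approx 3.17\ \text{ms} \times 5.5
\approx 17.5\ \text{ms}
```

For the per-step wait, one time constant leaves e^{-1}, or about 0.37, of the 1-LSB step unsettled, which meets the 0.5-LSB target.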

Figure 6—The initial settling time must be long enough to assure that the PWM output settles to within 1 LSB of the final voltage. It must be calculated for the worst-case scenario, which is when it starts from 0 V. Note that ΔV is the error in the settled voltage (1 LSB = 4 mV). EI is the voltage applied to the circuit, which is 1 V. ln is the natural logarithm (base 2.71828…). T is the time that voltage is applied across the circuit, and t is the time constant of the circuit (3.17 ms).

FIRMWARE

When capturing a waveform, the PWM circuit first generates the maximum output voltage and samples all time intervals starting from a trigger signal, taking care to keep the time between samples constant. Whenever the voltage at a sampled time exceeds the PWM voltage, the PWM value is stored in the RAM array location corresponding to that sample. In this way, at the end of the capture cycle, the peak value at each sampling time is stored in the RAM array.

The sampling loop in Listing 1 is the time-critical part of the code. It requires 10 clock cycles per sample. With a 10-MHz clock, the sampling rate is 1 MHz. Two clock cycles are taken up by the indirect jump instruction (ijmp), which jumps either to the next instruction in sequence (at the label oneus:) or to a delay routine that returns to the next instruction in sequence. Eliminating the indirect jump instruction would decrease the sampling interval to eight cycles. Straight-line coding would be inflexible and take a lot of program memory, but it could reduce the sampling interval to as few as three cycles when storing the waveform in RAM.

Listing 1—The sampling of the waveform takes place at the sbic ACSR,5 instruction, where the output of the comparator is tested. If the comparator's output is low, execution proceeds to st Y+,pwmval, the instruction that stores the data into the RAM array via the Y pointer. If the comparator's output is high, the program branches back to nextydelay, which increments the Y pointer without storing data.

At the beginning of a waveform collection cycle, the program sits in a wait loop until a transition occurs on the trigger input. After the triggering edge is detected, the sampling routine is called; it runs through and collects a full set of samples. Then the PWM value is decremented, a wait loop is executed to allow the RC filter in the PWM DAC to settle, and the program returns to wait for the next triggering event. This process continues until the lowest possible PWM value has been tested.

Timing uncertainty is introduced by the short loop in which the controller waits for the triggering edge. The uncertainty translates into jitter in the signal sampling. As long as the uncertainty is small compared to the signal-sampling interval, it should not contribute much in the way of noise to the captured waveform. In applications that use only a few machine cycles between samples, it pays to keep the wait loops as short as possible.

BELOW GROUND SIGNALS

Judging from the offset-versus-input voltage curve on the AT90S2313's datasheet, the comparator's differential gain is good enough for 6-bit waveform capture just above ground. For linearity errors of less than 1 LSB with 8-bit operation, the comparator inputs need to run closer to the middle of the power supply, where the curve is nearly flat.

There is a bonus to adding offset to the input signal: the system can then measure input signals at and below ground without clipping. When the input signal is level-shifted, the PWM DAC's output must be similarly offset. The PWM offset circuit provides an opportunity for an adjustable vertical-centering control (to use the oscilloscope term). Circuits that shift the input level and allow offset adjustment are shown in Figure 7.

Figure 7—The FET provides an offset allowing the input to swing above and below ground as well as moving the input to the AT90S2313’s on-chip comparator away from ground and enabling an offset adjustment. You can also achieve these functions with op-amps, but there are several trade-offs to consider.

Level shifting is achieved easily enough with an op-amp if you have a negative power supply, but my objective was to make the entire system operate from a single 5-V regulator. Besides, my cheap single-supply op-amps, which also had adequate dynamic range, had too poor a slew rate to give satisfactory performance at 1 Msps.

A junction field effect transistor (JFET) source follower is an ideal way to offset the input signal to a more positive voltage without much attenuation or loss of bandwidth. I used an MPF102 in my own circuit because I had some on hand. Numerous other small-signal JFETs would work well.

Pinch-off voltage is the FET parameter that most affects the offset because, for most FETs, this parameter varies widely. To obtain the approximate 2.5-V offset (the DC voltage on the FET source when the gate is grounded), you can hand-select an FET, adjust the source resistor (15 kΩ in the circuit above), or try a combination of the two. The higher the value of the resistor, the higher the offset voltage (i.e., up to nearly the pinch-off voltage of the FET, which is usually specified at a low current). Be aware that the source resistor affects the trade-off between bandwidth and signal loss. As the resistor gets larger, the bandwidth will decrease; as the resistor gets smaller, the gain of the source follower drops. For my particular circuit layout and its parasitic capacitance, 15 kΩ was about the upper limit for 1 Msps.

One way to add an adjustable DC offset to the output of the PWM circuit without affecting the RC filter's response time is to use an adjustable constant-current source. The current source shown in Figure 7 relies on the fact that the 2N2907's collector current is nearly equal to the emitter current. (The collector current equals the emitter current times alpha, which is nearly unity and quite stable.) Emitter current is determined by the voltage across the 8.2-kΩ emitter resistor, which follows the base voltage and is temperature-compensated by the diode in series with the potentiometer.

SIMPLE, ECONOMICAL, FLEXIBLE

Among the variations that may be useful are programmable offset and gain controls, and a calibration function using only a few resistors and additional I/O pins. In multiple-chip systems, the time-dependent sampling task can be offloaded to a low-cost slave processor with little or no RAM that sends intermediate results to a host. The slave could be one of the cheapest eight-pin microcontrollers offered that has a suitable on-chip voltage comparator. The minimum-mass waveform capture approach is a building block that produces a much faster sampling rate and costs less than conventional approaches using on-chip A/D converters.

I suspect that by now you have come up with some ideas of your own. It's easy enough to put the sample system together, so why not give it a try?

ABOUT THE AUTHOR

Dick Cappels enjoys tinkering with and writing about analog circuits and microcontrollers. He has published several papers relating to electronic displays in computer systems, and is currently active in the Society for Information Display. Dick holds 17 U.S. patents.

This article first appeared in Circuit Cellar 159.

An Introduction to Verilog

If you are new to programming FPGAs and CPLDs or looking for a new design language, Kareem Matariyeh has the solution for you. In this article, he introduces you to Verilog. Although the hardware description language has been used in the ASIC industry for years, it also has all the tools to help you implement complex designs, such as creating a VGA interface or writing to an Ethernet controller. Matariyeh writes:

Programmable logic has been around for well over two decades. Today, thanks to larger and cheaper devices on the market, FPGAs and CPLDs are finding their way into a wide array of projects, and there is a plethora of languages to choose from. VHDL is the popular choice outside of the U.S. and is preferred if you need a strongly typed language. However, the focus of this article is another popular language called Verilog, a hardware description language with a syntax similar to that of C.

Typically, Verilog is used in the ASIC design industry. Companies such as Sun Microsystems, Advanced Micro Devices, and NVIDIA use Verilog to verify and test new processor architectures before committing to physical silicon and post-fab verification. However, Verilog can also be used in other ways, such as implementing a complex design like a VGA interface or an Ethernet controller in a programmable device.

This article is mostly tailored to engineers who need to learn Verilog and know little or nothing about the language. Those who know VHDL will benefit from reading this article as well and should be able to pick up Verilog fairly quickly after reviewing the example listings and referring to the Resources at the end of the article. This article does not cover hardware, but I have included some links at the end that will help you learn more about how the hardware interacts with the language.

THE VERILOG LANGUAGE

First, it is best to know what variable types are available in Verilog. The basic types are binary, integer, and real. Other types are available, but they are not used as often as these three. Keep everything in the binary number system as much as possible, because type casting can cause post-implementation issues, though not all tools behave the same way. Binary and integer types can take on special values such as "z" (high impedance) and "x" (don't care). Both are nice to have around when you want a shared bus between designs or a bus to the outside world. Binary types can be assigned by giving an integer value. However, there are times when you want to assign or look at a specific bit, and some of the listings use this notation. In case you are curious, it looks like this: X'wY, where X is the word size, w is the number base (b for binary, h for hex), and Y is the value. Any value without this prefix is considered an integer by default. Keeping everything in binary, however, can become a pain in the neck, especially when dealing with numbers larger than 8 bits.
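The sized-literal notation can be sketched as follows (the signal names here are illustrative, not from the article's listings):

```verilog
module literals;
  reg [7:0] byte_val;
  reg [3:0] nibble;
  integer   count;

  initial begin
    byte_val = 8'hA5;        // X'wY form: 8-bit word, hex value A5
    nibble   = 4'b1010;      // 4-bit word, binary value 1010
    byte_val = 8'bzzzzzzzz;  // all bits high impedance
    nibble   = 4'bxxxx;      // all bits don't care
    count    = 165;          // no prefix, so treated as an integer
  end
endmodule
```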

Table 1 shows some of the variable types that are available in Verilog. Integer is probably the most useful one to have around because it's 32 bits long and helps you keep track of numbers easily. Note that integer is a signed type but can also be set with all "z" or "x." Real is not used that much; when a real value is used where an integer is expected, the number is truncated. It is best to keep this in mind when using the real type, granted it is the least popular compared to binary and integer. When any design is initialized in a simulator, the initial values of a binary and an integer are all "x." Real, on the other hand, starts at 0.0 because it cannot use "x." There are other types that are used when interconnecting within and outside of a design. They are included in the table, but won't be introduced until later.

Some, but not all, operators from C are in Verilog. Some of the operators available in Verilog appear in Table 2. It isn't a complete list, but it contains most of the commonly used operators. Like C, Verilog can perform implicit casting (i.e., adding an integer to a 4-bit word and storing the result in a binary register or even a real); this is typically frowned on because implicit casting in Verilog can open a new can of worms and cause issues when running the code in hardware. As long as casting does not give any erroneous results during an operation, there should be no show-stoppers in a design. Signed operation happens only if integer and real types are used in arithmetic (add, subtract, multiply) operations.

VERILOG MODULES

In Verilog, designs are called modules. A module defines its ports and contains the implementation code. If you think of the design as a black box, Verilog code typically looks like a black box with the top missing. Languages like Verilog and VHDL encourage black box usage because it can make code more readable, make debugging easier, and encourage code reuse. In Verilog, multiple code implementations cannot have the same module name. This is in stark contrast to VHDL, where architectures can share the same entity name. The only way to get around this in Verilog is to copy a module and rename it.

In Listing 1, a fairly standard shift register inserts a binary value at the end of a byte every clock cycle. If you're experienced with VHDL, you can see that there aren't any library declarations. This is mainly due to the fact that Verilog originated from an interpretive foundation. However, there are include directives that can be used to add external modules and features. The first lines after the module statement define the module's port directions and types with the reserved words input and output. There is another declaration called inout, which is bidirectional but does not appear in the listing. A module's input and output ports can use integer and real, but binary is recommended if it is a top-level module.
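Listing 1 itself is not reproduced in this excerpt; a shift register along the lines described might look like this (port and module names are illustrative, not necessarily the author's):

```verilog
module shifter (clk, d, q);
  input clk;                // port directions declared after the header
  input d;
  output [7:0] q;
  reg [7:0] q;              // reg shares the output port's name

  always @(posedge clk)
  begin
    q <= (q << 1) | d;      // logical left shift, insert d at bit 0
  end
endmodule
```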

The reg statement essentially acts like a storage unit. Because it has the same name as the output port, the two act as one item. Using reg this way is helpful because its storage ability allows the output to remain constant while system inputs change between clock cycles. There is another kind of statement called wire. It is used to tie more than one module together or to drive combinational designs. It will appear in later listings.

The next line of code is the always statement, or block. If you know VHDL, this is the same as the process statement and works in the same fashion. If you are completely new to programmable logic, it works like this: "For every action X that happens on signals indicated in the sensitivity list, follow these instructions." An always block is usually bracketed by begin and end statements, the equivalent of the curly braces seen in C/C++. It's best to use these with decision structures (i.e., always, if, and case) as much as possible.

Finally, the last statement is a logical left shift operation. Verilog bitwise operators in some instances need the keyword assign for the operation to happen. The compiler will tell you if an assign statement is missing. From there, the code does its insertion operation and then waits for the next positive edge of the clock. This was a pretty straightforward example; unfortunately, it doesn’t do much. The best way to get around that is to add more features using functions, tying-in more modules, or using parameters to increase flexibility.
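A purely combinational counterpart, using wire and the assign keyword mentioned above, might be sketched like this (module and signal names are illustrative):

```verilog
module inverter8 (a, y);
  input  [7:0] a;
  output [7:0] y;
  wire   [7:0] y;          // a wire, not a reg: no storage here

  assign y = ~a;           // continuous bitwise operation needs assign
endmodule
```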

TASKS & FUNCTIONS

Tasks and functions make module implementation clearer. Both are best used when redundant code or complex actions need to be split up from the main source. There are some differences between tasks and functions.

A task can call other tasks and functions, while a function can call only other functions. A task does not return a value; it modifies a variable that is passed to it as an output. Passing items to a task is also optional. Functions, on the other hand, must return one and only one value and must have at least one value passed to them to be valid. Tasks are well-suited for test benches because they can hold delay and control statements. Functions, however, have to be able to run within one time unit to work. This means functions should not be used for test benches or simulations that require delays or use sequential designs. Experimenting is a good thing because these constructs are helpful.
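The differences above can be sketched side by side; this is a minimal illustration with made-up names, not code from the article's listings:

```verilog
module demo;
  integer sum;
  reg [7:0] word;
  reg [3:0] hi, lo;

  // A function: takes at least one input, returns exactly one value,
  // and must complete within one time unit.
  function integer add2;
    input integer a, b;
    begin
      add2 = a + b;              // the return value is the function's name
    end
  endfunction

  // A task: returns nothing; results come back through output arguments,
  // and it may contain delay and control statements.
  task split_byte;
    input  [7:0] w;
    output [3:0] upper, lower;
    begin
      upper = w[7:4];
      lower = w[3:0];
    end
  endtask

  initial begin
    sum  = add2(2, 3);           // function call used in an expression
    word = 8'hC4;
    split_byte(word, hi, lo);    // task call used as a statement
  end
endmodule
```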

There is one cardinal rule to follow when using a function or task: it has to be defined within the module, unlike VHDL, where functions are defined in a package for maximum flexibility. Tasks and functions can, however, be written in a separate file and then pulled into a module with an include statement. This enables you to reuse code within a project or across multiple projects. Both tasks and functions can use types other than binary for their input and output ports, giving you even more flexibility.

Listing 2 contains a function that essentially acts like a basic ALU. Depending on what is passed to the function, it will process the information and return the calculated integer value. Tasks work in the same way, but the structure is a little different when dealing with inputs and outputs. As I said before, one of the major differences between a task and a function is that the former can have multiple outputs, rather than just one. This gives you the ability to make a task more complicated internally, if need be.

Listing 3 is an example of a task in action with more than one output. Note that it is used the same way as a function: it has to be defined and called within the module in order to work. Rather than define the task explicitly within the module, however, this task is defined in a separate file, and an include directive is added to the module code to show how functions and tasks can be kept outside a module and made available for other modules to use.

BUILDING WITH MODULES

If too much is added to a module, it can become so large that debugging and editing become a chore. An oversized module also minimizes code reuse, to the point where new counters and state machines get recreated when small modules or functions from a previous project would be more than adequate. A good way to avoid these issues is to create multiple modules, in the same file or across multiple files, and instantiate them within an upper-level module to use their abilities. Multiple modules are good to have for a pipelined system because the same kind of module can be used in several areas of a system. Older modules can also be reused this way, so less time is spent on constant recreation.

That is the idea of code reuse in a nutshell. Now I will discuss an example of code reuse and multiple modules. The shift register from Listing 1 feeds its data into an even-parity generator, and the results from both modules are output through the top-level module in Listing 4. All of this is done across multiple files, shown in one listing for easier reading. In all modular designs, there is always a module called the top-level entity, where all of the inputs and outputs of a system connect to the physical world. It is also where lower-level entities are spawned. Subordinates can spawn entities below themselves as well (see Figure 1).
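A sketch of the hierarchy just described, with hypothetical module and signal names (the actual Listing 4 may differ):

```verilog
// Hypothetical top-level module tying a shift register to a parity
// generator; shifter and parity8 are assumed to be defined elsewhere.
module top (clk, d, q, even);
  input clk, d;
  output [7:0] q;
  output even;

  wire [7:0] shift_out;      // a wire ties the two modules together

  shifter u_shift (clk, d, shift_out);   // lower-level instance
  parity8 u_par   (shift_out, even);     // consumes the shifted byte

  assign q = shift_out;      // also expose the byte at the top level
endmodule
```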

Think of it as a large black box with smaller black boxes connected with wires and those small black boxes have either stuff or even smaller black boxes. Pretty neat, but it can get annoying. Imagine a situation where a memory controller for 10-bit addressing is created and then the address length needs to be extended to 16 bits. That can be a lot of files to go through to change 10 to 16. However, with parameters all that needs to be changed is one value in one file and it’s all done.

PARAMETERS

Parameters are great to have around in Verilog and can make code reuse even more attractive. Parameters allow words to take the place of a numerical value, like #define in C, but with some extra features such as overriding. Parameters can be put in length descriptors, making it easy to change the size of an output, input, or variable. For example, if a VGA generator had a color depth of 8 bits but needed to be changed to 32-bit color depth, then instead of editing every location where the value occurs, only the value of the parameter would be changed; when the module was recompiled, it would display 32-bit color. The same can be done for memory controllers and other modules that have ports, wires, or registers 1 bit or more in size.

Parameters can also be overridden. This is performed just before or when a module is instantiated. It is helpful if a module needs to stay the same across separate projects that share the same source, but needs to be a little different for one particular project. Parameters can also be used in functions and tasks as long as the parameter is in the same file as the implementation code. Parameters with functions and tasks give Verilog much of the flexibility of a VHDL package, granted it really isn't a package, because the implementation is located in a module and not in a separate construct.

There are many ways to override parameters. One way is to use the defparam keyword, which explicitly changes the value of the parameter in the instantiated module before it is invoked. Another is to override the parameter when the module is being invoked. Listing 5 shows how both are done with dummy modules that already have defined parameters. The defparam method comes from an older version of the language, so depending on the version of Verilog being used, make sure to pick the right method.
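Both override styles can be sketched as follows (preg and its WIDTH parameter are made-up names for illustration, not the article's Listing 5):

```verilog
// A parameterized register; WIDTH defaults to 8 bits.
module preg (clk, d, q);
  parameter WIDTH = 8;
  input clk;
  input  [WIDTH-1:0] d;
  output [WIDTH-1:0] q;
  reg    [WIDTH-1:0] q;

  always @(posedge clk) q <= d;
endmodule

module top16 (clk, d, q);
  input clk;
  input  [15:0] d;
  output [15:0] q;

  // Override at instantiation time (newer style):
  preg #(16) u1 (clk, d, q);

  // The older defparam style would read instead:
  //   preg u1 (clk, d, q);
  //   defparam u1.WIDTH = 16;
endmodule
```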
