Low-Drop Series Regulator Using a TL431 (EE Tip #134)

You likely have a stash of 12-V lead acid batteries (such as the sealed gel cell type) in your lab or circuit cellar. Below is a handy tip from Germany-based Lars Krüger for a simple way to charge them.

One simple approach is to hook up a small unregulated 15-V wall-wart power supply. This can easily lead to overcharging, though, because the off-load voltage is really too high. The remedy is a small but precise series regulator using just six components, connected directly between the power pack and the battery (see the schematic), which doesn’t need any heatsink.

The circuit is adequately proof against short circuits (for a minimum of 10 seconds), with a voltage drop of typically no more than 1 V across the transistor’s collector-emitter path.

For the voltage source you can use any transformer power supply from around 12 V to 15 V delivering a maximum of 0.5 A. By providing a heatsink for T1 and reducing the value of R1 you can also redesign the circuit for higher currents.
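
If you want to sanity-check the design numbers, the short Python sketch below works through the arithmetic. The 13.8-V float-charge target and the divider resistor names are assumptions for illustration (the schematic is not reproduced here); the 2.495-V reference is the TL431’s nominal value, and the dissipation figures follow from the roughly 1-V drop and 0.5-A limit quoted above.

```python
# Back-of-the-envelope check for the TL431-based series regulator.
# The schematic is not reproduced here, so the divider resistor names
# (r_top, r_bottom) and the 13.8-V float-charge target are assumptions,
# not values taken from the original Elektor design.

V_REF = 2.495          # TL431 nominal reference voltage (V)
V_FLOAT = 13.8         # typical float-charge voltage for a 12-V gel cell (V)

# Feedback divider: V_FLOAT = V_REF * (1 + r_top / r_bottom)
r_bottom = 10_000                              # pick 10 kOhm for the lower leg
r_top = r_bottom * (V_FLOAT / V_REF - 1)       # upper leg needed for 13.8 V
print(f"r_top ≈ {r_top:.0f} Ω for a {V_FLOAT} V output")

# Pass-transistor dissipation: with roughly 1 V across the collector-emitter
# path (as stated in the tip) at the 0.5-A maximum, T1 only has to shed about
# half a watt - hence no heatsink.  2 A illustrates the higher-current case.
for i_charge in (0.5, 2.0):
    p_diss = 1.0 * i_charge                    # P = V_ce * I
    print(f"I = {i_charge} A  ->  P ≈ {p_diss:.1f} W dissipated in T1")
```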

Resource: http://focus.ti.com/lit/ds/symlink/tl431.pdf. This tip first appeared in Elektor July/August 2009.

Measuring Jitter (EE Tip #132)

Jitter is one of the parameters you should consider when designing a project, especially when it involves planning a high-speed digital system. Moreover, jitter investigation—performed either manually or with the help of proper measurement tools—can provide you with a thorough analysis of your product.

There are at least two ways to measure jitter: cycle-to-cycle and time interval error (TIE).

WHAT IS JITTER?
The following is the generic definition offered by The International Telecommunication Union (ITU) in its G.810 recommendation. “Jitter (timing): The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz).”

First, jitter refers to timing signals (e.g., a clock or a digital control signal that must be time-correlated to a given clock). Then you only consider the “significant instants” of these signals (i.e., the signal’s useful transitions from one logic state to the other). These events are supposed to happen at a specific time. Jitter is the difference between this expected time and the actual time when the event occurs (see Figure 1).

Figure 1—Jitter includes all phenomena that result in an unwanted shift in timing of some digital signal transitions in comparison to a supposedly “perfect” signal.

Last, jitter concerns only short-term variations, meaning fast variations as compared to the signal frequency (in contrast, very slow variations, lower than 10 Hz, are called “wander”).

Clock jitter, for example, is a big concern for A/D conversions. Read my article on fast ADCs (“Playing with High-Speed ADCs,” Circuit Cellar 259, 2012) and you will discover that jitter could quickly jeopardize your expensive, high-end ADC’s signal-to-noise ratio.

CYCLE-TO-CYCLE JITTER
Assume you have a digital signal with transitions that should stay within preset time limits (which are usually calculated based on the receiver’s signal period and timing diagrams, such as setup duration and so forth). You are wondering if it is suffering from any excessive jitter. How do you measure the jitter? First, think about what you actually want to measure: Do you have a single signal (e.g., a clock) that could have jitter in its timing transitions as compared to absolute time? Or, do you have a digital signal that must be time-correlated to an accessible clock that is supposed to be perfect? The measurement methods will be different. For simplicity, I will assume the first scenario: You have a clock signal with rising edges that are supposed to be perfectly stable, and you want to double check it.

My first suggestion is to connect this clock to your best oscilloscope’s input, trigger the oscilloscope on the clock’s rising edge, adjust the time base to get a full period on the screen, and measure the time dispersion of the clock edge just following the trigger. This method provides a measurement of the so-called cycle-to-cycle jitter (see Figure 2).

Figure 2—Cycle-to-cycle is the easiest way to measure jitter. You can simply trigger your oscilloscope on a signal transition and measure the dispersion of the following transition’s time.

If you have a dual time base or a digital oscilloscope with zoom features, you could enlarge the time zone around the clock edge you are interested in for more accurate measurements. I used an old Philips PM5786B pulse generator from my lab to perform the test. I configured the pulse generator to generate a 6.6-MHz square signal and connected it to my Teledyne LeCroy WaveRunner 610Zi oscilloscope. I admit this is high-end equipment (1-GHz bandwidth, 20-GSPS sampling rate, and an impressive 32-Mword memory when using only two of its four channels), but it enabled me to demonstrate some other interesting things about jitter. I could have used an analog oscilloscope to perform the same measurement, as long as the oscilloscope provided enough bandwidth and a dual time base (e.g., an old Tektronix 7904 oscilloscope or something similar). In any case, the result is shown in Figure 3.

Figure 3—This is the result of a cycle-to-cycle jitter measurement of the PM5786B pulse generator. The bottom curve is a zoom of the rising front just following the trigger. The cycle-to-cycle jitter is the horizontal span of this transition over time, here measured at about 620 ps.

This signal generator’s cycle-to-cycle jitter is clearly visible. I measured it at around 620 ps. That’s not much, but it can’t be ignored compared to the signal’s period of 151 ns (i.e., 1/6.6 MHz): 620 ps peak-to-peak corresponds to roughly ±0.2% of the clock period. Caution: when you perform this type of measurement, double-check the oscilloscope’s intrinsic jitter, because you are measuring the sum of the clock’s jitter and the oscilloscope’s jitter. Here, the latter is far smaller.
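
If you log the rising-edge timestamps (most digital oscilloscopes can export them), the same cycle-to-cycle figure can be computed offline. The sketch below uses synthetic timestamps with made-up noise, not the actual PM5786B capture, just to show the arithmetic: differences between successive edges give the periods, and differences between successive periods give the cycle-to-cycle jitter.

```python
import numpy as np

# Synthetic example: rising-edge timestamps of a nominally 6.6-MHz clock
# with a little random timing noise added (made-up numbers, not the
# actual PM5786B capture from the article).
rng = np.random.default_rng(0)
f_nom = 6.6e6
t_ideal = np.arange(1000) / f_nom
t_edges = t_ideal + rng.normal(0.0, 100e-12, t_ideal.size)   # ~100 ps rms noise

periods = np.diff(t_edges)   # time between consecutive rising edges
c2c = np.diff(periods)       # cycle-to-cycle jitter: change from one period to the next

print(f"nominal period                     : {1/f_nom*1e9:.1f} ns")
print(f"cycle-to-cycle jitter, peak-to-peak: {(c2c.max() - c2c.min())*1e12:.0f} ps")
print(f"cycle-to-cycle jitter, rms         : {c2c.std()*1e12:.0f} ps")
```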

TIME INTERVAL ERROR
Cycle-to-cycle is not the only way to measure jitter. In fact, this method is not the one suggested by the definition of jitter I presented earlier. Cycle-to-cycle jitter is a measurement of the timing variation from one signal cycle to the next, not between the signal and its “ideal” version. The jitter measurement closest to that definition is called time interval error (TIE). As its name suggests, it is a measure of the actual times of a signal’s transitions compared to their expected times (see Figure 4).

Figure 4—Time interval error (TIE) is another way to measure jitter. Here, the actual transitions are compared to a reference clock, which is supposed to be “perfect,” providing the TIE. This reference can be either another physical signal or it can be generated using a PLL. The measured signal’s accumulated plot, triggered by the reference clock, also provides the so-called eye diagram.

It’s difficult to know these expected times. If you are lucky, you could have a reference clock elsewhere on your circuit, which would supposedly be “perfect.” In that case, you could use this reference as a trigger source, connect the signal to be measured on the oscilloscope’s input channel, and measure its variation from trigger event to trigger event. This would give you a TIE measurement.

But how do you proceed if you don’t have anything other than the signal to be measured? With my previous example, I wanted to measure the jitter of a lab signal generator’s output, which isn’t correlated to any accessible reference clock. In that case, you could still measure a TIE, but first you would have to generate a “perfect” clock. How can this be accomplished? Generating an “ideal” clock, synchronized with a signal, is a perfect job for a phase-locked loop (PLL). The technique is explained in my article, “Are You Locked? A PLL Primer” (Circuit Cellar 209, 2007). You could design a PLL to lock on your signal frequency, and it could be as stable as you want (provided you are willing to pay the expense).

Moreover, this PLL’s bandwidth (which is the bandwidth of its feedback filter) would give you an easy way to zoom in on your jitter of interest. For example, if the PLL bandwidth is 100 Hz, the PLL loop will capture any phase variation slower than 100 Hz. Therefore, you can measure the jitter components faster than this limit. This PLL (often called a carrier recovery circuit) can be either an actual hardware circuit or a software-based implementation.
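
As a rough software illustration of this idea, the sketch below recovers an “ideal” clock from a set of edge timestamps with a least-squares straight-line fit and measures the TIE against it. The fit is only a crude stand-in for a real PLL or carrier-recovery circuit (it has no explicit loop bandwidth), and the timestamps are synthetic.

```python
import numpy as np

# The "perfect" reference clock is recovered with a least-squares line fit
# of edge time versus edge index - a crude stand-in for the PLL (carrier
# recovery) described in the text, with no explicit loop bandwidth.
rng = np.random.default_rng(1)
f_nom = 6.6e6
n = np.arange(5000)
t_edges = n / f_nom + rng.normal(0.0, 100e-12, n.size)   # synthetic timestamps

slope, intercept = np.polyfit(n, t_edges, 1)   # recovered period and phase
t_reference = slope * n + intercept            # "ideal" edge positions

tie = t_edges - t_reference                    # time interval error per edge
print(f"recovered period : {slope*1e9:.3f} ns")
print(f"TIE peak-to-peak : {(tie.max() - tie.min())*1e12:.0f} ps")
print(f"TIE rms          : {tie.std()*1e12:.0f} ps")
```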

So, there are at least two ways to measure jitter: cycle-to-cycle and TIE. (As you may have anticipated, many other measurements exist, but I will limit myself to these two for simplicity.) Are these measurement methods related? Yes, of course, but the relationship is not immediate. If the TIE is not null but remains constant, the cycle-to-cycle jitter is null. Conversely, if the cycle-to-cycle jitter is constant but not null, the TIE will increase over time. In fact, the TIE is closely linked to the mathematical integral over time of the cycle-to-cycle jitter, but the full story is a little more complex, as the jitter’s frequency range must be limited.
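
A few lines of arithmetic make that relationship concrete. Using made-up per-cycle period errors, the TIE at each edge is the running sum of those errors, while the cycle-to-cycle jitter is the difference between successive periods:

```python
import numpy as np

# Illustration of the relationship described above, with arbitrary numbers.
period_error = np.array([0.0, +2.0, +2.0, -1.0, 0.0, -3.0])   # ps, made-up values

tie = np.cumsum(period_error)   # TIE accumulates ("integrates") the period errors
c2c = np.diff(period_error)     # cycle-to-cycle jitter: change between successive periods

print("period error   (ps):", period_error)
print("TIE            (ps):", tie)
print("cycle-to-cycle (ps):", c2c)
```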

Editor’s Note: This is an excerpt from an article written by Robert Lacoste, “Analyzing a Case of the Jitters: Tips for Preventing Digital Design Issues,” Circuit Cellar 273, 2013.

Active ESD Protection for Microcontrollers (EE Tip #129)

Microcontrollers need to be protected from electrostatic discharge (ESD). You can use the circuit described in this post when an application requires a greater degree of ESD protection than an IC provides on its I/O pins. Although there are many ESD clamping devices out there, they don’t typically enable you to precisely limit voltage overshoots and undershoots.

Normally, when dealing with a microcontroller or other digital circuit, the connections on the device are protected against electrostatic discharge. Nevertheless, engineers take special precautions when handling such devices to avoid the risks of ESD: the lab will have an anti-static covering on the floor, and nylon clothes and shoes with soles made of insulating material are avoided. And, in case that is not enough, it is normal to wear an anti-static wrist band when moving devices from their anti-static bags to the anti-static bench surface. But what exactly do we mean when we talk about ESD?

HUMAN BODY MODEL

The first model for static discharge, mentioned as early as the 19th century, was the “human body model” (HBM). This takes as its starting point a voltage of up to 40 kV, a body capacitance of a few hundred picofarads, and a (skin) resistance of 1.5 kΩ. We find that even with a static voltage of only 10 kV, as might easily be acquired by walking across an artificial-fiber carpet in shoes with synthetic soles, it is possible to discharge through a fingertip at peak currents of up to 20 A! The discharge also happens in a very short period, perhaps measured in nanoseconds.

The HBM was adopted in the electronics industry in the 1970s with the introduction of sensitive JFET devices in space applications. The components were tested using a simple RC circuit like the one shown in Figure 1. The discharge current depends only on the resistance in the circuit, and the damped discharge curve is largely free of oscillation and is accurately reproducible.
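
As a quick illustration of what such an RC discharge looks like, the sketch below evaluates i(t) = (V0/R)·exp(−t/RC). The 1.5-kΩ resistance is from the text; the 100-pF capacitance is the usual standardized HBM test value (the text only says “a few hundred picofarads”). Note that with the full 1.5 kΩ in the path, a 10-kV charge gives a peak of about 7 A; the 20-A figure quoted above corresponds to a lower-resistance discharge path.

```python
import numpy as np

# Human-body-model discharge as a simple RC exponential:
#   i(t) = (V0 / R) * exp(-t / (R*C))
# R = 1.5 kOhm is from the text; C = 100 pF is the usual standardized
# HBM test value (an assumption here).
V0 = 10e3        # 10-kV static charge, as in the carpet example (V)
R = 1.5e3        # discharge resistance (ohm)
C = 100e-12      # body capacitance (F)

tau = R * C
t = np.linspace(0, 5 * tau, 6)
i = (V0 / R) * np.exp(-t / tau)

print(f"peak current  : {V0 / R:.1f} A")
print(f"time constant : {tau*1e9:.0f} ns")
for tk, ik in zip(t, i):
    print(f"t = {tk*1e9:5.0f} ns  i = {ik:5.2f} A")
```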

Figure 1—Standard test circuit and current waveform for the human body model

There are also other models that deal with discharge through a sensitive component, for example when a low-resistance electrical connection is made between two devices (the “machine model,” or MM), or when a static charge present on the device itself is discharged (the “charged device model,” or CDM)…

ESD CLAMP CIRCUITS

Figure 2 shows the typical protection circuitry provided on a microcontroller’s I/O port. This example is from an Atmel ATmega. Other microcontrollers and logic devices use similar arrangements. Two bipolar protection diodes conduct discharge currents that could cause undershoots or overshoots to one of the supply rails, either VCC or ground. However, the diodes take about 6 ns before they conduct fully.

Figure 2—Typical ESD protection circuit, as found in an Atmel microcontroller

Since ESD transients can sometimes be considerably shorter than this, it is possible that the CMOS circuit structures will be damaged long before the diodes spring into action. The parasitic capacitance of the pin is around 6 pF, and this is quickly charged up by the energy in the electrostatic discharge. Unfortunately, we cannot increase this capacitance without increasing the impedance of the pin, which is not desirable.
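
To get a feel for why those 6 ns matter, here is a back-of-the-envelope estimate of how far an ESD current drives the pin voltage before the diodes conduct, using ΔV = I·Δt/C. The 6-pF pin capacitance is from the text; the 2-A, 1-ns pulse is an illustrative assumption.

```python
# Rough estimate of how far an ESD transient drives an unprotected I/O pin
# before the on-chip diodes (roughly 6 ns to full conduction, per the text)
# take over.  The 2-A / 1-ns pulse is an illustrative assumption.
C_PIN = 6e-12      # parasitic pin capacitance from the text (F)
I_ESD = 2.0        # assumed discharge current (A)
T_PULSE = 1e-9     # assumed pulse duration, well under the 6-ns diode turn-on (s)

delta_v = I_ESD * T_PULSE / C_PIN      # dV = I * dt / C for a constant current
print(f"Voltage rise on the pin: {delta_v:.0f} V")   # ~330 V, far beyond abs-max ratings
```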

Standard ESD protection circuits like this one are designed to meet the particular requirements set by the ESD Association. However, it is becoming apparent that the traditional models are not appropriate for modern applications. Recent efforts have been directed toward developing a new “system level model” (SLM), which takes into account the different aspects of the older models. This model employs two stored charges that are discharged in different ways, creating a high-amplitude current pulse that decays very quickly plus a low-amplitude pulse that dies away more slowly. The energy transferred in a discharge under the SLM can be very much higher than that in the traditional models (Figure 3).

Figure 3—Current waveform under the system-level model

It is readily apparent that the conventional I/O pin circuitry on the IC is not sufficient to provide ESD protection under this model. Also, the continuing industry pressure to make smaller and more complex structures makes it very difficult for design engineers even to maintain current levels of ESD protection, let alone improve on them. In other words: the silicon area needed to provide ESD protection in accordance with the SLM is simply not available! For this reason, external ESD clamp circuits are becoming more relevant. If a component provides only a low level of ESD protection (or even none at all) it is possible to add such a circuit at the points most at risk. The clamp circuits usually use so-called transient suppression diodes (transils or tranzorbs) which, like Zener diodes, start to conduct at a specified threshold voltage. However, unlike Zener diodes, they react quickly and can withstand much higher current transients. There are many variations on the circuit design, but none has exceptional performance and none offers precise clamping of voltage undershoots and overshoots.

STATE-OF-THE-ART ESD CLAMPING

If we are in the lucky position of not having to worry about the last cent of materials cost or the last square millimeter of board area, we can easily create a state-of-the-art active ESD protection circuit from discrete components (Figure 4).

Figure 4—This protection circuit clamps voltage transients outside defined upper and lower thresholds

The transistor circuit forms a kind of regulated voltage divider. The current through the two resistors R2 and R3 is such that the voltages across them are just enough that transistors T1 and T4 start to conduct and T2 and T3 are just short of saturation. So we have one base-emitter voltage (about 600 mV) across each of these two resistors, which means in turn that the emitters of T2 and T3 are 600 mV below VCC and above ground respectively. The circuit as shown is suitable for a 5 V supply; R1 can be changed to suit supplies of 3.3 V or 2.7 V if needed.

What is the point of this complexity? If the I/O pin is high (at +5 V), the upper 1N4148 switching diode conducts fully, since its cathode is at only 4.4 V. If a positive voltage transient occurs, it is conducted by the 1N4148, without switching delay, to the positive rail via the 1N5817 Schottky diode D2, which acts quickly and has a low forward voltage. The same thing happens with polarities reversed when a negative voltage transient (below ground) occurs. Hence the digital inputs and outputs are protected against voltage excursions outside the range of the supply rails. In addition, voltage peaks are limited by the use of suppression inductors: the Murata BLM series inductor presents a relatively high impedance to signals in the 100-MHz range and so can significantly reduce the level of transients.
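
The sketch below turns the voltages quoted above into approximate clamp thresholds. Only the 0.6-V bias offsets come from the text; the 0.7-V diode drop is a typical 1N4148 value, and the assumption that the pre-biased nodes hold their potential while absorbing the transient is mine, since the schematic is only described, not reproduced, here.

```python
# Ballpark clamp thresholds for the active clamp of Figure 4.  Only the
# 0.6-V bias offsets come from the text; the 0.7-V 1N4148 forward drop is
# a typical value, and the assumption that the pre-biased nodes hold their
# potential during a transient is an assumption of this sketch.
VCC = 5.0
V_BIAS = 0.6          # T2/T3 emitters sit this far inside the rails, per the text
V_F_1N4148 = 0.7      # assumed forward drop of the steering diodes

clamp_high = (VCC - V_BIAS) + V_F_1N4148   # pin clamped just above the positive rail
clamp_low = V_BIAS - V_F_1N4148            # pin clamped just below ground

print(f"upper clamp ≈ {clamp_high:.1f} V")   # about 5.1 V
print(f"lower clamp ≈ {clamp_low:.1f} V")    # about -0.1 V
```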

Although the approach we have described works well with digital levels, it is not suitable for use with signals destined for the analog-to-digital converter (ADC) on a microcontroller. In this case a reverse-biased diode between the signal and each supply rail is required to clamp overshoots and undershoots, with a pair of 10 kΩ series resistors to limit the transient current.
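
For the ADC-input variant, the 10-kΩ series resistor is what keeps the clamp-diode current manageable. A one-line estimate, with an arbitrary 1-kV spike assumed for illustration:

```python
# Current-limiting estimate for the ADC-input variant described above.
# The 1-kV transient amplitude is an arbitrary illustration; the 10-kOhm
# series resistor and the diode clamp to the rail come from the text.
V_TRANSIENT = 1000.0   # assumed spike amplitude (V)
V_CLAMP = 5.0 + 0.7    # rail plus one typical diode drop (V)
R_SERIES = 10e3        # series resistor from the text (ohm)

i_diode = (V_TRANSIENT - V_CLAMP) / R_SERIES
print(f"Peak current into the clamp diode ≈ {i_diode*1e3:.0f} mA")
```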

The series-connected capacitors C2 and C3 present a low-impedance path for transients between VCC and ground, and hence spikes on the supply rails will also be conducted away.—P. Kruger, “Active ESD Protection,” Elektor January/February 2014

Editor’s note: This article originally appeared in Elektor January/February 2014. It was shortened and updated for publication on CircuitCellar.com, which is an Elektor International Media Publication.


RESOURCES

www.teseq.de/de/de/service_support/technical_information/01_Transient_immunity_testing_e.pdf

www.ti.com/lit/sg/sszb130b/sszb130b.pdf

www.semtech.com/circuit-protection/esd-protection/

www.murata.com/products/emc/basic/feature/bl_intro.html

Op-Amp Versus Comparator (EE Tip #128)

Practically every lecture course or textbook on electronics describes how to use an operational amplifier as a comparator. Here we look at the possibility in more detail, and see how it can often be a very poor idea.

The idea behind the comparator configuration is simple. An op-amp has a very high open-loop DC gain which means that even a tiny differential input voltage will drive the output to one extreme or the other. If the voltage at the non-inverting (“+”) input is greater than that at the inverting (“–”) input the output goes high; otherwise the output goes low. In other words the two voltages are compared and the output is a binary indication of which of the two is the greater.

Figure 1: SPICE simulation results: an LT1028 op-amp pressed into service as a comparator versus a real comparator type LT1720.

So the op-amp looks like the perfect device to use as a comparator. But why then do there exist special-purpose comparator devices? Looked at from the outside, op-amps and comparators appear indistinguishable. Besides power connections, they both have “+” and “–” inputs and a single output. Taking a look at the internal circuit diagram, again the two devices appear broadly very similar (although a comparator device with an open-collector or open-drain output does look more obviously different from an op-amp). The big difference, which is not apparent without looking at the circuit more closely, is that the output stages of operational amplifiers are designed for linear operation, with the general aim of amplifying the input signal with as little distortion as possible (assuming that some negative feedback is provided). In the case of a comparator, by contrast, the output circuit is designed to operate in saturation, that is, to switch between the upper and lower output voltage limits without the provision of external feedback. Comparators often also offer a ground connection in addition to the usual power connections, and provide digital logic levels at their outputs while accepting symmetrical analogue input signals.

What do these differences mean in practice? Comparators can react very quickly to changes in their input voltages with short propagation delays and output rise- and fall-times all specified by the manufacturer.

In contrast, because op-amps are not expected to be used in this mode, manufacturers tend not to give explicit specifications for propagation delay and rise- and fall-times (although they do normally specify slew rate), and these characteristics can be considerably poorer for op-amps than for comparators. To take an extreme example, a low-power op-amp might have a propagation delay measured in milliseconds, whereas a comparator might react in nanoseconds: a million times faster.

There is a further problem with op-amps. Many devices exhibit significantly increased power consumption when the output is in saturation, the resulting power dissipation on occasion being enough to destroy the device. Also, many op-amps (those not advertised as having “rail-to-rail outputs”) are not capable of driving their outputs close to the supply rails, for example having a maximum output voltage of 3 V with a 5-V supply. There can also be restrictions on the inputs. Some op-amps are equipped with antiparallel diodes across their input terminals, which prevent differential input voltages of more than about 0.6 V, whereas comparators’ inputs are often allowed to vary over the whole supply range.

Of course, there are many noncritical applications where an op-amp will work perfectly acceptably as a comparator, but it is not a practice to be recommended. The skeptic should lash up a quick test with a comparator and an op-amp side-by-side, each fed with a squarewave signal with rapid edges. Some of the potential pitfalls are shown up more easily in simulation, such as the possibility of an op-amp being so slow that it entirely misses a narrow pulse. It is hard to guarantee circuit performance, current consumption, and even the survival of the device.

The illustrations show a SPICE simulation of a relatively nimble op-amp (an LT1028 with a minimum slew rate of 11 V/µs) and a type LT1720 comparator. It is clear that the comparator responds sooner and with a much shorter rise time. Its output swings all the way to +5 V rather than the 3 V managed by the op-amp. The situation is similar when the output swings low: the op-amp is much slower and only reaches an output voltage of –3 V rather than –5 V. The original squarewave is hardly recognizable at the op-amp’s output. Although the LT1028 cannot achieve its maximum specified gain with a ±5-V supply, it is still a factor of at least 20 faster than an LM324 (with a slew rate of 0.5 V/µs); what the latter would make of our squarewave would not be a pretty sight. The op-amp fails to cope at all with shorter pulses, which are then effectively “swallowed,” while the comparator continues to handle them without difficulty.
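
The slew-rate arithmetic behind that factor of 20 is easy to reproduce. Using the ±3-V output swing seen in the simulation and the slew rates quoted above:

```python
# Slew-time arithmetic behind the comparison above.  The 6-V output swing
# (±3 V, as seen in the simulation) and the slew rates quoted in the text
# are used; everything else follows from t = ΔV / slew rate.
swing = 6.0                          # volts, -3 V to +3 V at the op-amp output

for name, slew_v_per_us in (("LT1028", 11.0), ("LM324", 0.5)):
    t_us = swing / slew_v_per_us
    print(f"{name}: {t_us:.2f} µs to traverse {swing:.0f} V")

# LT1028 ≈ 0.55 µs, LM324 ≈ 12 µs - a ratio of about 22, consistent with the
# "at least 20 times faster" statement, while the LT1720 comparator switches
# in a few nanoseconds.
```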

Worthwhile further reading on this subject is Texas Instruments application note SLOA067 by Bruce Carter entitled “Op Amps and Comparators—Don’t Confuse Them!.”— Michael Holzl, Elektor January 2011

CircuitCellar.com is an Elektor International Media publication.

Build an Adequate Test Bench (EE Tip #127)

It’s in our makeup as engineers that we want to test our newly received boards as soon as possible. We just can’t wait to connect them to a power supply and then use our test bench equipment (e.g., generators, oscilloscopes, switches or LEDs, and so on) for simulation.

Circuit Cellar columnist Robert Lacoste’s clean, orderly workspace in Chaville, France.

But due to our haste, the result is usually a PCB under test lying on a crowded workbench in the middle of a mesh of test cables, alligator clamps, prototyping boards, and other probes. Experience shows that the probability of a short circuit or mismatched connection is high during this phase of engineering excitement.

Test Board

Rather than requiring a mesh of test wires, it is often wise to develop a small test PCB that will drastically simplify the test phase. Here the ancillary board provided a clean way to connect a Microchip Technology ICD3 debugger, a JTAG emulator, a debug analyzer, and a power supply input.

Take your time: prepare a real test bench to which you can connect your board. It could be as simple as a clean desk with properly labeled wires, but you might also need to anticipate the design of a test PCB in order to simplify the cabling.—Robert Lacoste, “Mixed-Signal Designs,” CC25:25th Anniversary Issue, 2013.