Issue 318: EQ Answers

Here are the answers to the four EQ problems that appeared in Circuit Cellar 318.

Problem 1: Outside of simply moving data from one place to another, most of the work of a computer is performed by “dyadic” operators — operations that combine two values to form a third value. Examples include addition, subtraction and multiplication for arithmetic; AND, OR and XOR for logical operations. A dyadic operation requires three operands — two “source” values and a “destination” location. One way to classify a computer’s ISA (instruction set architecture) is by the number of operands that are explicitly specified in a dyadic instruction. The classifications are:

  • 0-address (stack machine)
  • 1-address (accumulator-based)
  • 2-address
  • 3-address

Can you describe some of the pros and cons of each of these choices?

Answer 1:

0-address

A 0-address machine is also known as a “stack” machine. All operators take their source operands from the stack and place their result back on it. The only instructions that contain memory addresses are the “load” and “store” instructions that transfer data between the stack and main memory.

Pros: Short instructions, no implicit limit on the size of the stack.

Cons: More instructions required to implement most computations. Parallel computations and common subexpressions require a lot of “stack shuffling” operations.

1-address

In this type of machine, the ALU output is always loaded into an “accumulator” register, which is also always one of the source operands.

Pros: Simple to construct. Eliminates many of the separate “load” operations.

Cons: Requires results to be explicitly stored before doing another calculation. Longer instructions, depending on the number of registers, etc.

2-address

This type of machine allows the two source operands to be specified independently, but requires that the destination be the same as one of the source operands.

Pros: Allows the result to be placed in any register, not just a single accumulator, eliminating more “move” operations.

Cons: Even longer instructions.

3-address

This type of machine allows all three operands to be specified independently.

Pros: Most flexible, eliminates most data moves.

Cons: Longest instructions.

To summarize, the short instructions of the stack machine allow a given computation to be done in the smallest amount of program memory, but require more instruction cycles (time) to complete it. The flexibility of the 3-address architecture allows a computation to be done in the fewest instruction cycles (least time), but it consumes more program memory.


Problem 2: In order to be generally useful, a computer ISA must be “Turing complete”, which means that it can — at least theoretically, if not in practice — perform any computation that a Turing Machine can do. This includes things like reading and writing data from a memory, performing arithmetic and logical computations on the data, and altering its behavior based on the values in the data. Most practical computers have relatively rich instruction sets in order to accomplish this with a reasonable level of efficiency. However, what is the minimum number of instructions required to achieve Turing-completeness?

Answer 2: Just one instruction, chosen carefully, is sufficient to achieve Turing-completeness. One example would be the instruction “subtract one from memory and branch if the result is not zero”. All of the operations of an ordinary computer can be synthesized as sequences of these “DJN” instructions. Note that since there is only one choice, there is no need to include an “opcode” field in the coding of each instruction. Instead, each instruction simply contains a pair of addresses: the data to be decremented, and the destination address of the jump.


Problem 3: Some processor ISAs are notorious for not being “friendly” to procedure-oriented languages such as C, requiring a lot of work on the part of the compiler in order to produce reasonably efficient code, and even then, often introducing some restrictions for the programmer. What are some key features of an ISA that would make it “C-friendly”?

Answer 3: The key concept in procedure-oriented languages like C is that of function composition. This means that it must be easy to produce new functions by combining calls to existing functions, and that functions can be called in the process of building argument lists for other functions. The C language takes this to the extreme, in the sense that every operator — including the assignment operator — creates an expression that has a result value that can be used to build larger expressions. Therefore, one key architectural element is the ability to create function contexts — sets of parameters, local variables and return values — that can be “stacked” to arbitrary levels. In terms of an ISA, this means that it must support the direct implementation of at least one data stack that includes the ability to index locations within that stack relative to a stack pointer and/or a frame pointer. This concept is a direct abstraction from the hardware addressing modes of the PDP-11 minicomputer, the machine on which the first versions of C were developed. The PDP-11 ISA allows any of its 8 general-purpose registers to be used to address memory, with addressing modes that include “predecrement” and “postincrement” — implementing “push” and “pop” operations as single instructions — as well as “indexed indirect”, which allows local variables to be addressed as an offset from the stack pointer.


Problem 4: Sometimes a computer must work on data that is wider than its native word width. What is the key feature of its ISA that makes this easy to do?

Answer 4: The key feature in an ISA that allows arithmetic and shift operations to be extended to multiples of the processor’s native word width is that of a “carry” status bit. This bit allows one bit of information to be “carried” forward from one instruction to the next without requiring extra instructions to be executed.

For arithmetic operations, this bit remembers whether the instruction operating on the lower-order words of the operands resulted in a numerical “carry” or “borrow” that will affect the instruction operating on the next-higher-order words. Similarly, for shift and rotate instructions, this bit remembers the state of the bit shifted out of one word that needs to be shifted into the next word.

Contributor: David Tweed

Issue 316: EQ Answers

Question 1: What is the second grid in a tetrode vacuum tube for? How about the third grid in a pentode?

Answer 1: In a triode, there is a certain amount of capacitance between the control grid and the plate, which contributes to negative feedback and stability problems if there’s significant phase shift in the surrounding circuitry. This often requires “neutralization”, which consists of an external capacitance between the plate and the cathode (often just a metal tab along the outside of the tube) that helps cancel out this effect.

The second grid in a tetrode, called the screen grid, is used to electrostatically isolate the control grid from the plate and eliminate this effect. It is usually tied to a voltage that is close to the plate voltage, but it is heavily bypassed (AC-coupled to the cathode or to ground). A secondary effect of this grid is to help intensify the E-field near the control grid and accelerate the electrons in this region.

A problem that crops up in tetrodes, however, is that electrons get knocked loose from the plate by the impact of the cathode current in a process called secondary emission. Some of these electrons get drawn to the second grid, creating a current that is proportional to the plate current and partly negating the intended effect of this grid. A pentode introduces a third grid, called a suppressor grid, that is tied to a more negative voltage (in fact, it is usually tied directly to the cathode) and repels these secondary electrons back toward the plate.

Question 2: Wirewound resistors tend to have an undesirable reactance because of their construction. This series inductance causes the overall impedance to rise with frequency. Sometimes it is suggested to wind the resistor as two separate windings and then connect them so that their magnetic fields cancel. However, this creates a different problem. What is it?

Answer 2: In order to get better magnetic cancellation, the two windings are often done by twisting the two wires together and then winding them together on a form. When you connect the windings so as to cancel, it turns out that the terminals of the resistor are the two wires at the same end of the combined winding. Because of their physical proximity, this creates a great deal of parasitic capacitance that appears in parallel with the desired resistance. This causes the overall impedance to fall off at higher frequencies.

Question 3: What is the relationship, if any, between the GPS master clock and the GPS microwave carrier frequencies L1 and L2? Why are two different frequencies used?

Answer 3: The L1 carrier is 1575.42 MHz, which is exactly 154 times the GPS master clock rate of 10.23 MHz.

The L2 carrier is 1227.60 MHz, or 120 times the master clock.

Two frequencies are used so that receivers can make estimates of the bending effects of the ionosphere, which allows them to make corrections to their time-of-flight measurements. Both carriers are modulated with the C/A (coarse acquisition) signal.

Also, high-resolution GPS receivers can lock directly onto the carrier frequencies in order to establish their position more accurately. The carrier wavelength is just 19 or 24 cm, while the C/A “chip” wavelength (at 1.023 MHz) is 293 m. Such receivers can establish absolute position to within a few cm and make relative position measurements to a fraction of 1 cm.

Question 4: Who, exactly, is “ELI the ICE man?”

Answer 4: Not who, but what. It’s a mnemonic phrase that reminds you that voltage (E) in an inductor (L) leads the current (I), or “ELI”, and that current (I) in a capacitor (C) leads the voltage (E), or “ICE”.

What we’re talking about here is the phase relationships between voltage and current when you apply a sinewave voltage to a coil or capacitor. Mnemonic devices can be handy, but it’s better to have a good basic understanding of what’s going on.

An inductor stores energy in the form of a magnetic field produced by the current flowing through it. Although you can apply an arbitrary voltage across a coil, the current will change only by adding or subtracting energy from the field. This causes the current to lag behind the applied voltage.

Similarly, a capacitor stores energy in the form of an electric field produced by the charge on its plates. Although you can apply an arbitrary current to a capacitor, the voltage will change only by adding or subtracting charge from the plates. This causes the voltage to lag behind the applied current, or equivalently, the current to lead the voltage.

Issue 314: EQ Answers

Answer 1—Intersymbol interference is created by nonlinearities in the phase/frequency response of the RF channel. These irregularities are generally proportional to the carrier frequency — for example, if a channel centered at 1 MHz has a quality factor (Q) of 50, it will have a 3-dB bandwidth of 20 kHz. But a similar channel centered at 100 MHz will have a bandwidth of 2 MHz.

If you need a bandwidth of 20 kHz for your signal, the 100-MHz channel will have a flatter response over any 20 kHz segment than the 1-MHz channel, reducing the phase distortion.

This is just one way of thinking about it. In reality, there are many factors that affect the flatness of any given communication channel — and the analog circuitry used to interface to it. It’s just that in general, it’s easier to keep distortions of this type low if the bandwidth is a smaller fraction of the carrier frequency.

Answer 2—14.31818 MHz is exactly 4× the NTSC color-burst frequency. The value for the latter is 30 × 525 × 455 / 2 / 1.001 = 3,579,545.45… Hz ≈ 3.579545 MHz. This crystal was used in computer display adapters that were used to produce signals that could be displayed on a standard NTSC color TV set.

11.0592 MHz is exactly 96× (i.e., 6 × 16) the standard UART baud rate of 115.2 kbps. It is also conveniently close to the maximum clock rate of many early Intel single-chip microcontrollers (12 MHz). This allowed systems based on them to communicate at standard baud rates without requiring a separate crystal.

6.176 MHz is exactly 4× the data rate of a digital telephony T1 subscriber line, which is 8000 × 193 = 1.544 MHz. This crystal frequency is often used in a VCXO (voltage-controlled crystal oscillator) within a PLL. This allows the terminal equipment to establish a local timebase that is synchronized to the network.

Answer 3—A solar panel has a characteristic curve that resembles that of a diode, except that in the presence of sunlight, the curve is shifted so that the panel can deliver energy to an external load.

Both of the proposed batteries will cause the panel to operate close to its maximum current, with the 1.5-V battery receiving slightly more current than the 9-V battery, and therefore greater charge (current × time).

However, the 9-V battery is getting 6× the voltage at a current that is much greater than 1/6 that of the 1.5-V battery, therefore it is receiving more power (current × voltage) and more energy (power × time).

Answer 4—An MPPT (maximum power point tracking) controller would deliver the full power of the solar panel to either battery, giving each one the same total energy. However, the 1.5-V battery would receive 6× the current and 6× the charge in the process.


Issue 312: EQ Answers

Question 1: What is the probability of a flip-flop with a uniformly-distributed asynchronous input going metastable?

Answer 1: The probability that a flip-flop with a uniformly-distributed asynchronous input will go metastable depends on the width of its “window of opportunity” (the sum of the setup and hold times) relative to the clock period. It is proportional to (and less than, because of manufacturers’ testing margins) the ratio of these two times.

Question 2: How long does it take for a flip-flop in a metastable state to resolve itself?

Answer 2: There is no definite time for a flip-flop to resolve a metastable state. All we can say is that there is some probability that it will remain metastable after a given amount of time has passed. This is usually a rapidly-decaying exponential function. The scale factor, or time constant, associated with this function is determined by factors such as the internal gain of the flip-flop and the speed of the active devices used in its implementation.

Question 3: Why does putting multiple flip-flops in series reduce the probability of having a metastable output at the output?

Answer 3: When two flip-flops are placed in series, the probability of the second one going metastable is the product of two factors: the probability of the first one going metastable in the first place, and the probability of that metastable state lasting at least as long as the clock period. Both of these factors are much less than unity, so their product is lower still.

A third flip-flop is sometimes used, which reduces the chances of metastability to infinitesimal levels.

Question 4: Under what conditions will a metastable condition propagate from one flip-flop to the next?

Answer 4: In order for the second flip-flop in a chain to go metastable, its input must be changing in the “window of opportunity” defined by its setup and hold times.

Note that if the first flip-flop has not yet resolved itself by the time the next clock edge comes along, the second one will not generally go metastable. This is because the circuitry of the flip-flop is always designed so that the input threshold voltage differs from the metastable output voltage by enough of a margin to guarantee that the second flip-flop will interpret it as a definite high or definite low.

Therefore, the only way that the second flip-flop can go metastable is if the first one’s metastable state had just started to resolve itself at the next clock edge, such that its output was passing through the second one’s input threshold at that moment.

Contributor: David Tweed

Issue 310: EQ Answers

Answer 1: UDP packets are subject to the following problems. Packets may be lost. Packets may experience variable delays. Packets may arrive in a different order from the order they were transmitted.

UDP gives the application the ability to detect and deal with these issues without experiencing the overhead and arbitrarily large delays associated with TCP.

Since UDP packets can get lost or arrive out of order, you include a sequence number in the packet so that the receiving side can detect either of these occurrences.

The packets also experience random delays over some range that is generally bounded. Therefore, you use a FIFO buffer (or “elastic store”) on the receive side to hide the packet arrival “jitter”. You try to keep an amount of data in this buffer that corresponds to the average packet delay, plus a safety margin. If this FIFO ever “runs dry”, you might need to set the (re-)starting threshold to a higher value. Packets that arrive extremely late are treated the same as lost packets.

Answer 2: Any difference between the transmit and receive sample clocks means that the average amount of data in the receive-side FIFO will start to trend upward or downward over time. If the FIFO depth is increasing, it is necessary to increase the output audio sample rate slightly to match. Similarly, if it is decreasing, it is necessary to decrease the sample rate. These adjustments will cause the long-term average sample rate of the receiver to match that of the transmitter exactly.

Answer 3: You can effectively do both the multiplication and the division one addition and one subtraction at a time, by keeping track of the milliseconds right inside the ISR, rather than (or in addition to) simply counting the raw ticks:

    /* microseconds can be a 16-bit integer */
    microseconds += microsecondsPerTick;
    while (microseconds >= 1000) {
        microseconds -= 1000;
        ++milliseconds;
    }

Answer 4: I2C clock “stretching” refers to the mechanism by which a slave device holds SCL low *after* the master has driven it low, in order to prevent it from going high again before the slave is ready to process the next data bit.

If the master is waiting for more data from, say, a host CPU, it simply won’t drive SCL low in the first place — it’ll simply leave SCL high until the next data transfer can start. There’s no reason for the master to hold SCL low for an extended period of time.

The one exception would be during the arbitration phase of a multi-master setup. In that case, some clock stretching will occur as a result of the various masters not being strictly in-phase as they start their transfers.

Issue 308: EQ Answers

Problem 1—The circuit shown below is an audio amplifier with a slightly unusual topology. Explain how to analyze its DC operating point.

[Figure: eq0675_fig1]

 Answer 1—For the DC analysis, start by calculating the Thevenin equivalent of the bias network: 8.0 V and 16.67 kΩ. This sets the emitter of Q1 at about 7.3 V.

Now, consider R6. Since the voltage across it is limited to 0.7 V, it is carrying at most about 0.15 mA. If we assume for the moment that the contribution of Q3’s base is negligible (we’ll verify this shortly), then that same current is flowing through R12, which gives it a voltage drop of 1.5 V, setting the collector voltage of Q3 at about 5.8 V.

This means that R10 is carrying a total current of about 1.23 mA, which means that the remaining current (1.08 mA) is flowing through Q3. If Q3 has a gain of 100, then its base current is about 10.8 µA, which is less than 10% of the R6 current, as surmised.

You could iterate through this analysis a few more times to get more exact figures, but that’s what circuit simulators are for.

Problem 2—What is the AC gain of the circuit, and what is its lower cutoff frequency?

Answer 2—As far as the AC analysis goes, Q1 by itself has a gain that is set by R6 and R8 to about 21, but since Q3 has no emitter resistor, its voltage gain is very large. Therefore, the overall gain of the circuit is almost entirely controlled by the negative feedback (R12 and R8), which makes the gain about 46.

Each of the capacitors has a high-pass effect on the circuit:

  • C1 working with the impedance of the bias network has a time constant of 16.67 ms, which corresponds to a corner frequency of 9.5 Hz.
  • C2 working with R8 has a time constant of 22 ms, which corresponds to a corner frequency of 7.2 Hz.
  • C5 working with R10 and Rload has a time constant of 26.7 ms, which corresponds to a corner frequency of 6.0 Hz.

The overall circuit response will be dominated by the input network, for a cutoff frequency of about 10 Hz.

Problem 3—What is the analog video bandwidth required for a VGA display of 640 × 480 pixels at 60 frames/second?

 Answer 3—For a good-quality computer video display, where fine vertical lines show the same contrast as fine horizontal lines, the video bandwidth should be able to pass at least the 3rd harmonic of the fastest square wave that appears in the image.

The fastest square wave is alternating dark/light pixels, so its fundamental frequency is half the frequency of the dot clock. For VGA at 25.175 MHz, this would be 12.59 MHz. Three times this fundamental frequency is 37.76 MHz.

Problem 4—Some radar systems use a “chirped pulse”. What exactly is a chirped pulse, and what are its advantages?

Answer 4—The basic problem in radar is to get both adequate transmitted power for maximum range and good timing resolution for range resolution. It is hard to build high-power amplifiers for microwave frequencies. You want to have a lot of energy in each transmitted pulse, but you also want to keep the pulse short.

There is a kind of all-pass filter (constant amplitude response) that has the property that it delays different frequency components by different amounts (linear phase response). When given a narrow pulse at its input, it produces a waveform that starts at a high frequency and then ramps down to a low frequency, over a much longer period of time. When done at audio frequencies, the result sounds like the chirp of a bird or insect, which is where the name comes from. This stretched pulse allows the power amplifier to operate at a lower peak power for a longer time in order to get the same total pulse energy.

Now, in radar, you don’t need to compress the pulse again before feeding it to the antenna — the chirped pulse works just as well as the compressed pulse in terms of detecting objects.

In fact, you gain additional advantages when the reflections come back. You can amplify the chirped signal in the receiver (getting some of the same advantages as in the transmitter amplifier regarding peak-to-average power). And you can use a “matched filter” (which has the opposite phase characteristic from the transmit filter) to compress the pulse just prior to detection. This filter has the additional advantage of rejecting a lot of potential interference sources as well. The narrow pulses coming out of the receiver filter provide the required time resolution (range resolution).

Issue 304: EQ Answers

Problem 1—The following circuit was designed to be an inrush current limiter for the large (40,000 µF) capacitor C1. R7 represents the application load of about 180 mA at 9 V.

[Figure: eq0671_fig1]

The load on the 9-V source (Vin) needed to be limited to about 350 mA, and the circuit performs remarkably well, as shown in the simulation. The current -Id(M1) is the drain current of the MOSFET.

[Figure: eq0671_fig2]

However, note that the circuit does not sense the source or load current directly. How then does it work?

Answer 1—Basically, the circuit works by using the C2-R3 combination as a model (or analog) for the charging of C1. Instead of sensing the current in C1 directly, R3 senses the current in C2, and it is assumed that this value is proportional to the current in C1, which is true as long as the voltage across R3 is a small fraction of the total.

Whenever there is a drop across R3 because of current through C2, the drive to the pass transistor is reduced.

The Thevenin equivalent of the base drive to Q1 is 1.5 V and 120 kΩ, so if the voltage across R3 ever rises as high as 1.5 V – 0.6 V = 0.9 V, Q1 is cut off altogether, removing the drive from M1 as well. Assuming a VBE for Q1 of 0.65 V, this would occur at a C2 current of about 0.85 V/10 kΩ = 85 µA, which would correspond to a current in C1 of 85 µA × (40,000 µF / 12 µF) ≈ 280 mA.

By adjusting the resistor and capacitor values, you can change that limiting current value. Note that the total current through M1 (and the power supply) is the C1 charging current plus the rising load current through R7, so pick the limit value accordingly.

This analogy works because the basic equation of a capacitor says that the current through a capacitor is proportional to the rate of change of the voltage across it, and also to its capacitance:

i(t) = C dV(t) / dt

The assumption is that the voltage across R3 is “small”, which means that V(t) is essentially the same for both capacitors. This means that the current through each is directly proportional to its capacitance.

In this specific case, the voltage across R3 can be as high as 0.85 V, which is about 10% of the supply voltage, so the proportionality isn’t as precise as it could be, but it’s good enough for this purpose.

Problem 2—The actual charging current seems to be limited to about 230 mA. Why is this?

Answer 2—Transistor Q1 also contributes to the current flowing through R3, which means that the current through C2 must be correspondingly less, which means that the current through C1 must be less, too.

Working backward, if the current through C1 is 230 mA, then the current through C2 would be about 230 mA × (12 µF / 40,000 µF) = 69 µA.

That implies that Q1 is passing about 85 µA – 69 µA = 16 µA.

This current passing through R1 would create a voltage drop of about 16 µA × 220 kΩ = 3.52 V, which represents the threshold voltage of M1.

Problem 3—What is the function of C3?

Answer 3—C3 provides a “soft start” function for the circuit. If the base of Q1 were to rise instantly to 1.5 V, then M1 would be turned on fairly hard, creating an initial load spike on the power source.

Note that Q1 can turn on M1, charging its gate capacitance through R2. However, M1 can only be turned off by discharging that capacitance through R1, making turn-off much slower than turn-on.

As noted before, these resistances need to be high so that the current that they contribute to R3 is small relative to the current through C2.

Problem 4—Are there any special considerations on the selection of M1?

Answer 4—As long as M1 can handle the voltage and the current, and the maximum VGS it sees is controlled by appropriate selection of R1 and R2, there’s really nothing special required.

Pay attention to the SOA (safe operating area) diagram in the datasheet. Plot some sample voltage and current values from the simulation in order to make sure it stays in the safe area.

Also, be sure to give it an adequate way to dissipate the pulse of heat associated with the charging surge of C1 without having its temperature rise too high.

Issue 302: EQ Answers

Problem 1: You have decided to build a small computer from discrete transistors as a demonstration. After researching the available technologies, you have decided to base your design on NMOS logic, using a 3-input NOR gate as your basic building block, as shown below.

[Figure: eq0673_fig1]

Each gate uses three 2N7000 N-channel MOSFETs as pulldown transistors, and a 10K resistor as a passive pullup. You figure that you’ll need somewhere between 500 and 1000 of these gates to build a useful computer — after all, the original PDP-8 12-bit minicomputer CPU was built with only about 519 gates.

Approximately how fast will you be able to clock this computer?

Answer 1: The timing will depend primarily on the capacitive load on each logic gate, which would include both the wiring capacitance and the capacitance of the MOSFET gate(s) you’re driving.

For example, the 2N7000 has an input capacitance of 20 pF typical (50 pF max). If your average fanout is 3, plus some wiring capacitance, that gives you a typical load of 100 – 200 pF. With a 10K pullup, that gives you an R-C time constant of 1 – 2 µs. You’d probably need to allow at least two time constants for one “gate delay” for reliable switching, so we’re talking about 2 – 4 µs per gate.

To get useful work done, you’ll need to allow some maximum number of gate delays per clock period. This will depend on your specific design, but a number like 6 to 10 would be typical. So now we’re talking about a clock period of 12 – 40 µs, or frequencies in the range of 25 – 80 kHz.

Switching to a 1K pullup resistor would allow the frequency to scale up by roughly a factor of 10.

Problem 2: Assuming a supply voltage of 5V, about how much power would you expect this computer to consume?

Answer 2: You can assume that roughly half of the gates will be active (outputs low) at any given moment, with current passing through their pullup resistors. Each resistor passes 5V / 10K = 0.5mA, and if there are 1000 gates, this represents a worst-case current of 0.5A, giving a power consumption of 5V × 0.5A = 2.5W. If only about half the gates are active, then the average power will be about 1.25W.

Switching to a 1K pullup resistor will raise this average static power consumption to roughly 12.5W (5A, or 25W, worst-case).

Problem 3: How many 3-input gates does it take to construct an edge-triggered (master-slave) D flip-flop?

Answer 3: Six 3-input NOR gates can be used to build a master-slave D flip-flop.

[Figure: eq0673_fig2]

Note that the active edge of the clock is the falling edge.

Problem 4: What famous computer was built using NOR gates exclusively for the logic?

Answer 4: The original Cray-1 supercomputer was constructed using a single type of IC for the logic that contained one 4-input and one 5-input NOR gate. This IC used ECL (emitter-coupled logic) technology and the machine ran with a cycle time of 12.5 ns (80 MHz). About 200,000 gates were required to implement the CPU.

Contributor: David Tweed

Issue 300: EQ Answers

Problem 1: The diagram below is a simplified illustration of a switchmode “buck” DC-DC converter with synchronous (active) rectification. The switching elements are shown as MOSFETs, with the associated body diodes drawn explicitly. The details associated with driving the MOSFET gates are ignored, other than to say that when one is on, the other is off, and the duty cycle is variable.

[Figure: eq0672_img1]

This is, by definition, a CCM (continuous conduction mode) converter. What does this tell us about the relationship between VA, VB and the duty cycle of the switching?

Answer 1: In normal operation, M2 is switched on first, and current flows through it and L1, charging the inductor with magnetic energy. When M2 switches off and M1 switches on, the current continues to flow through L1, discharging its stored energy.

Now, if M1 weren’t there, the circuit would still work, because the discharge current would still flow through D1. However, once L1’s current drops to zero, the diode would block any further flow — this is known as “discontinuous conduction mode”. Whereas, with M1 present, the current flow can actually reverse. In other words, with active (synchronous) rectification, the converter can both source and sink current at its output. This is known as “continuous conduction mode”. This means that the relationship between the input voltage VA and the output voltage VB is only a function of the duty cycle of the switching:

VB = D × VA

where D is the fraction of each switching period during which M2 is on.

Problem 2: Can the output of such a converter sink as well as source current? If so, where does the current go?

Answer 2: Yes, as mentioned above, it can indeed sink current. When the current in L1 goes negative, the current flows through M1 to ground as long as M1 is on. But when M1 switches off and M2 switches on, this forces current back toward VA and C2, until the voltage across L1 causes the current to ramp back up to zero and then positive again.

Problem 3: Draw a similar diagram for a switchmode “boost” DC-DC converter with synchronous rectification. What interesting thing can you say about the two diagrams?

Answer 3: Here is the corresponding diagram for a “boost” converter: [Figure: eq0672_img2]

In normal operation, M1 switches on first, charging L1 with magnetic energy. Then, M1 switches off and M2 switches on, allowing the stored energy to discharge into C2.

The remarkable thing about this diagram is that it is an exact mirror image of the buck converter!

Problem 4: Based on the answers to the previous questions, what can you say about the direction of power flow through this type of converter?

Answer 4: Again, with the boost converter, we could eliminate M2 and allow D2 to do the output switching, but M2 allows current to flow either way during the discharge phase. And just like with the buck converter, this means that the input-output voltage relationship becomes a function of only the switching duty cycle:

VA = VB / D

where D is again the fraction of the time that M2 is on.

Note that this is a simple rearrangement of the terms in the equation for the buck converter — in other words, it’s the same equation. This tells us that regardless of which way the power is flowing, the relationship between VA and VB is simply a function of the switching duty cycle.

So, to turn this into a concrete example, if the PWM control is set up so that M2 is on 5/12 = 42% of the time, you could apply 12V at VA and get 5V out at VB, OR you could apply 5V at VB and get 12V out at VA!
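That worked example can be checked with a couple of one-line functions. This is a sketch of my own (helper names are mine), assuming an ideal lossless converter in continuous conduction mode:

```python
# Ideal CCM buck relationship VB = D * VA, and the same converter run in
# reverse as a boost (VA = VB / D). D is the fraction of time M2 is on.

def buck_output(v_in, duty):
    """Ideal CCM buck: output voltage from input voltage and duty cycle."""
    return duty * v_in

def boost_output(v_in, duty):
    """Same hardware driven in reverse: the voltage is stepped up by 1/D."""
    return v_in / duty

duty = 5 / 12  # M2 on about 42% of the time, as in the example above
print(buck_output(12.0, duty))   # 12 V in at VA gives 5 V out at VB
print(boost_output(5.0, duty))   # 5 V in at VB gives 12 V out at VA
```

Either direction, the same duty cycle ties the two voltages together, which is the bidirectionality argument made above.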

One final note about regulation: This circuit provides a specific ratiometric relationship between the two voltages that is based on the duty cycle of the switching. If the input voltage is unregulated, but you want a regulated output voltage, then you need to provide a mechanism that varies the duty cycle of the switch in order to cancel out the input variations. Note that this control could be based on measuring the input voltage directly (feedforward control) or measuring the output voltage (feedback control).

If you’re going to build a practical bidirectional power converter with regulation, you’ll have to pay extra attention to how this control mechanism works in both modes of operation.

Contributor: David Tweed

Issue 298: EQ Answers

Problem 1: What do we call a network of gates that has no feedback of any kind? What is its key characteristic?

Answer 1: A network of gates that has no feedback is called “combinatorial logic”, or sometimes “combinational logic”. Its defining characteristic is that the output of the network is strictly a function of the current input values; the past history of the inputs has no effect. The branch of mathematics associated with this is called “combinatorics”, and we say that the output is the result of logical combinations of the input values.

Problem 2: What do we call a network of gates that has negative feedback? Give an example.

Answer 2: If a network of gates has “negative” feedback, it means that an output is fed back to an input in such a way that it always causes that output to change state again. When this occurs, the output never achieves a stable state. We call this an “oscillator.”

The simplest example is the “ring oscillator,” which simply comprises an odd number of inverters, each with its output connected to the input of the next. When any output changes state, the change propagates all the way around through the chain and forces it back to the opposite state.
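The ring-oscillator behavior can be sketched in a few lines of Python. This is a toy model of my own construction, updating one gate per step to mimic propagation delay; note how a ring of two inverters latches instead of oscillating:

```python
def ring_oscillator(n_inverters, passes):
    """Toy model: update one inverter at a time, as propagation delay would.

    Returns the history of node 0, sampled once per full pass around the ring.
    """
    nodes = [0] * n_inverters
    history = []
    for _ in range(passes):
        for i in range(n_inverters):
            # each inverter's output is the complement of the previous node
            nodes[i] = 1 - nodes[(i - 1) % n_inverters]
        history.append(nodes[0])
    return history

print(ring_oscillator(3, 6))   # odd ring: node 0 keeps toggling, 1,0,1,0,...
print(ring_oscillator(2, 6))   # even ring: settles into a stable state
```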

Note that a single inverter fed back to itself will not usually oscillate. Because the propagation delay is comparable to the transition time, the output will usually just settle at an intermediate analog value, rather than any valid digital value.

Problem 3: What do we call a network of gates that has positive feedback? Give an example of the simplest possible such network (there are several).

Answer 3: A network of gates with positive feedback becomes an asynchronous state machine (ASM). The network’s output becomes a function of both its current input values and the past history of the input values — in other words, the network can be said to have a “memory” of its past.

The simplest possible examples include a single AND gate or a single OR gate. An AND gate with one input tied to its output has two output states, 0 and 1. If the other input has at any time in the past been driven to 0, then the output will be 0; otherwise it will be 1.

Similarly, the output of an OR gate with one input tied to its output will be 1 if the other input has ever been driven to 1.

The simplest ASM that is of practical use in most applications is the set-reset latch, which requires two gates that each have an input from the output of the other. They can comprise one AND gate and one OR gate, or a pair of NAND gates or a pair of NOR gates. This type of ASM can be used to store one bit of information, and is a type of flip-flop. Another (very old) term is “bistable multivibrator.”
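The two-NOR version of the set-reset latch can be simulated directly. This is a sketch with my own helper names, iterating the cross-coupled gates until they settle:

```python
def nor(a, b):
    """3-state-free NOR of two bits."""
    return 0 if (a or b) else 1

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NOR latch; returns (q, q_bar)."""
    for _ in range(4):  # a few iterations is enough to reach a stable state
        q_new = nor(r, q_bar)
        q_bar_new = nor(s, q_new)
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = sr_latch(1, 0, 0, 1)   # set
print(q, qb)
q, qb = sr_latch(0, 0, q, qb)  # hold: the bit is remembered
print(q, qb)
q, qb = sr_latch(0, 1, q, qb)  # reset
print(q, qb)
```

The hold case is the point: with both inputs at 0, the output depends only on the stored state, which is exactly the "memory of its past" described above.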

Problem 4: Why are NAND gates and NOR gates considered to be “universal” gates? Why is this important?

Answer 4: It can be shown that any logical function can be created from a network of only NAND gates, or of only NOR gates. This includes ASMs such as the two-NAND or two-NOR flip-flop described above.
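The universality claim is easy to demonstrate for the basic operators. Here is a sketch (my own helper names) building NOT, AND, and OR out of nothing but a 2-input NAND:

```python
def nand(a, b):
    """2-input NAND: the only primitive used below."""
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)                      # NAND with both inputs tied together

def and_(a, b):
    return nand(nand(a, b), nand(a, b))    # NAND followed by an inverter

def or_(a, b):
    return nand(nand(a, a), nand(b, b))    # De Morgan: OR = NAND of inversions

for a in (0, 1):
    for b in (0, 1):
        print(a, b, not_(a), and_(a, b), or_(a, b))
```

The same construction works with NOR gates, with the roles of AND and OR exchanged.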

This is important because many of the basic technologies for creating digital gates tend to require that each gate be implicitly inverting. In other words, a 1 on an input can only force a 0 at the output, and vice-versa.

Examples include RTL (resistor-transistor logic), which is based on NOR gates, and DTL (diode-transistor logic) or TTL (transistor-transistor logic), which are based on NAND gates.

This is particularly true of CMOS, the most popular implementation technology in current use. Any network of N-channel pull-down transistors and P-channel pull-up transistors can only invert an input signal. The big advantage of CMOS is that NAND and NOR gates can be combined with equal ease.

Contributor: David Tweed

Electrical Engineering Crossword (Issue 296)

The answers to Circuit Cellar’s March 2015 electrical engineering crossword puzzle are now available. [Puzzle key: 296-grid-(key)]

Across

2. HUM—Field of 60 or 120 Hz magnetic or electrostatic energy
5. SWEEP—Scan of a range of frequencies
6. WIREWOUND—WW
13. POTENTIOMETER—Pot
14. NANO—Prefix that divides a unit by a billion
15. ISOTROPIC—Exhibiting the same physical properties in all directions
17. MICROMETER—One millionth of a meter
18. INCREMENT—To change in value by a discrete step
19. INTERFERENCE—RFI, EMI
20. MULTIMETER—Test instrument that can make several different measurements

Down

1. QUALITYCONTROL—QC (two words)
3. FLUX—A field of radiated energy
4. FERROMAGNETIC—Magnetizable substance based on iron
7. DIFFRACTION—Bending of energy waves as they move around or through an obstacle
8. JUNCTION—Any connection between two electrical conductors
9. CLIPPING—Slicing off of signal peaks
10. COMMON—Conductor shared by various circuits
11. SUBSONIC—Slower than the speed of sound
12. DECODE—Convert a digital signal back into an analog signal
16. CAPACITANCE—Measured in fractions of a farad

Issue 296: EQ Answers

Answer 1—The frequency generated at the QB output of the counter is 16.000 MHz × 3 / 13 = 3.6923 MHz. The ratio between this and 3.6864 MHz is 1.0016, so the error expressed as a percentage is +0.16%. This is well within the tolerance required for asynchronous serial communications.

Answer 2—The circuit generates rising edges (also falling edges) at intervals of 4 clocks, 4 clocks and 5 clocks, but the ideal spacing would be 4.3333 clocks. Therefore two of the intervals are short by 1/3 clock and one of them is long by 2/3 clock.

Therefore, the cycle-to-cycle peak-to-peak jitter is 1/3 + 2/3 = 1 full input clock period, or 62.5 ns. But taking an average over a complete group of 13 clocks, no edge is displaced from its “ideal” location by more than 1/3 clock, or 20.8 ns.
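Both jitter figures can be reproduced in a few lines. This is my own restatement of the arithmetic above, taking the three intervals in the cyclic order 4, 5, 4 clocks, which lines the edges up best against the ideal grid:

```python
clock_ns = 1000.0 / 16.0        # 62.5 ns input clock period at 16 MHz
spacings = [4, 5, 4]            # edge-to-edge spacings, in input clocks
ideal = 13 / 3                  # ideal spacing: 4.3333 clocks per edge

# cycle-to-cycle peak-to-peak jitter: shortest interval to longest interval
cc_jitter_ns = (max(spacings) - min(spacings)) * clock_ns

# worst-case displacement of any edge from its ideal position over 13 clocks
edges = [sum(spacings[:i + 1]) for i in range(len(spacings))]   # 4, 9, 13
worst_ns = max(abs(e - (i + 1) * ideal)
               for i, e in enumerate(edges)) * clock_ns

print(cc_jitter_ns)   # one full input clock, 62.5 ns
print(worst_ns)       # 1/3 clock, about 20.8 ns
```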

Answer 3—The following table shows the divider ratios required for various standard baud rates.297 eq answers

As you can see, a modern UART can generate the clocks for baud rates up to 38400 with the exact same error as the 3/13 counter scheme — note that 26 and 52 are multiples of 13. But above that, the frequency error increases. This is why microcontrollers with built-in UARTs often run at “oddball” frequencies such as 11.0592 MHz or 12.288 MHz — these frequencies can be easily divided down to produce precisely correct baud rates.
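The divisor-and-error calculation behind such a table can be sketched as follows. This assumes a conventional UART that divides its master clock by 16 × N for an integer N (the helper name is mine):

```python
def baud_error(clock_hz, target_baud):
    """Best integer divisor and resulting percent error for a 16x UART."""
    n = max(1, round(clock_hz / (16 * target_baud)))
    actual = clock_hz / (16 * n)
    return n, 100.0 * (actual - target_baud) / target_baud

for baud in (9600, 38400, 115200):
    # compare a 16 MHz clock against the "oddball" 11.0592 MHz crystal
    print(baud, baud_error(16_000_000, baud), baud_error(11_059_200, baud))
```

At 11.0592 MHz every divisor comes out as an exact integer, so the error is zero at all the standard rates, which is precisely the point made above.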

Answer 4—A UART receiver waits for the leading edge of the start bit, and then samples the next 10 bits in the center of each bit “cell”. If by the time it gets to the 10th cell, the sampling point at the receiver has moved beyond the edge of the 10th bit (the stop bit) defined by the transmitter, the transmission will fail. This means that the timing error must be no more than ± 1/2 bit over a 9.5-bit span, or a total error between transmitter and receiver of ±5.26%. If the error is split evenly, this means that each baud rate generator must be accurate to within ±2.63%.

However, in reality, the receiver cannot determine the location of the leading edge precisely. Since it is using a 16× clock to do the sampling, there could be as much as 1/16 of a bit delay before the receiver actually recognizes the start bit, and all of its sampling points for the subsequent bits will be delayed by that amount. This means that the timing error must be no more than ± 7/16 of a bit by the time we get to the last bit, which means that the maximum total error is ±4.60%, or ±2.30% for each baud rate generator.
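Those error budgets are quick to re-derive (this is just the arithmetic above, restated):

```python
# Timing margin for a 10-bit UART frame, sampled mid-cell over a 9.5-bit span.
ideal_margin = 0.5 / 9.5 * 100            # +/- half a bit: about 5.26 %
real_margin = (0.5 - 1 / 16) / 9.5 * 100  # minus 1/16-bit start-bit slop

print(ideal_margin)       # ~5.26 % total, ~2.63 % per baud-rate generator
print(real_margin)        # ~4.61 % total, ~2.30 % per baud-rate generator
```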

Issue 294: EQ Answers

Problem 1—Let’s get back to basics and talk about the operation of a capacitor. Suppose you have two large, flat plates that are close to each other (with respect to their diameter). If you charge them up to a given voltage, and then physically move the plates away from each other, what happens to the voltage? What happens to the strength of the electric field between them?

Answer 1—The capacitance of the plates drops with increasing distance, so the voltage between them rises, because the charge doesn’t change and the voltage is equal to the charge divided by the capacitance. At first, while the plate spacing is still small relative to their diameter, the capacitance is proportional to the inverse of the spacing, so the voltage rises linearly with the spacing. However, as the spacing becomes larger, the capacitance drops more slowly and the voltage rises at a lower rate as well.

While the plate spacing is small, the electric field is almost entirely directly between the two plates, with only minor “fringing” effects at the edges. Since the voltage rise is proportional to the distance in this regime, the electric field (e.g., in volts per meter) remains essentially constant. However, once the plate spacing becomes comparable to the diameter of the plates, and fringing effects begin to dominate, the field begins to spread out and weaken. Ultimately, at very large distances, at which the plates themselves can be considered points, the voltage is essentially constant, and the field strength directly between them becomes proportional to the inverse of the distance.
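The first regime can be checked numerically with the ideal parallel-plate formula C = ε₀A/d. This is a sketch of my own that ignores fringing entirely, so it only models the small-spacing case:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_voltage(charge_c, area_m2, spacing_m):
    """Voltage across ideal parallel plates holding a fixed charge."""
    c = EPS0 * area_m2 / spacing_m     # C = eps0 * A / d
    return charge_c / c                # V = Q / C

q, area = 1e-9, 0.01                   # 1 nC on 10 cm x 10 cm plates
for d in (1e-3, 2e-3, 4e-3):
    v = plate_voltage(q, area, d)
    print(d, v, v / d)                 # the field V/d is the same every time
```

Doubling the spacing doubles the voltage while V/d stays constant, matching the "field remains essentially constant" statement above.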


Problem 2—If you double the spacing between the plates of a charged capacitor, the capacitance is cut in half, and the voltage is doubled. However, the energy stored in the capacitor is defined to be E = 0.5 C V². This means that at the wider spacing, the capacitor has twice the energy that it had to start with. Where did the extra energy come from?

Answer 2—There is an attractive force between the plates of a capacitor created by the electric field. Physically moving the plates apart requires doing work against this force, and this work becomes the additional potential energy that is stored in the capacitor.


Problem 3—What happens when a dielectric is placed in an electric field? Why does the capacitance of a pair of plates increase when the space between them is filled with a dielectric?

Answer 3—Dielectric materials are made of atoms, and the atoms contain both positive and negative charges. Although neither the positive nor the negative charges are free to move about in the material (which is what makes it an insulator), they can be shifted to varying degrees with respect to each other. An electric field causes this shift, and the shift in turn creates an opposing field that partially cancels the original field. Part of the field’s energy is absorbed by the dielectric.

In a capacitor, the energy absorbed by the dielectric reduces the field between the plates, and therefore reduces the voltage that is created by a given amount of charge. Since capacitance is defined to be the charge divided by the voltage, this means that the capacitance is higher with the dielectric than without it.


Problem 4—What is the piezoelectric effect?

Answer 4—With certain dielectrics, most notably quartz and certain ceramics, the displacement of charge also causes a significant mechanical strain (physical movement) of the crystal lattice. This effect works two ways — a physical strain also causes a shift in electric charges, creating an electric field. This effect can be exploited in a number of ways, including transducers for vibration and sound (microphones and speakers), as well as devices that have a strong mechanical resonance (e.g., crystals) that can be used to create oscillators and filters.

Contributed by David Tweed

Issue 292: EQ Answers

Problem 1—Let’s talk about noise! There are different types of noise that might be present in a system, and it’s important to understand how to deal with them.

For example, analog sensors and other types of active devices will often have AWGN, or Additive White Gaussian Noise, at their outputs. Any sort of analog-to-digital converter will add quantization noise to the data. What is the key difference between these two types of noise?

Answer 1—The key difference between AWGN and quantization noise is the PDF, or Probability Density Function, which is a description of how the values (voltage or current levels in analog systems, or data values in digital systems) are distributed.

The values from AWGN have a bell-shaped distribution, known variously as a Gaussian or Normal distribution. The formula for this distribution is:

f(x) = (1 / (σ√(2π))) e^(−(x − µ)² / 2σ²)

µ represents the mean value, which we take to be zero in discussions about noise. σ is known as the “standard deviation” of the distribution, and is a way to characterize the “width” of the distribution.

It looks like this:

[Figure: the Normal distribution curve, with the ±1σ, ±2σ, and ±3σ regions marked]

Source: Wikipedia (en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg)

While the curve is nonzero everywhere (from –∞ to +∞) it is important to note that the values will be within ±1 σ of the mean 68% of the time, within ±2 σ of the mean 95% of the time, and within ±3 σ of the mean 99.7% of the time. In other words, although the peak-to-peak value of this kind of noise is theoretically infinite, you can treat it as being less than 4σ 95% of the time.

On the other hand, the values from quantization noise have a uniform distribution — the values are equally probable, but only over a fixed span that’s equal to the quantization step size of the converter. The peak-to-peak range of this noise is equal to the converter’s step size (resolution).

However, it’s important to note that both sources of noise are “white”, which is a shorthand way of saying that their effects are uniformly distributed across the frequency spectrum.


Problem 2—Signal-to-noise ratios are most usefully described as power ratios. How does one characterize the power levels for both AWGN and quantization noise?

Answer 2—The power of a noise signal is proportional to the square of its RMS value.

The RMS value of AWGN is numerically equal to its standard deviation.

The RMS value of quantization noise is simply the peak-to-peak value (the step size of the converter) divided by √12: VRMS = VPP/√12 ≈ 0.2887 VPP. This is easily derived if you characterize the quantization noise signal as a small sawtooth wave that gets added to the analog signal.
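That √12 factor is easy to verify numerically. Here is a sketch of mine that samples one period of the sawtooth error waveform and computes its RMS value directly:

```python
import math

def sawtooth_rms(vpp, samples=200_000):
    """RMS of a sawtooth ramping from -vpp/2 to +vpp/2 over one period."""
    total = 0.0
    for i in range(samples):
        e = -vpp / 2 + vpp * (i + 0.5) / samples   # midpoint of each step
        total += e * e
    return math.sqrt(total / samples)

print(sawtooth_rms(1.0))           # numeric estimate
print(1.0 / math.sqrt(12))         # analytic value, ~0.2887
```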


Question 3—When you have multiple sources of noise in a system, how can you characterize their combined effect on the overall system performance?

Answer 3—When combining noise sources, you can’t simply add their RMS voltage or current values together. From one sample to the next, one noise source might partially cancel the effects of the other noise source(s).

Instead, you add the individual noise power levels to come up with an overall noise power level. Since power is proportional to voltage (or current) squared, this means that you need to square the individual RMS measurements, add them together, and then take the square root of the result in order to get an equivalent overall RMS value.

VRMS(total) = √(VRMS(n1)² + VRMS(n2)² + …)
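In code, the root-sum-square combination is a one-liner (the helper name is my own):

```python
import math

def combine_noise(*rms_values):
    """Combine uncorrelated noise sources: square, sum, then square-root."""
    return math.sqrt(sum(v * v for v in rms_values))

# e.g. 3 uV and 4 uV of uncorrelated noise combine to 5 uV RMS, not 7 uV
print(combine_noise(3e-6, 4e-6))
```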


Problem 4—Broadband analog sensors and other active devices often specify their noise levels in units of “microvolts per root-Hertz” (µV/√Hz) or “nanoamps per root-Hertz” (nA/√Hz). Where does this strange unit come from, and how do you use it?

Answer 4—As described in the previous answer, uncorrelated noise sources are added based on their power. With AWGN, the noise in one “segment” of the frequency spectrum is not correlated with another segment of the spectrum, so if you have a particular voltage level of noise in, say, a 1-Hz band of frequencies, you’ll have √2 times as much noise in a 2-Hz band of frequencies. In general, the RMS noise level for any particular bandwidth is going to be proportional to the square root of that bandwidth, which is why the devices are characterized that way.

So, if you have an opamp that’s characterized as having a noise level of 2 µV/√Hz, and you want to use this in an audio application with a bandwidth of 20 kHz, the overall noise at the output of the opamp will be 2 µV × √20000, or about 283 µVRMS. If your signal is a sinewave with a peak-to-peak value of 1V (354 mVRMS), you’ll have a signal-to-noise ratio of about 62 dB.
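The arithmetic of that worked example can be reproduced directly:

```python
import math

density = 2e-6                 # noise density, V/sqrt(Hz)
bandwidth = 20_000.0           # audio bandwidth, Hz

noise_rms = density * math.sqrt(bandwidth)    # ~283 uVrms at the output
signal_rms = 1.0 / (2 * math.sqrt(2))         # 1 Vpp sine, ~354 mVrms
snr_db = 20 * math.log10(signal_rms / noise_rms)

print(noise_rms)   # ~2.83e-4 V
print(snr_db)      # ~62 dB
```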

Contributed by David Tweed

Issue 290: EQ Answers

Problem 1—What is an R-C snubber, and what is a typical application for one?

Answer 1—An R-C snubber is the series combination of a resistor and a capacitor that is placed in parallel with a switching element that controls the power to an inductive load in order to safely absorb the energy of switching transients.

The problem is that a load that has an inductive component will produce a brief very high-voltage “spike” when the current through it is interrupted quickly. This spike can cause semiconductor devices to break down or even mechanical contacts to arc over, reducing their lifetime. The snubber absorbs the energy of the spike and dissipates it as heat, without ever allowing the voltage to rise too high.


Problem 2—How do you pick the resistor value in an R-C snubber?

Answer 2—To pick the resistor value, you first need to decide on the maximum voltage you want to allow. For example, if you have a MOSFET that has a drain-to-source breakdown rating of 400 V, you might choose to limit the snubber voltage to 200 V. Call this VMAX. Next, you need to know the maximum current that will be flowing through the load (and the switching element). Call this IMAX. At the instant the switching element opens, this current will be flowing through the resistor, and this will determine the initial voltage that appears across the switching element. Therefore, pick the resistance: R = VMAX/IMAX.


Problem 3—How do you pick the capacitor value in an R-C snubber?

Answer 3—Picking the capacitor can be more tricky. The key concept is that you need to pick a capacitor that can absorb the energy stored in the inductance of the load while keeping its terminal voltage under VMAX. Since loads don’t often specify their values of inductance, this may require some experimentation. Let’s call the load inductance LLOAD. The energy that it stores at the maximum current is: E = 0.5 IMAX² LLOAD. The energy that a capacitor stores is: E = 0.5 C V².

So, if we say that we want the capacitor to store the same energy that’s in the inductance when its terminal voltage is at VMAX, we can combine the two equations and then solve for C:

0.5 VMAX² C = 0.5 IMAX² LLOAD

C = (IMAX² / VMAX²) LLOAD

This value will actually be somewhat conservative, because some of the initial energy of the inductance will be dissipated in the resistor during the initial transient, before it even gets to the capacitor. After that, the inductance and the capacitor will behave as a series-resonant circuit, with the current oscillating back and forth until all of the energy is gone.
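The two selection rules can be wrapped up in a short helper. The example numbers below are mine, not from the article:

```python
def snubber_values(v_max, i_max, l_load):
    """Conservative R-C snubber sizing per the rules above."""
    r = v_max / i_max                        # limits the initial spike to v_max
    c = (i_max ** 2 / v_max ** 2) * l_load   # stores the inductor's energy at v_max
    return r, c

# e.g. clamp to 200 V with 2 A flowing through 1 mH of load inductance
r, c = snubber_values(200.0, 2.0, 1e-3)
print(r, c)   # 100 ohms and 100 nF
```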


Problem 4—What additional concern is there with regard to an R-C snubber when switching AC power?

Answer 4—When switching DC, the snubber absorbs the energy stored in the load’s inductance, and after a while, no current flows and the capacitor is charged to the supply voltage. However, when switching AC, the snubber has a finite impedance at the AC frequency, which means that it “leaks” a certain amount of current even when the main switching element is open. While this may or may not cause a problem for the load (usually not), there is also the issue of the continuous power being dissipated in the snubber resistor. The resistor must be rated to withstand this leakage power in addition to the energy of the switching events.