# Issue 294: EQ Answers

Problem 1—Let’s get back to basics and talk about the operation of a capacitor. Suppose you have two large, flat plates that are close to each other (with respect to their diameter). If you charge them up to a given voltage, and then physically move the plates away from each other, what happens to the voltage? What happens to the strength of the electric field between them?

#### Answer 1—The capacitance of the plates drops with increasing distance, so the voltage between them rises, because the charge doesn’t change and the voltage is equal to the charge divided by the capacitance. At first, while the plate spacing is still small relative to their diameter, the capacitance is proportional to the inverse of the spacing, so the voltage rises linearly with the spacing. However, as the spacing becomes larger, the capacitance drops more slowly and the voltage rises at a lower rate as well.

While the plate spacing is small, the electric field is almost entirely directly between the two plates, with only minor “fringing” effects at the edges. Since the voltage rise is proportional to the distance in this regime, the electric field (e.g., in volts per meter) remains essentially constant. However, once the plate spacing becomes comparable to the diameter of the plates, and fringing effects begin to dominate, the field begins to spread out and weaken. Ultimately, at very large distances, at which the plates themselves can be considered points, the voltage is essentially constant, and the field strength directly between them becomes proportional to the inverse of the distance.

Problem 2—If you double the spacing between the plates of a charged capacitor, the capacitance is cut in half, and the voltage is doubled. However, the energy stored in the capacitor is defined to be E = 0.5 C V². This means that at the wider spacing, the capacitor has twice the energy that it had to start with. Where did the extra energy come from?

#### Answer 2—There is an attractive force between the plates of a capacitor created by the electric field. Physically moving the plates apart requires doing work against this force, and this work becomes the additional potential energy that is stored in the capacitor.
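This energy bookkeeping is easy to check numerically. A minimal C sketch, using hypothetical values (a 1-µF capacitor charged to 10 V):

```c
/* Energy stored in a capacitor: E = 0.5 * C * V^2 */
double cap_energy(double c_farads, double v_volts)
{
    return 0.5 * c_farads * v_volts * v_volts;
}

/* Doubling the plate spacing halves C; the charge Q = C*V is fixed,
 * so V doubles, and the stored energy comes out doubled. */
double energy_after_doubling_gap(double c_farads, double v_volts)
{
    return cap_energy(c_farads / 2.0, v_volts * 2.0);
}
```

The doubled energy is exactly the work done pulling the plates apart against their mutual attraction.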

Question 3—What happens when a dielectric is placed in an electric field? Why does the capacitance of a pair of plates increase when the space between them is filled with a dielectric?

#### Answer 3—Dielectric materials are made of atoms, and the atoms contain both positive and negative charges. Although neither the positive nor the negative charges are free to move about in the material (which is what makes it an insulator), they can be shifted to varying degrees with respect to each other. An electric field causes this shift, and the shift in turn creates an opposing field that partially cancels the original field. Part of the field’s energy is absorbed by the dielectric.

In a capacitor, the energy absorbed by the dielectric reduces the field between the plates, and therefore reduces the voltage that is created by a given amount of charge. Since capacitance is defined to be the charge divided by the voltage, this means that the capacitance is higher with the dielectric than without it.

Problem 4—What is the piezoelectric effect?

Answer 4—With certain dielectrics, most notably quartz and certain ceramics, the displacement of charge also causes a significant mechanical strain (physical movement) of the crystal lattice. This effect works two ways — a physical strain also causes a shift in electric charges, creating an electric field. This effect can be exploited in a number of ways, including transducers for vibration and sound (microphones and speakers), as well as devices that have a strong mechanical resonance (e.g., crystals) that can be used to create oscillators and filters.

Contributed by David Tweed

# Issue 292: EQ Answers

Problem 1—Let’s talk about noise! There are different types of noise that might be present in a system, and it’s important to understand how to deal with them.

For example, analog sensors and other types of active devices will often have AWGN, or Additive White Gaussian Noise, at their outputs. Any sort of analog-to-digital converter will add quantization noise to the data. What is the key difference between these two types of noise?

#### Answer 1—The key difference between AWGN and quantization noise is the PDF, or Probability Density Function, which is a description of how the values (voltage or current levels in analog systems, or data values in digital systems) are distributed.

The values from AWGN have a bell-shaped distribution, known variously as a Gaussian or Normal distribution. The formula for this distribution is:

f(x) = (1/(σ√(2π))) e^(−(x − µ)²/(2σ²))

µ represents the mean value, which we take to be zero in discussions about noise. σ is known as the “standard deviation” of the distribution, and is a way to characterize the “width” of the distribution.

It looks like this: Source: Wikipedia (en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg)

While the curve is nonzero everywhere (from –∞ to +∞) it is important to note that the values will be within ±1 σ of the mean 68% of the time, within ±2 σ of the mean 95% of the time, and within ±3 σ of the mean 99.7% of the time. In other words, although the peak-to-peak value of this kind of noise is theoretically infinite, you can treat it as being less than 4σ 95% of the time.

On the other hand, the values from quantization noise have a uniform distribution — the values are equally probable, but only over a fixed span that’s equal to the quantization step size of the converter. The peak-to-peak range of this noise is equal to the converter’s step size (resolution).

However, it’s important to note that both sources of noise are “white”, which is a shorthand way of saying that their effects are uniformly distributed across the frequency spectrum.

Problem 2—Signal-to-noise ratios are most usefully described as power ratios. How does one characterize the power levels for both AWGN and quantization noise?

#### Answer 2—The power of a noise signal is proportional to the square of its RMS value.

The RMS value of AWGN is numerically equal to its standard deviation.

The RMS value of quantization noise is simply the peak-to-peak value (the step size of the converter) divided by √12, or VRMS = 0.2887 VPP. This is easily derived if you characterize the quantization noise signal as a small sawtooth wave that gets added to the analog signal.

Question 3—When you have multiple sources of noise in a system, how can you characterize their combined effect on the overall system performance?

#### Answer 3—When combining noise sources, you can’t simply add their RMS voltage or current values together. From one sample to the next, one noise source might partially cancel the effects of the other noise source(s).

Instead, you add the individual noise power levels to come up with an overall noise power level. Since power is proportional to voltage (or current) squared, this means that you need to square the individual RMS measurements, add them together, and then take the square root of the result in order to get an equivalent overall RMS value.

VRMS(total) = √(VRMS(n1)² + VRMS(n2)² + …)

Problem 4—Broadband analog sensors and other active devices often specify their noise levels in units of “microvolts per root-Hertz” (µV/√Hz) or “nanoamps per root-Hertz” (nA/√Hz). Where does this strange unit come from, and how do you use it?

#### Answer 4—As described in the previous answer, uncorrelated noise sources are added based on their power. With AWGN, the noise in one “segment” of the frequency spectrum is not correlated with another segment of the spectrum, so if you have a particular voltage level of noise in, say, a 1-Hz band of frequencies, you’ll have √2 times as much noise in a 2-Hz band of frequencies. In general, the RMS noise level for any particular bandwidth is going to be proportional to the square root of that bandwidth, which is why the devices are characterized that way.

So, if you have an opamp that’s characterized as having a noise level of 2 µV/√Hz, and you want to use this in an audio application with a bandwidth of 20 kHz, the overall noise at the output of the opamp will be 2 µV × √20000, or about 283 µVRMS. If your signal is a sinewave with a peak-to-peak value of 1V (353 mVRMS), you’ll have a signal-to-noise ratio of about 62 dB.

Contributed by David Tweed

# Issue 290: EQ Answers

Problem 1—What is an R-C snubber, and what is a typical application for one?

Answer 1—An R-C snubber is the series combination of a resistor and a capacitor that is placed in parallel with a switching element that controls the power to an inductive load in order to safely absorb the energy of switching transients.

The problem is that a load that has an inductive component will produce a brief very high-voltage “spike” when the current through it is interrupted quickly. This spike can cause semiconductor devices to break down or even mechanical contacts to arc over, reducing their lifetime. The snubber absorbs the energy of the spike and dissipates it as heat, without ever allowing the voltage to rise too high.

Problem 2—How do you pick the resistor value in an R-C snubber?

Answer 2—To pick the resistor value, you first need to know the maximum voltage you want to allow. For example, if you have a MOSFET that has a drain-to-source breakdown rating of 400 V, you might choose to limit the snubber voltage to 200 V. Call this VMAX. Next, you need to know the maximum current that will be flowing through the load (and the switching element). Call this IMAX. At the instant the switching element opens, this current will be flowing through the resistor, and this will determine the initial voltage that appears across the switching element. Therefore pick the resistance: R = VMAX/IMAX.

Question 3—How do you pick the capacitor value in an R-C snubber?

Answer 3—Picking the capacitor can be trickier. The key concept is that you need to pick a capacitor that can absorb the energy stored in the inductance of the load while keeping its terminal voltage under VMAX. Since loads don’t often specify their values of inductance, this may require some experimentation. Let’s call the load inductance LLOAD. The energy that it stores at the maximum current is: E = 0.5 IMAX² LLOAD. The energy that a capacitor stores is: E = 0.5 V² C.

So, if we say that we want the capacitor to store the same energy that’s in the inductance when its terminal voltage is at VMAX, we can combine the two equations and then solve for C:

0.5 VMAX² C = 0.5 IMAX² LLOAD, which gives C = IMAX² LLOAD/VMAX²

This value will actually be somewhat conservative, because some of the initial energy of the inductance will be dissipated in the resistor during the initial transient, before it even gets to the capacitor. After that, the inductance and the capacitor will behave as a series-resonant circuit, with the current oscillating back and forth until all of the energy is gone.
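The two sizing rules above can be captured in a couple of C helpers; the 200-V, 2-A, 1-mH figures in the test are hypothetical example values:

```c
/* R-C snubber sizing.
 * Resistor: limits the initial spike to v_max at the moment of turn-off. */
double snubber_r(double v_max, double i_max)
{
    return v_max / i_max;
}

/* Capacitor: absorbs the inductor's 0.5*L*I^2 at a terminal voltage of
 * v_max, from 0.5*C*Vmax^2 = 0.5*L*Imax^2. */
double snubber_c(double v_max, double i_max, double l_load)
{
    return (i_max * i_max * l_load) / (v_max * v_max);
}
```

For instance, VMAX = 200 V, IMAX = 2 A, and LLOAD = 1 mH give R = 100 Ω and C = 0.1 µF.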

Problem 4—What additional concern is there with regard to an R-C snubber when switching AC power?

Answer 4—When switching DC, the snubber absorbs the energy stored in the load’s inductance, and after a while, no current flows and the capacitor is charged to the supply voltage. However, when switching AC, the snubber has a finite impedance at the AC frequency, which means that it “leaks” a certain amount of current even when the main switching element is open. While this may or may not cause a problem for the load (usually not), there is also the issue of the continuous power being dissipated in the snubber resistor. The resistor must be rated to withstand this leakage power in addition to the energy of the switching events.

# Issue 288: EQ Answers

Problem 1—When designing a pair of band-splitting filters (for, say, an audio crossover), why is it important to match frequencies of the –3-dB points of the low-pass and high-pass responses?

Answer 1—The cutoff frequencies of the two filters should be the same so that the overall frequency response when the filter outputs are recombined is flat and has no phase shift. For example, if you feed the cutoff frequency into both filters and then combine the results again, the output will be the same level as the input (0-dB overall gain). As long as the “order” of the two filters is the same (they have the same roll-off slope), the gain will be flat across the entire transition band of frequencies.

What’s really going on is this: A filter’s –3-dB point is where the output has half the power of the input signal, which means that the output voltage is 1/√2 times the input voltage. The –3-dB point is also where the output signal is phase shifted by 45°. It lags by 45° in the low-pass filter and leads by 45° in the high-pass filter. This means that the outputs of the two filters have a total phase shift of 90° relative to each other.

When you add two sinewaves that have the same amplitude and a 90° phase shift, you don’t get double the voltage. You get √2 times the voltage. You also get a waveform that has a phase midway between the two signals being added.

So, the final amplitude is √2/√2 = 1 times the original input voltage, and the final phase is midway between 45° and –45°, or 0°. In other words, you get the original sinewave back exactly.

Problem 2—A certain portable stereo unit runs for about 12 h on a set of LR20 (D-size alkaline) batteries. If you want to extend the stereo’s run-time, is it better to simply use multiple sets of batteries sequentially, or to connect them all in series-parallel to create one big battery pack?

Answer 2—In general, batteries provide greater capacity at lower average currents. This is partly due to the battery’s internal chemistry, but largely due to the simple fact that less power is wasted in the internal resistance of the battery.

Here are two graphs taken from two different datasheets that illustrate this.  If the stereo is running for 12 h on a set of batteries, based on eyeballing these graphs, it’s probably getting about 8 A-h of capacity out of one set, so it’s drawing about 660 mA on average. Putting three sets of batteries in parallel would drop the current in each set to about 220 mA, and it will get something closer to 12 A-h from each set.

In other words, if you use, say, three sets of batteries sequentially, you’ll get 36 h of playing time (24 A-h total), but if you use them in parallel together, you’ll get something closer to 54 h of playing time (36 A-h total).

Problem 3—If you wanted to make a capacitor from scratch, what common household materials might you use?

Answer 3—A capacitor consists of two flat conductors separated by a dielectric. Aluminum foil is an obvious candidate for the conductors, and either waxed paper or plastic food wrap would be suitable dielectrics — they have similar characteristics.

Problem 4—How big would a 10-µF capacitor using these materials be?

Answer 4—You need to do some basic calculations first. The formula for capacitance is:

C = εR ε0 A/d

where:
• εR is the relative permittivity of the dielectric
• ε0 is the permittivity of free space
• A is the area of one plate
• d is the separation between the plates

Let’s say you want to use 1-mil (25.4 µm) waxed paper as a dielectric. Note that this will determine the voltage rating of the capacitor. The dielectric strength of waxed paper is about 35-40 MV/m, so this will give you a capacitor that can theoretically handle almost a kilovolt, but be conservative in how you use it!

The relative permittivity of waxed paper is about 3.7, and the permittivity of free space is 8.854 × 10⁻¹² F/m. Solve for the area required for 5 µF (rolling the capacitor, as described below, will double this to 10 µF): A = Cd/(εRε0) = (5 × 10⁻⁶ F × 25.4 × 10⁻⁶ m)/(3.7 × 8.854 × 10⁻¹² F/m) ≈ 3.9 m². If you get aluminum foil and waxed paper that’s about 12″ (30 cm) wide, you can probably get an overlap of, say, 25 cm, which means that you’ll need a length of about 15.5 m to get the area you need.

If you then roll up your capacitor (using a second layer of waxed paper), the capacitance will be doubled, or about 10 µF. Obviously, this will be physically rather large, more than a foot long and several inches in diameter.
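The area calculation can be sketched in C; the 5-µF target reflects the fact that rolling the finished capacitor doubles it to 10 µF:

```c
/* Parallel-plate capacitance: C = epsR * eps0 * A / d,
 * so the required plate area is A = C * d / (epsR * eps0). */
#define EPS0 8.854e-12  /* permittivity of free space, F/m */

double plate_area(double c_farads, double eps_r, double d_meters)
{
    return c_farads * d_meters / (eps_r * EPS0);
}
```

For 5 µF with εR = 3.7 and d = 25.4 µm, this gives about 3.9 m²; divided by the 25-cm overlap width, that is roughly 15.5 m of foil, matching the figure above.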

Plastic food wrap has a similar dielectric constant and dielectric strength as waxed paper, but typically comes in a 0.5-mil (12.7 µm) thickness. A capacitor using this would have about half the voltage rating and about half the overall volume.

# Issue 286: EQ Answers

#### Question 1—A divider is a logic module that takes two binary numbers and produces their numerical quotient (and optionally, the remainder). The basic structure is a series of subtractions and multiplexers, where the multiplexer uses the result of the subtraction to select the value that gets passed to the next step. The quotient is formed from the bits used to control the multiplexers, and the remainder is the result of the last subtraction.

If it is implemented purely combinatorially, then the critical path through all of this logic is quite long (even with carry-lookahead in the subtractors) and the clock cycle must be very slow. What could be done to shorten the clock period without losing the ability to get a result on every clock?

#### Answer 1—Pretty much any large chunk of combinatorial logic can be pipelined in order to reduce the clock period. This allows it to produce more results in a given amount of time, at the expense of increasing the latency for any particular result.

Divider logic is very easy to pipeline, and the number of pipeline stages you can use is fairly arbitrary. You could insert a pipeline register after each subtract-mux pair, or you might choose to do two or more subtract-mux stages per pipeline register. You could even go so far as to pipeline the subtracts and the muxes separately (or even pipeline *within* each subtract) in order to get the fastest possible clock speed, but this would be rather extreme.

The more pipeline registers you use, the shorter the critical path (and the clock period) can be, but you use more resources (the registers). Also, the overall latency goes up, since you need to account for the setup and propagation times of the pipeline registers in the clock period (in addition to the subtract-mux logic delays). This gets multiplied by the number of pipeline stages in order to compute the total latency.

#### Answer 2—If you don’t need the level of performance provided by a pipelined divider, you can compute the quotient serially, one bit at a time. You would just need one subtractor and one multiplexer, along with registers to hold the input values, quotient bits and the intermediate result.

You could potentially compute more than one bit per clock period using additional subtract-mux stages. This gives you the flexibility to trade off space and time as needed for a particular application.
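A C model of the one-bit-per-clock approach; each loop iteration corresponds to one subtract-mux step (this sketch uses the restoring-division formulation):

```c
#include <stdint.h>

/* Bit-serial restoring divider: one subtract-mux step per quotient bit.
 * Returns the quotient of num / den (den must be nonzero); the final
 * value of r would be the remainder. */
uint16_t divide_serial(uint16_t num, uint16_t den)
{
    uint32_t r = 0;
    uint16_t q = 0;
    for (int i = 15; i >= 0; i--) {
        r = (r << 1) | ((num >> i) & 1);  /* shift in next dividend bit */
        if (r >= den) {                   /* subtractor result non-negative? */
            r -= den;                     /* mux selects the difference...   */
            q |= (uint16_t)(1u << i);     /* ...and the quotient bit is 1    */
        }                                 /* otherwise r passes through      */
    }
    return q;
}
```

Unrolling two or more iterations of this loop corresponds to computing multiple quotient bits per clock in hardware.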

Question 3—An engineer wanted to build an 8-MHz filter that had a very narrow bandwidth, so he used a crystal lattice filter like this:

However, when he built and tested his filter, he discovered that while it worked fine around 8 MHz, the attenuation at very high frequencies (e.g., >80 MHz) was very much reduced. What caused this?

#### Answer 3—The equivalent circuit for a quartz crystal is something like this: The components across the bottom represent the mechanical resonance of the crystal itself, while the capacitor at the top represents the capacitance of the electrodes and holder. Typical values are:

• Cser: 10s of fF (yes, femtofarads, 10⁻¹⁵ F)
• L: 10s of mH
• R: 10s of ohms
• Cpar: 10s of pF

The crystal has a series-resonant frequency based on just Cser and L. It has a relatively low impedance (basically just R) at this frequency.

It also has a parallel-resonant (sometimes called “antiresonant”) frequency when you consider the entire loop, including Cpar. Since Cser and Cpar are essentially in series, together they have a slightly lower capacitance than Cser alone, so the parallel-resonant frequency is slightly higher. The crystal’s impedance is very high at this frequency.

But at frequencies much higher than either of the resonant frequencies, you can see that the impedance of Cpar alone dominates, and this just keeps decreasing with increasing frequency. This reduces the crystal lattice filter to a simple capacitive divider, which passes high frequencies with little attenuation.

Finally, calculate the value of Cpar required to give that value of capacitance when in series with Cser. Note that all three equations can be combined into one and reduced to a single expression.

# Issue 284: EQ Answers

PROBLEM 1
Can you name all of the signals in the original 25-pin RS-232 connector?

Pins 9, 10, 11, 18, and 25 are unassigned/reserved. The rest are:

| Pin | Abbreviation | Source | Description |
|-----|--------------|--------|-------------|
| 1 | PG | – | Protective ground |
| 2 | TD | DTE | Transmitted data |
| 3 | RD | DCE | Received data |
| 4 | RTS | DTE | Request to send |
| 5 | CTS | DCE | Clear to send |
| 6 | DSR | DCE | Data set ready |
| 7 | SG | – | Signal ground |
| 8 | CD | DCE | Carrier detect |
| 12 | SCD | DCE | Secondary carrier detect |
| 13 | SCTS | DCE | Secondary clear to send |
| 14 | STD | DTE | Secondary transmitted data |
| 15 | TC | DCE | Transmitter clock |
| 16 | SRD | DCE | Secondary received data |
| 17 | RC | DCE | Receiver clock |
| 19 | SRTS | DTE | Secondary request to send |
| 20 | DTR | DTE | Data terminal ready |
| 21 | SQ | DCE | Signal quality |
| 22 | RI | DCE | Ring indicator |
| 23 | – | DTE | Data rate selector |
| 24 | ETC | DTE | External transmitter clock |

PROBLEM 2
What is the key difference between a Moore state machine and a Mealy state machine?

The key difference between Moore and Mealy is that in a Moore state machine, the outputs depend only on the current state, while in a Mealy state machine, the outputs can also be affected directly by the inputs.

PROBLEM 3
What are some practical reasons you might choose one state machine over the other?

In practice, the difference between Moore and Mealy in most situations is not very important. However, when you’re trying to optimize the design in certain ways, it sometimes is.

Generally speaking, a Mealy machine can have fewer state variables than the corresponding Moore machine, which will save physical resources on a chip. This can be important in low-power designs.

On the other hand, a Moore machine will typically have shorter logic paths between flip-flops (total combinatorial gate delays), which will enable it to run at a higher clock speed than the corresponding Mealy machine.

PROBLEM 4
What is the key feature that distinguishes a DSP from any other general-purpose CPU?

Usually, the key distinguishing feature of a DSP when compared with a general-purpose CPU is that the DSP can execute certain signal-processing operations with few, if any, CPU cycles wasted on instructions that do not compute results.

One of the most basic operations in many key DSP algorithms is the MAC (multiply-accumulate) operation, which is the fundamental step used in matrix dot and cross products, FIR and IIR filters, and fast Fourier transforms (FFTs). A DSP will typically have a register and/or memory organization and a data path that enables it to do at least 64 MAC operations (and often many more) on unique data pairs in a row without any clocks wasted on loop overhead or data movement. General-purpose CPUs do not generally have enough registers to accomplish this without using additional instructions to move data between registers and memory.
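The MAC-per-cycle pattern is the inner loop of an FIR filter; a C sketch of the operation the DSP data path is optimized for:

```c
/* FIR filter inner loop: one multiply-accumulate per tap. A DSP executes
 * each iteration in a single cycle, with no clocks lost to loop overhead
 * or data movement. */
double fir_mac(const double *coeff, const double *sample, int taps)
{
    double acc = 0.0;
    for (int i = 0; i < taps; i++)
        acc += coeff[i] * sample[i];  /* one MAC per tap */
    return acc;
}
```

On a general-purpose CPU, the same loop typically costs extra instructions for the loop counter, branch, and loads between registers and memory.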

# Issue 282: EQ Answers

PROBLEM 1
Construct an electrical circuit to find the values of Xa, Xb, and Xc in this system of equations:

21Xa – 10Xb – 10Xc = 1
–10Xa + 22Xb – 10Xc = –2
–10Xa – 10Xb + 20Xc = 10

Your circuit should include only the following elements:

one 1-Ω resistor
one 2-Ω resistor
three 10-Ω resistors
three ideal constant voltage sources
three ideal ammeters

The circuit should be designed so that each ammeter displays one of the values Xa, Xb, or Xc. Given that the Xa, Xb, and Xc values represent currents, what kind of circuit analysis yields equations in this form?

You get equations in this form when you do mesh analysis of a circuit. Each equation represents the sum of the voltages around one loop in the mesh.

PROBLEM 2
What do the coefficients on the left side of the equations represent? What about the constants on the right side?

The coefficients on the left side of each equation represent resistances. Resistance multiplied by current (the unknown Xa, Xb, and Xc values) yields voltage.
The “bare” numbers on the right side of each equation represent voltages directly (i.e., independent voltage sources).

PROBLEM 3
What is the numerical solution for the equations?

To solve the equations directly, start by solving the third equation for Xc and substituting it into the other two equations:

Xc = 1/2 Xa + 1/2 Xb + 1/2

21Xa – 10Xb – 5Xa – 5Xb – 5 = 1
–10Xa + 22Xb – 5Xa – 5Xb – 5 = –2

16Xa – 15Xb = 6
–15Xa + 17Xb = 3

Solve for Xa by multiplying the first equation by 17 and the second equation by 15 and then adding them:

272Xa – 255Xb = 102
–225Xa + 255Xb = 45

47Xa = 147 → Xa = 147/47

Solve for Xb by multiplying the first equation by 15 and the second equation by 16 and then adding them:

240Xa – 225Xb = 90
–240Xa + 272Xb = 48

47Xb = 138 → Xb = 138/47

Finally, substitute those two results into the equation for Xc:

Xc = 147/94 + 138/94 + 47/94 = 332/94 = 166/47
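The hand solution can be double-checked by solving the original system directly; a C sketch using Cramer's rule:

```c
/* 3x3 determinant, row-major storage. */
double det3(const double a[9])
{
    return a[0] * (a[4] * a[8] - a[5] * a[7])
         - a[1] * (a[3] * a[8] - a[5] * a[6])
         + a[2] * (a[3] * a[7] - a[4] * a[6]);
}

/* Solve a*x = b by Cramer's rule: replace each column with b in turn. */
void solve3(const double a[9], const double b[3], double x[3])
{
    double d = det3(a);
    for (int col = 0; col < 3; col++) {
        double m[9];
        for (int i = 0; i < 9; i++)
            m[i] = a[i];
        for (int row = 0; row < 3; row++)
            m[row * 3 + col] = b[row];
        x[col] = det3(m) / d;
    }
}
```

Feeding in the coefficient matrix and right-hand side from Problem 1 reproduces Xa = 147/47, Xb = 138/47, and Xc = 166/47.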

PROBLEM 4
Finally, what is the actual circuit? Draw a diagram of the circuit and indicate the required value of each voltage source.

The circuit is a mesh comprising three loops, each with a voltage source. The common elements of the three loops are the three 10-Ω resistors, connected in a Y configuration (see the figure below). The values of the voltage sources in each loop are given directly by the equations, as shown. To verify the numeric solution calculated previously, you can calculate all of the node voltages around the outer loop, plus the voltage at the center of the Y, and ensure they’re self-consistent.

We’ll start by naming Va as ground, or 0 V:

Vb = Va + 2 V = 2 V

Vc = Vb + 2 Ω × Xb = 2 V + 2 Ω × 138/47 A = 370/47 V = 7.87234 V

Vd = Vc + 1 Ω × Xa = 370/47 V + 1 Ω × 147/47 A = 517/47 V = 11.000 V

Ve = Vd – 1 V = 11.000 V – 1.000 V = 10.000 V

Va = Ve – 10 V = 0 V

which is where we started.

The center node, Vf, should be at the average of the three voltages Va, Vc, and Ve:

(0 V + 370/47 V + 10 V)/3 = 840/141 V = 5.95745 V

We should also be able to get this value by calculating the voltage drops across each of the three 10-Ω resistors:

Va + (Xc – Xb) × 10 Ω = 0 V + (166 – 138)/47 A × 10 Ω = 280/47 V = 5.95745 V

Vc + (Xb – Xa) × 10 Ω = 370/47 V + (138 – 147)/47 A × 10 Ω = 280/47 V = 5.95745 V

Ve + (Xa – Xc) × 10 Ω = 10 V + (147 – 166)/47 A × 10 Ω = 280/47 V = 5.95745 V

# Issue 280: EQ Answers

PROBLEM 1
What is the key difference between the following two C functions?

```c
#define VOLTS_FULL_SCALE 5.000
#define KPA_PER_VOLT 100.0
#define KPA_THRESHOLD 200.0

/* adc_reading is a value between 0 and 1
*/
int test_pressure(float adc_reading)
{
float voltage = adc_reading * VOLTS_FULL_SCALE;
float pressure = voltage * KPA_PER_VOLT;

return pressure > KPA_THRESHOLD;
}

int test_pressure2(float adc_reading)
{
float voltage_threshold = KPA_THRESHOLD / KPA_PER_VOLT;
float adc_threshold = voltage_threshold / VOLTS_FULL_SCALE;

return adc_reading > adc_threshold;
}
```

The first function, test_pressure(), converts the ADC reading to engineering units before making the threshold comparison. This is a direct, obvious way to implement such a function.

The second function, test_pressure2(), converts the threshold value to an equivalent ADC reading, so that the two can be compared directly.

The key difference is in performance. The first function requires that arithmetic be done on each reading before making the comparison. However, the calculations in the second function can all be performed at compile time, which means that the only run-time operation is the comparison itself.

PROBLEM 2
How many NAND gates would it take to implement the following translation table? There are five inputs and eight outputs. You may consider an inverter to be a one-input NAND gate.

Inputs are A–E; outputs are F–M:

| A | B | C | D | E | F | G | H | I | J | K | L | M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |

First of all, note that there are really only four inputs and three unique outputs for this function, since input E is always 1 and outputs GHI are always 0. The only real outputs are F, plus the groups JK and LM.

Since the other 27 input combinations haven’t been specified, we can take the output values associated with all of them as “don’t care.”

The output F is simply the inversion of input C.

The output JK is high only when A is high or D is low.

The output LM is high except when B is low and C is high.
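Those three expressions can be checked against the truth table in C, modeling each output in NAND/NOT form (input E and outputs GHI are omitted since they are constant):

```c
/* F = NOT C: one inverter. */
int f_out(int c) { return !c; }

/* J = K = A OR (NOT D), realized as NAND(NOT A, D): two gates. */
int jk_out(int a, int d) { return !(!a && d); }

/* L = M = NOT(NOT B AND C), realized as NAND(NOT B, C): two gates. */
int lm_out(int b, int c) { return !(!b && c); }
```

Each function matches the corresponding output column for all five specified input rows.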

Therefore, the entire function can be realized with a total of five gates.

PROBLEM 3
Quick history quiz: Who were the three companies who collaborated to create the initial standard for the Ethernet LAN?

The original 10-Mbps Ethernet standard was jointly developed by Digital Equipment Corp. (DEC), Intel, and Xerox. It was released in November 1980, and was commonly referred to as “DIX Ethernet.”

PROBLEM 4
What was the name of the wireless network protocol on which Ethernet was based? Where was it developed?

The multiple access with collision detection protocol that Ethernet uses was based on a radio protocol developed at the University of Hawaii. It was known as the “ALOHA protocol.”

# Issue 278: EQ Answers

Problem 1—Tom, an FPGA designer, is helping out on a system that handles standard-definition digital video at 27 MHz and stores it into an SDRAM that runs at 200 MHz. He discovered the following logic in the FPGA (see Figure 1).

Let’s see if we can work out what it does. To start with, what is the output of the XOR gate?

Answer 1—When the 27-MHz clock goes from low to high, the first flip-flop changes state. Let’s say that its output goes from low to high as well. Then, when the clock goes from high to low, the second flip-flop’s output will become the same as the first.

On the clock’s next rising edge, the first flip-flop will change again, this time from high to low. And on the next falling edge, the second one will follow suit.

Putting it another way, following each rising edge of the clock, the two flip-flops are different. Following each falling edge, they’re the same. Since we’re feeding them into an XOR gate, the gate’s output will be high following the clock’s rising edge and low following the falling edge. In other words, the XOR gate’s output is a replica of the clock signal itself!
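A quick C simulation of the two flip-flops confirms this, modeling each clock cycle as a rising-edge step followed by a falling-edge step:

```c
/* ff1 toggles on each rising edge of the 27-MHz clock; ff2 samples ff1
 * on each falling edge. XOR of the two should replicate the clock:
 * high after every rising edge, low after every falling edge. */
int replica_matches_clock(int cycles)
{
    int ff1 = 0, ff2 = 0;
    for (int i = 0; i < cycles; i++) {
        ff1 = !ff1;              /* rising edge: first FF toggles   */
        if ((ff1 ^ ff2) != 1)
            return 0;            /* replica should be high here     */
        ff2 = ff1;               /* falling edge: second FF follows */
        if ((ff1 ^ ff2) != 0)
            return 0;            /* replica should be low here      */
    }
    return 1;
}
```

The check passes regardless of how many cycles are simulated, and the same reasoning holds for either initial state of the first flip-flop.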

Problem 2—Why is this necessary?

Answer 2—In many FPGA architectures, clock signals are automatically assigned to special clock routing resources, which are different from—and kept separate from—the routing resources used for “ordinary” signals. The tools actually discourage (or even prevent) you from using a clock as an input to a gate or to any input of a flip-flop other than the clock input.

Therefore, when you need to pass a clock into another timing domain as a signal, it becomes necessary to generate an ordinary signal that is a replica of the clock. This is one way to accomplish that.

Problem 3—What is the AND gate’s output?

Answer 3—The three flip-flops in the 200-MHz domain capture delayed versions of the (replica) 27-MHz clock signal. The first two function as a conventional synchronizer to minimize the effects of metastability. The third one, along with the AND gate, functions as an edge detector, generating a one-clock pulse in the 200-MHz clock domain following each rising edge of the 27-MHz clock. This pulse might be used, for example, to initiate a write request in the SDRAM for each video data word.

Problem 4—Tom decided to verify the circuit’s operation in his logic simulator, but immediately ran into a problem. What was the problem and what could be added to the circuit to make simulation possible?

Answer 4—There is a subtle problem here for a simulator: All of the flip-flops start out in the “unknown” state. Feeding that back (inverted) to the first flip-flop leaves it in an unknown state. The entire simulation will never get out of the unknown state, even though we can reason that it doesn’t matter which actual state the first flip-flop starts out in. The XOR gate’s output will be known after one full clock cycle. To fix this, it is necessary to explicitly reset the first flip-flop at the beginning of the simulation, then the rest of the circuit will simulate normally.

# Issue 276: EQ Answers

Problem 1
Suppose you have an ordinary switch mode buck regulator. The input voltage is 100 V, the switch’s duty cycle is exactly 50%, and you measure the output voltage as 70 V. Is this converter operating in continuous conduction mode or discontinuous conduction mode? How can you tell?

If a switch mode buck converter is operating in continuous conduction mode, then the output voltage is the input voltage multiplied by the duty cycle: 100 V × 0.5 = 50 V. Since the measured output is 70 V instead, this converter is operating in discontinuous conduction mode.

Note that continuous conduction mode includes the case in which synchronous (active) rectification is being used and the current through the coil is allowed to reverse direction when the output is lightly loaded. The output voltage in relation to the input voltage will still be defined by the switch duty cycle.

Therefore, we also know that the regulator in question is not using synchronous rectification, but rather is using a diode instead.

Problem 2
Since a diode can be placed in a high-impedance (reverse-biased) state or a low-impedance (forward-biased) state, diodes are sometimes used to switch AC signals, including audio and RF. What determines the magnitude of a signal that a diode can switch?

When diodes are used for signal switching, there are two considerations with regard to the magnitude of the signal relative to the DC control signal:

• In the blocking state, the reverse bias voltage must be greater than the peak signal voltage to prevent signal leakage. A higher bias voltage also reduces the parasitic capacitance through the diode. PIN diodes are often used for RF switching because of their ultra-low capacitance.
• In the on state, the forward DC control current through the diode must be greater than the peak AC signal current, and it must be large enough that the current doesn’t approach the diode curve’s “knee” too closely, which would introduce distortion.

Obviously, the diode needs to be rated for both the peak reverse voltage and the peak forward current created by the combination of the control signal and the application signal.

Problem 3
What common function does the following truth table represent?

| A | B | C | X | Y | Z |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 1 | 1 | 0 | 0 | 1 |
| 1 | 0 | 0 | 1 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 | 1 |
| 1 | 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 | 1 |

The truth table implements a form of priority encoder:

Z is set if C is set, otherwise
Y is set if B is set, otherwise
X is set if A is set

In other words, C has the highest priority and A has the lowest. However, unlike conventional priority encoders that produce a binary output, this one produces a “one hot” encoding.

Problem 4
Write the equations for the logic that would implement the table.

The logic is quite straightforward:

Z = C
Y = B & !C
X = A & !B & !C

# Issue 274: EQ Answers

The answers to the Circuit Cellar 274 Engineering Quotient are now available. The problems and answers are listed below.

Problem 1—What is wrong with the name “programmable unijunction transistor?”

Answer 1—Unlike the original unijunction transistor—which really does have just a single junction—the programmable unijunction transistor (PUT) is actually a four-layer device that has three junctions, much like a silicon-controlled rectifier (SCR).

Problem 2—Given a baseband channel with 3-kHz bandwidth and a 40-dB signal-to-noise ratio (SNR), what is the theoretical capacity of this channel in bits per second?

Answer 2—The impulse response of an ideal channel with exactly 3 kHz of bandwidth is a sinc (i.e., sin(x)/x) pulse in the time domain that has nulls that are 1/6,000 s apart. This means you could send a series of impulses through this channel at a 6,000 pulses per second rate. And, if you sampled at exactly the correct instants on the receiving end, you could recover the amplitudes of each of those pulses with no interference from the other pulses on either side of it.

However, a 40-dB signal-to-noise ratio implies that the noise power is 1/10,000 of the maximum signal power. In terms of distinguishing voltage or current levels, this means you can send at most sqrt(10,000) = 100 distinct levels through the channel before they start to overlap, making it impossible to separate one from another at the receiving end.

100 levels translates to log2(100) = 6.64 bits of information per pulse. This means the total channel capacity is 39,840 bits/s (i.e., 6,000 pulses/s × 6.64 bits/pulse).

This is the basis for the Shannon-Hartley channel capacity theorem.

Problem 3—In general, is it possible to determine whether a system is linear and time-invariant (LTI) by simply examining its input and output signals?

Answer 3—In general, given an input signal and an output signal, you might be able to definitively state that the system is not linear and time-invariant (LTI), but you’ll never be able to definitively state that it is, only that it might be.

The general technique is to use information in the input signal to see whether the output signal can be composed from the input features. Simple input signals (e.g., impulses and steps) are easiest to analyze, but other signals can also be used.

Problem 4—One particular system has this input signal: Figure 1

The output is given by: Figure 2

Is this system LTI?

Answer 4—In this example, the input is a rectangular pulse that can be analyzed as the superposition of two step functions that are separated in time, one positive-going and the other negative-going. This makes the analysis easy, since you can see the initial response to the first step function then determine whether the response following the second step is a linear combination of two copies of the first part of the response.

In this case, the response to the first step function at t = 0 is that the output starts rising linearly, also at t = 0. The second (negative) input step function occurs at t = 0.5, and if the system is LTI, you would expect the output to also change what it’s doing at that time. In fact, you would expect the output to level off at whatever value it had reached at that time, because the LTI response to the second step should be a negative-going linear ramp, which, when added to the original response, should cancel out.

However, this is not the output signal received, so this system is definitely not LTI.

# Issue 272: EQ Answers

The answers to the Circuit Cellar 272 Engineering Quotient are now available. The problems and answers are listed below.

Problem 1—Why does the power dissipation of a Darlington transistor tend to be higher than that of a single bipolar transistor in switching applications?

Answer 1—In switching applications, a single transistor can saturate, resulting in a low VCE of 0.3 to 0.4 V. However, in a Darlington pair, the output transistor is prevented from saturating by the negative feedback provided by the driver transistor. If the collector voltage drops below the sum of the VBE of the output transistor (about 0.7 V) and the VCE(sat) of the driver transistor (about 0.3 V), the drive current to the output transistor is reduced, preventing it from going into saturation itself. Therefore, the effective VCE(sat) of the Darlington pair is 1 V or more, resulting in much higher dissipation at a given current level.

Problem 2—Suppose you have some 3-bit data, say, grayscale values for which 000 = black and 111 = white. You have a display device that takes 8-bit data, and you want to extend the bit width of your data to match.

If you just pad the data with zeros, you get the value 11100000 for white, which is not full white for the 8-bit display—that would be 11111111. What can you do?

Answer 2—One clever trick is to repeat the bits you have as many times as necessary to fill the output field width. For example, if the 3-bit input value is ABC, the output value would be ABCABCAB. This produces the following mapping, which interpolates nicely between full black and full white (see Table 1). Note that this mapping preserves the original bits; if you want to go back to the 3-bit representation, just take the MSBs and you have the original data.

| 3-bit input | 8-bit output |
|-------------|--------------|
| 000 | 00000000 |
| 001 | 00100100 |
| 010 | 01001001 |
| 011 | 01101101 |
| 100 | 10010010 |
| 101 | 10110110 |
| 110 | 11011011 |
| 111 | 11111111 |

Problem 3—Can an induction motor (e.g., squirrel-cage type) be used as a generator?

Answer 3—Believe it or not, yes it can.

An induction motor has no electrical connections to the rotor; instead, a magnetic field is induced into the rotor by the stator. The motor runs slightly slower than “synchronous” speed—typically 1725 or 3450 rpm when on 60 Hz power.

If the motor is provided with a capacitive load, is driven at slightly higher than synchronous speed (1875 or 3750 rpm), and has enough residual magnetism in the rotor to get itself going, it will generate power up to approximately its rating as a motor. The reactive current of the load capacitor keeps the rotor energized in much the same way as when it is operating as a motor.

See www.qsl.net/ns8o/Induction_Generator.html for additional details.

Problem 4—In Figure 1, why does this reconstruction of a 20-kHz sinewave sampled at 44.1 kHz show ripple in its amplitude?

Answer 4—The actual sampled data, represented by the square dots in the diagram, contains equal levels of Fsignal (the sine wave) and Fsample-Fsignal (one of the aliases of the sinewave). Any reconstruction filter is going to have difficulty passing the one and eliminating the other, so you inevitably get some of the alias signal, which, when added to the desired signal, produces the “modulation” you see.

In the case of a software display of a waveform on a computer screen (such as you might see in software used to edit audio recordings), they’re probably using an FIR low-pass filter (sin(x)/x coefficients) windowed to some finite length. A shorter window gives faster drawing times, so they’re making a tradeoff between visual fidelity and interactive performance. The windowing makes the filter somewhat less than brick-wall, so you get the leakage of the alias and the modulation.

In the case of a real audio D/A converter, even with oversampling you can’t get perfect stopband attenuation (and you must always do at least some of the filtering in the analog domain), so once again you see the leakage and modulation.

In this example, Fsignal = 0.9 × Fnyquist, so Falias = 1.1 × Fnyquist and Falias/Fsignal = 1.22. To eliminate the visible artifacts, the reconstruction filter would need to fall about 60 dB over this frequency span, or about 200 dB/octave.

# Issue 270: EQ Answers

The answers to the Circuit Cellar 270 Engineering Quotient are now available. The problems and answers are listed below.

#### Problem 1: Given a microprocessor that has hardware support for just one level of priority for interrupts, is it possible to implement multiple priorities in software? If so, what are the prerequisites that are required?

Answer 1: Yes, given a few basic capabilities, it is possible to implement multiple levels of interrupt priority in software. The basic requirements are that it must be possible to reenable interrupts from within an interrupt service routine (ISR) and that the different interrupt sources can be individually masked.

#### Problem 2: What is the basic scheme for implementing software interrupt priorities?

Answer 2: In normal operation, all the interrupt sources are enabled, along with the processor’s global-interrupt mask.

When an interrupt occurs, the global interrupt mask is disabled and the “master” ISR is entered. This code must (quickly) determine which interrupt occurred, disable that interrupt and all lower-priority interrupts at their sources, then reenable the global-interrupt mask before jumping to the ISR for that interrupt. This can often be facilitated by precomputing a table of interrupt masks for each priority level.

#### Problem 3: What are some of the problems associated with software interrupt priorities?

Answer 3: For one thing, the start-up latency of all the ISRs is increased by the time spent in the “master” ISR, which can be a problem in time-critical systems. Also, this scheme allows interrupts to be nested, so the stack must be large enough to handle the worst-case nesting of ISRs on top of the worst-case nesting of non-interrupt subroutine calls.

Finally, it is very tricky to do this in anything other than assembly language. If you want to use a high-level language, you’ll need to be intimately familiar with the language’s run-time library and how it handles interrupts and reentrancy in general.

#### Problem 4: Some processors reenable interrupts only when a “return from interrupt” instruction is executed. Can this scheme still be used on them?

Answer 4: Yes, on most such processors, you can execute a subroutine call to a “return from interrupt” instruction while still in the master ISR, which will then return to the master ISR, but with interrupts enabled.

Check to see whether the “return from interrupt” affects any other processor state (e.g., popping a status word from the stack) and prepare the stack accordingly.

Also, beware that another interrupt could occur immediately thereafter, and make sure the master ISR is reentrant beyond that point.

Contributed by David Tweed

# Issue 268: EQ Answers

Problem 1: A transformer’s windings, when measured individually (all other windings disconnected), have a certain amount of inductance. If you have a 1:1 transformer (both windings have the same inductance) and connect the windings in series, what value of inductance do you get?

Answer 1: Assuming you connect the windings in-phase, you’ll have double the number of turns, so the resulting inductance will be about four times the inductance of one winding alone.

If you hook them up out of phase, the inductance will cancel out and you’ll be left with the resistance of the wire and a lot of parasitic inter-winding capacitance.

Problem 2: If you connect the windings in parallel, what value of inductance do you get?

Answer 2: With the two windings connected in-phase and in parallel, the inductance will be exactly the same as the single-winding case. But the resulting inductor will be able to handle twice the current, as long as the core itself doesn’t saturate.

Problem 3: Suppose you have a 32-bit word in your microprocessor, and you want to count how many contiguous strings of ones appear in it. For example, the word “01110001000111101100011100011111” contains six such strings. Can you come up with an algorithm that uses simple shifts, bitwise logical and arithmetic operators, but—here’s the twist—does not require iterating over each bit in the word?

Answer 3: Here’s a solution that iterates over the number of strings, rather than the number of bits in the word.

```c
int nstrings(unsigned long int x)
{
    int result = 0;

    /* Convert x into a word that has a '1' for every
     * transition from 0 to 1 or 1 to 0 in the original
     * word.
     */
    x ^= (x << 1);

    /* Every pair of ones in the new word represents
     * a string of ones in the original word. Remove
     * them two at a time and keep count.
     */
    while (x) {
        /* Remove the lowest set bit from x; this
         * represents the start of a string of ones.
         */
        x &= ~(x & -x);
        ++result;

        /* Remove the next set bit from x; this
         * represents the end of that string of ones.
         */
        x &= ~(x & -x);
    }
    return result;
}
```

Problem 4: For the purpose of timing analysis, the operating conditions of an FPGA are sometimes known as “PVT,” which stands for “process, voltage, and temperature.” Voltage and temperature are pretty much self-explanatory, but what does process mean in this context?

Answer 4: The term process in this case refers to the manufacturing process at the plant where they make the FPGA. It’s a measure of the statistical variability of the physical characteristics from chip to chip as they come off the line.
This includes everything from mask alignment to etching times to doping levels. These things affect electrical parameters such as sheet and contact resistance, actual transistor gains, thresholds, and parasitic capacitances.
These kinds of variations are unavoidable, and the P in PVT is an attempt to account for their effects in the timing analysis. The idea is to make the analysis conservative enough so that your design will work reliably despite these variations.

Contributed by David Tweed

# Issue 266: EQ Answers

The answers to the Circuit Cellar 266 (July 2012) Engineering Quotient are now available. The problems and answers are listed below.

Problem 1—What’s the key difference between infinite impulse response (IIR) and finite impulse response (FIR) digital filters?

Answer 1—An infinite impulse response (IIR) filter incorporates feedback in its datapath, which means that any particular input sample can affect the output for an indefinite (infinite) time into the future. In contrast, a finite impulse response (FIR) filter uses only feedforward in its datapath, which means that any given input sample can only affect the output for a time corresponding to the number of storage (delay) stages in the filter.

Problem 2—Does the fact that the finite resolution of digital arithmetic effectively truncates the impulse response of an IIR filter turn it into an FIR filter?

Answer 2—While it’s technically true that the impulse response of an IIR filter implemented, say, with fixed-point arithmetic is effectively finite, this has no real bearing on its classification in terms of either its design or application. It’s still an IIR filter for all practical purposes.

Problem 3—The following pseudocode represents an implementation of a single-pole low-pass IIR filter, using 16-bit input and output values, a 24-bit internal accumulator (held in the upper bits of a 32-bit word), and a filter coefficient of 1/256:

```
# The 32-bit accumulator holds 16 integer
# and 16 fractional bits
$acc = 0x00000000;

# The input value is a 16-bit integer.
$input = 0xFFFF;

# Offset used for rounding the accumulator
# to 24 bits.
$offset = 0x80;

while (1) {
    # acc = (255*acc + input)/256
    $acc -= ($acc >> 8);
    $acc += ($input << 8) + $offset;
    # limit acc to 24 bits
    $acc &= 0xFFFFFF00;
    # output is integer part of acc
    $output = $acc >> 16;
}
```

An implementor of this filter complained that “the output never reaches 0xFFFF.” What was the flaw in his reasoning?

Answer 3—The accumulator in this filter eventually settles at the value 0xFFFE8100. If you simply take the upper 16 bits of this, then the output value appears to be 0xFFFE. But if you properly round the accumulator by adding 0x00008000 before dropping the LSBs, then the output value is the correct value of 0xFFFF.

Problem 4—The original implementor’s solution was to change the \$offset value to 0xFF. Why did this work?

Answer 4—Changing the \$offset value to 0xFF effectively adds a bias to each input sample, which averages out to 0x00007F00 in the accumulator. The effect of this is to add the required rounding offset to the accumulator so that truncating the LSBs to create the 16-bit output value comes up with the correct answer.
