Measuring Jitter (EE Tip #132)

Jitter is one of the parameters you should consider when designing a project, especially when it involves planning a high-speed digital system. Moreover, investigating jitter, either manually or with the help of proper measurement tools, can give you a thorough picture of how your product actually behaves.

There are at least two ways to measure jitter: cycle-to-cycle and time interval error (TIE).

WHAT IS JITTER?
The following is the generic definition offered by the International Telecommunication Union (ITU) in its G.810 recommendation: “Jitter (timing): The short-term variations of the significant instants of a timing signal from their ideal positions in time (where short-term implies that these variations are of frequency greater than or equal to 10 Hz).”

First, jitter refers to timing signals (e.g., a clock or a digital control signal that must be time-correlated to a given clock). Then you consider only the “significant instants” of these signals (i.e., the useful transitions from one logic state to the other). These events are supposed to happen at specific times. Jitter is the difference between the expected time and the actual time when the event occurs (see Figure 1).

Figure 1—Jitter includes all phenomena that result in an unwanted shift in timing of some digital signal transitions in comparison to a supposedly “perfect” signal.

Last, jitter concerns only short-term variations, meaning variations faster than 10 Hz (in contrast, very slow variations, below 10 Hz, are called “wander”).

Clock jitter, for example, is a big concern for A/D conversions. Read my article on fast ADCs (“Playing with High-Speed ADCs,” Circuit Cellar 259, 2012) and you will discover that jitter could quickly jeopardize your expensive, high-end ADC’s signal-to-noise ratio.

CYCLE-TO-CYCLE JITTER
Assume you have a digital signal with transitions that should stay within preset time limits (usually calculated from the receiver's clock period and timing requirements, such as setup time). You are wondering whether it suffers from excessive jitter. How do you measure the jitter? First, think about what you actually want to measure: Do you have a single signal (e.g., a clock) that could have jitter in its timing transitions as compared to absolute time? Or do you have a digital signal that must be time-correlated to an accessible clock that is supposed to be perfect? The measurement methods will be different. For simplicity, I will assume the first scenario: You have a clock signal with rising edges that are supposed to be perfectly stable, and you want to double-check it.

My first suggestion is to connect this clock to your best oscilloscope's input, trigger the oscilloscope on the clock's rising edge, adjust the time base to get a full period on the screen, and measure the time dispersion of the transition just following the trigger. This method provides a measurement of the so-called cycle-to-cycle jitter (see Figure 2).
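If your oscilloscope or counter can export rising-edge timestamps, the same quantity drops out of a few lines of arithmetic. Here is a minimal Python sketch; the 6.6-MHz clock and the 100-ps edge noise are illustrative values, not measured data:

```python
import numpy as np

def cycle_to_cycle_jitter(edge_times):
    """Worst-case cycle-to-cycle jitter (in seconds) from rising-edge timestamps."""
    periods = np.diff(edge_times)   # duration of each clock cycle
    c2c = np.diff(periods)          # change in period from one cycle to the next
    return np.max(np.abs(c2c))

# Illustrative 6.6-MHz clock with ~100 ps of random noise on each edge
rng = np.random.default_rng(0)
nominal = 1 / 6.6e6                                  # ~151-ns period
edges = np.arange(1000) * nominal + rng.normal(0, 100e-12, 1000)
print(f"cycle-to-cycle jitter = {cycle_to_cycle_jitter(edges) * 1e12:.0f} ps")
```

Note that this sketch reports the worst case over the record; an RMS figure is just as common, so check which convention your instrument uses.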

Figure 2—Cycle-to-cycle is the easiest way to measure jitter. You can simply trigger your oscilloscope on a signal transition and measure the dispersion of the following transition’s time.

If you have a dual time base or a digital oscilloscope with zoom features, you could zoom in on the clock edge you are interested in for a more accurate measurement. I used an old Philips PM5786B pulse generator from my lab to perform the test. I configured the pulse generator to generate a 6.6-MHz square signal and connected it to my Teledyne LeCroy WaveRunner 610Zi oscilloscope. I admit this is high-end equipment (1-GHz bandwidth, 20-GSPS sampling rate, and an impressive 32-Mword memory when using only two of its four channels), but it enabled me to demonstrate some other interesting things about jitter. I could have used an analog oscilloscope to perform the same measurement, as long as the oscilloscope provided enough bandwidth and a dual time base (e.g., an old Tektronix 7904 oscilloscope or something similar). In any case, the result is shown in Figure 3.

Figure 3—This is the result of a cycle-to-cycle jitter measurement of the PM5786A pulse generator. The bottom curve is a zoom of the rising front just following the trigger. The cycle-to-cycle jitter is the horizontal span of this transition over time, here measured at about 620 ps.

This signal generator's cycle-to-cycle jitter is clearly visible. I measured it at around 620 ps. That's not much, but it can't be ignored compared to the signal's period of 151 ns (i.e., 1/6.6 MHz); 620 ps is about ±0.2% of the clock period. Caution: When you perform this type of measurement, double-check the oscilloscope's intrinsic jitter, because you are actually measuring the sum of the clock's jitter and the oscilloscope's jitter. Here, the latter is far smaller.

TIME INTERVAL ERROR
Cycle-to-cycle is not the only way to measure jitter. In fact, this method is not the one implied by the definition of jitter I presented earlier. Cycle-to-cycle jitter is a measurement of the timing variation from one signal cycle to the next, not between the signal and its “ideal” version. The jitter measurement closest to that definition is called time interval error (TIE). As its name suggests, this is a measure of the actual times of a signal's transitions compared to their expected times (see Figure 4).
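In code, the only subtlety is defining the “expected” times. One simple convention when no reference clock is available is to fit an ideal clock (constant period, best-fit phase) to the measured edges; the sketch below does exactly that and is purely illustrative:

```python
import numpy as np

def time_interval_error(edge_times):
    """TIE (in seconds) of each edge versus a least-squares ideal clock."""
    n = np.arange(len(edge_times))
    period, offset = np.polyfit(n, edge_times, 1)   # ideal clock: offset + n * period
    return edge_times - (offset + n * period)       # deviation of every edge

# Illustrative edge list: 6.6-MHz clock with random noise on each edge
rng = np.random.default_rng(0)
edges = np.arange(1000) / 6.6e6 + rng.normal(0, 100e-12, 1000)
print(f"peak-to-peak TIE = {np.ptp(time_interval_error(edges)) * 1e12:.0f} ps")
```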

Figure 4—Time interval error (TIE) is another way to measure jitter. Here, the actual transitions are compared to a reference clock, which is supposed to be “perfect,” providing the TIE. This reference can be either another physical signal or it can be generated using a PLL. The measured signal’s accumulated plot, triggered by the reference clock, also provides the so-called eye diagram.

It’s difficult to know these expected times. If you are lucky, you could have a reference clock elsewhere on your circuit, which would supposedly be “perfect.” In that case, you could use this reference as a trigger source, connect the signal to be measured on the oscilloscope’s input channel, and measure its variation from trigger event to trigger event. This would give you a TIE measurement.

But how do you proceed if you don’t have anything other than the signal to be measured? In my previous example, I wanted to measure the jitter of a lab signal generator’s output, which isn’t correlated to any accessible reference clock. In that case, you could still measure a TIE, but first you would have to generate a “perfect” clock. How can this be accomplished? Generating an “ideal” clock synchronized with a signal is a perfect job for a phase-locked loop (PLL). The technique is explained in my article, “Are You Locked? A PLL Primer” (Circuit Cellar 209, 2007). You could design a PLL to lock onto your signal’s frequency, and it could be as stable as you want (provided you are willing to pay the expense).

Moreover, this PLL’s bandwidth (which is the bandwidth of its feedback filter) would give you an easy way to zoom in on your jitter of interest. For example, if the PLL bandwidth is 100 Hz, the PLL loop will capture any phase variation slower than 100 Hz. Therefore, you can measure the jitter components faster than this limit. This PLL (often called a carrier recovery circuit) can be either an actual hardware circuit or a software-based implementation.
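As a rough illustration of the software flavor, here is a toy first-order loop that tracks the measured edges and reports the TIE against the clock it recovers. The gain formula relating the per-edge gain to the loop bandwidth is a simplification, and the whole thing is a sketch rather than a proper carrier-recovery design:

```python
import numpy as np

def tie_vs_recovered_clock(edge_times, nominal_period, loop_bw_hz):
    """TIE of each edge against a clock recovered by a first-order software PLL.

    The loop follows phase wander slower than roughly loop_bw_hz, so the
    returned TIE mostly contains the jitter components faster than that corner.
    """
    alpha = min(1.0, 2 * np.pi * loop_bw_hz * nominal_period)  # per-edge gain (rough approximation)
    predicted = edge_times[0]
    tie = np.empty(len(edge_times))
    for i, t in enumerate(edge_times):
        err = t - predicted                        # phase error of this edge, in seconds
        tie[i] = err
        predicted += nominal_period + alpha * err  # nudge the recovered clock toward the signal
    return tie
```

With a 100-Hz loop bandwidth on a 6.6-MHz clock, the gain works out to roughly 10^-4 per edge, so the recovered clock barely moves from cycle to cycle and only slow wander is removed, which is the filtering effect described above.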

So, there are at least two ways to measure jitter: cycle-to-cycle and TIE. (As you may have anticipated, many other measurements exist, but I will limit myself to these two for simplicity.) Are these measurement methods related? Yes, of course, but the relationship is not immediate. If the TIE is not null but remains constant, the cycle-to-cycle jitter is null. Conversely, if the cycle-to-cycle jitter is constant but not null, the TIE will increase over time. In fact, the TIE is closely linked to the mathematical integral over time of the cycle-to-cycle jitter, but this is a little more complex, as the jitter’s frequency range must be limited.
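A few lines of NumPy make the relationship concrete, under the simplifying assumption that you know each cycle's measured period: the TIE builds up as the running sum of the per-cycle period error, while the cycle-to-cycle jitter is the difference between consecutive periods.

```python
import numpy as np

rng = np.random.default_rng(1)
nominal = 1 / 6.6e6
periods = nominal + rng.normal(0, 50e-12, 10_000)   # each cycle slightly off-nominal

tie = np.cumsum(periods - nominal)   # TIE accumulates the per-cycle period error
c2c = np.diff(periods)               # cycle-to-cycle jitter: change between adjacent cycles

print(f"peak-to-peak TIE            = {np.ptp(tie) * 1e9:.2f} ns")
print(f"peak-to-peak cycle-to-cycle = {np.ptp(c2c) * 1e12:.0f} ps")
```

Even with only tens of picoseconds of period error per cycle, the accumulated TIE reaches nanoseconds over 10,000 cycles, which is why the frequency range has to be limited before the two figures can be compared meaningfully.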

Editor’s Note: This is an excerpt from an article written by Robert Lacoste, “Analyzing a Case of the Jitters: Tips for Preventing Digital Design Issues,” Circuit Cellar 273, 2013.

Real-Time Processing for PCIe Digitizers

The U5303A digitizer and the U5340A FPGA development kit are recent enhancements to Agilent Technologies’s PCI Express (PCIe) high-speed digitizers. The U5303A and the U5340A add next-generation real-time peak-detection functionality to the PCIe devices.

The U5303A is a 12-bit PCIe digitizer with programmable on-board processing. It offers high performance in a small footprint, making it an ideal platform for many commercial, industrial, and aerospace and defense embedded systems. A data processing unit (DPU) based on the Xilinx Virtex-6 FPGA is at the heart of the U5303A. The DPU controls the module functionality, data flow, and real-time signal processing. This feature enables data reduction and storage to be carried out at the digitizer level, minimizing transfer volumes and accelerating analysis.

The U5340A FPGA development kit is designed to help companies and researchers protect the IP in their signal-processing algorithms. The FPGA kit enables integration of advanced real-time signal-processing algorithms within Agilent Technologies’s high-speed digitizers. The U5340A targets applications such as high-speed medical imaging, analytical time-of-flight, lidar ranging, and non-destructive testing, and it provides a direct interface to digitizer hardware elements (e.g., the ADC, clock manager, and memory blocks). The FPGA kit includes a library of building blocks, from basic gates to dual-port RAM; a set of IP cores; and ready-to-use scripts that handle all aspects of the build flow.

Contact Agilent Technologies for pricing.

Agilent Technologies, Inc.
www.agilent.com

Energy-Measurement AFEs

The MCP3913 and the MCP3914 are Microchip Technology’s next-generation family of energy-measurement analog front ends (AFEs). The AFEs integrate six and eight 24-bit, delta-sigma ADCs, respectively, with 94.5-dB SINAD, –106.5-dB THD, and 112-dB spurious-free dynamic range (SFDR) for high-accuracy signal acquisition and higher-performing end products.

The MCP3914’s two extra ADCs enable the monitoring of more sensors with one chip, reducing system cost and size. The programmable data rate of up to 125 ksps with low-power modes enables designers to scale down for lower power consumption or to use higher data rates for advanced signal analysis (e.g., calculating harmonic content).

The MCP3913 and the MCP3914 improve application performance and provide the flexibility to adjust the data rate to optimize each application’s trade-off between performance and power consumption. The AFEs feature a CRC-16 checksum and a register-map lock for increased robustness. Both AFEs are offered in 40-pin UQFN packages; the MCP3913 adds a 28-pin SSOP package option.

The MCP3913 and the MCP3914 AFEs cost $3.04 each in 5,000-unit quantities. Microchip Technology also announced the MCP3913 Evaluation Board and the MCP3914 Evaluation Board, two new tools to aid in the development of energy systems using these AFEs. Both evaluation boards cost $99.99.

Microchip Technology, Inc.
www.microchip.com

3-D Integration Impact and Challenges

People want transistors—lots of them. It pretty much doesn’t matter what shape they’re in, how small they are, or how fast they operate. Simply put, the more the merrier. Diversity is also good. The more different the transistors, the more useful and interesting the product. And without any question, the cheaper the transistors, the better. So the issue is how best to get as many diverse transistors as possible at the lowest cost.

One approach is more chips. Placing a lot of chips close together on a small board will produce a system with many transistors. Another way is more transistors per chip. Keep on scaling the technology to provide more transistors in one or a few chips.

The third option combines these two approaches: have many chips, each with many transistors, and end up with a huge number of transistors. However, there is a limit to this approach. It’s well understood that scaling is coming to an end. And placing multiple chips on a board can have a terrible effect on a system’s overall speed and power dissipation.

But there is an elegant and intellectually simple solution. Rather than connecting these chips horizontally across a board, connect them vertically, providing N times more transistors, where N is the number of chips stacked one above another. Such vertical, 3-D integration was first broached by William Shockley, co-inventor of the transistor at Bell Labs in 1947. Shockley described the 3-D integration concept in a 1958 patent, which was followed by Merlin Smith and Emanuel Stern’s 1967 patent outlining how best to produce the holes between layers. We now call these inter-layer holes through-silicon vias (TSVs). Technology is still catching up to these 3-D concepts.

Three-dimensional integration offers exciting advantages. For example, the vertical distance between layers is much shorter than the horizontal dimensions across a chip. Three-dimensional circuits, therefore, operate faster and dissipate less power than their 2-D equivalent. A 3-D system is shockingly small, permitting it to fit much more conveniently into a tiny space. Think small portable electronics (e.g., credit cards).

But the most exciting advantage of 3-D integration isn’t the small form factor, higher speed, or lower power; it’s the natural ability to support many disparate technologies and functions as one integrated, heterogeneous system. Even better, each chip layer can be optimized for a particular function and technology, since the individual chips can each be developed in isolation. No more trading off different capabilities to combine disparate technologies on the same chip. Now we can use the absolute best technology for each layer and a completely different and optimized technology for a different layer. This approach enables all kinds of novel applications that until now couldn’t have been conceived or would have been cost-prohibitive.

Imagine placing a microprocessor plane below a MEMS-accelerometer plane below an analog plane (with ADCs) below a temperature sensor, all below a video imager (which has to be at the top to “see”). All of these planes fit together into a tiny (smaller than a fingernail) silicon cube while operating at higher speeds and dissipating lower power.

There are technical issues, including how best to make the TSVs, how to construct the system architecture to fully exploit the system’s 3-D nature, how to deliver power across these multiple planes, how to synchronize this system to best move data around the cube, how to manage system design complexity, and much more.

Two issues rise to the top. The first is power dissipation (specifically, power density). When many transistors switch at a high rate within a tiny volume, the temperature rises, which can impair performance and reliability. I believe this issue, albeit difficult, is technically solvable and simply will require a lot of good engineering.

The real problem is cost. How do we mature this technology quickly enough to drive the costs down to a point where volume commercial applications are possible? Many companies are close to producing tangible 3-D-based products. Cubes of highly dense memory will likely be the first serious and cost-effective product. Early versions are already available. Three-dimensional integration will soon be here in a serious way with what will be a fascinating assortment of all kinds of exciting new products. You won’t have to wait too long.

Evaluating Oscilloscopes (Part 2)

This is Part 2 of my mini-series on selecting an oscilloscope. Rather than a completely thorough guide, it’s more a “collection of notes” based on my own research. But I hope you find it useful, and it might cover a few areas you hadn’t considered.

Last week I mentioned the differences between PC-based and stand-alone oscilloscopes and discussed the physical probe’s characteristics. This week I’ll be discussing the “core” specifications: analog bandwidth, sample rate, and analog-to-digital converter (ADC) resolution.

Topic 1: Analog Bandwidth
Many useful articles online discuss the analog oscilloscope bandwidth, so I won’t dedicate too much time to it. Briefly, the analog bandwidth is typically measured as the “half-power” or -3 dB point, as shown in Figure 1. Half the power means 1/√2 of the voltage. Assume you put a 10-MHz, 1-V sine wave into your 100-MHz bandwidth oscilloscope. You expect to see a 1-V sine wave on the oscilloscope. As you increase the frequency of the sine wave, you would instead expect to see around 0.707 V when you pass a 100-MHz sine wave. If you want to see this in action, watch my video in which I sweep the input frequency to an oscilloscope through the -3 dB point.

Figure 1: The bandwidth refers to the “half-power” or -3 dB point. If we drive a sine wave of constant amplitude and increasing frequency into the probe, the -3 dB point would be when the amplitude measured in the scope is 0.707 times the initial amplitude.

Unfortunately, you are likely to be measuring square waves (e.g., in digital systems) and not sine waves. Square waves contain high-frequency components well beyond the fundamental frequency of the wave. For this reason, the “rule of thumb” is to select an oscilloscope with five times the analog bandwidth of the highest-frequency digital signal you would be measuring. Thus, a 66-MHz clock would require a 330-MHz bandwidth oscilloscope.

If you are interested in more details about bandwidth selection, I encourage you to see one of the many excellent guides. Adafruit has a blog post “Why Oscilloscope Bandwidth Matters” that offers more information, along with links to guides from Agilent Technologies and Tektronix.

If you want to play around yourself, I’ve got a Python script that applies analog filtering to a square wave and plots the results, available here. Figure 2 shows an example of a 50-MHz square wave with 50-MHz, 100-MHz, 250-MHz, and 500-MHz analog bandwidth.

Figure 2: This shows sampling a 50-MHz square wave with 50, 100, 250, and 500 MHz of analog bandwidth.
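My script isn’t reproduced here, but a minimal version of the same experiment might look like the following, with a first-order Butterworth low-pass standing in for the scope’s front-end response (the filter order and all values are assumptions for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs_sim = 20e9                                   # dense "analog" simulation rate: 20 GS/s
t = np.arange(0, 100e-9, 1 / fs_sim)
square = signal.square(2 * np.pi * 50e6 * t)    # 50-MHz square wave

for bw in (50e6, 100e6, 250e6, 500e6):
    b, a = signal.butter(1, bw, btype="low", fs=fs_sim)   # stand-in for the front end
    limited = signal.lfilter(b, a, square)
    plt.plot(t * 1e9, limited, label=f"{bw / 1e6:.0f}-MHz bandwidth")

plt.xlabel("time (ns)")
plt.ylabel("amplitude")
plt.legend()
plt.show()
```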

Topic 2: Sample Rate
Beyond the analog bandwidth, oscilloscopes also prominently advertise the sample rate, typically in MS/s (megasamples per second) or GS/s (gigasamples per second). The advertised rate is nearly always the maximum when using a single channel. If you are using both channels of a two-channel oscilloscope that advertises 1 GS/s, the maximum rate is typically only 500 MS/s on each channel.

So what rate do you need? If you are familiar with the Nyquist criterion, you might simply think you should have a sample rate two times the analog bandwidth. Unfortunately, we tend to work in the time domain (e.g., looking at the oscilloscope screen) and not the frequency domain. So you can’t simply apply that idea. Instead, it’s useful to have a considerably higher sample rate compared to analog bandwidth, say, a five times higher sample rate. To illustrate why, see Figure 3. It shows a 25.3-MHz square wave, which I’ve sampled with an oscilloscope with 50-MHz analog bandwidth. As you would expect, the signal rounds off considerably. However, if I only sample it at 100 MS/s, at first sight the signal is almost unrecognizable! Compare that with the 500 MS/s sample rate, which more clearly looks like a square wave (but rounded off due to analog bandwidth limitation).

Again, these figures both come from my Python script, so they are based purely on “theoretical” limits of sample rate. You can play around with sample rate and bandwidth to get an idea of how a signal might look.

Figure 3: This shows that sampling a 25.3-MHz square wave at 100 MS/s results in a signal that looks considerably different than you might expect! Sampling at 500 MS/s results in a much more “proper”-looking wave.
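In the same spirit, you can emulate a scope’s sample rate by keeping every Nth point of a densely simulated, band-limited waveform. Again, this is a sketch with assumed values rather than a reproduction of my script:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

fs_sim = 20e9                                   # dense simulation rate
t = np.arange(0, 200e-9, 1 / fs_sim)
wave = signal.square(2 * np.pi * 25.3e6 * t)    # 25.3-MHz square wave
b, a = signal.butter(1, 50e6, btype="low", fs=fs_sim)
wave = signal.lfilter(b, a, wave)               # apply the 50-MHz analog bandwidth first

for fs_scope in (100e6, 500e6):                 # emulated scope sample rates
    step = int(fs_sim / fs_scope)               # keep every Nth simulation point
    plt.plot(t[::step] * 1e9, wave[::step], ".-", label=f"{fs_scope / 1e6:.0f} MS/s")

plt.xlabel("time (ns)")
plt.legend()
plt.show()
```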

Topic 3: Equivalent Time Sampling
Certain oscilloscopes have an equivalent time sampling (ETS) mode, which advertises an insanely fast sample rate. For example, the PicoScope 6000 series, which has a 5 GS/s sample rate, can use ETS mode and achieve 200 GS/s on a single channel, or 50 GS/s on all channels.

The caveat is that this high sample rate is achieved by carefully phase-shifting the A/D sampling clock to sample “in between” the regular intervals. This requires that your input waveform be periodic and very stable, since the waveform is actually “built up” over a longer time interval.
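A toy simulation shows the idea: acquire the same repetitive waveform many times at the real-time rate, each pass with a slightly different trigger-to-sample-clock offset, then interleave the passes into one fine-grained record. The rates below mirror the PicoScope numbers, but the rest is illustrative:

```python
import numpy as np

fs_real = 5e9                          # real-time sample rate of the digitizer
fs_equiv = 200e9                       # target equivalent-time rate
n_passes = int(fs_equiv / fs_real)     # 40 interleaved acquisitions

f_sig = 1e9                            # repetitive 1-GHz sine being "measured"
t_record = 5e-9

times, samples = [], []
for k in range(n_passes):
    offset = k / fs_equiv                           # sub-sample offset for this pass
    t = np.arange(0, t_record, 1 / fs_real) + offset
    times.append(t)
    samples.append(np.sin(2 * np.pi * f_sig * t))   # one acquisition of the waveform

# Interleave all passes into a single equivalent-time record at 200 GS/s
t_all = np.concatenate(times)
order = np.argsort(t_all)
t_ets, wave_ets = t_all[order], np.concatenate(samples)[order]
```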

So what does this mean to you? Luckily, many real-world waveforms are periodic, and you might find ETS mode very useful. For example, if you want to measure the phase shift between two clocks routed through a field-programmable gate array (FPGA), you can do this with ETS. At 50 GS/s, you would have 20-ps resolution on the measurement! In fact, that resolution is so high you could measure the phase difference caused by a few centimeters of difference in PCB trace length.

To demonstrate this, I can show you a few videos. To start with, the simple video below shows moving the probes around while looking at the phase difference.

A more practical demonstration, available in the following video, measures the phase shift of two paths routed through an FPGA.

Finally, if you just want to see a sine wave using ETS, you can check out the bandwidth demonstration I referred to earlier in this article. The video (see below) includes a portion using ETS mode.

Topic 4: ADC Resolution
A less prominently advertised feature of certain oscilloscopes is the ADC bit resolution in the front end. Briefly, the ADC resolution tells you how the analog waveform will get mapped to the digital domain. If you have an 8-bit ADC, this means you have 2^8 = 256 possible numbers the digital waveform can represent. Say you have a ±5 V range on the oscilloscope—a total span of 10 V. This means the ADC can resolve a 10 V / 256 = 39.06 mV difference on the input voltage.

This should tell you one fact about digital oscilloscopes: You should always use the smallest possible range to get the finest granularity. That same 8-bit ADC on a ±1 V range would resolve 7.813 mV. However, what often happens is your signal contains multiple components—say, spiking to 7 V during a load switch, and then settling to 0.5 V. This precludes you from using the smaller range on the input, since you want to capture the amplitude of that 7-V spike.

If, however, you had a 12-bit ADC, that 10-V span (+5 V to -5 V) would be split into 2^12 = 4,096 numbers, meaning the resolution is now 2.441 mV. If you had a 16-bit ADC, that 10-V range would give you 2^16 = 65,536 numbers, meaning you could resolve down to 0.1526 mV. Most of the time, you have to choose between a faster ADC with lower (typically 8-bit) resolution or a slower ADC with higher resolution. The only exception to this I’m aware of is the Pico Technology FlexRes 5000 series devices, which allow you to dynamically switch between 8/12/14/15/16 bits with varying changes to the number of channels and sample rate.
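The arithmetic above is just the full-scale span divided by the number of codes, which is easy to keep handy as a one-liner (the 10-V span matches the ±5 V example):

```python
def lsb_size(full_scale_volts, bits):
    """Smallest voltage step an ideal ADC resolves over the given full-scale span."""
    return full_scale_volts / 2 ** bits

for bits in (8, 12, 16):
    print(f"{bits:2d}-bit ADC on a 10-V span: {lsb_size(10.0, bits) * 1e3:.3f} mV/LSB")
# -> 39.062 mV, 2.441 mV, 0.153 mV
```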

While the typical ADC resolution seems to be 8 bits for most scopes, there are higher-resolution models too. Unlike the FlexRes devices, these are permanently in high-resolution mode, so you have to decide at purchase time whether you want a very high sample rate or a very high resolution. For example, Cleverscope has always advertised higher resolutions, and its devices are available in 10, 12, or 14 bits. Cleverscope seems to sell the “digitizer” board separately, giving you some flexibility in upgrading to a higher-resolution ADC. TiePie engineering has devices available from 8 to 14 bits with various sample-rate options. Besides the FlexRes device I mentioned, Pico Technology offers some fixed-resolution devices at a higher 14-bit resolution. Some of the larger manufacturers also have higher-resolution devices; for example, Teledyne LeCroy has its High Resolution Oscilloscope (HRO), which is a fixed 12-bit device.

Note that many devices will advertise either an “effective” or “software enhanced” bit resolution higher than the actual ADC resolution. Be careful with this: software enhancement is done via filtering, and you need to be aware of the possible resulting changes to your measurement bandwidth. Two resources with more details on this mode include the ECN magazine article “How To Get More than 8 Bits from Your 8-bit Scope” and the Teledyne LeCroy application note “Enhanced Resolution.” Remember that a 12-bit, 100-MHz bandwidth oscilloscope is not the same as an 8-bit, 100-MHz bandwidth oscilloscope with resolution enhancement!
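To make the trade-off concrete, resolution enhancement is essentially averaging (low-pass filtering) groups of samples: roughly one extra bit costs a factor-of-four reduction in effective sample rate, and with it the usable bandwidth. Here is a hedged sketch of the idea; actual vendor implementations differ in the filters they use:

```python
import numpy as np

def enhance_resolution(samples, extra_bits):
    """Boxcar-average groups of 4**extra_bits samples: roughly one extra bit per
    factor of 4, at the cost of a matching cut in effective sample rate/bandwidth."""
    samples = np.asarray(samples, dtype=float)
    group = 4 ** extra_bits
    usable = len(samples) // group * group
    return samples[:usable].reshape(-1, group).mean(axis=1)
```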

Using the oscilloscope’s fast Fourier transform (FFT) mode (normally advertised as the spectrum analyzer mode), we can see the difference a higher-resolution ADC makes. When looking at a waveform on the screen, you may think that you don’t care at all about 14-bit accuracy or something similar. However, if you plan to do measurements such as total harmonic distortion (THD), or otherwise need accurate information about frequency components, having high resolution may be extremely important to achieve a reasonable dynamic range.

As a theoretical example, I’m using my script mentioned earlier, which digitizes a perfect sine wave and then displays the frequency spectrum. The number of bits in the ADC (i.e., the quantization) is adjustable, so any harmonic content is solely due to quantization error. This is shown in Figure 4. If you want to see a version of this using a real instrument, I conduct a similar demonstration in this video.
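A minimal stand-in for that script is shown below: quantize a pure, coherently sampled sine at different bit depths and compare the measured SNR with the textbook 6.02N + 1.76 dB figure. The record length and bin choice are assumptions for illustration:

```python
import numpy as np

n = 2 ** 14
k = 655                                     # odd bin number -> roughly a 10-MHz tone at 250 MS/s
sine = np.sin(2 * np.pi * k * np.arange(n) / n)

for bits in (8, 12, 16):
    scale = 2 ** (bits - 1) - 1
    quantized = np.round(sine * scale) / scale          # ideal N-bit quantizer
    spectrum = np.abs(np.fft.rfft(quantized)) ** 2
    signal_power = spectrum[k]
    noise_power = spectrum[1:].sum() - signal_power     # everything except DC and the tone
    snr = 10 * np.log10(signal_power / noise_power)
    print(f"{bits:2d} bits: SNR = {snr:5.1f} dB  (ideal = {6.02 * bits + 1.76:.1f} dB)")
```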

Certain applications may find the higher bit resolution a necessity. For example, if you are working in high-fidelity audio applications, you won’t be too worried about an extremely high sample rate, but you will need the high resolution.

Figure 4: In the frequency domain, the effect of limited quantization bits is much more apparent. Here a 10-MHz pure sine wave frequency spectrum is taken using a different number of bits during the quantization process.

Coming Up
This week I’ve taken a look at some of the core specifications. I hope the questions to ask when purchasing an oscilloscope are becoming clearer! Next week, I’ll be looking at the software running the oscilloscope, and details such as remote control, FFT features, digital decoding, and buffer types. The fourth and final week will delve into a few remaining features such as external trigger and clock synchronization and will summarize all the material I’ve covered in this series.

Author’s note: Every reasonable effort has been made to ensure example specifications are accurate. There may, however, be errors or omissions in this article. Please confirm all referenced specifications with the device vendor.