Synchronized RF Transceiver Rapid Prototyping Kit for SDR

Analog Devices recently announced a software-defined radio (SDR) rapid prototyping kit with dual 2 × 2 AD9361 RF transceivers that simplifies and speeds the prototyping of 4 × 4 MIMO wireless transceiver applications on Xilinx Zynq-7000 All Programmable SoC development platforms. The AD-FMCOMMS5-EBZ rapid prototyping kit provides a hardware/software ecosystem that addresses the SDR transceiver synchronization challenges RF and analog designers face when implementing MIMO architectures. A webinar on how to synchronize multiple RF transceivers in high-channel-density applications is available.

Source: Analog Devices

The AD-FMCOMMS5-EBZ rapid prototyping kit includes the following:

  • An FPGA mezzanine card (FMC) featuring two Analog Devices AD9361 2 × 2 RF transceivers and support circuitry
  • Reference designs
  • MathWorks design and simulation tools
  • HDL (hardware description language) code
  • Device drivers for Zynq-7000 All Programmable SoCs
  • Online support at ADI’s EngineerZone for rapid prototyping to reduce development time and risk.

The AD-FMCOMMS5-EBZ rapid prototyping kit is the fifth SDR rapid prototyping kit ADI has introduced in the last year to help customers address the global SDR market. SDR MIMO applications range from defense electronics and RF instrumentation to communications infrastructure and include active antennas, transmit beamforming, receive angle of arrival systems, and open-source SDR development projects.

The AD9361 operates over a frequency range of 70 MHz to 6 GHz. It is a complete radio design that combines multiple functions, including an RF front end, mixed-signal baseband section, frequency synthesizers, two analog-to-digital converters and two direct conversion receivers in a single chip. The AD9361 supports channel bandwidth from less than 200 kHz to 56 MHz, and is highly programmable, offering the widest dynamic range available in the market today with state-of-the-art noise figure and linearity.

Source: Analog Devices

 

Bit Banging

Shlomo Engelberg, an associate professor in the electronics department of the Jerusalem College of Technology, is well-versed in signal processing. As an instructor and the author of several books, including Digital Signal Processing: An Experimental Approach (Springer, 2008), he is a skilled guide to how to use the UART “protocol” to implement systems that transmit and receive data without a built-in peripheral.

Implementing serial communications using software rather than hardware is called bit-banging, the topic of his article in Circuit Cellar’s June issue.

“There is no better way to understand a protocol than to implement it yourself from scratch,” Engelberg says. “If you write code similar to what I describe in this article, you’ll have a good understanding of how signals are transmitted and received by a UART. Additionally, sometimes relatively powerful microprocessors do not have a built-in UART, and knowing how to implement one in software can save you from needing to add an external UART to your system. It can also reduce your parts count.”

In the excerpt below, he explains some UART fundamentals:

WHAT DOES “UART” MEAN?
UART stands for universal asynchronous receiver/transmitter. The last three words in the acronym are easy enough to understand. “Asynchronous” means that the transmitter and the receiver run on their own clocks. There is no need to run a wire between the transmitter and the receiver to enable them to “share” a clock (as required by certain other protocols). The receiver/transmitter part of the acronym means just what it says: the protocol tells you what signals you need to send from the transmitter and what signals you should expect to acquire at the receiver.

The first term of the acronym, “universal,” is a bit more puzzling. According to Wikipedia, the term “universal” refers to the fact that the data format and the speed of transmission are variable. My feeling has always been that the term “universal” is basically hype; someone probably figured a “universal asynchronous receiver/transmitter” would sell better than a simple “asynchronous receiver/transmitter.”

Figure 1: The waveform output by a microprocessor's UART is shown. While "at rest," the UART's output is in the high state. The transmission begins with a start bit in which the UART's output is low. The start bit is followed by eight data bits. Finally, there is a stop bit in which the UART's output is high.

TEAMWORK NEEDED
Before you can use a UART to transfer information from device to device, the transmitter and receiver have to agree on a few things. First, they must agree on a transmission speed. They must agree that each transmitted bit will have a certain (fixed) duration, denoted TBIT. A 1/9,600-s duration is a typical choice, related to a commonly used crystal’s clock speed, but there are many other possibilities. Additionally, the transmitter and receiver have to agree about the number of data bits to be transmitted each time, the number of stop bits to be used, and the flow control (if any).

When I speak of the transmitter and receiver “agreeing” about these points, I mean that the people programming the transmitting and receiving systems must agree to use a certain data rate, for example. There is no “chicken and egg” problem here. You do not need to have an operational UART before you can use your UART; you only need a bit of teamwork.

UART TRANSMISSION
Using a UART is considered the simplest way of transmitting information. Figure 1 shows the form every transmission must take. The line along which the signal is transmitted is initially "high." The transmission begins with a single start bit, during which the line is pulled low (as all UART transmissions must begin). It carries eight data bits (neither more nor fewer) and ends with a single stop bit (not one and a half or two stop bits), during which the line is once again held high. (Flow control is not used in this article.)

Why must this protocol include start and stop bits? The transmitter and the receiver do not share a common clock, so how does the receiver know when a transmission has begun? The wire connecting them is held high whenever no transmission is taking place, so the receiver "watches" the wire and waits for the voltage level to transition from high to low; in other words, it waits for a start bit. When the wire leaves its "rest state" and goes low, the receiver knows that a transmission has begun. The stop bit guarantees that the line returns to its "high" level at the end of each transmission.

Because transmissions have a start and a stop bit, the UART can tell two words apart even if the data word 11111111 is transmitted and immediately followed by another 11111111. When the UART is "looking at" a line on which such a transmission is beginning, it sees an initial low level (the start bit), the high level repeated eight times, a ninth high level (the stop bit), and then the pattern repeats. The start bit's presence enables the UART to determine what's happening. If the data word being transmitted were 00000000 followed by 00000000, then the stop bit would save the day.

The type of UART connection I describe in this article only requires three wires. One wire is for transmission, one is for reception, and one connects the two systems’ grounds.

The receiver and transmitter both know that each bit in the transmission takes TBIT seconds. After seeing a voltage drop on the line, the receiver waits for TBIT/2 s and re-examines the line. If it is still low, the receiver assumes it is in the middle of the start bit. It waits TBIT seconds and resamples the line. The value it sees is then used to determine data bit 0’s value. The receiver then samples every TBIT seconds until it has sampled all the data bits and the stop bit.
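
To make the framing and timing concrete, here is a minimal software-UART sketch. It is not Engelberg's ADuC841 code; it is written in Python with placeholder pin-access functions (set_tx and read_rx) that you would replace with whatever GPIO mechanism your hardware provides, and it assumes the standard UART convention of sending the least significant data bit first.

```python
import time

T_BIT = 1.0 / 9600            # bit duration TBIT in seconds (9,600 bps)

def set_tx(level: int) -> None:
    """Placeholder: drive the TX pin high (1) or low (0) on your hardware."""
    raise NotImplementedError

def read_rx() -> int:
    """Placeholder: sample the RX pin, returning 1 (high) or 0 (low)."""
    raise NotImplementedError

def send_byte(byte: int) -> None:
    """Transmit one 8N1 frame: start bit, eight data bits (LSB first), stop bit."""
    set_tx(0)                  # start bit: pull the line low
    time.sleep(T_BIT)
    for i in range(8):         # data bits, least significant bit first
        set_tx((byte >> i) & 1)
        time.sleep(T_BIT)
    set_tx(1)                  # stop bit: return the line to its high rest state
    time.sleep(T_BIT)

def receive_byte() -> int:
    """Receive one 8N1 frame using the mid-bit sampling scheme described above."""
    while read_rx() == 1:      # wait for the high-to-low edge of the start bit
        pass
    time.sleep(T_BIT / 2)      # move to the middle of the start bit
    if read_rx() != 0:
        return -1              # the line bounced back high: not a real start bit
    byte = 0
    for i in range(8):
        time.sleep(T_BIT)      # advance one full bit period per data bit
        byte |= read_rx() << i
    time.sleep(T_BIT)          # land in the middle of the stop bit
    return byte if read_rx() == 1 else -1   # -1 signals a framing error
```

Note that time.sleep() on a desktop operating system cannot hold 104-µs bit periods accurately; a real implementation on a microcontroller would pace the bits with a hardware timer instead.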

Engelberg’s full article, which you can find in Circuit Cellar’s June issue, goes on to explain UART connections and how he implemented a simple transmitter and receiver. For the projects outlined in his article, he used the evaluation kit for Analog Devices’s ADuC841.

“The transmitter and the receiver are both fairly simple to write. I enjoyed writing them,” Engelberg says in wrapping up his article. “If you like playing with microprocessors and understanding the protocols with which they work, you will probably enjoy writing a transmitter and receiver too. If you do not have time to write the code yourself but you’d like to examine it, feel free to e-mail me at shlomoe@jct.ac.il. I’ll be happy to e-mail the code to you.”

Pulse-Shaping Basics

Pulse shaping (i.e., base-band filtering) can vastly improve the behavior of wired or wireless communication links in an electrical system. With that in mind, Circuit Cellar columnist Robert Lacoste explains the advantages of filtering and examines Fourier transforms; random non-return-to-zero (NRZ) signaling; and low-pass, Gaussian, Nyquist, and raised-cosine filters.

Lacoste’s article, which appears in Circuit Cellar’s April 2014 issue, includes an abundance of graphic simulations created with Scilab Enterprises’s open-source software. The simulations will help readers grasp the details of pulse shaping, even if they aren’t math experts. (Note: You can download the Scilab source files Lacoste developed for his article from Circuit Cellar’s FTP site.)

Excerpts from Lacoste’s article below explain the importance of filtering and provide a closer look at low-pass filters:

WHY FILTERING?
I’ll begin with an example. Imagine you have a 1-Mbps continuous digital signal you need to transmit between two points. You don’t want to specifically encode these bits; you just want to transfer them one by one as they are.

Before transmission, you will need to transform the 1s and 0s into an actual analog signal any way you like. You can use a straightforward method: simply define a pair of voltages (e.g., 0 and 5 V), and put 0 V on the line for a 0-level bit and 5 V on the line for a 1-level bit.

This method is pedantically called non-return-to-zero (NRZ). This is exactly what a TTL UART is doing; there is nothing new here. This analog signal (i.e., the base-band signal) can then be sent through the transmission channel and received at the other end (see top image in Figure 1).


Note: In this article I am not considering any specific transmission channel. It could range from a simple pair of copper wires to elaborate wireless links using amplitude, frequency and/or phase modulation, power line modems, or even optical links. Everything I will discuss will basically be applicable to any kind of transmission as it is linked to the base-band signal encoding prior to any modulation.

Figure 1: Directly transmitting a raw digital signal, such as this 1-Mbps non-return-to-zero (NRZ) stream (top), is a waste of bandwidth. Using a pulse-shaping filter (bottom) reduces the required bandwidth for the same bit rate, but with a risk of increased transmission errors.


Now, what is the issue when using simple 0/5-V NRZ encoding? Bandwidth efficiency. You will use more megahertz than needed for your 1-Mbps signal transmission. This may not be an issue if the channel has plenty of extra capacity (e.g., if you are using a Category 6 1-Gbps-compliant shielded twisted pair cable to transmit these 1 Mbps over a couple of meters).


Unfortunately, in real life you will often need to optimize the bandwidth. This could be for cost reasons, for environmental concerns (e.g., EMC perturbations), for regulatory issues (e.g., RF channelization), or simply to increase the effective bit rate as much as possible for a given channel.


Therefore, a good engineering practice is to use just the required bandwidth through a pulse-shaping filter. This filter is fitted between your data source and the transmitter (see bottom of Figure 1).


The filter’s goal is to reduce as much as possible the occupied bandwidth of your base-band signal without affecting the system performance in terms of bit error rate. These may seem like contradictory requirements. How can you design such a filter? That’s what I will try to explain in this article….


LOW-PASS FILTERS

A base-band filter is needed between the binary signal source and the transmission medium or modulator. But what characteristics should this filter have? It must attenuate the unnecessary high frequencies as quickly as possible, but it must also enable the receiver to decode the signal without errors, or more exactly, without more errors than specified. You will need a low-pass filter to limit the high frequencies. As a first example, I used a classic second-order Butterworth filter with varying cut-off frequencies for the simulation. Figure 2 shows the results. Let me explain the graphs.

Figure 2: This random non-return-to-zero (NRZ) signal (top row) was passed through a second-order Butterworth low-pass filter. When the cut-off frequency is low (310 kHz), the filtered signal (middle row) is distorted and the eye diagram is closed. With a higher cutoff (410 kHz, bottom row), the intersymbol interference (ISI) is lower but the frequency content is visible up to 2 MHz.

The leftmost column shows the signal frequency spectrum after filtering with the filter frequency response in red as a reference. The middle column shows a couple of bits of the filtered signal (i.e., in the time domain), as if you were using an oscilloscope. Last, the rightmost column shows the received signal’s so-called “eye pattern.” This may seem impressive, but the concept is very simple.

Imagine you have an oscilloscope. Trigger it on any rising or falling edge of the signal, scale the display to show one bit time in the middle of the screen, and accumulate plenty of random bits on the screen. You've got the eye diagram. It provides a visual representation of how difficult it will be for the receiver to recover the bits. The more "open" the eye, the easier it is. Moreover, if the successive bits' trajectories don't superimpose on each other, there is a kind of memory effect: the voltage for a given bit varies depending on the previously transmitted bits. This phenomenon is called intersymbol interference (ISI), and it makes decoding significantly more difficult.

Take another look at the Butterworth filter simulations. The first row is the unfiltered signal, shown as a reference (see Figure 2, top row). The second row, with a 3-dB cut-off frequency of 310 kHz, shows a frequency spectrum significantly reduced above 1 MHz but a high level of ISI; the eye diagram is nearly closed (see Figure 2, middle row). The third row shows the result with a 410-kHz Butterworth low-pass filter (see Figure 2, bottom row). Its ISI is significantly lower, even if it is still visible. (The successive spot trajectories don't pass through the same single point.) Nevertheless, the frequency spectrum is far cleaner than the raw signal's, at least above 2 MHz.
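
Lacoste built his simulations in Scilab, and his source files are available on Circuit Cellar's FTP site. For readers who prefer Python, a rough equivalent of the second-order Butterworth experiment might look like the sketch below; the 1-Mbps bit rate and 310-kHz cut-off come from the article, while the sample rate, bit count, and plotting details are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter

BIT_RATE = 1e6                 # 1-Mbps NRZ stream, as in the article
SAMPLES_PER_BIT = 32
FS = BIT_RATE * SAMPLES_PER_BIT
N_BITS = 2000

# Random NRZ base-band signal (0/1 levels)
rng = np.random.default_rng(seed=0)
bits = rng.integers(0, 2, N_BITS)
nrz = np.repeat(bits, SAMPLES_PER_BIT).astype(float)

# Second-order Butterworth low-pass filter, 310-kHz cut-off (Figure 2, middle row)
b, a = butter(2, 310e3 / (FS / 2))
filtered = lfilter(b, a, nrz)

# Eye diagram: overlay many two-bit-long slices of the filtered waveform
span = 2 * SAMPLES_PER_BIT
for start in range(10 * SAMPLES_PER_BIT, len(filtered) - span, SAMPLES_PER_BIT):
    plt.plot(filtered[start:start + span], color="b", alpha=0.05)
plt.xlabel("Sample index within a two-bit window")
plt.ylabel("Amplitude")
plt.title("Eye diagram, 2nd-order Butterworth, 310-kHz cut-off")
plt.show()
```

Re-running the script with a 410-kHz cut-off (or with no filter at all) reproduces the qualitative behavior described above: the eye opens up as the cut-off frequency rises, at the cost of a wider occupied spectrum.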

Lacoste's article serves as a solid introduction to the broad subject of pulse shaping. And it concludes by re-emphasizing a few important points and additional resources for readers:

Transmitting a raw digital signal on any medium is a waste of bandwidth. A filter can drastically improve the performance. However, this filter must be well designed to minimize intersymbol interference.

The ideal solution, namely the Nyquist filter, enables you to restrict the used spectrum to half the transmitted bit rate. However, this filter is just a mathematician's dream. Raised-cosine filters and Gaussian filters are two classes of real-life filters that can provide an adequate trade-off between complexity and performance.

At least you will no longer be surprised if you see references to such filters in electronic parts’ datasheets. As an example, see Figure 3, which is a block diagram of Analog Devices’s ADF7021 high-performance RF transceiver.

Figure 3: This is a block diagram of Analog Devices’s ADF7021 high-performance transceiver. On the bottom right there is a “Gaussian/raised cosine filter” block, which is a key factor in efficient RF bandwidth usage.

The subject is not easy and can be easily misunderstood. I hope this article will encourage you to learn more about the subject. Bernard Sklar’s book Digital Communications: Fundamentals and Applications is a good reference. Playing with simulations is also a good way to understand, so don’t hesitate to read and modify the Scilab examples I provided for you on Circuit Cellar’s FTP site.  
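
In the same spirit, if you would rather experiment in Python than in Scilab, raised-cosine taps are easy to generate directly from the textbook impulse response. The sketch below is one way to do it; the roll-off factor, samples per symbol, and span are free parameters rather than values taken from Lacoste's article.

```python
import numpy as np

def raised_cosine_taps(beta: float, sps: int, span: int) -> np.ndarray:
    """Raised-cosine impulse response.
    beta: roll-off factor (0 to 1), sps: samples per symbol, span: filter length in symbols."""
    t = np.arange(-span * sps / 2, span * sps / 2 + 1) / sps   # time in symbol periods
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.sinc(t) * np.cos(np.pi * beta * t) / (1.0 - (2.0 * beta * t) ** 2)
    # Patch the 0/0 points at t = +/- 1/(2*beta) with their limiting value
    if beta > 0:
        singular = np.isclose(np.abs(2.0 * beta * t), 1.0)
        h[singular] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return h / np.sum(h)       # normalize for unity gain at DC

# Example: shape an NRZ stream with a 0.35 roll-off raised-cosine filter
# (assumes the nrz array from the previous sketch)
# taps = raised_cosine_taps(beta=0.35, sps=32, span=8)
# shaped = np.convolve(nrz - 0.5, taps, mode="same")
```

A Gaussian filter can be generated the same way from its own impulse response; whichever shape you pick, the point is to replace the abrupt 0-to-5-V steps with smooth, band-limited transitions.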

Lacoste’s full article is in the April issue, now available for membership download or single issue purchase. And for more information about improving the efficiency of wireless communication links, check out Lacoste’s 2011 article “Line-Coding Techniques,” Circuit Cellar 255, which tells you how you can encode your bits before transmission.

A Look at Low-Noise Amplifiers

Maurizio Di Paolo Emilio, who has a PhD in Physics, is an Italian telecommunications engineer who works mainly as a software developer with a focus on data acquisition systems. Emilio has authored articles about electronic designs, data acquisition systems, power supplies, and photovoltaic systems. In this article, he provides an overview of what is generally available in low-noise amplifiers (LNAs) and some of the applications.

By Maurizio Di Paolo Emilio
An LNA, or preamplifier, is an electronic amplifier used to amplify signals that can be very weak. To minimize signal power loss, it is usually located close to the signal source (antenna or sensor). An LNA is ideal for many applications including low-temperature measurements, optical detection, and audio engineering. This article presents LNA systems and ICs.

Signal amplifiers are electronic devices that can amplify a relatively small signal from a sensor (e.g., temperature sensors and magnetic-field sensors). The parameters that describe an amplifier’s quality are:

  • Gain: The ratio between output and input power or amplitude, usually measured in decibels (see the note after this list)
  • Bandwidth: The range of frequencies in which the amplifier works correctly
  • Noise: The noise level introduced in the amplification process
  • Slew rate: The maximum rate of voltage change per unit of time
  • Overshoot: The tendency of the output to swing beyond its final value before settling down
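
A note on the decibel scale: for power, gain in dB is 10 × log10(POUT/PIN); for voltage or current amplitude it is 20 × log10(VOUT/VIN), so a voltage gain of 100, for example, corresponds to 40 dB.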

Feedback amplifiers feed a portion of the output back to the input so that the negative feedback opposes the original signal (see Figure 1). Feedback in amplifiers provides better performance. In particular, it increases amplification stability, reduces distortion, and increases the amplifier's bandwidth.
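
The standard closed-loop result makes this concrete: with a forward gain A and a feedback fraction β, the closed-loop gain is ACL = A/(1 + Aβ). When the loop gain Aβ is large, ACL ≈ 1/β, so the overall gain is set almost entirely by the (typically passive and stable) feedback network rather than by the open-loop amplifier, which is why feedback improves gain stability and reduces distortion.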

Figure 1: A feedback amplifier model is shown.

A preamplifier amplifies an analog signal, generally in the stage that precedes a higher-power amplifier.

IC LOW-NOISE PREAMPLIFIERS
Op-amps are widely used as AC amplifiers. Linear Technology’s LT1028 or LT1128 and Analog Devices’s ADA4898 or AD8597 are especially suitable ultra-low-noise amplifiers. The LT1128 is an ultra-low-noise, high-speed op-amp. Its main characteristics are:

  • Noise voltage: 0.85 nV/√Hz at 1 kHz
  • Bandwidth: 13 MHz
  • Slew rate: 5 V/µs
  • Offset voltage: 40 µV

Both the Linear Technology and Analog Devices amplifiers have a voltage noise density of around 1 nV/√Hz at 1 kHz and also offer excellent DC precision. Texas Instruments (TI) offers some very low-noise amplifiers. They include the OPA211, which has a 1.1-nV/√Hz noise density while drawing 3.6 mA from a 5-V supply, and the LME49990, which has very low distortion. Maxim Integrated offers the MAX9632, with noise below 1 nV/√Hz.
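
To put these densities in perspective, a quick back-of-the-envelope calculation (not a datasheet figure): a flat 1-nV/√Hz noise floor integrated over a 100-kHz measurement bandwidth corresponds to roughly 1 nV × √100,000 ≈ 0.32 µV RMS of input-referred noise, ignoring the 1/f contribution at low frequencies.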

The op-amp can be realized with bipolar junction transistors (BJTs), as in the case of the LT1128, or with MOSFETs, which work at higher frequencies, with a higher input impedance and lower energy consumption. The differential structure is used in applications where it is necessary to eliminate the undesired common-mode components at the two inputs. Because of this, low-frequency and DC common-mode signals (e.g., thermal drift) are eliminated at the output. The differential gain can be defined as Ad = A2 – A1 and the common-mode gain as Ac = (A1 + A2)/2.

An important parameter is the common-mode rejection ratio (CMRR), which is the ratio of the differential-mode gain to the common-mode gain. This parameter is used to measure the differential amplifier's performance.
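
In formula form, CMRR = Ad/Ac, usually quoted in decibels as 20 × log10(|Ad/Ac|); an amplifier with Ad = 1,000 and Ac = 0.01, for example, has a CMRR of 100 dB.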

Figure 2: The design of a simple preamplifier is shown. Its main components are the Linear Technology LT1128 and the Interfet IF3602 junction field-effect transistor (JFET).

Figure 2 shows a simple preamplifier's design with 0.8 nV/√Hz background noise at 1 kHz. Its main components are the LT1128 and the Interfet IF3602 junction field-effect transistor (JFET). The IF3602 is a dual N-channel JFET used as the input stage ahead of the op-amp. Figure 3 shows the gain and Figure 4 shows the noise response.

Figure 3: The gain of a low-noise preamplifier is shown.

Figure 4: A low-noise preamplifier's noise response is shown.

LOW NOISE PREAMPLIFIER SYSTEMS
The Stanford Research Systems SR560 low-noise voltage preamplifier has a differential front end with 4-nV/√Hz input noise and a 100-MΩ input impedance (see Photo 1a). Input offset nulling is accomplished by a front-panel potentiometer, which is accessible with a small screwdriver. In addition to the signal inputs, a rear-panel TTL blanking input enables you to quickly turn the instrument's gain on and off (see Photo 1b).

Photo 1a: The Stanford Research Systems SR560 low-noise voltage preamplifier. (Photo courtesy of Stanford Research Systems)

Photo 1b: A rear-panel TTL blanking input enables you to quickly turn the Stanford Research Systems SR560 gain on and off. (Photo courtesy of Stanford Research Systems)

The Picotest J2180A low-noise preamplifier provides a fixed 20-dB gain while converting a 1-MΩ input impedance to a 50-Ω output impedance, with a 0.1-Hz to 100-MHz bandwidth (see Photo 2). The preamplifier is used to improve the sensitivity of oscilloscopes, network analyzers, and spectrum analyzers while reducing the effective noise floor and spurious response.

Photo 2: The Picotest J2180A low-noise preamplifier is shown. (Photo courtesy of picotest.com)

Signal Recovery’s Model 5113 is among the best low-noise preamplifier systems. Its principal characteristics are:

  • Single-ended or differential input modes
  • DC to 1-MHz frequency response
  • Optional low-pass, band-pass, or high-pass signal channel filtering
  • Sleep mode to eliminate digital noise
  • Optically isolated RS-232 control interface
  • Battery or line power

The 5113 (see Photo 3 and Figure 5) is used in applications as diverse as radio astronomy, audiometry, test and measurement, process control, and general-purpose signal amplification. It’s also ideally suited to work with a range of lock-in amplifiers.

Photo 3: This is the Signal Recovery Model 5113 low-noise preamplifier. (Photo courtesy of Signal Recovery)

Figure 5: Noise contour figures are shown for the Signal Recovery Model 5113.

WRAPPING UP
This article briefly introduced low-noise amplifiers, in particular IC designs used in simple and more complex systems such as the Signal Recovery Model 5113, a classic amplifier able to select different frequency bands with the corresponding gain. A similar device is the SR560, a high-performance, low-noise preamplifier that is ideal for a wide variety of applications, including low-temperature measurements, optical detection, and audio engineering.

Moreover, the Krohn-Hite custom Models 7000 and 7008 low-noise differential preamplifiers provide high-gain amplification up to 1 MHz, with an AC output derived from a very-low-noise FET instrumentation amplifier.

One common LNA application is in satellite communications systems. The ground station's receiving antenna connects to an LNA, which is needed because the received signal is weak, usually only a little above the background noise. Satellites have limited power, so they use low-power transmitters.
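
Friis's cascade formula explains why this first stage matters so much: for a chain of stages with noise factors F1, F2, F3, … and gains G1, G2, …, the total noise factor is Ftotal = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 × G2) + …. With a high-gain, low-noise first stage, the noise contributed by everything downstream is divided by G1 and becomes almost negligible, so the LNA's own noise figure largely sets the receiver's sensitivity.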

Telecommunications engineer Maurizio Di Paolo Emilio was born in Pescara, Italy. Working mainly as a software developer with a focus on data acquisition systems, he helped design the thermal compensation system (TCS) for the optical system used in the Virgo Experiment (an experiment for detecting gravitational waves). Maurizio currently collaborates with researchers at the University of L’Aquila on X-ray technology. He also develops data acquisition hardware and software for industrial applications and manages technical training courses. To learn more about Maurizio and his expertise, read his essay on “The Future of Data Acquisition Technology.”