Put a FIR Filter to Work
Working with finite impulse response (FIR) filters isn’t black magic. Robert covers the basics and touches on the mathematical concepts you’ll need to understand in order to put a FIR filter to work.
Welcome to the darker side! Because this month’s theme is signal processing, I scratched my head to find the best subject for this column. I wanted something that would be more than useful for your applications, reasonably easy to use, and still a little magical. It also had to be a subject that I could easily explain in a few pages. I think I found the perfect candidate. In this column, I’ll describe a well-known and very powerful class of digital filters: finite impulse response filters, or FIR for short.
As usual, I will use plain English to explain the concepts, but some mathematical concepts will still appear from time to time. However, I will try to stay as close to the bench as possible. So, don’t be afraid, breathe normally, and keep reading. If you get lost, just skip to the “FIR for You” section!
BACK TO BASICS
Before digging into the topic of FIR filters, let’s start by refreshing our neurons on the Fourier transform. As you may have noticed (even after the talented Circuit Cellar editors corrected my approximate English), I’m not a U.S. citizen. I’m French. So, I can’t resist telling you that Jean Baptiste Joseph Fourier (1768–1830) was a French mathematician and physicist. His life was quite busy. He was deeply involved in the French Revolution and kept his head on his shoulders only by chance. He then went with Napoleon during his expeditions and became governor of Lower Egypt. Later, he returned to Europe and to science and published his well-known treatise on the theory of heat in 1822, which brings us back to our subject. In the treatise, Fourier claims that any function of a variable can be expanded in a series of sines of multiples of the variable. This transformation was later named the Fourier transform.
There are books just on the mathematical theory behind the concept, but I will focus on the “engineer’s view.” For an engineer, a function is usually a table of discrete numbers, so I will never use the real Fourier transform. I will use its discrete time equivalent, the discrete Fourier transform (DFT). I will also restrict my presentation to “real” signals (as opposed to Fourier transforms of complex numbers), and for easier reading, I will skip a number of implementation details, which are posted on the Circuit Cellar FTP site.
Suppose that you worked on some kind of digitizer and got 1,024 samples of a real-life signal in RAM with a sampling rate of 250 ksps. As you know, the sampling rate limits your knowledge about the signal to frequencies below the Nyquist limit:

f_Nyquist = f_sampling / 2 = 250 kHz / 2 = 125 kHz
A DFT takes the 1,024 time domain samples and converts them into their frequency domain representation, from DC to 125 kHz, with a frequency resolution of:

Δf = f_sampling / N = 250,000 / 1,024 ≈ 244.14 Hz
So, you started with 1,024 samples and you got only 512 frequency bins (i.e., 125,000/244.14). Why half the number of samples? The trick is that the outputs of the DFT are not real numbers. They are complex numbers with two components for each frequency. A complex number is equivalent to a magnitude and also to a phase (see Figure 1). What does it mean? The DFT operation transformed your 1,024 time-domain digitized samples into a sum of 512 sinusoidal signals with respective frequencies of 0 (DC value), 244.14 Hz, 488.28 Hz, 732.42 Hz, and so on, up to 125 kHz, each with a given amplitude and a given phase. It is important to understand that this is not an “approximation,” it is just another way to see the same signal. The best proof is that the DFT has its reciprocal, the inverse discrete Fourier transform (IDFT), which returns your original signal from its spectral representation. And, an IDFT is nothing more than a DFT, just a different scaling factor somewhere.
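To make the DFT/IDFT round trip concrete, here is a minimal sketch in plain Python (the article's examples use Scilab; the naive transform functions here are my own illustrative helpers). It transforms the eight-point signal from Listing 1 and checks that the inverse transform really returns the original samples.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: O(n^2), fine for small n."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * b * k / n) for k in range(n))
            for b in range(n)]

def idft(bins):
    """Inverse DFT: the same sum with the opposite sign, scaled by 1/n."""
    n = len(bins)
    return [sum(bins[k] * cmath.exp(2j * cmath.pi * i * k / n) for k in range(n)) / n
            for i in range(n)]

signal = [4.4, 2.3, -3.5, -1.2, -2.8, 2.0, 1.6, 2.5]
spectrum = dft(signal)                       # 8 complex bins
rebuilt = [c.real for c in idft(spectrum)]   # imaginary parts are ~0 for real input
```

Bin 0 is the (unscaled) DC term, and for a real signal the bins above n/2 are complex conjugates of the lower ones, which is why only half the bins carry new information.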
Why not experiment? You may be familiar with tools like MATLAB, but all of the examples presented in this article were done with Scilab, which is a similar tool. Scilab is open-source. I love it just because it works under the OS of your choice, it is supplied with plenty of compatible toolboxes for signal processing and graphics, and it is free (www.scilab.org). Listing 1 is a short Scilab script, which actually decomposes a very short signal into its Fourier coefficients and reconstructs the signal as a sum of sines. Figure 2 is the output.
Listing 1—This short Scilab script defines an eight-point arbitrary signal, calculates its Fourier transform, extracts the amplitude and phase of each sinusoidal component, and allows you to verify (on a graph) that the original signal and its recomposition are identical (see Figure 2).

//
// FFT of a very small signal and the corresponding decomposition
//
// Just take an eight-point signal
signal=[4.4 2.3 -3.5 -1.2 -2.8 2.0 1.6 2.5];
// Plot it in a 3-box graph, with rollover of the last to first point
subplot(1,3,1);
plot(signal,'black');
xtitle('A simple eight-point signal');
// Calculate its Fourier transform, giving eight complex numbers
spectrum=fft(signal);
// However, as the input signal is real then only the first half
// of the FFT is actually useful (the last half is the complex
// conjugate of the first half):
usefulspectrum=spectrum(1:$/2+1);
// Calculate amplitudes and phases of all frequency bins
amplitudes=abs(usefulspectrum)/8;
phases=imag(log(usefulspectrum));
// First amplitude is the DC value of the signal (phase is always 0)
f0=amplitudes(1);
// Then generate the sinusoidal waves for the harmonics 1, 2, 3 and
// 4. Each should be doubled because there is a complex conjugate
// partner in the second half of the Fourier transform, except the
// last one (which is real)
x=2*%pi*(0:80)/80; // calculate it on 80 points
f1=2*amplitudes(2)*cos(x+phases(2));
f2=2*amplitudes(3)*cos(2*x+phases(3));
f3=2*amplitudes(4)*cos(3*x+phases(4));
f4=amplitudes(5)*cos(4*x+phases(5));
// The signal is the sum of the DC value and all harmonics
f=f0+f1+f2+f3+f4;
subplot(1,3,2);
plot(f1,'green');
plot(f2,'green');
plot(f3,'green');
plot(f4,'green');
plot(f,'red');
xtitle('DFT decomposition and sum of sines');
// Compare original signal and recomposition
subplot(1,3,3);
plot(f,'red');
plot2d([0:7]*10+1,signal);
xtitle('Original and recomposed signals compared');
halt();
xdel();
Listing 2—This Scilab script is the complete code needed to synthesize and simulate a FIR filter. The actual output of the code is in Figure 8.

//--------------------------------------
// Custom complex filter
//--------------------------------------
// Specification of the desired frequency response
ntaps=200;
wanted(1:ntaps)=0;
wanted(10:20)=1.0;
wanted(40:60)=2.0;
subplot(4,1,1);
plot(wanted);
// Invert it in the second half (negative frequencies)
for i=1:ntaps/2; wanted(ntaps-i+1)=wanted(i); end;
subplot(4,1,2);
plot(wanted);
// Calculate its FFT, which is the impulse response
fircoeff=real(fftshift(fft(wanted)));
subplot(4,1,3);
plot(fircoeff);
// Calculate the actual frequency response
[hm,fr]=frmag(fircoeff,1000);
subplot(4,1,4);
plot2d(fr',hm');
xtitle('','','Actual transfer function');
halt();
xdel();
Just a word on the calculation method. The examples use an optimized fast Fourier transform (FFT) algorithm. This is just the common and efficient way to calculate a DFT on a computer; it is not a new concept. Basically, an FFT gives the same result as a DFT, but it requires n·log(n) rather than n² operations. It’s a great advantage when n starts to get large. Some Fourier transforms of simple signals are illustrated in Figure 3. Take a look at them to get a feeling for a frequency spectrum.
Thanks to the Fourier transform, you can convert a time-domain signal into its frequency spectrum. This gives you a first, straightforward method for implementing digital filters. Filtering a signal is just modifying its frequency spectrum, attenuating or enhancing the subbands you are interested in. That’s really easy with a DFT, using a three-step process: DFT, filter, and IDFT. With this method, if you want to implement a low-pass filter, the first step is to take your input time-domain signal and calculate its Fourier transform. The second step is to remove all components above the wanted cutoff frequency from its spectrum, which can be easily done by multiplying, term by term, the spectrum by a rectangular frequency mask equal to 1 from DC up to the cutoff frequency, and null above it. You now have a filtered spectrum, and you can get back to a filtered time-domain signal with the third step, an inverse Fourier transform. So, for those of you who like equations, if S is the signal and M is the wanted frequency response, then the filtered signal S′ is given by:

S′ = IDFT(DFT(S) × M)
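The three-step process can be sketched in a few lines of plain Python (the article uses Scilab; the naive DFT helper and the 16-point test signal here are my own illustrative assumptions): transform, zero the unwanted bins, transform back.

```python
import cmath, math

def dft(x, sign=-1):
    """Naive DFT; sign=+1 gives the (unscaled) inverse transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * b * k / n) for k in range(n))
            for b in range(n)]

n = 16
# One slow sine (1 cycle) plus an unwanted fast sine (6 cycles)
sig = [math.sin(2 * math.pi * k / n) + 0.5 * math.sin(2 * math.pi * 6 * k / n)
       for k in range(n)]

spec = dft(sig)                                       # step 1: DFT
# step 2: rectangular mask keeping bins 0..2 (and their mirrored negatives)
mask = [1.0 if (b <= 2 or b >= n - 2) else 0.0 for b in range(n)]
spec = [s * m for s, m in zip(spec, mask)]
filtered = [c.real / n for c in dft(spec, sign=+1)]   # step 3: IDFT
```

After the mask, only the slow sine survives: the fast component sat in bins 6 and 10, which the mask zeroed.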
This is illustrated in Figure 4 and in the accompanying Scilab source code. The real strength of the method is that virtually any linear filter can be implemented with the same algorithm. You are not limited to simple filters, such as low-pass filters. Nothing forbids you from designing a more complex filter with several subbands and a different behavior in each subband. You just have to design a frequency mask accordingly. The calculations will be the same and you will also be able to modify that filter very easily.
FROM FREQUENCY TO TIME
The frequency domain filtering method using a DFT and an IDFT is great and it is effectively used, but it has two problems. The first is that a DFT (or IDFT) requires a significant amount of processing power and memory. The second is that the process is block oriented. You must wait until you have a full buffer of data before calculating its DFT. This introduces a long delay in processing, as well as difficulties on the block’s boundaries, requiring complex windowing techniques if you want to filter a continuous signal. Can it be simpler? Yes, but this will unfortunately require a very powerful mathematical result. If you have two signals A and B, then:

DFT(A ⊗ B) = DFT(A) × DFT(B)
What is this crazy ⊗? It is what mathematicians call a convolution. It is nothing more than a couple of multiplications and additions. (Refer to the pseudocode in Figure 5.) Based on the result, trust me (or demonstrate it yourself, it’s easy), the method I have described for frequency domain filters can be done without a DFT or an IDFT. The filtered signal S′ can be calculated by a convolution between the original signal S and a new signal IMP:

S′ = S ⊗ IMP
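The convolution itself is only a nested loop of multiply-accumulates. Here is a minimal sketch in Python (the variable names are mine, not Figure 5's):

```python
def convolve(signal, imp):
    """Direct convolution: each output sample is the sum of recent input
    samples, each weighted by one point of the impulse response."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, h in enumerate(imp):
            if i - j >= 0:          # skip samples before the signal starts
                acc += h * signal[i - j]
        out.append(acc)
    return out
```

Feeding in a unit impulse returns the impulse response itself, which is a handy sanity check of any FIR implementation.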
And IMP is simply the IDFT of the frequency mask that you want to implement. Physically speaking, an IMP is the signal on the output of the filter when a short impulse is applied on its input. That’s why an IMP is called the impulse response of the filter. Figure 4 shows the IDFT of the spectrum mask on the middle-left box. That is an IMP.
So, with this method, the filtering operation is now only based on time-domain signals. You just have to calculate the convolution of the signal to be filtered and the impulse response of the desired filter. Of course, because the frequency mask is a finite length table, the impulse response calculated as its IDFT is also finite. The impulse response is time-limited. That’s why the algorithm is called a FIR. The main point here is that the IMP signal can be precalculated (with an IDFT) when you design your filter. The real-time calculation of the filter then doesn’t need the DFT at all.
If you have been following me so far, you should see that the FIR algorithm gives exactly the same result as the frequency domain digital filtering process we discussed first (Fourier transform, then multiplying the frequency spectrum by a filter mask, then inverse Fourier transform). This is not another method, just a far more effective implementation, especially for time-continuous signals.
FIR FOR YOU
You should now have an idea of where a FIR filter comes from. Let’s stop focusing on the theory and see what you will actually have to do to implement a FIR digital filter in your next design. It will be an easy, seven-step process (see Figure 6).
The first step is to define the overall sampling frequency, which will usually be the clock frequency of your ADC. This task may sound trivial, but it is often the most important step in the design process. And, this is not an easy step, so I will spend a little time on it. As you know, thanks to Nyquist, the sampling frequency should be at least two times higher than the maximum frequency of the input signal. However, this would require a perfectly sharp antialiasing low-pass filter with 100% transfer up to the cutoff frequency and then 100% attenuation, which doesn’t exist anywhere else other than in the minds of mathematicians. In real life, the definition of the sampling frequency will be a difficult cost balance between an expensive antialiasing filter and an expensive, fast ADC and digital filter. Still not convinced? Take a numeric example. Suppose that you need a perfect 8-bit digitization of a DC-to-10-kHz signal. Easy isn’t it? However, you want to get 8 bits of precise measurement, so avoid spurious signals with amplitudes higher than one least significant bit. This means that the antialiasing filter should reduce out-of-band signals to levels 256 times lower than the amplitude of the desired signal. In terms of power, because power is the square of the voltage, the rejection of the antialiasing filter should be better than 1/65,536 above the Nyquist limit, which is:

10 × log(1/65,536) ≈ −48 dB
Do you plan to use a simple first-order RC filter? The filter provides an attenuation of only 6 dB per octave, meaning 6 dB each time the frequency is doubled above the cutoff frequency. To get 48 dB, you will need to double the frequency eight times. This means that the Nyquist frequency should be:

10 kHz × 2^8 = 2.56 MHz
The sampling rate should be no less than 5.12 Msps! Of course, better filters will enable you to limit the requirement. For example, a six-pole filter will provide an attenuation of 36 dB/octave, which will enable you to use a sampling rate of 50 ksps, but that is still far above the theoretical 20 ksps that some engineers would have used (see Figure 7). Even with very good antialiasing filters and modest requirements (8 bits), the sampling frequency should be significantly above double the input signal frequency, and it may be 10 times higher if you want to use simple filters. This is why oversampling is so often a good engineering solution. Think about it for your next project!
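The arithmetic above is easy to check in a few lines of Python (the 8-bit, 10-kHz numbers are the article's example; the rounding to whole octaves follows the article):

```python
import math

# Aliases must stay below 1 LSB of an 8-bit converter: 256x in amplitude,
# hence 256^2 = 65,536x in power
rejection_db = 10 * math.log10(1 / 65536)   # ~ -48 dB

# A first-order RC rolls off 6 dB per octave above its cutoff
octaves = round(-rejection_db / 6)          # ~8 octaves of margin needed

f_signal = 10e3                    # top of the wanted band, Hz
f_nyquist = f_signal * 2**octaves  # 2.56 MHz: where -48 dB is finally reached
f_sample = 2 * f_nyquist           # 5.12 Msps
```

Swapping in a six-pole filter (36 dB/octave) in the same calculation is how you get the far gentler 50-ksps figure.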
The second step in the FIR design process is far easier and quite fun. It consists of defining the theoretical frequency response that you would dream of having. Take a sheet of paper, draw a frequency scale from DC to the Nyquist frequency (half the sampling frequency), and draft the desired filter response. Assume that you need a complex filter like the one in Figure 8. The first box is a filter with two passbands with respective gains of one and two.
The third step involves defining the number of taps of the FIR filter. This is often a trial-and-error process, but you will have a starting point. Remember our discussion about frequency domain filters? Because the FIR taps are calculated from the frequency mask through an inverse Fourier transform, the number of taps of the filter is equivalent to the number of bins on the frequency mask. So, the more complex the target frequency response, the higher the tap count. For example, if you want to design a crude low-pass filter with an “easy” cut-off frequency somewhere in the middle of the passband, an eight-tap FIR filter may be enough. However, if you need a band-pass of 1/100 of the frequency band, you will need significantly more than 100 taps (possibly 500 or more). Figure 9 illustrates the frequency response of a simple low-pass filter for different numbers of taps. For a customer who wants a razor-sharp filter, select a 200-tap filter or more.
The fourth step is the magical one: the synthesis of the FIR filter. Magical, but understandable after the first part of this article. The synthesis of the FIR filter means that you need to calculate the coefficient of each tap. The coefficients are the values of the inverse Fourier transform of the frequency response. Remember: IMP is the IDFT of the frequency mask M. In order to calculate them, use a tool like Scilab to create a table with as many points as the number of taps, fill its first half with the frequency response from DC to the Nyquist frequency, and duplicate it upside down in the second half. (Refer to the second plot in Figure 8.) The duplication is just because the inverse Fourier transform assumes that the input frequencies are both positive and negative ones (trust me). Then, call the inverse Fourier transform function and you get the impulse response of the filter, which gives the coefficient of each successive tap of the FIR filter. (Refer to the third plot in Figure 8.)
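Here is the same synthesis step sketched in plain Python rather than Scilab (the naive inverse DFT and the crude 32-tap low-pass mask are my illustrative assumptions): build the mask, mirror it into the negative-frequency half, take the inverse transform, and rotate the result the way fftshift does.

```python
import cmath

def idft_real(spec):
    """Naive inverse DFT, keeping only the real part (the mask is
    even-symmetric, so the imaginary parts cancel)."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * i * k / n)
                for k in range(n)).real / n
            for i in range(n)]

ntaps = 32
mask = [0.0] * ntaps
for b in range(5):              # pass DC..bin 4: a crude low-pass
    mask[b] = 1.0
    if b > 0:
        mask[ntaps - b] = 1.0   # duplicate upside down: negative frequencies

coeffs = idft_real(mask)
# rotate so the impulse response is centered (Scilab's fftshift)
coeffs = coeffs[ntaps // 2:] + coeffs[:ntaps // 2]
```

The resulting taps are symmetric about the center, which is exactly the linear-phase property exploited later to halve the multiplications.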
Mathematically speaking, the filter will be perfect. The actual response will be the exact specified response at each of the N discrete frequencies used to define it, from DC to 10 kHz with a step of 50 Hz (i.e., 10 kHz/200) in my example. However, in real life, it is also very important to know how the filter will behave for frequencies between these individual points, especially if the tap count is small. This is the fifth step: simulation. You can do the simulation yourself with test software in your preferred language, but that requires you to generate sine signals of slowly increasing frequency, filter them with the synthesized FIR, and then measure and plot the output amplitude for each frequency. Simple, but painful. The other solution is to use a ready-made frequency-response analysis function available in Scilab: frmag. The entire synthesis and simulation code in Scilab is in Listing 2. The actual output is in Figure 8. The bottom plot is the actual response, which is very close to the theoretical one because the number of taps is high.
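If you don't have frmag at hand, the simulation step is just evaluating the filter's transfer function at many frequencies. A sketch in Python (the helper name and the 5-tap moving-average test filter are mine):

```python
import cmath

def fir_gain(taps, f):
    """Magnitude of a FIR's response at normalized frequency f
    (0 = DC, 0.5 = Nyquist): |sum of h[k] * e^(-j*2*pi*f*k)|."""
    return abs(sum(h * cmath.exp(-2j * cmath.pi * f * k)
                   for k, h in enumerate(taps)))

taps = [0.2] * 5                 # 5-tap moving average: a crude low-pass
# sweep DC..Nyquist in 101 steps, like plotting the bottom box of Figure 8
response = [fir_gain(taps, f / 200) for f in range(101)]
```

Sweeping a fine grid like this reveals the ripple between the design points that the N-bin specification alone cannot show.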
I must mention that there are more elaborate synthesis techniques that could enable you to optimize the filter for a given number of taps and for a given list of mandatory setpoints on the frequency response graph. The techniques limit the oscillations of the frequency response between the setpoints by using windowing and optimization algorithms. One of them, using the Minimax algorithm, is supported in Scilab through the eqfir function. You will find an example of it in the code posted on the Circuit Cellar FTP site. This was the method used for the examples in Figure 9.
Now that you have a simulated response of the FIR filter, it’s time to compare it to the specifications you were looking for. Is it fully satisfactory? If not, you will have to repeat the entire process with different assumptions, which is the sixth step of the design process. You will probably need to tweak the number of taps or optimize the frequency response a couple of times.
TIME TO SOLDER
The last step is the easiest one. Select your preferred digital platform. You can use a microcontroller for low-speed applications, a DSP for medium-speed projects, or an FPGA with plenty of hardware multipliers for high-end ones. Code the FIR algorithm, which should need only a couple of source code lines and some nonvolatile memory to store the FIR coefficients. To make your life easier, a skeleton of a FIR algorithm coded in C language is posted on the Circuit Cellar FTP site.
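The skeleton on the FTP site is in C; as an illustration only, here is an equivalent streaming FIR in Python, using a circular buffer to hold the most recent samples (the class and method names are mine):

```python
class FirFilter:
    """Streaming FIR filter: call step() once per input sample."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)           # precalculated impulse response
        self.buf = [0.0] * len(self.coeffs)  # circular buffer of recent samples
        self.pos = 0

    def step(self, sample):
        self.buf[self.pos] = sample
        acc = 0.0
        idx = self.pos
        for c in self.coeffs:                # newest sample times h[0], and so on
            acc += c * self.buf[idx]
            idx = (idx - 1) % len(self.buf)
        self.pos = (self.pos + 1) % len(self.buf)
        return acc

fir = FirFilter([0.5, 0.5])                  # 2-tap moving average
out = [fir.step(s) for s in [1.0, 1.0, 1.0, 3.0]]
```

On a microcontroller the same loop would use integer samples, fixed-point coefficients, and an accumulator wide enough to avoid overflow.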
Your FIR filter should work well as soon as you have corrected the usual bugs (at least if you haven’t forgotten a good antialiasing filter on your PCB).
It is possible to perform signal processing on a microcontroller, and FIR filtering in particular. A reasonably complex 32-tap FIR filter needs 32 multiplications and 32 additions per sampling cycle, usually done on 16-bit integer numbers. If you are clever, you will notice that the impulse response is always symmetrical, which could enable you to divide the number of multiplications by two. Anyway, even if you select an 8-bit 10-MIPS microcontroller, make sure you use one with an on-chip 8 × 8 bit hardware multiplier (e.g., a PIC18F). You then should be able to implement such a FIR filter with sampling rates up to 40 ksps or more. So, digital filters are not only for high-end applications. I hope I have convinced you that FIR filters are not black magic, even if they are sometimes on the darker side. Try them in your next project!
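The symmetry trick looks like this (a sketch in Python for clarity; on a real microcontroller this would be fixed-point C or assembly): add the paired samples first, then multiply once per pair.

```python
def fir_direct(window, h):
    """Straight dot product: len(h) multiplications per output sample."""
    return sum(h[k] * window[k] for k in range(len(h)))

def fir_symmetric(window, h):
    """Same result for a linear-phase FIR (h[k] == h[N-1-k]), but only
    about half the multiplications: one per symmetric pair of taps."""
    n = len(h)
    acc = 0.0
    for k in range(n // 2):
        acc += h[k] * (window[k] + window[n - 1 - k])
    if n % 2:                       # odd tap count: lone center tap
        acc += h[n // 2] * window[n // 2]
    return acc
```

With 32 taps that is 16 multiplications instead of 32, a real saving when each multiply costs several cycles on a small core.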
Robert Lacoste lives near Paris, France. He has 18 years of experience working on embedded systems, analog designs, and wireless telecommunications. He has won prizes in more than 15 international design contests. In 2003, Robert started a consulting company, ALCIOM, to share his passion for innovative mixed-signal designs. You can reach him at email@example.com. Don’t forget to write “Darker Side” in the subject line to bypass his spam filters.
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • OCTOBER 2007 #207