
Murphy’s Laws in the DSP World (Part 2)

The Next Three Laws

Many unexpected issues come into play when you move from the real world of analog signals and enter the world of digital signal processing (DSP). Part 2 of this article series charges forward introducing “Murphy’s Laws of DSP” #7, #8 and #9 and looks at the spectral analysis of DSP signals.


Sound wave pressures and water reservoir levels contain information that we need to manipulate with computers. Mobile phone conversations become easier to understand if we can keep just the electromagnetic signals coming directly from the cell tower and remove any interfering signals that have bounced off nearby buildings. Developers must use sensors to transfer signals from the analog world into a stream of sampled values that can be handled by computers. They need to move into the world of digital signal processing—DSP.

The workings of proposed algorithms can be explored using mathematical scripting languages such as Octave or MathWorks’ MATLAB. The generated input and output results of our simulations can then be used to validate the correctness—and/or limitations—of real-time implementations of these algorithms on hardware. However, working in this new digitized arena is not straightforward. Imagine: The developer has everything under control, with the new algorithm becoming the best thing since the invention of sliced bread. Then a minor change in a simulation parameter and suddenly the results no longer make common sense. The step forward from the change has suddenly become two steps backward, and you worry: “Just how much else is broken? What else needs fixing?”

In Part 1 of this article series (Circuit Cellar 335, June 2018), we mentioned that there are special properties of the DSP world that must be taken into account when developing DSP code. While these properties are not introduced by use of a mathematical scripting language tool, our understanding of the properties can be influenced by the tool. We suggested that faster progress could be made if a person understood how the six variants of Murphy’s Laws, listed in Table 1, might impact algorithm development.

Table 1
The first six DSP Murphy’s Laws to tackle

Unfortunately, the earthquakes potentially generated by Murphy’s Law do not stop at a DSP Richter DSP_ML Level VI. In this article, we show the greater damage that Murphy can inflict even when tackling a low-level practical DSP example.


Suppose for the past year, you have been recording intermittent audio signals of a noise nuisance experienced in a local neighborhood. A lot of hassle would be removed if these noises could be matched against a suspected noise source, and the problem removed.

You have been capturing data in several households around the neighborhood. After a month, you recognize that down in the basement no signals are being recorded above 200 Hz, except for the occasional loud human voice and the impulsive bang, made up of all frequencies, as somebody slams a door. You also recognize that sampling signals at 44 kHz generates 42 GB of data per household over a long weekend—0.5 TB of data over a month. That’s a lot of data to store and process.

On the professional side, you decide that in the future you will adjust your recording equipment settings to sample at only 1 kHz, reducing storage and processing-time issues. This is about 5 times the highest expected signal frequency, 200 Hz, and so avoids the impact of Murphy’s Law DSP_ML-I (Table 1) discussed in Part 1 of this series.

As this is a community problem with recordings needed in many homes, you have been kindly loaned a variety of recording equipment from your industrial “competitors”. Other recordings are being made by householders on their own smart phones. After adjusting the recording settings to a 1 kHz sampling rate, you find that the quality of the sound recordings from some of the loaned equipment has become terrible: very noisy, with many unexpected frequencies apparently present in the adjusted signal that were not seen in the analysis of the original recordings.

In addition, you decide to digitally down-sample all the legacy data already collected, for storage and compatibility reasons. The Nyquist sampling criterion that forms part of DSP_ML-I says that you will get invalid signals if you don’t sample at least twice as fast as the highest frequency present in the signal. Can you take all the signals measured on previous days at the audio 44 kHz rate and simply store every 44th sample to achieve a new 1 kHz sampled signal? Since the signals have some background traffic noise and occasional voices, wouldn’t it be better to develop a program that outputs the average of every 44 samples to remove the higher frequency components?
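The two candidate reduction schemes are easy to prototype. Here is a quick sketch, written in Python rather than the Octave/MATLAB used in the listings, with helper names of our own choosing: keeping every 44th sample versus block-averaging groups of 44 samples.

```python
def decimate(samples, factor=44):
    """Keep every 44th sample: 44 kHz becomes 1 kHz."""
    return samples[::factor]

def block_average(samples, factor=44):
    """Average each group of 44 samples into one stored value."""
    blocks = len(samples) // factor
    return [sum(samples[i * factor:(i + 1) * factor]) / factor
            for i in range(blocks)]

signal = [float(i % 44) for i in range(440)]  # toy stand-in for recorded data
print(len(decimate(signal)))       # 10 values survive
print(len(block_average(signal)))  # 10 values survive
```

Both reduce the storage by a factor of 44; the real question is what each one does to the signal.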

After coding these ideas, you check the quality of the processed signal in terms of the signal-to-noise ratio (SNR). The simplest SNR measure requires dividing the biggest signal amplitude by the average noise level. The new SNR of the processed old signals seems better than the worst of the new 1 kHz audio recordings, but it is not as good as expected. So, what new DSP problems has Murphy introduced in your new recordings and data reductions during digital down-sampling procedures?


In our June Part 1 article, the frequency spectrum of a 4 Hz signal sampled at 8 Hz apparently showed the presence of a 12 Hz signal. This turned out to be a misinterpretation of the output of the fast Fourier transform algorithm fft() and was solved by remembering:


Applying fftshift() at least once every day

will help keep frequency confusion at bay!

It turns out that down-sampling a signal off-line on a computer or through specialized hardware can introduce similar DSP frequency misinterpretation issues. Whether applying down-sampling through hardware or software, we need to be aware of the seventh DSP Murphy’s Law:

DSP_ML-VII: As I was going up a DSP stair,

I met a signal that should not be there!

It should not be there again today!

I wish that signal would go away!

In Listing 1, Investigate_DSP_ML_VII(), the function SimulateTripleSignal() reuses the methods of the TimeDomainSignal class developed in the first article to build a 1-second signal with an easy-to-find DC component and two other signals (Figure 1a). The frequency components of a 48 Hz cosine signal, shown in black in Figure 1b, all lie along the real frequency axis and are easily distinguishable from the components of an 86 Hz sine signal, shown in red, which lie along the imaginary frequency axis.

Listing 1
Down-sampling a signal with large-intensity, high-frequency components will introduce obvious aliasing. However, it must be remembered that the aliasing introduced by down-sampling even low-intensity, high-frequency background noise components will rapidly degrade a signal’s signal-to-noise ratio, because of the wide bandwidth of the noise.

%  Place in file "C:\DSP_ML\Investigate_DSP_ML_VII.m" 
function Investigate_DSP_ML_VII( )
clear classes; %Encourage Octave/MATLAB accepting class changes

tripleSignal = SimulateTripleSignal(0, 48, 86);
DisplayTimeSignal(tripleSignal, 'DC, 48 Hz and 86 Hz', '-'); 
FrequencySpectrum(tripleSignal, ...
                'Frequency Spectrum TripleSignal', '-');

%% Down-sampling introduces aliasing into high frequency signals
downsampledTripleSignal = DownSampleSignal(tripleSignal);               
DisplayTimeSignal(downsampledTripleSignal, ...
                 'Downsampled DC, 48 Hz and 86 Hz', '-');
FrequencySpectrum(downsampledTripleSignal, ...
                 'Frequency Spectrum down-sampled TripleSignal', '-');

%% Add noise to triple signal
noisyTripleSignal = AddNoise(tripleSignal, 1.0);
[frequency, freqData] =   ...
       FrequencySpectrum(noisyTripleSignal, ...
                 'Frequency Spectrum noisy tripleSignal', '-');
CalculateDisplaySignalSNR(freqData, -36, -4);
CalculateDisplaySignalSNR(freqData, 4, 36);

%% Down-sampling introduces aliasing of high frequency noise
downsampledNoisyTripleSignal =  ... 
               DownSampleSignal(noisyTripleSignal);
[frequency, freqDataDownSampled] = ...
    FrequencySpectrum(downsampledNoisyTripleSignal, ...
        'Frequency Spectrum down-sampled noisy TripleSignal', '-');
CalculateDisplaySignalSNR(freqDataDownSampled, -36, -4);
CalculateDisplaySignalSNR(freqDataDownSampled, 4, 36);

%  Place in file "C:\DSP_ML\SimulateTripleSignal.m" 
function tripleSignal = SimulateTripleSignal(freq1, freq2, freq3 )
  fastSampling = 256; duration = 1; phase0 = 0; phase90 = 90;
  signalFreq1 = TimeDomainSignal(1.0, freq1, phase0, ...
                 duration, fastSampling);
  signalFreq2Phase0 = TimeDomainSignal(1.0, freq2, phase0, ...
                 duration, fastSampling);
  signalFreq3Phase90 = TimeDomainSignal(1.3, freq3, phase90, ...
                 duration, fastSampling);
  tripleSignal = signalFreq1 + signalFreq2Phase0 ... 
                        + signalFreq3Phase90;

Listing 2
Down-sampling can be achieved by discarding sampled values. However, this simple approach activates DSP_ML-VII and causes non-physical, aliased frequency components to appear.

%  Place in "C:\DSP_ML\@TimeDomainSignal\DownSampleSignal.m" 
function downsampledSignal = ...               %  NEW CLASS METHOD
                        DownSampleSignal(signal)
% Modify a copy of the TimeDomainSignal instance
  downsampledSignal = signal;
  downsampledSignal.timeData = ... 
                     signal.timeData(1 : 2 : signal.numPts);
  downsampledSignal.timeSampled = ...
               signal.timeSampled(1 : 2 : signal.numPts);
  downsampledSignal.numPts = signal.numPts / 2;
  downsampledSignal.samplingFrequency = ...
                                   signal.samplingFrequency / 2;

Down-sampling by a factor of 2 is achieved in the new class method DownSampleSignal(), given in Listing 2, by throwing away—decimating—every second point. As can be seen from Figure 1c, down-sampling in this simple fashion causes a drastic change in the time domain signal’s appearance. Because we are no longer obeying the Nyquist criterion of sampling twice as fast as the highest frequency signal (DSP_ML-I), the frequency spectrum of the down-sampled signal, Figure 1d, is also distorted. The 86 Hz signal now appears as a signal around 42 Hz.

We can interpret the strange results in the down-sampled spectrum, Figure 1d, as follows. The original signal was sampled 256 times per second for 1 second. Applying the discrete Fourier transform will produce a frequency spectrum ranging from -128 Hz to nearly +128 Hz. The triple signal will have frequency components at 0 Hz, at +48 Hz and -48 Hz, and at +86 Hz and -86 Hz.

After down-sampling, the effective sampling rate becomes 128 times per second, which will provide a frequency spectrum ranging from -64 Hz to nearly +64 Hz. The frequency spectra of the DC and 48 Hz signals will be correctly displayed. However, the +86 Hz signal can’t be displayed; instead its energy is morphed—aliased—into a false frequency signal in the -64 Hz to +64 Hz range.

Instead of being displayed as a positive amplitude imaginary component 22 Hz above the 64 Hz border, it will be aliased into a signal that is 22 Hz above the -64 Hz border—at -42 Hz. Similarly, the negative amplitude component at -86 Hz will be aliased to become a component at +42 Hz.
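This folding arithmetic can be captured in a one-line helper. Here is a Python sketch (the function name is ours, not from the listings) of the standard alias-frequency formula:

```python
def alias_frequency(f_hz, fs_hz):
    """Fold a frequency into the displayable range [-fs/2, +fs/2)."""
    return ((f_hz + fs_hz / 2.0) % fs_hz) - fs_hz / 2.0

# After down-sampling, the effective sampling rate is 128 Hz
print(alias_frequency(+86.0, 128.0))  # -42.0: the falsely reported signal
print(alias_frequency(-86.0, 128.0))  # +42.0: its mirror component
print(alias_frequency(+48.0, 128.0))  # 48.0: below fs/2, so unchanged
```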


Now imagine a real-life signal with only a DC and a 48 Hz component. Since there is no 86 Hz signal to alias, is it safe to down-sample in this simple way? Figure 2 shows how DSP_ML-VII answers that question very firmly. Here the triple signal has been made more realistic with added white noise, using the AddNoise() and CalculateDisplaySignalSNR() functions given in Listing 3. The SNR of the DC peak, DC_Amplitude / stdev(backgroundNoise), has been calculated at two locations as around 21.2. However, why has the SNR dropped to around 13.9 after down-sampling? The precise SNR value will change each time you run the script, because the noise values are re-generated as random values.

Before down-sampling there were 256 samples of the DC signal, giving frequency spectrum peaks with an intensity of 256. After down-sampling there are only 128 samples, leading to a frequency peak of half the intensity, 128. Common sense suggests that the SNR should be unchanged: there are now only 128 noise values, so their intensities should also be halved!

This is where DSP_ML-VII comes into play. Yes, there are fewer noise samples, lowering their intensity by a factor of 2. However, the noise frequencies in the range -128 Hz to -64 Hz are aliased and added to the noise already present in the 0 Hz to +64 Hz range, and similarly the noise in the range +64 Hz to +128 Hz is aliased down onto the -64 Hz to 0 Hz range.

Standard deviations—the estimate of experimental signal variations—of random noise sequences add as the square root of the sum of the squared deviations:

stdevTOTAL = √( stdev1² + stdev2² )

which in this case, with two equal and independent noise contributions, increases the noise level by a factor:

√2 ≈ 1.41

The original SNR will drop to:

SNR = 21.2 / √2 ≈ 15

which is close to the value of 13.9 in Figure 2b calculated for the specific noise values generated in this study.
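The quadrature addition of independent noise is easy to confirm numerically. Here is a small Python check (not part of the article’s Octave code) that two equal, independent Gaussian noise sequences combine into noise about √2 times larger:

```python
import math
import random

random.seed(1)                       # repeatable demonstration
N = 100_000
in_band = [random.gauss(0.0, 1.0) for _ in range(N)]
aliased = [random.gauss(0.0, 1.0) for _ in range(N)]
combined = [a + b for a, b in zip(in_band, aliased)]

def stdev(xs):
    mean = sum(xs) / len(xs)
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))

# Independent deviations add as the square root of the sum of squares
print(stdev(combined) / stdev(in_band))  # close to sqrt(2) = 1.414...
```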


The first question to ask is: How do we—or perhaps how can we—stop this loss of SNR when down-sampling the legacy data values that have been measured over the past year? This is a genuine DSP issue associated with any down-sampling, not an artifact of using a visualization tool during a simulated study.

This means the second question to ask is: When we adjust our equipment settings to sample at 1 kHz in future rather than 44 kHz, has our system’s hardware designer overcome the noise and signal aliasing issues to retain the proper SNR?

What is causing a problem here is a variant of Murphy’s Law VI:

DSP_ML-VI: By symmetry, any DSP frequency domain analysis problem has a non-obvious time domain equivalent.

The new hardware related version can be expressed as:

DSP_ML-VIB: By symmetry, any DSP analysis problem you can simulate in software has a non-obvious hardware equivalent.

Data down-sampling to reduce storage requirements is a commonly needed DSP operation. We need to find out how to perform it, in software or hardware, without degrading the quality of the signals we want to analyze. The most obvious solution is: “High frequency noise components can’t cause problems after down-sampling if we get rid of them before down-sampling!” This is the idea explored in the Investigate_DSP_ML_VIII() simulation study in Listing 4.

To get Investigate_DSP_ML_VIII() to run, you will need to add the Listing 4 scripts MovingAverageFIR.m and CustomFIR.m and the new class method FIRFilterSignal() from Listing 5. There is an extra step when running the test code, because the fir2() function used to design FIR filters is not part of the standard Octave download. To fix this, install the signal package from Octave Forge by running “pkg install -forge signal” once in Octave’s command window, and then load it with “pkg load signal”.

Listing 3
Signals with white noise are easy to generate with the randn() operator inside the AddNoise() method. The function CalculateDisplaySignalSNR() calculates the SNR of the DC peak, using noise levels measured in frequency regions where there are no other signals.

%  Place in file "C:\DSP_ML\@TimeDomainSignal\AddNoise.m" 
function noisySignal = ...                    %  NEW CLASS METHOD
                                           AddNoise(signal, noiseAmp)  
  noisySignal = signal;
  centreAmplitude = 0; 
  noisySignal.timeData = noisySignal.timeData ...
       + centreAmplitude + noiseAmp * randn(noisySignal.numPts, 1)';

%  Place in file "C:\DSP_ML\CalculateDisplaySignalSNR.m" 
function CalculateDisplaySignalSNR(freqData, leftFreq, rightFreq)
  markHeight = 0.4 * max(real(freqData));
  plot([leftFreq, leftFreq], [-markHeight, markHeight],  '-b');                                                                          
  plot([rightFreq, rightFreq], [-markHeight, markHeight], '-b');     
  numPtsBy2 = length(freqData) / 2; 
  leftIndex = leftFreq + numPtsBy2;  
  rightIndex = rightFreq + numPtsBy2; 
  stdDeviation = std(real(freqData(leftIndex:rightIndex)));
  SNR = max(real(freqData))  / stdDeviation;
  text(leftFreq + 1 , -1.15 * markHeight, 'SNR');
  text(leftFreq + 1 , -1.45 * markHeight, sprintf('%4.1f', SNR));
  pause (3);

Listing 4
In the Investigate_DSP_ML_VIII() simulation study we explore the effectiveness of removing high frequency noise components using moving average and custom FIR filtering operations before down-sampling.

%  Place in file "C:\DSP_ML\Investigate_DSP_ML_VIII.m" 
function Investigate_DSP_ML_VIII( )
clear classes; %Encourage Octave/MATLAB accepting class changes
tripleSignal = SimulateTripleSignal(0, 48, 86);
noisyTripleSignal = AddNoise(tripleSignal, 1.0);
% Show gain of Moving Average time domain FIR
[frequency, freqData] = ...
     FrequencySpectrum(noisyTripleSignal, ...
               'Frequency Spectrum noisy TripleSignal', '-');
[~, filterGain]...
        = FIRFilterSignal(noisyTripleSignal, 'Moving Average');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
legend('REAL()', 'IMAG()', 'MAG(MA FIR)', 'location', 'NorthEast');
pause (3);

% Show gain with a 10th order Custom time domain FIR
FrequencySpectrum(noisyTripleSignal, ...
                 strcat('Frequency Spectrum noisy tripleSignal', ...
                      '- Custom 10th Order FIR'), '-');
[~, filterGain]...
        = FIRFilterSignal(noisyTripleSignal, 'Custom');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
legend('REAL()', 'IMAG()', 'MAG(CUSTOM FIR)', 'location', ...
                                              'NorthEast');

%  Place in file "C:\DSP_ML\MovingAverageFIR.m" 
function filterCoeffs = MovingAverageFIR(numCoeffs)
  filterCoeffs = ones(numCoeffs, 1) / numCoeffs;   % Gain adjust

%  Place in file "C:\DSP_ML\CustomFIR.m" 
function filterCoeffs = CustomFIR(numCoeffs)
  filterOrder         = numCoeffs - 1;
  idealGain           = [1.0 1.0 0.0 0.0];
  designFrequencies = [0.0, 51.2, 76.8, 128.0];
  normalizedFreq = designFrequencies / 128; 
  filterCoeffs        = fir2(filterOrder, normalizedFreq, idealGain); 

Listing 5
This new class method FIRFilterSignal() allows the implementation of a 4th order MA or a 10th order custom FIR operation. The method returns both the filtered signal and the filter gain.

%  Place in file "C:\DSP_ML\@TimeDomainSignal\FIRFilterSignal.m" 
function [filteredSignal, filterGain] =   ...%  NEW CLASS METHOD
                        FIRFilterSignal(signal, whichFilter)
  if strcmp(whichFilter, 'Moving Average')
    filterCoeffs = MovingAverageFIR(5);  %  4th order MA filter
  else
    filterCoeffs = CustomFIR(11);        %  10th order custom filter
  end
  numFIRCoeffs = length(filterCoeffs);
  data = signal.timeData;
  filteredData = zeros(1, signal.numPts);
  for count = numFIRCoeffs : signal.numPts % No output for initial points
    filteredValue = 0;
    for coeffs = 1 : numFIRCoeffs
      filteredValue = filteredValue + ...
               data(count - coeffs + 1) * filterCoeffs(coeffs);
    end
    filteredData(count) = filteredValue;
  end
  filteredSignal = signal;  % Generate a TimeDomainSignal instance
  filteredSignal.timeData = filteredData;

% Frequency response =  DFT(Time domain Impulse response)
  FIRImpulseResponse = zeros(signal.numPts, 1);
  FIRImpulseResponse(1 : numFIRCoeffs) = filterCoeffs;
  filterGain = fftshift( fft(FIRImpulseResponse) );

A moving average filter is just that—generating a new output sample y[n∆T], with high frequency components removed, by averaging the last P input values, x[(n-P+1)∆T] → x[n∆T]. Then we move one time interval along the input signal array and do a P point average again.

Computationally, this means that we calculate:

y[n∆T] = ( x[n∆T] + x[(n-1)∆T] + … + x[(n-P+1)∆T] ) / P

This has the same format as the operation of applying a finite duration impulse response (FIR) filter:

y[n∆T] = a0 x[n∆T] + a1 x[(n-1)∆T] + … + aP-1 x[(n-P+1)∆T]

where the FIR filter coefficients a0 … aP-1 are all equal to the constant 1/P.
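As a concrete sketch of this equivalence (in Python here; the article’s own implementation is the Octave class method in Listing 5, and these function names are ours), a moving average is just an FIR whose coefficients are all 1/P:

```python
def fir_filter(x, coeffs):
    """Direct-form FIR: y[n] = a0*x[n] + a1*x[n-1] + ..., valid outputs only."""
    P = len(coeffs)
    return [sum(coeffs[k] * x[n - k] for k in range(P))
            for n in range(P - 1, len(x))]

def moving_average(x, P=5):
    """A P-point moving average is an FIR with all coefficients equal to 1/P."""
    return fir_filter(x, [1.0 / P] * P)

steady = [10.0] * 6
print(moving_average(steady))  # a constant input passes through unchanged
```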


FIR filters have some very nice DSP properties. First, they are easy to calculate—simply a sum of time domain data values multiplied by coefficient values. FIR operations are so common that single-cycle “multiply and accumulate” instructions are standard on most processors intended for real-time FIR operations.

Another nice thing is that a developer can quickly check that the code for custom high-speed, long length FIR filters on a multi-core system is correct. All that is needed is to generate an impulsive input signal made up of zeros and one non-zero value: 0, 0, 0, 0, 0, 1, 0, 0, 0 …, and apply the FIR filter. The filter’s output impulse response is guaranteed to be an ordered copy of the FIR coefficients: 0, 0, 0, 0, 0, a0, a1 … aP-1, 0, 0, 0 ….
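This impulse test is worth seeing in action. Here is a minimal Python version, with our own fir_filter() helper standing in for the real filter code under test:

```python
def fir_filter(x, coeffs):
    """Direct-form FIR over the fully overlapped positions only."""
    P = len(coeffs)
    return [sum(coeffs[k] * x[n - k] for k in range(P))
            for n in range(P - 1, len(x))]

coeffs = [0.1, 0.2, 0.4, 0.2, 0.1]       # a0 ... a4
impulse = [0.0] * 5 + [1.0] + [0.0] * 8  # 0, 0, 0, 0, 0, 1, 0, 0, ...
response = fir_filter(impulse, coeffs)
# The non-zero part of the output is an ordered copy of the coefficients
print(response)
```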

Even more interesting is the relationship between the impulse response and the frequency characteristics of the filtering operation. In the last lines of Listing 5, we generated the FIR filter’s impulse response by overlaying the filter coefficients on top of an array of zeros:

FIRImpulseResponse = zeros(signal.numPts, 1);

FIRImpulseResponse(1 : numFIRCoeffs) = filterCoeffs;

If you take the DFT of the impulse response of an FIR filter, last line in Listing 5:

filterGain = fftshift( fft(FIRImpulseResponse) );

and remember DSP_ML-V about applying fftshift() at least once every day, then you will get the filter’s frequency response. This can be superimposed upon the input signal’s frequency characteristics, Figure 3a, to predict the output signal’s frequency characteristics.
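There is nothing magic under the hood. A pure-Python sketch (a naive DFT standing in for fft(), and our own function names) follows the same recipe of zero-padding the coefficients and transforming:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform; fine for inspecting short filters."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fftshift(x):
    """Rotate the spectrum so 0 Hz sits in the middle, as fftshift() does."""
    half = len(x) // 2
    return x[half:] + x[:half]

coeffs = [0.2] * 5                      # 4th order moving average filter
impulse_response = coeffs + [0.0] * 59  # coefficients overlaid on zeros
gain = fftshift(dft(impulse_response))
print(abs(gain[32]))                    # the DC bin: an averager's DC gain is 1
```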



As might be expected, a new DSP idea means more things to worry about. May I introduce you to the eighth DSP Murphy’s Law which can be expressed in the form of a pun:


DSP_ML-VIII: When designing finite impulse response filters, be careful what you ask FIR!

The first filter we investigate in the Investigate_DSP_ML_VIII() study, Listing 4, is a 4th order moving average (MA) filter in the script MovingAverageFIR.m. Here, each of the 5 filter coefficients has the same value: 1/5. Common sense seems to suggest that summing up and averaging the last 5 values of the input signal should be sufficient to remove ALL quickly varying high frequency noise and signals, leaving just the more slowly changing low frequency signals.

However, this is not the case, as can be seen from the 4th order MA filter’s response shown in blue in Figure 3a. Yes, low frequency signal components around 0 Hz are passed as expected, but there are also strong (unwanted) signal components passed for frequencies around 80 Hz and more again near 120 Hz.

This strange result can be understood by applying this MA filter, with its 5 equal coefficients, to a number of five-sample signals. A slowly varying signal, such as [0.86 0.96 1.00 0.96 0.86], will generate an average of 0.93, indicating it will be passed by the filter. The varying signal S1, with values [-0.80 0.30 1.00 0.30 -0.80], and the more rapidly varying signal S2, with values [0.30 -0.80 1.00 -0.80 0.30], will average out to zero. However, a signal S3 = [-1.0, 0.0, 1.0, 0.0, -1.0], with a frequency between that of S1 and S2, will have a non-zero average passed by the filter.
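You can verify these averages in a couple of lines. Here is a Python sketch, using the five-point test signals from the text:

```python
def five_point_average(s):
    """What a 5-tap moving average outputs when these samples line up."""
    return sum(s) / 5.0

slow = [0.86, 0.96, 1.00, 0.96, 0.86]  # slowly varying: passed
s1 = [-0.80, 0.30, 1.00, 0.30, -0.80]  # averages to zero: blocked
s2 = [0.30, -0.80, 1.00, -0.80, 0.30]  # averages to zero: blocked
s3 = [-1.0, 0.0, 1.0, 0.0, -1.0]       # between S1 and S2, yet NOT blocked

for s in (slow, s1, s2, s3):
    print(round(five_point_average(s), 3))
```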

This behavior is seen in Figure 3a, with the moving average filter having many pass and stop bands. By bad luck, the 48 Hz component we want to keep appears with zero amplitude in this filter curve picture—meaning it will be removed from the filter output. By contrast, the high frequency components, the noise or the 86 Hz signal we wanted removed, will only be slightly reduced in amplitude after filtering. MA filters are not the right thing to apply to remove potential aliasing problems before down-sampling signals!


There are a number of tools available to design the “good quality” FIR filters frequently needed during DSP analysis. In Listing 4’s CustomFIR() design function we have made use of the built-in fir2() tool. To use this tool, we need to know the FIR filter order, numCoeffs – 1. We also need to make design decisions and to specify the filter gain:

idealGain = [1.0 1.0 0.0 0.0];

at a number of specific frequencies:

designFrequencies = [0.0, 51.2, 76.8, 128.0];

A gain of 1.0 at frequency 0, means all low frequencies will be passed. A gain of 0.0 at frequency 128 means all high frequencies will be blocked. An interesting property of digital filters is that their design does not depend on the actual range of frequencies that need to be filtered, but rather on the number of samples that cover that range of frequencies. This means that the fir2 tool must be given the frequency values normalized by samplingRate / 2, in this case 128.

normalizedFreq = designFrequencies / 128;

Unfortunately, after designing our FIR filter we quickly meet up with the ninth DSP Murphy’s Law:

DSP_ML-IX: There are many advantages of digital filters over analog filters, but providing perfection is not one of them.

A custom 10th order FIR filter does a reasonable job of removing high frequency noise components, but as can be seen from Figure 3b, it would leave some of the undesired 86 Hz signal to be aliased down to 42 Hz after down-sampling. To reduce the residual level of the 86 Hz signal we need a significantly higher order filter.

In Listing 6, we investigate how the SNR of the down-sampled signal is impacted by application of the 10th order custom FIR filter applied to the time domain signal, and a new concept—applying the filtering operation directly in the frequency domain.

Listing 6
In the Investigate_DSP_ML_IX() simulation, we identify the hidden distortions that occur when applying digital filters in either the time or frequency domains.

%  Place in file "C:\DSP_ML\Investigate_DSP_ML_IX.m" 
function Investigate_DSP_ML_IX( ) 
clear classes; %Encourage Octave/MATLAB accepting class changes
tripleSignal = SimulateTripleSignal(0, 48, 86);
noisyTripleSignal = AddNoise(tripleSignal, 1.0);
%% Apply Custom time domain FIR
[filteredNoisyTripleSignal, filterGain]...
        = FIRFilterSignal(noisyTripleSignal, 'Custom');
[frequency, freqData] = ...
           FrequencySpectrum(filteredNoisyTripleSignal, ...
               ' FIR filtered noisy tripleSignal', '-');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
CalculateDisplaySignalSNR(freqData, -36, -4);
CalculateDisplaySignalSNR(freqData, 4, 36);

%% Down sample FIR filtered signal
downsampledFilteredNoisyTripleSignal = ...
              DownSampleSignal(filteredNoisyTripleSignal);
[~, freqNoisyDataDownSampled] = ...
   FrequencySpectrum(downsampledFilteredNoisyTripleSignal, ...
      'Down-sampled FIR filtered noisy TripleSignal', '-');
CalculateDisplaySignalSNR(freqNoisyDataDownSampled, -36, -4);
CalculateDisplaySignalSNR(freqNoisyDataDownSampled, 4, 36);

%% Apply frequency domain filter
[filteredNoisyTripleSignal, filterGain]...
        = DFTFilterSignal(noisyTripleSignal);
[frequency, freqData] = ...
  FrequencySpectrum(filteredNoisyTripleSignal, ...
                  'DFT filtered noisy tripleSignal', '-');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
CalculateDisplaySignalSNR(freqData, -36, -4);
CalculateDisplaySignalSNR(freqData, 4, 36);

%% Downsample DFT filtered signal
downsampledFilteredNoisyTripleSignal =  ...     
              DownSampleSignal(filteredNoisyTripleSignal);
[~, freqNoisyDataDownSampled] = ...
    FrequencySpectrum(downsampledFilteredNoisyTripleSignal, ...
       'Down-sampled DFT filtered noisy TripleSignal', '-');
CalculateDisplaySignalSNR(freqNoisyDataDownSampled, -36, -4);
CalculateDisplaySignalSNR(freqNoisyDataDownSampled, 4, 36);

Figure 4a shows the output frequency spectrum of the noisy three-component signal after filtering by the 10th order custom FIR filter. The filtering has significantly reduced the high frequency noise components and left a greatly reduced 86 Hz signal intensity. Figure 4b shows the frequency spectrum after down-sampling the filtered signal. Removal of the high frequency noise components means that the initial and final SNR of the DC component remains above 22, just what we wanted! The filtering has reduced the amplitude of the 86 Hz signal, now aliased down to 42 Hz, to just above the noise level.

However, the shape of the edge of the 10th order FIR filter’s frequency response has also made an unwanted reduction in the amplitude of the 48 Hz signal we wanted to retain for examination.

The amplitude of the 48 Hz signal can be recovered by increasing the FIR filter length and choosing filter coefficient values to sharpen up the edge of the filter’s frequency response. However, as should be expected, DSP_ML-VIII is again lurking in the wings and can be expected to try to thwart our efforts.


It turns out that we can’t code the filtering operation to be evenly applied to all the points of the incoming signal. The early part of the signal is not handled the same way as the later part, as becomes clear when we try to write the FIR code.

Computing the FIR requires the mathematical operation:

y[n∆T] = a0 x[n∆T] + a1 x[(n-1)∆T] + … + aP-1 x[(n-P+1)∆T]

This sum needs to be calculated using a code loop. Processing the first input value, at time zero, means that this sum must be performed with n = 0, using input values at times -∆T, -2∆T, -3∆T … before we started measuring.

Since we can’t call on Dr. Who, the Time Lord, any time anybody needs to do filtering, we must find compromises to get the best filter output we can. One way is to hope that, since we are measuring for a long time, we can ignore this starting problem and use only the valid outputs we can calculate. This we did in Listing 5 by performing the loop calculation using the limits:



for count = numFIRCoeffs : signal.numPts


and initializing the first P-1 filtered values that can’t be evaluated to zero. This generates a distorted initial portion of the filter output known as the filter’s starting transient.

However, earlier we mentioned that we needed to increase the number of filter taps, P, to sharpen the filter response and keep the 48 Hz signal we wanted (Figure 4). Doing this will increase the early signal’s distortion, as the FIR transient becomes longer. The transient distortions become much worse for the longer FIR filters applied in real life, where the filter order may need P > 256 to get the correct filter response shape needed to process images of 1000 × 1000 pixels.

With audio signals, there is an out for removing the starting transient. Audio signals are continually coming in. Rather than just applying a P tap filter to the N points captured over the time K∆T → (K+N-1)∆T, we retain P-1 older values captured earlier and process values over a longer time period, (K-P+1)∆T → (K+N-1)∆T. However, this longer calculation takes more time to perform, and that can be a problem, as discussed in the next section.
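The retain-old-samples idea can be sketched in a few lines of Python (the block sizes, names and coefficients are ours, purely for illustration):

```python
def fir_block(history, block, coeffs):
    """Filter one block, using P-1 retained samples to avoid a transient.

    history: the last P-1 samples of the previous block (zeros at start-up).
    Returns the filtered block and the history to carry into the next call.
    """
    P = len(coeffs)
    extended = history + block
    out = [sum(coeffs[k] * extended[n - k] for k in range(P))
           for n in range(P - 1, len(extended))]
    return out, extended[-(P - 1):]

coeffs = [0.25, 0.5, 0.25]
history = [0.0] * (len(coeffs) - 1)
stream_out = []
for block in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]):
    filtered, history = fir_block(history, block, coeffs)
    stream_out.extend(filtered)
print(stream_out)  # identical to filtering the whole stream in one go
```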

In real life, you don’t capture one small section of data, filter it and then admire the spectrum! Yes, when testing our code, the filtered signal’s spectrum is checked to see that the filter operation is the one expected. However, in real situations we continually capture data, filter it, and then perhaps play back data through a speaker.

The trouble is that for real-time analysis, all that filtering must occur BEFORE the next signal sample comes in. Each Pth order filter operation requires on the order of:

  • 4P fetches of instructions

  • 2P memory fetches for the data and coefficients

  • P multiplications and P additions during the FIR

This is a total of 8P processor operations, all of which must complete within the next 1/44,000 s, otherwise the output signal will be distorted. We will ignore the additional time needed for the 4P, or more, instruction fetches, because many processors now have a Harvard rather than von Neumann architecture. This means the fetching of instructions from program memory is pipelined to occur in parallel with data memory fetches. However, we still need to worry about the 4P time needed to repeatedly fetch the new data and the filter coefficients and to do all the math operations.

Suppose the FIR filter length P is large—300 is not uncommon—and you are trying to keep processor clock speeds down either for product cost or power consumption reasons. Now 4P FIR operations, and all the other things your processor has to do, may take longer than the time between input samples.
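A back-of-envelope budget check makes the squeeze obvious. This Python fragment uses the 4P per-sample count from above, with illustrative clock speeds of our own choosing:

```python
def fir_fits_realtime(P, clock_hz, sample_rate_hz, ops_per_tap=4):
    """Rough check: do ~4P per-sample FIR operations fit between samples?"""
    cycles_available = clock_hz / sample_rate_hz
    cycles_needed = ops_per_tap * P
    return cycles_needed <= cycles_available

# A 300-tap FIR at a 44 kHz sample rate needs ~1200 cycles per sample
print(fir_fits_realtime(300, 100_000_000, 44_000))  # 100 MHz clock: True
print(fir_fits_realtime(300, 48_000_000, 44_000))   # 48 MHz clock: False
```

And this budget still has to leave room for all the other things the processor is doing.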

Under these circumstances, frequency domain filtering may provide the answer. Historically this approach, multiple applications of the fft(), got a bum rap. This was because it requires more data and program memory storage than time domain filtering, and in the 1980s a 1 MB memory card extension cost $1,200. Memory costs are no longer important today, although over-shooting the size of the available memory in a small product can still have significant market consequences.

During frequency domain filtering, you capture M data points and then must filter all of the last M points before the next M points have been recorded. Frequency domain filtering involves transforming the M captured points into the frequency domain via an fft() algorithm. The amplitudes of the frequency components are directly manipulated to perform the filtering operation, and then the signal is returned to the time domain with an ifft().

We need to show that all these transformations can be done in less than 4P×M cycles to solve the time crunch of implementing real time P-tap filtering. The literature shows that an M-point fft() takes M log2(M) operations of the form A = B + C×D. Because A, B, C and D are all complex data values of the form x + jy, memory fetches, stores and additions take 2 cycles rather than 1. A complex multiplication, (a+jb) × (x+jy), involves 4 multiplications and 2 additions: 6 cycles rather than 1.

That roughly means that an M-point fft() will take on the order of 16 M log2(M) cycles, if we ignore some special fft() memory characteristics that can reduce this number by a factor of 2 or more. For frequency domain digital filtering, we need 2 fft() operations plus an additional 2M multiplications and 4M data memory fetches and stores to perform the filtering operation itself.

If we decide to process blocks of data of size M = 1,024, then frequency domain filtering requires a total of 6M + 32M log2(M) cycles, or around 330,000 cycles. This is faster than the 410,000 cycles needed by the time domain FIR filter for P = 100. The break-even point is somewhere in the range P = 60 to 80. In addition to the time savings, frequency domain filtering appears to be more flexible, because you can shape all 1,024 frequency components to be exactly what you want.
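These cycle counts are easy to verify. The break-even figure below comes out slightly above the quoted range because it ignores the factor-of-2 fft() memory savings mentioned earlier:

```matlab
% Checking the frequency versus time domain cycle estimates
M = 1024;  P = 100;
freqCycles = 6*M + 32*M*log2(M)   % 333,824: around 330,000 cycles
timeCycles = 4*P*M                % 409,600: around 410,000 cycles

% Break-even tap count: solve 4*P*M = 6*M + 32*M*log2(M) for P
breakEvenP = (6 + 32*log2(M)) / 4 % 81.5, before any fft() savings
```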


Listing 7, the code to perform digital filtering in the frequency domain, appears to be an algorithm that provides an exception to DSP_ML-IX (Perfection is not possible with digital filters). First, we apply a DFT using the fft() operator to all the time domain data, without having to worry about signal distortion from initial zero values caused by time domain filter transients. Note the double use of the advice from DSP_ML-V about applying fftshift().

Listing 7
In apparent contradiction to DSP_ML-IX, a perfectly shaped, sharp edged digital filter can be designed for use in the frequency domain. All undesired high frequency noise and signal components can be removed without impacting the amplitudes of all the wanted low frequency components.

%  Place in file "C:\DSP_ML\@TimeDomainSignal\DFTFilterSignal.m" 
function [filteredSignal, filterGain] = ... %  NEW CLASS METHOD
                 DFTFilterSignal(signal)
  data = signal.timeData;

% Move to frequency domain remembering DSP_ML V: 
% Applying fftshift( ) at least once every day 
%                               will keep frequency confusion at bay  
% Despite their names
%   fftshift( ), and its inverse ifftshift( ), give identical results
  dataDFT = ifftshift(fft(fftshift(data)));

% Design the perfectly sharp edge frequency domain filter
  dataLength = length(data);      
  filterGain = ones(dataLength, 1);
  leftFreq = -64; rightFreq = 63;
  filterGain(1:(leftFreq + dataLength / 2 - 1)) = 0;  
  filterGain((rightFreq + dataLength / 2 + 1): dataLength) = 0;

% Apply the filter 
  dataDFT =  dataDFT .* filterGain';          

% Move back into the time domain and then 
%                generate a TimeDomainSignal instance and fill it
  modifiedData = fftshift(ifft(ifftshift(dataDFT)));    
  filteredSignal = signal;    
  filteredSignal.timeData =  modifiedData ; 

Once in the frequency domain, we can design a perfectly shaped, sharp edged digital filter, the blue line in Figure 5a. All undesired high frequency noise and signal components can be removed without impacting the amplitudes of the wanted low frequency components. The real and imaginary parts of the frequency filtered signal are returned to the time domain using an inverse DFT, combining another double use of the fftshift() operator with ifft(). After down-sampling, we appear to have a signal whose perfect spectrum appears in Figure 5b. The original SNR, around 22, of both the DC and 48 Hz components is retained.



This points out the importance of avoiding aliasing when down-sampling the signals already stored on our computer. We also need to ensure that an adequate analog anti-aliasing filter is placed in front of the analog-to-digital converter (ADC) before we sample the audio signal captured by a microphone, or the values from any sensor.
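The same precaution applies in software. A minimal sketch, using an illustrative decimation factor and filter length of our own choosing, is to low-pass filter down to the new Nyquist frequency before discarding any samples:

```matlab
% Sketch: filter BEFORE down-sampling to avoid aliasing
pkg load signal;              % Octave only: provides fir1()
D = 4;                        % keep every 4th sample (illustrative)
b = fir1(64, 1/D);            % cutoff at the new Nyquist frequency
x = randn(4096, 1);           % stand-in for a stored signal
xFiltered = filter(b, 1, x);
xDown = xFiltered(1:D:end);   % now safe: aliased energy was removed
```

The signal package’s decimate() function bundles a similar filter-then-discard operation into one call.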

Then we ask the question: What are we going to do with this perfect signal? The answer is: Combine it with other sections of “perfectly” filtered signals from the hours of audio data we captured in community homes. Problem solved! And then DSP_ML-VI springs to mind (Table 1). The vice versa part of DSP_ML-VI is about to come into play. Go back and look at Figure 4 in the June Part 1 article. (Figure 4 from Part 1 is also repeated for you on the Circuit Cellar article materials page for this article.) In that figure, we saw that a time domain sinusoidal burst signal had significant distortions in its frequency domain spectrum. Generating a burst is simply applying a sharp edged digital filter to a longer sinusoidal signal!

Applying that sharp edged, time domain filter caused considerable distortion in the frequency domain analysis. Unfortunately, DSP_ML-VI then clearly hints that an equivalent sharp edged, frequency domain filter will cause distortions in the final filtered time domain signal that we want to further analyze or play back.

Obviously, there are many more DSP Murphy’s Law dragons still to slay!


This article is designed as a series of projects where you cut and paste the code from this PDF into your .m scripts. Alternatively, you can get all the code from the Circuit Cellar Article Code & Files webpage. Remember: Cutting and pasting code from PDFs into MATLAB and Octave .m scripts suffers from its own Murphy’s Laws, especially the occasional cut-and-pasted hyphen or minus sign having the wrong format.

When all the files in this project have been added to those of the June Part 1 article, then the directory structure C:\DSP_ML should look like Figure 6 with all the class information in directory C:\DSP_ML\@TimeDomainSignal.

Figure 6
When all the files in this project have been added to those of Part 1, (Circuit Cellar 335, June 2018) then the directory structure C:\DSP_ML should look like this.

Add the line pkg load signal to the startup.m script so that Octave will recognize functions, such as fir2(), that are not automatically available.

In developing these articles, we have been struggling with the way that Octave handles classes within a class directory such as C:\DSP_ML\@TimeDomainSignal under Windows.

Adding the line:

clear classes;

at the start of each sub-project script SEEMS to solve the problem, stopping Octave from using the old class scripts and making it use the updated ones in which we have just corrected all the (unintentional) errors we made. However, we have not really found a solution that ensures Octave automatically recognizes new methods placed in @TimeDomainSignal in the same way that MATLAB does. We have been using two approaches when developing and testing our code for these articles:

  1. Late at night, enter all the new scripts, Investigate_DSP_ML, and class methods associated with the next section of the project. Then close Octave and go to bed for a well justified sleep. Restarting Octave at the beginning of a new day, or at any other time, solves all problems of recognizing new class methods.
  2. WIDFI (When in Doubt, Fake It). Simply put all the class methods in the main directory C:\DSP_ML!

If any reader knows a better solution to this Octave problem, please let us know and we will add it into the next article in the series. Thanks.

The authors thank summer research student Maaz Khurram for helping with the article. 

Mathworks
GNU Octave



Mike Smith is a professor in Computer Engineering at the University of Calgary, Canada. When not singing with Calgary’s adult recreational choir Up2Something, Mike’s main interests are in developing new biomedical engineering algorithms and moving them onto multi-core and multiple-processor embedded systems in a systematic and reliable fashion. Mike has recently become a convert to the application of agile methodologies in the embedded environment. Mike was one of the notable contributors interviewed in Circuit Cellar’s 25th Anniversary issue.

Mai Tanaka is a graduate student in Electrical and Computer Engineering at the University of Calgary, Canada. Her research interests include the transfer and translation of information at the human machine interface, such as speech processing.

Ehsan Shahrabi Farahani has completed his Masters in Electrical and Computer Engineering at the University of Calgary, Canada. His research interests include digital signal processing, image processing, biomedical signal processing, medical data analysis, medical imaging and speech processing.


Murphy’s Laws in the DSP World (Part 2)

by Michael Smith, Mai Tanaka and Ehsan Shahrabi Farahani