## Future Imperfect

### Unpredictable issues crop up when you move from the real world of analog signals and enter the world of digital signal processing (DSP). In Part 3 of this article series, Mike and Mai focus on strategies for how to—or how to try to—avoid Murphy’s Laws when doing DSP.

In the previous two parts of this article series, in June and August (*Circuit Cellar* 335 and 337), we explored how to generate reliable digital signal processing (DSP) software when using MATLAB or Octave tools to try out new solutions before real-time implementation on actual hardware. By developing a small collection of TimeDomainSignal class methods, we identified, and then attempted to correct, a number of DSP problems. These arise because we are manipulating short collections of sampled data that are meant, somehow, to represent the intricacies of the infinite, analog real world.

The first article (Part 1) covered potential problems involving displays of digital signals in the time and frequency domains. In Part 2, we discussed how to improve program speed and memory storage by digital down-sampling. This involves discarding signal values when we find that we have been capturing digital signals at a faster rate than necessary. To avoid issues such as aliasing, sample at more than twice the fastest frequency in your signal—the Nyquist sampling criterion. We showed how applying digital filters could improve or degrade the quality of the down-sampled signals depending on how they were designed or applied.
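Part 2's aliasing warning is easy to demonstrate numerically. The sketch below is in Python/NumPy rather than the article's MATLAB/Octave classes, and the 10 Hz tone and 12 Hz sampling rate are illustrative values of our own choosing: sampled below the Nyquist rate, a 10 Hz tone produces exactly the same samples as a 2 Hz one.

```python
import numpy as np

fs = 12.0      # sampling rate (Hz): too slow for a 10 Hz tone (needs > 20 Hz)
f_true = 10.0  # true signal frequency (Hz)

t = np.arange(0, 1, 1 / fs)            # 12 samples covering one second
x = np.cos(2 * np.pi * f_true * t)

# The 10 Hz tone aliases down to fs - 10 = 2 Hz: the two sample sets
# are identical, so no later processing can tell them apart.
alias = np.cos(2 * np.pi * (fs - f_true) * t)
print(np.allclose(x, alias))           # -> True
```

Once the samples are captured, the alias is indistinguishable from a genuine 2 Hz component, which is why the filtering has to happen before, or during, down-sampling.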

As an aide-mémoire, a DSP Murphy’s Law was introduced to describe how to avoid each programming issue. To date, there are nine DSP Murphy’s Laws, which are listed in **Table 1**. However, at the end of each article, there was a hint that the application of these Laws during our simulation studies was an imperfect solution. Each time we looked at a new signal, or added a new DSP method, we weren’t getting the anticipated results 100% of the time. We would have to upgrade the methods beyond what we had anticipated to fix each new DSP Murphy’s Law. It also made us suspect that there are other problems lurking in our code, waiting to be uncovered as we try to apply the analysis tools to real-world signals prior to code release. This translates into potential financial losses if many customers request fixes in the new products we have released.

In this article, we are going to try to explore some of these remaining problems. First, we will explore in more detail finding a solution to a community noise problem my group is tackling: “The Ranchlands’ Hum.” We believe that there are unique characteristics of this noise nuisance recorded in a number of neighborhood homes. We need to extract this acoustic signature from the midst of traffic and home sounds and compare it with other households to determine whether there is more than one Hum. Then we can try to match this signature to that from potential noise sources.

The second issue, unfortunately, is to work through the potential problems generated by the tenth DSP Murphy’s Law:

**DSP_ML-X:** *There is no such thing as ONLY nine DSP Murphy’s Laws.*


##### ACOUSTIC SIGNATURE SEARCH

We are planning to develop an algorithm to extract a specific acoustic signature from the midst of traffic and home sounds. The first step in the investigation is to develop a simulated signal with known characteristics so we can check that we get the expected results after applying our new DSP analysis methods.

In **Listing 1**, Investigate_DSP_ML_XI( ), we have used the TimeDomainSignal methods from the previous articles to generate a cosine wave at 42 Hz whose amplitude is impacted by two side band signals at 37 Hz and 47 Hz. These side band signals have been given different amplitudes, 0.1 and 0.25. We have already shown that incorrectly applying DSP algorithms to sampled signals causes unintentional distortions, in particular aliasing, where true signal frequencies appear at an incorrect frequency location. The unequal amplitudes will help better identify any unexpected changes in the frequency spectrum after running the algorithm.

**Listing 1**

This attempt to generate a 42 Hz signal with sidebands at 37 Hz and 47 Hz, Figure 1a, leads to a frequency spectrum with components having unexpected negative amplitudes as shown in Figure 1c.

```
% Store in "C:\DSP_ML\Investigate_DSP_ML_XI.m"
function Investigate_DSP_ML_XI( )
clear classes; % Force MATLAB / OCTAVE to see class change
fastSampling = 128; phase0 = 0; duration = 1;
signal42Hz = TimeDomainSignal(1.0, 42, phase0, ...
duration, fastSampling);
signal47Hz = TimeDomainSignal(0.1, 42 + 5, phase0, ...
duration, fastSampling);
signal37Hz = TimeDomainSignal(0.25, 42 - 5, phase0, ...
duration, fastSampling);
sideBandedSignal = signal37Hz + signal42Hz + signal47Hz;
DisplayTimeSignal(sideBandedSignal, ...
'Simulated sampled signal -- 42 Hz with side bands', '-');
FrequencySpectrum(sideBandedSignal, ...
'Frequency spectrum -- 42 Hz with side bands', '-');
```

**Listing 2**

The new TimeCentred( ) method generates signals where time 0 s is at the centre of the sampled signals. This approach further corrects phase issues in the signal’s spectrum that were missed earlier.

```
% Store in "C:\DSP_ML\Investigate_DSP_ML_XI_B.m"
function Investigate_DSP_ML_XI_B( )
clear classes; % Force MATLAB / OCTAVE to see class change
fastSampling = 128; phase0 = 0; duration = 1;
signal42Hz = TimeDomainSignal.TimeCentred(1.0, 42, phase0, ...
duration, fastSampling);
signal47Hz = TimeDomainSignal.TimeCentred(0.1, 42 + 5, phase0, ...
duration, fastSampling);
signal37Hz = TimeDomainSignal.TimeCentred(0.25, 42 - 5, phase0, ...
duration, fastSampling);
sideBandedSignal = signal37Hz + signal42Hz + signal47Hz;
DisplayTimeSignal(sideBandedSignal, ...
'Simulated sampled signal -- 42 Hz with side bands', '-');
FrequencySpectrum(sideBandedSignal, ...
'Frequency spectrum -- 42 Hz with side bands', '-');
% Place this code after "end % methods" near the end in
% "C:\DSP_ML\@TimeDomainSignal\TimeDomainSignal.m"
methods (Static)
function signal = ...
TimeCentred(A, freq, phase, duration, samplingFreq)
signal = TimeDomainSignal(A, freq, phase, ...
duration, samplingFreq);
numPtsBy2 = signal.numPts / 2;
deltaT = 1 / signal.samplingFrequency;
signal.timeSampled(1:signal.numPts) = ...
(-numPtsBy2:(numPtsBy2 - 1)) * deltaT;
signal.timeData = ...
A * cos(2*pi * (freq * signal.timeSampled + phase / 360));
end
end
```

**Listing 3**

The function Investigate_DSP_ML_XII() allows us to investigate a more realistic Hum signal generated using the SimulatePracticalSignal( ) code in Listing 4.

```
% Store in "C:\DSP_ML\Investigate_DSP_ML_XII.m"
function Investigate_DSP_ML_XII()
clear classes; % Force MATLAB / OCTAVE to see class change
freq1Harmonics = 24; freq2SS = 35; duration = 1.0;
doorBangPosition = 0.5; % Household noise half way during recording
practicalSignal = SimulatePracticalSignal(freq1Harmonics, freq2SS, ...
doorBangPosition, duration);
DisplayTimeSignal( practicalSignal , ...
'Signals with harmonics and sidebands and a door bang', '-');
%% Phase problem after FIR filtering
[filteredPracticalSignal, filterGain]...
= FIRFilterSignal(practicalSignal , 'Custom');
[frequency, freqData] = ...
FrequencySpectrum( filteredPracticalSignal , ...
'FIR filtered quadrupleSignal', '-');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
```

**Listing 4**

This SimulatePracticalSignal( ) code generates a Hum noise nuisance with some components having harmonics and others having side bands. There is also a door bang and a general background noise level.

```
% Store in "C:\DSP_ML\SimulatePracticalSignal.m"
function practicalSignal = ...
SimulatePracticalSignal(freq1, freq2, bangPosition, duration)
fastSampling = 256; phase0 = 0; phase90 = 90;
% Signal with harmonics freq1, 2 * freq1, 3 * freq1
signalFreq1H1 = TimeDomainSignal.TimeCentred(1.0, ...
freq1, phase0, duration, fastSampling);
signalFreq1H2 = TimeDomainSignal.TimeCentred(0.75, ...
2 *freq1, phase0, duration, fastSampling);
signalFreq1H3 = TimeDomainSignal.TimeCentred(0.25, ...
3 *freq1, phase0, duration, fastSampling);
% Now add an additional signal with side bands
signalFreq2Phase90 = SimulateSignalWithSideBands(1.0, ...
freq2, 5, phase90, duration, fastSampling);
practicalSignal = signalFreq1H1 + signalFreq1H2 + ...
signalFreq1H3 + signalFreq2Phase90;
% Simulate a "door bang"
% using a Blackman-Harris blip in middle of signal
bangPositionArray = ...
floor(bangPosition * practicalSignal.numPts) + 1;
% You will need the line "pkg load signal" inside startup.m
% for Octave to recognize the blackmanharris command
householdPulse = 4 * window(@blackmanharris, 11)';
% Place the "blip" into position
practicalSignal.timeData((bangPositionArray - 5) : ...
(bangPositionArray + 5)) = ...
practicalSignal.timeData((bangPositionArray - 5) : ...
(bangPositionArray + 5)) ...
+ householdPulse;
% Add some noise and display time and frequency spectrum
practicalSignal.timeData = practicalSignal.timeData ...
+ 0.1 * randn(practicalSignal.numPts , 1)';
DisplayTimeSignal(practicalSignal, ...
'Practical signal -- harmonics, side bands, bang and noise', '-');
FrequencySpectrum(practicalSignal, ...
'Practical signal -- harmonics, side bands, bang and noise', '-');
% Store in "C:\DSP_ML\SimulateSignalWithSideBands.m"
function signal = SimulateSignalWithSideBands(amp, ...
freq, ssFreq, phase, duration, samplingFreq)
signal = TimeDomainSignal.TimeCentred(amp, ...
freq, phase, duration, samplingFreq);
signalSS1 = TimeDomainSignal.TimeCentred(amp * 0.3, ...
freq + ssFreq, phase, duration, samplingFreq);
signalSS2 = TimeDomainSignal.TimeCentred(amp * 0.1, ...
freq - ssFreq, phase, duration, samplingFreq);
signal = signal + signalSS1 + signalSS2;
```

**Listing 5**

This function AdjustedFIRFilterSignal( ) adjusts out the time delay introduced by calculating the FIR output from the weighted average of P + 1 earlier input values.

```
% Store in "C:\DSP_ML\Investigate_DSP_ML_XII_B.m"
function Investigate_DSP_ML_XII_B()
clear classes; % Force MATLAB / OCTAVE to see class change
freq1Harmonics = 24; freq2SS = 35; duration = 1.0;
bangPosition = 0.5; % Household noise half way during recording
practicalSignal = SimulatePracticalSignal(freq1Harmonics,...
freq2SS, bangPosition, duration);
DisplayTimeSignal( practicalSignal , ...
'Signals with harmonics and sidebands and a door bang', '-');
%% Adjust filter algorithm - phase issues removed
[ filteredPracticalSignal , filterGain]...
= AdjustedFIRFilterSignal(practicalSignal, 'Custom');
[frequency, freqData] = ...
FrequencySpectrum( filteredPracticalSignal, ...
'Adjusted FIR filtered quadrupleSignal', '-');
plot(frequency, abs(filterGain) * max(real(freqData)), 'Linewidth', 3);
% Store in
% "C:\DSP_ML\@TimeDomainSignal/AdjustedFIRFilterSignal.m"
function [filteredSignal, filterGain] = ...% NEW CLASS METHOD
AdjustedFIRFilterSignal(signal, whichFilter)
if strcmp(whichFilter, 'Moving Average')
filterCoeffs = MovingAverageFIR(5);
else
filterCoeffs = CustomFIR(11);
end
numFIRCoeffs= length(filterCoeffs);
data = signal.timeData;
filteredData = zeros(1, signal.numPts);
for count = numFIRCoeffs : signal.numPts
filteredValue = 0;
for coeffs = 1 : numFIRCoeffs
filteredValue = filteredValue + ...
data(count - coeffs + 1) * filterCoeffs(coeffs);
end
% filteredData(count) = filteredValue;
filteredData(count - floor(numFIRCoeffs / 2)) = filteredValue;
end
% Insert modifications here
filteredSignal = signal; % Generate a TimeDomainSignal Instance
filteredSignal.timeData = filteredData;
% Frequency response = DFT(Time domain Impulse response)
FIRImpulseResponse = zeros(signal.numPts, 1);
FIRImpulseResponse(1 : numFIRCoeffs) = filterCoeffs;
filterGain = fftshift( fft(FIRImpulseResponse) );
end
```

**Listing 6**

The first *for loop* code is a straightforward application of a modulus operator to keep the index within the bounds of the data[] array. However, the second approach, using an AND mask to keep the array index in bounds, is computationally faster if the number of points in the array is exactly a power of two.

```
% Modifications in
% "C:\DSP_ML\@TimeDomainSignal\AdjustedFIRFilterSignal.m"
% Using a modulus operator to allow for loop indexing inside the FIR
% filter to use end values as estimates of unknown initial values.
data = signal.timeData;
filteredData = zeros(1, signal.numPts);
for count = 1 : signal.numPts
filteredValue = 0;
for coeffs = 1 : numFIRCoeffs
inDataPosn = mod( (count - coeffs), signal.numPts);
filteredValue = filteredValue + ...
data(inDataPosn + 1) * filterCoeffs(coeffs);
end
outDataPosn = mod( (count - floor(numFIRCoeffs / 2) - 1), ...
signal.numPts);
filteredData(outDataPosn + 1) = filteredValue;
end
% When numPts is a power of 2, a bitwise AND mask is a much faster
% way of implementing a modulus operation --
% EXCEPT THIS CODE IS NOT THE WAY TO DO BITWISE OPS
% && and & are C++ operators for logical and bitwise AND operations
% Here we must use & and bitand( ) for logical and bitwise AND ops
data = signal.timeData;
filteredData = zeros(1, signal.numPts);
arrayIndexANDMask = signal.numPts - 1;
for count = 1 : signal.numPts
filteredValue = 0;
for coeffs = 1 : numFIRCoeffs
inDataPosn = (count - coeffs) & arrayIndexANDMask; % NO!!!!!
filteredValue = filteredValue + ...
data(inDataPosn + 1) * filterCoeffs(coeffs);
end
outDataPosn = ...
(count - floor(numFIRCoeffs / 2) - 1) & arrayIndexANDMask; % NO!
filteredData(outDataPosn + 1) = filteredValue;
end
```

The signal, generated from 0 s to 1 s, is shown in **Figure 1a.** Each cosine wave at frequency f must be treated in the DSP world as two complex exponentials at frequencies +f and -f. We are expecting a six-peak spectrum with all components having positive amplitudes as shown in **Figure 1b**.


However, as seen in **Figure 1c**, that is not what we get. The side band amplitudes are all negative, a clear indication that somehow the phases of these signals have been modified from the planned 0 to 180 degrees while the phase of the main signal remains correct. The simulation code allows us to verify that moving the side band frequencies from 5 Hz away from the center frequency to 6 Hz, 8 Hz or any other even number, gives signals with the expected positive amplitudes.

Something is wrong with our DSP code, ONCE AGAIN! In our June Part 1 article, we generated the frequency spectrum from a signal sampled at 16 Hz using the built-in discrete Fourier transform *fft()* algorithm. We found that the results were not stored in the *frequencyDomain[]* array in the expected order, 0 Hz to nearly 16 Hz, but in a folded order: 0 Hz to nearly +8 Hz followed by -8 Hz to -1 Hz. A reminder on how to unfold the spectrum into the expected format after performing the *fft()* was provided by:
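For readers working outside MATLAB, the folded ordering and the fftshift() fix can be reproduced with NumPy's equivalent routines (the 16 Hz sampling rate matches the Part 1 example; the 3 Hz test tone is our own choice):

```python
import numpy as np

fs = N = 16                                  # 16 Hz sampling, 1 s of data
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 3 * t)                # a 3 Hz test tone

# Raw DFT bins come out in the folded order 0..+7 Hz then -8..-1 Hz ...
raw_freqs = np.fft.fftfreq(N, d=1 / fs)

# ... and fftshift() unfolds them into the natural -8..+7 Hz order.
freqs = np.fft.fftshift(raw_freqs)
spectrum = np.fft.fftshift(np.fft.fft(x)) / N
print(freqs[0], freqs[-1])                   # -8.0 7.0
# The 3 Hz cosine appears at both -3 Hz and +3 Hz with amplitude 0.5.
```

The unshifted `raw_freqs` array makes the folding explicit: element 7 is +7 Hz, element 8 is already -8 Hz.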

**DSP_ML-V:** *Applying fftshift() at least once every day will help keep frequency confusion at bay!*

We found that unless a second *fftshift()* operation was applied to fold the *timeData[]* array before the *fft()*, the phases of certain signals were shown incorrectly in their frequency spectra. These phase corrections have been applied here too but, as seen from Figure 1c, somehow they are not sufficient.

We are not experiencing a new DSP_ML, but yet another example of:

**DSP_ML-VI:** *By symmetry, a problem during DSP frequency domain analysis is bound to be introducing a totally equivalent, but probably not immediately obvious, problem during DSP time domain analysis.*

In the frequency domain we found that application of the *fft()* did not generate a spectrum 0 Hz to nearly 16 Hz, but one with a folded frequency order, starting with 0 Hz to nearly +8 Hz followed by -8 Hz to -1 Hz. Murphy’s Law DSP_ML-VI reminds us that there are symmetry properties between the time and frequency domains as both approaches characterize the same signal.

This symmetry property suggests that we need to do more than just fold the time signal using the *fftshift()* operator before applying *fft()*. We must also generate a time signal with a format similar to that of the frequency spectrum. This means we need simulated signals with values sampled from -0.5 s to nearly +0.5 s and not from 0 s to nearly 1 s. This new approach is coded in the new TimeCentred method given in **Listing 2**, Investigate_DSP_ML_XI_B( ). The new *timeData[]* series, **Figure 2,** now generates the expected six positive amplitude frequency points of Figure 1b.

**Figure 2**

This side-banded signal has the expected frequency spectrum, Figure 1b, when simulated from time -0.5 s to nearly +0.5 s; a simple shift in time from the times 0 s to nearly 1 s used in Figure 1
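The effect of the TimeCentred idea can be reproduced outside the TimeDomainSignal class. In this NumPy sketch (our own stand-in, using the article's 128 Hz sampling rate and the 47 Hz side band), the same fold-then-fft spectrum recipe gives a negative real amplitude for the 0-to-1 s record and the expected +0.5 for the time-centred one:

```python
import numpy as np

fs = N = 128
f = 47                                     # an odd-frequency side band

t_0to1 = np.arange(N) / fs                 # sampled 0 s to nearly 1 s
t_centred = (np.arange(N) - N // 2) / fs   # sampled -0.5 s to nearly +0.5 s

def real_amp(t):
    # Fold the TIME data with fftshift before the fft, as in the articles,
    # then read off the real amplitude at the +47 Hz bin.
    X = np.fft.fftshift(np.fft.fft(np.fft.fftshift(np.cos(2 * np.pi * f * t))))
    freqs = np.fft.fftshift(np.fft.fftfreq(N, 1 / fs))
    return (X / N)[np.where(freqs == f)[0][0]].real

print(real_amp(t_0to1))     # about -0.5: phase flipped by 180 degrees
print(real_amp(t_centred))  # about +0.5: the expected positive amplitude
```

Changing `f` to 42 makes both records give +0.5, exactly the even-frequency coincidence discussed in the text.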

There is an obvious question to ask: Why is the code broken now when generating this new signal and not before? The answer lies in:

**DSP_ML-IV:** *Nothing is as real / correct as it seems.*

In all the previous investigations we had a testing procedure that only involved signals with even frequency values: 2 Hz, 16 Hz and 42 Hz. Those signals have, by unfortunate happenstance, identical sample values whether sampled over the period -0.5 s to nearly +0.5 s or over the period 0 s to nearly 1 s. This hidden, unexpected property is no longer true for signals without even frequency values. The problem appears not only for 37 Hz and 47 Hz signals, but also for 41.5 Hz and 42.5 Hz signals. In fact, the problem exists for every single signal in the DSP universe, EXCEPT for the ones we happened to use for testing during the Part 1 and Part 2 articles!
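The happenstance itself is easy to confirm numerically. In this NumPy sketch (a stand-in for the class code), an even-frequency cosine produces sample-for-sample identical records over the two capture periods, while 47 Hz and 41.5 Hz tones do not:

```python
import numpy as np

fs = N = 128
t_0to1 = np.arange(N) / fs                 # samples over 0 s .. 1 s
t_centred = (np.arange(N) - N // 2) / fs   # samples over -0.5 s .. +0.5 s

def same_samples(f):
    # A 0.5 s shift changes the phase by 2*pi*f*0.5 = pi*f radians,
    # which is a multiple of 2*pi exactly when f is even.
    a = np.cos(2 * np.pi * f * t_0to1)
    b = np.cos(2 * np.pi * f * t_centred)
    return bool(np.allclose(a, b))

print(same_samples(42))    # True  -- even test frequencies hid the bug
print(same_samples(47))    # False -- the samples differ in sign
print(same_samples(41.5))  # False
```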

We can solve this problem by ensuring symmetry between the frequency and time domains during simulation. We will simulate signals that have been captured from time *–t/2* seconds to nearly *+t/2* seconds and display the signals from *–NyquistFrequency / 2* to nearly *+NyquistFrequency / 2*.

##### MURPHY’S DSP LAW XI

Ensuring time and frequency symmetry leads to the next question to answer: “Okay, we get the phases we anticipate by generating simulated signals sampled from time = -0.5 s to nearly time = +0.5 s. However, that is not going to happen in real life! There, we are going to first capture signals sampled at 0 s to nearly 1 s, then a second set from 1 s to nearly 2 s and so-on. What sort of spectra should we anticipate from those time shifted signals?”

We can get a potential answer about the expected frequency characteristics of these signals from the eleventh DSP Murphy’s Law:

**DSP_ML-XI:** *A shift in time will twist frequency components real fine!*


Suppose you have just captured samples of a cosinusoidal audio signal, Acos(2πft + φ), with amplitude A, frequency f and phase φ. If we start capturing more signals from the same signal source after a time delay T_D, then the new signal is the original signal with an extra phase φ_EXTRA = 2πfT_D:

Acos(2πf(t + T_D) + φ) = Acos(2πft + φ + φ_EXTRA)

We can also predict that their frequency spectra will look very similar to each other. The difference will be that the real and imaginary frequency component values of the second spectrum will be rotated through an angle φ_EXTRA that depends on both the time delay T_D and the frequency f. By rotated, we mean: any completely real frequency component A, or completely imaginary frequency component jB, in the original signal will now appear with both real and imaginary parts: C + jD.

The half-second delay between the simulated signals in Figure 1a and Figure 2 means an extra phase shift of the 42 Hz signal by φ_42 = 2π(42 × 0.5) = 21 × (2π). Since 42 × 0.5 is a whole number, the phase shift is a multiple of 2π. This means the samples of this delayed frequency component will have the same phase as the original signal—so their spectra will be identical. However, the extra phase shift for the 47 Hz signal is:

φ_47 = 2π(47 × 0.5) = 23 × (2π) + π

The second term, π radians phase shift, means a 180-degree shift from the original signal, so its amplitude will be switched from positive to negative in its spectrum. Thus, DSP_ML-XI exactly explains the difference seen between the expected and actual amplitudes of the 47 Hz signal in Figures 1b and 1c.
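The arithmetic behind DSP_ML-XI can be spot-checked in a couple of lines (Python here; the 0.5 s value of T_D is the delay between Figures 1a and 2):

```python
import numpy as np

TD = 0.5                                  # half-second delay between records

def extra_phase(f):
    # phi_extra = 2 * pi * f * TD, reduced modulo 2*pi
    return (2 * np.pi * f * TD) % (2 * np.pi)

print(np.cos(extra_phase(42)))   # ~ +1: still in phase, spectrum unchanged
print(np.cos(extra_phase(47)))   # ~ -1: 180 degrees out, amplitude flips sign
```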

##### VI AND XI IDEAS LEVERAGED

Personally, we prefer to analyze the impact of time shifts on frequency spectra in a different manner based on a more positive variation of DSP_ML-VI:

**DSP_ML-VIC:** *By symmetry, any idea found true and useful during DSP frequency domain analysis is bound to be true during DSP time domain analysis—if only you can recognize that you are in a situation where the idea might be useful!*

The DFT of the complex exponential time series exp(j2πft) is a single spike at location f. If we multiply the time series by a second complex exponential exp(j2πpt) we get the signal:

exp(j2πft) × exp(j2πpt) = exp(j2π(f + p)t)

This has a DFT that is a single spike at location f + p. Applying this idea to all components in a time signal means that multiplication of a time signal by a complex exponential of exp(j2πpt) will cause all its spectral components to be upshifted by a distance p. A component at -42 Hz will be upshifted to a higher frequency (-42 + p) Hz, and one at + 42 Hz shifted to a higher frequency (42 + p) Hz.

So, multiplying a time series by a complex exponential exp(j2πpt) causes a shift in its frequency components by an amount p. Now flip the idea on its head, exchanging the words time and frequency as suggested by DSP_ML-VIC. If we shift all the time signal components by time TS, then all the frequency components will end up being multiplied by the complex exponential exp(j2πfTS). This multiplication leaves the spectral amplitudes unchanged with just the phases modified. This provides an alternative explanation of the differences between the planned and actual simulated signal spectra, Figure 1b and Figure 1c.
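Both directions of this symmetry can be checked with NumPy (the 20 Hz tone and 7 Hz shift are arbitrary illustrative values): multiplying by a complex exponential moves every spectral spike up by p, while a circular time shift leaves the spectral magnitudes untouched and only twists the phases.

```python
import numpy as np

fs = N = 128
t = np.arange(N) / fs
f, p = 20, 7                               # illustrative frequencies (Hz)

x = np.exp(2j * np.pi * f * t)             # single spike at +20 Hz
shifted = x * np.exp(2j * np.pi * p * t)   # multiply by exp(j 2 pi p t)

peak_bin = lambda s: int(np.argmax(np.abs(np.fft.fft(s))))
print(peak_bin(x), peak_bin(shifted))      # 20 27: the spike moved up by p

# The dual statement: shifting in TIME keeps |spectrum| fixed.
c = np.cos(2 * np.pi * f * t)
c_shifted = np.roll(c, 5)                  # circular 5-sample time shift
print(np.allclose(np.abs(np.fft.fft(c)),
                  np.abs(np.fft.fft(c_shifted))))   # True
```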

What happens when you move to a real-life scenario where the noise level in the real signal is not as bad as expected?

When we simulate things, we often deliberately work with worst-case scenarios, for example, making the noise level on simulated signals very high to emphasize a point. This was done in our August Part 2 article when evaluating the signal-to-noise ratio (SNR) of down-sampled signals to show the need to apply well-designed FIR (finite duration impulse response) digital filters. So, what happens in real life when the environment actually turns kind on you? As expected, a new DSP variant of a standard Murphy’s Law appears:

**DSP_ML-XII:** *The further you get into a project, the more likely you are to find that you overlooked some important DSP signal behavior earlier.*

**Listing 3**, Investigate_DSP_ML_XII(), is code we are using to better simulate the real-life Hum noise nuisance environment we expect to measure inside a home. **Figure 3a** shows the “captured signal” generated by **Listing 4**. As can be seen from the frequency spectrum, **Figure 3b**, there is a signal at 24 Hz with harmonics (overtones) at 48 Hz and 72 Hz, and a separate, unrelated noise component at 35 Hz with two side bands. The large spike at time 0 s represents the sound of a door slamming. This narrow signal in the time domain has a very broad frequency spectrum. A small amount of white noise (the same average noise power at all frequencies) has been added as background noise to the simulation to further improve the realism.

**Figure 3c** shows the result of applying the 10th order custom FIR filter from Article 2. The filter response is not what we expected. Our filtering code removed the high-frequency noise BUT somehow twisted the frequency spectrum. The shape of that twist brings back memories of DSP_ML-XI: time shifts, frequency component twists.

**Figure 3**

(a) The more realistic Hum signal simulation and (b) its frequency spectrum. (c) The output of the FIR filter is both high frequency filtered and phase distorted.

##### UNTWISTING THE FREQUENCY

We should get into the practice of doing a detailed comparison between the input and the expected and actual output signals after processing with DSP algorithms. **Figure 4a** compares the actual Hum signal before and after FIR filtering. We can clearly see the details of the time shift in the signal which we predicted when thinking about the twists appearing in the frequency spectrum, Figure 3c.

In **Listing 5**, Investigate_DSP_ML_XII_B.m, we adjusted the FIR filter algorithm to remove this apparent time shift by applying:

**DSP_ML-IIIB:** *When your simulation results don’t pan out, you’ve probably calculated something just one, or perhaps many, sample points out!*

The original FIR filter algorithm in Article 2 placed the results in the *filteredData[]* array location *count*, corresponding to the location of the newest input value used during the FIR operation.

```
for count = numFIRCoeffs : signal.numPts
    filteredValue = 0;
    for coeffs = 1 : numFIRCoeffs
        filteredValue = filteredValue + ...
            data(count - coeffs + 1) * filterCoeffs(coeffs);
    end
    filteredData(count) = filteredValue;
end
```

The new version places the output at location (count - floor(numFIRCoeffs / 2)), leading to **Figure 4b** with all the phase issues removed.

```
filteredData(count - floor(numFIRCoeffs / 2)) = filteredValue;
```

Moving the position of the filtered result in the output array makes sense. FIR filters essentially perform a weighted average. An average value is more representative of the value at the point (count - floor(numFIRCoeffs / 2)) in the middle of a sequence than at the sequence’s end point, count.
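The re-centring idea can be sanity-checked outside the class code. In this NumPy sketch we stand in for the custom filter with a simple 11-tap moving average (an assumption of ours, not the article's CustomFIR coefficients); advancing the causal output by floor(numFIRCoeffs / 2) samples visibly re-aligns it with the input:

```python
import numpy as np

fs = N = 128
M = 11                              # 11 taps, i.e. a 10th order FIR filter
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 5 * t)       # a slow tone the filter should pass
h = np.ones(M) / M                  # moving-average stand-in coefficients

# Causal FIR: y[n] = sum_k h[k] x[n-k]; fully valid from n = M-1 onward.
y = np.convolve(x, h)[:N]

# A symmetric M-tap FIR delays its output by floor(M / 2) samples.
delay = M // 2
aligned = y[delay:]                 # advance output to undo the delay

err_raw = np.max(np.abs(y[M:N] - x[M:N]))
err_fix = np.max(np.abs(aligned[M:N - delay] - x[M:N - delay]))
print(err_fix < err_raw)            # True: re-centred output tracks the input
```

The residual `err_fix` is just the moving average's passband attenuation at 5 Hz; the much larger `err_raw` is dominated by the uncorrected 5-sample phase lag.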

Applying the new code generates Figure 4b where the phase twist present in Figure 3c has been removed. However, removing the twist has made another error present in the result more apparent. If you compare the frequency information in the ideal Figure 3b and the new result, Figure 4b, additional, unwanted ripples surrounding many of the frequency peaks are clearly visible in Figure 4b. These extra ripples are particularly visible around the 35 Hz peaks (red).

**Figure 4**

(a) The original audio, shown in black is delayed in time, red signal, after filtering. The filtering also introduces an initial starting transient (arrow). (b) Correcting the time delay removes the phase changes shown in Figure 3c but makes other ripple distortions more obvious as compared to Figure 3b.

This is clear evidence that the following Murphy’s Law is still active:

**DSP_ML-XIIB:** *There are always more DSP things going on than first meets the eye!*

This basically says: If we can see something not happening the way we expect in the frequency spectrum of a signal, then we know that something equivalently strange—and perhaps not immediately obvious—must be happening to the same signal in the time domain too.

##### FIXING FILTER TRANSIENTS

A problem with using a Pth order FIR filter is that it takes P + 1 data values to generate the first valid output. That is why the nested *for* loops in Listing 5 were written with the first output value calculated for the time count = numFIRCoeffs, not from count = 1:

```
for count = numFIRCoeffs : signal.numPts
    for coeffs = 1 : numFIRCoeffs
        % Insert rest of loop code here
    end
end
```

The first numFIRCoeffs – 1 output values of the filter can’t be calculated as there are no timeData[count] values for -numCoeffs < count < 1. In Listing 5, the first filter output values have been set to zero. This introduces the filter’s starting transient shown in Figure 4a leading to the spectral ripples seen around the sharp peaks in Figure 4b spectrum. There are many approaches that can be taken to fix this starting transient. No approach is perfect, and the different approaches have different successes depending on the characteristics of the data being filtered.

**Approach 1:** *Using zeros as estimates of the data missed before starting to measure the input signal.*

This approach codes the assumption that timeData[count] = 0 for -numCoeffs < count < 1. There are suggestions in the engineering literature that using noisy extra values rather than zeros makes the effect of incorrect transients less noticeable at later processing stages.

This approach MAY work well if the initial timeData values are close to zero, meaning the noise nuisance was initially quiet. That is not true for this example. A hand-waving, reasonability argument for still using this less-than-perfect method and expecting to see a potential improvement in the output signal can be made on the following grounds: before, we had numCoeffs - 1 output values that were totally incorrect; now, by using more and more of the real data values, the initial filter outputs should be improving (not fixing) the overall accuracy of the results, with the caveat: hopefully, but no guarantees.

**Approach 2:** *Use copies of the final data values measured as initial estimates of data missed.*

Sometimes the data you have captured has stationary characteristics, meaning it looks similar everywhere in time. In this case, the final data values you have stored for (numPoints - numCoeffs) < count ≤ numPoints MAY be a better estimate for the data values -numCoeffs < count < 1 than setting those initial unknown values to zero or small noisy values.

**Listing 6** shows two approaches for handling the *for loop* array indexing to use the end values as estimates for the missing data values -numCoeffs < count < 1.

The first is a straightforward application of a modulus operator to keep the index within the bounds of the *data[]* array. However, when implemented on actual hardware, this is a computationally slow approach. The modulus operator involves division, truncation, multiplication and subtraction operations: mod(a, n) = a - n × floor(a / n).

A much faster approach is possible for array index bound checking if the number of points in the array is exactly a power of two. Prior to the loop we set up a mask:

```
arrayIndexMask = signal.numPts - 1;
```

In our example, numPts = 256 (0x100) so that arrayIndexMask = 255 (0xFF). The bitwise AND operation &

```
inDataPosn = ...
    (count - coeffs) & arrayIndexMask;
```

should keep only the last 8 bits of the inDataPosn index, 0 to 255. This ensures that the index inDataPosn + 1 remains within the MATLAB array bounds 1 to numPts, but avoids the time-consuming mod operation.
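The arithmetic equivalence of the mask and the mod is real and quick to verify, for example in Python, where % always returns a nonnegative remainder and & is guaranteed to be a bitwise AND on integers (in C, by contrast, -3 % 256 is -3, so the mask form is also the safe one there):

```python
N = 256            # number of points, a power of two
mask = N - 1       # 255, i.e. 0xFF

# For power-of-two N the AND mask reproduces mod exactly, including for
# the small negative indices produced near the start of the FIR loop.
for i in range(-10, 300):
    assert (i % N) == (i & mask)
print("mod and mask agree")
```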

The only problem: the code does not work. **Figure 5** shows the impact of forgetting:

**DSP_ML-IV:** *During DSP, nothing is as CORRECT as it seems!*

When we coded the & operation…

```
inDataPosn = ...
    (count - coeffs) & arrayIndexANDMask;
```

… we thought we were being very careful in distinguishing between logical AND operations, (using && to manipulate TRUE and FALSE values) and bitwise AND operations (using & to manipulate bit patterns such as 0x1ABC and 0xD743).

However, such careful distinctions only apply when programming in C/C++ and many other languages. With MATLAB and Octave, an entirely different approach is needed. There we must use the operator & when operating on TRUE and FALSE values, and bitand() for bitwise AND operations. And, thanks to DSP_ML-IV, we must use this function in slightly different ways with Octave and MATLAB.

```
% Bitwise AND in Octave:
inDataPosn = ...
    bitand((count - coeffs), ...
        arrayIndexANDMask);

% Bitwise AND in MATLAB:
inDataPosn = ...
    bitand((count - coeffs), ...
        arrayIndexANDMask, 'int16');
```

**Figure 6** shows the application of this “use-last-data as initial estimates” approach on our simulated Hum data using the proper bitand() code. The results are a perfect spectrum—no starting filter transient effects and no strange phase effects. This result is a bit perplexing as our data, Figure 4a, by no stretch of the imagination fits the criterion “important features are in the center surrounded by a mundane semi-featureless, near-zero, background”. There is useful, meaning non-identical, information everywhere in our “Hum” signal.

Unanticipated DSP perfection should make us suspicious and immediately think of this Murphy’s Law variation:

**DSP_ML-XIIB:** *When you find DSP perfection, you are either lucky and have found a very special case that will make your customers congratulate you on your skill and insight—or your DSP world will shortly fall apart.*

Mike found such DSP perfection once when analyzing a specific vibration analysis problem for an industry customer. He has since spent many years trying to find a practical application to introduce that same “perfection” into his own biomedical engineering research!

**Approach 3:** *Capture more data than you need, filter all the data, and then keep all the undistorted results after the initial transient.*

We do have a special case on our hands with audio processing. If we have a fast enough processor, then we can filter the data as fast as we capture it. Then we can display snippets of the filtered data rather than snippets of the original. That way we have one early initial transient at the very start of our data capture which we can ignore for later analysis.

However, that’s a very power-hungry approach to use on a battery-operated audio capture system. What if there is a weekend’s worth of Hum audio data and all that we are interested in is the changing characteristics of small snippets of data captured at 10-minute intervals? In that case, a potential solution is to occasionally process about 1.5 × numPoints of data and use the last numPoints of the filtered data for the next processing stage. This again allows the filter’s starting transient to be avoided, or worked around, when generating an output signal that we need to analyze in detail.
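The “capture 1.5 × numPoints, keep the last numPoints” idea can be checked in a few lines. This Python sketch is ours, with a simple 5-tap moving-average FIR standing in for the article's filter; it confirms that the kept samples match what filtering the full record would have produced:

```python
# Filter a 1.5*numPoints chunk, discard the first half-record of
# output (which contains the filter's starting transient), and
# keep the last numPoints samples.
N_TAPS = 5

def fir_moving_average(x):
    # Zero initial conditions: the first N_TAPS-1 outputs form
    # the filter's starting transient.
    return [sum(x[max(0, n - N_TAPS + 1):n + 1]) / N_TAPS
            for n in range(len(x))]

numPoints = 200
signal = [((n * 37) % 100) / 100.0 for n in range(10 * numPoints)]

# Reference: filter everything from the very start of capture.
full_output = fir_moving_average(signal)

# Snippet: grab 1.5*numPoints ending at sample 2000, filter it,
# and keep only the tail.
end = 2000
chunk = signal[end - (3 * numPoints) // 2:end]
kept = fir_moving_average(chunk)[-numPoints:]

# The kept samples agree with full-record filtering; the
# half-record head we discarded absorbed the transient.
assert all(abs(a - b) < 1e-12
           for a, b in zip(kept, full_output[end - numPoints:end]))
```

The half-record of extra data only needs to be longer than the filter's memory (its number of taps), so in practice far less than 0.5 × numPoints of overlap may suffice.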

##### TRIPPING OVER ANOTHER ML

After finding solutions to so many filtering problems, you, as the volunteer Hum consultant, go home confident. You are ready to work on your suspicion that the noise is coming from some machine (somewhere) with specific frequency characteristics. You anticipate being able to track the strength of the acoustic signature of the 35 Hz signal, with its side bands, over the many months of legacy data.

When doing something similar, we found potential seasonal variation in the Ranchlands’ Hum data we are working on. We have started to think about asking people questions such as: Is your noise nuisance louder when the weather is hot or the wind is blowing from a certain direction? If it is worse when the wind is blowing from the west, look for a noise source somewhere to the west of the house.

Can we match these Hum signals, through a DSP correlation-based analysis, with a Hum in another house in a different part of the city? If that householder hears the noise more when the wind is from the southwest, can we then triangulate, using sound direction between a number of houses, to better identify the region the sound is coming from?

Then you come back the following day. The frequency of the engine you suspect is causing the noise nuisance problem has changed from 35 Hz to 30.5 Hz. No big deal. The engine rotation probably has just slowed after coming under a heavier load. It should be easy to show that it is the same engine as the spectrum will be the same, just shifted slightly in frequency by 4.5 Hz.

However, as can be seen in **Figure 7**, the spectral sharpness has degraded with just this small shift in frequency. It could be argued by a technical witness in court that this is a totally new, and unknown, noise source because the spectrum looks so different! Just what new DSP Murphy’s Laws are there to overcome before you’re able to confirm that the new spectral signature is from the same engine that produced the earlier spectra?

**Figure 7**

It just takes a small change in one signal’s frequency, from 35 Hz to 30.5 Hz, to introduce spectral distortions in the output signal even after implementing an FIR filter with time delays and initial transients removed.
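The degradation seen in Figure 7 is the signature of spectral leakage: a 35 Hz tone that lands exactly on a DFT bin produces one sharp line, while a 30.5 Hz tone falling halfway between bins smears its energy across the spectrum. A Python sketch of our own (assuming, for the demonstration, 1 second of data at 256 samples/s, so bins fall on whole hertz):

```python
import cmath, math

N = 256  # one second at an assumed 256 samples/s: bin k = k Hz

def dft_mag(x):
    # Direct DFT magnitude, bins 0 .. N/2-1 (O(N^2), fine for a demo).
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

def tone(f):
    return [math.cos(2 * math.pi * f * n / N) for n in range(N)]

on_bin  = dft_mag(tone(35.0))   # bin-centered: one sharp line
off_bin = dft_mag(tone(30.5))   # half a bin off: energy smears

# The centered tone puts essentially all its energy in bin 35 ...
assert abs(on_bin[35] - N / 2) < 1e-6
assert max(m for k, m in enumerate(on_bin) if k != 35) < 1e-6

# ... while the shifted tone's tallest bin is noticeably lower,
# and a neighboring bin carries substantial leaked energy.
assert max(off_bin) < 0.75 * (N / 2)
assert sorted(off_bin)[-2] > 0.3 * (N / 2)
```

Nothing about the engine changed except its speed; the “new” spectral shape is an artifact of where its frequency falls relative to the analysis bins.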

There is an old medical expression that says: *“If in pain, apply a wet cloth and take two aspirins and call back the Doctor in the morning.”* In our next article, Murphy’s Laws in the DSP World Part 4, we will see that in our case the saying becomes: *“When DSP Murphy’s Laws cause problems, apply 2 data windows, re-compute and call the developer in the morning.”*

**ML-DSP Project Files:** Cut and paste the listings from the article PDF into Octave and MATLAB .m scripts, or download them from the *Circuit Cellar* article code archive.

**Installing Octave:** See *Circuit Cellar* 335 for details, or go to https://ftp.gnu.org/gnu/octave/windows/ to download.

**Installing MATLAB:** See *Circuit Cellar* 335 for details, or go to https://www.mathworks.com/products/MATLAB.html?s_tid=hp_products_MATLAB

Mathworks | www.mathworks.com

GNU Octave | www.gnu.org/software/octave

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • SEPTEMBER 2018 #338
