Sensors and Synthesis
Musical instruments such as the piano allow musicians to play in different keys on a single instrument. In contrast, bamboo flutes are designed for only one key, so flute players must own a different flute for every additional key in which they want to play. Learn how these three Cornell students built a PIC32 MCU-based electronic flute that reduces the need to own multiple flutes by incorporating two buttons that let the player change the key and octave.
Our goal for this project was to build an electronic flute that can play in any key. The first step in this project was understanding the design of a bamboo flute, which differs greatly from Western concert flutes. A typical bamboo flute can be played in only one key. It has a total of seven holes, six of which are used to play different notes. The seventh hole is for the inlet of wind (the player’s breath). The strength of air blown into it determines the octave of the note.
If the strength of the air blown exceeds a certain threshold, the flute produces sound in a higher octave; otherwise, it produces sound in a lower octave. The arrangement of the player’s fingers over the six holes distinguishes the different notes. Whether a hole is open, half-covered or fully covered by a finger also differentiates the note being played. For example, if a fully covered hole generates a note in the C major key, the half-covered hole generates a note in the C minor key.
Our electronic flute, shown in Figure 1, is built to be comparable in size, design and spectral dynamics to a typical bamboo flute. We simulated the six finger holes of a typical flute using capacitive touch sensors. A seventh hole holds the microphone and simulates a flute’s blow hole. Physically, these switches are pieces of copper tape connected to wires. We used a total of 13 capacitive touch sensors—two for each hole and one for the “chin sensor.” The chin sensor determines when someone is playing the flute. It is positioned directly under the microphone hole, and needs to be touched while playing. The microphone detects whether air is blown into the flute, indicating that a sound should be produced. Note that the words button, switch and sensor used throughout this article refer to the same general mechanism.
At the heart of our electronic flute is a PIC32 microcontroller (MCU) from Microchip Technology, which reads the inputs from the copper tape buttons and microphone to produce the correct notes and sound. The sound is output to a speaker after passing through a digital-to-analog converter (DAC).
A detailed breakdown of the hardware components of our electronic flute is shown in the block diagram in Figure 2. The 13 copper-tape touch sensors, the microphone, the key control switch and the octave control switch are inputs into the PIC32 MCU. The outputs from the PIC32, after running direct digital synthesis, are sent through the DAC to the amplified speaker to produce the flute sounds. Two buttons are used to control the key and octave of the flute, which get displayed on the TFT Display.
One unique feature on the PIC32 is the Charge Time Measurement Unit (CTMU). Our first step when designing our electronic flute was determining how to use the CTMU to create the capacitive touch sensors. The CTMU peripheral is available for use on all the ADC pins of the MCU. It is essentially a settable current source that can measure resistance, capacitance and more. To understand how we used a settable current source to measure capacitance, recall that the formula for capacitance is C = q/V, where C is the capacitance, V is the voltage across the capacitor and q is the charge. Charge can also be denoted as the product of the current (I) and time (t). Hence, the formula can be rewritten as C = (I × t) / V. Using the CTMU, if the pin is provided with a fixed current for a fixed amount of time, we get C as inversely proportional to V. This logic is used to measure the voltage on an ADC pin, which changes when the pin is touched and released. This is how CTMU can be used to create touch sensors.
The PIC32 does not provide enough ADC channels for the 13 touch sensors in our design, so we chose to connect the touch sensors to two analog multiplexers (Figure 2). The multiplexers enable us to connect the touch sensors to only one ADC channel and five other GPIO pins, which saves plenty of pins on the PIC32. We used the CD4051xB analog 8×1 multiplexer from Texas Instruments. With this multiplexer, if the select lines are 000, for example, then the input connects to the output 0. And if the select lines are 111, then the input connects to the output 7. Its chip-select line has the ability to turn off the entire chip, so that none of the outputs are connected to the input. This feature was useful to our project, since the outputs from the two different multiplexers are connected to one ADC pin, meaning that one chip is always off.
The breath-detecting microphone we used is the Electret Microphone Amplifier from Adafruit. In our circuit, the microphone is connected to a peak detector circuit to obtain the absolute value of the signal. The absolute value of the signal is needed to get a proper ADC reading. When we tested this microphone, the ADC readings obtained from its circuit were either in the range of the 500s (when no air was blown into it) or 900s (when air was blown into it). We later realized this observation was probably due to a calibration issue, since the ADC readings should increase linearly with the amount of air blown into the microphone. Because of the binary behavior of the microphone that we first observed, we used the microphone only to control the octave being played by the flute.
The uniqueness of our electronic flute design comes from the two buttons (switches SW1 and SW2 in Figure 3), which can be used to adjust the key and octave of the flute. When the user presses SW1, the key goes up—for instance, from B to C. When the user presses SW2, the octave goes up by one—for instance, from C3 to C4. Both buttons wrap around to the lowest key and octave. The TFT display in our design lets users see the current key and octave being played.
The last main hardware component in our circuit is the DAC. As the name suggests, it converts the digital signal of the sound generated by the MCU into an analog signal, so it can be fed into the amplified speaker. The schematic in Figure 3 shows all the hardware components in our design. The schematic for the PIC32 development board was first created by Sean Carroll. A link to more information about Sean Carroll’s development board can be found on Circuit Cellar’s article materials webpage. The 13 touch sensors are indicated by the 13 circles on the left side of the schematic.
The entire software design was coded in C and consists of four different threads and an interrupt service routine (ISR). The ISR uses additive synthesis and direct digital synthesis to output the sound to the DAC. The Display Thread displays the current key and octave that the flute is playing. The ADC/CTMU Thread runs CTMU and reads the ADC value of the switches. The Frequency Thread determines what note to play. Last, the Debouncing Thread debounces the two buttons that change the octave and key of the flute. The Display Thread and the Debouncing Thread are straightforward to implement, so they will not be discussed in detail here.
ADC/CTMU Thread: The main purpose of the ADC/CTMU Thread is to read the ADC values of the 13 capacitance touch sensors, to determine which sensors are being pressed. As noted in the previous section, we used two multiplexers connected to the ADC channel AN11. The chip-select line of the multiplexers ensures that only one multiplexer is turned on at a time. A threshold of about 90% of the full ADC value is used to determine if a finger is touching any of the 12 buttons. The thread starts off by setting the ADC channel to AN11. Next, the CTMU is turned on, and a for-loop measures the voltage on each of the capacitive touch sensors, using CTMU. In this for-loop, the thread also determines which multiplexer to turn on using chip-select. The first eight values in the for-loop correspond to the first multiplexer, and the rest correspond to the second multiplexer.
For the CTMU to work correctly, the following sequence of events must occur in the ADC/CTMU Thread:
- 1) The internal discharge switch is closed to drain the external circuit of any charge, by connecting the ADC channel to ground.
- 2) The internal discharge switch is opened, and the internal charge switch is closed for 2 µs, to allow charge to build up.
- 3) During the charging period, the interrupts corresponding to the ISR are turned off to ensure that the program was not interrupted while charging. Since the interrupts are only turned off for 2 µs, the direct digital synthesis (DDS) that takes place in the ISR isn’t affected.
- 4) After the 2 µs, the internal charge switch is opened and the ADC value is read.
This sequence of events is placed inside a for-loop in our code so that CTMU can run for all 13 capacitive touch sensors.
There’s one more important aspect of the ADC/CTMU Thread. Outside the for-loop that runs CTMU, the thread sets the ADC channel to AN3, enabling the ADC value of the microphone to be read. The corresponding CTMU circuit is shown in Figure 4, where S1 is the internal charge switch and S2 is the internal discharge switch.
Frequency Thread: One of the main functions of the ISR is to run direct digital synthesis (DDS). We used DDS to produce sound waves from the PIC32. DDS works by creating a sine table at one sample frequency that contains the signal’s amplitude values at evenly spaced phase values. Then, by moving through the sine table at different rates, different frequencies can be produced. Because every note has a different frequency, the following equation shows how to generate different notes using one DDS sample frequency:

Fout = (inc × Fs) / 2^32

In this equation, Fout is the frequency to be produced, and Fs is the sample frequency. Rearranging this equation, we can solve for the phase increment value, inc = Fout × 2^32 / Fs, which determines the rate of movement through the sine table. In our ISR, every time an interrupt occurs, a 32-bit phase accumulator variable is incremented by the phase increment value. The top byte of the phase accumulator variable is then used as the reference index for the sine table, such that incrementing through one sine table using different-sized increments produces different frequencies.
The Frequency Thread determines which note to play, as follows. Outside the Frequency Thread, each frequency value from A2 to G4# is defined. A 12 × 7 matrix holds 12 different keys, from A to G#. Inside the Frequency Thread, the program first checks whether the chin sensor (indicating whether someone is playing the flute) is being touched. This equates to an if-statement that checks whether the 13th bit of the button integer is set to 1. The rest of the thread consists of a switch-statement that determines whether to play the lower or higher octave of Do, Re, Mi, Fa, So, La, Ti, Do in any key. An exact combination of capacitive touch sensors must be pressed to play a given note, such as Do. The combination is represented by 12 bits, since each simulated “hole” in the flute comprises two capacitive touch sensors, and it is identical to the combination of holes that would be covered on a bamboo flute.
The most difficult part of the software design was sound synthesis, because of the fine tuning required for additive synthesis to create a flute-like sound. Additive synthesis is a technique that sums sine waves to mimic the natural sound spectrum of an instrument. Our main challenge with additive synthesis was determining how many harmonics to use, and how to adjust the amplitudes of those harmonics to create the most realistic sound. Another challenge was experimenting with modulation to create a vibrato effect.
The first step in our process was understanding the spectrum of harmonics that a flute makes. As a starting point, we used a sound spectrum of a flute, published by the University of New South Wales, which showed that a flute has a series of peaks at f, 2f, 3f, 4f and so on. The amplitudes of the peaks for 2f, 3f, and 4f are approximately -15 dB, -5 dB and -20 dB, respectively, from the fundamental f.
We first implemented the sound synthesis as stated above in MathWorks’ MATLAB. We started with a fundamental and four harmonics, and noticed that the sound didn’t improve much when we added more harmonics to our additive synthesis. Moving the other way, the sound also didn’t change much with three harmonics, but did sound less flute-like with two harmonics. So, we decided that a fundamental and three harmonics was optimal. We then ran a variety of tests changing the amplitude of the harmonics. Figure 5 shows one test that was close to the flute sound we wanted. The ratio of our amplitudes changed slightly from our initial test, to create what we considered a more flute-like sound.
After fine tuning the amplitudes of our three harmonics, we still were not satisfied with the sound being generated. For this reason, we used amplitude modulation (AM) sound synthesis to create a vibrato effect. AM sound synthesis works by multiplying the main wave function by a wave with a very low frequency. Therefore, we multiplied a cosine function with a frequency of 1 Hz by the sum of the four harmonics, as shown by the following equation:

s(t) = cos(2π · (1 Hz) · t) × [A1 sin(2πft) + A2 sin(2π · 2ft) + A3 sin(2π · 3ft) + A4 sin(2π · 4ft)]
After we finished testing in MATLAB, we implemented our additive synthesis in the ISR and the PlaySound method. The PlaySound method first determines the correct note to play by adjusting the phase increment value for the fundamental frequency. The increment values for the three harmonics are 2, 3 and 4 times greater, since the harmonics are at 2f, 3f and 4f from the fundamental frequency, f. The phase increment for the modulating wave never changes, because its frequency is always 1 Hz. The ISR then uses the constantly updating increment values from the PlaySound method to implement additive synthesis and produce the final sound wave. Figure 6 shows the sound spectrum produced by playing our flute after implementing this code.
Overall, we are satisfied with the performance of our electronic flute. With a total cost of $38.64, this project was an inexpensive way to explore MCU design and sound synthesis. One of our authors, Parth, is a flute player, and he believes the sound is realistic. We encourage readers to listen to Parth play the Theme from Titanic on our electronic flute. You can watch and listen to this on the YouTube video of our project below. You can judge for yourself how our electronic flute sounds compared to a real flute!
We learned some lessons while working on this project. First, with additional time to improve the design of our electronic flute, we would enhance the flute’s volume dynamics and sound quality. We didn’t use the microphone’s ADC readings to control the volume of the flute’s sound. If implemented correctly, the ADC readings of the microphone should increase linearly with how much air is blown into it. We could use this relationship to linearly increase the flute’s sound volume. When testing our sound synthesis design, we also tried to base our design on the Wind Instruments Synthesis Toolbox. Due to the complexity of the toolbox and our time constraints, we did not implement its algorithm. If we had more time, we believe this toolbox would have helped us create an even more realistic sound.
For detailed article references and additional resources go to:
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • NOVEMBER 2019 #352