Music from Micros
In this project article, learn how these three Cornell students built a miniature recording studio using the Microchip PIC32. It can be used as an electric keyboard with the additional functionality of recording and playing back multiple layers of sounds. There is also a microphone that can be used to make custom recordings.
Our project was motivated by combining our passion for music and the knowledge we gained from an introductory microcontrollers (MCUs) course at Cornell University [1]. Inspired by electronic keyboards and the idea of making a song by looping multiple tracks together, we designed and built the PIC32 Recording Studio (Figure 1). A user can record a few seconds of music on various instruments and layer them over one another. For example, a user can first record a basic drum beat, then play that drum beat back and add a bass track to the “song” they are currently building. Continuing in this fashion, a user can build a complete song, and, using the provided microphone, can also add personalized sounds (such as their own voice) to the track. With visual feedback through an LCD screen on the Recording Studio, the user can easily switch modes and playback/recording options.

USER INTERFACE
The Studio can be controlled through a TFT LCD screen and two menu buttons. The TFT screen serves as a menu interface and provides some visual feedback for the user. The screen displays a menu that allows the user to toggle between playable instruments, to record/stop recording and to manage the recording through playback and delete options. As the toggle button is pressed, an arrow is cycled through all the menu options, indicating the current operating mode of the Studio. This is demonstrated in Figure 2 where the menu arrow is at “piano,” meaning the Studio is currently in piano mode. The various modes displayed on the menu are Piano, Guitar, Bass, Drums, User, Playback and Delete.
If the menu arrow is at one of the four “instrument” modes, the user can play one octave of that instrument on buttons similar to those on an electric keyboard. To start or stop playback, or to delete the current recording, the user moves the arrow to that menu option and presses the Select button. To record the instruments being played, the user presses a designated Record button. While recording or playing back a recording, a progress bar appears on the screen, so the user can track how far into the recording they are. We found this feature extremely useful for synchronizing the beats of our recordings. User mode activates a microphone into which the user can sing or make other sounds, adding personalized audio to the track they are building.
In addition to the two buttons for the user interface, there are eight buttons for playing specific notes. For each of the three tonal instruments, these eight buttons allow the user to play one octave, without flats or sharps. Each instrument plays in a different octave from the others, to give the greatest possible range. While in drum mode, each of the eight buttons plays a unique drum hit. We arranged the buttons in the style of a keyboard to give the Studio a natural feel for experienced musicians.
To format the sounds to work with the PIC32, we searched YouTube for people playing C major scales on guitar, piano and bass. Links to these are provided at the end of this article. For the drums, eight different drum sounds were selected, each about 0.5 seconds long. Using Audacity, we clipped the notes (not the drum sounds) to a relatively short length, about 0.05 seconds, and then put them on repeat for playback on the device. We then used MATLAB to sample the sound files at 8 kHz and scale them to integers in the range 0 to 255. The final step was to copy these values into a header file as an array of unsigned chars. To leave room for recordings, we stored these sound arrays in flash memory, separate from the main data memory. Having the sound array in a header file also made it easier to access and modify, which we found ourselves doing many times throughout the debugging process.
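The resulting header file looked roughly like the sketch below. The array name, length and sample values are placeholders rather than our actual data; the const qualifier is what places the table in program flash on the PIC32.

// sounds.h -- one-period instrument notes and drum hits, sampled at 8 kHz and
// scaled to 0-255 so each sample fits in an unsigned char.
// (Name, length and values are illustrative; the real arrays are much longer.)
#define PIANO_C_LEN 400                        // ~0.05 s at 8 kHz

static const unsigned char pianoC[PIANO_C_LEN] = {
    128, 143, 158, 172, 185, 197, 208, 217,    // ...remaining samples omitted...
    113, 128
};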
RECORD AND PLAYBACK
After basic sounds could be played in real time, the next step in the project was to incorporate the recording and playback features. The Studio allows the user not only to record and layer the instruments they are playing, but also to record a personalized sound through a microphone. The microphone picks up sound from the user and feeds it to the PIC32 through the ADC. The microphone's output was first passed through a level shifter and a non-inverting amplifier before being wired to the ADC (Figure 3). Since the microphone was intended for a person's voice, we chose RC time constants that let those frequencies pass. We set the amplifier gain to 100, so that the waveforms would span their full amplitude range of -1.5 V to 1.5 V. This caused some clipping when very loud sounds hit the microphone, but as long as the mic was used from a few inches away, this was not a problem. To record with the microphone, the user presses Record while in User mode; the microphone input is then added to the recorded sound.
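For anyone sketching a similar front end, the gain of a non-inverting amplifier is set by two resistors as G = 1 + Rf/R1, so a gain of about 100 can be had with, for example, Rf = 100 kΩ and R1 = 1 kΩ (G = 101). These resistor values are illustrative, not necessarily the ones on our board.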

The Record button has several different functions. First and foremost, it is used to begin recording. A user can record music by pressing Record while in one of the four instrument modes. If playback is on (which is useful to help time multiple tracks), the user will see “Get ready!” in the bottom right of the screen; on the next cycle of playback, the recording begins. If playback is not on, a countdown appears on screen, so the user can see when the recording will begin. When the recording begins, a progress bar appears on the screen to indicate the current point in the recording. Pressing Record while a recording is already in progress stops that recording. Another use of the Record button is to delete the current track: if Delete is selected in the menu, pressing the Record button sets the entire recorded wave to 0s, effectively deleting the recording. Alternatively, pressing the Record button while Playback is selected in the menu toggles playback on and off. The tasks of actually generating the sounds through a speaker were divided into two parts in our code: a thread for reading input from the user and an interrupt service routine (ISR) for creating the sounds.
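In the button-reading thread, all of that Record behavior boils down to a small dispatch on the current menu mode. A simplified sketch follows; the mode names, variables and helper functions are illustrative, not our exact code.

#include <string.h>

#define RECORD_LEN 16000                  // samples in the recorded track (~2 s at 8 kHz)

enum studio_mode { MODE_PIANO, MODE_GUITAR, MODE_BASS, MODE_DRUMS,
                   MODE_USER, MODE_PLAYBACK, MODE_DELETE };

extern enum studio_mode mode;             // current menu selection
extern unsigned char recordWave[RECORD_LEN];
extern volatile int playback_on, recording, arm_on_next_loop;
extern void start_countdown(void);        // on-screen countdown before recording

// Called from the button thread when a debounced Record press is seen.
void handle_record_press(void) {
    if (mode == MODE_PLAYBACK) {
        playback_on = !playback_on;              // toggle looped playback on/off
    } else if (mode == MODE_DELETE) {
        memset(recordWave, 0, RECORD_LEN);       // zero the wave: recording deleted
    } else if (recording) {
        recording = 0;                           // a recording was in progress: stop it
    } else if (playback_on) {
        arm_on_next_loop = 1;                    // "Get ready!" -- start at top of loop
    } else {
        start_countdown();                       // count down, then begin recording
    }
}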
SOFTWARE IMPLEMENTATION
The software for this project was divided into a few main parts. An ISR was responsible for writing our sound samples to the output. We also had two threads, built with the Protothreads library [3]. The first, which we called the button thread, was for reading input from the buttons. The second was the draw thread, which updated the content on the TFT display, moving the arrow to the currently selected mode and updating the progress bar if recording or playback was active. In the button thread, we read and debounced the buttons to determine what sounds needed to be generated, while the ISR took that information about which buttons were pressed and wrote the data from our sound array to the DAC through SPI [4].
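In outline, the program looked something like the skeleton below. The thread macros follow Bruce Land's Protothreads port for the PIC32 [3]; the header name, timer number, interrupt priority and function names are illustrative rather than our exact code.

#include <plib.h>                 // peripheral library (timers, SPI, ADC)
#include "pt_cornell_1_2.h"       // Protothreads wrapper from the course [3] (file name may vary)

static struct pt pt_buttons, pt_draw;

// 8 kHz timer ISR: mix the active sounds, handle record/playback,
// and send one sample to the DAC over SPI.
void __ISR(_TIMER_2_VECTOR, ipl2) Timer2Handler(void) {
    mT2ClearIntFlag();
    // ...mixing and DAC write, shown later...
}

// Button thread: debounce the buttons and update the shared state.
static PT_THREAD(protothread_buttons(struct pt *pt)) {
    PT_BEGIN(pt);
    while (1) {
        // read each button, run its debounce state machine, set pressed flags
        PT_YIELD_TIME_msec(30);   // the 30 ms gap used for debouncing
    }
    PT_END(pt);
}

// Draw thread: redraw the menu arrow and the recording/playback progress bar.
static PT_THREAD(protothread_draw(struct pt *pt)) {
    PT_BEGIN(pt);
    while (1) {
        // update the TFT: arrow position, progress bar, countdown/"Get ready!"
        PT_YIELD_TIME_msec(100);
    }
    PT_END(pt);
}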
The first challenge when writing the button thread was figuring out how to prevent mechanical bouncing from interfering with our readings. For this project, we designed a simple state machine that, after testing, we found worked reliably. In the thread, we read input from each button and stored the reading, then 30 ms later (enough time for the mechanical connections to settle), we read it again. If the new reading was the same as the original, we updated a variable to indicate the new state of the button, either pressed or unpressed. If the two readings were different, we considered the current state of the button unknown and left the variable unchanged. Figure 4 shows a state diagram describing these transitions.
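For one button, that state machine can be written in just a few lines. A minimal sketch (names are illustrative; in our code the 30 ms gap comes from the button thread's yield):

// Two-sample debounce: commit a new button state only when two raw reads
// taken 30 ms apart agree; otherwise the button is still bouncing, so the
// previously known state is kept.
typedef struct {
    int last_raw;   // raw reading from the previous pass (30 ms ago)
    int pressed;    // debounced state: 1 = pressed, 0 = released
} button_t;

void debounce_update(button_t *b, int raw_now) {
    if (raw_now == b->last_raw) {
        b->pressed = raw_now;    // readings agree: accept the new state
    }                            // readings differ: state unknown, leave it alone
    b->last_raw = raw_now;       // remember this reading for the next pass
}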

The second challenge in writing this thread was that some of the buttons needed to be treated differently than others. For example, when holding down a C key in piano mode, we wanted to play a sustained C. However, if we were in drum mode, we only wanted the drum sound to be played once, not on a loop. This also applied to the menu buttons: if they were held down, we only wanted to register a single press. To implement this, we changed the value of our pressed variable only when the reading went from low to high, not from high to high. This setup ensured that before a button could register again, it first had to be released by the user, because we acted only on low-to-high transitions. It also required us to add a flag to indicate when a drum sound finished playing, signaling that we could change the state back to unpressed; that flag, however, is handled in the other component of sound generation: the ISR. Figure 5 shows a finite state machine (FSM) describing the buttons that we only wanted to read as being pressed once.
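On top of the debounced state, the one-shot behavior is just a rising-edge detector. A minimal sketch (names are illustrative; the drum-finished flag set in the ISR is what eventually drives the state back to unpressed):

// One-shot buttons (menu and drums): report a press only on a low-to-high
// transition of the debounced state, so holding a button fires it only once.
int rising_edge(int *prev_state, int debounced_now) {
    int event = debounced_now && !(*prev_state);   // 1 only on the transition
    *prev_state = debounced_now;                   // button must go low to re-arm
    return event;
}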

We configured our ISR to trigger on a timer at a rate of 8 kHz, the same rate at which we sampled our sounds. Each time through the ISR, we checked which buttons were currently pressed, and then combined the values stored at the corresponding locations in the sound array to be sent to the DAC. On every trigger of the ISR, we wanted each still-sounding note to continue from wherever it left off on the previous pass. To do this, we kept a two-dimensional array of the current location in each of the sounds. Whenever we played a sound, we also incremented the corresponding entry in this locations matrix. Again, we had to handle the drum sounds slightly differently from the tonal sounds, because they should be played only once. This meant that after reaching the end of a drum sound, in addition to resetting its location back to the beginning of that portion of the array, we set a flag indicating that the sound had been played to completion.
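Stripped of the recording logic, the heart of that ISR looks roughly like the sketch below. The table layout (a flat array rather than our two-dimensional matrix), the names and the DAC control bits are illustrative; WriteSPI2() and mT2ClearIntFlag() stand in for however the SPI DAC and the timer flag are reached in a given setup, and clipping and the eight-tone limit described later are omitted here.

#include <plib.h>                                   // WriteSPI2(), mT2ClearIntFlag()

#define N_SOUNDS           32                       // 3 instruments x 8 notes + 8 drums
#define DAC_CONFIG_CHAN_A  0x3000                   // example MCP4822-style control bits

extern const unsigned char *soundTable[N_SOUNDS];   // pointers into the flash sound arrays
extern const int  soundLen[N_SOUNDS];               // length of each sound in samples
extern const int  isDrum[N_SOUNDS];                 // 1 if this entry is a drum hit
extern volatile int pressed[N_SOUNDS];              // set/cleared by the button thread
extern volatile int drumDone[N_SOUNDS];             // tells the thread a drum has finished
static int location[N_SOUNDS];                      // current index into each sound

void __ISR(_TIMER_2_VECTOR, ipl2) Timer2Handler(void) {
    int i, out = 0;
    mT2ClearIntFlag();                              // acknowledge the timer interrupt
    for (i = 0; i < N_SOUNDS; i++) {
        if (!pressed[i]) { location[i] = 0; continue; }
        out += soundTable[i][location[i]];          // mix this sound into the output
        if (++location[i] >= soundLen[i]) {
            location[i] = 0;                        // tonal notes loop seamlessly...
            if (isDrum[i]) drumDone[i] = 1;         // ...drums play once, then flag done
        }
    }
    WriteSPI2(DAC_CONFIG_CHAN_A | (out & 0x0FFF));  // 12-bit sample word to the DAC
}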
One other nuance that we had to deal with was that some of the sounds we took from YouTube had different volumes. We adjusted for this by shifting the bits of some sounds more than others to get a more uniform volume among instruments. After we were confident that our software was correct for playing notes in real time, we had to modify the code to implement recording and playback.
Recordings were stored in a recording array of predetermined size; because of space limitations, recordings were limited to approximately 2 seconds. In the ISR, we modified the array of recorded sounds and the output to the DAC, depending on the current recording and playback states. If we were recording, we added the sound resulting from the pressed buttons to our recording array, and if we were in User mode, we added the readings from the microphone to our recording array. If we were playing back, we simply added the recorded wave to the sound output going to the DAC. When recording, if we reached the end of the recorded-wave array, we stopped recording. When playing back, if we reached the end of the array, we looped back to its beginning and repeated the playback.
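In the ISR, those rules add only a few lines around the mixing loop. A sketch, continuing with illustrative names (note the ordering: the recorded track is added to the output only after new material has been layered onto it, which is what prevents the doubling bug described in the next section):

#include <plib.h>                                 // ReadADC10()

#define RECORD_LEN 16000                          // ~2 s of samples at 8 kHz

extern unsigned char recordWave[RECORD_LEN];      // the layered track
extern volatile int recording, playback_on, user_mode;
static int recIndex;                              // shared position in the track

// Called once per ISR pass with the freshly mixed live sample; returns the
// value to send to the DAC. (Overflow/clipping is ignored in this sketch.)
int record_playback_step(int out) {
    if (recording) {
        if (user_mode)
            out += ReadADC10(0) >> 2;             // microphone sample, scaled to ~8 bits
        recordWave[recIndex] += out;              // layer the new material onto the track
    }
    if (playback_on)
        out += recordWave[recIndex];              // add the track AFTER recording it, so
                                                  // the playback is never re-recorded
    if (recording || playback_on) {
        if (++recIndex >= RECORD_LEN) {
            recIndex = 0;                         // playback loops back to the start...
            recording = 0;                        // ...recording stops at the end
        }
    }
    return out;
}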
TESTING AND RESULTS
Throughout the testing and debugging of our project, several changes were made to the original code to arrive at the final product previously described. One bug was that the recorded sounds would get louder as additional recordings were layered—even when the additional recordings were empty. We discovered the problem was that the recording wave was accidentally doubled when we were simultaneously recording and playing back. The most extensive testing we performed was with the sound files, which were loaded on the board and assessed by playing them out loud. One major error was that we had incorrectly indexed our individual instrument sounds. This was discovered through some oscilloscope testing (Figure 6).

Once we had the sounds indexed correctly, we found that the sound quality was poor, due to discontinuities in the waveforms. To correct this, we looked at each of the 24 tonal waveforms individually and smoothed them out by hand, so that there was no discontinuity from the end of the wave back to its beginning. This meant that we also had to adjust other values near the endpoints. After this fine tuning, the sound quality was actually good for most notes, despite each being only a single period of the waveform. We also ran into trouble when too many sounds played at once: the summed output hit the upper limit of the value we could send to the DAC over SPI, so we limited the device to playing eight tones at any given time.
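Continuing the ISR sketch shown earlier, that cap is a simple counter guard in the mixing loop (illustrative, as before):

#define MAX_TONES 8                 // keep the summed sample within the DAC's range

int mixed = 0;                      // tones mixed so far on this ISR pass
for (i = 0; i < N_SOUNDS; i++) {
    if (!pressed[i] || mixed >= MAX_TONES)
        continue;                   // skip tones beyond the eighth
    out += soundTable[i][location[i]];
    mixed++;
    /* ...advance location[i] exactly as before... */
}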
As a whole, we were very pleased with the resulting project. It worked! An especially satisfying outcome was that 2 seconds is enough time to create interesting sounds. A YouTube video of us demonstrating the PIC32 Recording Studio is shown below (Figure 7). The video shows that 2 seconds is enough time to create a beat and bass line, while playing a live melody and harmony above it.
While dealing with the large number of sound files and the playback capability, we ran into a roadblock: the limited memory space on the MCU. We found that sampling at 44 kHz made our keyboard sounds nice enough that the different instruments and notes were easily recognizable. However, that sample rate limited our recording capability to approximately 0.25 seconds, and completely eliminated the possibility of a User mode in which the user could add sounds through the microphone. We decided this was unacceptable, because it did not give the user enough recording time to make any interesting sounds. Therefore, a trade-off was required. We lowered our sampling rate to 8 kHz, which allowed us to provide about 2 seconds of recording capability. At 120 beats per minute, that is just about one measure of music, which we found was just enough to let the user experience the overlay capabilities and build up a nice music track. This trade-off also had the benefit of allowing us to incorporate the microphone.
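The arithmetic behind that trade-off is simple (assuming one byte per sample, which is how our sounds are stored): 2 seconds at 8 kHz is 8,000 × 2 = 16,000 samples, or roughly 16 KB of the PIC32's limited RAM, while a buffer of the same size at 44 kHz would hold only a fraction of a second of audio. And at 120 beats per minute each beat lasts 0.5 seconds, so a four-beat measure is 4 × 0.5 = 2 seconds, exactly the length of our recording buffer.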
The only feature we did not have time to implement was an advanced User mode, with which the user could store newly created sounds and play them back at the press of a button, so that such sounds could be reused across different music tracks. Given that we had already maxed out our memory with recording storage, we could not add the advanced User mode without cutting into the recording length.
THOUGHTS FOR THE FUTURE
In the future, if we were to repeat this project or improve upon it, we would love to incorporate this advanced user mode to expand the capabilities of the Recording Studio. We could do this by expanding the memory of the MCU—perhaps by using an external SRAM along with the existing flash memory. In addition to potentially longer recording times, this could allow us to include more sounds such as flats and sharps.
Because we found it difficult to keep tempo between different overlays when using our project, we would also find it useful to include a metronome option, so the user can layer tracks more confidently. Additionally, in another iteration of this project, we would hope to improve our user interface even further, perhaps by using a touch-screen display to avoid the hassle of the toggle buttons, and possibly by making our menu and interface more appealing.
As people who enjoy and appreciate music, we thoroughly enjoyed making music (albeit in short lengths) with our Recording Studio once it was complete. We are very proud of our results and satisfied with what we were able to achieve. Any further improvements we make would only refine our current setup and expand its potential.
References:
[1] ECE 4760 Course Website: http://people.ece.cornell.edu/land/courses/ece4760/
[2] PIC32 Pinout: http://people.ece.cornell.edu/land/courses/ece4760/StudentWork/McNicoll/PIC32_Pinout_brl.pdf
[3] Bruce Land, Protothreads: http://people.ece.cornell.edu/land/courses/ece4760/PIC32/index_Protothreads.html
[4] SPI: http://people.ece.cornell.edu/land/courses/ece4760/PIC32/index_SPI.html
Sources/Parts:
- Microphone: Digikey 102-1720-ND
- MicroStickII
- PIC32MX250F128B
- TFT LCD
- Piano C Major Scale: https://www.youtube.com/watch?v=FCLzuQS5arI
- Guitar C Major Scale: https://www.youtube.com/watch?v=8ykeN4JC_O8
- Bass C Major Scale: https://www.youtube.com/watch?v=uYf7RN_PHkk
- Various Drum Sounds: https://www.youtube.com/watch?v=5UH7ydJddYI
PIC32 Family Reference Manual: http://www.microchip.com/pagehandler/en-us/family/32bit/
PIC32 Peripheral Libraries for MPLAB C32 Compiler: http://ww1.microchip.com/downloads/en/DeviceDoc/32bitPeripheralLibraryGuide.pdf
PS1024ALRED Push Button Datasheet: https://media.digikey.com/pdf/Data%20Sheets/E-Switch%20PDFs/PS1024ALRED.pdf
CMA-6542PF Microphone Datasheet: https://media.digikey.com/pdf/Data%20Sheets/CUI%20Inc%20All%20Brands%20PDFs/CMA-6542PF.pdf
MCP6242 Op Amp Datasheet: http://ww1.microchip.com/downloads/en/DeviceDoc/21882d.pdf
RESOURCES
Digi-Key | www.digikey.com
Mathworks | www.mathworks.com
Microchip Technology | www.microchip.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • APRIL 2019 #345
Radhika Chinni (rpc222@cornell.edu) will be pursuing a Masters in Electrical and Computer Engineering beginning in Fall 2019. She graduated from Cornell University in December 2018 with a B.S. in Electrical and Computer Engineering and a minor in Computer Science. She ultimately wants to pursue a career working with embedded systems in robotics.
Brandon Quinlan (bmq4@cornell.edu) is a senior at Cornell University, double majoring in Computer Science and Electrical and Computer Engineering. He hopes to pursue a career in computer architecture after completing a Master of Engineering in 2019.
Raymond Xu (ryx2@cornell.edu) graduated with a Master of Engineering from Cornell University in Electrical and Computer Engineering in May 2018. He previously graduated with a Bachelor of Science (also in ECE at Cornell) in December 2017.