Visualizing Sounds and Hearing Images
Oscilloscopes are most often used to measure and display voltage waveforms, and for testing, verifying, and debugging circuit designs. Learn how these three Cornell University students used an oscilloscope to generate oscilloscope music animations, using a PIC32 microcontroller and a DAC.
This project, “oscilloscope music,” allows users to generate images on an oscilloscope with sound. Using a graphical user interface (GUI), the user can independently control two audio signals’ frequencies, amplitudes, and relative phases. A PIC32 microcontroller generates these signals via a two-channel digital-to-analog converter (DAC). Each DAC output connects to a separate speaker, and to a separate input of a two-channel oscilloscope. Because these DAC outputs connect to both speakers and the oscilloscope, manipulations of the audio signals in the GUI change both the visual on the oscilloscope and the sound from the speaker.
To display an image on the oscilloscope, we put the oscilloscope in XY mode. The signal going into one channel of the oscilloscope controls the cursor’s horizontal (x-direction) movement, and the signal going into the other channel controls the cursor’s vertical (y-direction) movement. Both input audio signals are oscillating with respect to time, and thus the x and y positions of the cursor on the oscilloscope screen are also moving in a periodic fashion. This is comparable to parametric equations in mathematics, where we can have different time-based functions for x and y [for example, x = f(t) and y = g(t)].
When the user changes the features of one of the signals (relative phase, frequency, amplitude, and so on), the function for the respective dimension (x or y) changes, and so the movement in that direction is different. This results in a different picture displayed since the position of the cursor will now be at a different location at a given time. With this customizability, we are able to generate a variety of shapes. Since the same signals are being sent to both the speaker and the oscilloscope, the user can “visualize” the output audio and “hear” the displayed image.
Our project features five modes: streaming, signal tuning, shape selector, demo, and music. They can be selected using a Python GUI on a computer that communicates serially with the microcontroller.
In streaming mode (Figure 1), the output audio from a computer or any signal that can be output through a headphone jack is displayed directly on the oscilloscope and played on the speaker. The two channels from the computer audio are sampled through the analog-to-digital converter (ADC), and then sent directly to the DAC. Although it is not strictly necessary to use the microcontroller between the computer and the oscilloscope/speaker in this mode, it keeps the design consistent. This way, it can be controlled entirely through the GUI, as in all other modes, and prevents the user from having to rewire the hardware when switching modes.
In signal-tuning mode (Figure 2), the user can customize several degrees of freedom of the two output signals. This is accomplished through the GUI (see the Software Design section) via sliders for each of the modifiable attributes, including the shape of the wave (sine, triangle, square, and sawtooth), frequency, amplitude, and relative phase. In addition, a selector determines whether the waves are spiraled or stable (see the Background and Math section). This allows the user to create custom shapes on the oscilloscope and custom sounds from the speakers.
In the shape-selector mode (Figure 3), the GUI can be used to select from a variety of shapes, including a circle, diamond, Lissajous curve, horizontal and vertical lines, dots, and a cube. In this mode, the microcontroller updates the GUI, taking predefined values of the signal’s degrees of freedom for each shape and displaying this information through the sliders and selectors used in signal-tuning mode. Through this, the user can learn the settings that generate each shape.
To showcase the features of this project, we added the demo and the music modes. In the demo mode (Figure 4), the microcontroller cycles through predetermined, visually pleasing shapes made from different pairs of signals. Similarly, the music mode plays Cornell’s alma mater on loop with visuals on the oscilloscope. Both modes also show the required settings for generating the images and sounds played on the GUI.
One of the main mathematical concepts behind our implementation is direct digital synthesis (DDS). DDS is an algorithm that produces waveforms of a specified frequency. A hard-coded array contains the values corresponding to a single period of the desired waveform.
At 44kHz, the PIC32 sends a new value to the DAC by indexing into this array. The DDS algorithm calculates the particular index to access for each transaction, such that the output waveform is of the specified frequency. This means we have a DDS array for each type of wave that we included in our project (sine, triangle, square, and sawtooth). To create a relative phase between our signals, we can index into different locations in our array (different points in the period of the waveform) simultaneously.
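To sketch the idea (in Python rather than the project’s C, and with a hypothetical 256-entry wave table; the article does not state the actual table length):

```python
import math

SAMPLE_RATE = 44000   # DAC update rate from the article (44kHz)
TABLE_SIZE = 256      # hypothetical wave-table length

# One period of a sine wave, precomputed once.
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def dds_samples(freq_hz, n):
    """Generate n samples of a freq_hz sine wave by table lookup.

    Each tick the phase accumulator advances by
    freq_hz * TABLE_SIZE / SAMPLE_RATE entries, so one full sweep
    of the table takes exactly 1/freq_hz seconds.
    """
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE
    phase = 0.0
    out = []
    for _ in range(n):
        out.append(sine_table[int(phase) % TABLE_SIZE])
        phase += increment
    return out

# One second of a 440Hz tone at the 44kHz update rate.
samples = dds_samples(440, SAMPLE_RATE)
```

Swapping in a different precomputed table (triangle, square, sawtooth) changes the waveform without changing this indexing logic, which is why one array per wave type suffices.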
Shape-selector mode allows us to generate a few simple 2D shapes. For instance, to generate a circle, we synthesize equal-amplitude sine waves on both channels (and thus both axes on the oscilloscope screen) but make them 90 degrees out of phase. This directly follows the parametric equation of a circle, where cosine is just a sine wave that is 90 degrees out of phase.
x = r sin(θ) and y = r cos(θ)
Thus, sending one sine wave to one channel and another sine wave phase-shifted 90 degrees to the other channel results in a circle (Figure 5).
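As a quick numerical check of the math, sampling two sine waves 90 degrees apart and treating the sample pairs as (x, y) coordinates produces points that all lie on a circle. A minimal Python sketch:

```python
import math

# Channel A drives x, channel B drives y; B leads A by 90 degrees.
n = 1000
points = []
for i in range(n):
    t = i / n                                    # one full period
    x = math.sin(2 * math.pi * t)                # channel A
    y = math.sin(2 * math.pi * t + math.pi / 2)  # channel B, +90 degrees
    points.append((x, y))
```

Every sampled point satisfies x² + y² = 1, since sin shifted by 90 degrees is cos, matching the parametric equations above.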
Similar parametric functions can be used to show that a diamond shape is generated by two triangle waves of the same amplitude and frequency, 90 degrees out of phase.
Another figure we allow the user to select and customize in the shape-selector mode is a Lissajous curve. A Lissajous curve is a set of parametric equations, both of which are sinusoids, that have different frequencies and potentially different phases:
x = A sin(at + ϕ) and y = B sin(bt + ψ)
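These equations can be sampled directly to experiment with different a:b ratios (a Python sketch, with the amplitudes A and B normalized to 1 for simplicity):

```python
import math

def lissajous(a, b, phi=0.0, n=1000):
    """Sample one common period of x = sin(a*t + phi), y = sin(b*t)."""
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        points.append((math.sin(a * t + phi), math.sin(b * t)))
    return points

# The 2:1 curve (two horizontal lobes, one vertical).
curve = lissajous(2, 1, phi=math.pi / 2)
```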
The ratio a:b strongly affects the appearance of the curve. The simplest Lissajous figure is an ellipse, where a/b = 1. The number of “lobes,” or loops, is directly determined by the ratio: when a/b is reduced to lowest terms, each number gives the count of lobes along its axis. So, using the example of the 2:1 Lissajous curve (Figure 6), we have two lobes in the horizontal direction and one in the vertical direction (like horizontal butterfly wings).

Our GUI also allows the user to view a cube on the oscilloscope screen. This is where DDS really displays its usefulness and flexibility, since it allows us to create unique waves of our own. As mentioned earlier, we produce signals by indexing into an array that contains one period of the desired waveform. This means that to create a cube “waveform,” we just need to design our own period for this wave and store it in an array. The x- and y-directions of the shape (which we treat as two different waveforms, one to be output on each channel) are shown in Figure 7.
To accomplish this, we broke the shape of the cube into a set of various sub-shapes. As shown, we can combine different types of waves that we already use such as two triangle waves to create the diamonds/faces of the cube, and a square and sine wave to create the vertical lines/edges of the cube. Even though each edge of the cube is being drawn separately, it is done at such a high frequency that the result is a single, still image.
Users are also able to generate spirals. We do this using time-based amplitude modulation. By setting the amplitude of the signal to a value that increments steadily over time, its magnitude starts small, increases up to a maximum amplitude boundary, and then resets, repeating the process. This creates the desired spiraled appearance.
The time increment can be changed to produce different amounts of spiraling. When the time increment and the increment at which we index into the waveform arrays have a smaller least common multiple, the spiral is cleaner, as shown in Figure 8.
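A sketch of this time-based amplitude modulation, assuming a 10-bit (0 to 1,023) DAC range and an illustrative sample count:

```python
import math

DAC_MAX = 1023   # 10-bit DAC full scale

def spiral_samples(n, ramps):
    """Draw a circle over n samples while the amplitude ramps
    linearly from 0 to DAC_MAX `ramps` times, producing a spiral."""
    points = []
    for i in range(n):
        amp = (i * ramps % n) / n * DAC_MAX   # sawtooth amplitude envelope
        t = 2 * math.pi * i / n
        points.append((amp * math.sin(t), amp * math.cos(t)))
    return points

# One amplitude ramp per revolution: a single outward spiral.
one_turn = spiral_samples(1000, 1)
```

Raising `ramps` makes the envelope reset more often per sweep of the shape, which corresponds to the doubled-spiraling-frequency case.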
If the spiraling and the actual waves change at the same rate, then the amplitude ramps from 0 to 1,023 once on every iteration, resulting in one spiral. If instead the spiraling frequency is double that of the two signals, then the amplitude goes linearly from 0 to 1,023 twice in the time the shape is drawn once; in other words, more spiraling.
Another effect that we can generate is animation. If we modify the two frequencies so that they are slightly different, we see an animation that appears to loop infinitely. With frequencies that differ more substantially, instead of seeing a Lissajous curve, we see animations whose period equals the least common multiple of the two waves’ periods. Especially when the two frequencies are relatively prime, the shape changes with time, because differing frequencies mean that different parts of the waves line up at different times. It is almost as if the phase is changing with time. Animation is shown in our demonstration video.
Besides the visual effects, we also can output the sound of the signal displayed on the screen. The frequency of the wave determines the pitch, and having different frequency waves in the two channels produces a simple two-note chord. Additionally, different signals produce different sound qualities. For example, a sine wave is a beeping sound, whereas a sawtooth wave is more buzzer-like.
Our project hardware consists of the PIC32 microcontroller, a breakout PCB containing a DAC and an ADC, an oscilloscope, speakers, and an audio jack. A major addition was a biased high-pass filter. Ideally, we would have no filtering, to allow signals of all frequencies to be displayed on the oscilloscope. However, early on we had issues when streaming music from our computer through the microcontroller, because the signal obtained from the computer’s audio jack ranged from -2V to 2V, whereas our PIC32 operates with voltages from 0V to 3.3V. Using a biased high-pass filter allowed us to shift the computer’s output up. As shown in the schematic (Figure 9), the filter is a standard high-pass with a pull-up resistor added to the output node. This creates a voltage divider, essentially boosting the signal.
With the pull-up resistor tied to 3.3V and same-valued resistors in the voltage divider, the computer audio signal is boosted by 1.65V, giving us a range from -0.35V to 3.65V. This is not exactly the PIC32 voltage range, but by limiting the computer volume to 80%, we can ensure that the output voltage does not rise above 3.3V. This created a new side effect: high-pass filtering of our signal. To mitigate it, we selected a 10kΩ resistor and a 1µF capacitor to obtain a very low cutoff frequency of 16rad/s, which is approximately 2.5Hz. This means we are filtering out only signals near DC, which are not even audible, and leaving the rest.
The output of the biased high-pass filter is then sampled by the ADC when streaming music from the PC and then passed directly to the DAC. When we are not streaming music from the PC, we no longer need to sample from the ADC, because we are synthesizing our own waveforms using the PIC32. The two output channels for the waveforms are still DAC-A and DAC-B. Each of these outputs branches into two wires, giving a total of four output wires—two to the speaker and two to the oscilloscope. The full hardware schematic is shown in Figure 10.
Our software design is composed of two parts: the PIC32, which performs DDS to generate two output signals, and a GUI on a PC that allows us to change the settings of those two generated signals and to run preset demos. We send and receive messages for changing settings and interface features between the two parts through a UART channel.
We created a Python interface (Figure 11) that is able to change various parameters of the signals generated, along with the mode. With our interface, we have buttons for selecting one of the five modes (PC streaming, signal tuning, shape selector, demo, and music). When any of the input sliders, buttons, or toggles are moved or pushed/released in the Python user interface, the interface sends a string with information specifying the type of button and value (if applicable) through UART. The PIC32 microcontroller parses the string it receives, detects either a slider, radio button, combo button, or toggle, and acts accordingly.
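The exact message layout is not given here, so as a purely hypothetical illustration, assume a colon-separated “widget:name:value” string; the parsing step might then look like:

```python
def parse_message(msg):
    """Split a hypothetical GUI message such as 'slider:frequency_a:440'
    into (widget type, control name, numeric value or None)."""
    parts = msg.strip().split(":")
    widget, name = parts[0], parts[1]
    value = float(parts[2]) if len(parts) > 2 else None
    return widget, name, value

# A slider update carries a value; a toggle may carry only its name.
slider_msg = parse_message("slider:frequency_a:440")  # ("slider", "frequency_a", 440.0)
```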
When a slider is detected, the PIC32 updates the corresponding variable in software, but we restrict which sliders can actually be changed depending on the current mode. For example, in PC-streaming mode, none of the sliders update their corresponding variables, except for the sampling frequency. The sampling-frequency slider is a little different, since it affects the rate of the interrupt that we use both to sample from the ADC and to send to the DAC. In general, we use a 16-bit timer, incrementing at 40MHz and overflowing after some designated number of cycles (the period), causing an interrupt. In the corresponding interrupt service routine (ISR), we read from the ADC or update our DAC value, depending on the mode. To change the sampling frequency (Fs), we calculate the new overflow period (40MHz/Fs) and update the DDS-related values that depend on the sampling frequency. We then restart the timer and have a new sampling frequency.
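The overflow-period arithmetic is easy to sketch (Python for illustration; the real calculation runs on the PIC32):

```python
TIMER_CLOCK = 40_000_000   # the timer increments at 40MHz

def overflow_period(sample_rate_hz):
    """Number of timer ticks between interrupts for a given
    sampling frequency: 40MHz / Fs, rounded to the nearest tick."""
    return round(TIMER_CLOCK / sample_rate_hz)

period = overflow_period(44_000)   # 909 ticks for a 44kHz sample rate
```

Because the timer is 16 bits, the period must stay below 65,536 ticks, which bounds the lowest achievable sampling frequency at roughly 611Hz in this configuration.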
For the shape-selector mode, the user is able to select between a series of predetermined shapes, such as a circle, diamond, Lissajous curve, and cube. A message indicating which shape button is pressed is sent to the PIC32, and after parsing the message, we change the frequencies and relative phases accordingly. We also set the corresponding wave types for DAC output signals A and B. For example, to create a diamond, we need two triangle waves, and for circles and Lissajous curves, we need two sine waves. Based on the wave type, we use a different wave-table array (pre-populated once at the beginning). The frequencies of both signals are reset to 440Hz (our arbitrary default value), and the relative phase is set to 90 degrees for all shapes except the cube. Just as with the sliders, the shape-selector buttons are restricted according to the mode.
We included a toggle button to enable or disable our spiraling effect, and its state is sent to the PIC32. If spiraling is enabled, we use amplitudes that vary with time, as described in the Background section. Otherwise, we use the constant amplitude set in the GUI. The spiral order changes the spiral increment, as previously described. Besides allowing the user to configure the output signals, in the music, demo, and shape modes the sliders change to reflect the current values of what is displayed. This provides a good example of the effects of the different attributes.
We again use UART communication, but this time from the PIC32 to the PC. The string sent is parsed on the computer, producing information on which item on the GUI to change and to what value (Figure 12).
The most important aspect of our PIC32 software, however, is the ISR that is attached to a 16-bit timer incrementing at 40MHz, which handles all the DDS calculations, ADC sampling, and DAC output. In PC streaming mode, we simply sample from each ADC input and set the DAC output to the exact values that are sampled. For our DDS calculations for the first signal, signal A, we first increment the DDS phase by a value (determined by our sampling rate and desired frequency). The DAC output for signal A is the sampled value from the applicable wavetable array scaled by an amplitude.
We repeat this process for signal B, except that for this signal we also handle the phase offset. We add the offset to signal B’s DDS phase to sample the wave at a different location than signal A, introducing a phase shift. The offset lets us maintain the same signal frequency while changing how the signals align. Finally, we send the output values to the DAC through SPI, creating our visualizations on the oscilloscope and playing them on the speaker.
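Putting the two channels together, here is an illustrative Python sketch of the per-interrupt update (the table size, class structure, and names are assumptions, not the project’s actual C code):

```python
import math

SAMPLE_RATE = 44000
TABLE_SIZE = 256   # hypothetical wave-table length
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

class TwoChannelDDS:
    """Signals A and B at independent frequencies; B is sampled at a
    shifted table index to introduce the relative phase."""
    def __init__(self, freq_a, freq_b, phase_offset_deg, amp=1.0):
        self.inc_a = freq_a * TABLE_SIZE / SAMPLE_RATE
        self.inc_b = freq_b * TABLE_SIZE / SAMPLE_RATE
        self.offset = phase_offset_deg / 360.0 * TABLE_SIZE
        self.amp = amp
        self.phase_a = 0.0
        self.phase_b = 0.0

    def tick(self):
        # One ISR invocation: sample both tables, then advance both phases.
        a = self.amp * sine_table[int(self.phase_a) % TABLE_SIZE]
        b = self.amp * sine_table[int(self.phase_b + self.offset) % TABLE_SIZE]
        self.phase_a += self.inc_a
        self.phase_b += self.inc_b
        return a, b

# Equal frequencies, 90 degrees apart: the circle from shape-selector mode.
dds = TwoChannelDDS(440, 440, 90)
```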
To show what interesting images we can create, we include a hard-coded demo. This also includes updating the GUI to reflect the settings of the image being displayed. In our demo, we included a circle from the shape selector, a moving circle due to a slight offset in frequencies, a spinning spiral, a cube, a jumping cube, a moving triangle wave, a few moving Lissajous figures of different orders, a few spiraling Lissajous figures, and a stable Lissajous figure of 5:4 order. Not only did we want to display some interesting visuals, but we also wanted to create music for users to enjoy and visualize. Since we all attend Cornell, we decided to recreate our school’s alma mater, “Far Above Cayuga’s Waters.”
Our first step was to find the frequencies of all the notes in the song. Each note is a sinusoidal signal with a specific frequency, and doubling that frequency raises the note by an octave. After this, we focused on setting the duration of each note, which is calculated in milliseconds and is based on the song’s beats per minute. Our song is 100bpm, with each quarter note as a beat, so each quarter note’s duration is 600ms. We also improved our audio quality by ramping the volume of each note up and down. When done quickly, this helps prevent popping sounds from the speaker, caused by sharp voltage changes, without affecting the listener’s experience.
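The tempo and octave arithmetic can be checked in a couple of lines (Python, for illustration):

```python
def quarter_note_ms(bpm):
    """Duration of one beat (a quarter note) in milliseconds."""
    return 60_000 / bpm

def octave_up(freq_hz):
    """Doubling a note's frequency raises it by one octave."""
    return 2 * freq_hz

duration = quarter_note_ms(100)   # 600.0ms at the song's 100bpm
a5 = octave_up(440)               # A4 = 440Hz doubled gives A5 = 880Hz
```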
RESULTS & CONCLUSIONS
There are many potential avenues of improvement for this project. For example, our code that runs on the microcontroller isn’t as optimized as it could be; the places where we perform floating-point arithmetic could be converted to fixed-point, improving overall speed and potentially reducing lag. In addition, all the values of the degrees of freedom for the shape selector could be stored in arrays and looped through, as opposed to hard-coding the values everywhere the output is updated. This would greatly improve the overall readability of our code. One possible feature that could be added is displaying the visuals on a display other than an oscilloscope. This project was extremely enjoyable, and there is so much room to expand on it!
In the end, this project was an overall success. We supported different modes with different features, and created modes that not only showcased interesting visualizations but also played a song along with its visual representation. A video demonstration of our oscilloscope music project is available on YouTube.
We also took time to ensure the quality of our work, such as introducing the amplitude envelope for the music mode to avoid popping sounds when changing notes, and facilitating communication back to the GUI to teach the user the effects of the degrees of freedom on the sounds and visuals. In addition, we ensured that our shapes looked as advertised and kept each of the modes isolated, so that they didn’t affect one another. We also created a project that anyone could get lost in playing with and enjoying.
Microchip | www.microchip.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • AUGUST 2022 #385
Ruby Min (firstname.lastname@example.org) is a senior at Cornell University studying Electrical and Computer Engineering and Computer Science. Technically, she is interested in embedded systems as well as computer graphics, but also enjoys dancing, drawing, and practicing martial arts in her free time.
Samantha Cobado (email@example.com) is a senior Electrical and Computer Engineering major at Cornell University. She enjoys the intersection of computer architecture and machine learning. Outside of class, she enjoys running, hiking, and baking delicious treats to share with friends.
Eric Kahn (firstname.lastname@example.org) is an Electrical and Computer Engineering Masters student at Cornell University, completing his degree in December 2022. His interests include digital/analog VLSI, power electronics, embedded systems, and signal processing. Outside of class he enjoys skiing, reading, rock climbing, and hiking/camping.