Processor for Voice-Controlled Devices

To address the convergence of immersive sensory experiences driven by voice, video, and audio demands, NXP Semiconductors has launched the i.MX 8M family of applications processors, which combine robust media capabilities on one chip. Voice commands are expected to account for 50% of all searches within the next two years, increasingly thin TVs are driving the popularity of sound bars, and consumers are embracing the IoT to create more convenient, richer sensory-driven experiences.

The NXP i.MX 8M processors address designers’ requirements for one platform that combines A/V and machine learning to create connected products that can be controlled by voice command. The chips provide the process technology and edge computing capabilities needed to reduce the command and question response times of smart connected devices. The i.MX 8M family is suited to a wide range of residential IoT and device-control applications, from smart TVs, television subscription services, sound bars and other smart speakers, to streaming media players and DVRs/PVRs. The processor family is also ideal for managing lighting, thermostats, door locks, home security, smart sprinklers, and other systems and devices for a more intuitive and responsive home environment.

The i.MX 8M family’s features include:

  • Video and audio capabilities with full 4K Ultra HD resolution, High Dynamic Range (HDR) and the highest levels of pro-audio fidelity
  • Performance and versatility with up to four 1.5 GHz ARM Cortex-A53 cores, flexible memory options, and high-speed interfaces for flexible connectivity
  • Advanced Human Machine Interface (HMI) featuring dual displays, a video processing unit (VPU), and an enriched user experience
  • Scalability and pin-and-power compatibility

NXP Semiconductors | www.nxp.com/iMX8M

Talking Hands: American Sign Language Gesture Recognition Glove

Roberto developed a glove that enables communication between the user and those around him. While the design is intended for use by people communicating in American Sign Language, you can apply what you learn in this article to a variety of communications applications.
Photo 1: Here you see the finished product with all of the sensors sewn in. The use of string as opposed to adhesive for the sensors allowed the components to smoothly slide back and forth as the hand was articulated.

By Roberto Villalba

While studying at Cornell University in 2014, my lab partner Monica Lin and I designed and built a glove to be worn on the right hand that uses a machine learning (ML) algorithm to translate sign language into spoken English (see Photo 1). Our goal was to create a way for the speech impaired to be able to communicate with the general public more easily. Since every person’s hand is a unique size and shape, we aimed to create a device that could provide reliable translations regardless of those differences. Our device relies on a variety of sensors, such as flex sensors, a gyroscope, an accelerometer, and touch sensors to quantify the state of the user’s hand. These sensors allow us to capture the flex on each of the fingers, the hand’s orientation, rotation, and points of contact. By collecting a moderate amount of this data for each sign and feeding it into a ML algorithm, we are able to learn the association between sensor readings and their corresponding signs. We make use of a microcontroller to read, filter and send the data from the glove to a PC. Initially, some data is gathered from the users and the information is used to train a classifier that learns to differentiate between signs. Once the training is done, the user is able to put on the glove and make gestures which the computer then turns into audible output.

Figure 1: After performing some calculations and characterizing our flex sensors, we decided to use a 10-kΩ resistor. Note that the rightmost point goes into one of the microcontroller’s ADC inputs.

HIGH-LEVEL DESIGN
We use the microcontroller’s analog-to-digital converter (ADC) to read the voltage drop across each of the flex sensors. We then read the linear acceleration and rotation values from the accelerometer and gyroscope over I²C. Finally, we take binary readings from each of the touch sensors indicating whether or not there is contact. We perform as many readings as possible within a given window of time and use all of this data for smoothing. The information is then sent over serial to the PC, where it is gathered and processed. A Python program listens for data coming from the microcontroller and either stores it or makes predictions based on previously learned information. Our code includes scripts for gathering data, loading stored data, classifying the data that is being streamed live, and some additional scripts to help with visualization of sensor readings and so on.

MCU & SENSORS
The design comprises an Atmel ATmega1284P microcontroller and a glove onto which the various sensors and necessary wires were sewn. Each finger has one Spectra Symbol flex sensor stitched on the backside of the glove. The accelerometer and gyro sensors are attached to the center of the back of the glove. The two contact sensors were made out of copper tape and wire that was affixed to four key locations.

Since each flex sensor has a resistance that varies depending on how much the finger is bent, we attached each flex sensor as part of a voltage divider circuit in order to obtain a corresponding voltage that can then be input into the microcontroller.

$$V_{out} = V_{in}\cdot\frac{R_1}{R_1 + R_{flex}}$$

We determined a good value for R1 by analyzing expected values from the flex sensor. Each one has a flat resistance of 10 kΩ and a maximum expected resistance (obtained by measuring its resistance on a clenched fist) of about 27 kΩ. In order to obtain the maximum range of possible output voltages from the divider circuit given an input voltage of 5 V, we plotted the expected ranges using the above equation and values of R1 from 10 kΩ to 22 kΩ. We found that the differences between the ranges were negligible and opted to use 10 kΩ for R1 (see Figure 1).
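For readers who want to reproduce this comparison, the short Python sketch below computes the expected output swing for several candidate values of R1. It assumes the divider is wired so that the output is Vin · R1/(R1 + Rflex), as in the equation above; swap the terms if the fixed resistor and flex sensor are interchanged in the actual circuit.

```python
# Sketch of the R1 trade study described above. Assumes
# Vout = Vin * R1 / (R1 + Rflex).
VIN = 5.0        # divider input voltage (V)
R_FLAT = 10e3    # flex sensor resistance, finger straight (ohms)
R_BENT = 27e3    # flex sensor resistance, clenched fist (ohms)

for r1 in (10e3, 12e3, 15e3, 18e3, 22e3):
    v_flat = VIN * r1 / (r1 + R_FLAT)
    v_bent = VIN * r1 / (r1 + R_BENT)
    print(f"R1 = {r1 / 1e3:4.1f} kΩ: swing = {v_flat - v_bent:.2f} V "
          f"({v_bent:.2f} V to {v_flat:.2f} V)")
```

Running it shows a swing of roughly 1.15 V to 1.21 V for every candidate, which is why the exact choice of R1 barely matters here.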

Our resulting voltage divider has an output range of about 1 V. We were initially concerned that the resulting values from the microcontroller’s ADC would be too close together for the learning algorithm to discern between them sufficiently. We planned to address this by increasing the input voltage to the voltage divider if necessary, but we found that the range of voltages described earlier was sufficient and performed extremely well.

The InvenSense MPU-6050 accelerometer and gyroscope package operates on a lower VCC (3.3 V) than the microcontroller’s 5 V. So as not to burn out the chip, we created a voltage regulator using an NPN transistor and a trimpot, connected as shown. The trimpot was adjusted so that the output of the regulator reads 3.3 V. This voltage also serves as the source for the pull-up resistors on the SDA and SCL wires to the microcontroller. Since the I²C devices are capable only of driving the input voltages low, we connect them to VCC via two 4.7-kΩ pull-up resistors (see Figure 2).

As described later, we found that we needed to add contact sensors to several key spots on the glove (see Figure 3). These essentially function as switches that pull the microcontroller input pins to ground to signal contact (be sure to set up the microcontroller pins to use the internal pull-up resistors).

Figure 2: Here we see the schematic of the voltage regulator circuit that we created in order to obtain 3.3 V. The bottom of the schematic shows how this same regulator was used to pull up the signals at SCL and SDA.

Figure 3: The contact sensor circuitry was quite simple. The input pins of the microcontroller are set to the internal pull-up resistors and whenever the two corresponding copper ends on the fingers touch the input is pulled low.

I²C COMMUNICATIONS
Interfacing with the MPU-6050 required I²C communication, for which we chose to use Peter Fleury’s public I²C library for AVR microcontrollers. I²C is designed to support multiple devices using a single dedicated data (SDA) bus and a single clock (SCL) bus. Even though we were only using the interface for the microcontroller to regularly poll the MPU-6050, we had to adhere to the I²C protocol. Fleury’s library provided us with macros for issuing start and stop conditions from the microcontroller (signals indicating that the microcontroller is requesting data from the MPU-6050 or releasing control of the bus). These macros allowed us to easily initialize the I²C interface, set up the MPU-6050, and request and receive the accelerometer and gyroscope data (described later).

Figure 4: The image is the visual output received from plotting sequences of sensor readings. The clear divisions across the horizontal signal the different signs A, B, C, and D, respectively.

While testing our I²C communication with the MPU-6050, we found that the microcontroller would on rare occasions hang while waiting for data from the I²C bus. To prevent this from stalling our program, we enabled a watchdog timer that would reset the system every 0.5 s unless our program continued to reach regular checkpoints, at which point we would reset the watchdog timer to prevent it from unnecessarily resetting the system. We were able to leverage the fact that our microcontroller’s work consists primarily of continuously collecting sensor data and sending packets to a separate PC.

Photo 2: In this image we see the hand gestures for R, U, and V. As you can tell, there is not much difference in the hand’s orientation or the amount of flex on the fingers. However, note that the copper pieces make different kinds of contact for each of the signs.

TINYREALTIME
For the majority of the code, we used Dan Henriksson and Anton Cervin’s TinyRealTime kernel. The primary reason for using this kernel is that we wanted to take advantage of the already implemented non-blocking UART library in order to communicate with the PC. While we only had a single thread running, we tried to squeeze in as much computation as possible while the data was being transmitted.

The program first initializes the I²C interface, the MPU-6050, and the ADC. It then enters an infinite loop in which it resets the watchdog timer and takes 16 readings from each of the sensors: accelerometer, gyroscope, flex sensors, and touch sensors. We then compute filtered values by summing the 16 readings from each sensor. Since summing the IMU sensors’ values can produce overflow, we shift each of their readings right by 8 bits before summing. The data is then wrapped up into a byte-array packet organized as a header (0xA1B2C3D4), the data, and a checksum of the data. Each sensor value is stored in 2 bytes, and the checksum is calculated by summing the unsigned representation of each byte in the data portion of the packet into a 2-byte integer. Once the packet has been created, it is sent over the USB cable to the computer and the process repeats.
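Because the packet format is fully specified by the paragraph above, it is easy to model on the host side. The Python sketch below builds a packet the same way (header, 2 bytes per sensor, 2-byte checksum). The byte order and the count of 13 sensor values are assumptions drawn from elsewhere in the article, and the reading-count field mentioned later in the Results section is omitted.

```python
import struct

HEADER = bytes.fromhex("A1B2C3D4")

def build_packet(values):
    """Host-side model of the MCU packet: header, one 16-bit field per
    sensor, then a 16-bit checksum over the data bytes (big-endian assumed)."""
    data = b"".join(struct.pack(">H", v & 0xFFFF) for v in values)
    checksum = sum(data) & 0xFFFF  # sum of unsigned bytes, kept to 2 bytes
    return HEADER + data + struct.pack(">H", checksum)

# 13 sensor values (5 flex, 3 accel, 3 gyro, 2 touch; ordering assumed)
print(len(build_packet([0] * 13)))  # 4 + 26 + 2 = 32 bytes
```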

PYTHON COMMUNICATION
Communication with the microcontroller was established through Python’s socket and struct libraries. We created a class called SerialWrapper whose main goal is to receive data from the microcontroller. It does so by opening a port and running a separate thread that waits for new data to become available. The data is scanned for the header, and a packet of the right length is removed when available. The checksum is then calculated and verified and, if valid, the data is unpacked into the appropriate values and fed into a queue for other processes to extract. Since we know the format of the packet, we can use the struct library to extract all of the data from the packet, which is in a byte-array format. We then provide the user with two modes of use: one that continuously captures and labels data in order to build a dataset, and another that continuously tries to classify incoming data.

Support Vector Machines (SVMs) are a widely used set of ML algorithms that learn to classify by using a kernel. While the kernel can take various forms, the most common kind is the linear SVM. Simply put, the classification, or sign, for a set of readings is decided by taking the dot product of the readings and the classifier. While this may seem like a simple approach, the results are quite impressive. For more information about SVMs, take a look at scikit-learn’s “Support Vector Machines” (http://scikit-learn.org/stable/modules/svm.html).
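The framing, checksum verification, and struct unpacking that SerialWrapper performs might look roughly like the sketch below. Field order, endianness, and the resynchronization strategy are assumptions; the original class also ran in its own thread and pushed results into a queue.

```python
import struct

HEADER = bytes.fromhex("A1B2C3D4")
N_SENSORS = 13
PACKET_LEN = len(HEADER) + 2 * N_SENSORS + 2  # header + data + checksum

def extract_packets(buf):
    """Scan a byte buffer for framed packets, check each checksum, and
    return (list of 13-value tuples, unconsumed remainder of the buffer)."""
    packets = []
    i = buf.find(HEADER)
    while i != -1 and len(buf) - i >= PACKET_LEN:
        frame = buf[i:i + PACKET_LEN]
        data = frame[len(HEADER):-2]
        (checksum,) = struct.unpack(">H", frame[-2:])
        if sum(data) & 0xFFFF == checksum:
            packets.append(struct.unpack(">%dH" % N_SENSORS, data))
            buf = buf[i + PACKET_LEN:]
        else:
            buf = buf[i + 1:]  # corrupt frame: skip this header and resync
        i = buf.find(HEADER)
    return packets, buf
```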

PYTHON MACHINE LEARNING
For the purposes of this project we chose to focus primarily on the alphabet, a–z, and we added two more labels, “nothing” and “relaxed,” to the set. Our rationale for giving the classifier “nothing” was to have a class made up of mostly noise. This class would not only provide negative instances to help learn the other classes, but it also gave the classifier a way of signaling that a gesture is not recognized as one of the signs we care about. In addition, we didn’t want the classifier trying to predict letters while the user was simply standing by, so we taught it what a “relaxed” state was. This state was simply the position of the user’s hand when he or she was not signing anything. In total there were 28 signs, or labels.

For our project we made extensive use of Python’s scikit-learn library. Since we were using various kinds of sensors with drastically different ranges of values, it was important to scale all of our data so that the SVM would have an easier time classifying. To do so, we made use of the preprocessing tools available in scikit-learn. We chose to scale all of our data so that each sensor’s readings were centered at zero mean with unit variance. This approach brought about drastic improvements in our performance and is strongly recommended. The classifier we ended up using was the SVM provided by scikit-learn under the name SVC.
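A minimal version of this pipeline with scikit-learn might look like the following; the dataset here is a random placeholder, and the variable names are illustrative rather than taken from the authors’ code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 60 instances per label, 13 sensor values per instance.
labels = [chr(c) for c in range(ord("a"), ord("z") + 1)] + ["nothing", "relaxed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(len(labels) * 60, 13))
y = np.repeat(labels, 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Zero-mean, unit-variance scaling followed by scikit-learn's SVC classifier.
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # near chance on noise
```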

Figure 5: The confusion matrix demonstrates how many times each label is predicted and how many times that prediction is accurate. We would like to see a perfect diagonal line, but we see that one square does not adhere to this. This square corresponds to “predicted V when it was really U” and it shows about a 66% accuracy.

Another tool that was crucial to us as developers was plotting, which we used to visualize the data and judge how well a learning algorithm should be able to predict the various signs. The main tool we developed for this was the plotting of a sequence of sensor readings as an image (see Figure 4). Since each packet contains a value for each of the sensors (13 in total), we can concatenate multiple packets to create a matrix. Each row thus corresponds to one sensor, and following a row from left to right gives progressively later readings; every packet makes up a column. This matrix can then be plotted with instances of the same sign grouped together, and the differences between one sign and the others can be observed. If the difference is clear to us, then the learning algorithm should have no issue telling the signs apart. If not, the algorithm might struggle, and changes to the approach could be necessary.
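A sketch of that visualization, assuming packets arrive as 13-value sequences (the matplotlib usage is ours, not necessarily the authors’):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sign_image(packets, title):
    """Stack packets as columns: each row is one sensor, each column one
    packet, so left-to-right reads as progressively later samples."""
    matrix = np.asarray(packets).T  # shape: (13 sensors, n packets)
    plt.imshow(matrix, aspect="auto", interpolation="nearest")
    plt.xlabel("packet (time)")
    plt.ylabel("sensor index")
    plt.title(title)
    plt.colorbar()
    plt.show()
```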

The final step of classification is to pass the output of the classifier through a final level of filtering and debouncing before it reaches the user. To accomplish this, we fill a buffer with the last 10 predictions and only consider a prediction valid if it appears in at least nine of those 10. Furthermore, we debounce this output and only notify the user if it is a novel prediction and not just a continuation of the previous one. We print the result on the screen and also use Peter Parente’s pyttsx cross-platform text-to-speech library to output the result as audio whenever it is neither “nothing” nor “relaxed.”
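That filtering-and-debouncing step is compact enough to show in full. The 9-of-10 threshold matches the text; the class name and interface are illustrative.

```python
from collections import Counter, deque

class PredictionFilter:
    """Keep the last 10 raw classifier outputs; emit a sign only when at
    least 9 of them agree and the sign differs from the last one emitted."""
    def __init__(self, size=10, threshold=9):
        self.window = deque(maxlen=size)
        self.threshold = threshold
        self.last_output = None

    def update(self, prediction):
        self.window.append(prediction)
        sign, count = Counter(self.window).most_common(1)[0]
        if count >= self.threshold and sign != self.last_output:
            self.last_output = sign
            return sign   # stable, novel prediction: print/speak it
        return None       # suppressed: unstable, or a repeat of the last sign
```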

RESULTS
Our original glove did not have contact sensors on the index and middle fingers. As a result, it had a hard time predicting “R,” “U,” and “V” properly. These signs are actually quite similar to each other in terms of hand orientation and flex. To mitigate this, we added two contact sensors: one set on the tips of the index and middle fingers to detect “R,” and another pair in between the index and middle fingers to discern between “U” and “V.”

As you might have guessed, the speed of our approach is limited by the rate of communication between the microcontroller and the computer, and by the rate at which we are able to poll the ADC on the microcontroller. We determined how quickly we could send data to the PC by sending data serially and increasing the send rate until we noticed a difference between the rate at which data was being received and the rate at which it was being sent. We then reduced the send frequency to a reasonable value and converted it into a loop interval (about 3 ms).

We then aimed to gather as much sensor data as possible between packet transmissions. In addition to each packet, the microcontroller also sent the number of readings it had performed. We used this number to arrive at a reasonable number of values to poll before aggregating the data and sending it to the PC. We concluded that the microcontroller was capable of reading and averaging each of the sensors 16 times between packets, which for our purposes provided enough room to do some averaging.

The Python algorithm is currently limited by the rate at which the microcontroller sends data to the PC and by the time it takes the speech engine to say the word or letter. The rate of transfer is currently about 30 Hz, and we wait to fill a buffer with about 10 unanimous predictions. This means that the fastest we could output a prediction is about three times per second, which for our needs was suitable. Of course, one can play with these values to get faster but slightly less accurate predictions. However, we felt that the glove was responsive enough at three predictions per second.

While we were able to get very accurate predictions, we did see some slight variations in accuracy depending on the size of the person’s hands. The accuracy of each flex sensor is limited beyond a certain point, and smaller hands produce a larger degree of bend. As a result, the difference between similar signs with a lot of flex tends to be smaller for users with more petite hands. For example, consider the signs for “M” and “S.” The only difference between them is that “S” elicits slightly more flex in the fingers. For smaller hands, however, the change in the flex sensor’s resistance is small, and the algorithm may be unable to discern the difference between these signs.

Figure 6: We can see that even with very small amounts of data the classifier does quite well. After gathering just over 60 readings per sign it achieves an accuracy of over 98%.

In the end, our classifier was able to achieve 98% accuracy (the error being composed almost solely of U/V confusion) on a task of 28 signs: the full alphabet plus “relaxed” and “nothing” (see Figure 5). A random classifier would guess correctly only 4% of the time, which clearly indicates that our device is quite accurate. It is worth noting, however, that the algorithm could greatly benefit from improved touch sensors (seeing as the most common mistake is confusing U for V), from being trained on a larger population of users, and especially from larger datasets. With a broad enough dataset, we could give new users a small test script that covers only the hardest letters to predict and relies on the already available data for the rest. The software has currently been trained on the two team members and has been tested on some users outside of the team. The results were excellent for the team members who trained the glove, and mostly satisfying, though not perfect, for the other volunteers. Since the volunteers did not have a chance to train the glove and were not very familiar with the signs, it is hard to say whether their lower accuracy was a result of overfitting, individual variations in signing, or inexperience with American Sign Language. Regardless, the accuracy of the software was near perfect for users who trained it, and mostly accurate for users who did not know American Sign Language beforehand and did not train the glove.

Lastly, it is worth noting that the amount of data necessary for training the classifier was surprisingly small. With about 60 instances per label, the classifier was able to reach the 98% mark. Given that we receive 30 samples per second and that there are 28 signs, gathering the training data can be done in under a minute (see Figure 6).

FUTURE UPGRADES
The project met our expectations. Our initial goal was to create a system capable of recognizing and classifying gestures, and we were able to do so with more than 98% average accuracy across all 28 classes. While we did not have a hard timing requirement for the rate of prediction, the resulting speed made using the glove comfortable; it did not feel sluggish. Looking ahead, it would make sense to improve our approach to the touch sensors, since the majority of the ambiguity between signs comes from the difference between U and V. We want to use materials that lend themselves more seamlessly to clothing and provide a more reliable connection. In addition, it would be beneficial to test and train the system on a large group of people, since this would provide richer data and more consistency. Lastly, we hope to make the glove wireless, which would allow it to easily communicate with phones and other devices and make the system truly portable.

RESOURCES
Arduino, “MPU-6050 Accelerometer + Gyro,” http://playground.arduino.cc/Main/MPU-6050.

Atmel Corp., “8-Bit AVR Microcontroller with 128K Bytes In-System Programmable Flash: ATmega1284P,” 8059D-AVR-11/09, 2009, www.atmel.com/images/doc8059.pdf.

P. Fleury, “AVR-Software,” 2006, http://homepage.hispeed.ch/peterfleury/avrsoftware.html.

Lund University, “Tiny Real Time,” 2006, www.control.lth.se/~anton/tinyrealtime/.

P. Parente, “pyttsx – Text-to-speech x-platform.”

Python Software Foundation, “struct – Interpret Strings as Packed Binary Data,” https://docs.python.org/2/library/struct.html.

scikit-learn, “Preprocessing Data,” http://scikit-learn.org/stable/modules/preprocessing.html.

scikit-learn, “Support Vector Machines,” http://scikit-learn.org/stable/modules/svm.html.

Spectra Symbol, “Flex Sensor FS,” 2015, www.spectrasymbol.com/wp-content/themes/spectra/images/datasheets/FlexSensor.pdf.

R. Villalba and M. Lin, “Sign Language Glove,” ECE4760, Cornell University, 2014, http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/f2014/rdv28_mjl256/webpage/.

SOURCES
ATmega1284P Microcontroller
Atmel | www.atmel.com

MPU-6050 MEMS MotionTracking Device
InvenSense | www.invensense.com

Article originally published in Circuit Cellar June 2016, Issue #311

New Audio DAC Offers High-Res Audio to Pro Audio Devices

Cirrus Logic recently introduced the CS43130 MasterHIFI, a low-power digital-to-analog converter (DAC) featuring a headphone amplifier. Operating at 4× lower power consumption, the CS43130 supports Direct Stream Digital (DSD) formats and includes an NOS filter and 512 single-bit elements to eliminate unwanted noise from the signal and deliver the best filter response. The IC minimizes board-space requirements while enabling performance and features that drive design differentiation.

The CS43130’s features, specs, and benefits:

  • Supports DSD, DSD DoP (DSD over PCM), up to DSD128 and all PCM high-resolution audio formats up to 32-bit 384 kHz
  • Proprietary DSD processor handles switching between DSD and PCM audio streams, while matching the analog audio output level
  • Advanced hi-fi filters, allowing OEMs to tune their own signature sound
  • Analog bypass switch provides easy switching between hi-fi and voice call modes and includes a low-power state for voice only
  • Unique Non-Oversampling (NOS) emulation mode
  • High impedance of 600 Ω and inter-channel isolation of greater than 110 dB
  • Proprietary digital-interpolation filters support five selectable digital filter responses
  • On-board, low-noise, ground-centered headphone amplifier provides proprietary AC impedance detection to support headphone fingerprinting
  • Uses 512 individual DACs per channel in an analog/digital filter array
  • THD+N of –108 dB and dynamic range of 130 dB
  • Consumes 23 mW of power
  • Supports up to 32-bit, 384-kHz sample-rate audio playback

The CS43130 and CS4399 are available in a 42-ball WLCSP or a 40-pin QFN package. A development kit is also available.

Source: Cirrus Logic

Transform IoT Audio, Voice, and Video Interactions

NXP Semiconductors recently introduced the new i.MX 8M family of applications processors, specifically designed to meet increasing audio and video system requirements for smart home and smart mobility applications such as over-the-top (OTT) set-top boxes, digital media adapters, surround sound, sound bars, A/V receivers, voice control, voice assistance, digital signage, and general-purpose human machine interface (HMI) solutions.

The concept of the smart home is expanding rapidly, heightening consumers’ expectations for audio and video entertainment and transforming the requirements for consumer electronics devices. NXP’s i.MX 8M family addresses the major inflection points currently underway in streaming media: voice recognition and networked speakers in audio, and the move to 4K High Dynamic Range (HDR) and the growth of smaller, more compact form factors in video.

NXP’s i.MX 8M family of processors has up to four 1.5-GHz ARM Cortex-A53 cores plus a Cortex-M4 core, flexible memory options, and high-speed connectivity interfaces. The processors also feature full 4K UltraHD resolution and HDR (Dolby Vision, HDR10, and HLG) video quality, the highest levels of pro-audio fidelity, up to 20 audio channels, and DSD512 audio. The i.MX 8M family is tailored to streaming video devices, streaming audio devices, and voice control applications.

Capable of driving dual displays, the new devices include:

  • The i.MX 8M Dual/i.MX 8M Quad, which integrates two or four ARM Cortex-A53 cores, one Cortex-M4F core, a GC7000Lite GPU, and 4Kp60 H.265 and VP9 video capability.
  • The i.MX 8M QuadLite, which integrates four ARM Cortex-A53 cores, one Cortex-M4F core, and a GC7000Lite GPU.
  • The i.MX 8M Solo, which integrates one ARM Cortex-A53 core, one Cortex-M4F core, and a GC7000nanoULTRA GPU.

The i.MX 8 applications processor is highly scalable with a pin- and power-compatible package and comprehensive software support. The i.MX 8 multi-sensory enablement kit (MEK) is now available to prototype i.MX 8M systems. Limited sampling of i.MX 8M will begin in the second quarter of 2017, and general availability is expected in the fourth quarter of 2017.

Source: NXP Semiconductors

New DACs with 140-dB Dynamic Range

ESS Technology announced its new flagship ES9038PRO SABRE DAC at CES 2016, generating immediate attention from audio manufacturers looking to raise the standard on new-generation high-resolution audio products.

During CES 2016, ESS Technology announced its new professional series of digital-to-analog converters (DACs) targeted at premium high-end consumer and professional recording studio equipment. The flagship offering of this series, and the first member of the ESS PRO SABRE line, is the ES9038PRO SABRE DAC. It sets a new benchmark in high-end audio by offering the industry’s highest dynamic range (DNR) of 140 dB along with impressively low total harmonic distortion plus noise (THD+N) of –122 dB in a 32-bit, 8-channel DAC.

As high resolution content proliferates through new, high-end music download services, users are looking for equipment that delivers the highest quality sound possible, regardless of file format or device. The ES9038PRO SABRE DAC sets a new standard for immersive and high-resolution audio (HRA) experiences.

The ES9038PRO SABRE DAC features ESS’s patented 32-bit HyperStream DAC technology. The HyperStream architecture is responsible both for the outstanding sound quality of ESS PRO SABRE DACs and for their extremely low THD+N. Other 32-bit, 8-channel DACs, using a typical delta-sigma architecture, feature –107 dB THD+N (0.0004%) and, in individual listening tests, do not equal the clarity and sound stage of the ES9038PRO. This new flagship SABRE DAC was created to integrate seamlessly with both the existing and future portfolio of ESS headphone amplifiers as well as other audio building-block technology.

New hardware features include full-scale manual/auto gain calibration to reduce device-to-device gain error (allowing multiple DACs to be configured for high-channel-count systems), an option for a programmable volume-control ramp rate with +18 dB, a DSD-over-PCM (DoP) decoder, and a total of eight preset filters for maximum design flexibility. Its programmable functions allow customizing outputs to mono, stereo, or 8-channel operation in current mode or voltage mode based on performance criteria, together with user-programmable filters and programmable THD compensation to minimize THD caused by external components.

For audio designers, the ES9038PRO SABRE DAC includes significant advancements over previous generations, simplifying the implementation of specific software and reducing debugging time. The volume level of all internal DACs can be updated with a single software instruction. Clock gearing reduces MCLK frequency and saves power – the chip has 500 mW power consumption at 192 kHz sampling and 100 MHz MCLK – while advanced power management features enable a low-power idle mode when the audio signal is absent.

According to ESS, the ES9038PRO SABRE DAC was designed for premium home theater equipment including Blu-ray players, preamplifiers, all-in-one A/V receivers, and more. Studio environments can also leverage the ES9038PRO SABRE DAC’s industry-leading performance for professional audio workstations and other equipment. The PRO series enables studio professionals to recreate popular signature sound styles, using external DSP and specialized software packages, while remaining true to the artists’ musical vision.

In addition to the SABRE ES9038PRO, ESS is also announcing other members of the PRO series – the ES9028PRO and ES9026PRO SABRE DACs. These 32-bit, 8-channel PRO series DACs are designed for the audiophile/enthusiast who demands the high quality and performance of a SABRE DAC at a more economical price point. The ES9028PRO and ES9026PRO are pin-compatible upgrades for previous generation ESS products — the ES9018S and ES9016S — and feature 129 dB and 124 dB dynamic range (DNR), and -120 dB and -110 dB total harmonic distortion plus noise (THD+N).

Source: audioXpress

Next-Gen OPA1622 Audio Operational Amplifier

Texas Instruments recently introduced the OPA1622 audio operational amplifier (op-amp), which is the latest addition to the company’s Burr-Brown Audio line. The OPA1622 delivers high output power of up to 150 mW and extremely low distortion of –135 dB at 10 mW.

TI introduces the industry’s highest-performance audio operational amplifier (PRNewsFoto/Texas Instruments)

The compact OPA1622’s low power consumption and low distortion can deliver high-fidelity audio in portable devices such as headphone amplifiers and smartphones. Headphone amplifier designers can take advantage of its low total harmonic distortion (THD) of –135 dB at 10-mW output power into a 32-Ω load. In addition, the OPA1622 delivers maximum output power of up to 150 mW before clipping while maintaining the lowest THD and noise (THD+N), providing a clean signal path for pro audio applications.

The OPA1622 consumes a low quiescent current of 2.6 mA per channel and delivers a high linear output current of 80 mA RMS in a 3-mm × 3-mm DFN package. Additionally, the high power-supply rejection ratio (PSRR) of –97/–123 dB at 20 kHz enables low distortion from switching power supplies with no LDO, thereby saving board space without compromising audio performance.

The OPA1622’s ground-referenced enable pin is directly controllable from the low-power GPIO pins without level-shifting circuits. Its pinout improves PCB layout and enables exceptional distortion performance at high output power. An enable-circuitry design limits output transients when the OPA1622 is transitioning into or out of shutdown mode, effectively eliminating audible clicks and pops.

A TINA-TI SPICE macromodel is available for the OPA1622 to help verify board-level signal-integrity requirements. The OPA1622 is available in a 3-mm × 3-mm DFN package and is priced at $2.90 in 1,000-unit quantities.

Source: Texas Instruments

Utilize Simple Radios with Simple Computers

I ordered some little UHF transmitters and receivers from suppliers on AliExpress, the Chinese equivalent of Amazon.com, in order to extend my door chimes into areas of my home where I could not hear them. These ridiculously inexpensive units are currently about $1 per transmitter-receiver pair in quantities of five, including shipping, and are available at 315 and 433.92 MHz. Photo 1 shows a transmitter and receiver pair. Connections are power, ground, and data in or out.

Photo 1: The 315-MHz transmitter-receiver pair (receiver on left)

The original attempt at a door chime extender modulated the transmit RF with an audio tone and searched for the presence of that tone at the receiver with a narrow audio filter, envelope detector, and threshold detector. This sort of worked, but I started incorporating the same transmitters into another project that interfered, despite the audio filter.

The other project used Arduino Uno R3 computers and Virtual Wire to convey data reliably between transmitters and receivers. As that project evolved, I learned enough about the Atmel ATtiny85 processor, a smaller alternative to the Atmel ATmega328 in the Arduino Uno R3, to make new, better, and very much simpler circuits. The project came full circle and now serves as a better doorbell extender. The transmitters self-identify, so a second transmit unit now also notifies me when the postman opens the mailbox.

Note the requirement for Virtual Wire.  Do not expect a simple connection to a serial port to work very well.

Transmitter

Figure 1 shows the basic transmitter circuit, and Photo 2 shows the prototype transmitter. There is only the ATtiny85 CPU and a transmitter board. The ATtiny85 only has eight pins with two dedicated to power and one to the Reset input.

Figure 1: Simple transmitter schematic

One digital output powers the transmitter and a second digital output provides data to the transmitter.  The remaining three pins are available to serve as inputs.  One serves to configure and control the unit as a mailbox alarm, and the other two set the identification message the transmitter sends to enable the receiver to discriminate among a group of such transmitters.

Photo 2: The 315-MHz transmitter and ATtiny85 CPU

When input pin 3 is high at power-up, the unit enters mailbox alarm mode. In mailbox alarm mode, input pins 2 and 7 serve as binary identification bits that define the value of the single numeric character the transmitter sends, and input pin 3 serves as the interrupt input. Whenever input pin 3 transitions from high to low or low to high, the ATtiny85 CPU wakes from SLEEP_MODE_PWR_DOWN, makes a single transmission, and goes back to sleep. The current mailbox sensor is a tilt switch mounted to the door of the mailbox. The next one will likely be a reed switch, so only a magnet will need to move.

When in SLEEP_MODE_PWR_DOWN, the whole circuit draws under 0.5 µA. I expect long life from the three AAA batteries if they can withstand heat, cold, and moisture. I can program the ATtiny to pull the identification inputs high, but each binary identification pin then draws about 100 µA when pulled low. In contrast, the 20- or 22-MΩ resistors I use as pull-ups each draw only a small fraction of a microampere when pulled low.

When input pin 3 is low at power-up, the unit enters doorbell extender alarm mode. In doorbell extender alarm mode, the input pins 2 and 7 again serve as binary identification bits to define the value of the single numeric character that the transmitter sends; but in doorbell extender mode, the unit repetitively transmits the identification character whenever power from the door chimes remains applied.

Receiver

Figure 2 shows the basic receiver circuit, and Photo 3 shows the prototype receiver. There is only the ATtiny85 CPU with a 78L05 voltage regulator and a receiver board.

Figure 2: Simple receiver schematic

The receiver output feeds the input at pin 5. The Virtual Wire software decodes and presents the received character. Software in the CPU sends tone pulses to a loudspeaker that convey the value of the identification code received, so I can tell the difference between the door chime and the mailbox signals. Current software changes both the number of beep tones and their audible frequency to indicate the identity of the transmit source.

Photo 3: The 315-MHz receiver with ATtiny85 CPU and 78L05 voltage regulator

Note that these receivers are annoyingly sensitive to power supply ripple, so receiver power must either come from a filtered and regulated supply or from batteries.

Photo 4 shows the complete receiver with the loudspeaker.

Photo 4: Receiver with antenna connections and a loudspeaker

Link Margin

A few inches of wire for an antenna will reach anywhere in my small basement. To improve transmission distance from the mailbox at the street to the receiver in my basement, I added a simple half-wave dipole antenna to both transmitter and receiver. Construction is with insulated magnet wire so I can twist the balanced transmission line portion as in Photo 5. I bring the transmission line out through an existing hole in my metal mailbox and staple the vertical dipole to the wooden mail post. My next mailbox will not be metal.

Photo 5: Simple half-wave dipole for both Tx and Rx increases link distance

I don’t have long term bad weather data to show this will continue to work through heavy ice and snow, but my mailman sees me respond promptly so far.

Operating Mode Differences

The mailbox unit must operate at minimum battery drain, and it does this very well. The doorbell extender operates continuously when the AC door chime applies power. In order to complete a full message no matter how short a time someone presses the doorbell push button, I rectify the AC and store charge in a relatively large electrolytic capacitor to enable sufficient transmission time.

Photo 6: New PCBs for receive and transmit

Availability

This unit is fairly simple to fabricate and program yourself, but if there is demand, my friend Lee Johnson will make and sell boards with pre-programmed ATtiny85 CPUs. (Lee Johnson, NØVI, will have information on his website if we develop this project into a product: www.citrus-electronics.com.) We will socket the CPU so you can replace it to change the program. The new transmitter and receiver printed circuit boards appear in Photo 6.


Dr. Sam Green (WØPCE) is a retired aerospace engineer living in Saint Louis, MO. He holds degrees in Electronic Engineering from Northwestern University and the University of Illinois at Urbana. Sam specialized in free space and fiber optical data communications and photonics. He became KN9KEQ and K9KEQ in 1957, while a high school freshman in Skokie, IL, where he was a Skokie Six Meter Indian. Sam held a Technician class license for 36 years before finally upgrading to Amateur Extra Class in 1993. He is a member of ARRL, a member of the Boeing Employees Amateur Radio Society (BEARS), a member of the Saint Louis QRP Society (SLQS), and breakfasts with the Saint Louis Area Microwave Society (SLAMS). Sam is a Registered Professional Engineer in Missouri and a life senior member of IEEE. Sam is listed as inventor on 18 patents.

SoC FPGA Development Kit for Audio & Processing Applications

Coveloz recently announced the availability of its Pro Audio Ethernet AVB FPGA Development Kit, a ready-to-play platform, built on modular hardware, for developing scalable, cost-effective networked audio and processing applications.

Coveloz introduced its Networked Pro Audio SoC FPGA Development Kit during the Integrated System Europe (ISE) show in Amsterdam. According to the company, the new platform will enable manufacturers to achieve faster AVnu certification for new AVB solutions, creating an ideal development environment for live sound, conferencing systems, public address, audio post production, music creation, automotive infotainment and ADAS applications.

At the heart of the Coveloz development platform is a highly integrated System-on-Module (SOM), featuring an Altera Cyclone V SoC FPGA, which includes a dual-core ARM A9 processor, DDR3 memory and a large FPGA fabric, all in a low cost and compact package. The kit includes a multitude of networking and audio interfaces, including three Gigabit Ethernet ports as well as I2S, AES10/MADI, AES3/EBU and TDM audio.

Coveloz provides FPGA and Linux firmware enabling designers to quickly build AVnu Certified products for the broadcast, pro-audio/video and automotive markets. The platform is aimed at time-synchronized networks and includes grandmaster, PPS and word clock inputs and outputs as well as high quality timing references.

The Coveloz development kit is also host to the BACH-SOC platform, which integrates AES67 and Ethernet AVB audio networking and processing. Both SoC and PCIe-based FPGA implementations are available.

The Coveloz Bach Module is a full-featured and programmable audio networking and processing solution for easily integrating industry-standard AES67 and/or Ethernet AVB/TSN networking into audio/video distribution and processing products. The solution enables products with over 128+128 channels of digital streaming and 32-bit audio processing at 48, 96, or 192 kHz.

Supporting a wide range of interfaces, Coveloz complements the development platform with a comprehensive software toolkit and engineering services to help manufacturers reduce time to market. Coveloz also provides application examples to demonstrate the capabilities of the BACH-SOC platform.

The programmable BACH-SOC can be customized to a particular application in many ways—for instance, from selecting the number and type of audio interface to choosing audio processing alone, transport alone, or a combination.

Source: Coveloz

New JukeBlox Wi-Fi Platform for Streaming Audio

Microchip Technology’s fourth-generation JukeBlox platform enables product developers to build low-latency systems, such as wireless speakers, sound bars, AV receivers, micro systems, and more. The JukeBlox 4 Software Development Kit (SDK) in combination with the CY920 Wi-Fi & Bluetooth Network Media Module features dual-band Wi-Fi technology, multi-room features, AirPlay and DLNA connectivity, and integrated music services.

Streaming audio with JukeBlox

The CY920 module is based on Microchip’s DM920 Wi-Fi Network Media Processor, which features 2.4- and 5-GHz 802.11a/b/g/n Wi-Fi, high-speed USB 2.0 and Ethernet connectivity. By using the 5-GHz band, speakers aren’t impacted by the RF congestion found in the 2.4-GHz band.

The DM920 processor also features integrated dual 300-MHz DSP cores that can reduce or eliminate the need for costly standalone DSP chips. A PC-based GUI simplifies the use of a predeveloped suite of standard speaker-tuning DSP algorithms, including a 15-band equalizer, multiband dynamic range compression, equalizer presets, and a variety of filter types. Even if you don’t have DSP coding experience, you can incorporate DSP into your designs.

JukeBlox 4 enables you to directly stream cloud-based music services, such as Spotify Connect and Rhapsody, while using mobile devices as remote controls. Mobile devices can be used anywhere in the Wi-Fi network without interrupting music playback. In addition, JukeBlox technology offers cross-platform support for iOS, Android, Windows 8, and Mac, along with a complete range of audio codecs and ease-of-use features to simplify network setup.

The JukeBlox 4 SDK, along with the JukeBlox CY920 module, is now available for sampling and volume production.

Source: Microchip Technology

Summit Semiconductor Extended Distance Modules Support WiSA Whole House Audio Specification

Summit Wireless, a division of Summit Semiconductor (Portland, Oregon), supported the Wireless Speaker and Audio (WiSA) Association demonstrations at CEDIA Expo 2014 in September. During the show, WiSA announced new multi-zone requirements and a feature set for simultaneous support of both wireless home theater playback and multi-zone stereo audio streams. WiSA-compliant systems should be able to deliver high-resolution, uncompressed audio up to 100 m line of sight, or 20 to 40 m through walls.

Summit Semiconductor has confirmed the availability of new extended-distance transmit and receive modules for multi-zone home theater and whole-house audio applications, with support for the WiSA Association’s updated compliance and interoperability test specification.

WiSA members recognize the growth of whole-house stereo audio solutions in the marketplace and the practical need to keep system cost down for mass-market acceptance. With the new extended-distance radio capabilities, a single system can provide consumers with both a WiSA-compliant home theater system and a whole-house system.

The new Summit Semiconductor modules can transmit high-quality, uncompressed audio up to 100 m line of sight. When integrated into an AVR, audio hub, HDTV, Blu-ray player, or gaming console, a single transmit module can manage up to 32 speakers and eight different zones with separate volume control, while simultaneously supporting both home theater and multi-zone audio transport. For example, during CEDIA Expo the WiSA Association demonstrated a wireless 5.1 home theater system with Summit’s extended-distance modules that simultaneously transmitted a separate stereo pair with different content to a separate location.

The new extended-distance modules come pre-certified by country and are backward compatible with the prior generation of Summit’s home theater wireless modules. Engineering samples of the new extended-distance modules are available from Summit Semiconductor.

“We’re excited to offer the new extended distance modules in support of the WiSA Association’s multi-zone and whole house initiative,” says Tony Parker, vice president of marketing, Summit Semiconductor. “This is a perfect tool for audio products that need high resolution multi-channel audio, but want to appeal to a broader base as a multi-media whole house platform. Products such as a HDTV, AVR/Pre-amp, game console, soundbar and HTiB can benefit significantly from adding these modules to their designs.”

Source: Summit Semiconductor

PIC32MX1/2/5 Microcontrollers for Embedded Control & More

Microchip Technology’s new PIC32MX1/2/5 series enables a wide variety of applications, ranging from digital audio to general-purpose embedded control. The microcontroller series offers a robust peripheral set for a wide range of cost-sensitive applications that require complex code and higher feature integration.

The microcontrollers feature:

  • Up to 83 DMIPS performance
  • Scalable memory options from 64/8 KB to 512/64 KB of flash memory/RAM
  • Integrated CAN2.0B controllers with DeviceNet addressing support and programmable bit rates up to 1 Mbps, along with system RAM for storing up to 1,024 messages in 32 buffers
  • Four SPI/I2S interfaces
  • A Parallel Master Port (PMP) and capacitive touch-sensing hardware
  • A 10-bit, 1-Msps, 48-channel ADC
  • Full-speed USB 2.0 Device/Host/OTG peripheral
  • Four general-purpose direct memory access (DMA) controllers and two dedicated DMAs on each CAN and USB module

Microchip’s MPLAB Harmony software development framework supports the MCUs. You can take advantage of Microchip’s software packages, such as Bluetooth audio development suites, a Bluetooth Serial Port Profile library, audio equalizer filter libraries, various decoders (including AAC, MP3, WMA, and SBC), sample-rate conversion libraries, CAN2.0B PLIBs, USB stacks, and graphics libraries.

Microchip’s free MPLAB X IDE, the MPLAB XC32 compiler for PIC32, the MPLAB ICD3 in-circuit debugger, and the MPLAB REAL ICE in-circuit emulation system also support the series.

The PIC32MX1/2/5 Starter Kit costs $69. The new PIC32MX1/2/5 microcontrollers with the 40-MHz/66 DMIPS speed option are available in 64-pin TQFP and QFN packages and 100-pin TQFP packages. The 50-MHz/83 DMIPS speed option for this PIC32MX1/2/5 series is expected to be available starting in late January 2015. Pricing starts at $2.75 each, in 10,000-unit quantities.

Source: Microchip Technology

Twin-T Oscillator Configuration

Since retiring in 2013, electrical engineer Larry Cicchinelli has provided technical support at an educational radio station. For audio circuit debugging and testing, he uses a DIY battery-powered oscillator/volume unit (VU) meter. Details follow.

Originally, I was only going to build the audio source. When I thought about how I would use the unit, it occurred to me that the device should have a display. I decided to design and build an easy-to-use unit that would combine a calibrated audio source with a level display. Then, I would have a single, battery-powered instrument to do some significant audio circuit testing and debugging.

The front panel of the oscillator/volume unit (VU) meter contains all the necessary controls. (Source: L. Cicchinelli)

Cicchinelli describes the Twin-T Oscillator:

The oscillator uses the well-known Twin-T configuration with a minor modification to ensure a constant level over a range of power supply voltages. The circuit I implemented maintains its output level over a range of at least 6 to 15 V. Below 6 V, the output begins to distort if you have full output voltage (0 dBu). The modification consists of two antiparallel diodes in the feedback loop. The idea came from a project on DiscoverCircuits.com. The project designer also indicates that the diodes reduce distortion.

Figure 1 shows the oscillator’s schematic. Header H1 and diode D1 enable you to have two power sources. I installed a 9-V battery and snap connector in the enclosure as well as a connector for external power. The diode enables the external source to power the unit if its voltage is greater than the battery. Otherwise the battery will power the unit. The oscillator draws about 4 mA so it does not create a large battery drain.

The standard professional line level is 4 dBu, which is 1.228 VRMS or 3.473 VPP into a 600-Ω load. The circuit values enable you to use R18 to calibrate it, so the maximum output can be set to the 4-dBu level. A 7.7 (3.473/0.45) gain is required to provide 4 dBu at the transformer. Using the resistors shown in Figure 1, R18 varies the gain of U1.2 from about 4.3 to 13.

Figure 1: The Twin-T oscillator’s circuitry

You may need to use different resistor values for R18, R19, and R20 to achieve a different maximum level. If you prefer 0 dBm (0.775 VRMS into 600 Ω) instead of 4 dBu, you should change R20 to about 5 kΩ to give R18 a range more closely centered on a gain of 4.87 (2.19/0.45). The R20 value shown in Figure 1 will probably work, but the required gain is too close to the minimum for comfort. Most schematics for a Twin-T oscillator show the combination of R3 and R4 as a single resistor of value Rx/2, and the combination of C1 and C2 as a single capacitor of value 2Cx. These values lead to the following formula:

$$f = \frac{1}{2\pi R_x C_x}$$
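As a quick sanity check with hypothetical component values (not taken from this design), series elements of Rx = 15.9 kΩ and Cx = 10 nF would put the oscillator close to 1 kHz:

$$f = \frac{1}{2\pi \times 15.9\,\mathrm{k\Omega} \times 10\,\mathrm{nF}} \approx 1\,\mathrm{kHz}$$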

As you can see in the nearby photo, the Twin-T oscillator and VU meter are built on separate circuit boards.

The Twin-T oscillator and dual VU meter have separate circuit boards

This article first appeared in audioXpress January 2014. audioXpress is one of Circuit Cellar‘s sister publications.

Arduino-Based Tube Stereo Preamp Project

If you happen to be an electrical engineer as well as an audiophile, you’re in luck. With an Arduino, some typical components, and a little know-how, you can build a DIY tube stereo preamplifier.

Shannon Parks—owner of Mahomet, IL-based Parks Audio—designed his “Budgie” preamp after reading an article about Arduino while he was thinking about refurbishing a classic Dynaco PAS-3.

Budgie preamp (Source: S. Parks)

In a recent audioXpress article about the project, Parks noted:

Over the last 10 years, I have built many tube power amplifiers but I had never built a tube preamplifier. The source switching seemed particularly daunting. A friend recommended that I refurbish a classic Dynaco PAS-3 which has been a popular choice with many upgrade kit suppliers. Unfortunately, the main part of these older designs is a clumsy rotary selector switch, not to mention the noisy potentiometers and slide switches. In the 1980s, commercial stereo preamplifiers started using IC microcontrollers that permitted cleaner designs with push-button control, relays for signal switching, and a wireless remote. While reading an article about the Arduino last year, I realized these modern features could easily be incorporated into a DIY preamplifier design.

All the circuits are on one custom PCB along with the power supply and microcontroller (Source: S. Parks)

Parks said the Arduino made sense for a few key reasons:

I found these features were incredibly useful:

  • A bank of relays could switch between the four stereo inputs as well as control mute, standby, gain, and bass boost settings.
  • A red power LED could use PWM to indicate if the preamplifier is muted or in standby (a rough illustration appears in the sketch after this list).
  • An IR receiver with a remote could control a motor-driven volume potentiometer, change the source input selection, and turn the unit on/off. Any IR remote could be used with a code learning mode.
  • A backlit display could easily show all the settings at a glance.
  • Momentary push buttons could select the input device, bass boost, gain, and mute settings.
  • Instead of using several Arduino shields wired to an Arduino board, all the circuits could fit on one custom PCB along with the power supply and the microcontroller.
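
To give a feel for how a couple of these features map onto Arduino code, the minimal sketch below implements just the PWM status LED and a momentary mute button. This is only an illustrative sketch, not Parks’s actual firmware; the pin numbers, fade rate, and crude debounce are all assumptions.

// Illustrative sketch (not the author's firmware): a momentary button
// toggles mute, and the red power LED "breathes" via PWM while muted.
// Pin choices are assumptions for illustration only.

const int LED_PIN = 9;     // PWM-capable pin driving the red power LED
const int BUTTON_PIN = 2;  // momentary mute button, wired to ground

bool muted = false;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // pressed = LOW
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {  // crude debounce via delay
    muted = !muted;
    delay(250);
  }
  if (muted) {
    // Slow cosine "breathing" fade signals the muted state
    float phase = (millis() % 2000) / 2000.0;
    analogWrite(LED_PIN, (int)(127.5 * (1.0 - cos(2.0 * PI * phase))));
  } else {
    analogWrite(LED_PIN, 255);  // steady full brightness when active
  }
}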

Parks used an Arduino Nano, which measures 0.73” × 1.70”. “The tiny Nano can be embedded using a 32-pin dual in-line package (DIP) socket, which cleans up the design. It can be programmed in-circuit and be removed and easily replaced,” he noted.

Parks used an Arduino Nano for the preamp project (Source: S. Parks)

Parks described the shift register circuit:

The Budgie preamplifier uses a serial-in, parallel-out (SIPO) shift register to drive a bank of relays ….

A SIPO shift register is used to drive a bank of relays (Source: S. Parks)

Only four Arduino digital outputs—enable, clock, latch, and data—are needed to control eight DPDT relays. These correspond to the four outputs labeled D3, D4, D5, and D7 …. The Texas Instruments TPIC6C595 shift register used in this project has heavy-duty field-effect transistor (FET) outputs that can handle voltages higher than logic levels. This is necessary for operating the 24-V relays. It also acts as a protective buffer between the Arduino and the relays.
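
Driving the register from the Arduino is straightforward with the built-in shiftOut() routine. The sketch below is a minimal illustration; the excerpt doesn’t spell out which pin plays which role, so the D3/D4/D5/D7 assignments (and the active-low output enable) are assumptions based on a typical TPIC6C595 hookup.

// Illustrative TPIC6C595 relay driver; pin roles are assumed, not from Parks's design
const int ENABLE_PIN = 3;  // D3: output enable, active low on the TPIC6C595 (assumed)
const int CLOCK_PIN  = 4;  // D4: shift-register clock (assumed)
const int LATCH_PIN  = 5;  // D5: register (latch) clock (assumed)
const int DATA_PIN   = 7;  // D7: serial data in (assumed)

// Each bit of relayBits energizes one of the eight DPDT relays
void writeRelays(byte relayBits) {
  digitalWrite(LATCH_PIN, LOW);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, relayBits);  // clock out 8 bits
  digitalWrite(LATCH_PIN, HIGH);  // latch the new state onto the FET outputs
}

void setup() {
  pinMode(ENABLE_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
  pinMode(DATA_PIN, OUTPUT);
  digitalWrite(ENABLE_PIN, LOW);  // enable the outputs
  writeRelays(0b00000001);        // e.g., select stereo input 1, all else off
}

void loop() {
}

Since the TPIC6C595’s outputs are open-drain FETs that sink current, each 24-V relay coil would sit between the supply rail and a register output, which pulls the coil’s low side to ground to energize it.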

Here you see how to set up the Arduino Nano, LCD, power supply, push button, IR, and motor control circuits (Source: S. Parks)

As for the audio circuit, Parks explained:

The 12B4 triode was originally designed to be used in televisions as a vertical deflection amplifier. New-old-stock (NOS) 12B4s still exist. They can be purchased from most US tube resellers. However, a European equivalent doesn’t exist. The 12B4 works well in preamplifiers as a one-tube solution, having both high input impedance and low output impedance, without need for an output transformer. An audio circuit can then be distilled down to a simple circuit with few parts consisting of a volume potentiometer and a grounded cathode gain stage.
The 12B4 has about 23-dB gain, which is more than is needed. This extra gain is used as feedback to the grid, in what is often referred to as an anode follower circuit. The noise, distortion, and output impedance are reduced (see Figure 3). Using relays controlled by the Arduino enables switching between two feedback amounts for adjustable gain. For this preamplifier, I chose 0- and 6-dB overall gain. A second relay enables a bass boost with a series capacitor.
You only need a lightweight 15-to-20-V plate voltage to operate the 12B4s at 5 mA. Linearity is very good due to the small signal levels involved, as rarely will the output be greater than 2 VPP. A constant current source (CCS) active load is used with the 12B4s instead of a traditional plate resistor. This maximizes the possible output voltage swing before clipping. For example, a 12B4 biased at 5-mA plate current with a 20-kΩ plate resistor would drop 100 V and would then require a 120-V supply voltage or higher. Conversely, the CCS will only drop about 2 V. Its naturally high impedance also improves the tube’s gain and linearity while providing high levels of power supply noise rejection.
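
For readers unfamiliar with the anode follower, the gain-setting arithmetic works roughly as follows (the resistor names here are generic, not taken from Parks’s schematic). With an input resistor Ri from source to grid and a feedback resistor Rf from plate back to grid, an ideal-grid analysis of the inverting stage gives:

G = −(A × Rf) / (Rf + (A + 1) × Ri)

where A is the stage’s open-loop gain. With A ≈ 14 (about 23 dB), unity gain (0 dB) requires Rf ≈ 1.15 × Ri, while 6 dB (G = 2) requires Rf ≈ 2.5 × Ri, so switching a relay between two feedback resistances is enough to toggle the preamplifier’s overall gain.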

This article first appeared in Circuit Cellar’s sister publication, audioXpress (July 2014).


Q&A with Arduino-Based Skube Codesigner

The Arduino-based Skube

The Arduino-based Skube

Andrew Spitz is a Copenhagen, Denmark-based sound designer, interaction designer, and programmer. Among his various innovative projects is the Arduino-based Skube music player, which is an innovative design that enables users to find and share music.

Spitz worked on the design with Andrew Nip, Ruben van der Vleuten, and Malthe Borch. Check out the video to see the Skube in action. On his blog SoundPlusDesign.com, Spitz writes: “It is a fully working prototype through the combination of Arduino, Max/MSP, and an XBee wireless network. We access the Last.fm API to populate the Skube with tracks and scrobble, and use their algorithms to find similar music when in Discover mode.”

Skube – A Last.fm & Spotify Radio from Andrew Nip on Vimeo.

The following is an abridged version of an interview that appears in the December 2012 issue of audioXpress magazine, a sister publication of Circuit Cellar magazine.

SHANNON BECKER: Tell us a little about your background and where you live.

Andrew Spitz: I’m half French, half South African. I grew up in France, but my parents are South African, so when I was 17, I moved to South Africa. Last year, I decided to go back to school, and I’m now based in Copenhagen, Denmark, where I’m earning a master’s degree at the Copenhagen Institute of Interaction Design (CIID).

SHANNON: How did you become interested in sound design? Tell us about some of your initial projects.

Andrew: From the age of 16, I was a skydiving cameraman and I was obsessed with filming. So when it was time to do my undergraduate work, I decided to study film. I went to film school thinking that I would be doing cinematography, but I’m color blind and it turned out to be a bigger problem than I had hoped. At the same time, we had a lecturer in sound design named Jahn Beukes who was incredibly inspiring, and I discovered a passion for sound that has stayed with me.

SHANNON: What do your interaction design studies at CIID entail? What do you plan to do with the additional education?

Andrew: CIID is focused on a user-centered approach to design, which involves finding intuitive solutions for products, software, and services using mostly technology as our medium. What this means in reality is that we spend a lot of time playing, hacking, prototyping, and basically building interactive things and experiences of some sort.

I’ve really committed to the shift from sound design to interaction design and it’s now my main focus. That said, I feel like I look at design from the lens of a sound designer as this is my background and what has formed me. Many designers around me are very visual, and I feel like my background gives me not only a different approach to the work but also enables me to see opportunities using sound as the catalyst for interactive experiences. Lots of my recent projects have been set in the intersection among technology, sound, and people.

SHANNON: You have worked as a sound effects recordist and editor, location recordist, and sound designer for commercials, feature films, and documentaries. Tell us about some of these experiences.

Andrew: I love all aspects of sound for different reasons. Because I do a lot of things and don’t focus on one, I end up having more of a general set of skills than going deep with one—this fits my personality very well. By doing different jobs within sound, I was able to have lots of different experiences, which I loved! Location recording enabled me to see really interesting things—from blowing up armored vehicles with rocket-propelled grenades (RPGs) to interviewing famous artists and presidents. And documentaries enabled me to travel to amazing places such as Rwanda, Liberia, Mexico, and Nigeria. As a sound effects recordist on Jock of the Bushveld, a 3-D animation, I recorded animals such as lions, baboons, and leopards in the South African bush. With Bakgat 2, I spent my time recording and editing rugby sounds to create a sound effects library. This time in my life has been a huge highlight, but I couldn’t see myself doing this forever. I love technology and design, which is why I made the move...

SHANNON: Where did the idea for Skube originate?

Andrew: Skube came out of the Tangible User Interface (TUI) class at CIID where we were tasked to rethink audio in the home context. So understanding how and where people share music was the jumping-off point for creating Skube.

We realized that as we move more toward a digital and online music listening experience, current portable music players are not adapted for this environment. Sharing music in communal spaces is neither convenient nor easy, especially when we all have such different taste in music.

The result of our exploration was Skube. It is a music player that enables you to discover and share music and facilitates the decision process of picking tracks when in a communal setting.

audioXpress is an Elektor International Media publication.

New DSP “Lab-in-a-Box” for ARM-Based Audio Systems

Cambridge, UK-based ARM and its partners will start shipping a DSP “Lab-in-a-Box” (LiB) to universities worldwide to help boost practical skills development and the creation of new ARM-based audio systems, including products such as high-definition home media and voice-controlled home automation systems. The LiB kits contain ARM Cortex-M4-based microcontroller boards by STMicroelectronics and audio cards from Wolfson Microelectronics and Farnell element14.

As the centerpiece of the ARM University Program, LiB packages offer ARM-based technology and high-quality teaching and training materials that support electronics and computer engineering courses. DSP courses have traditionally used software simulation packages or hands-on labs built around relatively expensive development kits costing about $300 per student. By comparison, this new DSP LiB will cost around $50 and will allow students to practice theory with advanced hardware sourced from widely available products.

“Our Lab-in-a-Box offerings are proving hugely popular in universities because of the low-cost access to state-of-the-art technology,” said Khaled Benkrid, manager of the Worldwide University Program, ARM. “The DSP kits, powered by ARM Cortex-M4-based processors, enable high performance yet energy-efficient digital signal processing at a very affordable price. We expect to see them being used by students to create commercially-viable audio applications and it’s another great example of our partnership supporting engineers in training and beyond.”

The DSP LiB will begin shipping to universities in July 2014. It is the latest in a series of initiatives led by ARM that span multiple academic topics, including embedded systems design, programming, and SoC design. The DSP kits will also be offered to developers outside academia at a later date.

[via audioXpress.com]