CC Blog Projects Research & Design Hub

Improved Color Control for Analog LED Strips


Using a PSoC 6 MCU and Picovoice AI

Analog LED strips are often used for decorative lighting because they’re inexpensive and easy to operate. In this article, I describe how I improved the color and brightness variation of an analog strip by replacing its remote control with a PSoC 6 microcontroller (MCU) and a Picovoice artificial intelligence (AI) user interface.

  • How can I improve color and brightness control over my analog LED strips?
  • In what projects can I use Picovoice AI?
  • How can I control LED lights with an MCU?

  • Infineon PSoC6 Wi-Fi BT Prototyping Kit (CY8CPROTO-062-4343W)
  • Picovoice Voice Artificial Intelligence
  • Analog LED strip

LED strip lights (also known as LED tapes or ribbon lights) have become a common form of decorative lighting. They come in various lengths and designs, and are composed of sections of flexible circuits populated by small RGB LEDs. It’s easy to attach them to walls, floors, and furniture, and they can offer spectacular lighting.

But depending on where you procure such products, not all of them achieve the desired result. For example, the LED strip in today’s project is controlled by a remote, which doesn’t offer flexibility in setting the desired color and brightness.

This article features a DIY project in which a remote-controlled LED strip is connected to a PSoC 6 microcontroller (MCU) to maximize color variations. Removing the remote control meant that a new method of user interaction was needed. There are many possible options for light control. With an MCU, you can add a motion sensor, so the lights turn on whenever a person is nearby. Or you can make the lights play to the rhythm of music by adding an audio sensor to the MCU. The only limitation is the creator’s imagination. For this project, I opted to use voice artificial intelligence (AI).

PSOC 6 AND PICOVOICE

The MCU used in this project is the Infineon PSoC6 Wi-Fi BT Prototyping Kit (CY8CPROTO-062-4343W) (Figure 1). This development board delivers dual cores, with a 150-MHz Arm Cortex-M4 as the primary application processor and a 100-MHz Arm Cortex-M0+ as the secondary processor for low-power operations. It supports full-speed USB, capacitive sensing with CAPSENSE, and a PDM-PCM digital microphone interface.

Figure 1
Infineon PSoC 6 Wi-Fi BT Prototyping Kit. The CAPSENSE module can be seen on the bottom-right, and the PDM-PCM digital microphone interface can be seen on the top-right. The prototype board is placed in the middle.

The Infineon website contains full support and documentation, including some tutorials, for each of its products. The PSoC6 Wi-Fi BT Prototyping Kit can be evaluated using ModusToolbox. It includes the PSoC6 Software Development Kit (SDK), with a diverse range of template projects with which to start designs. I highly recommend watching tutorials and videos explaining how to get started with PSoC 6 and ModusToolbox [1].

Voice AI is a category of technology in which machines receive, recognize, interpret, and perform voice commands. In this project, the voice AI-based user interface uses just the microphone block interface. Picovoice is an end-to-end platform that makes voice AI easier to implement. It enables the training of a model, which recognizes the user’s phrases and processes them further. To create a model, the user can access the Picovoice console, so it doesn’t involve any additional machine-learning (ML) coding. A detailed discussion about Picovoice and its implementation is presented later in this article.


LED STRIPS

LED strips are flexible circuit boards with RGB LEDs soldered on at equal distances. Each of these LEDs is composed of three smaller lights with different colors (red, green, and blue). By controlling their relative intensities, millions of colors can be produced. To check out all the possible combinations using the three primary colors, search the Internet for “RGB Color Codes Chart”—this gives the color, the RGB hex code, and the RGB decimal code [2]. The decimal code indicates the intensity of each LED color component, which takes a value between 0 (LOW, no color) and 255 (HIGH, full color). Any number between them sets the LED to partial light emission.
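As a quick illustration, a 24-bit hex code can be split into its three decimal components with a few shifts and masks. This helper is a sketch for the curious reader, not part of the project’s code:

```c
#include <assert.h>
#include <stdint.h>

/* Split a 24-bit RGB hex code (for example, 0xFF8000) into its red,
 * green, and blue decimal components, each in the range 0-255. */
static void hex_to_rgb(uint32_t hex, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (hex >> 16) & 0xFF;   /* top byte: red   */
    *g = (hex >> 8)  & 0xFF;   /* middle byte: green */
    *b =  hex        & 0xFF;   /* bottom byte: blue  */
}
```

For instance, the hex code 0xFF8000 decomposes to the decimal triple (255, 128, 0), which the chart lists as orange.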

In computer graphics, the decimal RGB code (0,0,0) is equivalent to black (LED not on). An RGB code with the same value for each primary color (for example, 48, 48, 48) gives a shade of grey on a screen. However, this project deals with emitted light rather than displayed pixels. With that same RGB code of (48, 48, 48), the LED strip will glow white, only dimmer.

Thus, the decimal RGB code controls the brightness of each primary color. And by adjusting these intensities, you can obtain different shades and colors.

There are two types of LED strips: analog or digital. An analog strip has all LEDs connected in parallel, so that all the LEDs mounted on the circuit board give the same color. The digital strip can control individual LED blocks, therefore allowing for more color variation. This article focuses on analog strips, because they’re cheaper and easier to use.

Analog LED strips have four connections (pins). One is the voltage supply, which in this case will be 12V. The other three pins represent each LED color—red, green, and blue. These pins set the brightness of each primary color. The circuit schematic referenced in the next section shows the connections between the LED strips, power supply and the development board.

Analog LED strips come in segments that are 10cm long and contain three LED blocks per segment. That means each section has nine LEDs (three blue, three red, and three green). These sections can be cut at the boundary, and you can solder pin headers or wires to the copper tabs of the circuit board (Figure 2).

Setting the color of a 10cm segment to white at full brightness draws approximately 60mA to 80mA. That means each color channel of a segment draws a maximum of roughly 20mA at the highest brightness.

Figure 2 Each section of the LED strip can be cut at the boundary—look for the cut mark. This is useful to solder pins to the LED strip or to make it shorter.
HARDWARE

The main hardware components used in this project are listed in Table 1. The PSoC 6 Wi-Fi BT Prototyping Kit can operate on 5V or 3.3V (in this case, 5V). The LED strip always comes with a power supply. You can use it to supply the MCU as well, but a voltage regulator needs to be added to the circuit. Because the development board operates on 5V and the power supply delivers 12V, the 5V, 5.5A step-down voltage regulator (D36V50F5) from Pololu was a good option. The power supply connects directly to the LED strip’s 12V pin, and to the PSoC 6 through the voltage regulator.

Table 1 This is the hardware used in this project. Any analog strip that takes 12V as input and has the other three pins for each primary color will do just fine. The only limitation is the length of the strip. Too long, and the MOSFET transistors might burn. I explain this further in the article.

To control the brightness and state of each color of the LED strip, three n-channel, IRLB8721 MOSFETs were added. The n-channel MOSFET is an NMOS transistor, which has the same function as a simple, electrically controlled switch (Figure 3). It has three pins, representing the source, gate and drain. NMOS transistors are OFF when a low voltage is applied to the gate, and ON when a high voltage is applied.


Figure 3
The wiring and functionality of the NMOS transistor

By wiring the source pin to the ground, the gate pins to the MCU, and the drain pin to one of the three color pins, the electrons can flow through the NMOS. These electrons flow from the source to the drain, and the gate works as a switch. When the MCU sends a signal near 1 (or “high”), the circuit between ground and the corresponding pin of the LED strip is closed, and the LED strip lights up with that color. The brightness of each color can be controlled by sending an analog signal to the gate. The closer this signal is to 0 (or “low”), the lower the brightness will be.

Consider the length of the LED strip used. Longer strips require more current, which could fry the transistors. The IRLB8721 can handle up to 16A of continuous current—so that’s at least 750 LEDs (a 5m strip has around 300 LEDs).
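The headroom can be sanity-checked with a quick calculation. This helper is illustrative only, and assumes the worst case used above of 20mA per LED flowing through a single channel:

```c
#include <assert.h>

/* Conservative estimate of how many LEDs one MOSFET channel can drive,
 * assuming a worst case of led_current_ma flowing per LED through that
 * channel. Integer division rounds down, which errs on the safe side. */
static int max_leds_per_channel(int mosfet_limit_ma, int led_current_ma)
{
    return mosfet_limit_ma / led_current_ma;
}
```

With the IRLB8721’s 16A (16,000mA) continuous rating and 20mA per LED, this gives 800 LEDs per channel, so the figure of “at least 750” quoted above leaves a comfortable margin.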

If the microphone module of the PSoC6 board isn’t working to the user’s expectations, it can be replaced by another digital PDM microphone. In this project, the microphone module was removed from the board and replaced with the XENSIV MEMS microphone. The microphone was mounted on a flex board and connected to an adapter board (KIT_IM72D128V01_FLEX). Figure 4 shows the wiring of all the hardware.

Figure 4
Here is the circuit schematic used in this project. It follows from the list of components shown in Table 1.
PICOVOICE

As I mentioned, Picovoice is an end-to-end platform that adds voice recognition to any project or device. It trains custom keywords and phrases, and detects them in conversations and recordings. All voice data is processed on-device, so the user’s privacy is secured.

Picovoice can be used in a great variety of platforms and devices, including Android, Raspberry Pi, PSoC 6, and macOS. The voice-detection accuracy of devices with Picovoice rivals that of the most popular conversational AI engines, such as Alexa, Google Dialogflow, IBM Watson, and Microsoft LUIS.

The prototype board CY8CPROTO-062-4343W is perfect for implementing voice AI. The MCU can be used for developing audio applications using the I2S and PDM-to-PCM blocks—but be warned that not all development boards of the PSoC 6 family offer this functionality.

At this time, Picovoice offers five different platforms for development: Leopard Speech-to-Text, Octopus Speech-to-Index, Porcupine Wake Word, Rhino Speech-to-Intent, and Cobra Voice Activity Detection. Only Porcupine and Rhino are covered in this article. For interested readers, the other three are discussed on the Picovoice website.

Porcupine Wake Word trains a model consisting of a catchphrase, which is used to wake the device from power-saving mode. It is similar to Alexa, but the user can set the watchword. Rhino Speech-to-Intent implements a list of instructions. When the user’s phrase is similar to the expressions trained, it returns the user’s intent.

The device should always be in power-saving mode when not detecting any speaking, and should process the user’s phrases only when the speech is addressed to the device. By default, the Porcupine engine will always be active and will listen for the wake word. Because the Rhino engine is far more complex, it will remain in sleep mode and will only activate when the user says the wake word. After the user is done with the device, both engines will return to their default states. The implementation of Porcupine and Rhino on both ends (console and MCU) are detailed later in this article.
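The hand-off between the two engines can be sketched as a tiny state machine. The state and event names below are illustrative only, not part of the Picovoice SDK:

```c
#include <assert.h>

/* Minimal sketch of the two-engine flow: Porcupine listens for the wake
 * word; once it fires, Rhino processes the next phrase, and control then
 * returns to Porcupine. */
typedef enum { LISTEN_WAKE_WORD, LISTEN_COMMAND } voice_state_t;
typedef enum { EVT_WAKE_WORD, EVT_INFERENCE_DONE } voice_event_t;

static voice_state_t next_state(voice_state_t s, voice_event_t e)
{
    if (s == LISTEN_WAKE_WORD && e == EVT_WAKE_WORD)
        return LISTEN_COMMAND;      /* wake word heard: hand over to Rhino */
    if (s == LISTEN_COMMAND && e == EVT_INFERENCE_DONE)
        return LISTEN_WAKE_WORD;    /* intent handled: back to Porcupine */
    return s;                       /* ignore out-of-order events */
}
```

In the real firmware these transitions happen inside the wake-word and inference callbacks, which are covered in the Code Development section.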

Porcupine Wake Word: Three steps are used to implement Porcupine or Rhino. The first step is to work with the Picovoice console, which requires a free sign-up. In this step, the model given—consisting of sentences, words and phrases—is trained. The second step is to download the trained model and implement it in the device’s code. In the last step, the developer can create functions and algorithms that are called in the Porcupine and/or Rhino callback functions. These callback functions are executed every time the user says the Porcupine Wake Word or a Rhino expression. In that manner, the control interface of the LED strip can be connected to the voice AI.

Porcupine Wake Word can be implemented easily. After logging on to the Picovoice console, three options show up: Porcupine, Rhino, and Leopard & Cheetah. Select the first platform to begin developing the wake-up word. The user interface has two sections (Figure 5). In the first, you create a new phrase, which you can type in. You can also select the language and the platform. The second shows the models already trained, which are available to download.

Figure 5
The Picovoice console user interface when developing a Porcupine model

So, to train a new word, type a catchphrase or a word that is easy to say. After selecting the desired language, the platform needs to be specified. Use ARM Cortex-M, with the board PSoC 6.

The last thing that needs to be done is to get the board’s UID. To do this, connect the prototype board to a computer. Upload the code found in the resources section of the article to the MCU using ModusToolbox. This will be explained in detail in the Code Development section of this article. After the code has been successfully uploaded to the PSoC 6, open a serial terminal connected to the board, as shown in Figure 6. To see which port the board is connected to, Windows users can access Device Manager—Ports. Also, set the connection speed to 115,200Bd. After a successful connection, press the board’s reset button, and the first thing that should pop up is the UID.

Figure 6 
This is the PuTTY user interface. Note that only the serial line needs to be changed in order to connect to the PSoC 6.

After typing the wake-up phrase, language and platform, the model can be saved and trained. A zip file is then available to be downloaded. Only the header file (file extension .h) will be used in the project’s code.

Rhino Speech-to-Intent: Creating a Rhino Speech-to-Intent model is similar to Porcupine. However, compared to Porcupine, Rhino is based on a network of sentences and expressions. For example, in the case of a coffee maker, one of the phrases it should recognize is the size of the cup. A suitable phrase would be, “Make a big cup of coffee.” When Picovoice hears that exact sentence, it will understand the intent. But, for a good model, it’s better to have some variety in the expressions. For example, using synonyms or changing the sentence structure are ways to improve the model trained.


A specific example associated with this project is changing the colors of the LED strip. To create an intent for an action, use the “new intent” box shown on the left side of the Rhino section of the console (Figure 7). Create one with the name “changeColor.” After the intent’s creation, add one or more expressions (type a phrase in the box on the right side of Figure 7). Expressions are the phrases mentioned earlier in the coffee machine example. A variety of phrases is needed to achieve a good model.

Figure 7
This is the Picovoice console user interface when developing a Rhino model.

Let’s start with the phrase, “Switch the color to red.” The first problem that occurs is that the model should recognize a list of colors, not just red. Create a “slot” (left side of Figure 7), “color,” and then type in some color names in the elements section beneath the slot name. This eliminates the problem of having to rewrite each sentence with a different color. To implement the slot created in the expression, change the phrase to “Switch the color to $color:color.” Now, if the spoken color is found in the elements of the slot, it will be recognized.

To test your model, use the microphone button on the right column. It saves the current model, and then it waits for the user’s input. Once you speak, it will return a result indicating whether the model recognized the phrase or not, as shown in Figure 8.

Figure 8
Here is an example of the JSON returned when the user wants to change the color of the lights to blue.

Note that when the AI recognizes an expression, if any slots have been declared, it will also return the value of the slot. Thus, if the user says, “Change the color to blue,” the result will consist of the intent’s name (“changeColor”) and the slot’s value (“blue”). This makes it easier to develop the code based on these slots.

Another way to declare a variable that takes a list of literals is with macros. Macros work exactly like slots, but they won’t be returned in the recognition result, even if they are present in the expression. A macro represents a list of possible phrases, which is handy when you want to cover multiple variations or synonyms of a word without typing multiple expressions. For example, the macro “turn” can take a list of the synonyms of the verb “turn” in the context of changing the color or brightness of the lights: “change,” “modify,” “switch,” and so on. To use it in the intent’s expressions, type “@macro_name.” So, “@turn the color to $color:color” would allow the model to also recognize the sentence “Switch the color to yellow.”

Some functions offered by the Picovoice console can be implemented into expressions, such as by making words optional. For example, “@turn (the) lights to $color:color” allows for sentence recognition with or without the word “the.” The final implementation of the changing colors intent is shown in Figure 7. I recommend reading the tutorials on the Picovoice website to see what else can be done [3]. There are also several video tutorials on YouTube.

CODE DEVELOPMENT

ModusToolbox Software supports a wide range of Infineon MCU devices, including the MCU used in this project. It is built on a flexible build environment, so users can code directly in their preferred IDE—Eclipse IDE (recommended), Microsoft Visual Studio Code, or IAR Embedded Workbench. There are tutorials, including from Infineon, on how to set up your IDE with ModusToolbox. One Infineon document is a great place to start [1].

After setting up the IDE, the user can create a new application using the project creator tool. An application can include one or more projects, depending on the number of CPUs required. A project contains a Makefile, a README, the libraries folder, the build folder, and the main source file, where the user will do most of the coding.

To import the code used in this project, download the application from the resources section of this article. Using ModusToolbox, import the downloaded folder as an application, build it and upload it to the PSoC 6 board (with KitProg3_MiniProg4 as debugger/programmer). If the wiring has been done according to the circuit schematic (Figure 4), the LED strip should light up.

To interact with the project, first take a look at the main.c source file. There is an uninitialized string named ACCESS_KEY. Replace the empty string with the access key obtained from your account on the Picovoice console.

I will focus mainly on the most important parts of the code. However, the ReadMe file offers detailed information about the application.

The CY8CPROTO-062-4343W is a prototype board, meaning that there is not a lot of support and help available if the user encounters a problem. The same goes for the template applications offered by ModusToolbox. Most PSoC 6 development boards equipped with I2S and PDM-to-PCM blocks have a Picovoice Porcupine and Rhino template application, but the prototype board does not. To work around this, create such an application (“Picovoice E2E Voice Recognition Demo”) for a different board (for example, the CY8CKIT-062S2-43012), and then change the active Board Support Package (BSP) from that board to the desired prototype. This can be done from the Library Manager in the Tools section. After updating, some errors might occur because of peripherals the old board had that the prototype lacks, so further adjustments are required. In this case, the CY8CKIT-062S2-43012 uses an RGB LED that is not present on the prototype board; deleting the code lines that reference the LED removes the error. Of course, the code used for the CY8CPROTO-062-4343W is always available.

The current application uses the Rhino context shown in Figure 9, with “Hey Bot Reboot” as the Porcupine Wake Word. If the models of Rhino or Porcupine are different than the ones used, they need to be implemented into the code. In the Picovoice console, download the created model (which needs to be trained first). It will download as a ZIP folder containing several files. The only file that will be used is the header file (with the extension .h). Open that file and copy only the array of hexadecimal values. In the ModusToolbox application, open the pv_params.h header file (which is inside the “include” folder), and replace the arrays with the new ones. These arrays should keep the same names as the older ones. The first array is for the wake-up word, and the second is for the context. These are the steps to connect the model created in the console and the project. The ReadMe file also covers this in detail.

Figure 9 This is the Rhino context used in the current project. It offers expressions to change the color of the lights, change the brightness, and turn the LEDs on and off.

At this stage, following the template Picovoice application, the board can recognize what the user says. It even shows the full context of each voice command over a UART serial connection.

One option to access the message sent by the PSoC is PuTTY, a free, open-source terminal emulator, serial console, and network file transfer application. 

When greeted by the PuTTY user interface, shown in Figure 6, change the connection type to serial. Connect the PSoC 6 board to the computer. To see which port the board is connected to, open the Device Manager (only on Windows). In the Ports section, you can find the COM port to which the board is connected. Enter the name of the port (for example, COM12) in the serial line input. Set the speed to 115,200Bd, which corresponds to the default baud rate of the PSoC 6. After that, the set-up is ready, and the console can be started. For each wake word and phrase intent, the board will write the context to the terminal, similar to Figure 8.

Note that the static void inference_callback(pv_inference_t *inference) method shows how the intent and the slot-value pairs can be accessed. This function runs after static void wake_word_callback(void), which is called each time the user says the wake-up word. inference->is_understood is a Boolean value that is true if the voice AI understood and recognized the user’s phrase. The string inference->intent holds the name of the intent declared in the Picovoice console (for example, “changeColor”). If any slots are present in the recognized expression, they are also returned as slot-value pairs. If there are several such pairs (inference->num_slots is greater than zero), a specific one can be accessed by index: inference->slots[index] and inference->values[index]. For the first pair, the index is zero, and it increases by one with each slot-value pair. These calls will be used to process the user’s input to control the state, brightness, and color of the LED strip.
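The slot lookup can be sketched as follows. The struct here is a hypothetical mirror of only the pv_inference_t fields discussed above; the real definition ships with the Picovoice SDK:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical mirror of the pv_inference_t fields used in the text. */
typedef struct {
    bool is_understood;
    const char *intent;
    int num_slots;
    const char **slots;
    const char **values;
} inference_t;

/* Return the value paired with a given slot name, or NULL if the
 * phrase was not understood or the slot is absent. */
static const char *slot_value(const inference_t *inf, const char *slot)
{
    if (!inf->is_understood)
        return NULL;
    for (int i = 0; i < inf->num_slots; i++) {
        if (strcmp(inf->slots[i], slot) == 0)
            return inf->values[i];
    }
    return NULL;
}
```

For the phrase “Change the color to blue,” the lookup for the slot “color” would return “blue,” which the callback then maps to PWM settings.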

Additional classes and functions are required to integrate the LED strip into the project. These functions will control the analog output signals of each color pin. First, an array is required to add context to each color name (returned by the expressions which use the slot “color”). For each color name, the array will take three numbers between 0 and 100, which represent the RGB values. The implementation can be seen in the “colour_led.h” header file.
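The table might look like the following sketch. The names and values here are examples only; the actual list lives in the colour_led.h header file:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Example colour table: each entry maps a slot value to three
 * duty-cycle percentages (0-100) for the red, green, and blue channels. */
typedef struct { const char *name; int r, g, b; } colour_t;

static const colour_t colours[] = {
    { "red",    100,   0,   0 },
    { "green",    0, 100,   0 },
    { "blue",     0,   0, 100 },
    { "yellow", 100, 100,   0 },
    { "white",  100, 100, 100 },
};

static const colour_t *find_colour(const char *name)
{
    for (size_t i = 0; i < sizeof colours / sizeof colours[0]; i++)
        if (strcmp(colours[i].name, name) == 0)
            return &colours[i];
    return NULL;   /* unknown colour: caller keeps the current setting */
}
```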

Pulse-width modulation (PWM) is a method of reducing the average electrical power delivered by an electrical signal. The prototype board can output an effectively analog signal by setting a frequency and a duty cycle. The duty cycle (Figure 10) is the fraction of each period (the interval set by the frequency) during which the output signal is HIGH. In this case, the duty cycle corresponds to the brightness, and the frequency is 1MHz.
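The relationship between duty cycle and on-time can be checked with a one-line calculation; this is a sketch assuming the 1MHz signal used here, so a 1,000ns period:

```c
#include <assert.h>

/* Time the output spends HIGH in each PWM period. At 1 MHz the period
 * is 1000 ns, so a 25% duty cycle means 250 ns HIGH per period. */
static unsigned high_time_ns(unsigned period_ns, unsigned duty_percent)
{
    return period_ns * duty_percent / 100;
}
```

The LEDs switch far faster than the eye can follow, so the eye averages the pulses into a steady brightness proportional to the duty cycle.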

Figure 10
Here is an illustration of how the electrical signal looks with different duty cycles. When the duty cycle is near 50%, it resembles a "square" wave.

The led.c and led.h files implement the functionalities of three PWM pins (P9_1, P9_2 and P9_4, as shown in the circuit schematics in Figure 4). To control the brightness of the LED strip, we will send an analog signal to the gate of the n-channel MOSFETs. 

The methods implemented in the led.h header file will be used to control the state and the brightness of the LEDs, using a cyhal_pwm_t variable for each color. For example, to set the color to red, the function cy_rslt_t cyhal_pwm_set_duty_cycle(cyhal_pwm_t *obj, float duty_cycle, uint32_t frequencyhal_hz) can be used to turn off the green and blue LEDs and set the red LED to its highest brightness. More explanation about each function and functionality is given in the code.

In main.c, everything declared so far comes together in the static void inference_callback(pv_inference_t *inference) callback method. Several conditions need to be implemented, depending on the user’s input. These conditions must cover every case: switching the state of the lights, controlling the brightness, and modifying the color. For each recognized phrase, the PWM pins corresponding to each LED color are updated with the correct duty cycle, which is the same as the brightness.
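One plausible way to sketch that update is to scale each channel’s base percentage (from the colour table) by the global brightness before handing it to the PWM driver; this helper is an assumption about the structure, not the project’s exact code:

```c
#include <assert.h>

/* Scale a channel's base duty percentage (0-100, from the colour table)
 * by the global brightness (0-100). The result is the duty cycle that
 * would be applied to that channel's PWM pin. */
static float channel_duty(int base_percent, int brightness_percent)
{
    return (float)base_percent * (float)brightness_percent / 100.0f;
}
```

The returned value would then be passed to cyhal_pwm_set_duty_cycle() for the matching pin, alongside the 1MHz frequency.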

If something doesn’t work in the code, it is recommended to use the UART serial connection and the serial console to debug it. For example, if the brightness is not set as intended, add a printf call in the inference_callback method that writes the current brightness to the console: printf("currentBrightness: %u \r\n", currentBrightness);. This makes it easy to see which part of the code isn’t working as intended, so it can be adjusted.

IMPROVEMENTS

The best part of a DIY project is that everyone can add to or change any piece of the software or hardware. For example, the code can be improved by developing an animation or a pattern for the LED strip, alternating the color after a given time delay. Sensors can be added to turn on the lights whenever a person is nearby. The PSoC 6 MCU also offers Bluetooth and Wi-Fi functionalities, so it is possible to control the lights with a smartphone as an alternative form of user interaction.

The first thing I would personally change about this project is the look of the hardware. Right now, all the components just sit on the floor, mounted on a breadboard. I would design a PCB and a 3D-printed case to make it look more professional. The main idea is that everyone can take part in DIY projects—the only limitations are the imagination and resources of the developer. 

SOURCES
Infineon PSoC 6, https://www.infineon.com/cms/en/product/microcontroller/32-bit-psoc-arm-cortex-microcontroller/psoc-6-32-bit-arm-cortex-m4-mcu/
Picovoice, https://picovoice.ai/
Picovoice Console, https://console.picovoice.ai

REFERENCES
[1] Getting Started with PSoC 6 MCU on Modus Toolbox Software, https://www.infineon.com/dgdl/Infineon-AN228571_Getting_started_with_PSoC_6_MCU_on_ModusToolbox_software-ApplicationNotes-v06_00-EN.pdf?fileId=8ac78c8c7cdc391c017d0d36de1f66d1
[2] RGB Color Codes Chart, https://www.rapidtables.com/web/color/RGB_Color.html
[3] Picovoice Tutorials for Porcupine and Rhino, https://picovoice.ai/docs/

Code and Supporting Files

PUBLISHED IN CIRCUIT CELLAR MAGAZINE • FEBRUARY 2023 #391


Alexandru Dumitrache is a student at the Tudor Vianu National College of Computer Science in Bucharest, Romania. His passions are technology, robotics, and electronics, and he is keen on developing projects (with Arduino and Raspberry Pi) that can change people’s lives for the better. His focus is on changing society by reducing pollution, new ideas for utility robots, and the implementation of eco energy and intelligent systems. In the future, using his PCB and CAD design skills, as well as his programming knowledge in C/C++ and Python, he wants to add AI into projects that develop beneficial devices. Contact him at anova9347@gmail.com

