Superhero Sight
In this project article, learn how two Cornell students designed a system to overlay images from a visible light camera and an infrared camera. They use software running on a PIC32 MCU to interface the two types of cameras. The MCU does the computation to create the overlaid images, and displays them on an LCD screen.
How do you get superhero vision? Those who want to extend their vision into the infrared spectrum can spend a lifetime trying to tame nature. Alternatively, they can use a visible and infrared camera system to overlay visible and infrared light data, thereby enhancing their awareness of the world. To make this possible, we designed a software program that uses the Microchip PIC32 microcontroller (MCU) to interface a visible light color camera and an infrared light camera. This system displays three images on a TFT LCD: the infrared image, the color image and an overlay of infrared and color images. The overlaid image saturates color for hot objects, and gray-scales color for cold objects.
This project was performed in two stages. The first stage—performed by both of us—interfaces an infrared camera and displays an image on an LCD screen [1]. The second stage, conducted by one of us (DE), overlays a color image on the infrared image. This article focuses on the second stage of the project.
HARDWARE IMPLEMENTATION
Integrating the two cameras and the LCD requires the use of every pin on the PIC32. The 60×80 pixel FLIR Systems infrared camera has a 50-degree horizontal field of view (HFOV), uses the SPI protocol to transmit pixel data and connects to three pins on the PIC32. The Omnivision Technologies OV7670 is a 640×480 VGA visible light color camera, reconfigured here to 160×120 QQVGA. It has a 25-degree HFOV, provides pixel data timing lines [2], transmits pixel data in parallel, is reconfigured over I2C and connects to 13 pins on the PIC32. The 240×320 pixel LCD uses the SPI protocol to receive pixel data, connects to three pins on the PIC32 and shares its SCK with the infrared camera. Figure 1 is the schematic of the system setup.
With no pins to spare on the PIC32, managing hardware connections is a balancing act. PORTB (pins RB0-RB15) is used to read the parallel 8-bit transmissions from the color camera. To keep reads from PORTB simple in software, it is desirable to use pins RB0-RB7 for data lines D0-D7. However, on this PIC32 package the RB6 pin position serves as a voltage bus (VBUS), so there is no RB6 to use. Instead, D6 is connected to RB10.
Figure 2 shows the system setup. All the wires from both cameras and the LCD connect to a solderless breadboard and to the pins of a PCB [3] that holds the PIC32. The two cameras capture the same scene and are held in position with electrical tape and two clamp stands. The scene in Figure 2 shows a hot coffee cup in front of a piece of wood. The clamp ends are adjusted until the cameras’ images are aligned. The cameras are adjusted by hand once along the horizontal axis, and have to be adjusted along the vertical axis every time the scene changes. The focal points of both cameras need to match up on the surface of the object as shown.
FIGURE 2 – Setup and aligning pixels
SOFTWARE IMPLEMENTATION
The C program on the PIC32 interfaces and overlays the color and infrared cameras’ images, and displays them on the LCD. This program is summarized in the finite state machine in Figure 3. In the “IR Capture frame” state, the infrared image frame is captured and saved to memory. Afterward, in the “IR Display frame” state, the infrared image is displayed on the top left of the LCD. The infrared data capture is explained later in the Infrared Camera section.
Next, in the “Color capture row” state, the Line count variable is set to 0, and the uppermost row of the color camera image is captured and saved. Only one row at a time can be captured and saved, because a full color image is too large to fit in this PIC32 package’s RAM. The row is then displayed on the top right of the LCD in the “Color display row” state. Next, in the “Color & IR Overlay display row” state, the line of the infrared image that corresponds to the same location in the color image is used to grayscale color pixels for cold surfaces and saturate color pixels for hot surfaces.
The program then checks whether Line count has reached the end of the frame at row 120; if not, Line count is incremented by one, and the program loops back to the “Color capture row” state to capture the next row of data. The program loops through the “Color capture row,” “Color display row” and “Color & IR Overlay display row” states 120 times to process one frame from the OV7670 camera. When Line count reaches 120, the program loops back to the “IR Capture frame” state. The color camera data capture is explained in the Color Camera section, and the displaying of the infrared and color images on the LCD is explained later in the Displaying to LCD section.
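This flow can be summarized in C. The sketch below is a minimal reconstruction of the state machine in Figure 3; the helper names (captureIrFrame, displayIrFrame, displayColorRow and displayOverlayRow) are placeholders rather than the project's actual functions, with readLine from Listing 1 standing in for the row capture.

// Minimal sketch of the Figure 3 state machine (helper names are placeholders)
typedef enum { IR_CAPTURE, IR_DISPLAY, COLOR_ROWS } state_t;

void captureIrFrame(void);       // read 60 VoSPI packets into memory
void displayIrFrame(void);       // draw IR frame on the top left of the LCD
void readLine(void);             // Listing 1: capture one color row
void displayColorRow(int row);   // draw the row on the top right
void displayOverlayRow(int row); // draw the overlaid row on the bottom right

void mainLoop(void) {
    state_t state = IR_CAPTURE;
    int line_count = 0;
    while (1) {
        switch (state) {
        case IR_CAPTURE:                    // "IR Capture frame"
            captureIrFrame();
            state = IR_DISPLAY;
            break;
        case IR_DISPLAY:                    // "IR Display frame"
            displayIrFrame();
            line_count = 0;
            state = COLOR_ROWS;
            break;
        case COLOR_ROWS:                    // the three per-row states
            readLine();                     // "Color capture row"
            displayColorRow(line_count);    // "Color display row"
            displayOverlayRow(line_count);  // "Color & IR Overlay display row"
            if (++line_count == 120)        // 120 rows per QQVGA frame
                state = IR_CAPTURE;
            break;
        }
    }
}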
COLOR CAMERA
The color camera for this project is reconfigured through the camera’s SCCB interface. The SCCB protocol works similarly to I2C and allows the camera to communicate with the I2C module on the PIC32 using pins RB8 and RB9. The camera is reconfigured from VGA to QQVGA, since the image will use one-fourth of the LCD. The data output is configured to RGB565 format, a 16-bit value with the first five bits reserved for red, the middle six bits for green and the last five bits for blue. Many other settings are configured as well, which can be seen in the code.
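As an illustration, one register write over the PIC32's I2C module might look like the sketch below. This is a hedged example, not the project's code: the register values shown are representative of common OV7670 RGB565/QQVGA init sequences, and the plib calls and I2C baud rate value are assumptions for a 40 MHz peripheral clock.

#include <plib.h>                 // Microchip PIC32 peripheral library

#define OV7670_WRITE_ADDR 0x42    // SCCB write address (7-bit address 0x21)

// One SCCB register write, driven by the PIC32 I2C1 module
void sccbWrite(unsigned char reg, unsigned char val) {
    StartI2C1();                        IdleI2C1();
    MasterWriteI2C1(OV7670_WRITE_ADDR); IdleI2C1();
    MasterWriteI2C1(reg);               IdleI2C1();
    MasterWriteI2C1(val);               IdleI2C1();
    StopI2C1();                         IdleI2C1();
}

void configureCamera(void) {
    OpenI2C1(I2C_ON, 0x0C2);      // ~100 kHz with a 40 MHz PBCLK (assumed value)
    sccbWrite(0x12, 0x04);        // COM7: select RGB output
    sccbWrite(0x40, 0xD0);        // COM15: RGB565, full 00-FF output range
    sccbWrite(0x3E, 0x1A);        // COM14: enable scaling, divide PCLK for QQVGA
    sccbWrite(0x72, 0x22);        // SCALING_DCWCTR: downsample by 4 on both axes
    sccbWrite(0x73, 0xF2);        // SCALING_PCLK_DIV: divide DSP clock by 4
}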
The following describes the method of communication between the color camera and the PIC32 [2]. The color camera requires an input clock on pin XCLK, which controls the processing speed of the camera. An 8 MHz clock signal is provided from pin RPA2 on the PIC32, using one of its output compare modules and a timer. The output compare module is hardware that generates pulses in response to a selected time base event. The Video Timing Generator is the camera module that takes in the external input clock on XCLK and drives four timing pins: STROBE, HREF, PCLK and VSYNC. Each timing pin has a prescaler, set during configuration of the camera via I2C, that divides the input clock by a power of two. These prescalers control the frequency of each of the timing block outputs.
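For reference, generating the 8 MHz XCLK with an output compare module in PWM mode might look like this. This is a sketch under stated assumptions: a 40 MHz peripheral bus clock, plib calls, and an OC module that the device's PPS output table allows on RPA2 (verify the group and function for your exact part).

// Sketch: 8 MHz XCLK from an output compare module in PWM mode,
// assuming a 40 MHz peripheral bus clock (period = 5 timer ticks)
void startCameraClock(void) {
    PPSOutput(3, RPA2, OC4);          // route OC to RPA2; check the PPS table for your device
    OpenTimer2(T2_ON | T2_PS_1_1, 4); // PR2 = 4 -> 40 MHz / 5 = 8 MHz
    OpenOC4(OC_ON | OC_TIMER2_SRC | OC_PWM_FAULT_PIN_DISABLE, 2, 2); // ~40% duty, fine for XCLK
}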
The VSYNC output pin is active low and drops when a new frame is ready to transmit. This provides a method for the code to determine when to start reading a new frame. With an 8 MHz clock to XCLK, the frequency of VSYNC is 10 Hz, as measured on an oscilloscope. During color camera configuration via I2C, PCLK is set to toggle only when data are transmitting, so the first drop in PCLK is the signal for a new row of data. The color camera in QQVGA mode outputs 120 rows with 160 pixels in each row. Each pixel’s data are 2-bytes long, so there are 320 parallel transmissions per row.
When the VSYNC line drops low, the falling edge of the pixel clock (PCLK) is used to determine when to read the 8-bit transmission on pins D0-D7, which are connected to RB0-RB5, RB7 and RB10 on PORTB of the PIC32. Remember that RB6 is a voltage bus, so D6 connects to RB10. The code in Listing 1 shows how the D6 bit, read on RB10, is right-shifted into place and merged with the rest of the byte before being stored as a char value. Capturing the 8-bit pixel data transmission as soon as it is valid is critical, because PORTB is read twice per transmitted byte.
// Reading parallel byte transmissions
#ifndef OV7670_READ_PIXEL_BYTE
// PORTB bits 0-5, 7 and 10. Bit 10 is D6: right-shift it by four into bit 6.
#define OV7670_READ_PIXEL_BYTE(b) \
b = (PORTB & 0b10111111) | ((PORTB & 0b10000000000) >> 4)
#endif

// Read one line of the color image frame
#define NOP asm("nop");
void readLine() {
    int i;
    pixelBuffer.writeBufferPadding = 0;
    for (i = 0; i < 320; i++) {   // 160 pixels x 2 bytes per row
        waitForPixelClockLow();
        NOP;                      // wait for data to be valid and stable
        OV7670_READ_PIXEL_BYTE(pixelBuffer.writeBuffer[i]);
        NOP;                      // wait for clock to return to logic high
    }
}
LISTING 1 – Reading one row of parallel pixel transmissions from PORTB
If the data are read too early, they are unstable and cause some reads to capture only data from D6. This results in red speckles in the image. If the data are read too late, D6 is always zero. The color camera datasheet specs show that when the pixel clock drops, there is a 5 ns delay until the output data are valid, and then a 15 ns delay until the output data are stable. One NOP after the pixel clock drops yields a 25 ns delay, which is sufficient to read stable data [2]. The NOP after the read of PORTB provides enough time for the pixel clock to reach logic high again.
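For completeness, the synchronization helpers referenced by Listing 1 could be implemented as below, using plib port-read macros. The pin choices here are assumptions (VSYNC and PCLK shown on RB13 and RB12); the project's actual wiring may differ.

#define VSYNC_PIN BIT_13   // assumed pin for VSYNC
#define PCLK_PIN  BIT_12   // assumed pin for PCLK

// Block until VSYNC falls: the start of a new frame
static inline void waitForNewFrame(void) {
    while (!mPORTBReadBits(VSYNC_PIN)) ;   // let any current low pulse pass
    while ( mPORTBReadBits(VSYNC_PIN)) ;   // falling edge: new frame
}

// Block until the next falling edge of the pixel clock
static inline void waitForPixelClockLow(void) {
    while (!mPORTBReadBits(PCLK_PIN)) ;    // wait for PCLK to go high
    while ( mPORTBReadBits(PCLK_PIN)) ;    // then catch the falling edge
}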
INFRARED CAMERA
The FLIR Systems infrared camera is mounted on a FLIR Lepton breakout board, which processes the frame data and outputs them serially. The default RAW14 output format, a 14-bit value that represents heat intensity, is used. The data from the thermal camera are transferred to the PIC32 through 1-byte SPI transactions. The infrared image has 60 rows of 80 pixels, and each pixel is represented by 2 bytes. Each image frame consists of 60 packets, with each packet containing 164 bytes of data: the first 4 bytes are used for identification and error detection, and the remaining 160 bytes carry the row’s 80 pixels [1], as shown in Figure 4.
Unlike the color camera, the Lepton transmits thermal data only when its VoSPI [4] interface is synchronized. To synchronize the Lepton with the PIC32, the chip select for the camera is set low and the PIC32 waits for the camera to transmit a valid packet. The Lepton transmits discard packets until it is ready to begin transmitting data from the frame. In a discard packet, the 2-byte ID field takes the hexadecimal form xFxx (the low nibble of the first byte is 0xF). When the Lepton finishes processing a frame of data, the first valid packet is transmitted to the PIC32 and is stored in a char array named frame_data. The code breaks out of the waiting loop, reads the remaining 59 valid packets and stores them in the same array. After all 60 packets are read and stored in memory, chip select (CS) is de-asserted to provide time for processing the image data.
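A hedged sketch of this synchronization loop is shown below. readSpiByte(), _ir_cs_low() and _ir_cs_high() are placeholders for the project's blocking SPI transfer and chip-select helpers; the packet layout follows the Lepton datasheet.

#define PACKET_BYTES      164
#define PACKETS_PER_FRAME 60

unsigned char frame_data[PACKETS_PER_FRAME * PACKET_BYTES];

unsigned char readSpiByte(void);  // placeholder: blocking 1-byte SPI transfer
void _ir_cs_low(void);            // placeholder: assert Lepton chip select
void _ir_cs_high(void);           // placeholder: de-assert Lepton chip select

void captureIrFrame(void) {
    unsigned char *p = frame_data;
    int packet, i;
    _ir_cs_low();
    do {                                 // discard packets until a valid one arrives
        for (i = 0; i < PACKET_BYTES; i++)
            p[i] = readSpiByte();
    } while ((p[0] & 0x0F) == 0x0F);     // ID of form xFxx marks a discard packet
    for (packet = 1; packet < PACKETS_PER_FRAME; packet++) {
        p = frame_data + packet * PACKET_BYTES;
        for (i = 0; i < PACKET_BYTES; i++)
            p[i] = readSpiByte();
    }
    _ir_cs_high();                       // free the bus while the frame is processed
}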
DISPLAYING TO LCD
The LCD screen diagram is shown in Figure 5. The infrared data are displayed on the upper left of the LCD screen via SPI from the PIC32. To map the 60×80 pixel frame to a 120×160 pixel array, each pixel in the frame is mapped to a 2×2 pixel square on the LCD. For each pixel, the first byte is concatenated with the second byte, and the concatenated value is left-shifted by two to increase the brightness.
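A sketch of this mapping is shown below, assuming the frame_data layout from the capture sketch and a tft_fillRect call in the style of the Adafruit-derived TFT library common to this class of projects; treat both as assumptions rather than the project's exact code.

extern unsigned char frame_data[];   // IR frame: 60 packets of 164 bytes
void tft_fillRect(short x, short y, short w, short h, unsigned short color);

// Rebuild one RAW14 pixel and brighten it with a left shift by two
unsigned short irPixelValue(int row, int col) {
    int base = row * 164 + 4 + col * 2;  // skip the 4-byte packet header
    unsigned short v = ((unsigned short)frame_data[base] << 8) | frame_data[base + 1];
    return v << 2;                       // increase brightness
}

// Draw the 60x80 frame as 120x160: each IR pixel becomes a 2x2 LCD square
void displayIrFrame(void) {
    int row, col;
    for (row = 0; row < 60; row++)
        for (col = 0; col < 80; col++)
            tft_fillRect(col * 2, row * 2, 2, 2, irPixelValue(row, col));
}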
The QQVGA color image is displayed to the upper right of the LCD screen. Displaying to the LCD consists of defining an address window on the LCD. The tft_setAddrWindow function in Listing 2 is used to define the address window width of 160 pixels and the row number. Each pixel is represented by a 16-bit word. SPI Channel 1 transmits the processed pixel data to the LCD. The SPI transmit buffer holds up to 128 bits of data, but each row in the image has 160 pixels. To prevent overflowing the buffer, we performed tests to ensure that there was an appropriate lag time between pixel transmissions.
// Infrared and color image overlay
// x, y: window position; w: row width in pixels (from the enclosing function)
unsigned short color;   // 16-bit RGB565 pixel
int R, G, B;            // color components, each on a 5-bit scale
int index = 0;
tft_setAddrWindow(x+120, y, x+120, y+w-1);
_dc_high();
_cs_low();
while (index++ < w) {
    color = getPixelByte(index*2)<<8 | getPixelByte(index*2+1);
    R = (0b1111100000000000 & color)>>11;
    G = (0b0000011111100000 & color)>>6;   // G shifted to a 5-bit value
    B =  0b0000000000011111 & color;
    // Check the corresponding infrared pixel
    if (ColdPixel(x/4 + 15, (index/4)*2 + 39)) {
        // Infrared pixel is cold: grayscale the color pixel,
        // using the largest RGB565 component as the gray level
        if (R > G && R > B) B = G = R;
        else if (G > B) B = R = G;
        else R = G = B;
    }
    else {
        // Infrared pixel is hot: saturate by doubling the largest
        // component, clamped at 31 so it cannot spill into the next field
        if (R > G && R > B) R *= 2;
        else if (G > B) G *= 2;
        else B *= 2;
        if (R > 31) R = 31;
        if (G > 31) G = 31;
        if (B > 31) B = 31;
    }
    color = R<<11 | G<<6 | B;   // reassemble RGB565 (G shifted back up by 6)
    SPI1BUF = color;
}
_cs_high();
LISTING 2 – Infrared and color image overlay
You are now able to see objects in visible and infrared light! To enhance your vision further, the code uses the infrared pixel data not only to saturate color for hot locations, but also to gray-scale color for cold locations in the color image (Listing 2). First, the 16-bit RGB565 value is broken down into its red, green and blue components. The program then determines whether the color pixel corresponds to a hot or a cold infrared pixel by comparing the infrared pixel value with a fixed threshold. If the value is below the threshold, two of the RGB components are set equal to the largest component, producing a gray-scaled pixel. If the value is above the threshold, the largest RGB component is doubled to make the pixel appear saturated.
Processing is required to map between the color and infrared images, since the color camera has an HFOV of 25 degrees and the infrared camera has an HFOV of 50 degrees. The color image therefore covers roughly the center quarter of the infrared scene, so the center region of the 60×80 infrared frame is mapped to the entire 120×160 pixel color image. Subsequently, the RGB components are concatenated back into a 16-bit value and displayed on the bottom right of the LCD. Note that in both cases, the 6-bit green component is right-shifted by one to be on the same scale as the 5-bit red and blue components. When the RGB components are concatenated back together, the green value is left-shifted by one to rescale it for RGB565 format.
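The ColdPixel test in Listing 2 can be reconstructed roughly as below. The threshold constant is illustrative (the real value was tuned by experiment), and the arguments passed by Listing 2 already implement the center-crop mapping, with the /4 scale factor and the row and byte offsets selecting the middle of the infrared frame.

#define IR_THRESHOLD 8400   // illustrative RAW14 cutoff; the real value was tuned

extern unsigned char frame_data[];   // IR frame: 60 packets of 164 bytes

// Returns nonzero when the addressed infrared pixel is below the threshold
int ColdPixel(int row, int byteIndex) {
    int base = row * 164 + byteIndex;   // byteIndex selects within the packet
    unsigned short v = ((unsigned short)frame_data[base] << 8) | frame_data[base + 1];
    return v < IR_THRESHOLD;
}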
RESULTS AND CONCLUSIONS
What is it like to have superhuman vision? Take a look at Figure 6 and Figure 7. Your new ability to see infrared is shown on the top left, the standard color image on the top right and the overlaid image on the bottom right of each figure. In Figure 6, the pixels for the hot coffee cup are saturated, and the remainder are gray-scaled. In Figure 7, the pixels for a heated rock are saturated, and the cold, colorful background is gray-scaled.

The most difficult part of this project was working within the color camera’s image data timing constraints. One constraint is the 637.5 µs window between the end of one row’s transmission and the beginning of the next. This leaves little time to display the row twice on the LCD, first in its original format and again after some image processing, and it prevented the use of more advanced processing methods involving floating-point computation. Another timing constraint was capturing each byte transmission with two reads of PORTB, as described earlier in the Color Camera section.
Three improvements could enhance this project. First, the fixed threshold for determining hot and cold objects could be replaced with a dynamic threshold that adjusts based on the mean temperature in the frame. This would give better results as the environmental temperature varies; a sketch of the idea follows. Second, the alignment of the cameras is a tedious process that could be automated with two servos and a camera mounting assembly with 2 degrees of freedom. The color camera would remain fixed, and the infrared camera’s angle would be adjusted by the two servos. To align the cameras, the program would perform edge detection on the infrared and color images and rotate the servos until the edges match up. Third, this program updates the three images at 5 fps when running the PIC32 at 40 MHz. The update rate could be increased with a faster processor, but it is limited to 8 fps by the 8 Hz maximum frame rate of the infrared camera.
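This is a minimal sketch of the dynamic threshold, assuming a hypothetical irPixelRaw helper that returns one unshifted RAW14 pixel and an offset constant that would be chosen by experiment; neither is from the project code.

unsigned short irPixelRaw(int row, int col);   // placeholder: one RAW14 pixel, unshifted

// Dynamic threshold: a fixed margin above the frame's mean temperature
unsigned short dynamicThreshold(void) {
    unsigned long sum = 0;
    int row, col;
    for (row = 0; row < 60; row++)
        for (col = 0; col < 80; col++)
            sum += irPixelRaw(row, col);
    return (unsigned short)(sum / (60UL * 80UL)) + 200;  // +200 counts is illustrative
}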
For detailed article references and additional resources go to:
www.circuitcellar.com/article-materials
References [1] through [4] as marked in the article can be found there.
RESOURCES
FLIR Systems | www.flir.com
Microchip Technology | www.microchip.com
Omnivision Technologies | www.ovt.com
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • JANUARY 2019 #342
Daniel Edens is an undergraduate student studying Electrical and Computer Engineering at Cornell University. His primary interests are embedded systems and hardware. He can be reached at de229@cornell.edu.
Elise Weir graduated from Cornell University with a degree in Electrical and Computer Engineering and a minor in Computer Science. She is currently working at The Charles Stark Draper Laboratory Incorporated in Cambridge, MA as an embedded software engineer. She can be reached at enw23@cornell.edu.