Graphene Enables Broad Spectrum Sensor Development

Team successfully marries a CMOS IC with graphene, resulting in a camera able to image visible and infrared light simultaneously.

By Wisse Hettinga

Researchers at ICFO—the Institute of Photonic Sciences, located in Catalonia, Spain—have developed a broad-spectrum sensor by depositing graphene with colloidal quantum dots onto a standard, off-the-shelf read-out integrated circuit. It is the first time scientists and engineers have integrated a CMOS circuit with graphene to create a camera capable of imaging visible and infrared light at the same time. Circuit Cellar visited ICFO and talked with Stijn Goossens, one of the lead researchers of the study.

Stijn Goossens is a Research Engineer at ICFO, the Institute of Photonic Sciences.

HETTINGA: What is ICFO?

GOOSSENS: ICFO is a research institute devoted to the science and technologies of light. We carry out frontier research in fundamental science in optics and photonics as well as applied research with the objective of developing products that can be brought to market. The institute is based in Castelldefels, in the metropolitan area of Barcelona (Catalonia region of Spain).

HETTINGA: Over the last three to four years, you researched how to combine graphene and CMOS. What was the outcome?

GOOSSENS: We’ve been able to create a sensor that is capable of imaging both visible and infrared light at the same time. A sensor like this can be very useful for many applications—automotive solutions and food inspection, to name a few. Moreover, being able to image infrared light can enable night vision features in a smartphone.

HETTINGA: For your research, you are using a standard, off-the-shelf CMOS read-out circuit, correct?

GOOSSENS: Indeed. We’re using a standard CMOS circuit. These circuits have all the electronics needed to read the charges induced in the graphene, the row and column selects, and the drivers that make the signal available for further processing by a computer or smartphone. For us, it’s a very easy platform to start from. We can deposit the graphene and quantum dot layer on top of the CMOS sensor (Photo 1).

PHOTO 1 The CMOS image sensor serves as the base for the graphene layer.

HETTINGA: What is the shortcoming of normal sensors that can be overcome by using graphene?

GOOSSENS: Normal CMOS imaging sensors only work with visible light. Our solution can image visible and infrared light. We use the CMOS circuit for reading the signal from the graphene and quantum dot sensors; it acts more like an ‘infrastructure’ solution. Graphene is a 2D material with very special properties: it is strong, flexible, almost 100 percent transparent, and a very good conductor.

HETTINGA: How does the graphene sensor work?

GOOSSENS: There are different layers (Figure 1). There’s a layer of colloidal quantum dots. A quantum dot is a nano-sized semiconductor. Due to its small size, the optical and electronic properties differ from larger size particles. The quantum dots turn the photons they receive into an electric charge. This electric charge is then transferred to the graphene layer that acts like a highly sensitive charge sensor. With the CMOS circuit, we then read the change in resistance of the graphene and multiplex the signal from the different pixels on one output line.

FIGURE 1 The graphene sensor comprises a layer of colloidal quantum dots, a graphene layer, and a CMOS circuitry layer.
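The read-out chain Goossens describes (photons converted to charge by the quantum dots, charge shifting the graphene's resistance, and the CMOS circuit multiplexing each pixel onto one output line) can be sketched in a few lines of Python. This is purely an illustrative model, not ICFO's actual firmware; the constants, function names, and linear response are invented for the example.

```python
# Illustrative model of the graphene/quantum-dot read-out chain described
# above. All numbers are hypothetical; real devices are not this simple.

R_BASE = 2000.0          # assumed baseline graphene channel resistance (ohms)
CHARGE_PER_PHOTON = 0.8  # assumed quantum-dot photon-to-charge conversion
GAUGE = 5.0              # assumed resistance shift per unit transferred charge

def pixel_resistance(photon_count: int) -> float:
    """Resistance of one graphene pixel after quantum-dot charge transfer."""
    charge = photon_count * CHARGE_PER_PHOTON
    return R_BASE + GAUGE * charge

def read_out(frame: list[list[int]]) -> list[float]:
    """Multiplex every pixel's resistance onto a single output stream,
    scanning row by row the way the CMOS row/column selects would."""
    output = []
    for row in frame:
        for photons in row:
            output.append(pixel_resistance(photons))
    return output

# Usage: a 2x2 "frame" of photon counts (visible and infrared combined)
frame = [[0, 100],
         [50, 200]]
print(read_out(frame))  # resistance rises with illumination
```

The key point the sketch captures is that the CMOS circuit never senses light itself: it only measures a resistance change per pixel and serializes those readings, which is why an off-the-shelf read-out IC can serve as the base.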

HETTINGA: What hurdles did you have to overcome in the development?

GOOSSENS: You always encounter difficulties during the course of a research study, and sometimes you’re close to giving up. However, we knew it would work. And with the right team, the right technologies, and the lab at ICFO, we have shown it is indeed possible. The biggest problem was the mismatch between the graphene layer and the CMOS layer, which prevented an efficient read-out of the graphene’s resistance, but we were able to solve that problem.

HETTINGA: What is the next step in the research?

GOOSSENS: Together with the European Graphene Flagship project, we are developing a production machine that will allow us to start a more automated production process for these graphene sensors.

HETTINGA: Where will we see graphene-based cameras?

GOOSSENS: One of the most interesting applications will be related to self-driving cars. A self-driving car needs clear vision to function efficiently. If you want to be able to drive a car through a foggy night or under extreme weather conditions, you’ll definitely need an infrared camera to see what’s ahead of you. Today’s infrared cameras are expensive. With our newly developed image sensor, you will have a very effective, low-cost solution. Another application will be in the food inspection area. When fruit ripens, its infrared light absorption changes. With our camera, you can measure this change in absorption, which will allow you to identify which fruits to buy in the supermarket. We expect this technology to be integrated into smartphone cameras in the near future.

ICFO | www.icfo.eu

This article appeared in the September issue (#326) of Circuit Cellar.

New High-Performance VC Z Series Cameras

Vision Components recently announced the availability of its new intelligent camera series, VC Z. The embedded systems offer real-time image processing suitable for demanding high-speed and line-scan applications. All models are equipped with a Xilinx Zynq module, which combines a dual-core ARM Cortex-A9 running at 866 MHz with an integrated FPGA.

The new cameras are based on the board camera series VCSBC nano Z. With a footprint of 40 × 65 mm, these compact systems are especially easy to integrate into machines and plants. They are optionally available with one or two remote sensor heads and are thus suitable for stereo applications. You can choose between two enclosed camera types: the VC nano Z, which has housing dimensions of 80 × 45 × 20 mm, and the VC pro Z, which measures 90 × 58 × 36 mm and can be fitted with a lens and integrated LED illumination. The new operating system, VC Linux, ensures optimal interaction between hardware and software.

Source: Vision Components

Member Profile: Richard Lord

Richard Lord is an engineer, author, and photographer whose article series on an innovative digital camera controller project will begin in the October issue of Circuit Cellar. Lord’s Photo-Pal design is an electronic flash-trigger camera controller built around a Microchip Technology PIC16F873. It features four modes of operation: triggered shutter, triggered flash, multiple flash, and time lapse. Now you too can take sound-triggered photos.

The Photo-Pal enables Richard to take amazing photos like this and capture high-speed action.

  • Member Name: Richard H. Lord
  • Location: Durham, NH, United States
  • Education: BS Electrical Engineering, 1969; MS Biomedical Engineering, 1971
  • Occupation: Retired electronics hardware design engineer
  • Member Status: Richard said he has subscribed to Circuit Cellar for at least 14 years, maybe longer.
  • Technical Interests: Richard’s interests include photography, model railroading, and microcontroller projects.
  • Most Recent Embedded Tech-Related Purchase: Richard’s most recent purchase was a Microchip Technology dsPIC30F4013 digital signal controller.
  • Current Project: Richard is working on a Microchip PIC16F886-based multipurpose front panel interface controller.
  • Thoughts on the Future of Embedded Technology: “With the ready availability of prepackaged 32-bit processor modules, it’s easy to forget there are many applications where 8-bit controllers are more appropriate,” Richard said. He added that he gets a lot of enjoyment from the challenge of working within the capabilities and constraints of smaller microcontrollers.