The Future of Monolithically Integrated LED Arrays

LEDs are ubiquitous in our electronic lives. They are widely used in notification lighting, flash photography, and light bulbs, to name a few. For displays, LEDs have been commercialized as backlights in televisions and projectors. However, their use in image formation has been limited.

A prototype emissive LED display chip is shown. The chip includes an emissive compass pattern ready to embed into new applications.

The developing arena of monolithically integrated LED arrays, which involves fabricating millions of LEDs with corresponding transistors on a single chip, provides many new applications not possible with current technologies, as the LEDs can simultaneously act as the backlight and the image source.

The common method of creating images is to first generate light (using LEDs) and then filter that light using a spatial light modulator. The modulator could be an LCD, a liquid-crystal-on-silicon (LCoS) panel, or a digital micromirror device (DMD), as used in Digital Light Processing (DLP) projectors. The filtering process wastes a significant fraction of the light in these systems, despite the brightness available from LEDs. For example, a typical LCD uses only 1% to 5% of the light generated.

Two pieces are essential to a display: a light source and a light controller. In most display technologies, the light source and light control functionalities are served by two separate components (e.g., an LED backlight and an LCD). However, in emissive displays, both functionalities are combined into a single component, enabling light to be directly controlled without the inherent inefficiencies and losses associated with filtering. Because each light-emitting pixel is individually controlled, light can be generated and emitted exactly where and when needed.

Emissive displays have been developed in all sizes. Very-large-format “Times Square” and stadium displays are powered by large arrays of individual conventional LEDs, while newer organic LED (OLED) materials are found in televisions, mobile phones, and other smaller-format applications. However, there is still a void. Emissive “Times Square” displays cannot be scaled to small sizes, and emissive OLEDs do not have the brightness needed for outdoor environments and newer envisioned applications. An emissive display with high brightness but in a micro format is required for applications such as embedded cell phone projectors or displays on see-through glasses.

We know that optimization by the entire LED industry has made LEDs the brightest controllable light source available. We also know that a display requires a light source and a method of controlling the light. So, why not make an array of LEDs and control individual LEDs with a matching array of transistors?

The marrying of LED materials (light source) to transistors (light control) has long been researched. There are three approaches to this problem: fabricate the LEDs and transistors separately, then bond them together; fabricate transistors first, then integrate LEDs on top; and fabricate LEDs first, then integrate transistors on top. The first method is not monolithic. Two fabricated chips are electrically and mechanically bonded, limiting integration density and thus final display resolutions. The second method, starting with transistors and then growing LEDs, offers some advantages in monolithic (single-wafer) processing, but growth of high-quality, high-efficiency LEDs on transistors has proven difficult.

My start-up company, Lumiode (www.lumiode.com), is developing the third method, starting with optimized LEDs and then fabricating silicon transistors on top. This leverages existing LED materials for efficient light output. It also requires careful fabrication of the integrated transistor layer so as not to damage the underlying LED structures. The core technology uses a laser method to provide extremely localized high temperatures to the silicon while preventing thermal damage to the LED. This overcomes the typical process incompatibilities that have previously held back development of monolithically integrated LED arrays. In the end, there is an array of LEDs (light source) and corresponding transistors to control each individual LED (light control), which can reach the brightness and density requirements of future microdisplays.

Regardless of the specific integration method employed, a monolithically integrated LED and transistor structure creates a new range of applications requiring higher efficiency and brightness. The brightness available from integrated LED arrays can enable projection on truly see-through glass, even in outdoor daylight environments. The efficiency of an emissive display enables extended battery lifetimes and device portability. Perhaps we can soon achieve the types of displays dreamed up in movies.

3-D Integration Impact and Challenges

People want transistors—lots of them. It pretty much doesn’t matter what shape they’re in, how small they are, or how fast they operate. Simply said, the more the merrier. Diversity is also good. The more different the transistors, the more useful and interesting the product. And without any question, the cheaper the transistors, the better. So the issue is, how best to achieve as many diverse transistors at the lowest cost possible.

One approach is more chips. Placing a lot of chips close together on a small board will produce a system with many transistors. Another way is more transistors per chip. Keep on scaling the technology to provide more transistors in one or a few chips.

The third option combines these two approaches. Let’s have many chips with many transistors and end up with a huge number of transistors. However, there is a limit to this approach. It’s well understood that scaling is coming to an end. And placing multiple chips on a board can have a terrible effect on a system’s overall speed and power dissipation.

But there is an elegant and intellectually simple solution. Rather than connecting these chips horizontally across a board, connect them vertically, providing N times more transistors, where N is the number of chips stacked one above another. Such vertical, 3-D integration was first broached by William Shockley, co-inventor of the transistor at Bell Labs in 1947. Shockley described the 3-D integration concept in a 1958 patent, which was followed by Merlin Smith and Emanuel Stern’s 1967 patent outlining how best to produce the holes between layers. We now call these interlayer holes through-silicon vias (TSVs). Technology is still catching up to these 3-D concepts.

Three-dimensional integration offers exciting advantages. For example, the vertical distance between layers is much shorter than the horizontal dimensions across a chip. Three-dimensional circuits, therefore, operate faster and dissipate less power than their 2-D equivalent. A 3-D system is shockingly small, permitting it to fit much more conveniently into a tiny space. Think small portable electronics (e.g., credit cards).

But the most exciting advantage of 3-D integration isn’t the small form factor, higher speed, or lower power; it’s the natural ability to support many disparate technologies and functions as one integrated, heterogeneous system. Even better, each chip layer can be optimized for a particular function and technology, since the individual chips can each be developed in isolation. No more trading off different capabilities to combine disparate technologies on the same chip. Now we can use the absolute best technology for each layer and a completely different and optimized technology for a different layer. This approach enables all kinds of novel applications that until now couldn’t have been conceived or would have been cost-prohibitive.

Imagine placing a microprocessor plane below a MEMS-accelerometer plane below an analog plane (with ADCs) below a temperature sensor, all below a video imager (which has to be at the top to “see”). All of these planes fit together into a tiny (smaller than a fingernail) silicon cube while operating at higher speeds and dissipating lower power.

There are technical issues, including: how to best make the TSVs, how to construct the system architecture to fully exploit the system’s 3-D nature, how to deliver power across these multiple planes, how to synchronize this system to best move data around the cube, how to manage system design complexity, and much more.

Two issues rise to the top. The first is power dissipation (specifically, power density). When many transistors switch at a high rate within a tiny volume, the temperature rises, which can impair performance and reliability. I believe this issue, albeit difficult, is technically solvable and simply will require a lot of good engineering.

The real problem is cost. How do we mature this technology quickly enough to drive the costs down to a point where volume commercial applications are possible? Many companies are close to producing tangible 3-D-based products. Cubes of highly dense memory will likely be the first serious and cost-effective product. Early versions are already available. Three-dimensional integration will soon be here in a serious way with what will be a fascinating assortment of all kinds of exciting new products. You won’t have to wait too long.

Amplifier Classes from A to H

Engineers and audiophiles have one thing in common when it comes to amplifiers. They want a design that provides a strong balance between performance, efficiency, and cost.

If you are an engineer interested in choosing or designing the amplifier best suited to your needs, you’ll find columnist Robert Lacoste’s article in Circuit Cellar’s December issue helpful. His article provides a comprehensive look at the characteristics, strengths, and weaknesses of different amplifier classes so you can select the best one for your application.

The article, logically enough, proceeds from Class A through Class H (but only touches on the more nebulous Class T, which appears to be a developer’s custom-made creation).

“Theory is easy, but difficulties arise when you actually want to design a real-world amplifier,” Lacoste says. “What are your particular choices for its final amplifying stage?”

The following article excerpts, in part, answer that question. (For fuller guidance, download Circuit Cellar’s December issue.)

CLASS A
The first and simplest solution would be to use a single transistor in linear mode (see Figure 1)… Basically, the transistor must be biased to have a collector voltage close to VCC/2 when no signal is applied to the input. This enables the output signal to swing either above or below this quiescent voltage depending on the input voltage polarity….

Figure 1—A Class-A amplifier can be built around a simple transistor. The transistor must be biased so it stays in the linear operating region (i.e., the transistor is always conducting).

This solution’s advantages are numerous: simplicity, no need for a bipolar power supply, and excellent linearity as long as the output voltage doesn’t come too close to the power rails. This solution is considered the perfect reference for audio applications. But there is a serious downside.

Because a continuous current flows through its collector even when no input signal is present, efficiency is poor. In fact, a basic Class-A amplifier’s efficiency is barely more than 30%…

CLASS B
How can you improve an amplifier’s efficiency? You want to avoid a continuous current flowing in the output transistors as much as possible.

Class-B amplifiers use a pair of complementary transistors in a push-pull configuration (see Figure 2). The transistors are biased in such a way that one of the transistors conducts when the input signal is positive and the other conducts when it is negative. Both transistors never conduct at the same time, so there are very few losses. The current always goes to the load…

A Class-B amplifier’s efficiency is much better than a Class-A amplifier’s. This is great, but there is a downside, right? The answer is unfortunately yes. The downside is called crossover distortion…

Figure 2—Class-B amplifiers are usually built around a pair of complementary transistors (at left). Each transistor conducts 50% of the time. This minimizes power losses, but at the expense of crossover distortion at each zero crossing (at right).

CLASS AB
As its name indicates, Class-AB amplifiers are midway between Class A and Class B. Have a look at the Class-B schematic shown in Figure 2. If you slightly change the transistors’ biasing, a small current will continuously flow through the transistors when no input is present. This current is not as high as what’s needed for a Class-A amplifier, but it ensures that there is always a small current flowing around the zero crossing.

Only one transistor conducts when the input signal has a high enough voltage (positive or negative), but both conduct around 0 V. Therefore, a Class-AB amplifier’s efficiency is better than a Class-A amplifier’s but worse than a Class-B amplifier’s. Moreover, a Class-AB amplifier’s linearity is better than a Class-B amplifier’s but not as good as a Class-A amplifier’s.

These characteristics make Class-AB amplifiers a good choice for most low-cost designs…

CLASS C
There isn’t any Class-C audio amplifier. Why? Because a Class-C amplifier is highly nonlinear. How can it be of any use?

An RF signal is composed of a high-frequency carrier with some modulation. The resulting signal is often quite narrow in terms of frequency range. Moreover, a large class of RF modulations doesn’t modify the carrier signal’s amplitude.

For example, with a frequency or a phase modulation, the carrier peak-to-peak voltage is always stable. In such a case, it is possible to use a nonlinear amplifier and a simple band-pass filter to recover the signal!

A Class-C amplifier can have good efficiency, as there are no lossy resistors anywhere. Efficiency goes up to 60% or even 70%, which is good for high-frequency designs. Moreover, only one transistor is required, which is a key cost reduction when using expensive RF transistors. So there is a high probability that your garage door remote control is equipped with a Class-C RF amplifier.

CLASS D
Class D is currently the best solution for any low-cost, high-power, low-frequency amplifier—particularly for audio applications. Figure 3 shows its simple concept.

First, a PWM encoder is used to convert the input signal from analog to a one-bit digital format. This could be easily accomplished with a sawtooth generator and a voltage comparator, as shown in Figure 3.

This section’s output is a digital signal with a duty cycle proportional to the input’s voltage. If the input signal comes from a digital source (e.g., a CD player, a digital radio, a computer audio board, etc.) then there is no need to use an analog signal anywhere. In that case, the PWM signal can be directly generated in the digital domain, avoiding any quality loss….
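
As a rough illustration of the analog-to-PWM step described above, here is a minimal Python sketch (my own, not from Lacoste’s article): the input tone is compared against a sawtooth carrier, and the comparator output’s duty cycle tracks the input level. The sample rate, tone, and carrier frequency are arbitrary values chosen for the demonstration; a low-pass output filter would then recover the amplified signal.

```python
import numpy as np

# Illustrative values only (not from the article)
fs = 1_000_000      # simulation sample rate, Hz
f_in = 1_000        # audio-band input tone, Hz
f_pwm = 100_000     # sawtooth carrier frequency, Hz (well above f_in)

t = np.arange(0, 0.002, 1 / fs)
audio = 0.8 * np.sin(2 * np.pi * f_in * t)            # input signal in [-1, 1]
sawtooth = 2 * (t * f_pwm - np.floor(t * f_pwm)) - 1  # carrier ramp in [-1, 1]

# Comparator: a 1-bit PWM stream whose duty cycle follows the input voltage
pwm = (audio > sawtooth).astype(float)

# Averaged over carrier periods, the duty cycle tracks the input level
print(f"mean duty cycle: {pwm.mean():.2f}")  # ~0.5 for a zero-mean input
```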

As you may have guessed, Class-D amplifiers aren’t free from difficulties. First, as for any sampling architecture, the PWM frequency must be significantly higher than the input signal’s highest frequency to avoid aliasing…. The second concern with Class-D amplifiers is related to electromagnetic compatibility (EMC)…

Figure 3—A Class-D amplifier is a type of digital amplifier (at left). The comparator’s output is a PWM signal, which is amplified by a pair of low-loss digital switches. All the magic happens in the output filter (at right).

CLASS E AND F
Remember that Class C is devoted to RF amplifiers, using a transistor conducting only during a part of the signal period and a filter. Class E is an improvement to this scheme, enabling even greater efficiencies up to 80% to 90%. How?

Remember that with a Class-C amplifier, the losses only occur in the output transistor. This is because the other parts are capacitors and inductors, which theoretically do not dissipate any power.

Because power is voltage multiplied by current, the power dissipated in the transistor would be null if either the voltage or the current was null. This is what Class-E amplifiers try to do: ensure that the output transistor never has a simultaneously high voltage across its terminals and a high current going through it….
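
Stated as an equation (my paraphrase of the reasoning above, not a formula from the article), the instantaneous dissipation in the switching transistor is the product of the voltage across it and the current through it, so it vanishes whenever either factor is held near zero:

```latex
P_{\text{transistor}}(t) = v_{CE}(t)\, i_C(t) \approx 0
\quad \text{whenever } v_{CE}(t) \approx 0 \ \text{or} \ i_C(t) \approx 0
```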

CLASS G AND CLASS H
Class G and Class H are quests for improved efficiency over the classic Class-AB amplifier. Both work on the power supply section. The idea is simple: for high output power, a high-voltage power supply is needed; for low output power, this high voltage implies higher losses in the output stage.

What about reducing the supply voltage when the required output power is low enough? This scheme is clever, especially for audio applications. Most of the time, music requires only a couple of watts even if far more power is needed during the fortissimo. I agree this may not be the case for some teenagers’ music, but this is the concept.

Class G achieves this improvement by using more than one stable power rail, usually two. Figure 4 shows you the concept.

Figure 4—A Class-G amplifier uses two pairs of power supply rails. One supply rail is used when the output signal has a low power (blue); the other supply rail enters into action for high powers (red). Distortion could appear at the crossover.

PWM Controller Uses BJTs to Reduce Costs

Dialog iW1679 Digital PWM Controller

The iW1679 digital PWM controller drives 10-W power bipolar junction transistor (BJT) switches to reduce costs in 5-V/2-A smartphone adapters and chargers. The controller enables designers to replace field-effect transistors (FETs) with lower-cost BJTs to provide lower standby power and higher light-load and active average efficiency in consumer electronic products.

The iW1679 uses Dialog’s adaptive multimode PWM/PFM control to dynamically change the BJT switching frequency. This helps the system improve light-load efficiency, power consumption, and electromagnetic interference (EMI). The iW1679 provides 83% active average efficiency and maintains high efficiency at loads as light as 10%. It achieves less than 30-mW no-load standby power with a fast standby recovery time. The controller meets stringent global energy-efficiency standards, including US Department of Energy, European Certificate of Conformity (CoC) version 5, and Energy Star External Power Supplies (EPS) 2.0 requirements.

The iW1679 offers a user-configurable, four-level cable drop compensation option. It comes in a standard, low-cost, eight-lead SOIC package and provides protection from fault conditions including output short-circuit, output overvoltage, output overcurrent, and overtemperature.

The iW1679 costs $0.29 each in 1,000-unit quantities.

Dialog Semiconductor
www.iwatt.com

The Future of Very Large-Scale Integration (VLSI) Technology

The historical growth of IC computing power has profoundly changed the way we create, process, communicate, and store information. The engine of this phenomenal growth is the ability to shrink transistor dimensions every few years. This trend, known as Moore’s law, has continued for the past 50 years. The predicted demise of Moore’s law has been repeatedly proven wrong thanks to technological breakthroughs (e.g., optical resolution enhancement techniques, high-k metal gates, multi-gate transistors, fully depleted ultra-thin body technology, and 3-D wafer stacking). However, it is projected that in one or two decades, transistor dimensions will reach a point where it will become uneconomical to shrink them any further, which will eventually result in the end of the CMOS scaling roadmap. This essay discusses the potential and limitations of several post-CMOS candidates currently being pursued by the device community.

Steep transistors: The ability to scale a transistor’s supply voltage is determined by the minimum voltage required to switch the device between an on- and an off-state. The sub-threshold slope (SS) is the measure used to indicate this property: a smaller SS means the transistor can be turned on using a smaller supply voltage while meeting the same off current. For MOSFETs, the SS has to be greater than ln(10) × kT/q, where k is the Boltzmann constant, T is the absolute temperature, and q is the electron charge. This fundamental constraint arises from the thermionic nature of the MOSFET conduction mechanism and leads to a fundamental power/performance tradeoff, which could be overcome if SS values significantly lower than the theoretical 60-mV/decade limit could be achieved. Many device types have been proposed that could produce steep SS values, including tunneling field-effect transistors (TFETs), nanoelectromechanical system (NEMS) devices, ferroelectric-gate FETs, and impact-ionization MOSFETs. Several recent papers have reported experimental observation of SS values in TFETs as low as 40 mV/decade at room temperature. These so-called “steep” devices’ main limitations are their low mobility, asymmetric drive current, bias-dependent SS, and larger statistical variations in comparison to traditional MOSFETs.
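
As a quick numerical check (my arithmetic, not part of the original essay), the thermionic limit quoted above works out to roughly 60 mV per decade of drain current at room temperature:

```latex
SS_{\min} = \ln(10)\,\frac{kT}{q}
          \approx 2.303 \times 25.9\ \text{mV}
          \approx 60\ \text{mV/decade}
          \qquad (T = 300\ \text{K})
```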

Spin devices: Spintronics is a technology that utilizes nanomagnets’ spin direction as the state variable. Spintronics has unique advantages over CMOS, including nonvolatility, lower device count, and the potential for non-Boolean computing architectures. Spintronic devices’ nonvolatility enables instant processor wake-up and power-down that could dramatically reduce static power consumption. Furthermore, it can enable novel processor-in-memory or logic-in-memory architectures that are not possible with silicon technology. Although still in its infancy, research in spintronics has been gaining momentum over the past decade, as these devices could potentially overcome the power bottleneck of CMOS scaling by offering a completely new computing paradigm. In recent years, progress has been made toward demonstration of various post-CMOS spintronic devices, including all-spin logic, spin wave devices, and domain wall magnets for logic applications, as well as spin-transfer torque magnetoresistive RAM (STT-MRAM) and spin-Hall torque (SHT) MRAM for memory applications. However, for spintronics technology to become a viable post-CMOS device platform, researchers must find ways to eliminate the transistors required to drive the clock and power supply signals; otherwise, the performance will always be limited by CMOS technology. Other remaining challenges for spintronic devices include their relatively high active power, short interconnect distance, and complex fabrication process.

Flexible electronics: Distributed large-area (cm²-to-m²) electronic systems based on flexible thin-film transistor (TFT) technology are drawing much attention due to unique properties such as mechanical conformability, low-temperature processability, large-area coverage, and low fabrication costs. Various forms of flexible TFTs can either enable applications that were not achievable using traditional silicon-based technology or surpass them in terms of cost per area. Flexible electronics cannot match the performance of silicon-based ICs due to their low carrier mobility. Instead, this technology is meant to complement them by enabling distributed sensor systems over a large area with moderate performance (less than 1 MHz). Development of inkjet and roll-to-roll printing techniques for flexible TFTs is underway for low-cost manufacturing, making product-level implementations feasible. Despite these encouraging new developments, the low mobility and high sensitivity to processing parameters present major fabrication challenges for realizing flexible electronic systems.

CMOS scaling is coming to an end, but no single technology has emerged as a clear successor to silicon. The urgent need for post-CMOS alternatives will continue to drive high-risk, high-payoff research on novel device technologies. Replicating silicon’s success might sound like a pipe dream. But with the world’s best and brightest minds at work, we have reasons to be optimistic.

Author’s Note: I’d like to acknowledge the work of PhD students Ayan Paul and Jongyeon Kim.