Neuromorphic Computing Devices and Materials
The information age runs on “data as the new oil,” and the volume of data consumed worldwide is expected to grow from around 50 zettabytes (ZB) in 2020 to around 170 ZB by 2025. The COVID-19 pandemic drove a surge in data streams to support telepresence for healthcare, shopping, work from home (WFH), and school from home (SFH). Semiconductor integrated circuits (ICs) are the engines of the information age, built using atomic-scale devices in communications, logic, and memory chips. A global market for critical materials worth approximately $50B (US dollars) supports a semiconductor manufacturing industry that produces more than $400B in annual revenues.
The greatest promise in future computing involves not just evolutions in current technologies, but revolutionary new systems featuring new electronic devices built using newly engineered materials. Artificial intelligence (AI) and machine learning (ML), responsive “edge” systems for an Internet of Things (IoT), swarms of intelligent vehicles and improved healthcare systems all need device breakthroughs to be able to efficiently process massively parallel data streams.
Without new hardware as a foundation, future intelligent systems will not be able to efficiently run software algorithms. We need new IC types so that AI for the IoT (AIoT) sends only useful information from the edge to the cloud.
Device R&D involves finding the right mix of specialty materials that can be cost-effectively controlled using fabrication processes. The leading edge of IC fabrication is already engineering devices at the atomic scale. Consider that a 7nm fin is only about 20 silicon atoms wide and requires interlayers just a few atoms thick. It could take many years and billions of dollars to develop an entirely new hardware IC foundation, yet that is the only way to enable new software to work in new systems (Figure 1).
BRAIN-INSPIRED COMPUTING
Neuromorphic computing, also known as brain-inspired computing (BIC), is expected to allow ICs to do “compute in memory” (CIM) with a thousand to a million times lower power consumption than the best digital AI chips today. The human brain consumes less energy than a light bulb while performing highly complex, real-time processing of parallel information. In both organic brains and in AI, learning occurs when changes in the strength of connections alter a pattern of loops. From an IC perspective, we can say that our brains co-locate logic and memory with asynchronous analog information transfer. For energy-efficient and scalable AI ICs, we need to mimic the brain’s architecture and computation schemes.
Leading commercial AI and ML chips today use arrays of digital logic CPUs and graphics processing units (GPUs) along with memory chips to simulate the strengths of artificial neuronal connections, resulting in large, inefficient systems. Field-programmable gate array (FPGA) ICs using digital CMOS FETs allow for neuromorphic learning simulations, yet they consume orders of magnitude too much power just idling.
Analog AI systems are one of the “Grand Challenges” in the 2020 U.S. SIA/SRC Decadal Plan for Semiconductors, and many academic, government, and industry groups are now engaged in active R&D to find the right building blocks for new CIM devices. While many different materials and devices have been explored, the common thread in all approaches is routing complex on-chip metal interconnects through a highly parallel net of synaptic paths.
Figure 2 shows an example of a “cross-bar” circuit architecture that enables the complex weighting of inputs needed for AI workloads such as pattern recognition. With proper 3D routing of metal traces, an efficient analog IC array like this can meet all of the specifications AI designers dream of. The only problem has been finding the right analog building blocks for the memory cells.
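To see why a cross-bar array is such a natural fit for AI weighting, consider the physics it exploits: each cell’s conductance encodes a synaptic weight, input voltages drive the rows, and the column wires sum the per-cell currents. A minimal sketch in Python (illustrative names and values, not from the article) of the matrix-vector multiply that emerges from Ohm’s law and Kirchhoff’s current law:

```python
# Hypothetical sketch of a resistive cross-bar's analog matrix-vector
# multiply. Each cell conductance G[i][j] (siemens) encodes a synaptic
# weight; row voltages V[i] (volts) are the inputs; each column wire
# sums its cell currents (I = G * V) per Kirchhoff's current law.

def crossbar_mvm(conductances, voltages):
    """Column currents of a cross-bar: I_j = sum_i G[i][j] * V[i]."""
    num_cols = len(conductances[0])
    currents = [0.0] * num_cols
    for i, v in enumerate(voltages):
        for j in range(num_cols):
            currents[j] += conductances[i][j] * v  # Ohm's law per cell
    return currents

# 2x2 example with illustrative conductances and input voltages
G = [[1e-6, 2e-6],
     [3e-6, 4e-6]]
V = [0.1, 0.2]
print(crossbar_mvm(G, V))  # column currents in amperes
```

The multiply-accumulate happens in the wires themselves, in parallel across every column, which is why this architecture promises such large energy savings over shuttling weights between separate logic and memory chips.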
Analog memory for neural nets must allow precise reading and writing of continuous scalar values, unlike digital memory, which only needs to handle discrete bits. An analog scalar allows a single unit cell to hold the entire synaptic weight, and with so many synapses in parallel, we need the written weights to stay in place using some form of non-volatile memory (NVM) technology.
NVM technologies include ferroelectric tunnel junction (FTJ), NAND flash, phase-change memory (PCM), and “memristive” resistive random access memory (RRAM) devices. Digital versions of all of these are in commercial production for different applications, while analog variants are among the CIM cells being explored by leading R&D labs for AI and ML today.
Most NVM materials are complex oxide, nitride, or chalcogenide alloys with 3–5 elements, and most require integration with similarly complex interlayers and electrodes. Moreover, process conditions such as pressure and temperature alter the characteristics of the needed atomic-scale cleaning, deposition, and patterning steps, so the number of possible fabrication-process combinations reaches into the millions.
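The combinatorial explosion is easy to see with a back-of-the-envelope calculation: candidate counts for each independent choice multiply together. The numbers below are purely illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope sketch of the process-space explosion.
# All candidate counts here are hypothetical, chosen only to show
# how independent choices multiply into millions of combinations.
from math import prod

choices = {
    "NVM alloy composition": 50,
    "interlayer stack": 20,
    "electrode material": 15,
    "clean/deposition/patterning recipe": 40,
    "pressure-temperature window": 10,
}
total = prod(choices.values())
print(f"{total:,} combinations")  # 6,000,000 combinations
```

Even with these modest per-category counts, exhaustively fabricating and testing every stack is out of the question, which is why co-optimized, parallel screening of materials and processes matters.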
Hardware R&D for commercial ICs at the atomic scale is like trying to find a needle in a haystack, and then building an intricate clock around that needle. Only after a new material is co-optimized with high-volume manufacturing (HVM) processes and device structures can we learn how it will perform in a fab. Figure 1 shows that “cycles of learning” in hardware R&D can take years to resolve using old sequential workflows. Materials, processes, and devices must be co-optimized to find the right materials stack for analog neuromorphic hardware.
The need for more energy-efficient automated inference in our world requires innovation to enable Sensing-to-Action and true edge intelligence for the AIoT. Analog neuromorphic ICs should consume a thousand to a million times less energy than the best digital AI chips today, even though most analog AI materials technologies are extensions of digital chip materials. Leading semiconductor fabricators are in hot pursuit of new analog materials, processes, and devices to find the foundations for future ML and AI systems.
Reference: 2020 U.S. SIA/SRC Decadal Plan for Semiconductors, https://www.src.org/about/decadal-plan
PUBLISHED IN CIRCUIT CELLAR MAGAZINE • JUNE 2021 #371
Ganesh Panaman leads the Strategic and Emerging Technologies team at Intermolecular, a business of Merck KGaA, Darmstadt, Germany. His prior experience includes process development, integration, and HVM operations across various industries.