Q&A: Teaching, CAD Research, and VLSI Innovation

Shiyan Hu is an assistant professor in the Department of Electrical and Computer Engineering at Michigan Technological University. We discussed his research in the fields of computer-aided design (CAD), very-large-scale integration (VLSI), smart home monitoring, and biochip design.—Nan Price, Associate Editor

 

Shiyan Hu

NAN: How long have you been at Michigan Technological University? What courses do you currently teach and what do you enjoy most about instructing?

SHIYAN: I have been with Michigan Tech for six years as an assistant professor. Effective September 2014, I will be an associate professor.

I have recently taught the graduate-level “Advanced Embedded System for Smart Infrastructure,” the graduate-level “Advanced Very-Large-Scale Integration (VLSI) Computer-Aided Design (CAD),” and the undergraduate-level “VLSI Design” courses.
The most exciting part about teaching is the interactions with students. For example, questions from students—although sometimes challenging—can be intriguing and it is fun to observe diversified thoughts. In addition, students taking graduate-level courses need to discuss their course projects with me. During the discussions, I can clearly see how excited they feel about their progress, which makes the teaching very enjoyable.

NAN: What “hot topics” currently interest your students?

SHIYAN: Students are very interested in embedded system designs for smart homes, including FPGA design and embedded programming for scheduling various smart home appliances to improve convenience and reduce electricity bills. I also frequently meet with students who are interested in portable or wearable electronics targeting health-care applications.


Photo 1: Shiyan and a team of students he advises developed this sensor-based smart video monitoring system.


Photo 2: A 3-D mouse developed by Shiyan and his team.

NAN: Describe your role as director of Michigan Tech’s VLSI CAD research lab.

SHIYAN: I have been advising a team of PhD and MS students who conduct research in the area of VLSI CAD in the Electrical and Computer Engineering (ECE) department. A main research focus of our lab is VLSI physical design including buffer insertion, layer assignment, routing, gate sizing, and so forth. In addition, we have developed some embedded system prototypes such as sensor-based video monitoring and a 3-D mouse (see Photos 1 and 2).

There is also growing collaboration between our lab and the power systems lab on CAD techniques for smart-grid systems. The collaboration has led to an innovative optimization technique for an automatic feeder remote terminal unit that addresses cybersecurity attacks on smart power distribution networks. Further, there is an ongoing joint project on an FPGA-based embedded system for power quality prediction.

Although most of my time as the research lab director is spent on student mentoring and project management, our lab also contributes considerably to education in our department. For example, instructional and lab materials for the undergraduate “VLSI Design” course are produced by our lab.

NAN: Tell us more about your smart home research and the technique you developed to address cybersecurity problems.

SHIYAN: My smart home research emphasizes embedded systems that handle scheduling and cybersecurity issues. Figure 1 shows a typical smart home system, which consists of various components such as household appliances, energy storage, photovoltaic (PV) arrays, and a plug-in hybrid electrical vehicle (PHEV) charger. Smart meters are installed at the customer side and connected to the smart power distribution system.

The smart meter can periodically receive updated pricing information from utility companies. The smart meter also has a scheduling unit that automatically determines the operation of each household appliance (e.g., the starting time and working power level), targeting the minimization of the monetary expense of each residential customer. This technology is called “smart home scheduling.”
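
For readers who want a feel for what such a scheduling unit computes, here is a minimal sketch (not Shiyan's actual formulation) that picks the cheapest contiguous start time for each appliance under a given hourly price profile. The appliance list, power ratings, and prices are hypothetical.

```python
# Minimal illustration of smart home scheduling: choose the cheapest
# contiguous start time for each appliance under hourly pricing.
# Appliance data and prices are hypothetical, for illustration only.

# Hourly electricity prices in $/kWh for a 24-hour horizon (hypothetical).
prices = [0.08] * 6 + [0.15] * 6 + [0.25] * 6 + [0.12] * 6

# Each appliance: (name, power in kW, run length in hours).
appliances = [
    ("dishwasher", 1.2, 2),
    ("washer", 0.8, 1),
    ("phev_charger", 3.3, 4),
]

def cheapest_window(power_kw, hours, prices):
    """Return (start_hour, cost) of the cheapest contiguous run."""
    best_start, best_cost = None, float("inf")
    for start in range(len(prices) - hours + 1):
        cost = power_kw * sum(prices[start:start + hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

for name, power_kw, hours in appliances:
    start, cost = cheapest_window(power_kw, hours, prices)
    print(f"{name}: start at hour {start}, cost ${cost:.2f}")
```

A production scheduler would also handle interruptible loads, comfort constraints, and coordination with storage and PV output; this sketch only illustrates the price-driven core of the idea.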

In the real-time pricing model, utility pricing is determined by the load while the load is influenced by the pricing, forming a feedback loop. In this process, the pricing information is transmitted from the utility to the smart meters through a communication network, which could be wireless or wired. Cyber attackers can hack access points along the transmission path or directly hack the smart meters. The impacted smart meters would then receive fake pricing information and generate undesired scheduling solutions. Because pricing responds to load, attackers can take advantage of this by scheduling their own energy-consuming tasks during hours that the manipulation makes inexpensive but that would be expensive without the cyber attack. This is an interesting topic I am working on.


Figure 1: This smart home system architecture includes HVAC and several home appliances.

NAN: Describe your VLSI research.

SHIYAN: Modern ICs and chips are ubiquitous. Their applications include smartphones, modern home appliances, PCs, and laptops, as well as the powerful servers for big data storage and processing. In VLSI and system-on-a-chip (SoC) design, the layout design (a.k.a., physical design) often involves billions of transistors and is therefore enormously complex. Handling such a complex problem requires high-performance software automation tools (i.e., physical design automation tools) to achieve design objectives within a reasonable time frame. VLSI physical design is a key part of my research area.

NAN: Are you involved in any other areas of research?

SHIYAN: I also work on microfluidic biochip design. The traditional clinical diagnosis procedure involves collecting blood from patients and then sending it to laboratories, which require space, are labor-intensive and expensive, and are sometimes inaccurate.

The invention of the lab on a chip (a.k.a., biochip) technology offers some relief. The expensive laboratory procedures can be simply performed within a small chip, which provides much higher sensitivity and detection accuracy in blood sample analysis and disease diagnosis. Some point-of-care versions of these have already become popular in the market.

A major weakness of the prevailing biochip technology is that such a chip often has very limited functionality in terms of the quantities it can measure. The reason is that currently only up to thousands of biochemical reactions can be handled in a single biochip. Since the prevailing biochips are always manually designed, this seems to be the best one can achieve. If a single biochip could simultaneously execute a few biological assays corresponding to related diseases, then the clinical diagnosis would be much less expensive and more convenient to conduct. This is also the case when utilizing biochips for biochemical research and drug discovery.

My aim for this biochip research project is to greatly increase the integration density of miniaturized components in a biochip so it can provide many more functionalities. The growing design complexity has mandated a shift from the manual design process toward a CAD process.

Basically, in the microfluidic biochip CAD methodology, those miniaturized components, which correspond to fundamental biochemical operations (e.g., mix and split), are automatically placed and routed using computer algorithms. This methodology targets minimizing the overall completion time of all biochemical operations, limiting the sizes of biochips, and improving the yield in the biochip fabrication. In fact, some results from our work were recently featured on the front cover of IEEE Transactions on Nanobioscience (Volume 13, No. 1, 2014), a premier nanobioscience and engineering journal. In the future, we will consider inserting on-chip optical sensors to provide real-time monitoring of the biological assay execution, finding possible errors during execution, and dynamically reconfiguring the biochip for error recovery.
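
As a rough illustration of the scheduling portion of such a CAD flow (module placement and droplet routing are omitted), the sketch below greedily list-schedules a tiny assay's mix/split operations onto a fixed number of on-chip mixer modules while respecting precedence constraints. The operation graph, durations, and module count are hypothetical.

```python
# Greedy list scheduling of biochemical operations onto mixer modules.
# Operation graph, durations, and module count are hypothetical.

# op -> (duration in seconds, list of predecessor ops)
ops = {
    "mix1":   (3, []),
    "mix2":   (3, []),
    "split1": (1, ["mix1"]),
    "mix3":   (4, ["split1", "mix2"]),
}
NUM_MODULES = 2

finish = {}                     # op -> finish time
free_at = [0.0] * NUM_MODULES   # next free time per module
done = set()

while len(done) < len(ops):
    # Operations whose predecessors have all finished.
    ready = [o for o in ops if o not in done
             and all(p in done for p in ops[o][1])]
    for op in sorted(ready):
        dur, preds = ops[op]
        # Start when a module is free and all predecessors are done.
        idx = min(range(NUM_MODULES), key=lambda i: free_at[i])
        start = max([free_at[idx]] + [finish[p] for p in preds])
        finish[op] = start + dur
        free_at[idx] = finish[op]
        done.add(op)

print("assay completion time:", max(finish.values()))
```

Real biochip synthesis tools solve the scheduling, placement, and droplet-routing problems jointly, typically with far more sophisticated optimization than this greedy pass.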

NAN: You’ve earned several distinctions and awards over the last few years. How do these acknowledgments help your research?

SHIYAN: Those awards and funding certainly help me a lot in pursuing the research of fascinating topics. For example, I am a 2014 recipient of the NSF CAREER award, which is widely regarded as one of the most prestigious honors for up-and-coming researchers in science and engineering.

My five-year NSF CAREER project will focus on carbon nanotube interconnect-based circuit design. In the prevailing 22-nm technology node, wires are made from copper, and such thin copper wires have very small cross-sectional areas. This results in large wire resistance and large interconnect delay. In fact, the interconnect delay has become the limiting factor for chip timing. Given the fundamental physical limits of copper wires, novel on-chip interconnect materials (e.g., carbon nanotubes and graphene nanoribbons) are more desirable because of their many salient features (e.g., superior conductivity and resilience to electromigration).
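
To make the scaling argument concrete, the standard first-order wire expressions (generic textbook formulas, not results from the CAREER project) show how a smaller cross section drives up resistance and delay:

```latex
R_{\mathrm{wire}} = \rho\,\frac{L}{W\,T}, \qquad
C_{\mathrm{wire}} \approx c_{0}\,L, \qquad
\tau_{\mathrm{Elmore}} \approx \tfrac{1}{2}\,R_{\mathrm{wire}} C_{\mathrm{wire}}
  = \frac{\rho\,c_{0}\,L^{2}}{2\,W\,T}
```

Here ρ is the metal resistivity, L the wire length, W and T the wire width and thickness, and c₀ the capacitance per unit length. Shrinking W and T at a fixed L increases the resistance, and the delay grows quadratically with length, which is why long unbuffered copper wires come to dominate chip timing.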

To judiciously integrate the benefits from both nanotechnology interconnects and copper interconnects, my NSF CAREER project will develop a groundbreaking physical layout codesign methodology for next-generation ICs. It will also develop various physical design automation techniques as well as a variation-aware codesign technique for the new methodology. This project aims to integrate the pioneering nanotechnologies into the practical circuit design and it has the potential to contribute to revolutionizing the prevailing circuit design paradigm.

NAN: Give us some background information. When did you first become interested in computer engineering?

SHIYAN: I started working on computer engineering when I entered Texas A&M University, where I conducted research with Professor Jiang Hu, a leading expert in the field of VLSI physical design. I learned a lot about VLSI CAD from him and did several interesting research projects including buffer insertion, gate sizing, design for manufacturability, and post-silicon tuning. Through his introduction, I also got the chance to collaborate with leading experts from IBM Research on an important project called “slew buffering.”

NAN: Tell us more about your work at IBM Research.

SHIYAN: As VLSI technology scales beyond the 32-nm node, more devices can fit onto a chip, which implies continued growth of design size. The increased wire delay dominance due to finer wire widths makes design closure an increasingly challenging problem.

Buffer insertion, which improves an IC’s timing performance by inserting non-inverting buffers or inverting buffers (a.k.a., inverters), has proven to be indispensable in interconnect optimization. It has been well documented that typically more than 25% of gates are buffers in IBM ASIC designs.

Together with my collaborators at IBM Research, I proposed a new slew buffering-driven dynamic programming technique. Testing with IBM ASIC designs demonstrated that our technique achieves a more than 100× speedup over the classical buffering technique while still saving buffers. As a result, the slew buffering-driven technique was implemented and deployed in the IBM physical design flow as a default option.
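
For readers unfamiliar with the concept, the sketch below illustrates slew-driven buffering in its simplest possible form on a single two-pin net: walk from the driver toward the sink and insert a buffer whenever an estimated slew limit would be violated. It is only a conceptual illustration, not the IBM algorithm, which is a dynamic programming technique over full routing trees with a buffer library; every parameter value here is hypothetical.

```python
# Much-simplified slew-driven buffering on a single two-pin net: insert a
# buffer whenever the estimated slew at the current point would exceed
# SLEW_LIMIT. The lumped-RC slew model and all values are hypothetical.

R_UNIT = 4.0         # wire resistance per mm (kOhm/mm), hypothetical
C_UNIT = 0.2         # wire capacitance per mm (pF/mm), hypothetical
BUF_OUT_SLEW = 0.02  # output slew right after a buffer (ns)
SLEW_LIMIT = 0.10    # maximum allowed slew anywhere on the net (ns)
SLEW_K = 2.2         # slew ~ K * R * C for a lumped RC segment

def buffer_positions(length_mm, step_mm=0.1):
    """Return positions (mm from the driver) where buffers are inserted."""
    positions = []
    dist_since_buf = 0.0
    x = 0.0
    while x < length_mm:
        x += step_mm
        dist_since_buf += step_mm
        # Slew degradation over the unbuffered segment behind us
        # (kOhm * pF = ns).
        seg_slew = SLEW_K * (R_UNIT * dist_since_buf) * (C_UNIT * dist_since_buf)
        if BUF_OUT_SLEW + seg_slew > SLEW_LIMIT:
            positions.append(round(x, 2))
            dist_since_buf = 0.0
    return positions

print(buffer_positions(10.0))   # buffer locations (mm) along a 10 mm net
```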

IBM researchers have witnessed that the slew buffering technique contributes to a great reduction in the turnaround time of the physical synthesis flow. In addition, more extensive deployment of buffering techniques leads to superior design quality. Such an extensive buffer deployment-based interconnect synthesis was not possible prior to this work, due to the inefficiency of the previous buffering techniques.

After the publication of this work, various extensions to the slew buffering-driven technique were developed by other experts in the field. In summer 2010, I was invited by the group again to take a visiting professorship working on physical design, which resulted in a US patent being granted.

Q&A: Marilyn Wolf, Embedded Computing Expert

Marilyn Wolf has created embedded computing techniques, co-founded two companies, and received several Institute of Electrical and Electronics Engineers (IEEE) distinctions. She is currently teaching at Georgia Institute of Technology’s School of Electrical and Computer Engineering and researching smart-energy grids.—Nan Price, Associate Editor

NAN: Do you remember your first computer engineering project?

MARILYN: My dad is an inventor. One of his stories was about using copper sewer pipe as a drum memory. In elementary school, my friend and I tried to build a computer and bought a PCB fabrication kit from RadioShack. We carefully made the switch features using masking tape and etched the board. Then we tried to solder it and found that our patterning technology outpaced our soldering technology.

NAN: You have developed many embedded computing techniques—from hardware/software co-design algorithms and real-time scheduling algorithms to distributed smart cameras and code compression. Can you provide some information about these techniques?

Marilyn Wolf

MARILYN: I was inspired to work on co-design by my boss at Bell Labs, Al Dunlop. I was working on very-large-scale integration (VLSI) CAD at the time and he brought in someone who designed consumer telephones. Those designers didn’t care a bit about our fancy VLSI because it was too expensive. They wanted help designing software for microprocessors.

Microprocessors in the 1980s were pretty small, so I started on simple problems, such as partitioning a specification into software plus a hardware accelerator. Around the turn of the millennium, we started to see some very powerful processors (e.g., the Philips Trimedia). I decided to pick up on one of my earliest interests, photography, and look at smart cameras for real-time computer vision.

That work eventually led us to form Verificon, which developed smart camera systems. We closed the company because the market for surveillance systems is very competitive.

We have started a new company, SVT Analytics, to pursue customer analytics for retail using smart camera technologies. I also continued to look at methodologies and tools for bigger software systems, yet another interest I inherited from my dad.

NAN: Tell us a little more about SVT Analytics. What services does the company provide and how does it utilize smart-camera technology?

MARILYN: We started SVT Analytics to develop customer analytics software. Our goal is to do for bricks-and-mortar retailers what web retailers can do to learn about their customers.

On the web, retailers can track the pages customers visit, how long they stay at a page, what page they visit next, and all sorts of other statistics. Retailers use that information to suggest other things to buy, for example.

Bricks-and-mortar stores know what sells but they don’t know why. Using computer vision, we can determine how long people stay in a particular area of the store, where they came from, where they go to, or whether employees are interacting with customers.

Our experience with embedded computer vision helps us develop algorithms that are accurate but also run on inexpensive platforms. Bad data leads to bad decisions, but these systems need to be inexpensive enough to be sprinkled all around the store so they can capture a lot of data.
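
As a toy example of the kind of measurement involved (not the SVT Analytics pipeline), the sketch below uses OpenCV background subtraction to estimate how long a rectangular store area stays occupied. The camera source, region of interest, and thresholds are placeholders.

```python
# Toy dwell-time estimator: background subtraction over a rectangular
# region of interest (ROI). Thresholds, ROI, and camera source are
# placeholders; this is not a production analytics pipeline.
import cv2

cap = cv2.VideoCapture(0)                     # placeholder camera source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
x0, y0, x1, y1 = 100, 100, 300, 300           # hypothetical store-area ROI
OCCUPIED_FRACTION = 0.05                      # foreground fraction => "occupied"
fps = 30.0
occupied_frames = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)              # foreground mask
    roi = fg[y0:y1, x0:x1]
    # Fraction of strongly foreground pixels inside the ROI.
    frac = (roi > 200).sum() / roi.size
    if frac > OCCUPIED_FRACTION:
        occupied_frames += 1

cap.release()
print(f"estimated dwell time: {occupied_frames / fps:.1f} s")
```

A deployable system would add tracking, perspective handling, and per-person attribution, which is where most of the algorithmic and computational effort goes.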

NAN: Can you provide a more detailed overview of the impact of IC technology on surveillance in recent years? What do you see as the most active areas for research and advancements in this field?

MARILYN: Moore’s law has advanced to the point that we can provide a huge amount of computational power on a single chip. We explored two different architectures: an FPGA accelerator with a CPU and a programmable video processor.

We were able to provide highly accurate computer vision on inexpensive platforms, about $500 per channel. Even so, we had to design our algorithms very carefully to make the best use of the compute horsepower available to us.

Computer vision can soak up as much computation as you can throw at it. Over the years, we have developed some secret sauce for reducing computational cost while maintaining sufficient accuracy.

NAN: You wrote several books, including Computers as Components: Principles of Embedded Computing System Design and Embedded Software Design and Programming of Multiprocessor System-on-Chip: Simulink and System C Case Studies. What can readers expect to gain from reading your books?

MARILYN: Computers as Components is an undergraduate text. I tried to hit the fundamentals (e.g., real-time scheduling theory, software performance analysis, and low-power computing) but wrap around real-world examples and systems.

Embedded Software Design is a research monograph that primarily came out of Katalin Popovici’s work in Ahmed Jerraya’s group. Ahmed is an old friend and collaborator.

NAN: When did you transition from engineering to teaching? What prompted this change?

MARILYN: Actually, being a professor and teaching in a classroom have surprisingly little to do with each other. I spend a lot of time funding research, writing proposals, and dealing with students.

I spent five years at Bell Labs before moving to Princeton, NJ. I thought moving to a new environment would challenge me, which is always good. And although we were very well supported at Bell Labs, ultimately we had only one customer for our ideas. At a university, you can shop around to find someone interested in what you want to do.

NAN: How long have you been at Georgia Institute of Technology’s School of Electrical and Computer Engineering? What courses do you currently teach and what do you enjoy most about instructing?

MARILYN: I recently designed a new course, Physics of Computing, which is a very different take on an introduction to computer engineering. Instead of directly focusing on logic design and computer organization, we discuss the physical basis of delay and energy consumption.

You can talk about an amazingly large number of problems involving just inverters and RC circuits. We relate these basic physical phenomena to systems. For example, we figure out why dynamic RAM (DRAM) gets bigger but not faster, then see how that has driven computer architecture as DRAM has hit the memory wall.
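
The first-order relationships such a course builds on can be summarized with a few generic textbook formulas (shown here for illustration, not taken from the course materials):

```latex
t_{pd} \approx 0.69\,R_{\mathrm{on}} C_{\mathrm{load}}, \qquad
E_{\mathrm{switch}} \approx C_{\mathrm{load}} V_{DD}^{2}, \qquad
P_{\mathrm{dyn}} = \alpha\,f\,C_{\mathrm{load}} V_{DD}^{2}
```

Here R_on is the driving transistor's on-resistance, C_load the switched capacitance, V_DD the supply voltage, α the switching activity factor, and f the clock frequency.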

NAN: As an engineering professor, you have some insight into what excites future engineers. With respect to electrical engineering and embedded design/programming, what are some “hot topics” your students are currently attracted to?

MARILYN: Embedded software—real-time, low-power—is everywhere. The more general term today is “cyber-physical systems,” which are systems that interact with the physical world. I am moving slowly into control-oriented software from signal/image processing. Closing the loop in a control system makes things very interesting.

My Georgia Tech colleague Eric Feron and I have a small project on jet engine control. His engine test room has a 6” thick blast window. You don’t get much more exciting than that.

NAN: That does sound exciting. Tell us more about the project and what you are exploring with it in terms of embedded software and closed-loop control systems.

MARILYN: Jet engine designers are under the same pressures now that have faced car engine designers for years: better fuel efficiency, lower emissions, lower maintenance cost, and lower noise. In the car world, CPU-based engine controllers were the critical factor that enabled car manufacturers to simultaneously improve fuel efficiency and reduce emissions.

Jet engines need to incorporate more sensors and more computers to use those sensors to crunch the data in real time and figure out how to control the engine. Jet engine designers are also looking at more complex engine designs with more flaps and controls to make the best use of that sensor data.

One challenge of jet engines is the high temperatures. Jet engines are so hot that some parts of the engine would melt without careful design. We need to provide more computational power while living with the restrictions of high-temperature electronics.

NAN: Your research interests include embedded computing, smart devices, VLSI systems, and biochips. What types of projects are you currently working on?

MARILYN: I’m working with Santiago Grijalva of Georgia Tech on smart-energy grids, which are really huge systems that would span entire countries or continents. I continue to work on VLSI-related topics, such as the work on error-aware computing that I pursued with Saibal Mukhopadhyay.

I also work with my friend Shuvra Bhattacharyya on architectures for signal-processing systems. As for more unusual things, I’m working on a medical device project that is at the early stages, so I can’t say too much specifically about it.

NAN: Can you provide more specifics about your research into smart energy grids?

MARILYN: Smart-energy grids are also driven by the push for greater efficiency. In addition, renewable energy sources have different characteristics than traditional coal-fired generators. For example, because winds are so variable, the energy produced by wind generators can quickly change.

The uses of electricity are also more complex, and we see increasing opportunities to shift demand to level out generation needs. For example, electric cars need to be recharged, but that can happen during off-peak hours. But energy systems are huge. A single grid covers the eastern US from Florida to Minnesota.

To make all these improvements requires sophisticated software and careful design to ensure that the grid is highly reliable. Smart-energy grids are a prime example of Internet-based control.

We have so many devices on the grid that need to coordinate that the Internet is the only way to connect them. But the Internet isn’t very good at real-time control, so we have to be careful.

We also have to worry about security. Internet-enabled devices enable smart-grid operations, but they also provide opportunities for tampering.

NAN: You’ve earned several distinctions. You were the recipient of the Institute of Electrical and Electronics Engineers (IEEE) Circuits and Systems Society Education Award and the IEEE Computer Society Golden Core Award. Tell us about these experiences.

MARILYN: These awards are presented at conferences. The presentation is a very warm, happy experience. Everyone is happy. These occasions are a time to celebrate the field and the many friends I’ve made through my work.

The Future of Very Large-Scale Integration (VLSI) Technology

The historical growth of IC computing power has profoundly changed the way we create, process, communicate, and store information. The engine of this phenomenal growth is the ability to shrink transistor dimensions every few years. This trend, known as Moore’s law, has continued for the past 50 years. The predicted demise of Moore’s law has been repeatedly proven wrong thanks to technological breakthroughs (e.g., optical resolution enhancement techniques, high-k metal gates, multi-gate transistors, fully depleted ultra-thin body technology, and 3-D wafer stacking). However, it is projected that in one or two decades, transistor dimensions will reach a point where it will become uneconomical to shrink them any further, which will eventually result in the end of the CMOS scaling roadmap. This essay discusses the potential and limitations of several post-CMOS candidates currently being pursued by the device community.

Steep transistors: The ability to scale a transistor’s supply voltage is determined by the minimum voltage required to switch the device between an on- and an off-state. The sub-threshold slope (SS) is the measure used to indicate this property. For instance, a smaller SS means the transistor can be turned on using a smaller supply voltage while meeting the same off current. For MOSFETs, the SS has to be greater than ln(10) × kT/q where k is the Boltzmann constant, T is the absolute temperature, and q is the electron charge. This fundamental constraint arises from the thermionic nature of the MOSFET conduction mechanism and leads to a fundamental power/performance tradeoff, which could be overcome if SS values significantly lower than the theoretical 60-mV/decade limit could be achieved. Many device types have been proposed that could produce steep SS values, including tunneling field-effect transistors (TFETs), nanoelectromechanical system (NEMS) devices, ferroelectric-gate FETs, and impact ionization MOSFETs. Several recent papers have reported experimental observation of SS values in TFETs as low as 40 mV/decade at room temperature. These so-called “steep” devices’ main limitations are their low mobility, asymmetric drive current, bias dependent SS, and larger statistical variations in comparison to traditional MOSFETs.
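
At room temperature, the thermionic limit quoted above works out to the familiar 60-mV/decade figure:

```latex
SS_{\min} = \ln(10)\,\frac{kT}{q}
          = 2.303 \times \frac{(1.38\times10^{-23}\ \mathrm{J/K})(300\ \mathrm{K})}{1.6\times10^{-19}\ \mathrm{C}}
          \approx 2.303 \times 25.9\ \mathrm{mV}
          \approx 60\ \mathrm{mV/decade}
```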

Spin devices: Spintronics is a technology that utilizes nanomagnets’ spin direction as the state variable. Spintronics has unique advantages over CMOS, including nonvolatility, lower device count, and the potential for non-Boolean computing architectures. Spintronic devices’ nonvolatility enables instant processor wake-up and power-down that could dramatically reduce static power consumption. Furthermore, it can enable novel processor-in-memory or logic-in-memory architectures that are not possible with silicon technology. Although in its infancy, research in spintronics has been gaining momentum over the past decade, as these devices could potentially overcome the power bottleneck of CMOS scaling by offering a completely new computing paradigm. In recent years, progress has been made toward demonstration of various post-CMOS spintronic devices including all-spin logic, spin wave devices, domain wall magnets for logic applications, and spin transfer torque magnetoresistive RAM (STT-MRAM) and spin-Hall torque (SHT) MRAM for memory applications. However, for spintronics technology to become a viable post-CMOS device platform, researchers must find ways to eliminate the transistors required to drive the clock and power supply signals. Otherwise, the performance will always be limited by CMOS technology. Other remaining challenges for spintronic devices include their relatively high active power, short interconnect distance, and complex fabrication process.

Flexible electronics: Distributed large-area (cm²-to-m²) electronic systems based on flexible thin-film-transistor (TFT) technology are drawing much attention due to unique properties such as mechanical conformability, low-temperature processability, large-area coverage, and low fabrication costs. Various forms of flexible TFTs can either enable applications that were not achievable using traditional silicon-based technology or surpass it in terms of cost per area. Flexible electronics cannot match the performance of silicon-based ICs due to the low carrier mobility. Instead, this technology is meant to complement them by enabling distributed sensor systems over a large area with moderate performance (less than 1 MHz). Development of inkjet or roll-to-roll printing techniques for flexible TFTs is underway for low-cost manufacturing, making product-level implementations feasible. Despite these encouraging new developments, the low mobility and high sensitivity to processing parameters present major fabrication challenges for realizing flexible electronic systems.

CMOS scaling is coming to an end, but no single technology has emerged as a clear successor to silicon. The urgent need for post-CMOS alternatives will continue to drive high-risk, high-payoff research on novel device technologies. Replicating silicon’s success might sound like a pipe dream. But with the world’s best and brightest minds at work, we have reasons to be optimistic.

Author’s Note: I’d like to acknowledge the work of PhD students Ayan Paul and Jongyeon Kim.

Q&A: Hai (Helen) Li (Academic, Embedded System Researcher)

Helen Li came to the U.S. from China in 2000 to study for a PhD at Purdue University. Following graduation she worked for Intel, Qualcomm, and Seagate. After about five years of working in industry, she transitioned to academia by taking a position at the Polytechnic Institute of New York University, where she teaches courses such as circuit design (“Introduction to VLSI”), advanced computer architecture (“VLSI System and Architecture Design”), and system-level applications (“Real-Time Embedded System Design”).

Hai (Helen) Li

In a recent interview Li described her background and provided details about her research relating to spin-transfer torque RAM-based memory hierarchy and memristor-based computing architecture.

An abridged version of the interview follows.

NAN: What were some of your most notable experiences working for Intel, Qualcomm, and Seagate?

HELEN: My industry work experience has been very valuable throughout my career. At Seagate, I led a design team on a test chip for emerging memory technologies. Communication and understanding between device engineers and the design community is extremely important, and joint efforts from all the related disciplines (not just one particular area) became necessary. That concept of cross-layer (device/circuit/architecture/system) co-optimization and design continues in my research career.

NAN: In 2009, you transitioned from an engineering career to a career teaching electrical and computer engineering at the Polytechnic Institute of New York University (NYU). What prompted this change?

HELEN: After five years of working at various industrial companies on wireless communication, computer systems, and storage, I realized I am more interested in independent research and teaching. After careful consideration, I decided to return to an academic career and later joined the NYU faculty.

NAN: How long have you been teaching at the Polytechnic Institute of NYU? What courses do you currently teach and what do you enjoy most about teaching?

HELEN: I have been teaching at NYU-Poly since September 2009. My classes cover a wide range of computer engineering, from basic circuit design (“Introduction to VLSI”), to advanced computer architecture (“VLSI System and Architecture Design”), to system-level applications (“Real-Time Embedded System Design”).

Though I have been teaching at NYU-Poly, I will be taking a one-year leave of absence from fall 2012 to summer 2013. During this time, I will continue my research on very-large-scale integration (VLSI) and computer engineering at the University of Pittsburgh.

I enjoy the interaction and discussions with students. They are very smart and creative. Those discussions always inspire new ideas. I learn so much from students.

Helen and her students are working on developing a 16-Kb STT-RAM test chip.

NAN: You’ve received several grants from institutions including the National Science Foundation and the Air Force Research Lab to fund your embedded systems endeavors. Tell us a little about some of these research projects.

HELEN: The objective of the research for “CAREER: STT-RAM-based Memory Hierarchy and Management in Embedded Systems” is to develop an innovative spin-transfer torque random access memory (STT-RAM)-based memory hierarchy to meet the demands of modern embedded systems for low-power, fast-speed, and high-density on-chip data storage.

This research provides a comprehensive design package for efficiently integrating STT-RAM into modern embedded system designs and offers unparalleled performance and power advantages. The research innovations also help bridge and educate system architects and circuit designers. The developed techniques can be directly transferred to industry applications through close collaborations with several industry partners and can directly impact future embedded systems. The collaboration activities also include tutorials at major conferences on the technical aspects of the projects, as well as new course development.

The main goal of the research for “CSR: Small Collaborative Research: Cross-Layer Design Techniques for Robustness of the Next-Generation Nonvolatile Memories” is to develop design technologies that can alleviate the general robustness issues of next-generation nonvolatile memories (NVMs) while maintaining and even improving generic memory specifications (e.g., density, power, and performance). Comprehensive solutions are integrated from architecture, circuit, and device layers for the improvement on the density, cost, and reliability of emerging nonvolatile memories.

The broader impact of the research lies in revealing the importance of applying cross-layer design techniques to resolve the robustness issues of next-generation NVMs and in drawing attention to robust design.

The research for “Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques” was inspired by memristors, which have attracted increased attention since Hewlett-Packard Laboratories (HP Labs) demonstrated the first physical device in 2008. Their distinctive memristive characteristics create great potential for future computing system design. Our objective is to investigate process-variation-aware memristor modeling, design methodologies for memristor-based computing architectures, and circuit techniques that improve reliability and density.

The scope of this effort is to build an integrated design environment for memristor-based computing architecture, which will provide memristor modeling and design flow to circuit and architecture designers. We will also develop and implement circuit techniques to achieve a more reliable and efficient system.
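
As background on the modeling side, the sketch below numerically integrates the widely cited linear ion-drift memristor model associated with the 2008 HP Labs work (a textbook approximation, not the group's process-variation-aware model). All device parameters and the drive waveform are illustrative.

```python
# Linear ion-drift memristor model (after the 2008 HP Labs description):
#   M(w) = R_on*(w/D) + R_off*(1 - w/D),  dw/dt = mu_v*R_on/D * i(t)
# Device parameters below are illustrative, not from a real process.
import math

R_ON, R_OFF = 100.0, 16e3     # on/off resistance (ohms)
D = 10e-9                     # device thickness (m)
MU_V = 1e-14                  # ion mobility (m^2 s^-1 V^-1)
dt = 1e-6                     # time step (s)

w = 0.1 * D                   # initial doped-region width
for step in range(200_000):
    t = step * dt
    v = 1.0 * math.sin(2 * math.pi * 5 * t)       # 5 Hz, 1 V sine drive
    m = R_ON * (w / D) + R_OFF * (1 - w / D)      # instantaneous memristance
    i = v / m
    w += MU_V * R_ON / D * i * dt                 # state-variable update
    w = min(max(w, 0.0), D)                       # clamp to physical bounds

print(f"final memristance: {m:.1f} ohms")
```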

An electric car model controlled by programmable emerging memories is in the developmental stages.

NAN: What types of projects are you and your students currently working on?

HELEN: Our major efforts are on device modeling, circuit design techniques, and novel architectures for computer systems and embedded systems. We primarily focus on the potentials of emerging devices and leveraging their advantages. Two of our latest projects are a 16-Kb STT-RAM test chip and an electric car model controlled by programmable emerging memories.

The complete interview appears in Circuit Cellar 267 (October 2012).