Current Multipliers Improve Processor Performance

Vicor has announced the introduction of Power-on-Package modular current multipliers for high-performance, high-current CPU/GPU/ASIC (“XPU”) processors. By freeing up XPU socket pins and eliminating losses associated with delivering current from the motherboard to the XPU, Vicor’s Power-on-Package solution enables higher current delivery for maximum XPU performance.

In response to the ever-increasing demands of high-performance applications such as artificial intelligence, machine learning, and big data mining, XPU operating currents have risen to hundreds of Amperes. Point-of-Load power architectures, in which high-current power delivery units are placed close to the XPU, mitigate power distribution losses on the motherboard but do nothing to lessen interconnect challenges between the XPU and the motherboard. With increasing XPU currents, the remaining short distance to the XPU (the “last inch”), consisting of motherboard conductors and interconnects within the XPU socket, has become a limiting factor in XPU performance and total system efficiency.

Vicor’s new Power-on-Package Modular Current Multipliers (“MCMs”) fit within the XPU package to expand upon the efficiency, density, and bandwidth advantages of Vicor’s Factorized Power Architecture, already established in 48 V Direct-to-XPU motherboard applications by early adopters. As current multipliers, MCMs mounted on the XPU substrate, under the XPU package lid or outside of it, are driven at a fraction (around 1/64th) of the XPU current from an external Modular Current Driver (MCD). The MCD, located on the motherboard, drives the MCMs and accurately regulates the XPU voltage with high bandwidth and low noise. The solution profiled here, consisting of two MCMs and one MCD, enables delivery of up to 320 A of continuous current to the XPU, with peak current capability of 640 A.
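
As a rough sanity check on those numbers, here is an idealized, lossless calculation based on the 48 V distribution and the ~1/64 ratio cited above (an illustration, not a Vicor datasheet derivation; it assumes the MCD-to-MCM bus runs at the full 48 V):

\[ I_{\text{MCD}} \approx \frac{I_{\text{XPU}}}{64} = \frac{320\ \text{A}}{64} = 5\ \text{A}, \qquad V_{\text{XPU}} \approx \frac{48\ \text{V}}{64} = 0.75\ \text{V} \]

\[ P_{\text{in}} = 48\ \text{V} \times 5\ \text{A} = 240\ \text{W} = 0.75\ \text{V} \times 320\ \text{A} = P_{\text{out}} \]

In other words, the motherboard only has to carry a few Amperes; the multiplication to hundreds of Amperes happens on the XPU substrate itself.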

With MCMs mounted directly to the XPU substrate, the XPU current delivered by the MCMs does not traverse the XPU socket. And because the MCD drives the MCMs at low current, power from the MCD can be routed to the MCMs efficiently, reducing interconnect losses by 10X even as 90% of the XPU pins typically required for power delivery are reclaimed for expanded I/O functionality. Additional benefits include a simplified motherboard design and a substantial reduction in the minimum bypass capacitance required to keep the XPU within its voltage limits.
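
That figure is consistent with simple conduction-loss scaling: interconnect loss grows with the square of the current carried, so cutting the routed current by 64X wins back far more than is lost to the higher resistance of a slimmed-down pin field. As an illustrative calculation (not a Vicor measurement), suppose reclaiming 90% of the power pins raised the path resistance tenfold:

\[ P_{\text{loss}} = I^{2}R \quad\Rightarrow\quad \frac{P_{\text{loss}}'}{P_{\text{loss}}} = \left(\tfrac{1}{64}\right)^{2}\frac{R'}{R} = \frac{10}{4096} \quad \text{for } R' = 10R \]

Even then, conduction loss in that path drops by orders of magnitude, comfortably consistent with the net 10X figure quoted above.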

Multiple MCMs may be operated in parallel for increased current capability. The small (32 mm x 8 mm x 2.75 mm) package and low-noise characteristics of the MCM make it suitable for co-packaging with noise-sensitive, high-performance ASICs, GPUs, and CPUs. The operating temperature range is -40°C to +125°C. These devices are the first in a portfolio of Power-on-Package solutions scalable to various XPU needs.

Vicor | www.vicorpower.com

Microsoft Real-time AI Project Leverages FPGAs

At Hot Chips 2017, Microsoft unveiled a new deep learning acceleration platform, codenamed Project Brainwave. The system performs real-time AI. Real-time here means the system processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.

The Project Brainwave system is built with three main layers: a high-performance, distributed system architecture; a hardware DNN engine synthesized onto FPGAs; and a compiler and runtime for low-friction deployment of trained models. Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years. By attaching high-performance FPGAs directly to its datacenter network, Microsoft can serve DNNs as hardware microservices, where a DNN is mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.
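
Microsoft has not published a client API for these hardware microservices, but the serving model, a caller streaming requests directly to a network-attached FPGA pool with no software in the loop on the serving side, can be sketched in miniature. Everything below (host name, port, wire format) is hypothetical:

import socket
import struct

# Hypothetical address of a network-attached FPGA pool serving one DNN;
# not a real Microsoft endpoint.
FPGA_POOL = ("fpga-pool.example.net", 9000)

def query_dnn(features):
    """Send one inference request straight to the FPGA pool and wait for the reply.

    No application software sits between the caller and the accelerator;
    the FPGA parses the request off the wire itself.
    """
    payload = struct.pack("<I%df" % len(features), len(features), *features)
    with socket.create_connection(FPGA_POOL) as conn:
        conn.sendall(payload)
        (n,) = struct.unpack("<I", conn.recv(4))    # reply length prefix
        body = b""
        while len(body) < 4 * n:
            body += conn.recv(4 * n - len(body))
    return list(struct.unpack("<%df" % n, body))

The point of the design is that reply latency is bounded by the network and the FPGA pipeline alone, not by a host CPU’s scheduling.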

Project Brainwave uses a powerful “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs. A number of companies, both large companies and a slew of startups, are building hardened DPUs. Although some of these chips have high peak performance, they must choose their operators and data types at design time, which limits their flexibility. Project Brainwave takes a different approach, providing a design that scales across a range of data types, with the desired data type being a synthesis-time decision. The design combines the ASIC digital signal processing blocks on the FPGAs with synthesizable logic to provide a greater and more optimized number of functional units. This approach exploits the FPGA’s flexibility in two ways. First, the developers have defined highly customized, narrow-precision data types that increase performance without real losses in model accuracy. Second, they can incorporate research innovations into the hardware platform quickly (typically in a few weeks), which is essential in this fast-moving space. As a result, the Microsoft team has achieved performance comparable to, or greater than, many of these hard-coded DPU chips, and is delivering that performance today. At Hot Chips, Project Brainwave was demonstrated using Intel’s new 14 nm Stratix 10 FPGA.
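
The accuracy claim rests on the observation that trained DNN weights tolerate aggressive precision reduction. Here is a toy sketch of rounding values to a narrow float format whose width is fixed ahead of time, analogous to a synthesis-time decision; the 4-bit-exponent/3-bit-mantissa split is invented for illustration and is not Microsoft’s published format:

import math

def quantize(x, exp_bits=4, frac_bits=3):
    """Round x to a toy narrow float: sign, exp_bits of exponent, frac_bits of mantissa."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    bias = 2 ** (exp_bits - 1) - 1
    e = max(-bias, min(bias, math.floor(math.log2(abs(x)))))
    mant = abs(x) / 2.0 ** e        # normally in [1, 2); smaller if the exponent was clamped
    mant = round(mant * 2 ** frac_bits) / 2 ** frac_bits
    return sign * mant * 2.0 ** e

# A typical weight vector survives with small relative error:
print([quantize(w) for w in [0.137, -0.52, 0.0009, 2.4]])
# -> [0.140625, -0.5, 0.0009765625, 2.5]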

Project Brainwave incorporates a software stack designed to support a wide range of popular deep learning frameworks. The stack supports Microsoft Cognitive Toolkit and Google’s TensorFlow, with plans to support many others. The team has defined a graph-based intermediate representation: models trained in the popular frameworks are converted to this representation and then compiled down to the high-performance infrastructure.
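
Microsoft describes the representation only as “graph-based,” but the general shape of such an IR is familiar from other deep learning compilers: operator nodes, edges for data flow, and a lowering pass that walks the graph in dependency order. A minimal sketch (the types below are invented for illustration):

from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                      # e.g. "matmul", "relu", "sigmoid"
    inputs: list                 # upstream Node references
    attrs: dict = field(default_factory=dict)

@dataclass
class Graph:
    outputs: list                # terminal Nodes of the model

    def topo_order(self):
        """Yield nodes in dependency order, as a backend lowering pass would."""
        seen, order = set(), []
        def visit(n):
            if id(n) in seen:
                return
            seen.add(id(n))
            for i in n.inputs:
                visit(i)
            order.append(n)
        for out in self.outputs:
            visit(out)
        return order

# A framework importer would build something like:
x = Node("input", [], {"shape": (1, 256)})
h = Node("matmul", [x], {"weights": "W0"})
y = Node("relu", [h])
print([n.op for n in Graph([y]).topo_order()])    # ['input', 'matmul', 'relu']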

Microsoft | www.microsoft.com

Dev Kit Enables Cars to Express Their Emotions

Renesas Electronics has announced a development kit for its R-Car platform that takes advantage of the “emotion engine,” an artificial sensibility and intelligence technology pioneered by cocoro SB Corp. The kit gives cars the sensibility to read the driver’s emotions and respond optimally to the driver’s needs based on their emotional state.

The development kit includes cocoro SB’s emotion engine, which leverages the company’s sensibility technology to recognize emotional states such as confidence or uncertainty from the driver’s speech. The car’s response to the driver’s emotional state is displayed by a new driver-attentive user interface (UI) implemented on the Renesas R-Car system-on-chip (SoC). Because the car can understand both the driver’s words and their emotional state, it can provide an appropriate response that ensures optimal driver safety.
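
Renesas has not published the kit’s API, but the flow described, classify the driver’s speech into an emotional state and then choose the UI response accordingly, reduces to a simple dispatch. The function names and labels below are hypothetical:

def classify_emotion(utterance):
    """Stand-in for cocoro SB's emotion engine; returns "confident" or "uncertain"."""
    hedges = ("maybe", "i think", "not sure")
    return "uncertain" if any(h in utterance.lower() for h in hedges) else "confident"

RESPONSES = {
    "confident": "Command accepted.",
    "uncertain": "Would you like me to confirm that before proceeding?",
}

def respond(utterance):
    return RESPONSES[classify_emotion(utterance)]

print(respond("Maybe take the next exit?"))    # asks for confirmation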

Because this technology is linked to artificial intelligence (AI) based machine learning, the car can learn from conversations with the driver, gradually becoming capable of providing the best possible response. Renesas plans to release the development kit later this year.

Renesas demonstrated its connected car simulator, incorporating the new development kit based on cocoro SB’s emotion engine, at the SoftBank World 2017 event held earlier this month at the Prince Park Tower Tokyo.

Renesas considers the driver’s emotional state, facial expression, and gaze direction to be key information that, combined with the driver’s vital signs, can improve the car-driver interface and bring drivers closer to the era of self-driving cars. For example, if the car recognizes that the driver is in an uneasy emotional state, even though he or she has verbally accepted the switch to hands-free autonomous-driving mode, it can ask the driver, “Would you prefer to continue driving and not switch to autonomous-driving mode for now?” Understanding the driver’s emotions also enables the car to adjust vehicle speed according to how the driver is feeling while driving at night in autonomous-driving mode.

By providing carmakers and IT companies with a development kit that takes advantage of this emotion engine, Renesas hopes to extend this service model to the development of new interfaces between cars and drivers, and to other mobility markets that can take advantage of emotional state information. Based on the newly launched Renesas autonomy, an advanced driver assistance systems (ADAS) and automated driving platform, Renesas aims to enable a safe, secure, and convenient driving experience through next-generation solutions for connected cars.
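
The handover example above amounts to gating an autonomy request on the inferred emotional state; a hypothetical sketch of that check:

def handle_autonomy_request(verbal_ok, emotion):
    """Gate the switch to hands-free mode on the driver's inferred emotional state."""
    if verbal_ok and emotion == "uneasy":
        # Spoken consent contradicts the emotional read, so ask before handing over.
        return "Would you prefer to continue driving and not switch to autonomous-driving mode for now?"
    if verbal_ok:
        return "Switching to autonomous-driving mode."
    return "Staying in manual mode."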

Renesas Electronics America | www.renesas.com

The Future of Intelligent Robots

Robots have been around for over half a century now, making constant progress in their sophistication and intelligence, as well as in their conceptual and literal closeness to humans. As they become smarter and more aware, it becomes easier to get closer to them both socially and physically. That leads to a world where robots do things not only for us but also with us.

Not-so-intelligent robots made their debut in factory environments in the late ’50s. Their main role was merely to handle tasks that humans were not very good at or that were dangerous for them. Traditionally, these robots have had very limited sensing; they have essentially been blind despite being extremely strong, fast, and repeatable. Considering the likely consequences of letting humans wander freely in the close vicinity of these strong, fast, and blind machines, it seemed a good idea to isolate them from the environment by placing them in safety cages.

Advances in the fields of sensing and compliant control have made it possible to get a bit closer to these robots, again both socially and physically. Researchers have started proposing frameworks that enable human-robot collaborative manipulation and task execution in various scenarios. Bi-manual collaborative manufacturing robots like YuMi by ABB and service robots like HERB by the Personal Robotics Lab of Carnegie Mellon University[1] have started emerging. Various modalities of learning from/programming by demonstration, such as kinesthetic teaching and imitation, make it very natural to interact with these robots and teach them the skills and tasks we want them to perform, the way we would teach a child. For instance, the Baxter robot by Rethink Robotics heavily utilizes these capabilities and technologies to potentially bring a teachable robot to every small company with basic manufacturing needs.

As robots get smarter, more aware, and safer, it becomes easier to socially accept and trust them as well. This reduces the physical distance between humans and robots even further, leading to assistive robotic technologies that literally “live” side by side with humans 24/7. One such project is the Assistive Dexterous Arm (ADA)[2] that we have been carrying out at the Robotics Institute and the Human-Computer Interaction Institute of Carnegie Mellon University. ADA is a wheelchair-mountable, semi-autonomous manipulator arm that utilizes the sliding autonomy concept to assist people with disabilities in performing their activities of daily living. Our current focus is on assistive feeding, where the robot is expected to help users eat their meals in a very natural and socially acceptable manner. This requires the ability to predict the user’s behaviors and intentions, as well as spatial and social awareness to avoid awkward situations in social eating settings. Safety also becomes our utmost concern, as the robot has to operate very close to the user’s face and mouth during task execution.

In addition to assistive manipulators, there have also been giant leaps in the research and development of smart and lightweight exoskeletons that make it possible for paraplegics to walk by themselves. These exoskeletons make use of the same set of technologies, such as compliant control, situational awareness through precise sensing, and even learning from demonstration to capture the walking patterns of a healthy individual.

These technologies, combined with recent developments in neuroscience, have made it possible to get even closer to humans than an assistive manipulator or an exoskeleton, and to literally unite with them through intelligent prosthetics. An intelligent prosthetic limb uses learning algorithms to map the received neural signals to the user’s intentions as the user’s brain constantly adapts to the artificial limb. It also needs to be highly compliant, both to handle the vast variance and uncertainty of the real world and to keep its user safe.

Extrapolating from the aforementioned developments and many others, we can easily say that robots are going to be woven into our lives. Laser technology used to be unreachable and cutting-edge from the average person’s perspective a couple of decades ago. However, as Rodney Brooks observes in his book Robot: The Future of Flesh and Machines (Penguin Books, 2003), we now do not know exactly how many laser devices we have in our houses, and more importantly, we don’t even care! That will be the case for robots. In the not-so-distant future, we will enjoy the ride in our autonomous vehicles while nanobots in our bloodstreams deliver drugs and fix problems, and we will feel good knowing that our older relatives are receiving great care from their assistive companion robots.

[1] http://www.cmu.edu/herb-robot/
[2] https://youtu.be/glpCAdKEWAA

Tekin Meriçli, PhD, is a well-rounded roboticist with in-depth expertise in machine intelligence and learning, perception, and manipulation. He is currently a Postdoctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University, where he leads the efforts on building intuitive and expressive interfaces for interacting with semi-autonomous robotic systems intended to assist the elderly and disabled. Previously, he was a Postdoctoral Fellow at the National Robotics Engineering Center (NREC) and the Personal Robotics Lab of the Robotics Institute at Carnegie Mellon University. He received his PhD in Computer Science from Bogazici University, Turkey.

This essay appears in Circuit Cellar 298, May 2015.