Multiphase PMICs Boast High Efficiency and Small Footprint

Renesas Electronics has announced three programmable power management ICs (PMICs) that offer high power efficiency and a small footprint for application processors in smartphones and tablets: the ISL91302B, ISL91301A, and ISL91301B. The PMICs also deliver power to artificial intelligence (AI) processors, FPGAs, and industrial microprocessors (MPUs), and are well-suited for powering the supply rails in solid-state drives (SSDs), optical transceivers, and a wide range of consumer, industrial, and networking devices. The ISL91302B dual/single-output, multiphase PMIC provides up to 20 A of output current and 94% peak efficiency in a 70 mm² solution size that is more than 40% smaller than competitive PMICs.
In addition to the ISL91302B, Renesas’ ISL91301A triple-output PMIC and ISL91301B quad-output PMIC both deliver up to 16 A of output power with 94% peak efficiency. The new programmable PMICs leverage Renesas’ R5 Modulation Technology to provide fast single-cycle transient response, digitally tuned compensation, and an ultra-high 6 MHz (max) switching frequency during load transients. These features make it easier for power supply designers to design boards with 2 mm x 2 mm, 1 mm low-profile inductors, small capacitors, and only a few passive components.

Renesas PMICs also do not require external compensation components or external dividers to set operating conditions. Each PMIC dynamically changes the number of active phases for optimum efficiency at all output currents. Their low quiescent current, superior light-load efficiency, regulation accuracy, and fast dynamic response significantly extend battery life for today’s feature-rich, power-hungry devices.

Key Features of ISL91302B PMIC:

  • Available in three factory configurable options for one or two output rails:
    • Dual-phase (2 + 2) configuration supporting 10 A from each output
    • Triple-phase (3 + 1) configuration supporting 15 A from one output and 5 A from the second output
    • Quad-phase (4 + 0) configuration supporting 20 A from one output
  • Small solution size: 7 mm x 10 mm for 4-phase design
  • Input supply voltage range of 2.5 V to 5.5 V
  • I2C or SPI programmable Vout from 0.3 V to 2 V (see the programming sketch after this list)
  • R5 modulator architecture balances current loads with smooth phase adding and dropping for power efficiency optimization
  • Provides 75 μA quiescent current in discontinuous current mode (DCM)
  • Independent dynamic voltage scaling for each output
  • ±0.7% system accuracy from -10°C to 85°C with remote voltage sensing
  • Integrated telemetry ADC senses phase currents, output current, input/output voltages, and die temperature, enabling PMIC diagnostics during operation
  • Soft-start and fault protection against under voltage (UV), over voltage (OV), over current (OC), over temperature (OT), and short circuit
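
For a rough sense of what host-side control of such a rail looks like, the sketch below programs an output voltage over I2C from an embedded Linux system (the same idea applies to the ISL91301A/B). It uses the smbus2 Python library; the device address, register offset, and 10 mV-per-step scaling are placeholder assumptions for illustration only, not values from the Renesas datasheet.

```python
# Hypothetical sketch: setting a PMIC output rail over I2C from Linux.
# The I2C address, register, and voltage scaling below are illustrative
# placeholders -- consult the ISL91302B datasheet for the real register map.
from smbus2 import SMBus

PMIC_ADDR = 0x28        # assumed 7-bit I2C address (placeholder)
REG_VOUT0 = 0x10        # assumed output-0 voltage register (placeholder)

def set_vout(bus, volts, vmin=0.3, vmax=2.0, step=0.010):
    """Convert a target voltage into a register code and write it."""
    if not vmin <= volts <= vmax:
        raise ValueError("Vout must be between 0.3 V and 2.0 V")
    code = round((volts - vmin) / step)   # 10 mV per LSB (assumed)
    bus.write_byte_data(PMIC_ADDR, REG_VOUT0, code)

with SMBus(1) as bus:   # I2C bus 1 on a typical embedded Linux host
    set_vout(bus, 0.9)  # request 0.9 V on output rail 0
```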

Key Features of ISL91301A and ISL91301B PMICs:

  • Available in two factory configurable options:
    • ISL91301A: dual-phase, three output rails configured as 2+1+1 phase
    • ISL91301B: single-phase, four output rails configured as 1+1+1+1 phase
  • 4 A per phase for 2.8 V to 5.5 V supply voltage
  • 3 A per phase for 2.5 V to 5.5 V supply voltage
  • Small solution size: 7 mm x 10 mm for 4-phase design
  • I2C or SPI programmable Vout from 0.3 V to 2 V
  • Provides 62 μA quiescent current in DCM
  • Independent dynamic voltage scaling for each output
  • ±0.7% system accuracy from -10°C to 85°C with remote voltage sensing
  • Soft-start and fault protection against UV, OV, OC, OT, and short circuit

Pricing and Availability

The ISL91302B dual/single-output PMIC is available now in a 2.551 mm x 3.670 mm WLCSP package and is priced at $3.90 in 1k quantities. For more information on the ISL91302B, please visit: www.intersil.com/products/isl91302B.

The ISL91301A triple-output PMIC and ISL91301B quad-output PMIC are available now in 2.551 mm x 2.87 mm, 42-ball WLCSP packages, both priced at $3.12 in 1k quantities. For more information on the ISL91301A, please visit: www.intersil.com/products/isl91301A. For more information on the ISL91301B, please visit: www.intersil.com/products/isl91301B.

Renesas Electronics | www.renesas.com

Movidius AI Acceleration Technology Comes to a Mini-PCIe Card

By Eric Brown

UP AI Core (front)

As promised by Intel when it announced an Intel AI: In Production program for its USB stick form factor Movidius Neural Compute Stick, Aaeon has launched a mini-PCIe version of the device called the UP AI Core. It similarly integrates Intel’s AI-infused Myriad 2 Vision Processing Unit (VPU). The mini-PCIe connection should provide faster response times for neural networking and machine vision compared to connecting to a cloud-based service.

UP AI Core (back)

The module, which is available for pre-order at $69 for delivery in April, is designed to “enhance industrial IoT edge devices with hardware accelerated deep learning and enhanced machine vision functionality,” says Aaeon. It can also enable “object recognition in products such as drones, high-end virtual reality headsets, robotics, smart home devices, smart cameras and video surveillance solutions.”

UP Squared

The UP AI Core is optimized for Aaeon’s Ubuntu-supported UP Squared hacker board, which runs on Intel’s Apollo Lake SoCs. However, it should work with any 64-bit x86 computer or SBC equipped with a mini-PCIe slot that runs Ubuntu 16.04. Host systems also require 1 GB of RAM and 4 GB of free storage. That presents plenty of options for PCs and embedded computers, although the UP Squared is currently the only x86-based, community-backed SBC equipped with a mini-PCIe slot.

Myriad 2 architecture

Aaeon offered few technical details about the module, except to say it ships with 512MB of DDR RAM and offers ultra-low power consumption. The UP AI Core’s mini-PCIe interface likely provides a faster response time than the USB link used by Intel’s $79 Movidius Neural Compute Stick. Aaeon makes no claims to that effect, however, perhaps to avoid disparaging Intel’s Neural Compute Stick or other USB-based products that might emerge from the Intel AI: In Production program.

Intel’s Movidius Neural Compute Stick

It’s also possible that the performance difference between the two products is negligible, especially compared with the difference between either local processing solution and a cloud connection. Cloud-based connections for accessing neural networking services suffer from latency, network bandwidth, reliability, and security issues, says Aaeon. The company recommends using the Linux-based SDK to “create and train your neural network in the cloud and then run it locally on AI Core.”
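
For a concrete picture of that workflow, the hedged sketch below shows how a network compiled with the Movidius NCSDK 1.x tools is typically run from Python using the mvnc API, as documented for the Neural Compute Stick. Whether the UP AI Core is driven through exactly the same API is an assumption here, and "model.graph" plus the random input tensor are placeholders for a real compiled model and preprocessed image.

```python
# Sketch: running a pre-compiled network on a Myriad 2 device with the
# NCSDK 1.x Python API (mvnc). "model.graph" would come from mvNCCompile;
# the random tensor stands in for a real, preprocessed image.
import numpy as np
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()            # list attached Myriad 2 devices
if not devices:
    raise RuntimeError("No Myriad 2 device found")

device = mvnc.Device(devices[0])
device.OpenDevice()

with open("model.graph", "rb") as f:         # graph compiled offline in the cloud
    graph = device.AllocateGraph(f.read())

img = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder input
graph.LoadTensor(img, "user object")         # queue the inference on the VPU
output, _ = graph.GetResult()                # blocking call; returns scores

graph.DeallocateGraph()
device.CloseDevice()
```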

Performance issues aside, because a mini-PCIe module is usually embedded within a computer, it provides more security than a USB stick. On the other hand, that same trait hinders ease of mobility. Unlike the UP AI Core, the Neural Compute Stick can run on an ARM-based Raspberry Pi, but only with the help of the Raspbian Stretch desktop or an Ubuntu 16.04 VirtualBox instance.

In 2016, before it was acquired by Intel, Movidius launched its first local-processing version of the Myriad 2 VPU technology, called the Fathom. This Ubuntu-driven USB stick, which miniaturized the technology in the earlier Myriad 2 reference board, is essentially the same technology that re-emerged as Intel’s Movidius Neural Compute Stick.

UP AI Core, front and back

Neural network processors can significantly outperform traditional computing approaches in tasks like language comprehension, image recognition, and pattern detection. The vast majority of such processors — which are often repurposed GPUs — are designed to run on cloud servers.

AIY Vision Kit

The Myriad 2 technology can translate deep learning frameworks like Caffe and TensorFlow into its own format for rapid prototyping. This is one reason why Google adopted the Myriad 2 technology for its recent AIY Vision Kit for the Raspberry Pi Zero W. The kit’s VisionBonnet pHAT board uses the same Movidius MA2450 chip that powers the UP AI Core. On the VisionBonnet, the processor runs Google’s open source TensorFlow machine intelligence library for neural networking, enabling visual perception processing at up to 30 frames per second.

Intel and Google aren’t alone in their desire to bring AI acceleration to the edge. Huawei released a Kirin 970 SoC for its Mate 10 Pro phone that provides a neural processing coprocessor, and Qualcomm followed up with a Snapdragon 845 SoC with its own neural accelerator. The Snapdragon 845 will soon appear on the Samsung Galaxy S9, among other phones, and will also be heading for some high-end embedded devices.

Last month, Arm unveiled two new Project Trillium AI chip designs intended for use as mobile and embedded coprocessors. Available now is Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. Due this summer is a Machine Learning (ML) Processor, which will accelerate AI applications including machine translation and face recognition.

Further information

The UP AI Core is available for pre-order at $69 for delivery in late April. More information may be found at Aaeon’s UP AI Core announcement and its UP Community UP AI Edge page for the UP AI Core.

Aaeon | www.aaeon.com

This article originally appeared on LinuxGizmos.com on March 6.

NVIDIA Graphics Tapped for Mercedes-Benz MBUX AI Cockpit

At the CES show last month, Mercedes-Benz unveiled its NVIDIA-powered MBUX infotainment system, a next-gen car cabin experience that can learn and adapt to driver and passenger preferences, thanks to artificial intelligence.

According to NVIDIA, all the key MBUX systems were built together with NVIDIA and are powered by NVIDIA hardware. The announcement comes a year after NVIDIA CEO Jensen Huang joined Mercedes-Benz execs on stage at CES 2017 and said that their companies were collaborating on an AI car that would be ready in 2018.

Powered by NVIDIA graphics and deep learning technologies, the Mercedes-Benz User Experience, or MBUX, has been designed to deliver beautiful new 3D touch-screen displays. It can be controlled with a new voice-activated assistant that can be summoned with the phrase “Hey, Mercedes.” It’s an intelligent learning system that adapts to the requirements of customers, remembering such details as seat and steering wheel settings, lights, and other comfort features.

The MBUX announcement highlights the importance of AI to next-generation infotainment systems inside the car, even as automakers are racing to put AI to work to help vehicles navigate the world around them autonomously. The new infotainment system aims to use AI to adapt itself to drivers and passengers, automatically suggesting your favorite music for the drive home or offering directions to a favorite restaurant at dinner time. It’s also one that will benefit from “over-the-air” updates delivering new features and capabilities.

Debuting this month (February) in the new Mercedes-Benz A-Class, MBUX will power dramatic wide-screen displays that provide navigation, infotainment and other capabilities, touch-control buttons on the car’s steering wheel, as well as an intelligent assistant that can be summoned with a voice command. It’s an interface that can change its look to reflect the driver’s mood, whether they’re seeking serenity or excitement, and understand the way a user talks.

NVIDIA | www.nvidia.com

Current Multipliers Improve Processor Performance

Vicor has announced the introduction of Power-on-Package modular current multipliers for high performance, high current, CPU/GPU/ASIC (“XPU”) processors. By freeing up XPU socket pins and eliminating losses associated with delivery of current from the motherboard to the XPU, Vicor’s Power-on-Package solution enables higher current delivery for maximum XPU performance.

In response to the ever-increasing demands of high-performance applications such as artificial intelligence, machine learning, and big data mining, XPU operating currents have risen to hundreds of Amperes. Point-of-Load power architectures, in which high-current power delivery units are placed close to the XPU, mitigate power distribution losses on the motherboard but do nothing to lessen interconnect challenges between the XPU and the motherboard. With increasing XPU currents, the remaining short distance to the XPU (the “last inch”), consisting of motherboard conductors and interconnects within the XPU socket, has become a limiting factor in XPU performance and total system efficiency.

Vicor’s new Power-on-Package Modular Current Multipliers (“MCMs”) fit within the XPU package to expand upon the efficiency, density, and bandwidth advantages of Vicor’s Factorized Power Architecture, already established in 48 V Direct-to-XPU motherboard applications by early adopters. As current multipliers, MCMs mounted on the XPU substrate, under the XPU package lid or outside of it, are driven at a fraction (around 1/64th) of the XPU current from an external Modular Current Driver (MCD). The MCD, located on the motherboard, drives the MCMs and accurately regulates the XPU voltage with high bandwidth and low noise. The solution profiled here, consisting of two MCMs and one MCD, enables delivery of up to 320 A of continuous current to the XPU, with peak current capability of 640 A.

With MCMs mounted directly to the XPU substrate, the XPU current delivered by the MCMs does not traverse the XPU socket. And, because the MCD drives the MCMs at a low current, power from the MCD can be efficiently routed to the MCMs, reducing interconnect losses by 10X, even as 90% of the XPU pins typically required for power delivery are reclaimed for expanded I/O functionality. Additional benefits include a simplified motherboard design and a substantial reduction in the minimum bypass capacitance required to keep the XPU within its voltage limits.
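
A back-of-the-envelope calculation makes the “last inch” argument concrete. The sketch below assumes a 0.2 mΩ resistance for the motherboard-and-socket path, which is purely illustrative; only the 320 A continuous current and the roughly 1/64 drive ratio come from the announcement. Conversion losses inside the MCMs and the different routing of the real design are why Vicor quotes a 10X reduction rather than the raw I²R ratio.

```python
# Illustrative I^2*R comparison: delivering 320 A across an assumed
# 0.2 mOhm "last inch" directly versus driving MCMs at ~1/64 the current.
I_xpu = 320.0              # A, continuous current delivered to the XPU
R_path = 0.2e-3            # Ohm, assumed board/socket path resistance (placeholder)

loss_direct = I_xpu ** 2 * R_path        # full current crosses the path: ~20.5 W
I_drive = I_xpu / 64                     # MCD drives the MCMs at ~1/64 the current
loss_driven = I_drive ** 2 * R_path      # same path, far lower current: ~5 mW

print(f"direct delivery loss:    {loss_direct:.1f} W")
print(f"current-multiplied loss: {loss_driven * 1e3:.1f} mW")
```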

Multiple MCMs may be operated in parallel for increased current capability. The small (32 mm x 8 mm x 2.75 mm) package and low noise characteristics of the MCM make it suitable for co-packaging with noise-sensitive, high performance ASICs, GPUs and CPUs. Operating temperature range is -40°C to +125°C. These devices represent the first in a portfolio of Power-on-Package solutions scalable to various XPU needs.

Vicor | www.vicorpower.com

Microsoft Real-time AI Project Leverages FPGAs

At Hot Chips 2017, Microsoft unveiled a new deep learning acceleration platform, codenamed Project Brainwave. The system performs real-time AI, meaning it processes requests as fast as it receives them, with ultra-low latency. Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users.

Intel Stratix 10 FPGA board demonstrated at Hot Chips

The Project Brainwave system is built with three main layers: a high-performance, distributed system architecture; a hardware DNN engine synthesized onto FPGAs; and a compiler and runtime for low-friction deployment of trained models. Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years. By attaching high-performance FPGAs directly to Microsoft’s datacenter network, they can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.

Project Brainwave uses a powerful “soft” DNN processing unit (or DPU), synthesized onto commercially available FPGAs. A number of companies, both large companies and a slew of startups, are building hardened DPUs. Although some of these chips have high peak performance, they must choose their operators and data types at design time, which limits their flexibility. Project Brainwave takes a different approach, providing a design that scales across a range of data types, with the desired data type being a synthesis-time decision. The design combines both the ASIC digital signal processing blocks on the FPGAs and the synthesizable logic to provide a greater and more optimized number of functional units. This approach exploits the FPGA’s flexibility in two ways. First, the developers have defined highly customized, narrow-precision data types that increase performance without real losses in model accuracy. Second, they can incorporate research innovations into the hardware platform quickly (typically a few weeks), which is essential in this fast-moving space. As a result, the Microsoft team has achieved performance comparable to, or greater than, many of these hard-coded DPU chips, and is delivering that performance today. At Hot Chips, Project Brainwave was demonstrated using Intel’s new 14 nm Stratix 10 FPGA.
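
To illustrate the general idea behind trading bit width for throughput, the sketch below quantizes float32 activations to 8-bit fixed point with a shared scale. This is a generic stand-in, not Microsoft’s actual narrow-precision Brainwave formats, which are custom floating-point types and are not detailed in the announcement.

```python
# Generic narrow-precision quantization sketch (not Brainwave's own format):
# map float32 values to signed 8-bit integers sharing one scale factor.
import numpy as np

def quantize(x, bits=8):
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8-bit signed
    scale = max(np.max(np.abs(x)), 1e-12) / qmax  # shared scale, avoid div-by-zero
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

activations = np.random.randn(1024).astype(np.float32)
q, scale = quantize(activations)
error = np.abs(dequantize(q, scale) - activations).max()
print(f"max round-trip error: {error:.4f}")       # small relative to the value range
```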

Project Brainwave incorporates a software stack designed to support the wide range of popular deep learning frameworks. The team supports Microsoft Cognitive Toolkit and Google’s TensorFlow, and plans to support many others. They have defined a graph-based intermediate representation, to which they convert models trained in the popular frameworks, and then compile down to their high-performance infrastructure.

Microsoft | www.microsoft.com

Dev Kit Enables Cars to Express Their Emotions

Renesas Electronics has announced a development kit for its R-Car SoCs that takes advantage of the “emotion engine,” an artificial sensibility and intelligence technology pioneered by cocoro SB Corp. The new development kit gives cars the sensibility to read the driver’s emotions and respond optimally to the driver’s needs based on their emotional state.

The development kit includes cocoro SB’s emotion engine, which was developed leveraging its sensibility technology to recognize emotional states such as confidence or uncertainty based on the speech of the driver. The car’s response to the driver’s emotional state is displayed by a new driver-attentive user interface (UI) implemented in the Renesas R-Car system-on-chip (SoC). Since it is possible for the car to understand the driver’s words and emotional state, it can provide the appropriate response that ensures optimal driver safety.

As this technology is linked to artificial intelligence (AI) based machine learning, it is possible for the car to learn from conversations with the driver, enabling it to transform into a car that is capable of providing the best response to the driver. Renesas plans to release the development kit later this year.

Renesas demonstrated its connected car simulator incorporating the new development kit based on cocoro SB’s emotion engine at the SoftBank World 2017 event, held by SoftBank earlier this month at the Prince Park Tower Tokyo.

Renesas considers the driver’s emotional state, facial expression, and gaze direction as key information that combines with the driver’s vital signs to improve the car-driver interface, placing drivers closer to the era of self-driving cars. For example, if the car recognizes that the driver is in an uneasy emotional state, even if he or she has verbally accepted the switch to hands-free autonomous-driving mode, it can ask the driver, “Would you prefer to continue driving and not switch to autonomous-driving mode for now?” Furthermore, understanding the driver’s emotions enables the car to adjust vehicle speed according to how the driver is feeling while driving at night in autonomous-driving mode.

By providing carmakers and IT companies with a development kit that takes advantage of this emotion engine, Renesas hopes to expand this service model to the development of new interfaces between cars and drivers, and to other mobility markets that can take advantage of emotional state information. Based on the newly launched Renesas autonomy, an advanced driving assistance systems (ADAS) and automated driving platform, Renesas aims to enable a safe, secure, and convenient driving experience by providing next-generation solutions for connected cars.

Renesas Electronics America | www.renesas.com

The Future of Intelligent Robots

Robots have been around for over half a century now, making constant progress in terms of their sophistication and intelligence levels, as well as their conceptual and literal closeness to humans. As they become smarter and more aware, it becomes easier to get closer to them both socially and physically. That leads to a world where robots do things not only for us but also with us.

Not-so-intelligent robots made their debut in factory environments in the late ‘50s. Their main role was to merely handle the tasks that humans were either not very good at or that were dangerous for them. Traditionally, these robots have had very limited sensing; they have essentially been blind despite being extremely strong, fast, and repeatable. Considering what consequences were likely to follow if humans were to freely wander about within the close vicinity of these strong, fast, and blind robots, it seemed to be a good idea to isolate them from the environment by placing them in safety cages.

Advances in the fields of sensing and compliant control made it possible to get a bit closer to these robots, again both socially and physically. Researchers have started proposing frameworks that would enable human-robot collaborative manipulation and task execution in various scenarios. Bi-manual collaborative manufacturing robots like YuMi by ABB and service robots like HERB by the Personal Robotics Lab of Carnegie Mellon University[1] have started emerging. Various modalities of learning from/programming by demonstration, such as kinesthetic teaching and imitation, make it very natural to interact with these robots and teach them the skills and tasks we want them to perform the way we teach a child. For instance, the Baxter robot by Rethink Robotics heavily utilizes these capabilities and technologies to potentially bring a teachable robot to every small company with basic manufacturing needs.

As robots get smarter, more aware, and safer, it becomes easier to socially accept and trust them as well. This reduces the physical distance between humans and robots even further, leading to assistive robotic technologies, which literally “live” side by side with humans 24/7. One such project is the Assistive Dexterous Arm (ADA)[2] that we have been carrying out at the Robotics Institute and the Human-Computer Interaction Institute of Carnegie Mellon University. ADA is a wheelchair-mountable, semi-autonomous manipulator arm that utilizes the sliding autonomy concept in assisting people with disabilities in performing their activities of daily living. Our current focus is on assistive feeding, where the robot is expected to help users eat their meals in a very natural and socially acceptable manner. This requires the ability to predict the user’s behaviors and intentions as well as spatial and social awareness to avoid awkward situations in social eating settings. Also, safety becomes our utmost concern as the robot has to be very close to the user’s face and mouth during task execution.

In addition to assistive manipulators, there have also been giant leaps in the research and development of smart and lightweight exoskeletons that make it possible for paraplegics to walk by themselves. These exoskeletons make use of the same set of technologies, such as compliant control, situational awareness through precise sensing, and even learning from demonstration to capture the walking patterns of a healthy individual.

These technologies combined with the recent developments in neuroscience have made it possible to get even closer to humans than an assistive manipulator or an exoskeleton, and literally unite with them through intelligent prosthetics. An intelligent prosthetic limb uses learning algorithms to map the received neural signals to the user’s intentions as the user’s brain is constantly adapting to the artificial limb. It also needs to be highly compliant to be able to handle the vast variance and uncertainty in the real world, not to mention safety.

Extrapolating from the aforementioned developments and many others, we can easily say that robots are going to be woven into our lives. Laser technology used to be unreachable and cutting-edge from an average person’s perspective a couple of decades ago. However, as Rodney Brooks says in his book Robot: The Future of Flesh and Machines (Penguin Books, 2003), now we do not know exactly how many laser devices we have in our houses, and more importantly we don’t even care! That will be the case for robots. In the not-so-distant future, we will be enjoying the ride in our autonomous vehicle as a bunch of nanobots in our bloodstream deliver drugs and fix problems, and we will feel good knowing that our older relatives are getting great care from their assistive companion robots.

[1] http://www.cmu.edu/herb-robot/
[2] https://youtu.be/glpCAdKEWAA

Tekin Meriçli, PhD, is a well-rounded roboticist with in-depth expertise in machine intelligence and learning, perception, and manipulation. He is currently a Postdoctoral Fellow at the Human-Computer Interaction Institute at Carnegie Mellon University, where he leads the efforts on building intuitive and expressive interfaces to interact with semi-autonomous robotic systems that are intended to assist the elderly and disabled. Previously, he was a Postdoctoral Fellow at the National Robotics Engineering Center (NREC) and the Personal Robotics Lab of the Robotics Institute at Carnegie Mellon University. He received his PhD in Computer Science from Bogazici University, Turkey.

This essay appears in Circuit Cellar 298, May 2015.