Quad Bench Power Supply

The need for a bevy of equipment for building and testing presents a problem: how to deliver an adequate power supply while keeping workbench clutter to a minimum. Brian decided to tackle this classic engineering conundrum with a small, low-capacity quad bench power supply.

To the right of the output Johnson posts are the switches that set the polarity of the floating supplies—as well as the switch that disconnects all power supply outputs—while leaving the unit still powered up.

In “Quad Bench Power Supply,” Millier writes:

I hate to admit it, but my electronics bench is not a pretty sight, at least in the midst of a project anyway. Of course, I’m always in the middle of some project that, more often than not, contains two or three different projects in various stages of completion. To make matters worse, most of my projects involve microchips, which have to be programmed. Because I use ISP flash memory MCUs exclusively, it makes sense to locate a computer on my construction bench to facilitate programming and testing. To save space, I initially used my laptop’s parallel port for MCU programming. It was only a matter of time before I popped the laptop’s printer port by connecting it to a prototype circuit with errors on it.

Fixing my laptop’s printer port would have involved replacing its main board, which is an expensive proposition. Therefore, I switched over to a desktop computer (with a $20 ISA printer port board) for programming and testing purposes. The desktop, however, took up much more room on my bench.

You can’t do without lots of testing equipment, all of which takes up more bench space. Amongst my test equipment, I have several bench power supplies, which are unfortunately large because I built them with surplus power supply assemblies taken from older, unused equipment. This seemed like a good candidate for miniaturization.

At about the same time, I read a fine article by Robert Lacoste describing a high-power tracking lab power supply (“A Tracking Lab Power Supply,” Circuit Cellar 139). Although I liked many of Robert’s clever design ideas, most of my recent projects seemed to need only modest amounts of power. Therefore, I decided to design my own low-capacity bench supply that would be compact enough to fit in a small case. In this article, I’ll describe that power supply.

MY WISH LIST

Even though I mentioned that my recent projects’ power demands were fairly modest, I frequently needed three or more discrete voltage levels. This meant lugging out a couple of different bench supplies and wiring all of them to the circuit I was building. If the circuit required all of the power supplies to cycle on and off simultaneously, the above arrangement was extremely inconvenient. In any event, it took up too much space on my bench.

I decided that I wanted to have four discrete voltage sources available. One power supply would be ground referenced. Two additional power supplies would be floating power supplies. Each of these would have the provision to switch either the positive or negative terminal to the negative (ground) terminal of the ground-referenced supply, allowing for positive or negative output voltage. Alternately, these supplies could be left floating with respect to ground by leaving the aforementioned switch in the center position.

This arrangement provides one ground-referenced positive output plus two outputs that can be positive, negative, or floating. To round out the complement, I added a commercial 5-V, 3-A linear power supply module from Condor, which I had on hand in my junk box. Table 1 shows the capabilities of the four power supplies.

I wanted to provide voltage and current metering for the three variable power supplies. Simultaneously measuring the voltage and current of three completely independent power supplies seemed to call for six digital panel meters (DPMs). Indeed, this is the path Robert Lacoste took in his tracking lab supply.

As you can see, there are four power supplies. I’ve included all of the information you need to understand their capabilities.

I had used many of these DPM modules before, so I was aware of the fact that the modules require their negative measurement terminal to float with respect to the DPM’s own power supply. I solved this problem in the past by providing the DPM module with its own independent power source. Robert solved it by designing a differential drive circuit for the DPM. Either solution, when multiplied by six, is not trivial. Add to this the fact that high-quality DPMs cost about $40 in Canada, and you’ll see why I started to consider a different solution.

I decided to incorporate an MCU into the design to replace the six DPMs as well as six 10-turn potentiometers, which are also becoming expensive. In place of $240 worth of DPMs, I used three inexpensive dual 12-bit ADCs, an MCU, and an inexpensive LCD panel. The $100 worth of 10-turn potentiometers was replaced with three dual digital potentiometers and two inexpensive rotary encoders.
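
For readers who have not driven serial converters from MCU port lines before, here is a minimal sketch of how one channel of a 12-bit SPI ADC might be polled by bit-banging three port pins. The port, pin assignments, and 16-bit frame format are assumptions made purely for illustration; the article does not specify the exact converters, digital potentiometers, or wiring.

```c
/*
 * Sketch only: a bit-banged SPI read of a hypothetical 12-bit ADC.
 * The port (PORTB), bit assignments, and 16-bit frame format are
 * assumptions for illustration, not the actual parts or wiring.
 */
#include <avr/io.h>
#include <stdint.h>

#define ADC_CS_BIT    0   /* chip select on PB0, active low (assumed) */
#define ADC_SCK_BIT   1   /* serial clock on PB1 (assumed)            */
#define ADC_MISO_BIT  2   /* data from the ADC on PB2 (assumed)       */

void adc_spi_init(void)
{
    DDRB  |=  (1 << ADC_CS_BIT) | (1 << ADC_SCK_BIT);  /* CS, SCK as outputs */
    DDRB  &= ~(1 << ADC_MISO_BIT);                     /* MISO as input      */
    PORTB |=  (1 << ADC_CS_BIT);                       /* deselect the ADC   */
}

/* Clock in one 16-bit frame; the low 12 bits hold the conversion result. */
uint16_t adc_read_raw(void)
{
    uint16_t value = 0;

    PORTB &= ~(1 << ADC_CS_BIT);              /* select the ADC     */
    for (uint8_t i = 0; i < 16; i++) {
        PORTB |=  (1 << ADC_SCK_BIT);         /* rising clock edge  */
        value = (value << 1) | ((PINB >> ADC_MISO_BIT) & 1);
        PORTB &= ~(1 << ADC_SCK_BIT);         /* falling clock edge */
    }
    PORTB |= (1 << ADC_CS_BIT);               /* deselect           */

    return value & 0x0FFF;                    /* 12-bit result      */
}
```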

Using a microcontroller-based circuit basically allows you to control the bench supply with a computer for free. I have to admit that I decided to add the commercial 5-V supply module at the last minute; therefore, I didn’t allow for the voltage or current monitoring of this particular supply.

THE ANALOG CORE

Although there certainly is a digital component to this project, the basic power supply core is a standard analog series-pass regulator design. I borrowed a bit of this design from Robert’s lab supply circuit.

Basically, all three power supplies share the same design. The ground-referenced power supply provides less voltage and more current than the floating supplies. Thus, it uses a different transformer than the two floating supplies. The ground-referenced supply’s digital circuitry (for control of the digital potentiometer and ADC) can be connected directly to the MCU port lines. The two floating supplies, in addition to the different power transformer, also need isolation circuitry to connect to the MCU.

Figure 1 is the schematic for the ground-referenced supply. As you can see, a 24-VCT PCB-mounted transformer provides all four necessary voltage sources. A full-wave rectifier made up of D4 and D5, followed by filter capacitor C5, provides the 16 V that’s regulated down to the actual power supply output. Diode D6, R10, C8, and Zener diode D7 provide the negative power supply needed by the op-amps. …

The ground-referenced power supply includes an independent 5-V supply to run the microcontroller module.

MCU AND USER INTERFACE

As with every other project I’ve worked on in the last two years, I chose the Atmel AVR family for the MCU. In this case, I went with the AT90S8535 for a couple of reasons. I needed 23 I/O lines to handle the three SPI channels, LCD, rotary encoders, and RS-232. This ruled out the use of smaller AVR devices. I could’ve used the slightly less expensive AT90LS8515, but I wanted to allow for the possibility of adding a temperature-sensing meter/alarm option to the circuit. The ’8535 has a 10-bit ADC function that’s suitable for this purpose; the ’8515 does not.

The ’8535 MCU has 8 KB of ISP flash memory, which is just about right for the necessary firmware. It also contains 512 bytes of EEPROM. I used a small amount of the EEPROM to store default values for the three programmable power supplies. That is to say, the power supply will power up with the same settings that existed at the time its Save Configuration push button was last pressed.
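
The save-on-button-press behavior Millier describes can be handled with a few avr-libc EEPROM calls. The settings structure, the magic byte, and the dedicated EEPROM block below are illustrative assumptions, not a listing from the actual firmware.

```c
/*
 * Sketch only: saving and restoring power supply defaults in on-chip
 * EEPROM with avr-libc. The structure layout and magic byte are
 * assumptions made for illustration.
 */
#include <avr/eeprom.h>
#include <stdint.h>

#define SETTINGS_MAGIC 0xA5          /* marks the EEPROM block as valid */

struct supply_settings {
    uint8_t  magic;                  /* set to SETTINGS_MAGIC before saving     */
    uint8_t  pot_code[3];            /* digital-pot wiper codes, one per supply */
    uint16_t current_limit_ma[3];    /* current limits in milliamps             */
};

/* Reserve a block in EEPROM for the power-up defaults. */
static struct supply_settings ee_defaults EEMEM;

/* Called when the Save Configuration push button is pressed. */
void settings_save(const struct supply_settings *s)
{
    eeprom_write_block(s, &ee_defaults, sizeof(*s));
}

/* Returns 1 if valid defaults were found, 0 if the block is blank. */
uint8_t settings_load(struct supply_settings *s)
{
    eeprom_read_block(s, &ee_defaults, sizeof(*s));
    return (s->magic == SETTINGS_MAGIC);
}
```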

To simplify construction, I decided to use a SIMM100 SimmStick module made by Lawicel. The SIMM100 is a 3.5″ × 2.0″ PCB containing the ’8535, power supply regulator, reset function, RS-232 interface, ADC, ISP programming headers, and a 30-pin SimmStick-style bus. I’ve used this module for prototypes several times in the past, but this is the first time I’ve actually incorporated one into a finished project. …

The I/O lines needed to operate the three SPI channels and interface the two rotary encoders come out through the 30-pin bus. As you now know, I designed the ground-referenced power supply PCB to include space to mount the SIMM100 module, as well as the IsoLoop isolators. The SIMM100 mounts at right angles to this PCB; it’s hard-wired in place using 90° header pins. The floating power supplies share a virtually identical PCB layout, apart from being smaller because they lack the traces and circuitry associated with the SIMM100 bus and the IsoLoop isolators.

The SIMM100 module has headers for the ISP programming cable and RS-232 port. I used its ADC header to run the LCD by reassigning six of the ADC port pins to general I/O pins.

When I buy in bulk, it’s inevitable that by the time I use the last item in my stock, something better has taken its place. After contacting Lawicel to request a .jpg image of the SIMM100 for this article, I was introduced to the new line of AVR modules that the company is developing.

Rather than a SimmStick-based module, the new modules are 24- and 40-pin DIP modules that are meant to replace Basic Stamps. Instead of using PIC chips/serial EEPROM and a Basic Interpreter, they implement the most powerful members of Atmel’s AVR family—the Mega chips.

Mega chips execute compiled code from fast internal flash memory and contain much more RAM and EEPROM than Stamps. Even though flash programming AVR-family chips is easy through SPI, using inexpensive printer port programming cables, these modules go one step further by incorporating RS-232 flash memory programming. This makes field updates a snap. …

The user interface I settled on consisted of a common 4 × 20 LCD panel along with two rotary encoders. One encoder is used to scroll through the various power supply parameters, and the other adjusts the selected parameter. The cost of LCDs and rotary encoders is reasonable these days. Being able to eliminate the substantial cost of six DPMs and six 10-turn potentiometers was the main reason for choosing an MCU-based design in the first place.
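
One common way to service such encoders is a small table-driven quadrature decoder polled from a timer interrupt. The following sketch assumes the encoder’s A/B outputs land on two adjacent port pins; the actual pin assignments and polling scheme in the project are not specified.

```c
/*
 * Sketch only: table-driven quadrature decoding for one rotary encoder,
 * intended to be called periodically (e.g., from a timer interrupt).
 * Reading the encoder's A/B outputs on PC0/PC1 is an assumption.
 */
#include <avr/io.h>
#include <stdint.h>

static volatile int16_t encoder_count;
static uint8_t prev_state;

/* Indexed by (previous A/B state << 2) | current A/B state.
 * +1 and -1 are single quadrature steps; 0 covers no change or an
 * invalid (bounce) transition.
 */
static const int8_t qdec_table[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

void encoder_poll(void)
{
    uint8_t state = PINC & 0x03;    /* A on PC0, B on PC1 (assumed) */
    encoder_count += qdec_table[(prev_state << 2) | state];
    prev_state = state;
}
```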

Brian Millier’s article first appeared in Circuit Cellar 149.

OptiMOS Product Family Exceeds 95% Efficiency

Infineon Technologies recently launched the OptiMOS 5 25- and 30-V product family, the next generation of power MOSFETs in standard discrete packages, a new class of power stages named Power Block, and an integrated power stage, DrMOS 5×5. Together with Infineon’s driver and digital controller products, the company delivers full system solutions for applications such as server, client, datacom, and telecom.

The newly introduced OptiMOS family offers benchmark solutions with efficiency improvements of around 1% across the whole load range compared to the previous generation, exceeding 95% peak efficiency in a typical server voltage regulator design. This improved performance stems, for example, from a 50% reduction in switching losses (Qswitch) compared to the previous OptiMOS technology. Implementing the new OptiMOS 25 V would thus lead to energy savings of 26.3 kWh per year for a single 130-W server CPU running 365 days a year.

The launch of the OptiMOS product family is accompanied by the introduction of a new packaging technology offering a further reduction in PCB area. Used in the Power Block product family and in the integrated power stage DrMOS 5×5, it features a source-down low-side MOSFET for improved thermal performance, cutting thermal resistance by 50% compared to standard package solutions such as the SuperSO8.

Infineon’s Power Block is a leadless SMD package that combines the low-side and high-side MOSFETs of a synchronous DC/DC converter in a 5.0 mm × 6.0 mm package outline. With Power Block, customers can shrink their designs by up to 85% by replacing two separate discrete packages, such as SuperSO8 or SO-8. Both the small package outline and the interconnection of the two MOSFETs within the package minimize loop inductance for the best system performance.

OptiMOS 5 25 V is also used in an integrated power stage, the DrMOS 5×5, which combines a driver and two MOSFETs for a total PCB area of 25 mm². The integrated driver-plus-MOSFET solution shortens design time and is easy to design in. Additionally, the power stage includes a temperature sensor accurate to ±5°C (compared with ±10°C for an external sensor), which enables higher system reliability and performance.

Samples of the new OptiMOS 25- and 30-V devices in SuperSO8, S3O8, and Power Block packages, with on-state resistances from 0.9 to 3.3 mΩ, are available. Additional products with a monolithically integrated Schottky-like diode, as well as further 30-V products, will be available from Q2 2015 onwards. DrMOS 5×5 will be released in Q2 2015; samples are available.

Source: Infineon

Two Source/Measure Units for N6700 Modular Power Systems

Keysight Technologies recently added two source/measure units (SMUs) to its N6700 Series modular power systems. The N6785A two-quadrant SMU is for battery drain analysis. The N6786A two-quadrant SMU is for functional test. Both SMUs provide power output up to 80 W.

The two new SMUs expand the popular N6780A Series SMU family by offering up to 4× more power than the previous models. The new models offer superior sourcing, measurement, and analysis so engineers can deliver the best possible battery life in their devices. The N6785A and N6786A SMUs allow engineers to test devices that require current up to 8 A, such as tablets, large smartphones, police/military handheld radios, and components of these devices.

The N6780A Series SMUs eliminate the challenges of measuring dynamic currents with a feature called seamless measurement ranging. With seamless measurement ranging, engineers can precisely measure dynamic currents without any glitches or disruptions to the measurement. As the current drawn by the device under test (DUT) changes, the SMU automatically detects the change and switches to the current measurement range that will return the most precise measurement.

When combined with the SMU’s built-in 18-bit digitizer, seamless measurement ranging enables unprecedented effective vertical resolution of approximately 28 bits. This capability lets users visualize current drain from nA to A in one pass. All the data needed is presented in a single picture, which helps users unlock insights to deliver exceptional battery life.

The new SMUs are a part of the N6700 modular power system, which consists of the N6700 low-profile mainframes for ATE applications and the N6705B DC power analyzer mainframe for R&D. The product family has four mainframes and more than 30 DC power modules, providing a complete spectrum of solutions, from R&D through design validation and manufacturing.

Source: Keysight Technologies 

Quad Channel DPWM Step-Down Controller

Exar Corp. has introduced the XR77128, a universal PMIC that drives up to four independently controlled external DrMOS power stages at currents greater than 40 A for the latest 64-bit ARM processors, FPGAs, DSPs, and ASICs. DrMOS technology is quickly growing in popularity in telecom and networking applications. These same applications find value in Exar’s Programmable Power technology, which allows low component count, rapid development, easy system integration, dynamic control, and telemetry. Depending on output current requirements, each output can be independently configured to directly drive external MOSFETs or DrMOS power stages.

The XR77128 is quickly configured to power nearly any FPGA, SoC, or DSP system through the use of Exar’s design tool, PowerArchitect, and it is programmed through an I²C-based, SMBus-compliant serial interface. It can also monitor and dynamically control and configure the power system through the same I²C interface. Five configurable GPIOs allow fast system integration for fault reporting, status, or sequencing control. A new Arduino-based development platform allows software engineers to begin code development for telemetry and dynamic control long before their hardware is available.

The XR77128 is available in a RoHS-compliant, green/halogen free space-saving 7 mm × 7 mm TQFN. It costs $7.75 in 1000-piece quantities.

Source: Exar Corp.

Industry’s Smallest Dual 3A/Single 6A Step-Down Power Module

Intersil Corp. recently announced the ISL8203M, a dual 3-A/single 6-A step-down DC/DC power module that simplifies power supply design for FPGAs, ASICs, microprocessors, DSPs, and other point-of-load conversions in communications, test and measurement, and industrial systems. The module’s compact 9.0 mm × 6.5 mm × 1.83 mm footprint combined with industry-leading 95% efficiency provides power system designers with a high-performance, easy-to-use solution for low-power, low-voltage applications.

The ISL8203M is a complete power system in an encapsulated module that includes a PWM controller, synchronous switching MOSFETs, inductors, and passive components to build a power supply supporting an input voltage range of 2.85 to 6 V. With an adjustable output voltage between 0.8 and 5 V, you can use one device to build a single 6-A or dual-output 3-A power supply.

Designed to maximize efficiency, the ISL8203M power module offers best-in-class 15°C/W thermal performance and delivers 6 A at 85°C without the need for heatsinks or a fan. The ISL8203M leverages Intersil’s patented technology and advanced packaging techniques to deliver high power density and the best thermal performance in the industry, allowing the ISL8203M to operate at full load over a wide temperature range. The power module also provides over-temperature, over-current, and under-voltage lockout protection, further enhancing its robustness and reliability.

Features and specifications:
•       Dual 3-A or single 6-A switching power supply
•       High efficiency, up to 95%
•       Wide input voltage range: 2.85 to 6 V
•       Adjustable output range: 0.8 to 5 V
•       Internal digital soft-start: 1.5 ms
•       External synchronization up to 4 MHz
•       Overcurrent protection

The ISL8203M power module is available in a 9 mm × 6.5 mm QFN package. It costs $5.97 in 1,000-piece quantities. The ISL8203MEVAL2Z evaluation board costs $67.

Source: Intersil

NexFET N-Channel Power MOSFETs Achieve Industry’s Lowest Resistance

Texas Instruments recently introduced 11 new N-channel power MOSFETs to its NexFET product line, including the 25-V CSD16570Q5B and 30-V CSD17570Q5B for hot-swap and ORing applications with the industry’s lowest on-resistance (Rdson) in a QFN package. In addition, TI’s new 12-V FemtoFET CSD13383F4 for low-voltage battery-powered applications achieves the lowest resistance, 84% below competing devices, in a tiny 0.6 mm × 1 mm package.

The CSD16570Q5B and CSD17570Q5B NexFET MOSFETs deliver higher power conversion efficiencies at higher currents, while ensuring safe operation in computer server and telecom applications. For instance, the 25-V CSD16570Q5B supports a maximum of 0.59 mΩ of Rdson, while the 30-V CSD17570Q5B achieves a maximum of 0.69 mΩ of Rdson.

TI’s new CSD17573Q5B and CSD17577Q5A can be paired with the LM27403 for DC/DC controller applications to form a complete synchronous buck converter solution. The CSD16570Q5B and CSD17570Q5B NexFET power MOSFETs can be paired with a TI hot swap controller such as the TPS24720.

The currently available products range in price from $0.10 for the FemtoFET CSD13383F4 to $1.08 for the CSD17670Q5B and CSD17570Q5B in 1,000-unit quantities.

Source: Texas Instruments

12-W Receiver IC for Wireless Mobile Device Charging

At CES 2015, Toshiba America Electronic Components introduced its newest IC enabling wireless mobile device charging. The TC7765WBG wireless power receiver controller IC can manage the 12-W power transfer required for the wireless charging of tablet devices. The TC7765WBG is compatible with the Qi low-power specification version 1.1 defined by the Wireless Power Consortium (WPC). It delivers a user experience comparable to that of conventional wired charging for tablets, as well as smartphones and other portable devices.

The TC7765WBG was built with Toshiba’s mixed-signal process using a high-performance MOSFET design that maximizes power efficiency and thermal performance. The IC combines modulation and control circuitry with a rectifier power pickup, I2C interface, and circuit protection functions. Compliance with the “Foreign Object Detection” (FOD) aspect of the Qi specification prevents heating of any metal objects in the path of wireless power transfer between the receiver and the transmitter.

The 12-W TC7765WBG is designed in a compact WCSP-28 2.4 mm × 3.67 mm × 0.5 mm package. This further facilitates design-in and contributes to the new chipset’s backward compatibility with the lower-power receiver IC. Combining the TC7765WBG with a copper coil, charging IC, and peripheral components creates a wireless power receiver. Joining the receiver with a Qi-compliant wireless power transmitter containing a Toshiba wireless power transmitter IC (e.g., TB6865AFG Enhanced version) forms a complete wireless power charging solution.

Toshiba announced that samples of the TC7765WBG wireless power receiver IC will be available at the end of January, with mass production set to begin in Q2 2015.

ARM-based Embedded Power Family for Smart Motor Control

In mid-November 2014, Infineon announced an ARM-based Embedded Power family of bridge drivers offering an unmatched level of integration to address the growing trend toward intelligent motor control for a wide range of automotive applications. The Embedded Power family offers 32-bit performance in an application space that is typically associated with 16-bit devices. Sample quantities of the first members of the Embedded Power family are available: the TLE987x series for three-phase (brushless DC) motors and the TLE986x series for two-phase (DC) motors.

Infineon combined its proprietary automotive-qualified 130-nm Smart Power manufacturing technology with its vast experience in motor control drivers to create the new, highly integrated Embedded Power family, available in a standard QFN package measuring only 7 mm × 7 mm. Where previous multi-chip designs needed a standalone microcontroller, a bridge driver, and a LIN transceiver, automotive system suppliers now benefit from motor control designs with a minimal external component count. The newly released Embedded Power products reduce the component count to fewer than 30, allowing all functions and associated external components for the motor control to fit in a PCB area of merely 3 cm². As a result, the Embedded Power family enables the integration of electronics close to the motor for true mechatronic designs.

Both the TLE987x and TLE986x bridge drivers use the ARM Cortex-M3 processor. Their peripheral set includes a current sensor, a successive-approximation 10-bit ADC synchronized with the capture and compare unit (CAPCOM6) for PWM control, and 16-bit timers. A LIN transceiver is integrated to enable communication with the devices, along with a number of general-purpose I/Os. Both series include an on-chip linear voltage regulator to supply external loads. Their flash memory is scalable from 36 to 128 KB. They operate from 5.4 up to 28 V. An integrated charge pump enables low-voltage operation using only two external capacitors. The bridge drivers feature programmable charging and discharging currents. The patented current slope control technique optimizes the system’s EMC behavior for a wide range of MOSFETs. The products can withstand load-dump conditions up to 40 V while maintaining extended supply voltage operation down to 3.0 V, where the microcontroller and the flash memory remain fully functional.

The TLE987x series of bridge drivers addresses three-phase (BLDC) motor applications such as fuel pumps, HVAC blowers, engine cooling fans, and water pumps. It supports sensor-less and sensor-based (including field-oriented control) BLDC motor applications addressed by LIN or controlled via PWM.

The TLE986x series is optimized to drive two-phase DC motors by integrating four NFET drivers. The TLE986x series is suitable for applications such as sunroofs, power window lifts, and generic smart motor control via an NFET H-bridge.

Engineering samples of the TLE987x and TLE986x bridge drivers in a space-saving VQFN-48 package are available, with volume production planned to start in Q1 2015. For both series, several derivatives are available, differing, for example, in system clock (24 or 40 MHz) and flash size.

Source: Infineon

 

Data Center Power & Cost Management

Computers drive progress in today’s world. Both individuals and industry depend on a spectrum of computing tools. Data centers are at the heart of many computational processes, from communication to scientific analysis. They also consume over 3% of the total power in the United States, and this amount continues to increase.[1]

Data centers run jobs submitted by their customers on a shared resource: the data center’s servers. Data centers and their customers negotiate a service-level agreement (SLA), which establishes the average expected job completion time. Servers are allocated for each job and must satisfy the job’s SLA. Job-scheduling software already provides some solutions for budgeting data center resources.

Data center construction and operation include fixed and accrued costs. Initial building expenses, such as purchasing and installing computing and cooling equipment, are one-time costs and are generally unavoidable. An operational data center must power this equipment, contributing an ongoing cost. Power management and the associated costs define one of the largest challenges for data centers.

To control these costs, the future of data centers lies in active participation in advanced power markets. More efficient cooling also provides cost-saving opportunities, but it requires infrastructure updates, which are costly and impractical for existing data centers. Fortunately, existing physical infrastructure can support participation in demand-response programs, such as peak shaving, regulation services (RS), and frequency control. In demand-response programs, consumers adjust their power consumption based on real-time power prices. The most promising mechanism for data center participation is RS.

Independent system operators (ISOs) manage demand response programs like RS. Each ISO must balance the power supply with the demand, or load, on the power grid in the region it governs. RS program participants provide necessary reserves when demand is high or consume more energy when demand is lower than the supply. The ISO communicates this need by transmitting a regulation signal, which the participant must follow with minimal error. In return, ISOs provide monetary incentives to the participants.

This essay appears in Circuit Cellar #293 (December 2014).

 
Data centers are ideal participants for demand response programs. A single data center requires a significant amount of power from the power grid. For example, the Massachusetts Green High-Performance Computing Center (MGHPCC), which opened in 2012, has power capacity of 10 MW, which is equivalent to as many as 10,000 homes (www.mghpcc.org). Additionally, some workload types are flexible; jobs can be delayed or sped up within the given SLA.

Data centers have the ability to vary power consumption based on the ISO regulation signal. Server sleep states and dynamic voltage and frequency scaling (DVFS) are power modulation techniques. When the regulation signal requests lower power consumption from participants, data centers can put idle servers to sleep. This successfully reduces power consumption but is not instantaneous. DVFS performs finer power variations; power in an individual server can be quickly reduced in exchange for slower processing speeds. Demand response algorithms for data centers coordinate server state changes and DVFS tuning given the ISO regulation signal.
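
As a very rough illustration of how sleep states and DVFS might be coordinated against a power target, the toy planner below splits a requested total power between sleeping servers and a single fleet-wide DVFS level. All of the constants are invented for the example, and a real demand-response controller that tracks an ISO regulation signal would be considerably more sophisticated.

```c
/*
 * Sketch only: a toy coordinator that splits a requested power change
 * between server sleep states (coarse, slow) and DVFS (fine, fast).
 * All constants are illustrative assumptions, not measured values.
 */
#include <stdio.h>

#define N_SERVERS        100
#define P_ACTIVE_MAX_W   300.0   /* assumed per-server power at full speed   */
#define P_ACTIVE_MIN_W   150.0   /* assumed per-server power at lowest DVFS  */
#define P_SLEEP_W         15.0   /* assumed per-server sleep power           */

/* Given a total power target, decide how many servers stay awake and what
 * DVFS level (0 = slowest, 1 = fastest) the awake servers run at.
 */
static void plan_power(double target_w, int *awake, double *dvfs_level)
{
    *awake = N_SERVERS;
    for (;;) {
        double asleep_w = (N_SERVERS - *awake) * P_SLEEP_W;
        double min_w = asleep_w + *awake * P_ACTIVE_MIN_W;
        double max_w = asleep_w + *awake * P_ACTIVE_MAX_W;

        if (target_w >= min_w || *awake == 0) {
            /* DVFS alone can reach the target: interpolate a level 0..1. */
            double span = max_w - min_w;
            *dvfs_level = (span > 0.0) ? (target_w - min_w) / span : 0.0;
            if (*dvfs_level < 0.0) *dvfs_level = 0.0;
            if (*dvfs_level > 1.0) *dvfs_level = 1.0;
            return;
        }
        (*awake)--;        /* still too high: put one more server to sleep */
    }
}

int main(void)
{
    int awake;
    double level;

    plan_power(18000.0, &awake, &level);   /* 18-kW target, purely illustrative */
    printf("awake servers: %d, DVFS level: %.2f\n", awake, level);
    return 0;
}
```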

Accessing data from real data centers is a challenge. Demand response algorithms are tested via simulations of simplified data center models. Before data centers can participate in RS, algorithms must account for the complexity in real data centers.

Data collection within data center infrastructure enables more detailed models. Monitoring aids performance evaluation, model design, and operational changes to data centers. As part of my work, I analyze power, load, and cooling data collected from the MGHPCC. Sensor integration for data collection is essential to the future of data center power and cost management.

The power grid also benefits from data center participation in demand response programs. Renewable energy sources, such as wind and solar, are more environmentally friendly than traditional fossil fuel plants. However, the intermittent nature of such renewables creates a challenge for ISOs to balance the supply and load. Data center participation makes larger scale incorporation of renewables into the smart grid possible.

The future of data centers requires the management of power consumption in order to control costs. Currently, RS provides the best opportunities for existing data centers. According to preliminary results, successful participation in demand response programs could yield monetary savings around 50% for data centers.[2]


[1] J. Koomey, “Growth in Data Center Electricity Use 2005 to 2010,” Analytics Press, Oakland, August 1, 2010, www.analyticspress.com/datacenters.html.

[2] H. Chen, M. Caramanis, and A. K. Coskun, “The Data Center as a Grid Load Stabilizer,” in Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 105–112, January 2014.


Annie Lane studies computer engineering at Boston University, where she performs research as part of the Performance and Energy-Aware Computing Lab (www.bu.edu/peaclab). She received the Clare Boothe Luce Scholar Award in 2014. Annie received additional funding from the Undergraduate Research Opportunity Program (UROP) and Summer Term Alumni Research Scholars (STARS). Her research focuses on power and cost optimization strategies in data centers.

 

Budgeting Power in Data Centers

In my May 2014 Circuit Cellar article, “Data Centers in the Smart Grid” (Issue 286), I discussed the growing data center energy challenge and a novel potential solution that modulates data center power consumption based on requests from the electricity provider. In the same article, I elaborated on how data centers can provide “regulation service reserves” by tracking a dynamic power regulation signal broadcast by the independent system operator (ISO).

Demand-side provision of regulation service reserves is one way of providing capacity reserves that is gaining traction in US energy markets. Frequency control reserves and operating reserves are other examples. These reserves are similar to each other in the sense that the demand side, such as a data center, modulates its power consumption in reaction to local measurements and/or to signals broadcast by the ISO. The time scale of modulation, however, differs depending on the reserve type: modulation can be done in real time, every few seconds, or every few minutes.

In addition to the emerging mechanisms for providing capacity reserves in the grid, there are several other options for a data center to manage its electricity cost. For example, data center operators can negotiate electricity pricing with the ISO such that the electricity cost is lower when the data center consumes power below a given peak value. In this scenario, the electricity cost is significantly higher if the center exceeds the given limit. “Peak shaving,” therefore, refers to actively controlling the peak power consumption using data center power-capping mechanisms. Other cost and capacity management mechanisms include load shedding, which temporarily reduces load in a data center; load shifting, which delays executing loads to a future time; and migrating a subset of loads to other facilities, if such an option is available.

All of these mechanisms require the data center to be able to dynamically cap its power within a tolerable error margin. Even in the absence of advanced cost management strategies, a data center generally needs to operate under a predetermined maximum power consumption level, as the data center’s electricity distribution infrastructure needs to be built accordingly.

This article appears in Circuit Cellar 292.

Most data centers today run a diverse set of workloads (applications) at a given time. Therefore, an interesting sub-problem of the power capping problem is how to distribute a given total power cap efficiently among the computational, cooling, and other components in a data center. For example, if there are two types of applications running in a data center, should one give equal power caps to the servers running each of these applications, or should one favor one of the applications?

Even when the loads have the same level of urgency or priority, designating equal power to different types of loads does not always lead to efficient operation. This is because the power-performance trade-offs of applications vary significantly. One application may meet user quality-of-service (QoS) expectations or service level agreements (SLAs) while consuming less power compared to another application.

Another reason that makes the budgeting problem interesting is the temperature and cooling related heterogeneity among the servers in a data center. Even when servers in a data center are all of the same kind (which is rarely the case), their physical location in the data center, the heat recirculation effects (which refer to some of the heat output of servers being recirculated back into the center and affecting the thermal dynamics), and the heat transfer among the servers create differences in temperatures and cooling efficiencies of servers. Thus, while budgeting, one may want to dedicate larger power caps to servers that are more cooling-efficient.

As the computational units in a data center need to operate at safe temperatures below manufacturer-provided limits, the budgeting policy in the data center needs to make sure a sufficient power budget is saved for the cooling elements. On the other hand, if there is over-cooling, then the overall efficiency drops because there is a smaller power budget left for computing.

I refer to the problem of how to efficiently allocate power to each server and to the cooling units as the “power budgeting” problem. The rest of the article elaborates on how this problem can be formulated and solved in a practical scenario.

Characterizing Loads

For distributing a total computational power budget in an application-aware manner, one needs to have an estimate of the relationship between server power and application performance. In my lab at Boston University, my students and I studied the relationship between application throughput and server power on a real-life system, and constructed empirical models that mimic this relationship.

Figure 1 demonstrates how the relationship between the instruction throughput and power consumption of a specific enterprise server changes depending on the application. Another interesting observation from this figure is that the performance of some applications saturates beyond a certain power value. In other words, even when a larger power budget is given to such an application by letting it run with more threads (or, in other cases, letting the processor operate at a higher speed), the application throughput does not improve further.

Figure 1: The plot demonstrates billion of instructions per second (BIPS) versus server power consumption as measured on an Oracle enterprise server including two SPARC T3 processors.

Estimating the slope of the throughput-power curve and the potential performance saturation point helps make better power budgeting decisions. In my lab, we constructed a model that estimates the throughput given server power and hardware performance counter measurements. In addition, we analyzed the potential performance bottlenecks resulting from a high number of memory accesses and/or the limited number of software threads in the application. We were able to predict the saturation point for each application via a regression-based equation constructed based on this analysis. Predicting the maximum server power using this empirical modeling approach gave a mean error of 11 W for our 400-to-700-W enterprise server.[1]
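
The sketch below shows what evaluating such an empirical model might look like: a piecewise-linear throughput-versus-power curve that flattens at a saturation point. The structure mirrors the behavior described above, but the coefficients and saturation power are made-up examples, not the regression results from the cited work.

```c
/*
 * Sketch only: a piecewise-linear throughput-vs.-power model with a
 * saturation point. The coefficients are illustrative, not measured.
 */
#include <stdio.h>

struct app_model {
    double bips_per_watt;   /* slope of the throughput-power curve      */
    double bips_offset;     /* throughput at the minimum server power   */
    double p_min_w;         /* minimum server power                     */
    double p_sat_w;         /* power beyond which throughput saturates  */
};

double predict_bips(const struct app_model *m, double p_cap_w)
{
    double p = p_cap_w;
    if (p < m->p_min_w) p = m->p_min_w;
    if (p > m->p_sat_w) p = m->p_sat_w;   /* extra power gives no more BIPS */
    return m->bips_offset + m->bips_per_watt * (p - m->p_min_w);
}

int main(void)
{
    /* Hypothetical application that saturates at a 550-W server power cap. */
    struct app_model app = { 0.12, 20.0, 400.0, 550.0 };

    for (double cap = 400.0; cap <= 700.0; cap += 100.0)
        printf("cap %.0f W -> predicted %.1f BIPS\n", cap, predict_bips(&app, cap));
    return 0;
}
```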

Such methods for power-performance estimations highlight the significance of telemetry-based empirical models for efficient characterization of future systems. The more detailed measurement capabilities newer computing systems can provide—such as the ability to measure power consumption of various sub-components of a server—the more accuracy one can achieve in constructing models to help with the data center management.

Temperature, Once Again

In several of my earlier articles this year, I emphasized the key role of temperature awareness for improving computing energy efficiency. This key role is a result of the high cost of cooling, the fact that server energy dynamics also rely on temperature substantially (i.e., consider the interactions among temperature, fan power and leakage power), and the impact of processor thermal management policies on performance.

Solving the budgeting problem efficiently, therefore, relies on having good estimates for how a given power allocation among the servers and cooling units would affect the temperature. The first step is estimating the CPU temperature for a given server power cap. In my lab, we modeled the CPU temperature as a function of the CPU junction-to-air thermal resistance, CPU power, and the inlet temperature to the server. CPU thermal resistance is determined by the hardware and packaging choices, and can be characterized empirically. For a given total server power, CPU power can be estimated using performance counter measurements in a similar way to estimating the performance given a server cap, as described above (see Figure 1). Our simple empirical temperature model was able to estimate temperature with a mean error of 2.9°C in our experiments on an Oracle enterprise server.[1]
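
In code, the server-level relation reduces to two one-line functions: the forward model gives the CPU temperature for a given power, and its inverse gives the largest CPU power cap that keeps the junction below a redline. This is only a sketch of the relation described above; the thermal resistance and redline values come from hardware characterization and are not reproduced here.

```c
/*
 * Sketch only: the simple server thermal relation described above,
 *   T_cpu = T_inlet + R_ja * P_cpu,
 * and its inverse, which yields the largest CPU power cap that keeps
 * the junction below a redline temperature. Parameter values omitted.
 */
double cpu_temperature_c(double t_inlet_c, double r_ja_c_per_w, double p_cpu_w)
{
    return t_inlet_c + r_ja_c_per_w * p_cpu_w;
}

double max_cpu_power_w(double t_redline_c, double t_inlet_c, double r_ja_c_per_w)
{
    return (t_redline_c - t_inlet_c) / r_ja_c_per_w;
}
```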

Heat distribution characteristics of a data center depend strongly on the cooling technology used. For example, traditional data centers use a hot aisle-cold aisle configuration, where the cold air from the computer room air conditioners (CRAC) and the hot air coming out of the servers are separated by the rows of racks that contain the servers. The second step in thermal estimation, therefore, has to do with estimating the impact of servers on one another and the overall impact of the cooling system.

In a traditional hot-cold aisle setting, the server inlet temperatures can be estimated based on a heat distribution matrix, the power consumption of all the servers, and the CRAC air temperature (the cold air input to the data center). The heat distribution matrix can be considered a lumped model representing the impact of heat recirculation and the airflow properties together in a single N × N matrix, where N is the number of servers.[2]
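
The lumped model translates directly into a small matrix-vector computation. The sketch below assumes a tiny four-server example; in practice the heat distribution matrix would come from measurement or CFD characterization.

```c
/*
 * Sketch only: estimating server inlet temperatures from the CRAC supply
 * temperature, the server power vector, and a lumped N x N heat
 * distribution matrix D, as described above:
 *   T_inlet[i] = T_crac + sum_j D[i][j] * P[j]
 * The matrix and power values are placeholders.
 */
#define N_SRV 4   /* tiny example size */

void inlet_temperatures(const double d[N_SRV][N_SRV],
                        const double p_server_w[N_SRV],
                        double t_crac_c,
                        double t_inlet_c[N_SRV])
{
    for (int i = 0; i < N_SRV; i++) {
        double recirculated = 0.0;
        for (int j = 0; j < N_SRV; j++)
            recirculated += d[i][j] * p_server_w[j];   /* heat recirculation term */
        t_inlet_c[i] = t_crac_c + recirculated;
    }
}
```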

Recently, some newer data centers have preferred in-row coolers that leverage liquid cooling to improve cooling efficiency. In such settings, heat recirculation effects are expected to be less significant, as most of the heat output of the servers is removed immediately from the data center.

In my lab, my students and I used low-cost data center temperature models to enable fast dynamic decisions.[1] Detailed thermal simulation of data centers is possible through computational fluid dynamics tools. Such tools, however, typically require prohibitively long simulation times.

Budgeting Optimization

What should the goal be during power budgeting? Maximizing overall throughput in the data center may seem like a reasonable goal. However, such a goal would favor allocating larger power caps to applications with higher throughput, and absolute throughput does not necessarily indicate whether an application’s QoS demand is met. For example, an application with lower BIPS may have a stricter QoS target.

Consider this example for a better budgeting metric: the fair speed-up metric computes the harmonic mean of per-server speedup (i.e., per-server speedup is the ratio of measured BIPS to the maximum BIPS for an application). The purpose of this metric is to ensure none of the applications are starving while maximizing overall throughput.
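
A minimal sketch of the fair speed-up computation, assuming the per-application maximum BIPS values are already known, might look like this:

```c
/*
 * Sketch only: the fair speed-up metric described above, i.e., the harmonic
 * mean of per-server speedups, where each speedup is measured BIPS divided
 * by that application's maximum BIPS.
 */
double fair_speedup(const double measured_bips[], const double max_bips[], int n)
{
    double sum_inv = 0.0;
    for (int i = 0; i < n; i++) {
        double speedup = measured_bips[i] / max_bips[i];
        sum_inv += 1.0 / speedup;
    }
    return (double)n / sum_inv;     /* harmonic mean of the speedups */
}
```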

It is also possible to impose constraints on the budgeting optimization such that a specific performance or throughput level is met for one or more of the applications. The ability to meet such constraints relies strongly on the ability to estimate the power-versus-performance trends of the applications. Thus, the empirical models I mentioned above are also essential for delivering more predictable performance to users.

Figure 2 demonstrates how the hill-climbing strategy my students and I designed for optimizing fair speedup evolves. The algorithm starts by setting the CRAC temperature to its last known optimal value, which is 20.6°C in this example. The CRAC power consumption corresponding to providing air input to the data center at 20.6°C can be computed using the relationship between CRAC temperature and the ratio of computing power to cooling power.[3] This relationship can often be derived from datasheets for the CRAC units and/or for the data center cooling infrastructure.

Figure 2: The budgeting algorithm starts from the last known optimal CRAC temperature value, and then iteratively aims to improve on the objective.

Once the cooling power is subtracted from the overall cap, the algorithm allocates the remaining power among the servers with the objective of maximizing the fair speedup. Other constraints in the optimization formulation prevent any server from exceeding manufacturer-given redline temperatures and ensure that each server receives a feasible power cap, one that falls between the server’s minimum and maximum power consumption levels.

The algorithm then iteratively searches for a better solution, as demonstrated in steps 2 to 6 in Figure 2. Once the algorithm detects that the fair speedup is decreasing (e.g., the fair speedup in step 6 is less than the speedup in step 5), it converges to the solution computed in the previous step (e.g., step 5 in the example). Note that cooler CRAC temperatures typically imply a larger amount of cooling power, so the fair speedup drops. However, as the CRAC temperature increases beyond a point, the performance of the hottest servers is degraded to keep CPU temperatures below the redline; thus, further increasing the CRAC temperature is no longer useful (as in step 6).
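
The overall shape of that search can be sketched as follows. The inner optimization (subtracting cooling power from the cap, allocating the remainder among the servers, and respecting redlines) is replaced here by a toy stand-in, and the search only walks toward warmer CRAC settings, so this is an outline of the idea rather than the algorithm evaluated in the paper.

```c
/*
 * Sketch only: the outer hill-climbing loop over CRAC temperature described
 * above. evaluate_fair_speedup() is a toy stand-in; the real version would
 * subtract cooling power from the total cap, allocate the remainder among
 * the servers, and return the resulting fair speedup.
 */
#include <stdio.h>

static double evaluate_fair_speedup(double t_crac_c, double total_cap_w)
{
    (void)total_cap_w;                 /* unused in this toy model          */
    double d = t_crac_c - 24.0;        /* pretend the optimum is near 24 C  */
    return 1.0 - 0.01 * d * d;
}

static double best_crac_setting(double t_start_c, double t_step_c,
                                double t_max_c, double total_cap_w)
{
    double best_t  = t_start_c;
    double best_fs = evaluate_fair_speedup(t_start_c, total_cap_w);

    /* Raise the CRAC temperature step by step until the fair speedup drops,
     * then converge to the previous (best) step. */
    for (double t = t_start_c + t_step_c; t <= t_max_c; t += t_step_c) {
        double fs = evaluate_fair_speedup(t, total_cap_w);
        if (fs < best_fs)
            break;
        best_fs = fs;
        best_t  = t;
    }
    return best_t;
}

int main(void)
{
    /* 20.6 C starting point as in the example; the cap value is arbitrary. */
    double t = best_crac_setting(20.6, 0.5, 30.0, 250000.0);
    printf("selected CRAC temperature: %.1f C\n", t);
    return 0;
}
```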

This iterative algorithm took less than a second of running time using Matlab CVX[4] in our experiments for a small data center of 1,000 servers on an average desktop computer. This result indicates that, with an optimized implementation, the algorithm could run in much less time, allowing frequent real-time re-budgeting of power in a modern data center with a larger number of servers. Our algorithm improved fair speedup and BIPS per watt by 10% to 20% compared to existing budgeting techniques.

Challenges

The initial methods and results I discussed above demonstrate promising energy efficiency improvements; however, there are many open problems for data center power budgeting.

First, the above discussion does not consider loads that depend on one another. For example, high-performance computing applications often have heavy communication among server nodes. This means that the budgeting method needs to account for the impact of inter-node communication on performance estimates, as well as when making job allocation decisions in data centers.

Second, especially for data centers with a non-negligible amount of heat recirculation, thermally-aware job allocation significantly affects CPU temperature. Thus, job allocation should be optimized together with budgeting.

Data centers also contain elements other than servers that consume significant amounts of power, such as storage units, and the servers themselves are often a heterogeneous set. Thus, a challenge lies in budgeting power across heterogeneous computing, storage, and networking elements.

Finally, the discussion above focuses on budgeting a total power cap among servers that are actively running applications. One can, however, also adjust the number of servers actively serving the incoming loads (by putting some servers into sleep mode/turning them off) and also consolidate the loads if desired. Consolidation often decreases performance predictability. The server provisioning problem needs to be solved in concert with the budgeting problem, taking the additional overheads into account. I believe all these challenges make the budgeting problem an interesting research problem for future data centers.

 

Ayse CoskunAyse K. Coskun (acoskun@bu.edu) is an assistant professor in the Electrical and Computer Engineering Department at Boston University. She received MS and PhD degrees in Computer Science and Engineering from the University of California, San Diego. Coskun’s research interests include temperature and energy management, 3-D stack architectures, computer architecture, and embedded systems. She worked at Sun Microsystems (now Oracle) in San Diego, CA, prior to her current position at BU. Coskun serves as an associate editor of the IEEE Embedded Systems Letters.

 

 
[1] O. Tuncer, K. Vaidyanathan, K. Gross, and A. K. Coskun, “CoolBudget: Data Center Power Budgeting with Workload and Cooling Asymmetry Awareness,” in Proceedings of IEEE International Conference on Computer Design (ICCD), October 2014.
[2] Q. Tang, T. Mukherjee, S. K. S. Gupta, and P. Cayton, “Sensor-Based Fast Thermal Evaluation Model for Energy Efficient High-Performance Datacenters,” in ICISIP-06, October 2006.
[3] J. Moore, J. Chase, P. Ranganathan, and R. Sharma, “Making Scheduling ‘Cool’: Temperature-Aware Workload Placement in Data Centers,” in USENIX ATC-05, 2005.
[4] CVX Research, “CVX: Matlab Software for Disciplined Convex Programming,” Version 2.1, September 2014, http://cvxr.com/cvx/.

Small High-Current Power Modules

 

Exar Corp. recently announced the 10-A XR79110 and 15-A XR79115 single-output, synchronous step-down power modules. The modules will be available in mid-November in RoHS-compliant, green/halogen-free, QFN packages.

In a product release, Exar noted that “both devices provide easy to use, fully integrated power converters including MOSFETs, inductors, and internal input and output capacitors.”

The modules come in compact 10 mm × 10 mm × 4 mm and 12 mm × 12 mm × 4 mm footprints, respectively. The XR79110 and XR79115 can convert from common input voltages such as 5, 12, and 19 V.

Both modules feature Exar’s emulated current-mode COT control scheme. The COT control loop enables operation with ceramic output capacitors and eliminates loop compensation components. According to Exar documentation, the output voltage can be set from 0.6 to 18 V, with exceptional 0.1% line regulation over the full range and 1% output accuracy over the full temperature range.

The XR79110 and XR79115 are priced at $8.95 and $10.95, respectively, in 1,000-piece quantities.

Source: Exar Corp.

High-Bandwidth Oscilloscope Probe

Keysight Technologies recently announced a new high-bandwidth, low-noise oscilloscope probe, the N7020A, for making power integrity measurements to characterize DC power rails. The probe’s specs include:

  • Low noise
  • Large ±24-V offset range
  • 50-kΩ DC input impedance
  • 2-GHz bandwidth for analyzing fast transients on DC power rails

According to Keysight’s product release, “The single-ended N7020A power-rail probe has a 1:1 attenuation ratio to maximize the signal-to-noise ratio of the power rail being observed by the oscilloscope. Comparable oscilloscope power integrity measurement solutions have up to 16× more noise than the Keysight solution. With its lower noise, the Keysight N7020A power-rail probe provides a more accurate view of the actual ripple and noise riding on DC power rails.”

 

The new N7020A power-rail probe starts at $2,650.

Source: Keysight Technologies 

Client Profile: Invenscience LC

2340 South Heritage Drive, Suite I
Nibley, UT 84321

CONTACT: Collin Lewis, sales@invenscience.com
invenscience.com

EMBEDDED PRODUCTS: Torxis Servos and various servo controllers

FEATURED PRODUCT: Invenscience features a wide range of unique servo controllers that generate the PWM signal for general RC servomotors of all brands and Torxis Servos. (The Simple Slider Servo Controller is pictured.) Included in this lineup are:

  • Gamer joystick controllers
  • Conventional joystick controllers
  • Equalizer-style slider controllers
  • Android device Bluetooth controllers

All of these controllers provide power and the radio control (RC) PWM signal necessary to make servos move without any programming effort.

EXCLUSIVE OFFER: Use the promo code “CC2014” to receive a 10% discount on all purchases through March 31, 2014.

Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

Testing Power Supplies (EE Tip #112)

How can you determine the stability of your lab or bench-top supply? You can get a good impression of the stability of a power supply under various conditions by loading the output dynamically. This can be implemented using just a handful of components.

Power supply testing

Apart from obvious factors such as output voltage and current, noise, hum, and output resistance, it is also important that a power supply has good regulation under varying load conditions. A standard test for this uses a resistor array across the output that can be switched between two values. Manufacturers typically use resistor values that correspond to 10% and 90% of the rated power output of the supply.

The switching frequency between the values is normally several tens of hertz (e.g. 40 Hz). The behavior of the output can then be inspected with an oscilloscope, from which you can deduce how stable the power supply is. At the rising edge of the square wave you will usually find an overshoot, which is caused by the way the regulator functions, the inductance of the internal and external wiring and any output filter.

This dynamic behavior is normally tested at a single frequency, but the designers in the Elektor Lab have tested numerous lab supplies over the years and it seemed interesting to check what happens at higher switching frequencies. The only items required for this are an ordinary signal generator with a square wave output and the circuit shown in Figure 1.

You can then take measurements up to several megahertz, which should give you a really good insight for which applications the power supply is suitable. More often than not you will come across a resonance frequency at which the supply no longer remains stable and it’s interesting to note at which frequency that occurs.

The circuit really is very simple. The power MOSFET used in the circuit is a type that is rated at 80 V/75 A and has an on-resistance of only 10 mΩ (VGS = 10 V).

The output of the supply is continuously loaded by R2, which has a value such that 1/10th of the maximum output current flows through it (R2 = Vmax/(0.1 × Imax)). The value of R1 is chosen such that 8/10ths of the maximum current flows through it (R1 = Vmax/(0.8 × Imax)). Together this makes 0.9 × Imax when the MOSFET conducts. You should round the calculated values to the nearest E12 value and make sure that the resistors are able to dissipate the heat generated (using forced cooling, if required).
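
As a worked example of that resistor selection, assume a supply under test rated for 30 V and 3 A (an arbitrary choice):

```c
/*
 * Sketch only: computing the load resistors and their dissipation for the
 * dynamic load tester described above. The 30-V / 3-A supply used as an
 * example is arbitrary; round the results to the nearest E12 values.
 */
#include <stdio.h>

int main(void)
{
    double v_max = 30.0;    /* maximum output voltage of the supply under test */
    double i_max = 3.0;     /* maximum output current of the supply under test */

    double r2 = v_max / (0.1 * i_max);   /* permanent load: 10% of Imax  */
    double r1 = v_max / (0.8 * i_max);   /* switched load:  80% of Imax  */

    double p_r2 = v_max * v_max / r2;    /* continuous dissipation in R2 */
    double p_r1 = v_max * v_max / r1;    /* worst-case dissipation in R1 */

    printf("R2 = %.1f ohm (%.1f W), R1 = %.1f ohm (%.1f W max)\n",
           r2, p_r2, r1, p_r1);
    return 0;
}
```

With these example numbers, R2 works out to 100 Ω (about 9 W continuous) and R1 to roughly 12 Ω after rounding to the nearest E12 value, with the printed dissipation figures indicating how much power the resistors must handle.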

At larger output currents the MOSFET should also be provided with a small heatsink. The gate of the FET is connected to ground via two 100-Ω resistors, providing a neat 50-Ω impedance to the output of the signal generator. The output voltage of the signal generator should be set to a level between 5 V and 10 V, and you’re ready to test. Start with a low switching frequency and slowly increase it, whilst keeping an eye on the square wave on the oscilloscope. And then keep increasing the frequency… Who knows what surprises you may come across? Bear in mind though that the editorial team can’t be held responsible for any damage that may occur to the tested power supply. Use this circuit at your own risk!

— Harry Baggen and Ton Giesberts (Elektor, February 2010)

High-Voltage Gate Driver IC

Allegro A4900 Gate Driver IC

The A4900 is a high-voltage brushless DC (BLDC) MOSFET gate driver IC. It is designed for high-voltage motor control for hybrid, electric vehicle, and 48-V automotive battery systems (e.g., electronic power steering, A/C compressors, fans, pumps, and blowers).

The A4900’s six gate drives can drive a range of N-channel insulated-gate bipolar transistors (IGBTs) or power MOSFET switches. The gate drives are configured as three high-voltage high-side drives and three low-side drives. The high-side drives are isolated up to 600 V to enable operation with high-bridge (motor) supply voltages. The high-side drives use a bootstrap capacitor to provide the supply gate drive voltage required for N-channel FETs. A TTL logic-level input compatible with 3.3- or 5-V logic systems can be used to control each FET.

A single-supply input provides the gate drive supply and the bootstrap capacitor charge source. An internal regulator from the single supply provides the logic circuit’s lower internal voltage. The A4900’s internal monitors ensure that the high- and low-side external FETs’ gate-source voltage is above 9 V when active.

The control inputs to the A4900 offer a flexible solution for many motor control applications. Each driver can be driven with an independent PWM signal, which enables implementation of all motor excitation methods including trapezoidal and sinusoidal drive. The IC’s integrated diagnostics detect undervoltage, overtemperature, and power bridge faults that can be configured to protect the power switches under most short-circuit conditions. Detailed diagnostics are available as a serial data word.

The A4900 is supplied in a 44-lead QSOP package and costs $3.23 in 1,000-unit quantities.

Allegro MicroSystems, LLC
www.allegromicro.com