Two Source/Measure Units for N6700 Modular Power Systems

Keysight Technologies recently added two source/measure units (SMUs) to its N6700 Series modular power systems. The N6785A two-quadrant SMU is for battery drain analysis. The N6786A two-quadrant SMU is for functional test. Both SMUs provide power output up to 80 W.

The two new SMUs expand the popular N6780A Series SMU family by offering up to 4× more power than the previous models. The new models offer superior sourcing, measurement, and analysis so engineers can deliver the best possible battery life in their devices. The N6785A and N6786A SMUs allow engineers to test devices that require current up to 8 A, such as tablets, large smartphones, police/military handheld radios, and components of these devices.

The N6780A Series SMUs eliminate the challenges of measuring dynamic currents with a feature called seamless measurement ranging. With seamless measurement ranging, engineers can precisely measure dynamic currents without any glitches or disruptions to the measurement. As the current drawn by the device under test (DUT) changes, the SMU automatically detects the change and switches to the current measurement range that will return the most precise measurement.

When combined with the SMU’s built-in 18-bit digitizer, seamless measurement ranging enables unprecedented effective vertical resolution of approximately 28 bits. This capability lets users visualize current drain from nA to A in one pass. All the needed data is presented in a single picture, which helps users unlock insights to deliver exceptional battery life.

The new SMUs are a part of the N6700 modular power system, which consists of the N6700 low-profile mainframes for ATE applications and the N6705B DC power analyzer mainframe for R&D. The product family has four mainframes and more than 30 DC power modules, providing a complete spectrum of solutions, from R&D through design validation and manufacturing.

Source: Keysight Technologies 

Quad Channel DPWM Step-Down Controller

Exar Corp. has introduced the XR77128, a universal PMIC that drives up to four independently controlled external DrMOS power stages at currents greater than 40 A for the latest 64-bit ARM processors, FPGAs, DSPs and ASICs. DrMOS technology is quickly growing in popularity in telecom and networking applications. These same applications find value in Exar’s Programmable Power technology which allows low component count, rapid development, easy system integration, dynamic control and telemetry. Depending on output current requirements, each output can be independently configured to directly drive external MOSFETs or DrMOS power stages.

The XR77128 is quickly configured to power nearly any FPGA, SoC, or DSP system through the use of Exar’s design tool, PowerArchitect, and programmed through an I²C-based SMBus compliant serial interface. It can also monitor and dynamically control and configure the power system through the same I²C interface. Five configurable GPIOs allow for fast system integration for fault reporting and status or for sequencing control.  A new Arduino-based development platform allows software engineers to begin code development for telemetry and dynamic control long before their hardware is available.

The XR77128 is available in a RoHS-compliant, green/halogen free space-saving 7 mm × 7 mm TQFN. It costs $7.75 in 1000-piece quantities.

Source: Exar Corp.

Industry’s Smallest Dual 3A/Single 6A Step-Down Power Module

Intersil Corp. recently announced the ISL8203M, a dual 3-A/single 6-A step-down DC/DC power module that simplifies power supply design for FPGAs, ASICs, microprocessors, DSPs, and other point-of-load conversions in communications, test and measurement, and industrial systems. The module’s compact 9.0 mm × 6.5 mm × 1.83 mm footprint combined with industry-leading 95% efficiency provides power system designers with a high-performance, easy-to-use solution for low-power, low-voltage applications.

The ISL8203M is a complete power system in an encapsulated module that includes a PWM controller, synchronous switching MOSFETs, inductors, and passive components to build a power supply supporting an input voltage range of 2.85 to 6 V. With an adjustable output voltage between 0.8 and 5 V, you can use one device to build a single 6-A or dual-output 3-A power supply.

Designed to maximize efficiency, the ISL8203M power module offers best-in-class 15°C/W thermal performance and delivers 6 A at 85°C without the need for heatsinks or a fan. The ISL8203M leverages Intersil’s patented technology and advanced packaging techniques to deliver high power density and the best thermal performance in the industry, allowing it to operate at full load over a wide temperature range. The power module also provides over-temperature, over-current, and under-voltage lockout protection, further enhancing its robustness and reliability.

Features and specifications:
•       Dual 3-A or single 6-A switching power supply
•       High efficiency, up to 95%
•       Wide input voltage range: 2.85 to 6 V
•       Adjustable output range: 0.8 to 5 V
•       Internal digital soft-start: 1.5 ms
•       External synchronization up to 4 MHz
•       Overcurrent protection

The ISL8203M power module is available in a 9 mm × 6.5 mm QFN package. It costs $5.97 in 1,000-piece quantities. The ISL8203MEVAL2Z evaluation board costs $67.

Source: Intersil

NexFET N-Channel Power MOSFETs Achieve Industry’s Lowest Resistance

Texas Instruments recently introduced 11 new N-channel power MOSFETs to its NexFET product line, including the 25-V CSD16570Q5B and 30-V CSD17570Q5B for hot-swap and ORing applications with the industry’s lowest on-resistance (Rdson) in a QFN package. In addition, TI’s new 12-V FemtoFET CSD13383F4 for low-voltage battery-powered applications achieves on-resistance 84% lower than competitive devices in a tiny 0.6 mm × 1 mm package.

The CSD16570Q5B and CSD17570Q5B NexFET MOSFETs deliver higher power conversion efficiencies at higher currents, while ensuring safe operation in computer server and telecom applications. For instance, the 25-V CSD16570Q5B has a maximum Rdson of 0.59 mΩ, while the 30-V CSD17570Q5B achieves a maximum of 0.69 mΩ.

TI’s new CSD17573Q5B and CSD17577Q5A can be paired with the LM27403 for DC/DC controller applications to form a complete synchronous buck converter solution. The CSD16570Q5B and CSD17570Q5B NexFET power MOSFETs can be paired with a TI hot swap controller such as the TPS24720.

The currently available products range in price from $0.10 for the FemtoFET CSD13383F4 to $1.08 for the CSD16570Q5B and CSD17570Q5B in 1,000-unit quantities.

Source: Texas Instruments

12-W Receiver IC for Wireless Mobile Device Charging

At CES 2015, Toshiba America Electronic Components introduced its newest IC enabling wireless mobile device charging. The TC7765WBG wireless power receiver controller IC can manage the 12-W power transfer required for the wireless charging of tablet devices. The TC7765WBG is compatible with the Qi low-power specification version 1.1 defined by the Wireless Power Consortium (WPC). It delivers a user experience comparable to that of conventional wired charging for tablets, as well as smartphones and other portable devices.

The TC7765WBG was built with Toshiba’s mixed-signal process using a high-performance MOSFET design that maximizes power efficiency and thermal performance. The IC combines modulation and control circuitry with a rectifier power pickup, an I²C interface, and circuit protection functions. Compliance with the “Foreign Object Detection” (FOD) aspect of the Qi specification prevents heating of any metal objects in the path of wireless power transfer between the receiver and the transmitter.

The 12-W TC7765WBG is designed in a compact WCSP-28 2.4 mm × 3.67 mm × 0.5 mm package. This further facilitates design-in and contributes to the new chipset’s backward compatibility with the lower-power receiver IC. Combining the TC7765WBG with a copper coil, charging IC, and peripheral components creates a wireless power receiver. Joining the receiver with a Qi-compliant wireless power transmitter containing a Toshiba wireless power transmitter IC (e.g., TB6865AFG Enhanced version) forms a complete wireless power charging solution.

Toshiba announced that samples of the TC7765WBG wireless power receiver IC will be available at the end of January, with mass production set to begin in Q2 2015.

Source: Toshiba America Electronic Components

ARM-based Embedded Power Family for Smart Motor Control

In mid-November 2014, Infineon announced an ARM-based Embedded Power family of bridge drivers offering an unmatched level of integration to address the growing trend toward intelligent motor control for a wide range of automotive applications. The Embedded Power family offers 32-bit performance in an application space typically associated with 16-bit devices. Sample quantities of the first members of the family are available: the TLE987x series for three-phase (brushless DC) motors and the TLE986x series for two-phase (DC) motors.

Infineon combined its proprietary automotive-qualified 130-nm Smart Power manufacturing technology with its vast experience in motor control drivers to create the new, highly integrated Embedded Power family, available in a standard QFN package measuring only 7 mm × 7 mm. Where previous multi-chip designs needed a standalone microcontroller, a bridge driver, and a LIN transceiver, automotive system suppliers now benefit from motor control designs with a minimal external component count. The newly released Embedded Power products reduce the component count to fewer than 30, allowing integration of all functions and associated external components for motor control in a PCB area of merely 3 cm². As a result, the Embedded Power family enables the integration of electronics close to the motor for true mechatronic designs.

Both the TLE987x and TLE986x bridge drivers use the ARM Cortex-M3 processor. Their peripheral set includes a current sensor, a successive-approximation 10-bit ADC synchronized with the capture and compare unit (CAPCOM6) for PWM control, and 16-bit timers. A LIN transceiver is integrated to enable communication with the devices, along with a number of general-purpose I/Os. Both series include an on-chip linear voltage regulator to supply external loads. Their flash memory is scalable from 36 to 128 KB. They operate from 5.4 up to 28 V. An integrated charge pump enables low-voltage operation using only two external capacitors. The bridge drivers feature programmable charging and discharging currents. The patented current slope control technique optimizes system EMC behavior for a wide range of MOSFETs. The products can withstand load-dump conditions up to 40 V while maintaining extended supply voltage operation down to 3.0 V, where the microcontroller and the flash memory remain fully functional.

The TLE987x series of bridge drivers addresses three-phase (BLDC) motor applications such as fuel pumps, HVAC blowers, engine cooling fans, and water pumps. It supports sensor-less and sensor-based (including field-oriented control) BLDC motor applications addressed by LIN or controlled via PWM.

The TLE986x series is optimized to drive two-phase DC motors by integrating four NFET drivers. The TLE986x series is suitable for applications such as sunroofs and power window lifts, as well as generic smart motor control via an NFET H-bridge.

Engineering samples of the TLE987x and TLE986x bridge drivers in a space-saving VQFN-48 package are available, with volume production planned to start in Q1 2015. For both series, several derivatives are available, differing, for example, in system clock (24 MHz or 40 MHz) and flash size.

Source: Infineon

 

Data Center Power & Cost Management

Computers drive progress in today’s world. Both individuals and industry depend on a spectrum of computing tools. Data centers are at the heart of many computational processes, from communication to scientific analysis. They also consume over 3% of total power in the United States, and this amount continues to increase.[1]

Data centers service jobs submitted by their customers on the data center’s servers, a shared resource. Data centers and their customers negotiate a service-level agreement (SLA), which establishes the average expected job completion time. Servers are allocated for each job and must satisfy the job’s SLA. Job-scheduling software already provides some solutions to the budgeting of data center resources.

Data center construction and operation include fixed and accrued costs. Initial building expenses, such as purchasing and installing computing and cooling equipment, are one-time costs and are generally unavoidable. An operational data center must power this equipment, contributing an ongoing cost. Power management and the associated costs define one of the largest challenges for data centers.

To control these costs, the future of data centers lies in active participation in advanced power markets. More efficient cooling also provides cost-saving opportunities, but this requires infrastructure updates, which are costly and impractical for existing data centers. Fortunately, existing physical infrastructure can support participation in demand-response programs, such as peak shaving, regulation services (RS), and frequency control. In demand-response programs, consumers adjust their power consumption based on real-time power prices. The most promising mechanism for data center participation is RS.

Independent system operators (ISOs) manage demand response programs like RS. Each ISO must balance the power supply with the demand, or load, on the power grid in the region it governs. RS program participants provide necessary reserves when demand is high or consume more energy when demand is lower than the supply. The ISO communicates this need by transmitting a regulation signal, which the participant must follow with minimal error. In return, ISOs provide monetary incentives to the participants.
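To make the tracking requirement concrete, here is a minimal sketch (not from the essay; the function name, signal mapping, and all numbers are illustrative assumptions) of how a participant's tracking error against an ISO regulation signal might be scored:

```python
# Sketch (illustrative, not from the essay): a demand-response participant
# tracks an ISO regulation signal around its average power draw, and the
# tracking error is what the ISO penalizes.

def tracking_error(regulation_signal, actual_power, p_avg, reserve):
    """Mean absolute error (kW) between requested and actual power.

    The ISO signal is normalized to [-1, 1]; the participant maps it to
    p_avg + signal * reserve, where reserve is its contracted capacity.
    """
    total = 0.0
    for s, p in zip(regulation_signal, actual_power):
        target = p_avg + s * reserve
        total += abs(p - target)
    return total / len(actual_power)

signal = [0.0, 0.5, 1.0, 0.5, -0.5, -1.0]   # ISO requests, every few seconds
power = [800, 890, 1000, 905, 700, 610]     # measured data center power, kW
err = tracking_error(signal, power, p_avg=800, reserve=200)
```

A small mean error relative to the contracted reserve is what qualifies the participant for the ISO's monetary incentives.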

This essay appears in Circuit Cellar #293 (December 2014).

 
Data centers are ideal participants for demand response programs. A single data center requires a significant amount of power from the power grid. For example, the Massachusetts Green High-Performance Computing Center (MGHPCC), which opened in 2012, has power capacity of 10 MW, which is equivalent to as many as 10,000 homes (www.mghpcc.org). Additionally, some workload types are flexible; jobs can be delayed or sped up within the given SLA.

Data centers have the ability to vary power consumption based on the ISO regulation signal. Server sleep states and dynamic voltage and frequency scaling (DVFS) are power modulation techniques. When the regulation signal requests lower power consumption from participants, data centers can put idle servers to sleep. This successfully reduces power consumption but is not instantaneous. DVFS performs finer power variations; power in an individual server can be quickly reduced in exchange for slower processing speeds. Demand response algorithms for data centers coordinate server state changes and DVFS tuning given the ISO regulation signal.
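The coordination described above can be sketched in a few lines. This is an illustrative simplification, not an algorithm from the essay; the per-server power figures and the policy of sleeping idle servers before applying DVFS are assumptions:

```python
# Illustrative sketch of coordinating the two power modulation techniques:
# sleep states give large, slower savings; DVFS gives fast, fine-grained
# savings. Server counts and wattages are invented for the example.

P_ACTIVE, P_DVFS_MIN, P_SLEEP = 300.0, 180.0, 20.0  # watts per server

def plan_power_reduction(n_active, n_idle, target_drop):
    """Return (servers_to_sleep, dvfs_watts_shed) to shed target_drop watts.

    Idle servers are put to sleep first (largest savings per server);
    any remainder is shed by scaling down active servers with DVFS.
    """
    per_sleep = P_ACTIVE - P_SLEEP
    to_sleep = min(n_idle, int(target_drop // per_sleep))
    remainder = target_drop - to_sleep * per_sleep
    dvfs_capacity = n_active * (P_ACTIVE - P_DVFS_MIN)
    dvfs_shed = min(remainder, dvfs_capacity)
    return to_sleep, dvfs_shed

sleepers, dvfs = plan_power_reduction(n_active=100, n_idle=10,
                                      target_drop=3000.0)
```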

Accessing data from real data centers is a challenge. Demand response algorithms are tested via simulations of simplified data center models. Before data centers can participate in RS, algorithms must account for the complexity in real data centers.

Data collection within data center infrastructure enables more detailed models. Monitoring aids performance evaluation, model design, and operational changes to data centers. As part of my work, I analyze power, load, and cooling data collected from the MGHPCC. Sensor integration for data collection is essential to the future of data center power and cost management.

The power grid also benefits from data center participation in demand response programs. Renewable energy sources, such as wind and solar, are more environmentally friendly than traditional fossil fuel plants. However, the intermittent nature of such renewables creates a challenge for ISOs to balance the supply and load. Data center participation makes larger scale incorporation of renewables into the smart grid possible.

The future of data centers requires the management of power consumption in order to control costs. Currently, RS provides the best opportunities for existing data centers. According to preliminary results, successful participation in demand response programs could yield monetary savings around 50% for data centers.[2]


[1] J. Koomey, “Growth in Data Center Electricity Use 2005 to 2010,” Analytics Press, Oakland, August 1, 2010, www.analyticspress.com/datacenters.html.

[2] H. Chen, M. Caramanis, and A. K. Coskun, “The Data Center as a Grid Load Stabilizer,” in Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 105–112, January 2014.


Annie Lane studies computer engineering at Boston University, where she performs research as part of the Performance and Energy-Aware Computing Lab (www.bu.edu/peaclab). She received the Clare Boothe Luce Scholar Award in 2014. Annie received additional funding from the Undergraduate Research Opportunity Program (UROP) and Summer Term Alumni Research Scholars (STARS). Her research focuses on power and cost optimization strategies in data centers.

 

Budgeting Power in Data Centers

In my May 2014 Circuit Cellar article, “Data Centers in the Smart Grid” (Issue 286), I discussed the growing data center energy challenge and a novel potential solution that modulates data center power consumption based on requests from the electricity provider. In the same article, I elaborated on how data centers can provide “regulation service reserves” by tracking a dynamic power regulation signal broadcast by the independent system operator (ISO).

Demand-side provision of regulation service reserves is one of several ways of providing capacity reserves that are gaining traction in US energy markets. Frequency control reserves and operating reserves are other examples. These reserves are similar to each other in the sense that the demand side, such as a data center, modulates its power consumption in reaction to local measurements and/or signals broadcast by the ISO. The time scale of modulation, however, differs depending on the reserves: modulation can be done in real time, every few seconds, or every few minutes.

In addition to the emerging mechanisms of providing capacity reserves in the grid, there are several other options for a data center to manage its electricity cost. For example, the data center operators can negotiate electricity pricing with the ISO such that the electricity cost is lower when the data center consumes power below a given peak value. In this scenario, the electricity cost is significantly higher if the center exceeds the given limit. “Peak shaving,” therefore, refers to actively controlling the peak power consumption using data center power-capping mechanisms. Other mechanisms of cost and capacity management include load shedding, which refers to temporary load reduction in a data center; load shifting, which delays executing loads to a future time; and migration of a subset of loads to other facilities, if such an option is available.

All these aforementioned mechanisms require the data center to be able to dynamically cap its power within a tolerable error margin. Even in the absence of advanced cost management strategies, a data center generally needs to operate under a predetermined maximum power consumption level, as the electricity distribution infrastructure of the data center needs to be built accordingly.

This article appears in Circuit Cellar 292.

Most data centers today run a diverse set of workloads (applications) at a given time. Therefore, an interesting sub-problem of the power capping problem is how to distribute a given total power cap efficiently among the computational, cooling, and other components in a data center. For example, if there are two types of applications running in a data center, should one give equal power caps to the servers running each of these applications, or should one favor one of the applications?

Even when the loads have the same level of urgency or priority, designating equal power to different types of loads does not always lead to efficient operation. This is because the power-performance trade-offs of applications vary significantly. One application may meet user quality-of-service (QoS) expectations or service level agreements (SLAs) while consuming less power compared to another application.

Another reason that makes the budgeting problem interesting is the temperature and cooling related heterogeneity among the servers in a data center. Even when servers in a data center are all of the same kind (which is rarely the case), their physical location in the data center, the heat recirculation effects (which refer to some of the heat output of servers being recirculated back into the center and affecting the thermal dynamics), and the heat transfer among the servers create differences in temperatures and cooling efficiencies of servers. Thus, while budgeting, one may want to dedicate larger power caps to servers that are more cooling-efficient.

As the computational units in a data center need to operate at safe temperatures below manufacturer-provided limits, the budgeting policy in the data center needs to make sure a sufficient power budget is saved for the cooling elements. On the other hand, if there is over-cooling, then the overall efficiency drops because there is a smaller power budget left for computing.

I refer to the problem of how to efficiently allocate power to each server and to the cooling units as the “power budgeting” problem. The rest of the article elaborates on how this problem can be formulated and solved in a practical scenario.

Characterizing Loads

For distributing a total computational power budget in an application-aware manner, one needs to have an estimate of the relationship between server power and application performance. In my lab at Boston University, my students and I studied the relationship between application throughput and server power on a real-life system, and constructed empirical models that mimic this relationship.

Figure 1 demonstrates how the relationship between the instruction throughput and power consumption of a specific enterprise server changes depending on the application. Another interesting observation from this figure is that the performance of some applications saturates beyond a certain power value. In other words, even when a larger power budget is given to such an application by letting it run with more threads (or, in other cases, letting the processor operate at a higher speed), the application throughput does not improve further.

Figure 1: The plot demonstrates billions of instructions per second (BIPS) versus server power consumption as measured on an Oracle enterprise server containing two SPARC T3 processors.


Estimating the slope of the throughput-power curve and the potential performance saturation point helps make better power budgeting decisions. In my lab, we constructed a model that estimates the throughput given server power and hardware performance counter measurements. In addition, we analyzed the potential performance bottlenecks resulting from a high number of memory accesses and/or the limited number of software threads in the application. We were able to predict the saturation point for each application via a regression-based equation constructed based on this analysis. Predicting the maximum server power using this empirical modeling approach gave a mean error of 11 W for our 400-to-700-W enterprise server.[1]
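As a rough illustration of this kind of empirical modeling (the lab's actual regression uses hardware performance counters and is not reproduced here; the data points below are synthetic), one can fit the rising region of a throughput-power curve and detect the saturation point:

```python
import numpy as np

# Minimal stand-in for a regression-based throughput model: fit a linear
# throughput-vs-power relation on the rising region and detect the power
# level beyond which throughput saturates. All data is synthetic.

power = np.array([400, 450, 500, 550, 600, 650, 700], dtype=float)  # W
bips = np.array([10.0, 14.0, 18.0, 22.0, 24.0, 24.0, 24.0])         # BIPS

# Saturation point: first power level whose throughput is within 1% of max.
sat_idx = int(np.argmax(bips >= 0.99 * bips.max()))
p_sat = power[sat_idx]

# Fit the linear region (up to and including the saturation point).
slope, intercept = np.polyfit(power[:sat_idx + 1], bips[:sat_idx + 1], 1)

def predict_bips(p):
    """Estimated throughput for a candidate server power cap p (watts)."""
    return min(slope * p + intercept, bips.max())
```

A budgeting policy can then avoid handing this application any power beyond `p_sat`, since the model predicts no further throughput gain there.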

Such methods for power-performance estimations highlight the significance of telemetry-based empirical models for efficient characterization of future systems. The more detailed measurement capabilities newer computing systems can provide—such as the ability to measure power consumption of various sub-components of a server—the more accuracy one can achieve in constructing models to help with the data center management.

Temperature, Once Again

In several of my earlier articles this year, I emphasized the key role of temperature awareness for improving computing energy efficiency. This key role is a result of the high cost of cooling, the fact that server energy dynamics also rely on temperature substantially (i.e., consider the interactions among temperature, fan power and leakage power), and the impact of processor thermal management policies on performance.

Solving the budgeting problem efficiently, therefore, relies on having good estimates for how a given power allocation among the servers and cooling units would affect the temperature. The first step is estimating the CPU temperature for a given server power cap. In my lab, we modeled the CPU temperature as a function of the CPU junction-to-air thermal resistance, CPU power, and the inlet temperature to the server. CPU thermal resistance is determined by the hardware and packaging choices, and can be characterized empirically. For a given total server power, CPU power can be estimated using performance counter measurements in a similar way to estimating the performance given a server cap, as described above (see Figure 1). Our simple empirical temperature model was able to estimate temperature with a mean error of 2.9°C in our experiments on an Oracle enterprise server.[1]
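The linear thermal model just described is easy to state in code. This is only a sketch; the thermal resistance and redline values below are assumed placeholders, not characterized numbers from the study:

```python
# The article's linear CPU thermal model in code form: junction temperature
# as a function of inlet air temperature, CPU power, and junction-to-air
# thermal resistance. R_ja and the redline here are illustrative values.

def cpu_temperature(t_inlet_c, p_cpu_w, r_ja_c_per_w=0.4):
    """Estimated CPU temperature (deg C) under the linear thermal model."""
    return t_inlet_c + r_ja_c_per_w * p_cpu_w

def max_power_for_redline(t_inlet_c, t_redline_c=85.0, r_ja_c_per_w=0.4):
    """Largest CPU power cap (W) keeping the CPU at or below the redline."""
    return (t_redline_c - t_inlet_c) / r_ja_c_per_w
```

Inverting the model, as in the second function, is what lets a budgeting policy translate a temperature constraint into a per-server power constraint.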

Heat distribution characteristics of a data center depend strongly on the cooling technology used. For example, traditional data centers use a hot aisle-cold aisle configuration, where the cold air from the computer room air conditioners (CRAC) and the hot air coming out of the servers are separated by the rows of racks that contain the servers. The second step in thermal estimation, therefore, has to do with estimating the impact of servers on one another and the overall impact of the cooling system.

In a traditional hot-cold aisle setting, the inlet server temperatures can be estimated based on a heat distribution matrix, the power consumption of all the servers, and the CRAC air temperature (which is the cold air input to the data center). The heat distribution matrix can be considered a lumped model representing the combined impact of heat recirculation and the air flow properties in a single N × N matrix, where N is the number of servers.[2]
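A small numerical sketch of this lumped model follows; the matrix entries and server powers are invented for illustration, not taken from any characterized data center:

```python
import numpy as np

# Sketch of the lumped heat-recirculation model: server inlet temperatures
# as CRAC supply temperature plus a heat distribution matrix D (N x N)
# applied to the vector of server powers. D's entries are invented.

def inlet_temperatures(D, server_power_w, t_crac_c):
    """T_inlet[i] = t_crac + sum_j D[i, j] * P[j] for N servers."""
    return t_crac_c + D @ server_power_w

# Three servers; D[i, j] is the deg-C rise at server i's inlet per watt
# dissipated at server j (diagonal dominance: self-heating matters most).
D = np.array([[0.010, 0.004, 0.001],
              [0.004, 0.010, 0.004],
              [0.001, 0.004, 0.010]])
P = np.array([300.0, 300.0, 300.0])
t_in = inlet_temperatures(D, P, t_crac_c=18.0)
```

In this toy example the middle server sees the warmest inlet air, which is the kind of asymmetry that makes cooling-aware budgeting worthwhile.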

Some newer data centers instead use in-row coolers that leverage liquid cooling to improve cooling efficiency. In such settings, the heat recirculation effects are expected to be less significant, as most of the heat output of the servers is immediately removed from the data center.

In my lab, my students and I used low-cost data center temperature models to enable fast dynamic decisions.[1] Detailed thermal simulation of data centers is possible through computational fluid dynamics tools. Such tools, however, typically require prohibitively long simulation times.

Budgeting Optimization

What should the goal be during power budgeting? Maximizing overall throughput in the data center may seem like a reasonable goal. However, such a goal would favor allocating larger power caps to applications with higher throughput, and absolute throughput does not necessarily indicate whether the application QoS demand is met. For example, an application with a lower BIPS may have a stricter QoS target.

Consider this example of a better budgeting metric: the fair speedup metric computes the harmonic mean of the per-server speedups (i.e., per-server speedup is the ratio of measured BIPS to the maximum BIPS for an application). The purpose of this metric is to ensure that none of the applications is starving while overall throughput is maximized.
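The metric is straightforward to compute. In this sketch (the sample numbers are illustrative), the second case shows how a starving application drags the harmonic mean down even though the other application runs near full speed:

```python
# Fair speedup as described in the article: the harmonic mean of per-server
# speedups, where each speedup is measured BIPS over that application's
# maximum BIPS. The sample numbers below are illustrative.

def fair_speedup(measured_bips, max_bips):
    speedups = [m / mx for m, mx in zip(measured_bips, max_bips)]
    return len(speedups) / sum(1.0 / s for s in speedups)

balanced = fair_speedup([12.0, 6.0], [24.0, 12.0])   # both apps at 50%
starved = fair_speedup([21.6, 1.2], [24.0, 12.0])    # 90% and 10%
```

Both allocations have the same arithmetic-mean speedup (0.5), but the starved case scores far lower, which is exactly the starvation penalty the metric is designed to provide.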

It is also possible to impose constraints on the budgeting optimization such that a specific performance or throughput level is met for one or more of the applications. Ability to meet such constraints strongly relies on the ability to estimate the power-vs.-performance trends of the applications. Thus, empirical models I mentioned above are also essential for delivering more predictable performance to users.

Figure 2 demonstrates how the hill-climbing strategy my students and I designed for optimizing fair speedup evolves. The algorithm starts by setting the CRAC temperature to its last known optimal value, which is 20.6°C in this example. The CRAC power consumption corresponding to providing air input to the data center at 20.6°C can be computed using the relationship between the CRAC temperature and the ratio of computing power to cooling power.[3] This relationship can often be derived from datasheets for the CRAC units and/or for the data center cooling infrastructure.

Figure 2: The budgeting algorithm starts from the last known optimal CRAC temperature value, and then iteratively aims to improve on the objective.


Once the cooling power is subtracted from the overall cap, the algorithm allocates the remaining power among the servers with the objective of maximizing the fair speedup. Other constraints in the optimization formulation prevent any server from exceeding manufacturer-given redline temperatures and ensure that each server receives a feasible power cap that falls between the server’s minimum and maximum power consumption levels.

The algorithm then iteratively searches for a better solution, as demonstrated in steps 2 to 6 in Figure 2. Once the algorithm detects that the fair speedup is decreasing (e.g., the fair speedup in step 6 is less than the speedup in step 5), it converges to the solution computed in the last step (e.g., step 5 in the example). Note that setting cooler CRAC temperatures typically requires a larger amount of cooling power, so the fair speedup drops. However, as the CRAC temperature increases beyond a point, the performance of the hottest servers is degraded to maintain CPU temperatures below the redline; thus, a further increase in the CRAC temperature is no longer useful (as in step 6).

This iterative algorithm took less than a second of running time using Matlab CVX[4] in our experiments for a small data center of 1,000 servers on an average desktop computer. This result indicates that the algorithm could run in a much shorter time with an optimized implementation, allowing for frequent real-time re-budgeting of power in a modern data center with a larger number of servers. Our algorithm improved fair speedup and BIPS per watt by 10% to 20% compared to existing budgeting techniques.
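The overall structure of the search can be sketched as follows. The objective below is a stand-in with the same single-peak shape as the fair-speedup-vs-CRAC-temperature curve in Figure 2; it is not a data center model, and the step size and starting point are illustrative:

```python
# Toy version of the hill-climbing loop in Figure 2: step the CRAC setpoint
# up from its last known optimum, evaluate the objective at each setting,
# and converge to the previous step once the objective starts decreasing.

def hill_climb(objective, t_start, step=0.5, max_steps=10):
    best_t, best_val = t_start, objective(t_start)
    t = t_start
    for _ in range(max_steps):
        t += step
        val = objective(t)
        if val < best_val:          # objective decreased: converge
            break
        best_t, best_val = t, val
    return best_t, best_val

# Stand-in objective with a single peak near 22.1 deg C.
toy_objective = lambda t: -(t - 22.1) ** 2
t_opt, _ = hill_climb(toy_objective, t_start=20.6)
```

In the real algorithm, each `objective(t)` evaluation is itself an optimization: allocating the power cap left after cooling among the servers to maximize fair speedup under the redline constraints.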

Challenges

The initial methods and results I discussed above demonstrate promising energy efficiency improvements; however, there are many open problems for data center power budgeting.

First, the above discussion does not consider loads with dependences on each other. For example, high-performance computing applications often have heavy communication among server nodes. This means that the budgeting method needs to account for the impact of inter-node communication both in performance estimates and when making job allocation decisions in data centers.

Second, especially for data centers with a non-negligible amount of heat recirculation, thermally-aware job allocation significantly affects CPU temperature. Thus, job allocation should be optimized together with budgeting.

In data centers, there are elements other than the servers, such as storage units, that consume significant amounts of power. In addition, the servers themselves form a heterogeneous set. Thus, a challenge lies in budgeting power across heterogeneous computing, storage, and networking elements.

Finally, the discussion above focuses on budgeting a total power cap among servers that are actively running applications. One can, however, also adjust the number of servers actively serving the incoming loads (by putting some servers into sleep mode/turning them off) and also consolidate the loads if desired. Consolidation often decreases performance predictability. The server provisioning problem needs to be solved in concert with the budgeting problem, taking the additional overheads into account. I believe all these challenges make the budgeting problem an interesting research problem for future data centers.

 

Ayse K. Coskun (acoskun@bu.edu) is an assistant professor in the Electrical and Computer Engineering Department at Boston University. She received MS and PhD degrees in Computer Science and Engineering from the University of California, San Diego. Coskun’s research interests include temperature and energy management, 3-D stack architectures, computer architecture, and embedded systems. Prior to her current position at BU, she worked at Sun Microsystems (now Oracle) in San Diego, CA. Coskun serves as an associate editor of the IEEE Embedded Systems Letters.

 

 
[1] O. Tuncer, K. Vaidyanathan, K. Gross, and A. K. Coskun, “CoolBudget: Data Center Power Budgeting with Workload and Cooling Asymmetry Awareness,” in Proceedings of IEEE International Conference on Computer Design (ICCD), October 2014.
[2] Q. Tang, T. Mukherjee, S. K. S. Gupta, and P. Cayton, “Sensor-Based fast Thermal Evaluation Model for Energy Efficient High-Performance Datacenters,” in ICISIP-06, October 2006.
[3] J. Moore, J. Chase, P. Ranganathan, and R. Sharma, “Making Scheduling ‘Cool’: Temperature-Aware Workload Placement in Data Centers,” in USENIX ATC-05, 2005.
[4] CVX Research, “CVX: Matlab Software for Disciplined Convex Programming,” Version 2.1, September 2014, http://cvxr.com/cvx/.

Small High-Current Power Modules

 

Exar Corp. recently announced the 10-A XR79110 and 15-A XR79115 single-output, synchronous step-down power modules. The modules will be available in mid-November in RoHS-compliant, green/halogen-free, QFN packages.

In a product release, Exar noted that “both devices provide easy to use, fully integrated power converters including MOSFETs, inductors, and internal input and output capacitors.”

The modules come in compact 10 × 10 × 4 mm and 12 × 12 × 4 mm footprints, respectively. The XR79110 and XR79115 offer versatility to convert from common input voltages such as 5, 12, and 19 V.

Both modules feature Exar’s emulated current-mode COT control scheme. The COT control loop enables operation with ceramic output capacitors and eliminates loop compensation components. According to Exar documentation, the output voltage can be set from 0.6 to 18 V, with 0.1% line regulation over the full input range and 1% output accuracy over the full temperature range.

The XR79110 and XR79115 are priced at $8.95 and $10.95, respectively, in 1,000-piece quantities.

Source: Exar Corp.

High-Bandwidth Oscilloscope Probe

Keysight Technologies recently announced a new high-bandwidth, low-noise oscilloscope probe, the N7020A, for making power integrity measurements to characterize DC power rails. The probe’s specs include:

  • low noise
  • large ±24-V offset range
  • 50-kΩ DC input impedance
  • 2-GHz bandwidth for analyzing fast transients on DC power rails

According to Keysight’s product release, “The single-ended N7020A power-rail probe has a 1:1 attenuation ratio to maximize the signal-to-noise ratio of the power rail being observed by the oscilloscope. Comparable oscilloscope power integrity measurement solutions have up to 16× more noise than the Keysight solution. With its lower noise, the Keysight N7020A power-rail probe provides a more accurate view of the actual ripple and noise riding on DC power rails.”

 

The new N7020A power-rail probe starts at $2,650.

Source: Keysight Technologies 

Client Profile: Invenscience LC

Invenscience LC
2340 South Heritage Drive, Suite I
Nibley, UT 84321

CONTACT: Collin Lewis, sales@invenscience.com
invenscience.com

EMBEDDED PRODUCTS: Torxis Servos and various servo controllers

FEATURED PRODUCT: Invenscience features a wide range of unique servo controllers that generate the PWM signal for general RC servomotors of all brands and Torxis Servos. (The Simple Slider Servo Controller is pictured.) Included in this lineup are:

  • Gamer joystick controllers
  • Conventional joystick controllers
  • Equalizer-style slider controllers
  • Android device Bluetooth controllers

All of these controllers provide power and the radio control (RC) PWM signal necessary to make servos move without any programming effort.

EXCLUSIVE OFFER: Use the promo code “CC2014” to receive a 10% discount on all purchases through March 31, 2014.

Circuit Cellar prides itself on presenting readers with information about innovative companies, organizations, products, and services relating to embedded technologies. This space is where Circuit Cellar enables clients to present readers useful information, special deals, and more.

Testing Power Supplies (EE Tip #112)

How can you determine the stability of your lab or bench-top supply? You can get a good impression of the stability of a power supply under various conditions by loading the output dynamically. This can be implemented using just a handful of components.

Power supply testing


Apart from obvious factors such as output voltage and current, noise, hum, and output resistance, it is also important that a power supply has good regulation under varying load conditions. A standard test for this uses a resistor array across the output that can be switched between two values. Manufacturers typically use resistor values that correspond to 10% and 90% of the rated power output of the supply.

The switching frequency between the values is normally several tens of hertz (e.g. 40 Hz). The behavior of the output can then be inspected with an oscilloscope, from which you can deduce how stable the power supply is. At the rising edge of the square wave you will usually find an overshoot, which is caused by the way the regulator functions, the inductance of the internal and external wiring and any output filter.

This dynamic behavior is normally tested at a single frequency, but the designers in the Elektor Lab have tested numerous lab supplies over the years and it seemed interesting to check what happens at higher switching frequencies. The only items required for this are an ordinary signal generator with a square wave output and the circuit shown in Figure 1.

You can then take measurements up to several megahertz, which should give you a really good insight into which applications the power supply is suitable for. More often than not you will come across a resonance frequency at which the supply no longer remains stable, and it’s interesting to note at which frequency that occurs.

The circuit really is very simple. The power MOSFET used in the circuit is a type that is rated at 80 V/75 A and has an on-resistance of only 10 mΩ (VGS = 10 V).

The output of the supply is continuously loaded by R2, whose value is chosen so that one-tenth of the maximum output current flows through it (R2 = Vmax/(0.1 × Imax)). The value of R1 is chosen so that eight-tenths of the maximum current flows through it (R1 = Vmax/(0.8 × Imax)). Together this makes 0.9 × Imax when the MOSFET conducts. You should round the calculated values to the nearest E12 value and make sure that the resistors are able to dissipate the heat generated (using forced cooling, if required).

At larger output currents the MOSFET should also be provided with a small heatsink. The gate of the FET is connected to ground via two 100-Ω resistors, providing a neat 50-Ω impedance to the output of the signal generator. The output voltage of the signal generator should be set to a level between 5 V and 10 V, and you’re ready to test. Start with a low switching frequency and slowly increase it, whilst keeping an eye on the square wave on the oscilloscope. And then keep increasing the frequency… Who knows what surprises you may come across? Bear in mind though that the editorial team can’t be held responsible for any damage that may occur to the tested power supply. Use this circuit at your own risk!

— Harry Baggen and Ton Giesberts (Elektor, February 2010)

High-Voltage Gate Driver IC

Allegro A4900 Gate Driver IC


The A4900 is a high-voltage brushless DC (BLDC) MOSFET gate driver IC. It is designed for high-voltage motor control for hybrid, electric vehicle, and 48-V automotive battery systems (e.g., electronic power steering, A/C compressors, fans, pumps, and blowers).

The A4900’s six gate drives can drive a range of N-channel insulated-gate bipolar transistors (IGBTs) or power MOSFET switches. The gate drives are configured as three high-voltage high-side drives and three low-side drives. The high-side drives are isolated up to 600 V to enable operation with high-bridge (motor) supply voltages. The high-side drives use a bootstrap capacitor to provide the supply gate drive voltage required for N-channel FETs. A TTL logic-level input compatible with 3.3- or 5-V logic systems can be used to control each FET.

A single-supply input provides the gate drive supply and the bootstrap capacitor charge source. An internal regulator from the single supply provides the logic circuit’s lower internal voltage. The A4900’s internal monitors ensure that the high- and low-side external FETs’ gate-source voltage is above 9 V when active.

The control inputs to the A4900 offer a flexible solution for many motor control applications. Each driver can be driven with an independent PWM signal, which enables implementation of all motor excitation methods including trapezoidal and sinusoidal drive. The IC’s integrated diagnostics detect undervoltage, overtemperature, and power bridge faults that can be configured to protect the power switches under most short-circuit conditions. Detailed diagnostics are available as a serial data word.

The A4900 is supplied in a 44-lead QSOP package and costs $3.23 in 1,000-unit quantities.

Allegro MicroSystems, LLC
www.allegromicro.com

Solar Cells Explained (EE Tip #104)

All solar cells are made from at least two different materials, often in the form of two thin, adjacent layers. One of the materials must act as an electron donor under illumination, while the other material must act as an electron acceptor. If there is some sort of electron barrier between the two materials, the result is an electrical potential. If each of these materials is now provided with an electrode made from an electrically conductive material and the two electrodes are connected to an external load, the electrons will follow this path.

Source: Jens Nickels, Elektor, 070798-I, 6/2009


The most commonly used solar cells are made from thin wafers of polycrystalline silicon (polycrystalline cells have a typical “frosty” appearance after sawing and polishing). The silicon is very pure, but it contains an extremely small amount of boron as a dopant (an intentionally introduced impurity), and it has a thin surface layer doped with phosphorus. This creates a PN junction in the cell, exactly the same as in a diode. When the cell is exposed to light, electrons are released and holes (positive charge carriers) are generated. The holes can recombine with the electrons. The charge carriers are kept apart by the electrical field of the PN junction, which partially prevents the direct recombination of electrons and holes.

The electrical potential between the electrodes on the top and bottom of the cell is approximately 0.6 V. The maximum current (short-circuit current) is proportional to the surface area of the cell, the impinging light energy, and the efficiency. Higher voltages and currents are obtained by connecting cells in series to form strings and connecting these strings of cells in parallel to form modules.
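The series/parallel scaling described above is simple arithmetic: series cells add their ~0.6-V potentials, while parallel strings add their currents. A quick sketch (the 36-cell example is an assumption, not from the text):

```python
CELL_VOLTAGE = 0.6  # V per cell, approximate figure from the text

def module_output(cells_in_series, parallel_strings, cell_current):
    """Return (module voltage, module current) for a module built
    from identical strings of identical cells."""
    voltage = cells_in_series * CELL_VOLTAGE
    current = parallel_strings * cell_current
    return voltage, current
```

For instance, 36 cells in series with two parallel strings of 3-A cells gives roughly 21.6 V at 6 A.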

The maximum efficiency achieved by polycrystalline cells is 17%, while monocrystalline cells can achieve up to 22%, although the overall efficiency is lower if the total module area is taken into account. On a sunny day in central Europe, the available solar energy is approximately 1000 W/m2, and around 150 W/m2 of this can be converted into electrical energy with currently available solar cells.

Source: Jens Nickels, Elektor, 070798-I, 6/2009


Cells made from selenium, gallium arsenide, or other compounds can achieve even higher efficiency, but they are more expensive and are only used in special applications, such as space travel. There are also other approaches that are aimed primarily at reducing costs instead of increasing efficiency. The objective of such approaches is to considerably reduce the amount of pure silicon that has to be used or eliminate its use entirely. One example is thin-film solar cells made from amorphous silicon, which have an efficiency of 8 to 10% and a good price/performance ratio. The silicon can be applied to a glass sheet or plastic film in the form of a thin layer. This thin-film technology is quite suitable for the production of robust, flexible modules, such as the examples described in this article.

Battery Charging

From an electrical viewpoint, an ideal solar cell consists of a pure current source in parallel with a diode (the outlined components in the accompanying schematic diagram). When the solar cell is illuminated, the typical U/I characteristic of the diode shifts downward (see the drawing, which also shows the open-circuit voltage UOC and the short-circuit current ISC). The panel supplies maximum power when the load corresponds to the points marked “MPP” (maximum power point) in the drawing. The power rating of a cell or panel specified by the manufacturer usually refers to operation at the MPP with a light intensity of 100,000 lux and a temperature of 25°C. The power decreases by approximately 0.2 to 0.5 %/°C as the temperature increases.
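The temperature derating mentioned above is easy to estimate numerically. In this sketch the 0.4 %/°C coefficient is an assumed midpoint of the quoted 0.2 to 0.5 %/°C range, and the function name is hypothetical:

```python
def panel_power(p_rated_w, cell_temp_c, derate_per_degc=0.004):
    """Estimate MPP output power at a given cell temperature,
    derated linearly from the 25 degC rating."""
    return p_rated_w * (1.0 - derate_per_degc * (cell_temp_c - 25.0))
```

A 100-W panel running at a cell temperature of 50°C would therefore deliver only about 90 W.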

A battery can be charged directly from a panel without any problems if the open-circuit voltage of the panel is higher than the nominal voltage of the battery. No voltage divider is necessary, even if the battery voltage is only 3 V and the nominal voltage of the solar panel is 12 V. This is because a solar cell always acts as a current source instead of a voltage source.

If the battery is connected directly to the solar panel, a small leakage current will flow through the solar panel when it is not illuminated. This can be prevented by adding a blocking diode to the circuit (see the schematic). Many portable solar modules have a built-in blocking diode (check the manufacturer’s specifications).

This simple arrangement is adequate if the maximum current from the solar panel is less than the maximum allowable overcharging current of the battery. NiMH cells can be overcharged for up to 100 hours if the charging current (in A) is no more than one-tenth of their rated capacity (in Ah). This means that a panel with a rated current of 2 A can be connected directly to a 20-Ah battery without any problems. However, under these conditions the battery must be fully discharged by a load from time to time.

Practical Matters

When positioning a solar panel, you should ensure that no part of the panel is in the shade, as otherwise the voltage will decrease markedly, with a good chance that no current will flow into the connected battery.

Most modules have integrated bypass diodes connected in reverse parallel with the solar cells. These diodes prevent reverse polarization of any cells that are not exposed to sunlight; without them, the current from the other cells would flow through the shaded cells, which can cause overheating and damage. To reduce costs, it is common practice to fit only one diode to a group of cells instead of providing a separate diode for each cell.

—Jens Nickels, Elektor, 070798-I, 6/2009

Simple Guitar Transmitter (EE Tip #102)

You need a guitar amplifier to play an electric guitar, and the guitar must be connected to the amplifier with a cable, which you might consider an inconvenience. Most guitar amplifiers also operate off the AC power line. An electric guitar fitted with a small transmitter offers several advantages. You can make the guitar audible via an FM tuner/amplifier, for example; both the connecting cable and the amplifier are then unnecessary. With a portable FM broadcast radio or, if desired, a boombox, you can play in the street or in subway stations (like Billy Bragg). In that case, everything is battery-powered and independent of a fixed power point. (You might need a permit, though.)

Source: Elektor 3/2009

Designing a transmitter to do this is not necessary. A variety of low-cost transmitters are available. The range of these devices is often not more than around 30′, but that’s likely plenty for most applications. Consider a König FMtrans20 transmitter. After fitting the batteries and turning it on, you can detect a carrier signal on the radio. Four channels are available, so it should always be possible to find an unused part of the FM band. A short cable with a 3.5-mm stereo audio jack protrudes from the enclosure. This is the audio input. The required signal level for sufficient modulation is about 500 mVPP.

If a guitar is connected directly, the radio’s volume level will have to be high to get sufficient sound. In fact, it will have to be so high that the noise from the modulator will be quite annoying. Thus, a preamplifier for the guitar signal is essential.

To build this preamplifier into the transmitter, you first have to open the enclosure. The two audio channels are combined, making this a single-channel (mono) transmitter. Because the audio preamplifier can be turned on and off at the same time as the transmitter, you can also use the transmitter’s on-board power supply for power. In our case, that was about 2.2 V. This voltage is available at the positive terminal of an electrolytic capacitor. Note that 2.2 V is not enough to power an op-amp, but with a single transistor the gain is already big enough and the guitar signal is sufficiently modulated. The final implementation of the modification involves soldering the preamplifier circuit along an edge of the PCB so that everything still fits inside the enclosure. The stereo cable is replaced with an 11.8″ microphone cable fitted with a guitar plug (mono jack). The screen braid of the cable acts as an antenna as well as a ground connection for the guitar signal. The coil couples the low-frequency signal to ground, while it isolates the high-frequency antenna signal. While playing, the cable with the transmitter just dangles below the guitar without being a nuisance. If you prefer, you can also secure the transmitter to the guitar with a bit of double-sided tape.

—Gert Baars, “Simple Guitar Transmitter,” Elektor, 080533-1, 3/2009.