New Frequency-Programmable, Narrow-Band Transmitter

Leading RF module designer and manufacturer Lemos International/Radiometrix recently launched a new range of flexible, frequency-programmable, RF-power-adjustable radios. The new NTX2B transmitter offers industry-leading true narrow-band FM performance and is available on user- or factory-programmable custom frequencies between 425 and 470 MHz. Superseding the popular NTX2, the new transmitter offers greater stability and improved performance thanks to its VCTCXO reference. The NTX2B lets users dynamically reprogram the module via a microcontroller UART to other channel frequencies in the band or store new frequency/power settings in EEPROM.

Source: Lemos International

The standard NTX2B version is a 10-mW, 25-kHz narrow-band transmitter with data rates up to 10 kbps, available on the 434.075- and 434.650-MHz European SRD frequencies, with a 25-mW version on 458.700 MHz for the UK. The NTX2B is also available with 12.5- or 20-kHz channel spacing for licensed US FCC Part 90 or legacy European telemetry/telecommand bands. An internal LDO voltage regulator enables the transmitter to operate from a 2.9-to-15-V supply at a nominal current consumption of 18 mA, and less than 3 µA in power-down mode, which can be enabled within 5 ms. The NTX2B can transmit both digital and 3-VPP analog signals. Offering greater range than wideband modules, the transmitter can be paired with the new NRX2B receiver for a usable range of over 500 m, which is ideal for performance-critical, low-power wireless applications, including security, sensor networks, industrial/commercial telemetry, and remote control.

Source: Lemos

 

Consumer Interest in Wearables Increases

New consumer research from Futuresource Consulting highlights a significant increase in consumers’ intentions to purchase wearable devices. Interviewing more than 8,000 people in May and October in the US, the UK, France, and Germany, the study saw interest in fitness trackers and smartwatches rise by 50% and 125%, respectively. However, interest in smart glasses and heart rate monitors has stalled.

Source: Futuresource

The overall wearables market has seen significant growth so far in 2014, with Futuresource forecasting full-year sales of over 51 million units worldwide. However, it’s only just warming up, and wearables sales are expected to accelerate from 2015 as new brands enter the space.

The most marked change since May is the strong growth in the number of iPhone owners intending to purchase wearable devices. iPhone owners now lead the way in all categories – particularly smartwatches, with 17% of iPhone owners expressing an intent to purchase one in the next 12 months, up from only 6% in May 2014. This increase coincides with September’s announcement of the Apple Watch. As Apple customers are typically some of the earliest adopters of new technologies, their increasing engagement with the smartwatch category is a strong positive for the Apple Watch release in early 2015.

Source: Futuresource Consulting

Microcontroller-Based Air Quality Mapper

Raul Alvarez Torrico’s Air Quality Mapper is a portable device designed to track levels of CO2 and CO gases for constructing “Smog Maps” to determine the healthiest routes. Featuring a Renesas YRDKRL78G13 development board, the Mapper receives location data from its GPS module, takes readings of the CO2 and CO concentrations along a specific route, and stores the data on an SD card. With the aid of PC utility software, you can upload the data to a web server and see maps of gas concentrations in a web browser.

The portable data logger prototype

In his Circuit Cellar 293 article (December 2014), Torrico notes:

My design, the Air Quality Mapper, is a data-logging, online visualization system comprising a portable data logger and a webserver for the purpose of measuring and visualizing readings of the quality of air in given areas. You take readings over a given route and then upload the data to the server, which in turn serves a webpage containing a graphical representation of all readings using Google Maps technology.

The webpage displaying CO2 measurements acquired in a session

The data logging system features a few key components: a Renesas YRDKRL78G13 development board, a Polstar PMB-648 GPS module, an SD card, and gas sensors.

The portable data logger hardware prototype is based on the Renesas YRDKRL78G13 development board, which contains a Renesas R5F100LEA 16-bit microcontroller with 64 KB of program memory, 4 KB of data flash memory, and 4 KB of RAM, running from a 12-MHz external crystal…

Air Quality Mapper system

The board itself is a bit large for a portable or hand-held device (5,100 x 5,100 mils); but on the other hand, it includes the four basic peripherals I needed for the prototype: a graphic LCD, an SD card slot, six LEDs, and three push buttons for the user interface. The board also includes other elements that could become very handy when developing an improved version of the portable device: a three-axis accelerometer, a temperature sensor, an ambient light sensor, a 512-KB serial EEPROM, a small audio speaker, and various connection headers (not to mention other peripherals less appealing for this project: an audio mic, an infrared emitter and detector, a FET, and a TRIAC, among other things). The board includes a Renesas USB debugger, which makes it a great entry-level prototyping board for Renesas RL78/G13 microcontrollers.

For the GPS module, I used a Polstar PMB-648 with 20 parallel satellite-tracking channels. It’s advertised as a low-power device with a built-in rechargeable battery for backup memory and RTC backup. It supports the NMEA0183 v2.2 data protocol, includes a serial port interface, and has a position accuracy (2DRMS) of approximately 5 m and a velocity accuracy of 0.1 m per second without selective availability imposed. It has an acquisition time of 42 s from a cold start and 1 s from a hot start. It also includes a built-in patch antenna and a 3.3- to 5-V power supply input.

The GPS module provides NMEA0183 V2.2 GGA, GSV, GSA, and RMC formatted data streams via its UART port. A stream comes out every second containing, among other things, latitude, longitude, a timestamp, and date information. In the system, this module connects to the R5F100LEA microcontroller’s UART0 port at 38,400 bps and sources the 3.3-VDC power from the YRDKRL78G13 board.
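
The parsing itself is standard NMEA 0183 handling rather than anything project-specific. As a rough illustration (written in Python for brevity; this is not code from the article), the sketch below pulls the UTC time, latitude, and longitude out of a $GPGGA sentence:

def parse_gga(sentence):
    # Minimal parse of a $GPGGA sentence (public NMEA 0183 format), e.g.:
    # "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
    fields = sentence.split(',')
    if not fields[0].endswith('GGA') or fields[6] == '0':
        return None                                  # wrong sentence type or no fix yet
    utc_time = fields[1]                             # hhmmss.sss
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0   # ddmm.mmmm -> decimal degrees
    if fields[3] == 'S':
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0   # dddmm.mmmm -> decimal degrees
    if fields[5] == 'W':
        lon = -lon
    return utc_time, lat, lon

In the actual device, the equivalent extraction happens in the RL78 firmware on the 38,400-bps UART0 stream described above.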

For the CO2 sensor, I used a Hanwei Electronics Co. MG-811 sensor, which has an electrolyte that, in the presence of heat, reacts in proportion to the CO2 concentration present in air. The sensor has an internal heating element that needs to be powered with 6 VDC or 6 VAC. For small CO2 concentrations, the sensor outputs a higher voltage, and for high concentrations the output voltage decreases. Because I didn’t have proper calibration instrumentation at hand for this type of sensor, I used a very simple calibration process: I exposed the sensor to a “clean air” environment outside the city and took an average of various readings over a 15-minute period to define a 400-PPM concentration, which is generally defined as the average for a clean-air environment. Not an optimal calibration method, of course, but I thought it was acceptable for getting some meaningful data for prototyping purposes. For a proper calibration of the sensor, I would’ve needed another CO2 sensing system, already calibrated with a high degree of accuracy, and a setup in a controlled environment (e.g., a laboratory) in order to generate and measure the amount of CO2.
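
That clean-air step boils down to a long average pinned to the 400-PPM reference. A minimal sketch of the idea (read_sensor_mv is a hypothetical helper standing in for the ADC read; it is not part of the project code):

import time

def clean_air_baseline(read_sensor_mv, duration_s=15 * 60, interval_s=5):
    # Average the conditioned MG-811 output over a clean-air window and treat the
    # result as the voltage corresponding to the ~400-PPM reference concentration.
    samples = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        samples.append(read_sensor_mv())   # hypothetical helper returning the amplified output in mV
        time.sleep(interval_s)
    return sum(samples) / len(samples)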

This sensor provides an output voltage between 30 and 50 mV, and because of its high output impedance, the signal must be properly conditioned with an op-amp. So, I used a Microchip Technology MCP6022 op-amp in a noninverting configuration with a gain of 9.2.
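
For a noninverting stage, the gain is 1 + Rf/Rg, so any resistor pair with an 8.2:1 ratio does the job; for example, 82 kΩ and 10 kΩ (values chosen here purely for illustration, since the excerpt doesn’t list the actual resistors) give 1 + 82/10 = 9.2, lifting the 30-to-50-mV sensor output to a few hundred millivolts for the microcontroller’s ADC.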

You can read the complete article in Circuit Cellar 293 (December 2014).

APEI Builds First Multiphysics Simulation App with the Application Builder

The Application Builder was released in October 2014 and is now available with COMSOL Multiphysics software version 5.0. It allows COMSOL software users to build an intuitive interface to run any COMSOL model. COMSOL Multiphysics users are already building applications and exploring the benefits of sharing their models with colleagues and customers worldwide.

Image made using COMSOL Multiphysics, provided courtesy of COMSOL

One such company is Arkansas Power Electronics Intl. (APEI), a manufacturer of high-power-density, high-performance power electronics products. APEI has found that the Application Builder can provide enormous benefits throughout the organization.

“I’m building applications to help us expedite our design processes,” says Brice McPherson, a Senior Staff Engineer at APEI. “Our engineers often spend time running analyses for the sales or manufacturing departments to find model results based on diverse conditions and requirements. The Application Builder will be hugely important for accelerating our work in this respect; any colleague outside of the engineering team will now be able to confidently run these studies by themselves, with no learning curve.”

The first application built by APEI looks at fusing current and ampacity of wire bonds—very small wires used to interconnect semiconductor devices with their packages.

The team at APEI envisions using the Application Builder for a variety of other projects, including applications to automate and streamline the calculation of wire bond inductance, package thermal performance, and more.

Source: COMSOL

Liquid Flow Sensor Wins Innovation Prize

Sensirion recently won the DeviceMed OEM-Components innovation prize at the Compamed 2014 exhibition. The disposable liquid flow sensor LD20-2000T for medical devices features an integrated thermal sensor element in a microchip. The pinhead-sized device is based on Sensirion’s CMOSens technology.

The LD20-2000T disposable liquid flow sensor provides liquid flow measurement capability from inside medical tubing (e.g., a catheter) in a low-cost sensor, suitable for disposable applications. As a result, you can measure drug delivery from an infusion set, an infusion pump, or other medical device in real time.

A microchip inside the disposable sensor measures the flow inside a fluidic channel. Accurate (~5%) flow rates from 0 to 420 ml/h and beyond can be measured. Inert medical-grade wetted materials ensure sterile operation with no contamination of the fluid. The straight, open flow channel with no moving parts provides high reliability. Using Sensirion’s CMOSens technology, the fully calibrated signal is processed and linearized on the 7.4-mm² chip.

Source: Sensirion

Data Center Power & Cost Management

Computers drive progress in today’s world. Both individuals and industry depend on a spectrum of computing tools. Data centers are at the heart of many computational processes, from communication to scientific analysis. They also consume over 3% of total power in the United States, and this amount continues to increase.[1]

Data centers run jobs submitted by their customers on a shared resource: the data center’s servers. Data centers and their customers negotiate a service-level agreement (SLA), which establishes the average expected job completion time. Servers are allocated for each job and must satisfy the job’s SLA. Job-scheduling software already provides some solutions for budgeting data center resources.

Data center construction and operation include fixed and accrued costs. Initial building expenses, such as purchasing and installing computing and cooling equipment, are one-time costs and are generally unavoidable. An operational data center must power this equipment, contributing an ongoing cost. Power management and the associated costs define one of the largest challenges for data centers.

To control these costs, the future of data centers lies in active participation in advanced power markets. More efficient cooling also provides cost-saving opportunities, but it requires infrastructure updates, which are costly and impractical for existing data centers. Fortunately, existing physical infrastructure can support participation in demand-response programs, such as peak shaving, regulation services (RS), and frequency control. In demand-response programs, consumers adjust their power consumption based on real-time power prices. The most promising mechanism for data center participation is RS.

Independent system operators (ISOs) manage demand response programs like RS. Each ISO must balance the power supply with the demand, or load, on the power grid in the region it governs. RS program participants provide necessary reserves when demand is high or consume more energy when demand is lower than the supply. The ISO communicates this need by transmitting a regulation signal, which the participant must follow with minimal error. In return, ISOs provide monetary incentives to the participants.

This essay appears in Circuit Cellar #293 (December 2014).

 
Data centers are ideal participants for demand response programs. A single data center requires a significant amount of power from the power grid. For example, the Massachusetts Green High-Performance Computing Center (MGHPCC), which opened in 2012, has a power capacity of 10 MW, equivalent to the demand of as many as 10,000 homes (www.mghpcc.org). Additionally, some workload types are flexible; jobs can be delayed or sped up within the given SLA.

Data centers have the ability to vary power consumption based on the ISO regulation signal. Server sleep states and dynamic voltage and frequency scaling (DVFS) are power modulation techniques. When the regulation signal requests lower power consumption from participants, data centers can put idle servers to sleep. This successfully reduces power consumption but is not instantaneous. DVFS performs finer power variations; power in an individual server can be quickly reduced in exchange for slower processing speeds. Demand response algorithms for data centers coordinate server state changes and DVFS tuning given the ISO regulation signal.
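
As a toy illustration of that coordination (not from the essay; the per-server power figures are invented), the sketch below tracks a power target by first choosing how many servers stay awake and then trimming the remainder with a single DVFS scaling factor:

def follow_power_target(target_kw, n_servers, idle_kw=0.1, peak_kw=0.3):
    # Coarse knob: how many servers stay awake (sleeping servers are treated as drawing ~0 kW).
    awake = min(n_servers, int(target_kw // peak_kw) + 1)
    # Fine knob: a DVFS level in [0, 1] scaling each awake server between idle and peak power.
    dvfs = (target_kw / awake - idle_kw) / (peak_kw - idle_kw)
    dvfs = max(0.0, min(1.0, dvfs))
    achieved_kw = awake * (idle_kw + dvfs * (peak_kw - idle_kw))
    return awake, dvfs, achieved_kw

For instance, follow_power_target(2.0, 20) wakes seven 300-W-peak servers and runs them at roughly 93% of the DVFS range to hit a 2-kW target; a real controller would also respect SLAs and the delay of waking sleeping servers.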

Accessing data from real data centers is a challenge. Demand response algorithms are tested via simulations of simplified data center models. Before data centers can participate in RS, algorithms must account for the complexity in real data centers.

Data collection within data center infrastructure enables more detailed models. Monitoring aids performance evaluation, model design, and operational changes to data centers. As part of my work, I analyze power, load, and cooling data collected from the MGHPCC. Sensor integration for data collection is essential to the future of data center power and cost management.

The power grid also benefits from data center participation in demand response programs. Renewable energy sources, such as wind and solar, are more environmentally friendly than traditional fossil fuel plants. However, the intermittent nature of such renewables creates a challenge for ISOs to balance the supply and load. Data center participation makes larger scale incorporation of renewables into the smart grid possible.

The future of data centers requires the management of power consumption in order to control costs. Currently, RS provides the best opportunities for existing data centers. According to preliminary results, successful participation in demand response programs could yield monetary savings around 50% for data centers.[2]


[1] J. Koomey, “Growth in Data Center Electricity Use 2005 to 2010,” Analytics Press, Oakland, August 1, 2010, www.analyticspress.com/datacenters.html.

[2] H. Chen, M. Caramanis, and A. K. Coskun, “The Data Center as a Grid Load Stabilizer,” Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 105–112, January 2014.


Annie Lane studies computer engineering at Boston University, where she performs research as part of the Performance and Energy-Aware Computing Lab (www.bu.edu/peaclab). She received the Clare Boothe Luce Scholar Award in 2014. Annie received additional funding from the Undergraduate Research Opportunity Program (UROP) and Summer Term Alumni Research Scholars (STARS). Her research focuses on power and cost optimization strategies in data centers.

 

Probe a Circuit with the Power Off (EE Tip #146)

Imagine something is not working on your surface-mounted board, so you decide to use your new oscilloscope. You take the scope probe in your right hand and put it on the microcontroller’s pin 23. Then, as you look at the scope’s screen, you inadvertently move your hand by 1 mm. Bingo!

The scope probe is now right between pin 23 and pin 24, and you short-circuit two outputs. As a result, the microcontroller is dead and, because you’re unlucky, a couple of other chips are dead too. You just successfully learned Error 22.

Some years ago a potential customer brought me an expensive professional light control system he wanted to use. After 10 minutes of talking, I opened the equipment to see how it was built. My customer warned me to take care because he needed to use it for a show the next day. Of course, I said that he shouldn’t worry because I’m an engineer. I took my oscilloscope probe and did exactly what I said you shouldn’t do. Within 5 s, I short-circuited a 48-V line with a 3V3 regulated wire. Smoke and fire! I transformed each of the beautiful system’s 40 or so integrated circuits into dead silicon. Need I say my relationship with that customer was rather cold for a few weeks?

In a nutshell, don’t ever try to connect a probe to a fine-pitch component when the power is on. Switch everything off, solder a test wire where you need it to be, clip your probe onto the wire end, ensure there isn’t a short circuit, and then switch on the power. Alternatively, you can buy a couple of fine-pitch grabbers (expensive but useful) or a stand-off to hold the probe in a precise position. But still, don’t try to connect them to a powered board.—Robert Lacoste, CC25, 2013

PIC32MX1/2/5 Microcontrollers for Embedded Control & More

Microchip Technology’s new PIC32MX1/2/5 series enables a wide variety of applications, ranging from digital audio to general-purpose embedded control. The microcontroller series offers a robust peripheral set for a wide range of cost-sensitive applications that require complex code and higher feature integration.

The microcontrollers feature:

  • Up to 83 DMIPS performance
  • Scalable memory options from 64/8-KB to 512/64-KB flash memory/RAM
  • Integrated CAN2.0B controllers with DeviceNet addressing support and programmable bit rates up to 1 Mbps, along with system RAM for storing up to 1,024 messages in 32 buffers
  • Four SPI/I2S interfaces
  • A Parallel Master Port (PMP) and capacitive touch sensing hardware
  • A 10-bit, 1-Msps, 48-channel ADC
  • Full-speed USB 2.0 Device/Host/OTG peripheral
  • Four general-purpose direct memory access controllers (DMAs) and two dedicated DMAs on each CAN and USB module

 

Microchip’s MPLAB Harmony software development framework supports the MCUs. You can take advantage of Microchip’s software packages, such as Bluetooth audio development suites, a Bluetooth Serial Port Profile library, audio equalizer filter libraries, various decoders (including AAC, MP3, WMA, and SBC), sample-rate conversion libraries, CAN2.0B PLIBs, USB stacks, and graphics libraries.

Microchip’s free MPLAB X IDE, the MPLAB XC32 compiler for PIC32, the MPLAB ICD3 in-circuit debugger, and the MPLAB REAL ICE in-circuit emulation system also support the series.

The PIC32MX1/2/5 Starter Kit costs $69. The new PIC32MX1/2/5 microcontrollers with the 40-MHz/66 DMIPS speed option are available in 64-pin TQFP and QFN packages and 100-pin TQFP packages. The 50-MHz/83 DMIPS speed option for this PIC32MX1/2/5 series is expected to be available starting in late January 2015. Pricing starts at $2.75 each, in 10,000-unit quantities.

 

Source: Microchip Technology

New Fully Differential Amplifiers (FDAs)

Texas Instruments has announced two new fully differential amplifiers (FDAs) intended to improve performance in radar, test-and-measurement equipment, and wireless base stations. The LMH3401 and LMH5401 FDAs provide DC-coupled applications with high-quality AC performance to improve system capabilities. They offer higher bandwidth and slew rate, as well as lower distortion, than existing ADC drivers.

The LMH3401 delivers 7 GHz of –3-dB bandwidth at 16-dB gain, a high slew rate of 18,000 V/µs, and low harmonic distortion of –77 dBc at 500 MHz. The LMH5401 can be configured for 6 dB of gain or more, delivering 6.2 GHz of –3-dB bandwidth at 12-dB gain.

The LMH3401 and LMH5401 are available in a 14-pin, 2.5-mm × 2.5-mm QFN package. The LMH3401 costs $8.95 and the LMH5401 costs $7.95 in 1,000-unit quantities.

Source: Texas Instruments

STMicro Reduces Time to Development with Open.MEMS Licensing

STMicroelectronics recently announced the launch of the Open.MEMS licensing program. Its purpose is to encourage broad use of its MEMS and sensors among open-community developers. Open.MEMS licensees can access free drivers, middleware, and application software, beginning with “sensor fusion for 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer, considered vital for many portable and wearable applications.”

STMicro’s STM32 Open Development platform supports Open.MEMS, which went live on November 11, 2014, and will continue to be expanded regularly with additional low-level drivers, middleware/libraries, and application-layer code.

 

2014 Year-End Notes

In every December issue, we like to take a look at where we’ve been and where we’re going. Since this is the final issue of the year, let’s review a few important notes about 2014 and the 2015 editorial schedule.

CIARCIA PURCHASES CIRCUIT CELLAR

In early October, Circuit Cellar’s founder Steve Ciarcia finalized a deal to purchase Circuit Cellar, audioXpress, Voice Coil, Loudspeaker Industry Sourcebook, and their respective websites, newsletters, and products from Netherlands-based Elektor International Media. After gaining international recognition for writing BYTE magazine’s “Ciarcia’s Circuit Cellar” column, Ciarcia launched Circuit Cellar magazine in 1988. Since then, he’s published hundreds of articles and editorials in the magazine.

Circuit Cellar founder Steve Ciarcia addresses the team in Vermont

WIZnet CONNECT THE MAGIC 2014 DESIGN CHALLENGE

In March 2014, engineers around the globe began working on innovative Internet of Things (IoT) design projects based on WIZnet’s WIZ550io Ethernet controller module. In September, after a few weeks of judging, we announced the winners. Hans Peter Portner won First Prize for his Chimaera design, a touchless, network-ready, polyphonic music controller.

Portner’s Chimaera project

2015 EDITORIAL CALENDAR

Interested in publishing an article in a 2015 edition of Circuit Cellar? Email a proposal or complete submission to editor@circuitcellar.com. Our 2015 editorial calendar is now live.

Budgeting Power in Data Centers

In my May 2014 Circuit Cellar article, “Data Centers in the Smart Grid” (Issue 286), I discussed the growing data center energy challenge and a novel potential solution that modulates data center power consumption based on requests from the electricity provider. In the same article, I elaborated on how data centers can provide “regulation service reserves” by tracking a dynamic power regulation signal broadcast by the independent system operator (ISO).

Demand-side provision of regulation service reserves is one way of providing capacity reserves, and it is gaining traction in US energy markets. Frequency control reserves and operating reserves are other examples. These reserves are similar to each other in the sense that the demand side, such as a data center, modulates its power consumption in reaction to local measurements and/or to signals broadcast by the ISO. The time scale of modulation, however, differs depending on the reserves: modulation can be done in real time, every few seconds, or every few minutes.

In addition to the emerging mechanisms of providing capacity reserves in the grid, there are several other options for a data center to manage its electricity cost. For example, the data center operators can negotiate electricity pricing with the ISO such that the electricity cost is lower when the data center consumes power below a given peak value. In this scenario, the electricity cost is significantly higher if the center exceeds the given limit. “Peak shaving,” therefore, refers to actively controlling the peak power consumption using data center power-capping mechanisms. Other mechanisms of cost and capacity management include load shedding, which refers to temporary load reduction in a data center; load shifting, which delays executing loads to a future time; and migration of a subset of loads to other facilities, if such an option is available.

All these mechanisms require the data center to be able to dynamically cap its power within a tolerable error margin. Even in the absence of advanced cost management strategies, a data center generally needs to operate under a predetermined maximum power consumption level, as the electricity distribution infrastructure of the data center needs to be built accordingly.

This article appears in Circuit Cellar 292.

Most data centers today run a diverse set of workloads (applications) at a given time. Therefore, an interesting sub-problem of the power capping problem is how to distribute a given total power cap efficiently among the computational, cooling, and other components in a data center. For example, if there are two types of applications running in a data center, should one give equal power caps to the servers running each of these applications, or should one favor one of the applications?

Even when the loads have the same level of urgency or priority, designating equal power to different types of loads does not always lead to efficient operation. This is because the power-performance trade-offs of applications vary significantly. One application may meet user quality-of-service (QoS) expectations or service level agreements (SLAs) while consuming less power compared to another application.

Another reason the budgeting problem is interesting is the temperature- and cooling-related heterogeneity among the servers in a data center. Even when servers in a data center are all of the same kind (which is rarely the case), their physical location in the data center, the heat recirculation effects (which refer to some of the heat output of servers being recirculated back into the center and affecting the thermal dynamics), and the heat transfer among the servers create differences in the temperatures and cooling efficiencies of servers. Thus, while budgeting, one may want to dedicate larger power caps to servers that are more cooling-efficient.

As the computational units in a data center need to operate at safe temperatures below manufacturer-provided limits, the budgeting policy in the data center needs to make sure a sufficient power budget is saved for the cooling elements. On the other hand, if there is over-cooling, then the overall efficiency drops because there is a smaller power budget left for computing.

I refer to the problem of how to efficiently allocate power to each server and to the cooling units as the “power budgeting” problem. The rest of the article elaborates on how this problem can be formulated and solved in a practical scenario.

Characterizing Loads

For distributing a total computational power budget in an application-aware manner, one needs to have an estimate of the relationship between server power and application performance. In my lab at Boston University, my students and I studied the relationship between application throughput and server power on a real-life system, and constructed empirical models that mimic this relationship.

Figure 1 demonstrates how the relationship between the instruction throughput and power consumption of a specific enterprise server changes depending on the application. Another interesting observation from this figure is that the performance of some applications saturates beyond a certain power value. In other words, even when a larger power budget is given to such an application by letting it run with more threads (or, in other cases, letting the processor operate at a higher speed), the application throughput does not improve further.

Figure 1: The plot demonstrates billions of instructions per second (BIPS) versus server power consumption as measured on an Oracle enterprise server including two SPARC T3 processors.

Estimating the slope of the throughput-power curve and the potential performance saturation point helps make better power budgeting decisions. In my lab, we constructed a model that estimates the throughput given server power and hardware performance counter measurements. In addition, we analyzed the potential performance bottlenecks resulting from a high number of memory accesses and/or the limited number of software threads in the application. We were able to predict the saturation point for each application via a regression-based equation constructed based on this analysis. Predicting the maximum server power using this empirical modeling approach gave a mean error of 11 W for our 400-to-700-W enterprise server.[1]
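
The model itself relies on hardware performance counters and a regression over several features, but the flavor of the fit can be sketched with plain least squares: fit the rising part of the BIPS-versus-power curve and hold the prediction flat past the estimated saturation point (illustrative Python, not the lab’s code):

import numpy as np

def fit_bips_vs_power(power_w, bips):
    power_w, bips = np.asarray(power_w, float), np.asarray(bips, float)
    sat = int(np.argmax(bips))                       # crude saturation estimate: peak measured BIPS
    slope, intercept = np.polyfit(power_w[:sat + 1], bips[:sat + 1], 1)

    def predict(p):
        # Linear below the saturation power, flat above it.
        return slope * np.minimum(p, power_w[sat]) + intercept

    return predict, power_w[sat]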

Such methods for power-performance estimations highlight the significance of telemetry-based empirical models for efficient characterization of future systems. The more detailed measurement capabilities newer computing systems can provide—such as the ability to measure power consumption of various sub-components of a server—the more accuracy one can achieve in constructing models to help with the data center management.

Temperature, Once Again

In several of my earlier articles this year, I emphasized the key role of temperature awareness in improving computing energy efficiency. This key role is a result of the high cost of cooling, the fact that server energy dynamics also rely substantially on temperature (e.g., consider the interactions among temperature, fan power, and leakage power), and the impact of processor thermal management policies on performance.

Solving the budgeting problem efficiently, therefore, relies on having good estimates for how a given power allocation among the servers and cooling units would affect the temperature. The first step is estimating the CPU temperature for a given server power cap. In my lab, we modeled the CPU temperature as a function of the CPU junction-to-air thermal resistance, CPU power, and the inlet temperature to the server. CPU thermal resistance is determined by the hardware and packaging choices, and can be characterized empirically. For a given total server power, CPU power can be estimated using performance counter measurements in a similar way to estimating the performance given a server cap, as described above (see Figure 1). Our simple empirical temperature model was able to estimate temperature with a mean error of 2.9°C in our experiments on an Oracle enterprise server.[1]
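
Written out, that first step is simply T_CPU ≈ T_inlet + R_ja × P_CPU, with R_ja the junction-to-air thermal resistance in °C/W. In sketch form (the 0.2 °C/W default is a placeholder, not the value characterized in the paper):

def cpu_temperature_c(p_cpu_w, t_inlet_c, r_ja_c_per_w=0.2):
    # r_ja is characterized empirically for the platform; p_cpu_w is itself estimated
    # from the server power cap and performance counter measurements.
    return t_inlet_c + r_ja_c_per_w * p_cpu_w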

Heat distribution characteristics of a data center depend strongly on the cooling technology used. For example, traditional data centers use a hot aisle-cold aisle configuration, where the cold air from the computer room air conditioner (CRAC) units and the hot air coming out of the servers are separated by the rows of racks that contain the servers. The second step in thermal estimation, therefore, has to do with estimating the impact of servers on one another and the overall impact of the cooling system.

In a traditional hot-cold aisle setting, the server inlet temperatures can be estimated based on a heat distribution matrix, the power consumption of all the servers, and the CRAC air temperature (which is the cold air input to the data center). The heat distribution matrix can be considered a lumped model representing the impact of heat recirculation and the airflow properties together in a single N × N matrix, where N is the number of servers.[2]
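
Written as an equation, the inlet temperatures follow T_inlet ≈ D·P + T_CRAC, with D that heat-distribution matrix and P the vector of server powers. A small numpy sketch with a made-up three-server matrix (a real D is characterized per facility):

import numpy as np

def inlet_temperatures_c(D, server_power_w, t_crac_c):
    # D[i, j]: degrees C added to server i's inlet per watt dissipated by server j.
    return D @ np.asarray(server_power_w, float) + t_crac_c

D = np.array([[0.010, 0.004, 0.001],    # placeholder recirculation coefficients
              [0.004, 0.010, 0.004],
              [0.001, 0.004, 0.010]])
print(inlet_temperatures_c(D, [300, 250, 280], t_crac_c=18.0))   # -> about [22.3, 22.8, 22.1]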

Recently, some newer data centers have adopted in-row coolers that leverage liquid cooling to improve cooling efficiency. In such settings, the heat recirculation effects are expected to be less significant, as most of the heat output of the servers is immediately removed from the data center.

In my lab, my students and I used low-cost data center temperature models to enable fast dynamic decisions.[1] Detailed thermal simulation of data centers is possible through computational fluid dynamics tools. Such tools, however, typically require prohibitively long simulation times.

Budgeting Optimization

What should the goal be during power budgeting? Maximizing overall throughput in the data center may seem like a reasonable goal. However, such a goal would favor allocating larger power caps to applications with higher throughput, and absolute throughput does not necessarily indicate whether the application QoS demand is met. For example, an application with a lower BIPS may have a stricter QoS target.

Consider this example of a better budgeting metric: the fair speedup metric computes the harmonic mean of per-server speedup (i.e., per-server speedup is the ratio of measured BIPS to the maximum BIPS for an application). The purpose of this metric is to ensure that none of the applications starve while overall throughput is maximized.
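
Concretely, with s_i the ratio of measured BIPS to maximum BIPS on server i, fair speedup is N / Σ(1/s_i). A compact sketch:

def fair_speedup(measured_bips, max_bips):
    # Harmonic mean of per-server speedups; a single starving server drags the metric down hard.
    speedups = [m / mx for m, mx in zip(measured_bips, max_bips)]
    return len(speedups) / sum(1.0 / s for s in speedups)

For two servers at speedups of 1.0 and 0.2, the harmonic mean is about 0.33, whereas a plain average would report 0.6, which is why the metric rewards allocations that keep every application moving.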

It is also possible to impose constraints on the budgeting optimization such that a specific performance or throughput level is met for one or more of the applications. The ability to meet such constraints strongly relies on the ability to estimate the power-versus-performance trends of the applications. Thus, the empirical models I mentioned above are also essential for delivering more predictable performance to users.

Figure 2 demonstrates how the hill-climbing strategy my students and I designed for optimizing fair speedup evolves. The algorithm starts by setting the CRAC temperature to its last known optimal value, which is 20.6°C in this example. The CRAC power consumption corresponding to providing air input to the data center at 20.6°C can be computed using the relationship between the CRAC temperature and the ratio of computing power to cooling power.[3] This relationship can often be derived from datasheets for the CRAC units and/or for the data center cooling infrastructure.

Figure 2: The budgeting algorithm starts from the last known optimal CRAC temperature value, and then iteratively aims to improve on the objective.

Once the cooling power is subtracted from the overall cap, the algorithm allocates the remaining power among the servers with the objective of maximizing the fair speedup. Other constraints in the optimization formulation prevent any server from exceeding manufacturer-given redline temperatures and ensure each server receives a feasible power cap that falls between the server’s minimum and maximum power consumption levels.

The algorithm then iteratively searches for a better solution, as demonstrated in steps 2 to 6 in Figure 2. Once the algorithm detects that the fair speedup is decreasing (e.g., the fair speedup in step 6 is less than the speedup in step 5), it converges to the solution computed in the last step (e.g., it converges to step 5 in the example). Note that setting cooler CRAC temperatures typically implies a larger amount of cooling power, so the fair speedup drops. However, as the CRAC temperature increases beyond a point, the performance of the hottest servers is degraded to maintain CPU temperatures below the redline; thus, a further increase in the CRAC temperature is no longer useful (as in step 6).
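
A skeleton of that search loop might look like the following single-direction sketch. The three callables stand in for the cooling model, the per-server allocation (which enforces redline temperatures and per-server power limits), and the fair-speedup evaluation described above; this is an illustration, not the authors’ implementation:

def budget_power(total_cap_w, t_crac_start_c, cooling_power_w, allocate_server_caps,
                 fair_speedup_of, step_c=0.5, t_crac_max_c=30.0):
    # Hill-climb on the CRAC set point: warmer supply air frees cooling power for computing,
    # until redline throttling on the hottest servers makes the fair speedup fall again.
    best_fs, best_plan = float('-inf'), None
    t_crac = t_crac_start_c
    while t_crac <= t_crac_max_c:
        compute_budget_w = total_cap_w - cooling_power_w(t_crac)
        server_caps = allocate_server_caps(compute_budget_w, t_crac)
        fs = fair_speedup_of(server_caps)
        if fs < best_fs:                 # objective started dropping: keep the previous step
            break
        best_fs, best_plan = fs, (t_crac, server_caps)
        t_crac += step_c
    return best_plan, best_fs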

In our experiments, this iterative algorithm took less than a second of running time using MATLAB CVX[4] for a small data center of 1,000 servers on an average desktop computer. This result indicates that the algorithm can be run in a much shorter time with an optimized implementation, allowing for frequent real-time re-budgeting of power in a modern data center with a larger number of servers. Our algorithm improved fair speedup and BIPS per watt by 10% to 20% compared to existing budgeting techniques.

Challenges

The initial methods and results I discussed above demonstrate promising energy efficiency improvements; however, there are many open problems for data center power budgeting.

First, the above discussion does not consider loads that depend on each other. For example, high-performance computing applications often have heavy communication among server nodes. This means that the budgeting method needs to account for the impact of inter-node communication in its performance estimates as well as in its job allocation decisions in data centers.

Second, especially for data centers with a non-negligible amount of heat recirculation, thermally-aware job allocation significantly affects CPU temperature. Thus, job allocation should be optimized together with budgeting.

In data centers, there are elements other than the servers that consume significant amounts of power, such as storage units. In addition, there is typically a heterogeneous set of servers. Thus, a challenge lies in budgeting power among heterogeneous computing, storage, and networking elements.

Finally, the discussion above focuses on budgeting a total power cap among servers that are actively running applications. One can, however, also adjust the number of servers actively serving the incoming loads (by putting some servers into sleep mode/turning them off) and also consolidate the loads if desired. Consolidation often decreases performance predictability. The server provisioning problem needs to be solved in concert with the budgeting problem, taking the additional overheads into account. I believe all these challenges make the budgeting problem an interesting research problem for future data centers.

 

Ayse K. Coskun (acoskun@bu.edu) is an assistant professor in the Electrical and Computer Engineering Department at Boston University. She received MS and PhD degrees in Computer Science and Engineering from the University of California, San Diego. Coskun’s research interests include temperature and energy management, 3-D stack architectures, computer architecture, and embedded systems. She worked at Sun Microsystems (now Oracle) in San Diego, CA, prior to her current position at BU. Coskun serves as an associate editor of the IEEE Embedded Systems Letters.

 

 
[1] O. Tuncer, K. Vaidyanathan, K. Gross, and A. K. Coskun, “CoolBudget: Data Center Power Budgeting with Workload and Cooling Asymmetry Awareness,” in Proceedings of IEEE International Conference on Computer Design (ICCD), October 2014.
[2] Q. Tang, T. Mukherjee, S. K. S. Gupta, and P. Cayton, “Sensor-Based Fast Thermal Evaluation Model for Energy Efficient High-Performance Datacenters,” in ICISIP-06, October 2006.
[3] J. Moore, J. Chase, P. Ranganathan, and R. Sharma, “Making Scheduling ‘Cool’: Temperature-Aware Workload Placement in Data Centers,” in USENIX ATC-05, 2005.
[4] CVX Research, “CVX: Matlab Software for Disciplined Convex Programming,” Version 2.1, September 2014, http://cvxr.com/cvx/.

Polymer Capacitors for Industrial Applications

KEMET introduced new automotive-grade polymer capacitors at Electronica 2014. The T591 high-performance automotive-grade polymer tantalum series delivers stability and endurance under harsh humidity and temperature conditions. It is available in capacitances up to 220 µF and rated up to 10 V, with operating temperatures up to 125°C.

“The T591 Series was developed with enhancements in polymer materials, design and manufacturing processes to meet the increasing demands of the telecommunications, industrial, and now, automotive segments,” Dr. Philip Lessner, KEMET Senior Vice President and Chief Technology Officer, was quoted as saying in a release.

You can use the series for a variety of projects, such as decoupling and filtering of DC-to-DC converters in automotive applications or industrial applications in harsh conditions.

Source: KEMET

 

 

24-Bit Sigma Delta A/D Converter

Analog Devices recently announced a 24-bit sigma-delta A/D converter with a fast and flexible output data rate for high-precision instrumentation and process control applications.

The AD7175-2 converter delivers 24 noise-free bits at 20 SPS and 17.2 noise-free bits at 250 ksps, providing you with a wider dynamic range. With twice the throughput for the same power consumption versus competing solutions, the AD7175-2 enables faster, more responsive measurement systems, providing a 50-ksps/channel scan rate with a 20-µs settling time.

The integrated, low-noise, true rail-to-rail input buffer enables quick and easy sensor interfacing, reduces design and layout complexity, simplifies analog drive circuitry, and reduces PCB area. The AD717x family, with a wide range of pin- and software-compatible devices, allows consolidation and standardization across system platforms.

According to Analog Devices, the converter gives “designers a wider dynamic range, which enables smaller signal deviations to be measured as required within analytical laboratory instrumentation systems.”
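
“Noise-free bits” tie the output noise to the usable input span: noise-free resolution = log2(full-scale range / peak-to-peak noise). As a rough worked example (figures chosen for illustration, not taken from the AD7175-2 datasheet), a 5-V span with about 0.3 µV of peak-to-peak noise corresponds to roughly 24 noise-free bits, since 5 V / 2^24 ≈ 0.3 µV.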

Specs and features:

  • 2x the throughput for the same power consumption in comparison to other devices
  • Enables faster measurement systems providing a 50-ksps/channel scan rate with 20-µs settling time
  • Integrated true rail-to-rail input buffer for easy sensor interfacing and simplified analog drive circuitry
  • User-configurable input channels
  • 2 differential or 4 single-ended channels
  • Per-channel independent programmability
  • Integrated 2.5-V buffered 2-ppm/°C reference
  • Flexible and per-channel programmable digital filters
  • Enhanced filters for simultaneous 50-Hz and 60-Hz rejection
  • −40°C to +105°C operating temperature range

Source: Analog Devices