Computers drive progress in today's world, and both individuals and industry depend on a spectrum of computing tools. Data centers are at the heart of many computational processes, from communication to scientific analysis. They also consume over 3% of the total power in the United States, and this amount continues to increase [1].
Data centers run jobs submitted by their customers on a shared resource: the data center's servers. Data centers and their customers negotiate a service-level agreement (SLA), which establishes the expected average job completion time. The servers allocated to each job must satisfy that job's SLA, and job-scheduling software already provides some solutions for budgeting data center resources.
Data center construction and operation incur both fixed and accrued costs. Initial building expenses, such as purchasing and installing computing and cooling equipment, are one-time costs and are generally unavoidable. An operational data center must then power this equipment, contributing an ongoing cost. Power management and its associated costs pose one of the largest challenges for data centers.
To control these costs, the future of data centers lies in active participation in advanced power markets. More efficient cooling also offers savings, but it requires infrastructure updates, which are costly and impractical for existing data centers. Fortunately, existing physical infrastructure can support participation in demand response programs, such as peak shaving, regulation services (RS), and frequency control. In demand response programs, consumers adjust their power consumption based on real-time power prices. The most promising mechanism for data center participation is RS.
Independent system operators (ISOs) manage demand response programs like RS. Each ISO must balance the power supply with the demand, or load, on the power grid in the region it governs. RS program participants provide necessary reserves when demand is high or consume more energy when demand is lower than the supply. The ISO communicates this need by transmitting a regulation signal, which the participant must follow with minimal error. In return, ISOs provide monetary incentives to the participants.
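The signal-following requirement can be made concrete with a small sketch. The formula and all numbers below are illustrative assumptions, not any ISO's actual scoring rule: a participant commits to an average power draw and a regulation capacity, the signal sets a moving target between those bounds, and the tracking error measures how far actual consumption strays from the target.

```python
def tracking_error(signal, actual, p_avg, capacity):
    """Mean absolute deviation from the regulation target,
    expressed as a fraction of the committed regulation capacity.

    signal   -- ISO regulation signal samples, each in [-1, 1]
    actual   -- measured power consumption (MW) per interval
    p_avg    -- committed average power draw (MW)
    capacity -- committed regulation capacity (MW)
    """
    targets = [p_avg + s * capacity for s in signal]
    return sum(abs(a - t) for a, t in zip(actual, targets)) / (
        capacity * len(signal))

# Hypothetical four-interval example: a data center offering 2 MW of
# regulation capacity around an 8 MW average draw.
signal = [0.5, -1.0, 0.0, 1.0]   # ISO regulation signal
actual = [9.0, 6.2, 8.0, 9.8]    # measured consumption (MW)
err = tracking_error(signal, actual, p_avg=8.0, capacity=2.0)
# targets are 9.0, 6.0, 8.0, 10.0 MW; deviations 0, 0.2, 0, 0.2 MW,
# so err = 0.1 MW / 2 MW = 0.05
```

A lower error means better signal tracking, which in real RS markets translates into larger payments to the participant.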
Data centers are ideal participants for demand response programs. A single data center draws a significant amount of power from the grid. For example, the Massachusetts Green High-Performance Computing Center (MGHPCC), which opened in 2012, has a power capacity of 10 MW, equivalent to the demand of as many as 10,000 homes (www.mghpcc.org). Additionally, some workload types are flexible; jobs can be delayed or sped up within the given SLA.
Data centers can vary power consumption based on the ISO regulation signal using two main power modulation techniques: server sleep states and dynamic voltage and frequency scaling (DVFS). When the regulation signal requests lower power consumption from participants, data centers can put idle servers to sleep. This substantially reduces power consumption but is not instantaneous. DVFS performs finer power variations; power in an individual server can be quickly reduced in exchange for slower processing speeds. Demand response algorithms for data centers coordinate server state changes and DVFS tuning given the ISO regulation signal.
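A toy controller illustrates how these two knobs might be coordinated. This is a minimal sketch under assumed per-server power figures, not a published algorithm: it first uses DVFS on active servers (fast, fine-grained), and only sleeps servers (slow, coarse) when DVFS alone cannot reach the power target.

```python
import math

# Illustrative per-server power figures (assumptions, not measurements):
P_ACTIVE_MAX = 200.0  # W, active server at the highest DVFS setting
P_ACTIVE_MIN = 120.0  # W, active server at the lowest DVFS setting
P_SLEEP = 10.0        # W, server in a sleep state

def plan_power(target_w, n_active, n_sleeping):
    """Return (dvfs_level, servers_to_sleep) approximating target_w.

    dvfs_level in [0, 1] scales every active server's power linearly
    between P_ACTIVE_MIN and P_ACTIVE_MAX.
    """
    base = n_active * P_ACTIVE_MIN + n_sleeping * P_SLEEP
    span = n_active * (P_ACTIVE_MAX - P_ACTIVE_MIN)
    if target_w >= base:
        # DVFS alone can hit the target: pick the matching level.
        level = min(1.0, (target_w - base) / span) if span else 0.0
        return level, 0
    # DVFS is at its floor and power is still too high:
    # sleep just enough active servers to cover the deficit.
    deficit = base - target_w
    per_server_saving = P_ACTIVE_MIN - P_SLEEP
    to_sleep = min(n_active, math.ceil(deficit / per_server_saving))
    return 0.0, to_sleep

# 100 active servers, none asleep. A 16 kW target lands mid-range,
# so DVFS handles it: level 0.5, no state changes.
print(plan_power(16000.0, 100, 0))   # (0.5, 0)
# A 10 kW target is below the 12 kW DVFS floor, so the controller
# sleeps 19 servers to make up the 2 kW deficit.
print(plan_power(10000.0, 100, 0))   # (0.0, 19)
```

The split mirrors the trade-off in the text: DVFS responds within the signal's update period, while sleep transitions are reserved for larger, slower excursions.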
Because accessing data from real data centers is a challenge, demand response algorithms are typically tested via simulations of simplified data center models. Before data centers can participate in RS, these algorithms must account for the complexity of real data centers.
Data collection within data center infrastructure enables more detailed models. Monitoring aids performance evaluation, model design, and operational changes to data centers. As part of my work, I analyze power, load, and cooling data collected from the MGHPCC. Sensor integration for data collection is essential to the future of data center power and cost management.
The power grid also benefits from data center participation in demand response programs. Renewable energy sources, such as wind and solar, are more environmentally friendly than traditional fossil fuel plants. However, the intermittent nature of such renewables creates a challenge for ISOs to balance the supply and load. Data center participation makes larger scale incorporation of renewables into the smart grid possible.
The future of data centers requires managing power consumption in order to control costs. Currently, RS provides the best opportunity for existing data centers. According to preliminary results, successful participation in demand response programs could yield monetary savings of around 50% for data centers [2].
[1] J. Koomey, "Growth in Data Center Electricity Use 2005 to 2010," Analytics Press, Oakland, CA, August 1, 2010, www.analyticspress.com/datacenters.html.
[2] H. Chen, M. Caramanis, and A. K. Coskun, "The Data Center as a Grid Load Stabilizer," in Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), January 2014, pp. 105–112.
Annie Lane studies computer engineering at Boston University, where she performs research as part of the Performance and Energy-Aware Computing Lab (www.bu.edu/peaclab). She received the Clare Boothe Luce Scholar Award in 2014, with additional funding from the Undergraduate Research Opportunity Program (UROP) and Summer Term Alumni Research Scholars (STARS). Her research focuses on power and cost optimization strategies for data centers.