Nov 14, 2014

Networking giant Cisco recently published its annual Global Cloud Index, in which it predicts annual data center traffic will triple, reaching 8.6 zettabytes by 2018 (a single zettabyte equals a trillion gigabytes). Certainly, many of those bytes will be linked in various ways to the Internet of Things, and as they move between data centers and users, all those bytes will consume a massive number of electrons.
Research conducted by the Natural Resources Defense Council and consultancy Anthesis found that in 2013, data centers across the United States consumed an estimated 91 billion kilowatt-hours of electricity—or the equivalent annual output of 34 large coal-fired power plants.
If you've seen news stories related to data centers and energy in the past few years, they've probably been associated with one of a handful of major tech companies, such as Apple, Facebook, Google or eBay, that have invested huge amounts of money into their data farms, powering them with renewable-energy infrastructure. But these power users are the outliers. Small or midsize data centers, which often operate a single facility for their own needs, consume nearly half of all the electricity expended by data centers in the United States, while multi-tenant (or co-located) data centers consume 19 percent, according to the NRDC report.
Sadly, much of that energy is wasted. The study found that at many data centers, servers remain powered around the clock despite only occasionally performing any work, while a third of servers sit completely inactive yet still plugged in, drawing power for no reason. These are issues that can be resolved through better server utilization, particularly virtualization: a process in which software consolidates the operating systems or applications running on multiple underutilized servers onto fewer physical machines, thereby saving energy, space and management costs.
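The consolidation logic behind virtualization can be sketched in a few lines of code. The following is an illustrative first-fit packing of lightly loaded workloads onto shared hosts; the utilization figures and the 80 percent capacity ceiling are made up for the example, not drawn from the NRDC study.

```python
# Illustrative sketch: estimate how many physical hosts a simple
# first-fit consolidation would need, given per-server CPU utilization.
# All numbers here are hypothetical.

def consolidate(utilizations, capacity=0.8):
    """First-fit bin packing: place each virtualized workload on the
    first host with room, keeping hosts at or below `capacity`."""
    hosts = []  # current load on each physical host
    for u in sorted(utilizations, reverse=True):
        for i, load in enumerate(hosts):
            if load + u <= capacity:
                hosts[i] += u
                break
        else:
            hosts.append(u)  # no host has room; provision a new one
    return len(hosts)

# Ten servers idling at 5-15% utilization can share far fewer machines.
workloads = [0.05, 0.10, 0.15, 0.08, 0.12, 0.05, 0.07, 0.10, 0.06, 0.09]
print(consolidate(workloads))  # -> 2 hosts instead of ten
```

Even this toy calculation shows the shape of the savings: ten mostly idle machines, each drawing power around the clock, collapse into two well-utilized ones.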
But another major energy sink is the air conditioning used for cooling the server banks, which generate significant heat. A number of years ago, savvy data center operators began to realize that this was an area in which they could make substantial cuts to energy usage, and they began wiring their data centers with multiple temperature sensors, which provided them with a real-time heat map of the facility (or, more likely, a cold map, given the propensity to just crank up the AC to prevent servers from experiencing stress).
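A minimal sketch of what such a sensor-driven heat map might look like in software, assuming readings arrive as (rack, temperature) pairs; the rack names, readings and 75-degree setpoint below are all hypothetical.

```python
# Sketch of a per-rack heat map built from tagged temperature readings,
# flagging racks running hotter than a setpoint. Sensor data is made up.

from collections import defaultdict

SETPOINT_F = 75.0  # hypothetical target inlet temperature

def rack_heat_map(readings):
    """readings: iterable of (rack_id, temp_f) tuples from sensors.
    Returns {rack_id: average temperature}."""
    sums = defaultdict(lambda: [0.0, 0])
    for rack, temp in readings:
        sums[rack][0] += temp
        sums[rack][1] += 1
    return {rack: total / n for rack, (total, n) in sums.items()}

readings = [("A1", 71.2), ("A1", 72.5), ("B3", 78.9), ("B3", 80.1)]
heat_map = rack_heat_map(readings)
hot_racks = [r for r, t in heat_map.items() if t > SETPOINT_F]
print(hot_racks)  # -> ['B3']
```

In practice the value of the wired (and later wireless) deployments was exactly this: turning scattered point readings into a facility-wide picture that reveals which racks actually need the cold air.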
Wiring sensors is a slow and costly process that offers little flexibility for repositioning sensors as a data center's layout changes. In 2008, RFID Journal reported on Cisco using active (battery-powered) RFID tags to track its IT assets within data centers. The following year, it published an article about Microsoft using temperature sensors to monitor hot and cold spots within its data centers around the world.
RFID gained significant traction as a means of tracking assets, something data centers and their customers are required to do both to meet government regulations and to account for the proprietary or private data their servers hold. In the years since, falling sensor costs and the standardization of wireless sensor networking have meant that companies can quickly and easily deploy sensors that not only track assets but also collect and transmit temperature and humidity data to a data center's building energy-management software.
Companies such as RF Code and Dust Networks market their sensor networks and energy efficiency programs to data centers. During a recent briefing, Richard Jenkins, RF Code's VP of marketing, described many of the success stories the company has had with data centers, including providing its sensor and data-collection services to CenturyLink, which operates more than 55 data centers throughout the United States and is one of the largest providers of co-located data centers.
"CenturyLink is performing quite precise temperature and humidity tracking using RF Code asset sensors," Jenkins explained. "And they're passing [the energy savings] along to customers." According to Jenkins, the company expects to achieve $15 million in energy savings across its data centers by 2018, with a return on investment of less than one year.
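A sub-one-year return on investment boils down to a simple payback calculation. The deployment cost and annual savings below are hypothetical stand-ins to show the arithmetic, not CenturyLink's actual figures.

```python
# Back-of-the-envelope payback period for a sensor deployment.
# Both inputs are hypothetical, not CenturyLink's numbers.

def payback_years(deployment_cost, annual_savings):
    """Years until cumulative savings cover the up-front cost."""
    return deployment_cost / annual_savings

# e.g. a $2M sensor rollout trimming $3M/year from cooling bills
print(payback_years(2_000_000, 3_000_000))  # under one year
```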
Indeed, there is money to be saved by tamping down energy bills, and even by allowing server rooms to reach temperatures much higher than data center operators had previously thought tenable. Many years of experimentation have shown that servers can handle such temperatures without overheating or suffering performance failures. Aside from adding sensors that do double duty for asset tracking and environmental monitoring, data centers are installing shields called blanking panels, which close off empty slots inside server racks to ensure that cooling air flows through the rack correctly.
Yet Anthesis partner Josh Whitney, who led the research for the NRDC data center report, says that sensor systems have become relatively common these days, even among co-located data centers and many small and midsize facilities. The real problem, and the real opportunity, is a misalignment of incentives to lower power consumption. Oftentimes, he says, the data center pays the power bill while the client pays a flat rate for data services, so the client has no incentive to ask the data center to curb its energy consumption. In fact, the client's service terms might even stand in the way of the data center saving energy.
"In a co-located data center, you might have 100 customers' servers installed in 10 racks," Whitney explained. "Each has its own service-level agreement. Most might say 'we want temperatures set at 75,' but one might say 'we want temperatures set at 70.'"
According to Whitney, having invested heavily in energy-management systems and server-utilization strategies, some "progressive" data center operators, such as Supernap, are starting to turn the tide, by essentially dictating some of the basic energy-usage terms to their clients. This allows such firms to improve overall energy performance.
Intelligent sensor networks can play an important role in both reducing a data center's carbon footprint and helping track and account for the highly valuable assets and data stored within. But in co-located data centers, they cannot, on their own, save energy. That step comes from data center operators and clients establishing service-level agreements that maximize efficiency and cut through the perception that data centers should be cold, no matter what.
Jenkins told me that in the past, "no one cared how cold they ran their data centers—waste hasn't been an issue." And it seems that attitude has not really changed.
Last year, the Uptime Institute surveyed more than 1,000 global data center operators, and only half of all North American respondents said they considered energy efficiency to be very important. If data centers begin realigning incentives to save energy and their customers get to share in the fruits of optimally run data centers, this attitude will change. After all, the NRDC report concluded that even if only half of existing energy-reduction potential could be realized, data centers and their customers could share in savings of $3.8 billion annually.
Mary Catherine O'Connor is the editor of Internet of Things Journal and a former staff reporter for RFID Journal. She also writes about technology, as it relates to business and the environment, for a range of consumer magazines and newspapers.