Intel’s Efforts To Save Big Bucks On Data Center Costs
Intel aims to save nearly $250 million in data center costs over the next eight years, according to a statement made Tuesday by a company executive. Diane Bryant, Intel’s chief information officer, announced at an event that the company has already halved the number of data centers it operates, and it continues to research ways to consolidate further. At its peak, Intel ran 147 data centers; today the number hovers near 70.
A four-year refresh cycle for data center servers is one way Intel plans to cut costs. Initiated in 2007, the cycle has already made a difference in Intel’s budget, according to Bryant. The cost of running a data center includes support, system maintenance, and cooling.
The cost-cutting measures Bryant revealed are intended to save the company $250 million by 2015. In 2008 the program saved $45 million, and IT spending has come under far more scrutiny this year. The advantage of the four-year refresh cycle, Bryant explained, is that older servers consume considerably more money to operate and maintain than newer ones, so replacing them with servers built on faster chips cuts data center costs. Bryant also pointed to server consolidation and to moving more applications into virtualized environments.
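The economics behind the refresh cycle can be put in rough numbers. The sketch below is a hypothetical break-even calculation, not Intel’s accounting: every figure in it (base operating cost, cost growth with age, replacement price, amortization period) is an assumption chosen only to illustrate why an aging server eventually costs more to keep than to replace.

```python
# Hypothetical refresh-cycle break-even sketch. All figures are
# illustrative assumptions, not Intel's actual numbers.

def annual_operating_cost(age_years, base_cost=2000.0, growth=0.20):
    """Yearly cost to run a server, assumed to grow 20% per year of age."""
    return base_cost * (1 + growth) ** age_years

def refresh_breakeven(new_server_price=6000.0, lifetime=4):
    """Age at which keeping an old server costs more than replacing it."""
    # Amortize the replacement over its lifetime, plus its year-zero
    # operating cost, and find the age where the old box costs more.
    amortized_new = new_server_price / lifetime + annual_operating_cost(0)
    for age in range(1, 11):
        if annual_operating_cost(age) > amortized_new:
            return age
    return None

print(f"Replacement pays off at roughly {refresh_breakeven()} years of age")
```

With these assumed figures the break-even lands at about four years, which is the shape of the argument behind a four-year cycle: the exact crossover depends entirely on the real cost inputs.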
Intel’s primary consolidation strategy has been to upgrade its Xeon chips. One Nehalem-based quad-core Xeon can carry the load of ten single-core Xeons, which not only means less hardware in the data center but also better overall server performance. Bryant said the upgrades had reduced hardware acquisition costs as well as overhead such as maintenance and energy.
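The 10:1 replacement ratio makes the consolidation arithmetic easy to sketch. In the snippet below, only the ratio comes from Bryant’s remarks; the fleet size and per-server power draws are made-up illustrative values.

```python
# Back-of-the-envelope consolidation math from the 10:1 replacement
# ratio (one quad-core Nehalem Xeon doing the work of ten single-core
# Xeons). Fleet size and wattages are illustrative assumptions.

old_servers = 10_000               # hypothetical single-core fleet
ratio = 10                         # one new server replaces ten old ones
old_watts, new_watts = 250, 350    # assumed per-server power draw

new_servers = old_servers // ratio
old_power_kw = old_servers * old_watts / 1000
new_power_kw = new_servers * new_watts / 1000
print(f"{old_servers} -> {new_servers} servers, "
      f"{old_power_kw:.0f} kW -> {new_power_kw:.0f} kW "
      f"({1 - new_power_kw / old_power_kw:.0%} less power)")
```

Even though each new server draws more power individually, a tenfold reduction in server count still cuts the fleet’s total draw sharply under these assumptions.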
The lion’s share of data center expense, according to Bryant, goes to cooling the servers, a cost tied directly to how efficiently they run. It is a difficult calculation to make, Bryant said, and Intel is still working to define what an “efficient data center” looks like. More power-efficient servers have made an initial dent in energy costs, but the search is far from over.
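Bryant’s remarks don’t name a specific metric, but the standard industry measure of this cooling-versus-computing balance is Power Usage Effectiveness (PUE), defined by The Green Grid as total facility power divided by power delivered to IT equipment. A minimal worked example, with illustrative figures:

```python
# PUE = total facility power / IT equipment power; an ideal facility
# scores 1.0. PUE is an industry metric (The Green Grid), not one
# Bryant names in the article; the figures below are illustrative.

def pue(it_power_kw, cooling_kw, other_overhead_kw):
    """Ratio of total facility power to power reaching IT gear."""
    total = it_power_kw + cooling_kw + other_overhead_kw
    return total / it_power_kw

# A facility spending nearly as much on cooling as on computing:
print(f"PUE = {pue(1000, 800, 200):.2f}")   # -> 2.00
```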
Intel has partnered with the U.S. Environmental Protection Agency and other government agencies to develop ways of measuring power efficiency at every server state between idle and full-power operation. In May the EPA began issuing Energy Star ratings for qualifying servers; the rating rests chiefly on a server’s power consumption while idle and the efficiency of its power supply.
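Here is a rough sketch of the two quantities the article says the rating weighs: idle power draw and power-supply efficiency. The threshold values in this snippet (a 65 W idle limit and an 85 percent efficiency floor) are invented for illustration and are not the EPA’s actual criteria.

```python
# Illustrative sketch of the two Energy Star inputs named in the
# article. Thresholds and wattages are assumptions, not EPA criteria.

def psu_efficiency(dc_output_watts, ac_input_watts):
    """Fraction of wall power the supply actually delivers to the system."""
    return dc_output_watts / ac_input_watts

def meets_example_criteria(idle_watts, efficiency,
                           idle_limit=65.0, efficiency_floor=0.85):
    """Pass if idle draw is low enough AND the supply is efficient enough."""
    return idle_watts <= idle_limit and efficiency >= efficiency_floor

eff = psu_efficiency(dc_output_watts=425, ac_input_watts=500)  # 85%
print(f"PSU efficiency: {eff:.0%}, "
      f"qualifies: {meets_example_criteria(idle_watts=60, efficiency=eff)}")
```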
In addition, Intel is maximizing server performance by running its machines at high utilization rates. “Back two or three years ago,” Bryant said, “everyone’s data centers were running at five, ten, or fifteen percent utilization.” Today Intel targets 85 percent utilization in its high-performance computing (HPC) environment, provided the servers aren’t overloaded. Intel currently runs about 100,000 servers, 80,000 of them in the HPC environment; the remaining 20,000 are “office servers” for routine tasks, where a 65 percent utilization rate is considered the efficient maximum.
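Bryant’s figures are enough to compute a fleet-wide utilization target; the short calculation below simply combines the numbers she gave.

```python
# Weighted fleet utilization from the figures in Bryant's remarks:
# 80,000 HPC servers targeting 85% and 20,000 office servers at 65%.

fleet = {"HPC": (80_000, 0.85), "office": (20_000, 0.65)}

total = sum(count for count, _ in fleet.values())
weighted = sum(count * util for count, util in fleet.values()) / total
print(f"{total} servers, fleet-wide target utilization: {weighted:.0%}")
```

The weighted target works out to about 81 percent across the fleet, far above the 5-to-15 percent range Bryant describes from a few years earlier.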
One way Intel is pushing utilization higher, according to Bryant, is by moving applications into virtualized environments rather than keeping them on dedicated hardware. The challenge is to run as close to maximum utilization as possible without overburdening the systems.
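Here is a minimal sketch of that trade-off, assuming a simple first-fit placement heuristic and made-up application loads (the article says nothing about Intel’s actual placement tooling): workloads are packed onto virtualized hosts up to a utilization ceiling that leaves headroom.

```python
# First-fit packing of application loads onto virtualized hosts, with
# a utilization ceiling for headroom. Heuristic and loads are assumed
# for illustration only.

def pack_first_fit(app_loads, host_capacity=1.0, ceiling=0.85):
    """Assign each load to the first host with room under the ceiling."""
    hosts = []  # each entry is that host's current total load
    limit = host_capacity * ceiling
    for load in app_loads:
        for i, used in enumerate(hosts):
            if used + load <= limit:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no host has room: bring up a new one
    return hosts

loads = [0.30, 0.25, 0.40, 0.10, 0.20, 0.35, 0.15]
hosts = pack_first_fit(loads)
print(f"{len(loads)} apps packed onto {len(hosts)} hosts: {hosts}")
```

With these sample loads, seven applications fit on three hosts while no host exceeds the 85 percent ceiling, which is the consolidation-with-headroom idea in miniature.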
Intel has already been working to reduce its energy costs. A 2008 proposal called for a data center that uses minimal air conditioning, and the company is developing better cooling techniques for data center equipment with the help of academic researchers and companies such as IBM and Hewlett-Packard.