In the current economic downturn, data center managers must meet demanding business requirements on tight budgets. They have been cutting operating costs every way they can, and the largest and fastest-growing share of a data center's operating costs is energy, most of it consumed by servers and cooling systems.
Unfortunately, most energy-efficient technologies require considerable upfront investment and only pay off after several years. Meanwhile, some low-cost techniques have been overlooked because they seem impractical or too extreme. The eight energy-saving methods below have been tested in real data center environments and proven highly effective. Some require almost no investment and can be implemented immediately; others need some funding but pay back far faster than traditional IT capital expenditures.
The standard metric for data center energy efficiency is Power Usage Effectiveness (PUE): the ratio of the facility's total power draw to the power consumed by servers for actual computing. Lower is better, and 1.0 is the ideal. A PUE of 2.0 means that for every 2 watts entering the data center, only 1 watt reaches the servers; the rest is dissipated as heat, which in turn requires additional energy to remove.
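To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical meter readings) of how PUE falls out of two measurements:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw

# Hypothetical readings: 2,000 kW at the utility feed, 1,000 kW at the racks.
total_kw, it_kw = 2000.0, 1000.0
print(f"PUE = {pue(total_kw, it_kw):.2f}")      # 2.00
print(f"Overhead = {total_kw - it_kw:.0f} kW")  # 1,000 kW lost to cooling and conversion
```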
Not every technique below will lower your PUE directly, but you can gauge its effect on your monthly utility bill. The real goal is cutting costs.
Among the methods listed, you will not find solar, wind, or hydrogen energy, as these alternative energy sources require significant investment in advanced technologies, which are not feasible for immediate cost savings during the current economic crisis. In contrast, the following eight methods require no complex technologies other than fans, ventilation, and piping.
These eight methods are:
1. Extreme Energy-Saving Method 1: Raise Temperature Settings.
You can implement this simplest energy-saving method this afternoon: raise the temperature setting of the data center thermostat. Traditional belief holds that data center temperatures should be set below 68°F. It is often thought that this temperature setting extends the lifespan of equipment and provides more time to react in case of a cooling system failure.
Experience does link higher operating temperatures to component failures, especially in hard drives. But IT economics has crossed an important threshold in recent years: a server now often costs more to operate over its life than to acquire. That makes cutting operating costs a higher priority than pampering the hardware.
At last year's GreenNet conference, Google's "green energy czar" Bill Weihl shared Google's experience with raising data center temperature setpoints: 80°F, he said, is the new safe temperature. Your data center must first meet one simple prerequisite, however: the cold supply air must be isolated from the hot exhaust air, using thick plastic curtains or insulating panels if necessary.
Although Google calls 80°F safe, Microsoft's experience shows the setting can go higher still. Microsoft's data center in Dublin, Ireland, runs in a chiller-less mode, cooling servers with free outside air at intake temperatures of up to 95°F. Returns do diminish as the setpoint rises, though, because server fans spin faster and consume more power.
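If you do raise the setpoint, it pays to watch server intake temperatures. The sketch below is one hedged way to do that, assuming your servers' BMCs speak IPMI and ipmitool is installed; the host credentials are placeholders, and the sensor-line format varies by vendor, so the parsing is illustrative only:

```python
import subprocess

THRESHOLD_F = 80.0  # the intake temperature Google calls safe

def intake_temps_f(host: str, user: str, password: str) -> list[float]:
    """Read a server's temperature sensors over IPMI and convert to Fahrenheit."""
    out = subprocess.run(
        ["ipmitool", "-H", host, "-U", user, "-P", password,
         "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    temps = []
    for line in out.splitlines():
        # Typical (vendor-dependent) line: "Inlet Temp | 04h | ok | 7.1 | 24 degrees C"
        if "degrees C" in line:
            celsius = float(line.split("|")[-1].split("degrees")[0])
            temps.append(celsius * 9 / 5 + 32)
    return temps

for t in intake_temps_f("bmc.example.com", "admin", "secret"):  # placeholder credentials
    if t > THRESHOLD_F:
        print(f"WARNING: intake at {t:.1f}F exceeds the {THRESHOLD_F}F setpoint")
```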
2. Extreme Energy-Saving Method 2: Shut Down Unused Servers.
Virtualization has already demonstrated the savings from idling unused processors, disks, and memory. So why not power off entire servers? Is the "business agility" of keeping servers on standby worth the energy they consume? If you can find servers that qualify, shutting them down gets you the lowest possible consumption for those machines: zero. First, though, you will have to answer the naysayers.
They often argue that power cycling shortens a server's life by stressing non-hot-swappable components such as motherboard capacitors. This idea does not hold up: the components used in servers are the same ones used in devices that start up constantly, such as cars and medical equipment, and no evidence suggests that frequent restarts reduce a server's Mean Time Between Failures (MTBF).
The second misconception is that booting takes too long. You can shorten startup time by disabling boot-time diagnostics, booting from disk images, and using hardware with warm-start capabilities.
The third objection: if a server must be started to absorb a load spike, users will not want to wait, however fast it boots. In practice, most application architectures degrade gracefully under load rather than rejecting new users, so users rarely notice that they are waiting for a server to come up. And when load does visibly affect the application, users turn out to be willing to wait if you tell them "we are starting more servers to speed up your requests."
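One low-cost way to bring a powered-off server back on demand is Wake-on-LAN, which most server NICs support. A minimal sketch (the MAC address is a placeholder, and WoL must be enabled in the NIC and firmware):

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a WoL 'magic packet': 6 bytes of 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")  # placeholder MAC of the standby server
```

A load balancer or job scheduler can call this when demand rises, then shut the server down again once the queue drains.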
3. Extreme Energy-Saving Method 3: Use Free Outside Air for Cooling.
A higher temperature setpoint prepares you for this method: free-air cooling, which uses cooler outside air as the cooling source and eliminates the need for expensive chillers. Microsoft's data centers in Ireland work this way. If you are maintaining 80°F inside and the outside air is only 70°F, simply blow the outside air in.
Compared to Method 1, this one takes some effort. You will need to reconfigure ventilation ducts to bring outside air into the data center, and install basic protective equipment, such as air filters, dehumidifiers, fire dampers, and temperature sensors, so the outside air cannot damage sensitive electronics.
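The decision logic behind an economizer is straightforward in outline. Here is a hedged sketch; real controllers also weigh air quality and dew point, and the setpoint and margin below are illustrative:

```python
SETPOINT_F = 80.0  # indoor target temperature
MARGIN_F = 5.0     # outside air must be at least this much cooler to help

def choose_cooling(outside_f: float, humidity_pct: float) -> str:
    """Pick between free outside air and mechanical chillers."""
    if outside_f <= SETPOINT_F - MARGIN_F and 20.0 <= humidity_pct <= 80.0:
        return "economizer"  # open dampers, run intake fans, leave chillers off
    return "chiller"         # close dampers, fall back to mechanical cooling

print(choose_cooling(outside_f=70.0, humidity_pct=45.0))  # economizer
print(choose_cooling(outside_f=92.0, humidity_pct=45.0))  # chiller
```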
In a ten-month experiment, Intel reduced energy consumption by 74% using outside-air cooling. Two groups of servers ran side by side: the first used traditional chillers; the second used a combination of outside air and chillers, relying on outside air 91% of the time. Intel found that the air-cooled servers accumulated a great deal of dust, showing that both coarse and fine particle filters were needed, and because the filters need frequent replacement, they should be easy to clean and reusable.
Despite the heavy dust and wide temperature swings, the failure rate of the air-cooled servers did not increase. For a 10 MW data center, Intel estimated the method could save $3 million a year in cooling costs, plus 76 million gallons of water, which is expensive in some regions.
4. Extreme Energy-Saving Method 4: Use Hot Air from Data Center Cooling to Heat Offices.
You can double your savings by using the hot exhaust air from data center cooling to heat offices; conversely, cooler office air can help cool the data center. In cold weather you get plenty of heat, while the data center's remaining cooling needs can be met entirely with outside air.
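Nearly every watt a server draws ends up as heat, so the recoverable heat is easy to estimate. A back-of-the-envelope sketch, with a hypothetical load and capture fraction:

```python
BTU_PER_KWH = 3412  # one kilowatt-hour of electricity dissipates about 3,412 BTU

def recoverable_heat_btu_per_hr(it_load_kw: float, capture_fraction: float = 0.6) -> float:
    """Heat available for office heating, assuming the ductwork captures a fraction of it."""
    return it_load_kw * BTU_PER_KWH * capture_fraction

# A hypothetical 100 kW server room, captured at 60% efficiency:
print(f"{recoverable_heat_btu_per_hr(100):,.0f} BTU/hr")  # ~204,720 BTU/hr for the offices
```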
Unlike outside-air cooling, this approach may let you retire your existing heating plant entirely, person-height furnace and all. Nor do you need to worry about the electronics emitting harmful substances into the heated air: servers compliant with the Restriction of Hazardous Substances (RoHS) directive no longer contain pollutants such as cadmium, lead, mercury, or polybrominated compounds.
As with outside-air cooling, the only technology required is ordinary heating, ventilation, and air conditioning (HVAC) gear: fans, ducts, and thermostats. You may find your data center provides enough heat to replace the traditional heating system outright. IBM's data center in Uitikon, Switzerland, provides free heat to the local community, saving energy costs equivalent to heating 80 households. TelecityGroup Paris even pipes its data center's hot air into greenhouses year-round to support climate change research.
Rearranging your heating system might take a week, but because the costs are low, you’ll see returns within a year or even sooner.
5. Extreme Energy-Saving Method 5: Use SSDs for Frequently Read Data Sets.
SSDs are prized for fast reads, low power draw, and low heat, which has made them ubiquitous in netbooks, tablets, and laptops. They work in servers too, but high cost and lower write endurance have slowed deployment. Fortunately, SSD prices have fallen sharply over the last two years, and data centers can realize quick savings by deploying them: simply moving selected applications onto SSDs can cut a disk array's power draw by roughly 50% while generating almost no heat.
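A rough sketch of the savings arithmetic for a disk array (the per-drive wattages are assumptions, not vendor figures, and the electricity price is a placeholder):

```python
HDD_W, SSD_W = 10.0, 5.0  # assumed active draw: mechanical drive vs. SSD
COOLING_OVERHEAD = 1.0    # at PUE 2.0, each watt of IT load costs another watt of cooling

def annual_savings_usd(n_drives: int, price_per_kwh: float = 0.10) -> float:
    """Yearly savings from swapping n_drives mechanical disks for SSDs."""
    watts_saved = n_drives * (HDD_W - SSD_W) * (1 + COOLING_OVERHEAD)
    return watts_saved / 1000 * 24 * 365 * price_per_kwh

print(f"${annual_savings_usd(1000):,.0f}/year")  # ~$8,760 for a 1,000-drive array
```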
One limitation of SSDs is their finite write endurance. Single-level cell (SLC) SSDs built for server storage are currently rated for about 5 million write cycles; consumer-grade multi-level cell (MLC) SSDs offer larger capacities but last only about one-tenth as long.
The good news is that SSDs are now available with standard drive interfaces, so they can directly replace power-hungry, heat-producing mechanical disks. For the quickest savings, put read-mostly data sets, such as streaming video files, on the SSDs; this sidesteps the write-endurance limit. Besides cutting energy and cooling costs, load times improve dramatically.
When buying, choose server-grade SSDs rather than desktop models. Server SSDs typically use multi-channel architectures for higher throughput: a standard SATA 2.0 SSD transfers at 3Gbps, while high-end SAS SSDs such as the Hitachi/Intel Ultrastar reach 6Gbps and capacities of up to 400GB. SSDs still have some design flaws, but these mostly concern BIOS passwords and encryption on desktop and laptop drives and do not affect server models.
6. Extreme Energy-Saving Method 6: Use Direct Current in Data Centers.
Yes, direct current (DC) again. The logic is simple: servers run on DC internally, so every AC-to-DC conversion stage you eliminate upstream yields immediate energy savings.
DC enjoyed a wave of popularity in data centers in the early 2000s, when server power supplies converted AC to DC at only about 75% efficiency. As power supplies improved, data centers shifted to more efficient 208V AC distribution, and by 2007 DC had fallen out of favor. In 2009, however, DC made a comeback, thanks to new high-voltage DC distribution products.
In early data centers, the utility delivered 16,000V AC, which was stepped down to 440V AC, then 220V AC, and finally 110V AC for the servers. Every conversion stage wasted power, and the losses became heat that the cooling system then had to remove, raising electricity costs further. Distributing 208V AC directly eliminated one conversion stage, and server power supplies reached efficiencies of up to 95%.
By 2009, new equipment could convert the utility's 13,000V AC directly to 575V DC feeding the server racks, where it is stepped down to 48V DC for the servers. This roughly doubles the efficiency of the traditional AC conversion chain and produces less heat. Vendors claim 50% energy savings; most experts consider about 25% more realistic.
This method requires some investment, but the technology is simple and proven. One hidden cost: 48V DC distribution needs much thicker copper cabling, because delivering the same power at lower voltage means higher current, and resistive heating in the cable grows with the square of the current (Joule's law).
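The copper penalty falls straight out of P = IV. A quick sketch with round numbers (the 10 kW rack is hypothetical):

```python
def current_amps(power_w: float, voltage_v: float) -> float:
    """I = P / V: the same power at lower voltage means proportionally more current."""
    return power_w / voltage_v

RACK_W = 10_000  # a hypothetical 10 kW rack
for volts in (575, 208, 48):
    amps = current_amps(RACK_W, volts)
    print(f"{volts:>4} V -> {amps:6.1f} A  (cable loss scales with I squared)")
```

At 48V the rack draws over 200 amps, versus about 17 amps at 575V, which is why the conversion to 48V happens as close to the servers as possible.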
7. Extreme Energy-Saving Method 7: Dump Heat Underground.
In warmer climates, outside-air cooling cannot be used year-round. Iowa, for example, has cold enough winters, but summer temperatures reach 90°F to 100°F, ruling out outside air for much of the year.
A few feet underground, by contrast, the temperature stays low and stable, largely insulated from rain, heat waves, and other surface weather. Bury pipes deep enough, and the cooling water that has absorbed heat from the servers can circulate underground, shedding that heat into the cooler surrounding soil.
The technique itself is not complex, but geothermal cooling takes a lot of piping, and a successful system requires careful analysis and calculation up front. Because a data center generates heat around the clock, a single geothermal trench can saturate the surrounding soil with heat and stop cooling altogether. You need to determine how much heat the surrounding ground can absorb, whether groundwater flow will help carry heat away, and whether the approach is feasible at all, along with its environmental impact.
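A first-order feasibility check can use steady-state conduction, Q = k x A x dT / d. The sketch below is rough at best; the soil conductivity and geometry are placeholder values, and a real design needs proper ground-loop modeling:

```python
def conduction_watts(k_w_per_mk: float, area_m2: float,
                     delta_t_k: float, thickness_m: float) -> float:
    """Fourier's law for steady conduction through a soil layer: Q = k * A * dT / d."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

# Placeholder values: moist soil k ~ 1.5 W/(m*K), 200 m^2 of trench surface,
# 15 K between loop water and undisturbed ground, across a 2 m soil layer.
q = conduction_watts(1.5, 200.0, 15.0, 2.0)
print(f"~{q / 1000:.1f} kW of continuous heat rejection")  # ~2.3 kW: one trench saturates fast
```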
8. Extreme Energy-Saving Method 8: Dump Heat into the Ocean.
Unlike the ground, the ocean's capacity to absorb a data center's heat is effectively unlimited. Water-side cooling systems resemble geothermal ones but need a sufficiently large body of water, such as the ocean or the Great Lakes between the U.S. and Canada.
Ocean-water cooling is the ideal case: in coastal areas, seawater can cool a data center through heat exchangers. Google filed a patent on such a design in 2007, though Google's plan is out of reach for most of us, since it requires an island.
If your data center sits near the ocean, a large lake, or an inland waterway, things are much simpler; nuclear power plants have used seawater and lake water for cooling for decades. Last fall, the Swedish publication Computer Sweden reported that Google had converted a pulp mill in Hamina, Finland, into a data center cooled this way. The facility uses cold water from the Baltic Sea as its sole cooling source, and the seawater doubles as emergency firefighting water. Google's experience has shown the design to be highly reliable, and because the mill already had pipes running to the Baltic, Google saved considerable conversion costs.
Freshwater lakes work too. Cornell University in Ithaca, New York, cools its data center and the entire campus with water from nearby Cayuga Lake. The university's Lake Source Cooling system, commissioned in 2000, pumps 35,000 gallons of water per hour at 39°F and sends it 2.5 miles to campus.
Both seawater and freshwater systems need one expensive component: heat exchangers, which keep the loop that directly cools the data center separate from the natural water drawn from outside. The separation protects both the environment and the sensitive servers in the event of a leak. Beyond the heat exchangers, a seawater or lake-water cooling system needs nothing more than basic piping.
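Sizing the water loop reduces to Q = m-dot x c_p x dT. A sketch with hypothetical numbers:

```python
WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
WATER_RHO = 1000.0  # density of water, kg/m^3

def required_flow_l_per_s(heat_kw: float, delta_t_k: float) -> float:
    """Water flow needed to carry off heat_kw with a delta_t_k temperature rise."""
    kg_per_s = heat_kw * 1000 / (WATER_CP * delta_t_k)
    return kg_per_s / WATER_RHO * 1000  # liters per second

# A hypothetical 1 MW data center with a 10 K rise across the heat exchanger:
print(f"{required_flow_l_per_s(1000, 10):.0f} L/s")  # ~24 L/s of sea or lake water
```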
How much do you hope to save? The beauty of these techniques is that they are not mutually exclusive: you can combine several to meet both short-term and long-term goals. Start with the simplest, raising your data center's temperature setpoint, and then weigh the other seven methods against the savings.