Google is brilliant, simply brilliant.
They have a power problem, that is very true. They also pay a premium for more efficient power supplies.
It is true that most home PCs are idle most of the time, so they do not benefit much from a more efficient power supply. But if all PCs shipped with 90%-efficient power supplies, those supplies would be in volume manufacturing.
The premium Google pays would go away. They could buy standard (and efficient) power supplies off the shelf, for cheap.
Plus, they look like good environmental Al Gore friends of the earth. PR and profits, brilliant.
Google is brilliant, simply brilliant.
Most of the world’s computers are running screen savers at any given time (I’m not talking about server farms).
Some estimates put the share of MIPS consumed by screen savers at nearly 90%.
So, we are paying the power companies to run screen savers…
Is power so ridiculously expensive in California because some resource is running out? If so, there’s a bigger problem than just money…
Does anyone know how many years we can power the USA using our current sources of fuel?
my psu says up to 85% efficiency… but then i’m a gamer so i say waste away! my 8800 seems to suck a lot of power.
Glad to see Google pushing their innovations across the industry. The improvements they have to make by necessity can lead to a more efficient use of power by the entire population as better power supplies are pushed out.
With all of these issues people are beginning to see the true cost of inefficiency of devices on the planet. As someone commented above, one of the major issues is cleaning up devices that are still sucking out power on “standby” mode. If all the devices that are left idling could more intelligently drop their power use down then we would really see a substantial decrease in usage.
About two years ago, I was part of a group that was given a tour of DreamWorksSKG’s computer animation studios, which included the “render farm” room full of rack upon rack upon rack of servers grinding away to render every frame of their latest films. The tour guide mentioned that the beancounters had been having a hard time settling on a way to budget the computer time against the films. The cost of the hardware bought for one film shouldn’t be counted against a second film, and anyway the hardware costs were becoming a smaller and smaller fraction of the budgets. After some analysis (including the electricity/CPU costs), the beancounters settled on a metric that has been surprisingly consistent over the years, regardless of changes in CPU speeds or power consumption.
They now budget the costs of THE AIR CONDITIONING USED TO KEEP THE COMPUTER ROOM COOL. Frames of finished film rendered versus BTUs.
This was initially surprising to us on the tour, until we did as the tour guide suggested and held a hand up on either side of a typical server rack to feel the breeze flowing through it. Sure enough, the temperature was quite noticeably warmer (maybe 15 degrees F) coming out than going in. That’s a lotta heat.
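The “frames per BTU” metric is easy to sanity-check, since every watt a rack draws eventually becomes heat the A/C must remove. A quick sketch of the conversion (the rack wattage here is a made-up illustrative number, not anything from the tour):

```python
# 1 watt of electrical power dissipated continuously = 3.412 BTU per hour of heat.
WATT_TO_BTU_PER_HR = 3.412

rack_watts = 5000  # hypothetical fully loaded server rack
btu_per_hr = rack_watts * WATT_TO_BTU_PER_HR
print(f"{rack_watts} W rack -> {btu_per_hr:.0f} BTU/hr of heat to remove")
```

So budgeting cooling BTUs against finished frames is really just budgeting total electrical energy in a roundabout way, which is why the metric stays consistent as CPUs change.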
Makes me wonder if I should think of my computer as a space heater.
A number of companies besides Google are focused on developing high-efficiency power supplies for data centers, including ColdWatt (coldwatt.com) which claims to hit 91 percent.
I’ve seen figures up to 30 cents/kWh for California. For those of you who don’t think that’s bad, I’m paying 6.8 cents in Virginia, and that’s the maximum rate for summer months. Of course, we get about 30% of our power from nuclear.
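To put that rate gap in concrete terms, here is the annual bill for a single always-on machine at each rate (the 200 W figure is an assumed round number for illustration):

```python
# Rates taken from the comment above; wattage is hypothetical.
ca_rate, va_rate = 0.30, 0.068      # $/kWh
watts = 200                          # assumed always-on server draw

kwh_per_year = watts / 1000 * 24 * 365   # 1752 kWh
print(f"CA: ${kwh_per_year * ca_rate:.0f}/yr, VA: ${kwh_per_year * va_rate:.0f}/yr")
```

At these rates the same box costs roughly four times as much to run in California as in Virginia.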
If I were in California I’d be rioting in the streets. I can’t think of a reason for that large a disparity that doesn’t involve corruption and/or incompetence.
everyone’s turning green. intel, google, shrek.
Didn’t IBM just announce something about power-saving datacentres?
A. Lloyd Flanagan stated: “If I was in California I’d be rioting in the streets. I can’t think of a reason for that large a disparity that doesn’t involve corruption and/or incompetence.”
Apparently, you missed the power crisis in California a couple of years back. The problem is simple: NIMBY and BANANA (“Not In My Back Yard” and “Build Absolutely Nothing Anywhere Near Anything”). California basically failed to improve their electricity generating capability for several decades while power use climbed continually. At that time, the vast majority of their power came from natural gas fired plants. Natural gas is fine for handling peak loads. But, it’s expensive. Using it to handle all the load is dumb. Coal, nuclear, or hydro is what they needed for that. They didn’t have it, and AFAIK, still don’t have it. Thus, expensive electricity.
No, the real challenge is to drop power consumption levels on idle to something much lower.
Oh, it’s certainly possible to build a low-power PC if you are selective about the parts you use. My home theater PC, for example, draws about 70w at idle.
The other option, as Chubber pointed out, is to go with laptops, which are low power by necessity. Since more and more people are choosing laptops over desktops (within a few years, more than 50% of all computers sold will be laptops), maybe this is a self-correcting problem.
this made the front page of digg.
I understand how you can use a meter (Kill-A-Watt or similar) to measure the energy draw from the wall socket, but how do you measure the DC output from the power supply?
A few years back I was participating in an effort to validate computer models used to predict climate change (climateprediction.net), to which I contributed 129.4 simulation years from 8 CPUs in my home that were otherwise powered on and running 24/7, but 99% idle. I stopped contributing when the 5% higher electricity bills started coming in; electricity bills in my area come with a handy historical consumption chart, so the real environmental effects of trying to predict climate change were pretty obvious. Oh, the irony…
Someone had a blog post recently suggesting that climateprediction.net and similar projects should in future use only Sony PlayStations and video card GPUs, since the power-per-teraflop efficiency of desktop CPUs was so much lower.
My desktop at work has a CPU which in theory consumes 90W in “performance” mode (3.0 GHz) and 82W in “powersave” mode (2.8 GHz). The older machine it replaces had a 25W 375MHz mode. This is not progress…
I literally use my computers as space heaters. When working at home in the winter I simply close my office door to retain all the heat generated by the C++ compiler, instead of raising the thermostat to heat the entire house, and I have used my laptop instead of a blanket while watching TV on the couch on more than one occasion…
Wouldn’t it make more sense for them to have one 120V to 12V transformer per server room, and then run 12V directly to each computer? Somehow it feels like you could do a better job that way, or am I missing out on something? Didn’t someone at Google suggest that we should use 12V at our homes instead of 100/120/230/whatever volts?
No. There’s this little problem of I-squared-R (ohmic) losses. It takes more current to transfer a fixed amount of power at a lower voltage (P = V×I). Unfortunately, wires are not superconductors, and their non-zero resistance ends up heating them. The power dissipated in the wires is Pdiss = I²R, which grows as the SQUARE of the current. For a fixed power, doubling the voltage cuts the current in half and reduces the power loss in the transmission lines by 75%!
Ever wonder why power transmission lines that cross the countryside are operated at such high voltages? It’s to minimize the ohmic power losses.
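The I²R argument above is easy to see with numbers. A minimal sketch, delivering the same fixed power over the same wire at 12 V versus 120 V (the 300 W load and 0.05-ohm wire run are illustrative values, not from the thread):

```python
# Ohmic line loss for a fixed power delivered at a given voltage.
def line_loss(power_w, volts, wire_ohms):
    current = power_w / volts          # I = P / V
    return current ** 2 * wire_ohms    # Pdiss = I^2 * R

P, R = 300.0, 0.05                     # hypothetical 300 W load, 0.05-ohm run
for v in (12, 120):
    print(f"{v:>3} V: I = {P / v:5.1f} A, loss = {line_loss(P, v, R):6.2f} W")
```

At 12 V the wire carries 25 A and wastes about 31 W; at 120 V it carries 2.5 A and wastes about 0.31 W. Ten times the voltage means one hundredth the loss, which is exactly why a single room-level 12 V bus feeding every server is a losing proposition.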
but how do you measure the DC output from the power supply?
Usually, you don’t. You estimate DC output based on typical efficiency numbers for that PSU at that wattage level.
Places like http://www.silentpcreview.com are measuring DC output, I think using multimeter alligator clips (and a heaping dose of caution!).
A data center’s raw power cost is not 15c/kWh. Every joule of energy a server uses has to be removed, and the air conditioning that removes it typically consumes another 60% as much energy. Data center operators also have to supply conditioned power, and accounting for the infrastructure cost of the UPS and secondary power source pushes the real cost of power up further. Consequently, the real cost of power in a data center is likely more than double the most expensive rate listed above, which halves the payback time for any power savings anywhere.
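A back-of-envelope version of that claim. The 60% cooling overhead comes from the comment itself; the infrastructure multiplier for UPS and power conditioning is an assumed illustrative figure:

```python
raw_rate = 0.15          # $/kWh utility rate (from the comment)
cooling_overhead = 0.60  # extra energy to remove each joule of server heat
infra_multiplier = 1.4   # hypothetical amortization of UPS / conditioned power

effective = raw_rate * (1 + cooling_overhead) * infra_multiplier
print(f"effective cost: ${effective:.3f}/kWh")
```

That works out to roughly $0.34/kWh, i.e. more than double the raw utility rate, consistent with the halved payback time the comment describes.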
I am currently helping to figure out how not to build another data center by making better use of our existing one! Good for business, good for the environment.
Use a script or hibernation and you can save at least 8 hours (1/3 of a day) of the energy your computer would otherwise waste. That saves more than a 90%-efficient power supply does. If I were Google, I’d shut down the US server farm when the load shifts to the European timezone.
REM Schedule a daily automatic shutdown (Windows batch). StartTime was
REM missing from the original snippet; 22:00 here is an assumed value.
FOR /F %%d IN ('echo %date%') DO SET StartDate=%%d
SET StartTime=22:00
ECHO "%WinDir%\System32\shutdown.exe" -s -f -t 90 > "%WinDir%\System32\Shutdown.cmd"
SCHTASKS /DELETE /TN SaveEnergy /F >nul 2>nul
SCHTASKS /Create /SC DAILY /TN SaveEnergy /ST %StartTime% /SD %StartDate% /TR "%WinDir%\System32\Shutdown.cmd" /RU "%USERDOMAIN%\%USERNAME%" /RP *
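How the two savings compare can be sketched with simple arithmetic. The 150 W desktop draw and the 70%-to-90% PSU comparison are assumed illustrative numbers:

```python
# Energy saved by an 8-hour nightly shutdown of a hypothetical 150 W desktop.
watts = 150
hours_off_per_day = 8
shutdown_kwh_per_year = watts / 1000 * hours_off_per_day * 365
print(f"nightly shutdown: {shutdown_kwh_per_year:.0f} kWh/yr saved")

# Compare: swapping a 70%-efficient PSU for a 90% one on the same machine,
# left running 24/7 with a constant 150 W DC load, saves the difference in
# wall draw (AC draw = DC load / efficiency).
psu_kwh_per_year = (watts / 0.70 - watts / 0.90) * 24 * 365 / 1000
print(f"PSU upgrade: {psu_kwh_per_year:.0f} kWh/yr saved")
```

Under these assumptions the nightly shutdown saves about 438 kWh/yr versus roughly 417 kWh/yr for the PSU upgrade, so the comment’s claim holds, though the margin is narrower than it might sound. (The two savings also stack.)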
For a detailed breakdown of AC power demand under idle and max load conditions for many different types of systems, have a look at the updated REAL SYSTEM POWER REQUIREMENTS table on page 4 of SPCR’s Power Supply Fundamentals – http://www.silentpcreview.com/article28-page4.html
Most of the example systems are using high efficiency ATX PSUs; the highest one being 84~85% at the top of its efficiency curve. The max power draw of even an OC’d Pentium D950 + ATI X1950XTX system is just 256W. If a 70% efficient PSU was used for this system, the power draw would be more like 310W.
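The relation behind those figures is simply that wall (AC) draw equals DC load divided by PSU efficiency. Reproducing the comment’s own numbers, where a roughly 215 W DC load is implied by 256 W at ~84% efficiency:

```python
# AC wall draw for a given DC load at a given PSU efficiency.
def ac_draw(dc_watts, efficiency):
    return dc_watts / efficiency

dc_load = 215  # approx DC load implied by 256 W wall draw at ~84% efficiency
print(f"84% PSU: {ac_draw(dc_load, 0.84):.0f} W from the wall")
print(f"70% PSU: {ac_draw(dc_load, 0.70):.0f} W from the wall")
```

The same DC load pulls about 307 W through a 70%-efficient supply, in line with the ~310 W figure quoted, with the extra ~50 W dissipated as heat inside the PSU.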