When Hardware is Free, Power is Expensive

I used to use an old PC as my home server, but switched to a home-made low-power box to save money. It cost me a bit under $500 to put together, but saves about $20 a month in electricity. The power supply I use in it is close to 90% efficient: it uses a transformer to generate 12 V DC and then uses DC-DC switching regulators to derive the other voltages. I measured power consumption at the wall and the whole thing uses 18 W typically. When you consider that my old machine used 150 W typically and still drew 7 W when turned off, that’s a huge improvement.
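
A quick sanity check on that payback, as a rough Python sketch: the wattages and the ~$500 build cost are from the comment above, while the per-kWh rate is an assumption chosen to roughly reproduce the $20/month figure.

    # Rough payback math. Wattages and build cost are from the comment;
    # the electricity rate is an assumed value, not a quoted one.
    OLD_WATTS = 150.0
    NEW_WATTS = 18.0
    BUILD_COST = 500.0          # dollars
    RATE = 0.20                 # dollars per kWh (assumed)

    saved_kwh_per_month = (OLD_WATTS - NEW_WATTS) * 24 * 30 / 1000.0
    saved_dollars_per_month = saved_kwh_per_month * RATE
    payback_months = BUILD_COST / saved_dollars_per_month

    print(f"~{saved_kwh_per_month:.0f} kWh/month saved")
    print(f"~${saved_dollars_per_month:.0f}/month, payback in ~{payback_months:.0f} months")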

Performance per watt is indeed increasing; if you need proof of this, compare a 3.2 GHz Pentium D with a 3.2 GHz Core 2 Duo. Fully loaded, that Pentium D will easily eat through 135 watts while the Core 2 Duo sits somewhere around 100 watts tops. Now factor in the gaping performance gap between the architectures and the C2D is an obvious winner. With upcoming quad-core chips dropping considerably in price, Google will see jumps in performance per watt: you can load power supplies at higher values, power fewer motherboards with denser rigs (1 quad-core vs. 2 dual-core rigs), and cut down on the cost of powering the excess peripherals. Processors keep surpassing their predecessors in both performance and power efficiency.
P.S. 3.2 GHz can usually be hit on the C2D at the default Vcore (1.25-1.35 V, depending on the chip).
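
For anyone who wants to put numbers on that, here is a rough performance-per-watt sketch. The wattages are the ones quoted above; the relative-performance figure is an assumed ballpark, not a benchmark result.

    # Illustrative performance-per-watt comparison. Wattages come from
    # the comment; the relative performance number is an assumption.
    pentium_d = {"watts": 135.0, "relative_perf": 1.0}
    core2duo  = {"watts": 100.0, "relative_perf": 1.5}   # assumed ~50% faster clock-for-clock

    def perf_per_watt(chip):
        return chip["relative_perf"] / chip["watts"]

    ratio = perf_per_watt(core2duo) / perf_per_watt(pentium_d)
    print(f"Core 2 Duo does ~{ratio:.1f}x the work per watt of the Pentium D")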

Yes, 14 cents per kWh is expensive.

Here in Ohio I pay about five.

14¢/kWh is expensive; here in Manitoba 95% of our power is hydro and we get it for $0.06 CAD.

Maybe Google should move here.

http://www.hydro.mb.ca/regulatory_affairs/energy_rates/electricity/current_rates.shtml

Actually, for large businesses it drops down to just above $0.02 per kWh if you own the transformer on site and use more than 100 kV.

For one thing, computer software starts to look incredibly expensive

The software that I’ve been using for the past 13 years is free to use (GNU, Linux, Debian, Ubuntu).

Bill’s argument, “what is your time worth,” is lame.
My time is worth a lot; the software is still free, and I didn’t pay to use it.

Here’s an outstanding presentation from James Hamilton outlining what commodity data centers will look like in the not-so-distant future.

http://research.microsoft.com/~jamesrh/TalksAndPapers/JamesRH_CommodityDataCenterDesign.ppt

Jeff,

I liked your article. I’m surprised to see how much political fervor it has sparked. There are many different points of view about how much energy ‘should’ cost and whether market forces will self-correct problems or whether people should artificially raise energy prices to encourage alternatives. Lots of it seems tied to the ‘hot topic’ of Global Warming and that political football.

No matter whether you’re a dyed-in-the-wool socialist or a capitalist, it makes sense to say that less is more in this case, people. Technology has always made gains by doing lots more with lots less.

g

Batteries for my flashlight and headlamp are getting too expensive.

Of course, the question could also be whether looking for a financial incentive is the right way to approach power saving. Isn’t green computing something we do not only for our wallets, but also for our children?

10-15 years ago they were predicting that the US had coal reserves that would last over 100 years, even with increasing demand and without looking for new ones.

Stupid suggestion for all of those data centers:

  1. Get rid of everything but 12 V; there is no reason for the other voltages, since everything can be run off of it.
  2. Get rid of per-machine power supplies entirely and have one ultra-efficient AC/DC converter for the building that supplies all of the computers with DC.

#2 on the list is such a simple thing that it should be done in all new homes: add DC wiring. Everything and its uncle runs on DC in houses now except for refrigerators and stoves, and even those can run off DC just as efficiently, if not more so, than AC.

The key is that you need AC for effective long-distance transmission (just ask Edison). But inside the house, there is no such requirement. We should be transitioning homes from AC to DC ASAP. The cost/energy savings would be huge… just think of all of the power bricks around your house, each and every one of them maybe 70% efficient, when you could have a single converter that is 95% efficient (because it’s so large) and get rid of all of that heat and all of those bricks…
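
To put a rough number on the power-brick argument, here is a sketch using the 70% and 95% efficiency figures from the comment; the device count and per-device load are made-up illustrative values.

    # "Many small bricks vs. one big converter" in numbers.
    # Efficiencies are from the comment; device count and load are assumed.
    devices = 10                 # assumed number of wall-wart-powered gadgets
    load_watts_each = 10.0       # assumed DC load per gadget

    def wall_power(load, efficiency):
        return load / efficiency

    bricks_total  = devices * wall_power(load_watts_each, 0.70)
    central_total = devices * wall_power(load_watts_each, 0.95)

    print(f"bricks:  {bricks_total:.0f} W drawn from the wall")
    print(f"central: {central_total:.0f} W drawn from the wall")
    print(f"saved:   {bricks_total - central_total:.0f} W (all of it currently waste heat)")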

“the greater public good”

Coding Tau?

About the ‘anecdotal’ evidence that Vista with Aero uses more power…

Tom’s Hardware ran a benchmark on this 1-2 months ago. They actually tested how long the battery lasted with Vista + Aero vs. XP. It turned out the two used roughly the same power.

While this might be surprising to some, it definitely is not strange. Video cards are so powerful nowadays that the operations performed by Vista’s Aero are very simple and undemanding for them.

You can save money now, or you can save later. It’s all preference.

In the long run, the falling cost of computing hardware is largely driven by Moore’s law, which has lowered watts per MIPS at roughly the same rate as it’s lowered dollars per MIPS.

It’s true that notebooks use a lot less power — typically 10-40W rather than the 200-400W figures you’re bandying about above.

A three-year payback is nothing to sneeze at. That’s a very-low-risk 33% ROI. You aren’t going to find that in the stock market!
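
The payback-to-ROI conversion is just arithmetic; here is a small sketch with assumed example numbers.

    # If an efficiency upgrade pays for itself in N years, the simple
    # annual return is 1/N. The dollar figures are assumed examples.
    upfront_cost   = 300.0    # assumed cost of the more efficient gear
    annual_savings = 100.0    # assumed electricity saved per year

    payback_years     = upfront_cost / annual_savings
    simple_annual_roi = annual_savings / upfront_cost

    print(f"payback: {payback_years:.1f} years, simple ROI: {simple_annual_roi:.0%}/year")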

Remember that every watt you use in a computer in a data center becomes 2-4 watts when you include the power draw from the air conditioning. This is also true in your house when you have the air conditioner on, but if you are paying to heat your house, it just means you have to run the heater less. Whether this saves or wastes money depends on what the heater runs on — if it’s electric, it’s obviously a wash, since 200W of heating is 200W of heating regardless of whether it’s operating by heating a CPU or a nichrome wire. Most other heating fuels used to be cheaper, but I think the recent rise in the price of natural gas in the US has made it more expensive than electricity.
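
Here is a sketch of what that cooling multiplier does to the running cost of a single server. The 2-4x range is from the comment and the 14-cent rate was mentioned earlier; the server wattage and the choice of the low end of the range are assumptions.

    # Effective yearly cost of one server once cooling is included.
    RATE = 0.14                       # $/kWh (the 14-cent figure above)
    it_watts = 200.0                  # assumed server draw
    cooling_multiplier = 2.0          # low end of the 2-4x range quoted

    total_watts = it_watts * cooling_multiplier
    dollars_per_year = total_watts * 24 * 365 / 1000.0 * RATE
    print(f"{it_watts:.0f} W of compute costs ~${dollars_per_year:.0f}/year once cooling is included")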

RC, Google wants to hide their power usage because they don’t want their competitors to know how much computing power they have. It makes it more difficult to enter a market and compete with Google if you think you can do it with one-tenth the compute power they use (especially if you’re wrong), or if you think you need ten times the compute power they do and therefore overbuy.

To those who asked why power is expensive in California (50% more than in Finland or most other US states), the answer is that the utilities there were forced to sign long-term power contracts in 2001 in order to stop getting hammered by rolling blackouts. These contracts require them to pay extortionate rates for power. It turns out that the power shortages were engineered (by Enron, among others) precisely to extract more money from California. After these crimes were uncovered, if I remember correctly, California’s governor Gray Davis initiated a lawsuit against the energy companies responsible; they responded by funding a recall campaign against him and an electoral campaign by their buddy Arnold Schwarzenegger, who dropped the lawsuit as soon as he became governor.

California was particularly vulnerable to such machinations because (1) they are heavy energy users (because they are a major part of the national economy); (2) they don’t have nearly enough local power production; (3) they had just voted against George W. Bush.

So, Flanagan’s comment about how such high power rates must come from corruption or incompetence was right on target.

This is part of why I’m in Argentina.

@Tim
Exactly. My desktop computer has a 1.2 V supply (most likely for logic), which is of course handled by the motherboard. However, you are correct: redesigning ALL the chips to run on 12 V would not work very well. I did say “making bigger chips to handle the extra voltage (wires or leads need to be bigger)”. 12 V would burn out most chips today; even gas-guzzling TTL only takes 5 V! Of course, CMOS WOULD take 12 V, but when was the last time you saw a CMOS ALU? Or accumulator? Or… anything beyond a few simple gates?

And of course, linear power supplies are huge! I have a 550 W switching PSU in my desktop computer, and it is about the size of three bricks. A 550 W linear power supply would be bigger than the case!

And on the idea of 12 V to power the entire building: forget about it! Why do you think we use AC for distribution instead of DC? BECAUSE IT TRAVELS FARTHER (you can step it up to high voltage, so transmission losses stay low)!

As for what I said earlier about 12 V running primarily the mechanical components: that really is ALL it does. The CPU is usually on 5 V, more likely 3.3 V or even 1.2 V/1.8 V (on mine, 1.2 V). 12 V would fry it, and that would be hard to work around.

Oh well… I guess you can’t win them all…

Jeff,

Interesting analysis… but I disagree with your conclusion. Most people would think a 3-year payback is a fairly good investment. With the growing passion for green alternatives, I think people will be looking for conservation alternatives at all levels of their home and offices.

In “GREEN: THE NEW RED, WHITE AND BLUE,” Thomas Friedman, New York Times columnist and author of “The World is Flat,” visited the hydro-electric power plant that Google is using to power one of its data centers on the west coast. In the future, all technology companies need to address power consumption as part of their overall marketing and cost strategy.

Also, I have to agree with our European brethren that while 12 cents a kWh is high by current American standards, it’s just the beginning. I say this with a slight grimace, as I currently enjoy an artificially low 7-8 cents per kWh price here in Ohio.

To those of you saying Google should just shut off servers when not in use, consider this: Google’s best use of its power is not in responding to queries but in crawling the web for new or updated content (a never-ending mission).

To those suggesting one power supply per building: you must by now realize the loss of power over that distance would be terrible. However, using one supply per few motherboards would not be a bad idea.
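
A rough sketch of why the per-building version fails, using made-up but plausible numbers (one rack’s worth of 12 V load and a 10 m cable run of roughly 10 AWG copper):

    # At low voltage the current, and the I^2*R loss in the cabling,
    # gets huge. Load and run length are assumed; 10 AWG copper is
    # roughly 3.3 milliohms per metre.
    load_watts = 1000.0          # assumed 12 V load for one rack
    volts = 12.0
    run_metres = 10.0            # assumed one-way distance to the central supply
    ohms_per_metre = 0.0033      # ~10 AWG copper

    current = load_watts / volts                      # ~83 A
    resistance = 2 * run_metres * ohms_per_metre      # out and back
    loss_watts = current ** 2 * resistance            # I^2 * R
    drop_volts = current * resistance

    print(f"{current:.0f} A, {drop_volts:.1f} V dropped in the cable, {loss_watts:.0f} W wasted")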

Google could save a significant amount by giving their programmers little terminal boxes (100 MHz and 32 MB RAM, no HD or CD) and giving them access to a server cluster for their actual work. This would increase the efficiency of each user’s individual machine by eliminating 95% of its power draw. This idea would work for us at home as well: as connection costs go down, we can start centralizing computing power and make more efficient use of each watt. (Now if someone would just offer this as a service…)
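
For a feel of the thin-client math, here is a sketch where every number is an assumption for illustration, not a measurement.

    # Aggregate power of N desktops vs. N thin clients plus one shared
    # cluster. All figures are assumed example values.
    developers = 100
    desktop_watts = 200.0        # assumed loaded workstation
    thin_client_watts = 10.0     # assumed "little terminal box"
    shared_server_watts = 2000.0 # assumed cluster capacity serving everyone

    desktops_total = developers * desktop_watts
    thin_total = developers * thin_client_watts + shared_server_watts
    print(f"desktops: {desktops_total:.0f} W, thin clients + cluster: {thin_total:.0f} W")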

For servers, HPC, and data centers, it’s all about SWaP (Space, Watts, and Performance).

Those same metrics can also be applied to home power users; in my case, I’m running 8 systems at 100% load, 24/7/365, off a 20 A breaker.

Another note that people often don’t realize: Intel processors (STILL) consume more power than AMD.

Final note (that people also don’t realize):
The cooler your CPU/system runs, the more efficient it gets: there is less current leakage. (Based on results published in the IBM Journal of Research and Development for the z9 system.)
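
As a toy illustration of that leakage point, here is a sketch using the common rule of thumb that leakage power roughly doubles every 10 °C or so; the baseline numbers are assumptions for illustration, not figures from the IBM paper.

    # Toy leakage-vs-temperature model. Baseline wattage, temperature,
    # and the doubling interval are all assumed illustrative values.
    def leakage_watts(temp_c, base_watts=10.0, base_temp_c=50.0, doubling_c=10.0):
        return base_watts * 2 ** ((temp_c - base_temp_c) / doubling_c)

    for t in (50, 60, 70, 80):
        print(f"{t} C: ~{leakage_watts(t):.0f} W of leakage")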