When Hardware is Free, Power is Expensive

Very interesting article… I’m curious as to what got you thinking about this?

Computer power efficiency is definitely heating up as a green topic.

  1. Even though we can do a lot to make computing more efficient, it’s still a far more efficient industry (dollars produced per watt) than just about anything else going. When Google pumps out great maps for people (who then avoid getting lost) or people shop online (avoiding trips to the store), the net energy balance is probably negative — computing saves more energy than it burns.

  2. Computing in the cloud also has strong energy implications: http://profitdesk.com/content/2006/10/13/saving-it-costs-by-banking-in-the-cloud/

My biggest issue for the last 5 years or so has been needing a high-end 400 watt power supply just to get my desktop at home to boot with 2 optical drives, 2 hard drives, and a high-end (for 5 years ago ;p) video card. With a weaker 350 watt power supply, the drives would all spin up during the boot process and the system would promptly reset as something was starved for power. That alone was reason enough for me not to turn the computer off unless I really needed to: with the power-saving features enabled, I can’t imagine the idle power draw being high enough to outweigh the cost of powering the thing up every time I need it.

Of course, the surprising thing (to most people, anyway) is that power consumption has continued to increase even as the chip makers (of both video and CPU chips) tout their new power savings. It seems more likely that they’re just making the chips efficient enough to actually be usable in the home, rather than efficient enough to use less power overall.

Speaking as an electronic engineer, the suggestion to “simplify” power supplies down to a single 12V supply wouldn’t actually make things any more efficient; in fact it’d probably make things worse.

When you want to change a DC voltage to another DC voltage, you’ve really got two choices :-

Linear power supplies: the output voltage is less than the input voltage and the output current equals the input current, so once you’ve factored in in-circuit losses the efficiency is a bit less than Vout/Vin (12V to 5V = 41% efficiency, 12V to 3.3V = 27.5% efficiency).

Switch mode power supplies: the output voltage can be less or more than the input voltage, and the output current depends on the efficiency of the supply and the input current. Generally, unless specially designed, these work better under higher loads. Efficiency depends on the load and the design, but is typically 70 to 80%.
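To put rough numbers on those two cases, here’s a quick Python sketch; the figures are just the typical values quoted above, not measurements:

    # Rough efficiency comparison for the two regulator types described above.
    # All figures are illustrative, not measured.

    def linear_efficiency(v_in, v_out):
        # A linear regulator passes roughly the same current in and out,
        # so its best-case efficiency is simply Vout / Vin.
        return v_out / v_in

    print(f"12V -> 5.0V linear: {linear_efficiency(12, 5.0):.1%}")  # ~41.7%
    print(f"12V -> 3.3V linear: {linear_efficiency(12, 3.3):.1%}")  # ~27.5%

    # A switch-mode converter trades current for voltage instead, so its
    # efficiency depends on design and load -- typically 70 to 80%.
    SWITCHER_EFFICIENCY = 0.75  # assumed mid-range value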

Now it’s just not possible to “redesign the chips” to live on 12V. That’s how things worked in the past, but as clock speeds went up and transistor sizes shrank, the supply voltages had to come down: not only was the voltage the major source of heating, but smaller transistors can’t cope with the higher voltages. So nowadays, even though the power supply is outputting +3.3/+5/-5/+12/-12, you’ll find that the motherboard is covered in small localised power supplies (usually a combination of switch mode and linear supplies) reducing those rails to lower values like 1.2V, 1.8V, etc.

If you consider that the average efficiency of each switch mode supply is 75 to 80%, it’s pretty obvious that cascading just makes the losses multiply. If anything, you’d be better off designing a single multiple-output power supply that generated every rail you’d need, but this isn’t possible given the way that microprocessor cores, and core voltages, change from generation to generation.
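A hypothetical two-stage example (again a Python sketch with assumed per-stage figures) shows how quickly the cascade eats into efficiency:

    # Cascading converters multiplies their efficiencies: an 80%-efficient
    # ATX supply feeding an 80%-efficient on-board regulator delivers only
    # 0.80 * 0.80 = 64% of the wall power to the chip.
    stage_efficiencies = [0.80, 0.80]  # assumed figures for each stage

    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff

    print(f"end-to-end efficiency: {overall:.0%}")  # 64%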

I’d have to agree that the best way to reduce power usage would be to run a simpler system: powerful CPUs, high-speed HDDs, and high-end graphics cards all come with high power requirements. Multi-core CPUs are an improvement, but I don’t think it’s possible to power down any unused cores (yet). And use LCD displays: a typical LCD uses around 25 to 40W, a CRT 100 to 200W, and a plasma TV 400W to 1000W.

The idea of running a single 12V supply to power a building is just crazy: not only would you have problems with ohmic losses (as mentioned earlier), but you’d need to shift some really dangerous currents around the building. For a 1200W microwave oven you’d need to supply at least 100A (excluding losses). Not only is this impractical, it’s also highly dangerous.
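For what it’s worth, here’s a small Python sketch of that arithmetic; the 0.1 ohm of house wiring is a made-up figure, purely to show the trend:

    # Why low-voltage building wiring hurts: a fixed load draws P / V amps,
    # and the resistive loss in the wiring grows with the square of that current.
    WIRE_RESISTANCE_OHMS = 0.1  # hypothetical wiring resistance
    LOAD_WATTS = 1200           # the microwave oven example above

    for volts in (230, 120, 12):
        amps = LOAD_WATTS / volts
        wiring_loss = amps ** 2 * WIRE_RESISTANCE_OHMS
        print(f"{volts}V: {amps:.1f}A drawn, {wiring_loss:.1f}W lost in the wires")

    # Prints roughly 5.2A / 2.7W at 230V, 10A / 10W at 120V,
    # and a terrifying 100A / 1000W at 12V.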

Cisco’s newest labs have most devices running off DC (48V). They don’t have the overhead (heat) of AC-DC conversion in every device.

Don’t generalize about high electricity prices in Europe; in France it’s €0.07/kWh. But the energy sector there isn’t deregulated yet.

Maybe we need hybrid power supplies, like hybrid cars.

It’s a bit of a joke, but in all seriousness, are notebooks more efficient because of the battery? I imagine they’d have to be. For home PCs (not servers), a power supply that runs off the battery when the computer is idle might increase overall efficiency, just like hybrid cars remove the pollution caused by idling engines.

Of course, if you leave your charger plugged in it could blow the whole equation (Nokia recently warned that 2/3 of all cell phone power usage is from adapter bricks being plugged in while not charging).

Hardware isn’t getting effectively free. That math is absolutely pointless.

Although technology creates more computing power, it also creates the demand for it. I could work, play, and live perfectly fine with the computers of 10 years ago. Today I probably need to spend even more money just to keep working, playing, and living with them at the same level.

http://tinyurl.com/2j2cm9

At least for my wallet, computers are getting effectively more expensive.

My home rig has a power supply with an integrated fan adjustor and wattage display that goes in a 5" bay. I’ve used it on my last 4 motherboards now. I find that I pretty much have to be doing something active for it to go over 100W. It almost never goes over 110, and when it does it’s typically because a game is starting up (3D card + hard drive + CD-ROM).

Typically it seems to idle in the upper 70s to low 80s.

My wife’s machine, an old WinXP Athlon box, is basically always on. I set it to standby/hibernate/whatever after 45 minutes of idle, but even when that actually worked it was a pain because XP takes a looong time to go down and come back up. Recently something that was installed changed things (no clue what it was), and XP no longer powers down. There goes power saving.

I don’t want to turn this into a flame war, but by contrast my Mac goes into standby very quickly and comes back up instantly, and no installed software seems to interfere with it going to sleep.

Duncan Wilcox: “XP takes a looong time to go down and come back up”.

This is generally caused by a driver that takes too long to respond to the standby request. My machine, with an Intel 865PE chipset and NVIDIA graphics, takes less than 2 seconds to go to standby or wake up.

Me: “This is generally caused by a driver that takes too long to respond to the standby request.”

By the way, IIRC, Vista has improved in this area: it will only wait a limited amount of time for drivers, so sleeping should be faster.

Wouldn’t it make more sense for them to have one 120V-to-12V transformer per server room, and then run 12V directly to each computer? Somehow it feels like you could do a better job that way, or am I missing something? Didn’t someone at Google suggest that we should use 12V in our homes instead of 100/120/230/whatever volts?

There is more to this Google/power story that I don’t fully understand. It appears that Google has been pressuring state legislatures to rewrite laws concerning power usage reporting so that they don’t have to report how much power their data centers are using (see http://blogs.zdnet.com/micro-markets/?p=1277 for example). If they are making power usage so much more efficient, why would they need to hide how much they use? Speculation among some fellow geeks has been that Google corporate wants to appear “green”, while sweeping under the rug the exorbitant amount of power they are starting to consume at their “out-of-the-way” datacenters.

Maybe that’s a sign that Quebec will be seeing more datacenters popping up; Montreal’s prices are something like 6.6¢ per kilowatt-hour.

This makes me think back to my Macintosh Plus. It cost $2,500 (in 1986 dollars), well above today’s price for an entry-level iMac. It consumed 60W and ran cool enough that it didn’t need a fan. (Of course, its CPU ran at 8 MHz.)

I run a laptop with a pretty serious CPU (Core Duo T2700 @ 2.33 GHz) and a good, gaming-quality video card (NVIDIA GeForce 7600). It is “rated” at 89 watts (4.7 amps at 19 volts) at the power supply, but I know the computer itself uses much less than that on a regular basis. The power supply is only slightly warm, and it can run the computer and charge the battery with the same 4.7 amps. I know the computer can run without the battery installed, so that must be the peak wattage used. I run the laptop 24/7, though it usually snoozes (not standby, but video and hard drive shutdown) during the night.

I originally went with the laptop for noise reasons, but I do love its low-power operation.

At least we aren’t all using those big 22 inch CRTs anymore. My old one was rated at 450 watts! That would warm up my small office in no time.

Power supply manufacturers could make their power supplies much more efficient, and it would probably only add 10-15% to the cost. But if they’re $1 more than the competition at NewEgg, they lose the sale. It’s an uninformed market. If they put “85% efficient” in the title, it would sell more power supplies. But instead they put “455 watt” so they can win the “whip it out” contest.

Brainstorm: How about a little generator wheel in your mouse and piezo keys on your keyboard to generate power while you use your computer? Off to the patent process…

“how often is your PC operating at full load? If you’re like most users, almost never.”

I find this deeply, deeply sad. More people should be gamers! I am a gamer, and I run my computer at full load from the moment I plop in front of it, to the moment I finally stumble off to bed.

In science, computers sit idle half the day. The rest of the time they’re actively running everything from Photoshop filters to 3D rendering to massive scipy/Matlab matrix crunching or image assembly. Universities would benefit a lot from efficient power supplies as well, even ones that are only efficient in the 250W+ range.