IBM drops Power7 drain in 'Blue Waters'

A month ago, IBM and the University of Illinois broke ground for the data center that will eventually house the Power7-based "Blue Waters" massively parallel supercomputer. The data center, it turns out, is as tricky to design as the processor and server that will be humming along inside it. During a talk at the recent SC2008 …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Boffin

    Hot headed?

    I might be missing something here, but if power consumption in HPC data centres is such a concern, and cooling a major part of this, then why not extend this radical idea of using the ambient environment to its logical conclusion and actually site the damn things where cool (or better, bloody freezing) air/water is available all year round?

    Why piss about with temperate climes? Stick the things halfway up a mountain or in a remote northern latitude and be done with it.

  2. E

    Change the form factor

    Many clusters are built of 1U multiprocessor pizza boxes with 2 or 4 CPUs plus a PSU per box.

    Could these things not be offered with the power drawn from per-rack bus rails? If so then perhaps a single large PSU for each rack could be engineered for better efficiency.

    Instead of a full 110/220 VAC PSU in each pizza box, just have an essentially empty PSU that plugs into the power bus - the rest of the pizza box can be the same product sold for non-clusters.
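
    To put a rough number on the potential saving, here is a quick Python sketch; the box count and both PSU efficiency figures are illustrative assumptions rather than measured values:

        # Illustrative comparison: per-box AC PSUs vs one shared rack-level PSU.
        # All figures below are assumptions for the sake of argument.

        BOXES_PER_RACK = 36      # 1U pizza boxes in a rack (assumed)
        LOAD_PER_BOX_W = 300     # DC load drawn by each box (assumed)
        PER_BOX_PSU_EFF = 0.75   # small PSU running at partial load (assumed)
        RACK_PSU_EFF = 0.92      # one large, well-loaded rack PSU (assumed)

        load_w = BOXES_PER_RACK * LOAD_PER_BOX_W
        per_box_input_w = load_w / PER_BOX_PSU_EFF
        rack_input_w = load_w / RACK_PSU_EFF

        print(f"DC load per rack:        {load_w:.0f} W")
        print(f"Input with per-box PSUs: {per_box_input_w:.0f} W")
        print(f"Input with one rack PSU: {rack_input_w:.0f} W")
        print(f"Saving per rack:         {per_box_input_w - rack_input_w:.0f} W")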

    Also the comment above is good: build data centres where there is a lot of cold water already. Canada has good power infrastructure and a lot of cold water.

  3. Keith T
    Alert

    Lots of cheap electricity and cold air up north

    There is a lot of cheap hydro-electric power available in Northern Manitoba, Canada.

    The ambient air and ground temperatures make cooling cheap and easy.

    A portion of the data center's waste heat could be captured in hot water and sold to the local community for heating.

  4. Chris C

    Good ideas

    "...the Blue Eaters machine is supposed to have more than 800 TB of main memory (with at least 32 GB of main memory per SMP node), and more than 800 TB of main memory."

    I believe that statement is repetitive and redundant.

    "...comprised of 162 racks of servers, organized into three columns, with each column having five rows with 9 racks each."

    3 columns times five rows times 9 racks = 135 racks. Where do the other 27 racks go?

    "The Blue Waters data center will not have room-level air conditioning, either..."

    It's not my place to question data center designers (having no personal experience with data centers myself), but from a layman's viewpoint, won't the data center still get hot? Even if you can pipe all the heat away from the processors and memory using the water cooling, what about the hard drives? Will those also have water-cooling blocks attached to them? If not, won't the spinning platters and moving heads generate a good amount of heat?

    At any rate, it's good to see someone thinking about data center design to increase efficiency.

  5. Dave Edmondston
    Go

    @Hot Headed

    Agree - it's cold here for 9 months of the year (Edinburgh); I'm sure Scotland, Norway, Finland, Canada, Alaska, etc, etc can beat that and provide staff/locations for data centres. Use the world's natural resources where you can. Great things can be done with fibre cable and iLO cards.

    I seem to recall Iceland might be needing an industry to branch out into these days. Hmm...

  6. nagyeger

    Re: Lots of cheap electricity and cold air up north

    So instead of combined heat and power systems, we'd be talking combined heat and petaflop systems. I like it.

  7. Gordon Henderson
    Boffin

    Re: Change the form factor

    It's not that easy to distribute low voltage at high currents. 300W at 5V is 60 amps. Multiply that by (say) 36 in a full rack and it's 2160 amps. Now modern systems use more than just 5V, so DC-DC converters are required, or you have to distribute the other voltages too. High current introduces more loss (I²R loss), so dissipates more heat, so you need bigger power rails, more copper, and that in itself will cause issues... (And if the total resistance of all that copper was a 1000th of an ohm, then the losses at 2160 amps would be roughly 4.7kW.) It only gets a lot worse as the voltage drops and the current increases )-:
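
    Here is that arithmetic as a quick Python sketch, using the same assumed figures (36 boxes at 300W on a 5V rail, with a thousandth of an ohm of bus resistance):

        # Back-of-the-envelope check of low-voltage DC distribution losses.
        # All figures match the assumptions in the paragraph above.

        BOXES = 36
        POWER_PER_BOX_W = 300.0
        RAIL_VOLTAGE_V = 5.0
        BUS_RESISTANCE_OHM = 0.001   # assumed total resistance of the rack bus

        current_a = BOXES * POWER_PER_BOX_W / RAIL_VOLTAGE_V   # I = P / V
        loss_w = current_a ** 2 * BUS_RESISTANCE_OHM           # P_loss = I^2 * R

        print(f"Total bus current: {current_a:.0f} A")       # 2160 A
        print(f"I^2*R bus loss:    {loss_w / 1000:.1f} kW")   # ~4.7 kW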

    More efficient chip architectures are needed (e.g. ClearSpeed), but for general-purpose server use I'm switching to Intel Atoms and am eyeing up the AMD offerings too (e.g. 45W for a dual hyper-threaded Atom system with 2 x 1TB drives vs 225W for a Dell 2U monster with a tenth of the disk capacity!)

  8. Anonymous Coward
    Alert

    200,000 Cores....

    .... They'd better get their PVU license counts correct before IBM sting them for a ton of cash ;)

  9. This post has been deleted by its author

  10. Frank

    Is this correct?

    "..a 300 watt device running in a data center requires 800 watts of input power. .."

    I can readily believe this, what with the inefficiencies at each stage of the power supply chain and the large burden of providing power for the cooling systems. By 'input power', I assume this means total power supplied to the entire facility from the electricity provider.

    The final statement of planned and calculated savings seems way too optimistic however:

    "So to power up a 300-watt device in this data center will only take about 350 watts of input power. That's a very big improvement."

    That's not a 'very big improvement'; it's an unbelievably enormous improvement. I would ask if this compares like with like. The initial figure of 800W to run a 300W box referred to total power provided to the facility, with all ancillary equipment (power chain, cooling, etc) included.
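
    As a quick Python sketch of what the two quoted figures imply per 300-watt device (assuming both are meant as total facility input power, which is exactly the like-for-like question):

        # Overhead implied by the quoted figures: 800 W (conventional) and
        # 350 W (Blue Waters) of input power per 300 W of IT load.

        it_load_w = 300.0
        conventional_input_w = 800.0   # figure quoted for a typical data centre
        blue_waters_input_w = 350.0    # figure claimed for the Blue Waters design

        conventional_ratio = conventional_input_w / it_load_w   # ~2.67
        blue_waters_ratio = blue_waters_input_w / it_load_w     # ~1.17

        print(f"Conventional: {conventional_ratio:.2f} W in per W of IT load")
        print(f"Blue Waters:  {blue_waters_ratio:.2f} W in per W of IT load")
        print(f"Overhead per device cut from {conventional_input_w - it_load_w:.0f} W "
              f"to {blue_waters_input_w - it_load_w:.0f} W")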

    If it is the case that this level of improvement can be achieved with simple design and usage techniques, then why on earth hasn't it been done before in other data centres, resulting in massive running cost savings (as well as saving the planet, etc)?

    It's probably too much to ask but I'd like to see a follow up study done on this centre when it's completed and in normal operation. That would be interesting.

  11. Steven Jones

    Best Heat Sink

    Is, of course, the oceans. Pick a location where the sea is cold all year round and use a heat exchanger to dump the heat. That's something the designers of nuclear power stations have known about for years. Air cooling is never as reliable - even up a mountain (at least at accessible heights), the air can get warm. At high altitudes the air density is significantly lower, which lowers the effectiveness of heat exchangers.

    Of course you want to choose a place where the sea is cold all year round - but the USA has Alaska, and the sea there is very cold (plus I assume they have significant hydro potential). Even further south the Pacific is generally pretty cold as there's not much of a continental shelf (although being on a subduction zone, earthquakes and tsunamis are always a danger).

    Of course being posted to Alaska might not be that popular....

  12. Danny
    Paris Hilton

    Inverness has a few datacentres

    N Scotland is ramping up datacentres; cheap tidal energy and low ambient temperatures, with the added high-speed data links, make for a good combination.

    I have often wondered why Iceland hasn't cleaned up. Power is almost free from the geothermal plants and ambient temperature is hardly a problem. I suppose raw data links would be an issue, having no "land lines" and satellite only.

    Paris, I bet she has no problems with her thermals.

  13. Andrew

    The question on everyone's lips...

    Will it run Vista?

  14. Youvegottobe Joking

    @ Chris C

    The storage boxes attached to the clusters can be in a separate room if necessary. The nodes themselves would not need any hard disks. Note that the storage boxes can put out a fair bit of heat, but even so that would only be a fraction of the heat put out by a high density rack.

    Most current high density racks blow out an incredible amount of heat (standing behind a rack of HP bladecentres is like opening the door of your car on a hot day), but almost all of that is from the CPUs, PSUs and chipset/RAM. The remainder of the circuitry generally doesn't even have heatsinks on it. If you cool the RAM/chipset and CPU with water and the power comes from somewhere else, then there will be very little heat generated by the rack and minimal cooling will be required.

    Ideally, as mentioned, these datacentres should be put in places with cheap power and low temperatures. Another good idea mentioned above: the hot water produced could be piped to nearby residences and offices during winter...

  15. John Ryland
    Linux

    Fortress of solitude anyone?

    Hmmm, perhaps cooling is the reason that Superman's Fortress of Solitude is in the land of ice...

  16. E

    @Gordon Henderson

    Thanks for explaining!

  17. David Halko
    Thumb Down

    For the cost of the building, they could have a SUN Black Box and get the computers for free!

    It is rather ironic - I am not sure how IBM convinced them to buy a building for their supercomputer with some unbenchmarked, possibly 8-core proprietary processors and a proprietary OS.

    The last Power processor with a single socket got 53.2 SPEC CINT2006 Rate

    If they would get a SUN Black Box...

    http://www.sun.com/products/sunmd/s20/specifications.jsp

    1 Black Box = 8 racks * 80 sockets = 640 Sockets

    Need more capacity? Well, add another box and stack it.

    Add a bunch of open-source 8-core processors with a bunch of open-source OSes (download OpenSolaris or Linux) and get the same performance.

    http://www.sun.com/servers/coolthreads/t5140/

    A SUN CoolThreads T2+ will give the university 85.5 SPEC CINT2006 Rate per socket. With 2 sockets per 1U, the university could get 160 SPEC CINT2006 Rate per U of rack space. I wonder if IBM will be able to keep up.
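
    Rolling those quoted figures up in a quick Python sketch (the per-socket rates and socket counts are the claims above, not independently verified results):

        # Roll-up of the SPEC CINT2006 Rate figures quoted above.
        # All per-socket numbers and socket counts are the commenter's claims.

        POWER_RATE_PER_SOCKET = 53.2    # quoted single-socket Power result
        T2PLUS_RATE_PER_SOCKET = 85.5   # quoted UltraSPARC T2+ (T5140) result
        SOCKETS_PER_1U = 2              # T5140 is a 2-socket 1U box
        RACKS_PER_BLACK_BOX = 8
        SOCKETS_PER_RACK = 80           # quoted for the Sun MD S20

        black_box_sockets = RACKS_PER_BLACK_BOX * SOCKETS_PER_RACK    # 640
        black_box_rate = black_box_sockets * T2PLUS_RATE_PER_SOCKET
        rate_per_1u = SOCKETS_PER_1U * T2PLUS_RATE_PER_SOCKET         # 171 (vs the 160 quoted above)

        print(f"Sockets per Black Box:  {black_box_sockets}")
        print(f"Aggregate rate per box: {black_box_rate:.0f}")
        print(f"T2+ rate per 1U:        {rate_per_1u:.0f}")
        print(f"Power rate per socket:  {POWER_RATE_PER_SOCKET}")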

    The university would get the same computing power rolled to them and usable today if they went with SUN.

    What an awful idea to wait for something to be cooked up by IBM sometime in 2010, when they could have had it from SUN since April 2008 at a lower cost.

    Someone needs a better education!

  18. Anonymous Coward
    Go

    @David Halko

    Remember that this latest supercomputer will probably be used to run big scientific problems. Now, from my experience of such things, integer performance doesn't really figure big - so using CINT2006_rate - which is an integer benchmark - isn't that useful.

    In fact, in my work history I remember working with one researcher on his (integer-based) code, and the old Sun Ultra60 was giving 'our' Cray Y-MP a damned good thrashing. But no one would argue that this meant we should have skipped our Cray in favour of a shedload of U60s!

    So go back to your Sun specs and do a meaningful comparison - one based on floating-point performance. The trouble with this is that the Power processor lines (as far as I can remember) have had comparatively poor integer performance, but their floating-point results have been pretty class-leading.

    Personally, I'll be interested to see what the production Power7 can do... :)

  19. Rackspanner
    Coat

    Calm down boys

    It's pretty pointless arguing where you should build this kind of thing when it's already clearly been decided to put it in a shopping centre just outside the M25.

  20. Anonymous Coward
    Thumb Down

    @David Halko

    The one that needs better education is yourself.

    Integer benchmarks (e.g. the CINT figure you mentioned) are unusable for HPC workloads.

    If Sun had nice gear for HPC they would figure in the TOP500, but they don't.

    Sun boxes have a poor design for HPC throughput; that's why IBM's parallel designs always win with POWER and Cell machines.

    Learn and read more before posting, please!

    Regards!

This topic is closed for new posts.
