Green data center threat level: Not green

Data centers are rampaging energy hogs. If you haven't been thoroughly beaten over the head with that fact already — you'll have to give us a tour of the underside of your rock some time. We're sure it's lovely. No doubt there's big money to be sluiced from data center electricity woes. And just about every tech company is …

COMMENTS

This topic is closed for new posts.
  1. Matt Bryant Silver badge
    Boffin

    Energy Czar - unlikely!

    Here's the problem: if my racks each cost £780 in juice bills from my opex budget, and say I have 1000 racks and manage to implement a 10% saving, that's only £78k in the year. To have the clout to actually enforce changes, a dedicated Energy Czar would have to be board level, which means he would likely cost the company £200k+ per annum out of someone else's budget, a lot more than the £78k saving that would go back into my opex budget. And that's before we consider the costs and possible risks of messing with the current setup: brown-outs, reduced computing power, changes of platform, and annoying other members of the board like the FD.

    The reality is that big electricity bills are a pain, but they're still not painful enough for most companies to undertake the expense of going seriously green, especially with budget tightening likely given the current economic mini-hysteria. So, in the meantime, green selling points which reduce opex are nice, but they are not the major selling point.

  2. Jason Hanna

    @ Matt Bryant

    I believe the article was quoting £780 per rack per *month*. Using your example, that's nearly £1M in savings per year from a 10% reduction.
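
    A quick back-of-the-envelope check of that (a minimal sketch in Python; the £780 per rack per month, 1,000 racks and 10% cut are simply the figures quoted above):

    # Back-of-the-envelope check of the savings discussed above.
    # Assumes the article's £780 per rack per month and the
    # hypothetical estate of 1,000 racks from the earlier comment.
    cost_per_rack_per_month = 780   # GBP, figure quoted from the article
    racks = 1000                    # hypothetical estate size
    saving_fraction = 0.10          # the 10% reduction under discussion

    annual_bill = cost_per_rack_per_month * 12 * racks
    annual_saving = annual_bill * saving_fraction

    print(f"Annual electricity bill: £{annual_bill:,}")       # £9,360,000
    print(f"Annual saving at 10%:    £{annual_saving:,.0f}")  # £936,000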

    Not sure I agree with your comment that an "Energy Czar" would have to be board-level or dedicated, either. If you're working in an organization with 1000+ server racks, I'd sure hope you have some sort of architecture team with responsibility for reviewing technical projects, hardware and software decisions, etc.

    Why can't this be a role granted to a high-level IT architect or facilities guru already in your company?

  3. trackSuit

    At the flick of a Switch - may contain some jokes

    Ask an accountant and he'll say lower the supply voltage to the data center by 10%.

    Ask a lawyer and he'll say sue the equipment manufacturers for not being green enough.

    Ask a share holder and he'll say decimate the employees (in the old sense of the word).

    Ask the managers and they will simultaneously say "we have no money"... and... "let's employ a consultant firm to write a big report about our problems".

    Time wasted recasting the problem will not fix it though. And companies always have plenty of money to spend on the things they want to spend it on. Time to think about how to solve problems for the long term, rather than running round in tight circles waving arms around.

    This will require deep thought, planning, a thorough understanding of the company, its employees and the technology it uses and... leadership. None of these should be a problem for a modern company whose aims are to be competitive, stay in business and provide jobs; a company that accepts change as a natural part of life rather than something to shy away from.

  4. Solomon Grundy

    @Jason Hanna

    I agree; if this role were needed, the responsibilities should be placed on existing staff. No need to hire someone to analyze the electric bill.

  5. Chris
    Thumb Up

    Come to sunny South Africa! Cheap!

    Why not bring all your datacentres to Sunny South Africa! Even with looming rate hikes we still have the cheapest electricity* around, and IT skills to boot!

    *electricity may not actually be available at all times of day; please check your local Eskom subsidiary for dates and times of scheduled power cuts

  6. Valdis Filks

    Technologies already exist to reduce power usage

    We should prevent energy usage rather than increase energy usage and then cool it; prevention is always better than cure. We should design and architect systems from scratch for lower power and minimal usage. These technologies exist today and are mature. We should not keep adding hotter and hotter chips and more disks which require more and more cooling; that is an upward spiral in usage. Let's avoid the power usage problem in the first place: use a low-power solution at the start and the downstream problems disappear.

    Instead, we buy power-hungry devices and then cool them, compounding the problem by redesigning and upgrading additional cooling systems.

    We should buy power-efficient devices that need no increase in cooling, removing the problem at source; then there is no need to spend money on extra cooling.

    Examples:

    Replace all desktop PCs and laptops with thin clients. These exist from Sun and Citrix (both solutions run/support Windows apps). How many workers are really mobile? If you sit at a desk all day you do not need your own PC, and most PCs and laptops run at low CPU utilisation all day. Sun Ray, running Unix apps and Windows apps, is mature and proven and has existed for years. No extra staff are required, and security problems are solved as there are no local data, disk or CD drives. Replacement of a 2kg thin client takes 10 minutes; replacement of a 20kg PC/laptop takes 1-2 days. Thin clients have no local software, so no re-installs are required. Many indirect savings.

    Take all inactive data and archive it on tape drives. Tapes require no cooling or power when inactive; disks do. This fact cannot be ignored. We can use de-duplication and then move the de-duped data blocks to tape. Use VTLs or any type of virtualisation to send data to disk, and behind the disks use tapes to store the data that is not used weekly or monthly, e.g. disk buffer pools for daily/weekly-used data, moved to tape after a month of inactivity. All tapes are encrypted, so there are no offsite or lost-data issues; tape encryption is mature and generally available.
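
    As a crude illustration of that tiering rule, a minimal sketch (the paths and the 30-day threshold are hypothetical; a real VTL/HSM product would handle this, and the encryption, for you):

    # Sketch of the disk-to-tape tiering policy described above: anything in
    # the disk buffer pool not accessed for a month is staged for archiving
    # to tape. Paths and threshold are hypothetical examples only.
    import shutil
    import time
    from pathlib import Path

    BUFFER_POOL = Path("/data/disk-buffer")       # hypothetical disk buffer pool
    ARCHIVE_STAGING = Path("/data/tape-staging")  # hypothetical tape staging area
    MAX_IDLE_DAYS = 30                            # "a month of inactivity"

    def tier_out_cold_files() -> None:
        if not BUFFER_POOL.is_dir():
            return
        cutoff = time.time() - MAX_IDLE_DAYS * 86400
        ARCHIVE_STAGING.mkdir(parents=True, exist_ok=True)
        for path in BUFFER_POOL.rglob("*"):
            # Move any regular file whose last access time is older than the
            # cutoff (name collisions in the staging area are ignored here).
            if path.is_file() and path.stat().st_atime < cutoff:
                shutil.move(str(path), str(ARCHIVE_STAGING / path.name))

    if __name__ == "__main__":
        tier_out_cold_files()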

    Virtualisation and hypervisors take extra CPU cycles and are not required to consolidate applications from many systems onto one system. For the last 20 years we have been running many apps on one shared server; any decent OS with CPU/memory-sharing algorithms can run many apps, and most enterprise OS platforms have built-in virtualisation and resource (RAM, CPU) sharing software. Large (e.g. 16-CPU, multi-core) servers can consolidate and virtualise many small servers, and these large servers have reliability technologies far exceeding the mythologised reliability of the mainframe. Large Unix servers and mainframes are the same thing, but with more application choice and simpler staffing issues on Unix servers.

    You can normally replace all web-tier servers and small print and email servers with multi-core servers. For example, approximately five small servers can be replaced by one Coolthreads server, with a 5x reduction in power and cooling requirements and no need to add extra cooling/chillers in your datacenter.

    Summary:

    From 500W desktop PCs, go to 10W thin clients.

    From 900W-to-many-kW disk arrays, go to 0W tape. Immense storage savings.

    From many 700W web-tier servers, go to 400W single-CPU multi-core throughput servers (e.g. Sun Coolthreads servers).

    Just by using existing, proven technologies that require no extra training or new technologies, we can solve today's power usage problems.

    You can get this from most of the major IT suppliers. Some technologies are unique to Sun, like Coolthreads servers and SunRay thin clients. I wish others could supply these too.
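
    Taking those summary figures at face value, the claimed reductions work out roughly as follows (an illustrative sketch only, using the numbers above):

    # Rough reductions implied by the summary figures above, taken at face value.
    desktop_w, thin_client_w = 500, 10        # claimed desktop vs thin-client draw
    web_server_w, coolthreads_w = 700, 400    # claimed web-tier vs Coolthreads draw
    consolidation_ratio = 5                   # ~5 small servers per Coolthreads box

    desktop_saving = 1 - thin_client_w / desktop_w
    web_tier_saving = 1 - coolthreads_w / (web_server_w * consolidation_ratio)

    print(f"Desktop -> thin client:        {desktop_saving:.0%} less power")   # 98%
    print(f"5 web servers -> 1 multi-core: {web_tier_saving:.0%} less power")  # ~89%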

  7. Brian Murray

    "Data centers are rampaging energy hog"

    indeed ... but what does that make desktops?

  8. Matt Bryant Silver badge
    Happy

    RE: Jason Hanna

    We had a three-stage plan to get away from the individual project spends which resulted in a one-box-per-project mentality, where each project was treated as an individual buying exercise (as an example, at one point we had seventy-two laptop/desktop models from three major and four minor vendors!). The first step involved a lot of bitter politics as we took power away from individual project managers, department managers and our purchasing department, and put a lot more responsibility onto our architects. Then, having decided a clean-sweep policy wouldn't fly, as throwing everything away and starting from scratch was too expensive and too risky to the business, we had to draw up standard builds which allowed us to make savings and optimisations in small steps as systems were retired, replaced or added. Finally, we added virtualisation to the mix to further optimise the whole.

    Throughout, we reviewed progress and used the savings data to reinforce our message and protect it from those who wanted their own buying power back. Although electricity savings were monitored, they were relatively minor compared to the savings in other areas, and were seen more as a bonus than a design goal. When we review the builds this year, despite our company having a green policy, power will still be far down the list of design criteria. As possible savings from other areas become thinner due to repeated optimisations, maybe savings from reduced power consumption will become more important and start to sway our design process more.

  9. Steven Knox

    @Valdis Filks

    While I agree with you in principle, there are some discrepancies in your comment, including real-world problems with your solutions and overstatements of the problems. Your summary is a good place to start:

    >> From 500W desktop PC's go to 10W thin clients

    500W desktop PCs? REALLY? All of the desktop systems that we've found for positions which work well with thin clients have had 100-200W power supplies max. The only 500W systems I've even heard about are workstations with processing requirements that make thin-client solutions choke, or home gaming PCs for the suicidally insane. You also forgot to include, in your calculations, the 3-4 servers of 700W or more needed to run all of those desktops. Our real-world experience is that one 700W server can host about 20 10W thin clients (a total of 900W) with no noticeable performance hit. This is still better than 20 low-end desktops (2000W), but it's much less than the 98% savings you imply.

    It's also important to note the security RISK you fail to mention which comes along with thin-client solutions: you now have 20 separate vectors of attack against a single machine. Statistically, this significantly increases both the likelihood of an unpatched vulnerability being exploited and the scope of damage. On the other hand, it can also improve the ease and quality of patch management. Finally, before you jump on the thin-client bandwagon, you have to ensure that your applications really are compatible with thin-client environments IN THE CONFIGURATION YOU NEED. We've run across several "thin-client-compatible" applications that were not compatible enough to actually work in our environment.
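
    Worked through with those numbers (a rough sketch; the 100W per desktop is an assumption consistent with the 2000W total for 20 low-end desktops above):

    # The real-world comparison above, worked through with those figures.
    host_w = 700          # one server hosting the thin-client sessions
    clients = 20          # thin clients per host
    thin_client_w = 10    # draw per thin client
    desktop_w = 100       # assumed low-end desktop draw (20 x 100W = 2000W)

    thin_total = host_w + clients * thin_client_w   # 900 W
    desktop_total = clients * desktop_w             # 2000 W
    saving = 1 - thin_total / desktop_total

    print(f"Thin-client setup: {thin_total} W")
    print(f"Desktop setup:     {desktop_total} W")
    print(f"Actual saving:     {saving:.0%}")       # 55%, well short of the ~98% implied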

    I also find this statement to be an oversimplification:

    >>Virtualisation and hypervisors take extra CPU cycles and are not required to consolidate applications from many systems to one system.

    In a perfect world, with a perfect OS and perfect applications, yes, that's true. But in this world, I personally have found many applications which WILL NOT work well together on the same box. Good virtualization systems allow you to consolidate such applications onto one box while keeping them from biting each other's toes, and while using very few system resources. My company uses all three solutions: dedicated servers for high-processing applications, multiple-application boxes where possible, and virtualization for those apps which aren't nice to each other.

    >> Just by using existing, proven technologies that require no extra training or new technologies, we can solve today's power usage problems.

    No, they don't require "no extra training". Even if it's just explaining to users why they don't have a CD-ROM drive in their thin client, each of the changes you mention does require extra training. And no, they don't solve today's power usage problems completely; at best they reduce them to a manageable level. But you're right about one thing: the technologies are available today, and they not only reduce power usage, but also save companies significant amounts of money. They're not quite the painless silver bullet you imply, but I speak from experience when I say that any company which is not investigating these technologies is costing itself money.

  10. Peter

    I'd comment on this, but then it seems I'd be killing all of you... us:(*

    Where is Dilbert when you need him?

    *Oh, darn. Too late.

  11. Anonymous Coward
    Coat

    If you save $750 a month per rack

    Theoretically you can save 3x this amount, as for a large data centre only about a third of the total energy bill goes to the servers; the rest goes on power distribution and on powering and cooling the DC itself.

    Less requirement at the server = less infrastructure and cooling needed.
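
    Roughly how that multiplier plays out (a sketch assuming the server load really is a third of the total bill, and reusing the ~£780 per rack per month figure quoted earlier):

    # Sketch of the ~3x multiplier claimed above: if servers are only a third
    # of the total bill, every pound saved at the server also avoids roughly
    # two pounds of distribution and cooling overhead.
    it_share_of_bill = 1 / 3                 # servers' share of total energy, per the comment
    overhead_factor = 1 / it_share_of_bill   # ~3x
    server_saving_per_rack = 78.0            # GBP/month: 10% of the ~£780 quoted earlier

    total_saving_per_rack = server_saving_per_rack * overhead_factor
    print(f"Effective saving per rack: ~£{total_saving_per_rack:.0f}/month")  # ~£234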

    Mine's the thermally insulated one, as I like to keep my DC cold.

  12. Henry Cobb
    Flame

    Third world nation power cuts are no joke

    You can ration by the market or ration by government action.

    In India they monkey around with the power so much that around 10% of our electric bill goes on diesel, which we burn in the automobile-sized generator in our parking lot.

    If they would simply raise the electric rate (and compliance level) until they had Californian levels of service (just as an example of another blackout prone area), perhaps the farmers wouldn't be pumping the aquifers dry.

    Given that India and China are pressed for food now, while they have aquifers to empty, just imagine the fun they'll have a few decades from now when they don't.

  13. Steve Mann

    Power Requirements

    If the industrial west would get its collective finger out and establish the solar power satellite network with microwave downlink that people in the know have been saying we should build for thirty-five years, we could drop the price of leccy to fractions of a penny a kilowatt, make it about as clean as it is ever going to be possible to get, and finally get ourselves off needing oil for power generation.

    All that sunshine just being wasted. All that bloody rigamarole with "Carbon Credits".

    Gordon Bennett!

