Nutanix, IBM hug each other in Power pity party

Nutanix and IBM will announce on Tuesday a new relationship that will see Nutanix build hyperconverged systems out of IBM Power servers – its first non-Intel-powered boxes. Details of what will be delivered, and when, have not yet been revealed. But The Register understands Nutanix will bring its hyperconverged stack to Power …

  1. Anonymous Coward

    Single pool?

    > Nutanix may also make it possible to consider x86 and Power as a single pool of resources.

    I'm not entirely sure how this would be possible. Surely any given application is going to be compiled to a specific architecture (x86 or Power). It's not like you can just wrap it up in a VM and migrate it between the two architectures unless you have some kind of emulation layer (as opposed to virtualization).

    This is still going to end up looking like separate pools of resources and, without a pooling capability, is just putting a prettier front-end in front of an architecture that is increasingly losing relevance. I can somewhat understand this move from IBM (trying to eke some more life out of their platform) but from the Nutanix perspective I fail to see how this is anything but a distraction and waste of engineering effort.
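
    To make the "separate pools" point concrete, here is a toy placement sketch in Python. The node names and the scheduling logic are entirely hypothetical (this is not Nutanix's actual scheduler); it just shows that a compiled image can only land on nodes of its own ISA, so spare capacity on the other architecture is invisible to it:

```python
# Hypothetical placement sketch, not Nutanix's real scheduler: even if x86 and
# POWER nodes are presented as one "pool", a compiled workload image can only be
# placed on nodes of the architecture it was built for, so the pool still splits
# along ISA lines in practice.
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    arch: str          # e.g. "x86_64" or "ppc64le"
    free_cores: int

@dataclass
class Workload:
    name: str
    arch: str          # ISA the image was compiled for
    cores: int

def place(wl: Workload, pool: list[Node]) -> Optional[Node]:
    """Return the first node whose ISA matches the workload and has spare cores."""
    for node in pool:
        if node.arch == wl.arch and node.free_cores >= wl.cores:
            return node
    return None

pool = [Node("x86-01", "x86_64", 16), Node("pwr-01", "ppc64le", 32)]
print(place(Workload("legacy-erp", "x86_64", 8)))   # lands on x86-01
print(place(Workload("legacy-erp", "x86_64", 24)))  # None: the 32 spare POWER cores don't help
```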

    1. Anonymous Coward

      Re: Single pool?

      It is probably Nutanix's strategy to get their hands on those Power workloads, which will likely be migrating somewhere in the next few years. They want a clear path to Nutanix with a bridge step... as opposed to everyone just going straight to AWS or GCP.

    2. returnofthemus

      Re: Single pool?

      If it runs on Linux, it runs on POWER https://youtu.be/t3_dn3cawIs?list=PLIz1srf1Wr9LUwe8OUo3VzRqle7Fk0UEN

      Simples ;-)

    3. Anonymous Coward

      Re: Single pool?

      From what I am seeing they are wasting their engineering resources in a mad attempt to do EVERYTHING!

      Backup (competing with Veeam)? We do it!

      Workload Mobility to the cloud (competing with everyone)? We do it!

      Cloud Management? We do it!

      AI? We do it!

      Software Defined Networking (competing with VMware and Cisco)? We do it!

      Hypervisor? We do it!

      Have I missed something, or have they hired a few thousand developers? Chances are all of these are a desperate attempt to find an area where they can make PROFITS, because they are burning cash fast and getting squeezed brutally in their core business. It looks like a race against the clock to find a cash cow FAST, or they are out of business soon.

    4. returnofthemus

      Re: Single pool?

      "Surely any given application is going to be compiled to a specific architecture (x86 or Power)".

      Errr No!

      I guess you must have skipped the Computing 101 class; it's 'Operating Systems' that are compiled to a specific instruction set architecture, NOT applications.

      Historically, the only missing piece of the POWER architecture puzzle was Little Endian support. Today's modern applications have no platform-specific dependencies because they are written in interpreted or scripting languages, hence x86/Linux applications of this type will run 'as is' on POWER.

      95% of x86/Linux applications written in C/C++ (with no platform-specific dependencies) will need only a recompile, with the remaining 5% requiring additional source code changes.
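
      For what it's worth, a minimal illustration of the interpreted-language point, assuming nothing more than a stock Python 3 on both boxes: the identical script runs unmodified on x86_64 and on ppc64le and merely reports different host details, whereas a natively compiled C/C++ binary would need at least the recompile described above.

```python
# Runs unchanged on x86_64 and on little-endian POWER (ppc64le); only the output differs.
import platform
import sys

print("machine:    ", platform.machine())         # "x86_64" on Intel, "ppc64le" on POWER
print("byte order: ", sys.byteorder)              # "little" on both, which is the point of ppc64le
print("interpreter:", platform.python_version())  # same script, no recompile needed
```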

  2. Anonymous Coward

    >>Despite sounding like a breakfast cereal, Nutanix has done very well very fast, but it isn't yet entertained in discussions about core apps inside big enterprises.

    Not sure I buy this statement. They've been harping on their core-apps success in their past earnings calls.

  3. Korev Silver badge
    Joke

    Hyperconverged

    Cloud

    Software-defined storage

    If only they'd mentioned AI then I'd have been a buzzword bingo winner...

    1. baspax

      THEY DID!!

      https://www.geekwire.com/2017/nutanix-adding-ibms-power-processors-data-center-bundles-ai-applications/

    2. Anonymous Coward

      Don't forget 'Internet of Things' and that current darling of the salesmen 'GDPR compliant'.

      Remember all that fuss about Y2K bugs?

      .....some things deserve to just peter out.

  4. elan

    rednut hat

    not such a big deal for nutanix. kvm is already running on power systems. acropolis is more or less a kvm customization plus an adapted distributed file system. it would be a nice niche in the hci market (btw: happy to see non-x86/64 processors like power and sparc alive. monoculture is never a good option).

    the question remains: how to market it, and how to get nutanix into the real enterprise segment?

    1. Anonymous Coward

      Re: rednut hat

      Nutanix is an intermediary step anyway. I think people will go full public cloud instead of trying to make HCI work across the board. Nutanix is essentially "now you can have a data center (sort of) like a Google data center!"... you know who else has data centers exactly like Google's data centers? Google (Cloud Platform), and AWS, etc.

      1. Anonymous Coward

        Re: rednut hat

        It's bound to happen. Public cloud. You literally cannot purchase a new software app or platform that has an on-prem option: Google Apps, Salesforce, Workday, ServiceNow, GAE, Force, etc... every app created post-2000 runs as a SaaS app and only a SaaS app. Anything created in the future will be SaaS. IaaS has a strong value proposition too. No company starting greenfield today would consider buying an on-prem data center... it is all legacy. A matter of when, not if.

  5. Anonymous Coward

    "Power users also see the elasticity and pay-as-you-go models falling from public clouds and want that in their own data centres."

    Yeah, and how is Nutanix going to help them with that? It's not like you can tell Nutanix, or any other hardware provider, four months after buying the boxes that you would like to elastically scale the workload down to zero and get back 86% of the large sum of cash you paid up front. So you can't elastically scale down at all. You can sort of, kind of scale up by turning on cores after you buy the box... but that isn't remotely elastic (you have to place calls, purchase permanent hardware, possibly get a PO for more hardware depending on how much you want to scale up, etc). It's not like Nutanix can scale to whatever amount of resource is needed, with no intervention, because your ecommerce site, or whatever, just had a massive, unexpected three-hour spike in traffic due to some viral demand. I am not familiar with any model in which you can buy, for instance, 200 cores from Nutanix or the like for an hour, pay a hundred bucks, and then turn them off and walk away (rough numbers on that gap below).
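
    Rough numbers to show the size of that gap, using the hundred-bucks-for-200-core-hours figure above; the hardware price per core and the amortisation period are assumptions for illustration only:

```python
# Back-of-the-envelope only. The $0.50/core-hour is implied by "200 cores ... for an
# hour, pay a hundred bucks"; the purchase price per core is an assumed figure.
BURST_CORES = 200
BURST_HOURS = 1
RENTED_PRICE_PER_CORE_HOUR = 0.50   # implied by the figure above
OWNED_PRICE_PER_CORE = 300.0        # assumed hardware cost per core
AMORTISATION_YEARS = 3

rent_the_burst = BURST_CORES * BURST_HOURS * RENTED_PRICE_PER_CORE_HOUR
own_the_peak = BURST_CORES * OWNED_PRICE_PER_CORE / AMORTISATION_YEARS

print(f"rent 200 cores for the spike: ${rent_the_burst:,.2f} in total")
print(f"own 200 extra cores instead:  ${own_the_peak:,.2f} per year, used or not")
```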

    Whenever I hear "cloud" and "in your own data center", I know it is about to make no sense. That is like saying "have an electrical utility plant in your own house."

    1. nijam Silver badge

      > That is like saying "have an electrical utility plant in your own house."

      There are circumstances and locations (not many, I concede) where solar panels or a small wind turbine make sense for a house, of course.

      1. mathew42

        In Australia it appears that many disagree on when it makes sense.

      2. Anonymous Coward

        Yeah, the electricity example doesn't hold up perfectly to the cloud situation. Basic economics are the real reason for public cloud though:

        1) Economies of scale - Generally speaking, when you do things at extremely high volume (like AWS or Google do in the cloud), you can do it much less expensively, per unit, than someone doing it on a small scale.

        2) Comparative advantage through specialized skills - Google, for instance, can afford to hire 700 PhDs to work on security and invent their own everything from an infrastructure perspective, so they are not paying a fortune to 100 middleman providers with huge margins. They can invest in automation technologies to make that infrastructure really fast and reliable. That is not possible for even a large business in a non-tech industry.

        3) Cooperative resource use - When you are building on prem, you need to scale to peak (whatever the three-year peak is, even if that peak lasts for only a few hours every year or two). When you use public cloud, you are basically pooling infrastructure cooperatively with thousands of other people and businesses and paying for actual utilization (a toy simulation of the effect is below).
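
        A toy simulation of point 3, with invented demand figures: when every tenant sizes for its own peak, the total capacity needed is far larger than one shared pool sized for the peak of the combined load, because the individual peaks rarely coincide.

```python
# Synthetic demand only (all numbers invented): compare the capacity needed when
# every tenant builds for its own peak with what a single shared pool would need.
import random

random.seed(1)
TENANTS, HOURS = 50, 24 * 365

# Hourly demand per tenant: a modest baseline plus a rare, sharp spike.
demand = [
    [random.randint(5, 20) + (random.randint(100, 200) if random.random() < 0.001 else 0)
     for _ in range(HOURS)]
    for _ in range(TENANTS)
]

sum_of_peaks = sum(max(tenant) for tenant in demand)                # each silo sized for its own peak
peak_of_sum = max(sum(t[h] for t in demand) for h in range(HOURS))  # one pool sized for the combined peak

print("capacity, one silo per tenant:", sum_of_peaks)
print("capacity, shared pool:        ", peak_of_sum)
```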

        A better analogy would be telco. Everyone is, and always has been, in the "network cloud" through some ISP, like AT&T, Verizon, Orange, etc. No one is dragging their own fiber around the world. The same reasons people use AT&T or whoever as their ISP and telco provider apply to cloud computing... it's just that people are used to the telco "cloud" because that is the way things have always worked, whereas using it for compute and storage seems different... but it isn't really. If network connections had been fast and reliable in the 70s and 80s, no one would ever have built a data center in the first place... and the question "will this be on prem or in the cloud?" would be as odd a question as "do you plan to use a telco to connect your New York and London offices, or will you be having a submarine drag a fiber optic cable across the Atlantic Ocean?"

    2. elan

      well, if you define cloud as a big dc with a lot of hw etc - agreed. but that's, at least in my experience, not the case. companies are asking for the XaaS concept. they want to keep their data in-house, keep their hands on... imagine having a phone number... please stay on the line... and your trucks are piling up... hey, you will need a pint or two in the evening.

      1. Anonymous Coward

        "they want to keep their data inhouse, keep their hands on....imagine having a phone number...please stay in the line....and your trucks are piling up...hey, you will need a pint or two in the evening."

        Isn't that kind of exactly the way it works on prem? If you have some unknown issue with your, for instance, EMC storage array... or some unknown issue with any aspect in on prem infrastructure, you're on the phone with EMC or whoever trying to figure out what is going on. The difference with public cloud is that instead of having to place calls to EMC, Cisco, VMware, Oracle, SAP, Red Hat, etc (up the entire stack to figure out where the issue is) when you have an issue, you have a single provider or at least fewer... and, in theory, as they control everything there should be far fewer issues and those issues should be fixed fast. None of this "your EMC storage array looks good according to our log data and diagnostics... why don't you call Cisco, this is probably a Cisco thing... or maybe not, who knows... but it's not an EMC problem."

        The need to keep data in house is generally an excuse from people who just don't want to change anything. There is no reason for it in most situations. A cloud provider is likely to have more security and data-governance resources than the average company, or even an above-average company. They have huge incentives not to allow that data to be exposed: loss of reputation for the entire company, lawsuits, etc. Most people are already putting their most critical data out in the cloud, generally via Salesforce... that's every customer name and contact, the entire sales pipeline, your price points for everything, product plans, potentially PCI data and orders. That is the most critical data for most companies... and the data which would be most interesting to hackers.

  6. Anonymous Coward

    Nutanix mistake

    Nutanix missteps:

    1. Didn't develop an entry-level product suitable for SMB - size/performance

    2. Didn't develop a public product, suitable for public purse consumption model

    3. Didn't bond more closely with Dell early on - way too much friction.

    4. Chose Supermicro as their hardware platform. Should have done a hardware OEM with Dell or HPE.

    5. No move to a software-only model, with option for hardware.

    6. Customer frustrations with the licensing model, and the complexity of upgrades and additions.

    7. Nutanix is liked because customers are comparing it to their prior, non-HCI platform. How do they retain customers for wave 2?

    Looking forward...

    They have not addressed (1) and (2). They are trying to cover (3) with better channel sales, Lenovo partnerships, etc.

    (4) will continue to bite them until they stop selling as an appliance.

    (5) They have announced software-only, which is a good step forward and may address (1) and (2).

    (6) remains unaddressed, unless you wish to pick a fight with your Nutanix rep.

    (7) Acropolis is part of this argument, but (6) will impact the ever more knowledgeable customer if they start looking to Dell-EMC, HPE and others for alternatives.

    Excellent product and a leader in HCI, but they will struggle to grow beyond the large-enterprise model. They don't scale down to a mini vSAN-like product for retail stores, remote distribution locations, office needs, or edge applications.

    The sales team is still opportunity- and deal-based, and hasn't been strategic in partnering to do proper business development.

    I sincerely hope for Nutanix's sake that they figure this all out before VMware and Microsoft catch up.

    This is just like the browser wars of old or, more recently, the virtualization/hypervisor platform wars.

    AC because I work with all of the above.
