Approaches to building the enterprise cloud

Data center technologies are constantly evolving, displacing their predecessors. Data center storage, and Hyperconverged Infrastructure (HCI) in particular, makes for a good example. HCI has been around for almost a decade. We're well along the hype cycle. We've seen outlandish marketing claims, watched vendors IPO and be …

COMMENTS

  1. Mark 110

    I hate Agile as well

    And I'm not sure it's the correct word in this context. I have little experience with HCI, but it appears to give you the ability to scale in smaller chunks. That's all.

    So instead of spending £¼ million on a major SAN capacity upgrade (giving you the ability to scale to many terabytes of new storage) because the latest project needs 100GB of storage you haven't quite got... you can spend £50k instead. That's not agile, just better from a scalability point of view.

    Does HCI actually give you the ability to go to a web interface, spec your server and provision it without ever going near a techie, like a cloud platform does?

    1. Anonymous Coward
      Anonymous Coward

      Re: I hate Agile as well

      In and of itself it doesn't, and frankly the "traditional" VMware-and-SAN architecture gives a much, much better experience in that respect. Tools are better, functionality is more extensive, skilled people are far easier to find.

      The scalability side isn't just smaller scaling units (i.e. one cheap machine vs one massive array), it's that the scaling tends to be in all dimensions simultaneously. You buy compute, storage and bandwidth as a unit because, well, frankly, they are a single unit. This means they should, in principle, scale together. It also means that, again in principle, you can reduce those 3 functions down to 1. No more running a procurement on networks, compute and storage to get some apps going. It's all one "hyper-converged" system.

      It's many of the same ideas that underpin the Hadoop ecosystem, just applied to more general purpose workloads. Frankly none of the tools are quite there yet, and I suspect whatever becomes the market leader will come out of the K8s/Docker ecosystem rather than some hot new startup with proprietary tech.

      The big elephant in the room with "HCI" is that they're trying to present a similar experience to the entirely abstract VMware-on-SAN experience, where broadly every machine is equal because you're hitting the same storage array with the same random profile from the same compute array over the same (probably highly constrained) network. HCI kit is going to end up fragmented and specialised (wot, GPUs on all your machines?) so the abstraction becomes leaky, and this makes scheduling and provisioning much more complicated. No one's nailed that yet.

    2. Trevor_Pott Gold badge

      Re: I hate Agile as well

      "Does HCI actually give you the ability to go to a web interface, spec your server and provision it without ever going near a techie like a cloud platform does?"

      Depends on the platform, but yes, several HCI solutions do exactly this. Nerds optional.

    3. Anonymous Coward
      Anonymous Coward

      Re: I hate Agile as well

      Apologies, but you are completely wrong with regard to HCI pricing. With free/open source Ceph software-defined storage, growing the pool costs you nothing more than the additional disk(s) or server(s). Assuming you have the replication factor set to 3, then to grow your storage by 2 TB you just add three 2 TB disks. Better still, you can safely power servers down while adding capacity; Ceph, being clustered storage, handles this transparently. Typically you power down server 1, add a disk, power it back on, then power down server 2, add a disk, power it on, and so on. Throughout these operations the storage cluster and all OpenStack workloads run as usual. And don't tell me you'd spend £50k on three disks.
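
      In practice the whole expansion is a handful of stock Ceph commands. A rough sketch in Python of the rolling procedure above (hostnames and device paths are made up, and it assumes a ceph-volume managed cluster with the admin keyring on the box running it):

      import subprocess
      import time

      # One new 2 TB disk per host; with the pool size (replica count) at 3,
      # that adds roughly 2 TB of usable capacity. Hostnames/devices are made up.
      NEW_DISKS = {
          "ceph-node-1": "/dev/sdd",
          "ceph-node-2": "/dev/sdd",
          "ceph-node-3": "/dev/sdd",
      }

      def ceph(*args):
          # Run a ceph CLI command from this (admin) host.
          subprocess.run(["ceph", *args], check=True)

      def wait_for_health_ok():
          # Poll until the cluster has settled before touching the next host.
          while "HEALTH_OK" not in subprocess.run(
              ["ceph", "health"], capture_output=True, text=True
          ).stdout:
              time.sleep(30)

      ceph("osd", "set", "noout")  # don't rebalance while hosts are briefly down

      for host, device in NEW_DISKS.items():
          input(f"Power down {host}, fit the disk, boot it back up, then press Enter ")
          # Create and activate an OSD on the new device, on that host.
          subprocess.run(["ssh", host, f"ceph-volume lvm create --data {device}"], check=True)
          wait_for_health_ok()

      ceph("osd", "unset", "noout")  # back to normal recovery behaviour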

      1. Trevor_Pott Gold badge

        Re: I hate Agile as well

        Yeah, I wasn't going to get into pricing with these sorts. Open source stuff like Ceph or LizardFS can handle the HCI storage layer, with OpenNebula and many others providing great management UIs. Then we go up through the various smaller contenders like Maxta, Yottabyte and Nodeweaver, to the midsized ones like Hypergrid or Scale, to the big heavies like SimpliVity, VMware or Nutanix.

        The price range varies wildly, and even Nutanix have entry-level gear that isn't that badly priced. HCI isn't expensive. It certainly isn't as expensive as ancient three-tier architecture. That said...

        "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

        --Upton Sinclair

      2. Mark 110

        Re: I hate Agile as well

        "Assuming you have defined replication ratio set to 3 in order to grow your storage size by 2 TB you just add 3 disks 2 TB each."

        I was thinking of scenarios where there's no space for any more disks. I have no idea how much it costs to make space for more disks on an HCI platform.

    4. mr_souter_Working

      Re: I hate Agile as well

      "Does HCI actually give you the ability to go to a web interface, spec your server and provision it without ever going near a techie like a cloud platform does?"

      That was possible 7 or 8 years ago - just using Microsoft VMM - I know because it's exactly what I did for the small company I worked for. Couldn't get the developers to use it at first, and it wasn't as fast as these days - but they could request and provision servers themselves without asking anyone (they were allocated a certain amount of resources, and once that was used up, they needed to start deleting old servers before creating any new ones).

      Currently working for one of the big names, and getting any sort of server is a nightmare of red tape and excuses. Things go backwards: we sell these sorts of solutions to our customers, but can't get them working internally.

      :(

  2. big_D Silver badge

    Interchangeable hard drives

    When I started, we had VAX minis with interchangeable hard drives. You would stick the platters into a drive on the machine as they were needed and you could carry them between machines. A damned expensive way of transferring data between machines.

    Generally, most of the data was on tape - I worked for a seismic surveyor and they would take blast readings out in the North Sea, the Atlantic or the middle of a desert and ship carton-loads of tapes back to us for processing. I think the company had a warehouse with over half a million tapes in it when I left.

    They moved it over to cassettes at some point (DAT, I think).

    We have come a long way. But essentially, whether it is RAID or SAN, that hasn't changed; it's just that "you" as a cloud consumer don't have to worry about it any more. Some other DF operator has to worry about the state of the storage, and you just have to hope he knows what he is doing...

  3. Alan Sharkey
    Happy

    HCI reduces disruption

    In the old days, steady state was existing workloads only. Disruption 1 was caused when users (spit!) requested new resources (another server, more disk, etc.). Disruption 2 was when those resources didn't exist (not enough capacity) and more had to be purchased and integrated.

    HCI reduces the two-stage disruption down to one. Steady state is now existing workloads AND provisioning of new resources. Disruption 1 is when capacity is full and more has to be purchased and provisioned.

    Which leads us to an interesting proposition. Could we reduce or remove that last disruption, so that steady state is everything? I think so. Public cloud (Amazon, Azure etc.) sort of does it now as far as the public is concerned. What is needed is that ability brought to private infrastructure.

    Just my thoughts, coming to the end of my (illustrious) career. :)

    Alan

  4. JS-W

    Lock in?

    When I see HCI, I also see vendor lock-in. Storage, compute, network and software all provided by a single vendor is great for integration, but must surely mean less flexibility moving forward.

    It seems like a virtual private cloud, but with the downside of owning the infrastructure.

    1. Anonymous Coward
      Anonymous Coward

      Re: Lock in?

      That's why you do the sensible thing and run OpenStack on the cheapest kit you can procure. Pluggable compute, storage and networking abstractions to fit most use cases. You'll probably still be locked into a vendor, but that vendor will probably be Red Hat, the licenses will be reasonable and there are viable alternatives.
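
      And the self-service provisioning asked about further up falls out of the same stack: once OpenStack is running, a user can spec and launch a server through Horizon or with a few lines against the API. A minimal sketch with the openstacksdk Python client (the cloud, image, flavour and network names below are placeholders):

      import openstack  # pip install openstacksdk

      # Credentials and region come from clouds.yaml; "mycloud" is a placeholder.
      conn = openstack.connect(cloud="mycloud")

      # Look up a flavour, image and network by name (all placeholder names).
      flavor = conn.compute.find_flavor("m1.small")
      image = conn.image.find_image("ubuntu-22.04")
      network = conn.network.find_network("private")

      # Spec the server and wait until it is ACTIVE; no techie required.
      server = conn.compute.create_server(
          name="dev-box-01",
          flavor_id=flavor.id,
          image_id=image.id,
          networks=[{"uuid": network.id}],
      )
      server = conn.compute.wait_for_server(server)
      print(server.id, server.status)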

      On-prem infrastructure, for organisations where this kind of stuff is worthwhile, is enormously cheaper than cloud infrastructure.

      1. Anonymous Coward
        Anonymous Coward

        Re: Lock in?

        As much as I want to love open source, especially Python-based, have you actually tried to run OpenStack in any larger heterogeneous setting? (OpenStack works OK if you just run a few really large workloads.)

        Have you tried to build something non-trivial using OpenStack Swift objects? Tried to work with the vendor "integrations" to SDN switches in production?

        My conclusion is that my object storage needs are met in the public cloud with CDN acceleration. My private cloud needs are better met by VMware. That said, given VMware's licence pricing, moving is always on the table.

This topic is closed for new posts.