Part 3: Docker vs hypervisor in tech tussle SMACKDOWN

If you're willing to start from scratch, give up high availability and the ability to run multiple operating systems on a single server, and accept all the other tradeoffs, then Docker really can't be beaten. You are going to cram more workloads into a given piece of hardware with Docker than with a hypervisor, full stop. From the …

  1. dan1980

    'Hypervisors help us deal with the realities of heterogeneous environments and the fact that not everything is a web-based workload, or ready to be recoded for "scale".'

    HERESY!!!! Burn [him], burn [him]!!!

    1. James Dore
      Joke

      Yeah...

      ... is it Web Scale?

  2. wheelybird

    I think you've missed the point of Docker - it's not meant to run simultaneous operating systems on the same server. It's not virtualisation - it's an encapsulation and deployment solution. You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture.
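
    (In Python Docker SDK terms, the encapsulate-and-deploy workflow is roughly the sketch below - the image name, tag and port are invented purely for illustration:)

        import docker

        client = docker.from_env()  # talk to the local Docker daemon

        # Package the application (Dockerfile in the current directory) into an image...
        image, build_log = client.images.build(path=".", tag="myapp:1.0")

        # ...and deploy it as an isolated process that shares the host kernel.
        # No guest OS is booted anywhere, which is why this isn't virtualisation.
        container = client.containers.run(
            "myapp:1.0",
            detach=True,
            ports={"8080/tcp": 8080},  # publish the app's port on the host
        )
        print(container.short_id, container.status)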

    Your article is essentially pointless.

    1. Trevor_Pott Gold badge

      "You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture."

      Which is exactly what I said.

      Containers let you get more workloads per node, but they don't of themselves give you a means to provide high availability for those workloads. You either use production infrastructure (hypervisors or NonStop servers) which provide you redundancy, or you redesign your applications for it.

      Thus containers are not competition for a hypervisor. They are not an "or" product. They are an "and" product.

      Considering the hype and marketing of the Docker religious, that absolutely does have to be said to the world, spelled out explicitly and repeatedly.

      1. Anonymous Coward

        Fewer OS instances

        The win for containers IMHO is fewer OS instances. It matters not whether the OS running Docker is a physical server or a VM, either way you get the benefits of application isolation without dealing with more OS instances. Even if you didn't pay an extra penny to license/support these additional OS instances, there are still costs associated with installing/maintaining them.

        1. Trevor_Pott Gold badge

          Re: Fewer OS instances

          We still don't know how licensing will shake out. I suspect every Docker container will require an OS license from Microsoft. Otherwise, I agree with you 100% here, sir.

          1. Anonymous Coward

            Re: Fewer OS instances

            How would Microsoft justify additional licenses for containers when there is only the single OS running? I would think they'd just license the container software; since only Microsoft can add the required kernel hooks, they don't have to worry about third-party competition...

            1. Trevor_Pott Gold badge

              Re: Fewer OS instances

              Let's have that conversation after they've finished integrating Docker into the OS and decided on licensing, eh? I've heard so many different things out of Redmond that I can't put credence to any of it, tbh.

          2. P. Lee

            Re: Fewer OS instances

            If containers are app-deployment/scale-out features, it makes sense to build them into your OS. It would be hard to charge an OS fee on top of that for that function, unless it's in a 3rd-party hosted environment.

            I suppose if MS built them into Hyper-V under the label "thin VM" they might get away with it. To my mind, this functionality should be built into the OS - it's all the things I was taught an OS should do: manage resources and applications.

            1. Trevor_Pott Gold badge

              Re: Fewer OS instances

              When, in the history of our industry, has Microsoft licensing been connected to rational thought?

              1. cupperty

                Re: Fewer OS instances

                Fewer OS instances = more containers to bring down / move when that OS needs patching / upgrading

              2. dan1980

                Re: Fewer OS instances

                @Trevor

                "When, in the history of our industry, has Microsoft licensing been connected to rational thought?"

                Well, Office 2007 and Office 2010 allowed you to install a copy on a desktop and another on a laptop provided they weren't both in use at once. This was only for Home & Business and Professional but this made sense because the Home & Student edition came with THREE licenses that could be installed on any computers - desktop or laptop.

                That said, this licensing provision only applied when one purchased Office with the CD/DVD - for some reason, if you downloaded the software and purchased one of those retail boxes with just the license card in it (which would also generally be the case if Office came pre-installed) then you didn't get the same dual-install rights. Never could figure that one out considering the card-only version was a proper boxed product you could buy off the (physical) shelf and the price difference was pretty small.

                So that part didn't make sense but the rest did. At least until 2013/365 came around . . .

                1. Trevor_Pott Gold badge

                  Re: Fewer OS instances

                  @dan1980 except you leave out the part of the Office 2007/2010 licensing that pertains to remote/VDI usage whilst attempting to sing the praises of Microsoft's licensing for that product. Though you at least acknowledge the media-based madness of the time.

                  Microsoft doesn't do "rational".

            2. Canecutter

              Re: Fewer OS instances

              Oh, like you were able to find it in OpenVMS back in 1998?

              You're right. Those facilities really should be built into an operating system. To do it, though, they will actually need to start recognizing separate concepts and referring to them by name.

              In those days, the VMS Process was able to achieve the desired isolation;

              We used the VMS Job to achieve further isolation and grouping of related components or applications;

              We used VMS Clusters to achieve scale-out and redundancy;

              We used VMS Galaxy to achieve multiple operating system versions (albeit of VMS) on the same physical host.

              Yea, surely those who fail to learn the lessons of VMS are doomed to reinvent it (expensively).

              Too bad the operating system has, through its history, been saddled with such awful owners.

              Nonetheless, I am all for what the likes of Docker and even VMware are doing.

              1. Anonymous Coward

                Re: Fewer OS instances

                "Yea, surely those who fail to learn the lessons of VMS are doomed to reinvent it (expensively)."

                Wise words from Canecutter, and perhaps not just of historic interest.

                For the record, VMS (or perhaps we should call it OpenVMS) is under new management: by agreement with current owners/neglectors of VMS (HP), a new-but-old company called VMS Software Inc have the rights to develop new versions of VMS, initially for whatever remains of the lifetime of IA64, and also in future for AMD64.

                VMS Software Inc are a US-based company, based down the road from where much of the original VMS development went on, and have (re)employed many of the original and/or better known VMS architects and developers.

                http://www.vmssoftware.com/

      2. dan1980

        "Containers let you get more workloads per node, but they don't of themselves give you a means to provide high availability for those workloads."

        Just like one might run a multi-master DB on physical servers as the application is taking care of the redundancy. Of course, you still have to actually CODE the bloody thing to replicate in a manner that works best for your application so as to avoid any problems - like making sure you use auto-increment keys where appropriate.

  3. Gotno iShit Wantno iShit

    I'm still confused about your quote that you stood by in the comments to part #2

    "I might write it for Docker, once Docker has things like FT, HA and vMotion, but I'm honestly not sure why I'd bother, Docker seems like more work than AWS and doesn't offer a fraction of the flexibility you get when using a proper hypervisor."

    Why should containers do this, surely that is the job of the hypervisor? Indeed you state in this article that containers will add to and not replace established technology and will run perfectly inside the hypervisor. We all know what happens to a promising new technology when it tries to be all things to all men. Is it the cost of requiring both that keeps you from using containers 'till the above condition is met?

    1. Trevor_Pott Gold badge

      Good questions. I will do my best to answer.

      "Why should containers do this, surely that is the job of the hypervisor?"

      Containers shouldn't do this. They will likely try anyways, as a great many of those who are invested in containers and adjacent technologies want to see them replace hypervisors.

      The argument goes that containers are so much more efficient than hypervisors that we should do away with hypervisors altogether and use containers for everything. Based on that, we'll either have to throw all our workloads away and recode everything (not bloody likely, especially since there are any number of workloads that can't be "coded for redundancy" in the container/cloud sense) or containers will have to evolve to add these technologies.

      As it stands now, I know of a number of startups, still in stealth, working to bring hypervisor-like technologies (HA, vMotion, etc) to containers. We are at the beginning of mass market container adoption, not the end. Thus the technologies the mass market enjoys for its current workloads will have to be recreated for containers, just as they have been for every technology iteration before this.

      Does this mean it's rational or reasonable to do so? I argue no. If I run 4 VMs on a node and each node contains 100 Docker workloads, I am giving up a small amount of overhead in order to virtualise those 4 Docker OSes, but gaining redundancy for them in exchange. Meanwhile, I can load up that server to the gills and keep all its resources pinned without "wasting" much because I'm using containers.

      To me, it makes perfect sense to have a few "fat" VMs full of containers. Resilient infrastructure, high utilization. But this is considered outright heresy by many of the faithful, as - to them at least - containers are about grinding out every last fraction of a percentage point of efficiency from their hardware.
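
      (A minimal sketch of that, using the Python Docker SDK from inside one of those "fat" VMs - the image name and resource limits are invented for illustration; the point is simply that each workload gets a bounded slice of the VM while the hypervisor underneath keeps the VM itself resilient:)

          import docker

          client = docker.from_env()  # run this inside one of the "fat" VMs

          # Pack the VM with containers, each capped to a modest resource slice.
          # HA/vMotion for the VM itself still comes from the hypervisor below.
          for i in range(100):
              client.containers.run(
                  "myapp:1.0",            # hypothetical application image
                  name=f"worker-{i}",
                  detach=True,
                  mem_limit="256m",       # per-container RAM cap
                  nano_cpus=250_000_000,  # roughly a quarter of one CPU core
              )

          print(len(client.containers.list()), "containers running in this VM")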

      "Is it the cost of requiring both that keeps you from using containers 'till the above condition is met?"

      No, it is the pointlessness of using both that keeps me from using containers. Right now, today, most of my workloads are legacy workloads. They don't convert into these new age "apps" that "scale out", as per Docker/public cloud style setups. If I want redundancy, I need a hypervisor underneath.

      So, I could do what I talked about above and put my workloads in containers and then put the containers in a hypervisor. That would increase the efficiency of about 50% of my workloads, ultimately dropping the need for an average of two physical servers per rack where a rack contains about 20 servers.

      That's a not insignificant savings, so why not jump all over this?

      1) Even with the workloads that can be mooshed into containers it will take retooling to get them there. It is rational and logical to move them into containers, but that is a migration process akin to going XP --> Windows 10. It takes time, and is best done along the knife's edge of required updates or major changes, rather than as a make-work project on its own.

      2) If I start using containers I need to teach my virtualisation team how to use these containers. That's more than just class time; it takes some hands-on time in the lab and the chance to screw it up some. That is scheduled, but I'm not going to adopt anything in production until I know that I can be hit by a bus and the rest of the team can carry on without me.

      3) Politics. Part 4 of this series will talk about the politics of Docker. Not to give anything away, but...the politics of containerization is far from settled. I don't want to be the guy who builds an entire infrastructure on the equivalent of "Microsoft Virtual Server 2005" only to have all that effort made worthless a year or two later. Been there, done that.

      4) 2 servers out of 20 isn't world-changing savings for me. Oh, that's Big Money if you're Rackspace, but at the scale of an SMB that only has a few racks worth of gear, there's an argument to be made for just eating the extra hardware cost in order to defer additional complexity for a year or two.

      Really, in my eyes, it's Docker versus the cloud...sometimes in the cloud.

      If I was building a brand new website today, I would have a really long think about Docker. Do I use Docker, Azure, AWS, Google or someone like Apprenda?

      The choices are likely to be informed not by the technical differentiators between these platforms, but by business realities ranging from Freedom of Information and Privacy regulations and marketing success around Data Sovereignty to the cost and availability of managed workloads.

      Do I run my workload in one of the NSA's playground clouds, pick something regional, or light it up on my own infrastructure? Is the particular set of applications I am looking at deploying into my Docker containers available from a source I trust, and likely to be updated regularly, so that I can just task developers to that application and not have to attach an ops guy?

      New applications make good sense to deploy into Docker containers. And Docker containers in the hands of a good cloud provider will have a nice little "app store" of applications to choose from.

      But if I am lighting them up in the public cloud, do I really care if it's in a Docker container? Those cloud providers have stores of stuff for me to pick from in VM form as well. And I don't care if what I am running is "more efficient" when running on someone else's hardware; that's their problem, not mine.

      I'm not against using Docker in the public cloud, but I see no incentive to choose it over more "traditional" Platform as a Service offerings either. If for whatever reason we decide the public cloud is the way to go, I'll probably just leave the decision "Docker/no Docker" up to the developers. The ops guys won't really have to be involved, so it's kinda their preference. I really don't care overmuch.

      So from a pragmatic standpoint I really only care about Docker if it's going to run on my own hardware, either as part of my own private cloud, or as part of a hybrid cloud strategy. As we've seen, there are layers of decision-making to go through before we even arrive at the conclusion that a given new workload is going to live on my in-house infrastructure. But let's assume for a moment we've made those choices, and the new workload is running at home.

      This is where we loop back to the top and start talking about inertia.

      All my workloads are on my own private cloud already. They're doing their thing. If I don't poke them, they'll do their thing for the next five years without giving me much grief. My existing infrastructure is hypervisor-based. My ops guys are hypervisor-based.

      If I simply accept that - if I give in to the laziness and inertia of "using what I know and what I have to hand" - then my new applications require no special sauce whatsoever. I can let the hypervisor do all the work and just write an app, plop it in its own operating system, and let the ops guys handle it. What's one more app wrapped in one more OS?

      Change for change's sake offers poor return on investment. So for me to move to Docker there has to be incentive. Right now, today, at the scale I operate, the ability to power down 8 servers isn't a huge motivation. I could pay for the electricity required to light those servers up by writing 4 additional articles a year.

      Two years from now, I may have a dozen applications in the cloud, all coded for this "scale out" thing. I may have gotten rid of one or two legacy applications in my own datacenter and replaced them with cloudy apps. Five years from now...who knows?

      It would be really convenient for those new applications to be written to be Docker compatible, scale-out affairs that provided their HA via the design of the application rather than the infrastructure. But I don't know for sure that Docker will be the container company that wins.

      For that matter, the hypervisor/cloud companies could see Docker as a threat in the next two years, declare amnesty and agree to a common virtual disk format.

      Docker offers a means to make my apps more-or-less portable. Ish. As long as there isn't too big a difference between the underlying systems, they'll move from this server to that one, from private cloud to public. If I kept the OSes at the same patching levels on both sides, I could move things back and forth...though not in an HA or vmotion fashion. That has some appeal.
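
      (The "portable. Ish." part, sketched with the Python Docker SDK - save the image on one host, load it on the other, start it again. The second host's address and the image tag are invented, and as noted this is a cold move, nothing like vMotion:)

          import docker

          src = docker.from_env()                                      # this server
          dst = docker.DockerClient(base_url="tcp://other-host:2375")  # hypothetical second host

          # Export the image from one side...
          with open("myapp.tar", "wb") as tarball:
              for chunk in src.images.get("myapp:1.0").save():
                  tarball.write(chunk)

          # ...import it on the other side and recreate the container there.
          # The workload is stopped and restarted; nothing is live-migrated.
          with open("myapp.tar", "rb") as tarball:
              loaded = dst.images.load(tarball.read())
          dst.containers.run(loaded[0].id, detach=True)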

      But a common virtual disk format would allow me to move VMs between hypervisors and from any hypervisor to any cloud. Were this to happen, I'd really lose most of my incentive to use Docker. At least at the scale I operate.

      TL;DR

      All of the above is a really roundabout way of saying this:

      Docker is cool beans for big companies looking to make lots of workloads that require identical (or at least similar) operating environments. (See: scale-out web farms like Netflix.)

      Hypervisors are just more useful to smaller businesses.

      I'm way more likely to care about a technology that lets me easily move my workloads from server to server and from private cloud to public cloud (and back) than I am one that will let me get a few extra % efficiency out of a server. Docker could do this one day. So could hypervisors. Neither really do it today.

      Hope that helps some.

      1. Michael Wojcik Silver badge

        "Hypervisors are just more useful to smaller businesses."

        I agree, broadly, but containers may also be very useful to smaller businesses for deploying containerized applications - whether third-party or in-house. Putting individual applications in containers can simplify deployment and reduce the chances of unexpected interactions. Those containers needn't replace hypervisors (as you point out) nor non-containerized applications; they'll happily run in VMs hosted by the former and alongside the latter.
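
        (A trivial illustration of the "unexpected interactions" point, using the Python Docker SDK: two applications that would otherwise fight over a system-wide runtime each carry their own, side by side on one OS instance. The images and versions are arbitrary examples:)

            import docker

            client = docker.from_env()

            # Each container brings its own runtime; neither touches the host's.
            legacy = client.containers.run(
                "python:2.7",
                'python -c "import sys; print(sys.version)"',
                remove=True,
            )
            modern = client.containers.run(
                "python:3.4",
                'python -c "import sys; print(sys.version)"',
                remove=True,
            )
            print(legacy.decode().strip())  # the 2.7 interpreter's version string
            print(modern.decode().strip())  # the 3.4 interpreter's version string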

        1. Trevor_Pott Gold badge

          Agree 100%. Containers are useful to small businesses. But they can't give up the benefits of hypervisors either. They'll be deploying containers inside VMs almost exclusively. Best of both worlds!

          Only those who are dedicated at a religious level will be deploying to metal. They need/want every erg of efficiency possible. SMBs aren't in it for the efficiency; they need ease of use way more than efficiency.

  4. Warm Braw

    Not disruptive?

    I beg to differ.

    Virtual Machines are only a good solution for development environments (where you need more flexibility than you have space/hardware) and for certain kinds of legacy/migration environments.

    They're significantly sub-optimal for everything else, but they're widely deployed because there's nothing better that's widely-available.

    Containers (as a technology, not talking specific implementations) are always going to be more efficient than VMs and it's much easier to tune a workload in one place rather than two (hypervisor and guest VM).

    If you want to do "elastic" computing, then for most future workloads, containers are going to be the way to do it. Same goes for installing pre-packaged functional units (database, mail server, etc) on to your corporate servers.

    VMs aren't going away, but their role is going to be significantly reduced. And if that isn't disruptive for outfits like VMware, then I'll eat my Fedora.

    1. Trevor_Pott Gold badge

      Re: Not disruptive?

      Some problems with your viewpoint:

      1) Not everyone values efficiency over ease of use or capability.

      2) You give up a lot of ease of use and capability in order to get the efficiency gains of containers.

      3) Legacy workloads will take decades to go away.

      4) Many/most legacy workloads, as well as a significant portion of new workloads (for at least the next decade of coding) will not have application-based redundancy or high-availability. They'll rely on a hypervisor to provide it.

      What you say only makes sense if you assume everyone is going to move to public-cloud style scale-out workloads. We're not all Facebook/Google/Netflix. Industry-specific applications don't make that jump well. Ancient point-of-sale apps, CRMs, ERPs, LOBs, OLTPs, etc won't make that jump...and migration is crazy expensive.

      That's if you can convince a company that is absolutely dependent on a 30 year old POS application whose every quirk they know by heart that they should ditch it and migrate to a new one. Because...Docker?

      There are 17000 enterprises in the world. Maybe a few hundred thousand government agencies that could be considered as large as one. There are over a billion SMBs in the world, and they're not going "web scale" with their applications any time soon.

      Hypervisors displaced metal because they offered immediate benefit without being too disruptive: you didn't have to recode applications for them, you didn't have to really make huge changes of any kind. As infrastructure got denser, datacenter designs changed, but that was dealt with as part of the regular refresh cycle.

      Docker, like public cloud PaaS scale-out apps, requires burning down what you have and restarting. Maybe one day, 30 years from now, containers will have displaced hypervisors. If so, I will bet my bottom dollar that the containers of 30 years from now look a hell of a lot more like a hybrid between the containers of today and the hypervisors of today than just a straight-up continuation of the current container design.

      1. This post has been deleted by its author

      2. Warm Braw

        Re: Not disruptive?

        Nothing changes overnight and, as I said, VMs will remain very useful in migration.

        Most of the SMBs I know aren't running VMs even now, they run their applications on a miscellany of elderly hardware held together by good fortune and the occasional visit of the part-time finance director's second cousin. They might benefit from consolidating their systems onto a couple of mutually-redundant servers running hypervisors, but they'd have to fork out for new hardware all at once and find someone with the expertise to set it up and maintain it. So it tends not to happen. And these are the people who will have "legacy" applications forever more.

        The SMBs that have embraced VMs, uniform server platforms and storage systems (and actually have a "regular refresh cycle") are the minority - clueful and resourced. And as soon as containerisation makes it properly to Windows, those people will be taking it up in their droves, because there's no point running multiple OSs if you don't have to - even just from a licensing point of view.

        Containers aren't just a packaging technology - they depend on the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor. And while Docker may have a little way to go (but I think 30 months rather than 30 years will see some big changes), I think you'd have a hard time persuading the people on non-x86 hardware that their WPARs and Zones are manifestly harder to work with than VM solutions.

        Even IBM praises the benefits of WPARs (containers) over LPARs (hypervisor) in the majority of use cases, even though it supports both and the latter has rather more hardware support than the typical x86 VM. I can't really improve on their reasoning:

        * Better resource utilisation (one system image)

        * Easier to maintain (one OS to patch)

        * Easier to archive (one system image)

        * Better granularity of resource management (CPU, RAM, I/O)

        1. Trevor_Pott Gold badge

          Re: Not disruptive?

          "Most of the SMBs I know aren't running VMs even now, they run their applications on a miscellany of elderly hardware held together by good fortune and the occasional visit of the part-time finance director's second cousin."

          Then you don't know SMBs, period. "Small to medium business" covers 1 to 1000 seats, generally. With enterprise being above 1000 seats. (Depending on which government is doing the counting.) The bulk of those companies are in the 50-250 seat range, and as an SMB sysadmin by trade, I promise you they've been virtualised for some time now.

          "And as soon as containerisation makes it properly to Windows, those people will be taking it up in their droves, because there's no point running multiple OSs if you don't have to - even just from a licensing point of view."

          Wrong again. Ignoring the rest of your prejudiced (and false) remarks, you don't understand at all why most companies use VMs. It is to obtain the benefits of redundancy, reliability and manageability (including snapshots, backups, replication, live workload migration during maintenance, etc) that hypervisors provide. Containers, at the moment, don't provide that.

          SMBs want far more than just the ability to run the maximum number of workloads on a given piece of hardware. They want those workloads to be bulletproof. They need them to be something that can be moved around while still in use because there aren't any "maintenance windows" anymore. There's always someone remotely accessing something. That's just life today. Hell, that was life 5 years ago. It's like you have a picture of SMBs stuck in a time warp from 1999 and you imagine that they've never evolved.

          "Containers aren't just a packaging technology - they depend on the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor."

          Everything depends on "the provision of resource management and scheduling in the OS that are equivalent to those provided by a current hypervisor", whether running on metal in its own OS, in a container, or in a hypervisor. I don't understand how this precludes containers from being "just a packaging technology".

          "And while Docker may have a little way to go (but I think 30 months rather than 30 years will see some big changes), I think you'd have a hard time persuading the people on non-x86 hardware that their WPARs and Zones are manifestly harder to work with than VM solutions."

          No, I wouldn't. Because you are completely ignoring the desired outcome portion of the equation. Containers provide what companies desire when the hardware underneath the container provides the required elements of high availability, workload migration and continuous uptime during maintenance. Run containers on an HP NonStop server or an IBM mainframe and you get all the bits you want while getting the extra efficiency of containers.

          But, shockingly enough, most businesses don't have the money to spend $virgins on mainframes or NonStop servers. So they use hypervisors to lash together commodity hardware into what amounts to a virtual, distributed mainframe. They then package up their applications in their own OSes and move them about.

          Are containers relatively easy to deploy and somewhat easy to manage? Sure. I'll even go so far as to say they're way easier to deploy than VMs are, but I will remain adamant that VMs are currently easier to manage. What you're missing, however, is that hypervisors democratize all the other things - portability, heterogeneity, high availability and so forth - that are requirements of modern IT. Containers don't provide mechanisms for that, unless you burn down your existing code bases and completely redesign.

          "Even IBM praises the benefits of WPARs (containers) over LPARs (hypervisor) in the majority of use cases, even though it supports both and the latter has rather more hardware support than the typical x86 VM. I can't really improve on their reasoning:"

          Of course IBM is touting WPARs over LPARs. They sell the pantsing mainframes that make containers a viable technology. And they only ask the firstborn of your entire ethnic group in order to afford it!

          "Better resource utilisation (one system image)"

          Nobody is debating this one. Containers are more efficient.

          "Easier to maintain (one OS to patch)"

          One OS reboot takes down 1000s of containers. Also, you get the lovely issue of having to deal with workloads that may react badly to a given patch being mixed in with workloads that might need a given patch, all running on the same OS instance. Funny, container evangelists never talk about that one...

          "Easier to archive (one system image)"

          Oh, please. We're not using Norton Ghost here. Ever since Veeam came along nobody in their right mind has had trouble doing backups, DR or archives of VMs.

          "Better granularity of resource management (CPU, RAM, I/O)"

          That depends entirely on how shit your hypervisor is. Funnily enough, VMware seems to be quite good at providing granularity of resource management.

          So, of IBM's four-point path to victory, the only thing that really shines as rationale is "efficiency". And it carefully sidesteps some pretty significant issues ranging from price (we can't all afford mainframes or NonStop servers) to manageability (1000 workloads sharing a single OS can actually be less desirable than, say, 10 OSes, each with 100 workloads.)

          1. Anonymous Coward

            Re:NonStop

            Trevor might perhaps be well advised to STFU about NonStop (software or servers). His ignorance of that particular subject - its lack of general applicability, but superb capabilities where the application has been designed for NonStop from day 1 - is starting to show.

            Stick to the stuff you understand, Trevor. There's plenty of it.

            1. Trevor_Pott Gold badge

              Re: Re:NonStop

              I don't know what's got you in a twist. I don't exactly remember saying NonStop servers were generally applicable. They are, however, a hell of a lot easier to use than they were back when they were Tandem. They are x86 now, have virtualisation capabilities (give or take) and are a perfect platform to run containerization on top of.

              Expensive? Yes. Bulletproof? Absolutely. But you don't need to design the application for NonStop (though it helps.) Is Docker itself on NonStop? Not yet, but it's only a matter of time. HP has its own containerization technology on there for now, but adding Docker support is simple, logical and inevitable. And it will make their NonStop-X line far more attractive.

              So what is your beef? That I called NonStop expensive? That I didn't mention some fault tolerant product you personally sell? Or that I pointed out that containerization needs mainframe-like setups or to completely recode the application for application-level fault tolerance in order to actually achieve HA or FT?

              Perhaps you'd be happier if I said "Superdome X" instead of NonStop? After all, it's basically a NonStop X with Linux instead of NonStop OS.

              NonStop was an Itanium-only thing that needed a lot of care and feeding. HP is evolving it into something that is so easy to use that it is a real and legitimate challenger for virtualisation systems in the commercial midmarket. 2015 will see NonStop X and Superdome X make some real inroads here, especially as the prices come down from "holy what the hell crazy madness" to "that's worth a look."

              So, if I am wrong in how I have talked about NonStop, please, do enlighten us all as to exactly how?

    2. Graham 24

      Re: Not disruptive?

      "They're significantly sub-optimal for everything else, but they're widely deployed because there's nothing better that's widely-available."

      So what's better then, even if it isn't widely available?

      "it's much easier to tune a workload in one place rather than two"

      Not if the container has very limited tuning capabilities and the hypervisor and guest have much more sophisticated tuning capabilities. Just because something is easier to do doesn't mean you end up with a better end result.

  5. Terry Cloth

    And now for something slightly different

    CoreOS has just announced a shift in emphasis on containerization called Rocket. According to the blurb, they're not happy with Docker's generalization, and they want to keep something simpler. Interestingly, they point out that Docker has removed its standard container definition.

    Please compare and contrast.

  6. Fazal Majid

    Joyent's Solaris-derived SmartOS shows how containers (a.k.a. zones) can coexist with KVM-based VMs on the same kernel. All modern Linux distros have similar capabilities, if not quite as refined. The battle is about management tools - the company that controls the de facto standard can make a lot of money. See how VMware gave away ESXi; the real revenue is in vCenter and value-added features like HA and vMotion.

    Both public cloud and hypervisor vendors will gain container capabilities. AWS and Microsoft have already made announcements, the others, including VMware, will follow. It seems to me new applications will be designed for, and run directly in containers, whereas heavyweight VMs will be reserved for migrated legacy workloads. Containers do require automation tools like Puppet/Chef/Ansible/Saltstack to be manageable, however, as does the Cloud. Another opportunity to sell to the enterprise.

    The efficiency gains from containers are nothing to sneeze at: you can squeeze an order of magnitude or two more containers than VMs onto the same hardware, not a mere 10%. For cloud providers, especially PaaS ones, this is compelling. Even for IaaS, thin provisioning is easier to achieve with containers. Linux-based container solutions need to reach the levels of maturity of Solaris, especially where security is concerned, as the recent Docker vulnerability shows. Using a better file system like ZFS (as done by SmartOS or Flocker) is also a big boost, and can provide something close to vMotion in terms of the ability to migrate workloads, if not yet online (shutdown required).

    Some of the more important gains are in the realm of latency - SSDs give, and VMs take away. At my company, switching from AWS to a containerized private cloud (OpenIndiana) yielded significant improvements in cost (6x), latency (1/3), throughput (3x) and uptime (MTBF went up 30x).

    I've already stated my belief VMware style hypervisors will be relegated to a niche of hosting legacy workloads. Nothing wrong with that, and it can be quite lucrative, as shown by IBM. Container vendors won't be able to extract the same profit margins, because they are built on open-source, so the legacy vendors may still end up gobbling up the new entrants. In other words, legacy workloads may represent a small fraction of future volume, but a large portion of value.

    1. Trevor_Pott Gold badge

      And yet, in practice, for the workloads I run, I see only 10% improvement. I wholeheartedly believe that for certain workloads, you can get a 10x improvement. Hell, why not. Do you know what I can do with benchmarking tools, when motivated?

      But the question isn't about the headline improvements. It's about the average improvements for everyday workloads, for average companies.

      Also, along those lines, I absolutely do not buy the latency claims you state. I run a fairly significant lab full of stuff and I have been testing virtualisation and storage configurations for about 4 months solid, 8 hours a day. Every configuration I can get my hands on. From various arrays to hyperconverged solutions to local setups. I've run the same workloads on metal. I've used SATA, SAS, NVMe, PCI-E and am getting MCS stuff in here sometime in the next couple of weeks.

      The long story short? Take the ****ing workload off the network and you get your latency back. And there are plenty of ways to go about doing that today. I suspect you'd be shocked at the kinds of performance I can eke out of my server SANs, to say nothing of the kinds of performance I get when using technologies like Atlantis USX, Proximal Data or PernixData!

      I respect that you have found a way to use containers to great effect, sir. Yet I find I must humbly submit that your use case of them may well be abnormal when we consider the diversity and scope of workloads run by companies today.

      I think it is fairer to say that under the right circumstances, containers can deliver manyfold increases in density and perhaps even performance; however, they are not likely to deliver this for all - or even most - workloads today. Containers need to be built for, just like the public cloud. With the advantage that many public cloud workloads can be migrated to containers with relative ease.

  7. Maximus

    What about Moka5? They've been doing hypervisors/virt for a while, and I believe they have a managed client side container?

  8. Anonymous Coward

    "There are 17000 enterprises in the world. Maybe a few hundred thousand government agencies that could be considered as large as one. There are over a billion of SMBs in the world, and they're not going "web scale" with their applications any time soon."

    "Then you don't know SMBs, period. "Small to medium business" covers 1 to 1000 seats, generally. With enterprise being above 1000 seats. (Depending on which government is doing the counting.) The bulk of those companies are in the 50-250 seat range, and as an SMB sysadmin by trade, I promise you they've been virtualised for some time now."

    The definition of SMBs seems to be shifting here. One statement says there are a billion SMBs in the world - surely the majority of these SMBs are very small businesses. The other statement has the majority of these SMBs - at least 500 million (given there are 1 billion SMBs) - in the 50-250 seat range, which seems a tad optimistic to me. So Warm Braw's comments about SMBs being held together by good fortune and second cousins make some sense to me.
