Hmmmm....
"the Docker client tools will decide whether it needs to be a Linux virtual machine or it needs to be a Windows virtual machine"
Anyone wish to hazard a guess which VM will be chosen? Fox/Henhouse..
Containers are all the rage with Linux sysadmins these days, and now Microsoft and Docker say they're going to bring that same virtualization-beating goodness to Windows. But just what will that look like and how will it work? First things first. One thing Microsoft's new partnership with Docker won't let you do is take any of …
You may be surprised. The thing you license is the OS and *that* is the thing that isn't duplicated when you are using containers rather than VMs. Logically, you should be able to run multiple containers in the same way that you run multiple apps.
I always had concerns about whether virtual machines were solving the right problem. That is, if you have a medium-sized server farm of 60 servers, for example, you can consolidate that onto a single virtual system with fewer than 10 actual servers. This works because those 60 servers are old, so you get more raw grunt out of 10 new servers. Also, most of those physical servers have pretty low CPU usage, so you gain that way as well.
My problem, though, is that you still have 60 virtual servers to maintain, as well as licensing all that software. So you haven't actually reduced your support costs by an appreciable amount.
Comparing containerisation to virtual machines is like comparing threads to full processes. You reduce the overhead, but increase the risk of each component interfering with the others. Big efficiency gains if you're prepared to exercise a bit of care in putting your system together.
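The threads-vs-processes analogy can be sketched in a few lines of Python (a toy illustration of the isolation trade-off, nothing Docker-specific): threads share one address space, so one thread's mutation is visible to everyone, while a child process gets its own copy and can't touch the parent's state.

```python
import threading
import multiprocessing

def bump(state):
    # Increment a counter in whatever memory this flow of control can see.
    state["value"] += 1

if __name__ == "__main__":
    counter = {"value": 0}

    # A thread shares the parent's address space: the change is visible here.
    t = threading.Thread(target=bump, args=(counter,))
    t.start(); t.join()
    print("after thread:", counter["value"])   # after thread: 1

    # A child process gets its own copy: our dict is untouched.
    p = multiprocessing.Process(target=bump, args=(counter,))
    p.start(); p.join()
    print("after process:", counter["value"])  # after process: 1
```

The "bit of care" is exactly this: with the lighter-weight option (threads, or containers) you get sharing by default and have to engineer the isolation; with the heavier option (processes, or VMs) isolation is the default and sharing is the extra work.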
When Microsoft put together its Small Business Server (a long time ago, before virtual machines were available on MS), they had a lot of trouble because (for example) Exchange and MS-SQL didn't play well together. They each wanted to use all the system's resources. Containers would have made this process a lot easier, since you could have put them into their own containers and set limits on the resources they could use. Virtual machines could be used to solve the problem today, but that is rather overkill.
So I applaud the fact that Microsoft is moving towards containerisation, not because it is the newest and bestest, but because Microsoft customers will get real benefits.
Re: SBS, Exchange, SQL_server, Sharepoint
Unfortunately, given the current limitations of App-V, namely it does not support "large server applications such as Microsoft Exchange Server, Microsoft SQL Server, and Microsoft SharePoint" [MS Technet gg703262], I wouldn't be surprised to find that you still won't be able to run these "large server applications" in MS Docker containers; these would still need to be run on a dedicated host.
How do you come to that conclusion? It's not as if MS have bought Docker. And if you have a Linux container it simply won't work on a Windows VM, so they'd be generating bad press and support load if they were to create the wrong VM.
There's sensible paranoia and there's being a bloody idiot. Try to aim for the former.
Exactly. It's a zero-sum argument that sees Microsoft wanting desperately not to continue to be seen as the 'last team player to be picked'. Again.
And I hate to think of how Windows is going to manage the address spaces for containers in the same way it's struggled for years with JVMs. The architecture will have to be a bolt-on methinks.
So another blade in an already large Swiss Army knife.
'last team player to be picked'
Yeah, they're the fat kid on sports day. We all have to cheer them along, turn a blind eye to any cheating, and laughing at them is frowned upon.
"Come on, Microsoft... you can do it.. you're nearly there!"
"Stop that giggling, you lot!"
"OK, everyone slow down and let him catch up"
"And I hate to think of how Windows is going to manage the address spaces for containers in the same way it's struggled for years with JVMs."
For JVMs, you manage them via the JVM command-line parameters - exactly the same as you do on, say, Linux. We ended up moving a lot of memory-hungry JVMs from Linux to Windows Server and they ran significantly faster on the exact same hardware / same JVM version! So not quite sure how it has 'struggled'.
"We ended up moving a lot of memory-hungry JVMs from Linux to Windows Server and they ran significantly faster on the exact same hardware / same JVM version! So not quite sure how it has 'struggled'"
We had a mixture of Linux and Windows and ended up doing the opposite. CentOS has meant a huge saving in licenses and we've found it vastly easier to deploy and manage 'nix servers (which means even more savings) on a large scale thanks to decent package managers (Yum/RPM) and Puppet.
There's been no discernible difference in performance either way.
"And I hate to think of how Windows is going to manage the address spaces for containers in the same way it's struggled for years with JVMs. The architecture will have to be a bolt-on methinks."
The Windows kernel already supports multiple sessions. These are "fairly" isolated from one another. (They have independent object namespaces, for example.) I wouldn't be at all surprised if containers could just be implemented as multiple sessions.
I feel obliged to add that the containers concept would have been baffling to an OS designer from the 1960s, because complete isolation of one app from another was what OSes were invented for. It is the plethora of inter-process communications methods that have been bolted on since then that has necessitated the re-implementation of the original concept.
It is the plethora of inter-process communications methods that have been bolted on since then that has necessitated the re-implementation of the original concept.
...or rather the permanent configuration clusterfuck and the leaky abstraction / runny sandbox effect?
The IPC will still be there.
""There aren't any container technologies in Windows that ship to the public now, but we do have some internal works that we've been doing," he explained."
That will probably surprise the millions of users of Microsoft App-V who already containerise (and stream) their applications....
First they decide it's good for them & start embracing it. Once it's incorporated into Windows, they'll start fekking with the API & tweaking the "Standards" until it will only run properly on Windows. Once everyone has mangled their old code to work, or completely stopped using the original version in favor of the Microsoft variant, then it's time for... (Cheesy Ominous Music) Then they'll decide that they no longer wish to support it, dump it like a flaming turd, and leave everyone in the lurch.
How many times has it happened before? How many times does it have to happen again? "Those who refuse to learn from history are doomed to repeat it." Or, in Microsoft's case, destined to capitalize on it, squeeze it for every last penny, then toss the corpse under a bus.
Fixed that for you...
First they decide it's good for them & start embracing it. Once it's incorporated into iOS/OS X, they'll start fekking with the API & tweaking the "Standards" until it will only run properly on iOS/OS X. Once everyone has mangled their old code to work, or completely stopped using the original version in favor of the Apple variant, then it's time for... (Cheesy Ominous Music) Then they'll decide that they no longer wish to support it, dump it like a flaming turd, and leave everyone in the lurch.
How many times has it happened before? How many times does it have to happen again? "Those who refuse to learn from history are doomed to repeat it." Or, in Apple's case, destined to capitalize on it, squeeze it for every last penny, then toss the corpse under a bus.
Feel free to add your own Google variant... :)
erm, no.. Apple never use standards in the first place, and Google have actually created standards.
Microsoft have a track record (and leaked documents) of embracing standards, extending them with proprietary, then extinguishing the competition by having their proprietary extensions becoming de-facto standard.
I've been working on MS tech long enough to know their tricks, very well.
"I'm sure we could find some VM running Office '95, then open/save the file through each version until we get to now."
Naw, you'd need to start with MS-DOS 5, and find a licensed copy of Word on media that hasn't turned to compost.
Whitewash and orchestrated marketing campaigns do not make MS suddenly good.
The day Ballmer & Gates no longer have any involvement in MS (for real) is the day I would believe that MS "may", and I repeat "may", become something else.
Two recent examples: Nokia & the OOXML fiasco, where Microsoft corrupted many members of ISO in order to win approval for its phony 'open' document format.
MS never plays fair; they turn to crooked tactics even when they don't need them, because their motto is to annihilate the competition.
The first warning flag is that this is supposedly not based on their existing research. Second, it'll draw heavily from Docker's existing code. That'll be a neat trick. (BTW if you want to see nova-hot development, just track Docker on GitHub.) Lastly, it's Microsoft. For the reasons above and the core of containers/VMs: "Each is sandboxed off from the others so that they can't interfere with each other." They never get anything right, for some number of iterations.
Already had Windows Server Next on the to-do list. Moving up a place or two so I'm not totally surprised at what's in the box.
"Windows is always trying to catch-up to Linux."
Really? And what year did Linux get even a half decent interface that the average Joe could use?
I remember the days when you had to mount the CD drive to use it, then unmount the bloody thing, just to eject a disk. God help you trying to get it to play the content.
Maybe if you're 16 and new to computing you think that Linux is way ahead, but trust me, it took a long time for it to be even remotely usable for the masses.
Sadly for your argument this technology is not aimed at the average user.
It's aimed at the server market. A hint may be in the article: " ...which it says will arrive in the next version of Windows Server."
As far as I know sysadmins are a little ahead of your "average Joe" in things computing and will probably be able to handle the intricacies of running Windows Dockers, if it ever gets off the ground.
Still, nice to see that the FLOSS world can teach the proprietary one a thing or two.
A container would be much less resource greedy than a VM on a non-server machine, so limiting this to server OS machines is dense or greedy.
As a Developer, I frequently run VMs on a windows pro/ultimate machine for test environments, but would often prefer to have a lighter weight alternative; I expect other power users would also agree.
@Infernoz - "A container would be much less resource greedy than a VM on a non server machine, so limiting this to server OS machines is dense or greedy."
From the sounds of it, Microsoft's take on containers won't be ready when Windows 10 ships. They may be only half done when the server version ships. Don't be surprised if something actually useful doesn't ship until the version after Windows 10, or even the version after that. They're way, way behind the rest of the field on this one.
"limiting this to server OS machines is dense or greedy."
Microsoft would probably suggest you run a developer licence of Windows Server, which is free to MSDN users.
If you aren't large enough to have a separate development server, you could run it inside a VM. Yes, I know I said VMs are misused a lot, but this is one place where I would suggest it as an appropriate solution.
I remember the days when you had to mount the CD drive to use it, then unmount the bloody thing, just to eject a disk. God help you trying to get it to play the content.
Oh yeah? I remember the day when we had to get up early and pre-heat the Olivetti so that at around 10:00 one could boot MS-DOS from a gigantic floppy (HIMEM.SYS: Unable to load). Then an orthodox priest muttered a prayer in a mysterious language and lit incense before an actually working program could be loaded off a stack of additional floppies. "Mounting a CD is too hard for me". PAH!!
Oh yeah? I remember the day when we had to get up early and pre-heat the Olivetti so that at around 10:00 one could boot MS-DOS from a gigantic floppy (HIMEM.SYS: Unable to load).
Goddamn, you were LUCKY!
I remember the days of sitting at the front panel, setting the switches to load the boot code which then dragged the IPL from the paper tape.
>Then an orthodox priest muttered a prayer in a mysterious language...
I wondered why all the Unix sysadmins/gurus I came across in the '70s and early '80s who were worth anything wore sandals and a crucifix, had unkempt beards and long hair, and often carried strange things such as a rabbit's foot and various (closely guarded) heavily annotated reference cards...
Usually had 2 on a neck lanyard; they clanked together, giving off a loud chime, where we would chant 'DB'... didn't have robes, had oversized vests (to go over our mandated company-logo'd rugby shirts)... what A/C we had was for the equipment so it was always too cold to work or 125 degrees F... always thought the Boss hated us... still use code sheets, I'm lazy... and at age 71 finally got a crew cut (it's growing out)... really miss UNIX stuff; the fact that all software comes w/ C is just not enough.
So actually, I guess we did our best to drive the Boss nuts (some kinda were)... RS.
Really? And what year did Linux get even a half decent interface that the average Joe could use?
Average Joe? If you want consumer stuff, then use a consumer OS.
I remember the days when you had to mount the CD drive to use it
Yes, it must have taken me weeks (no internet) to realise you mount /dev/hdc - not /dev/hdc1, since it's not partitioned.
Times really were hard for us then, but much more enjoyable (for a geek) than the alternative OS.
"And what year did Linux get even a half decent interface that the average Joe could use?"
By my recollection that would be about 1995, and I found references back to 1996 for X11 and OpenWin with either olwm or olvwm. I quite liked the latter for its easily changed (by drag & drop) number of virtual desktops, which apparently was limited only by the available memory. Originally developed by Sun for Solaris, it was a well thought out and implemented piece of work; performance was quite good on a 486-33 with 16 MB of memory. After nearly 20 years I can't be sure, but I think there was a pointy-clicky way then to unmount and eject a CD, along with a fair number of other useful things, and it seemed to support pretty much anything that knew how to run as an X client.
As I recall, Microsoft's best consumer offering at the time was Windows 95, probably not their premier offering and, from a stability viewpoint, substantially inferior to either OS/2 or Linux with X Window. Its saving grace was that it was supplied by default on just about every PC sold commercially and would run most or all of the applications developed for MS-DOS and earlier MS Windows versions.
It is also incorrect to confuse the task of installing Windows, which almost nobody had to do, with installing Linux, which required a bit of knowledge and, depending on the release, a possibly significant amount of interaction with the installer application. The proper thing is to compare operation after installation and configuration.
"Windows is always trying to catch-up to Linux."
Perhaps that's true; but would that be a bad thing?
Because I could also argue that Linux is now trying to catch-up to FreeBSD; this container development sounds very much like a FreeBSD Jail. I could also argue that they're not very fast either considering that Jails have been part of FreeBSD since version 4.0; see here. Release date? Around 14th of March 2000; this is the announcement.
Seriously, what's the problem? Open source was made to be used; people don't give out the source code just to make themselves look better. The whole idea is to embrace and extend or expand.
And here's the thing with software freedom and all that: you don't get to choose who's going to use your product. Because doing so would not only be an insult to the whole idea of free software, it would also turn the whole thing into a tyranny. Freedom is the art of allowing everyone (so even Microsoft!) to use the fruits of your labour.
When I see Microsoft using something which already exists on Linux, or in this case see Linux doing something which was already available somewhere else, then I can't help thinking: "the idea really works...". Meaning the idea behind free software and shared knowledge.
In the end we all benefit.
(sorry for a small rant).
>"In the end we all benefit."
IFF Microsoft uses and expands, then distributes, the source code per the GPL. If it just nabs the idea, implements its own "intentionally" faulty version of whatever standard and then uses foul means to dominate the market, then no one benefits. I'm torn between wanting software patents and not wanting them. I think a 2-year software patent would suffice in this instance.
"but what makes docker so much better / different than AppV?"
Quite - it's like App-V but minus a lot of the clever functionality like application streaming.
Microsoft probably just want to have an option to use Docker, but I can't see it replacing what is seemingly a significantly more powerful product in App-V.
Seems not the same thing at all?
Docker ---> Local App Management & Isolation , but lighter than VMs
App-V ---> Terminal-based application access (at least that's what I can wring out of the marketdroid talk cunningly buried by the hypnotizing eyes of stockphoto guy.)
Damn! That is a scary looking dude! The eyes!
I hope I can explain better with less marketdroid. In our implementation, App-V is not terminal based. It has a couple of back-end servers that manage and serve the packaged applications either as persistent or streamed applications. These run on the client PC but are effectively stand-alone as they hold their own copy of the relevant registry keys and dependent applications etc. We use our existing SCCM infrastructure to provide application streaming/download points so remote sites aren't WAN dependent. It's of most benefit to us when we have multiple apps which depend on different versions of a service that cannot co-exist, e.g. App1 needs Java 1.6 and App2 needs Java 1.7. This is all centrally managed through AD groups.
Is the functionality similar in Docker, even if the management of the apps is different? Interested to hear what it offers and how it's managed.
"App-V ---> Terminal-based application access (at least that's what I can wring out of the marketdroid talk cunningly buried by the hypnotizing eyes of stockphoto guy.)"
No - App-V is application virtualisation containers. It can run your applications and dependencies independently of other software running under that OS instance - so for instance you can run multiple versions of applications at the same time that would otherwise conflict. And it has very clever streaming technology that can install just the parts of a containerised application that are needed, when you need them whilst the rest of the application continues to download in the background - giving near instant 'install on demand' of virtualised application containers across even a slow network. It is often used as part of VDI, but can also be used as a software packaging and deployment method to standard desktops and servers.
Containers are like VMs, except they are more lightweight and so take less memory and start up very fast.
Let's put it this way. Linux (or Solaris) is already designed to run multiple programs at once, so why do we need VMs or containers? It's because there are still shared resources which can cause one app to interfere with another. VMs deal with this by simulating an entire computer, while containers deal with it by providing what appear to be separate copies of just those things which can cause problems. As a trivial example, a container will provide an app with a view of the file system that looks like it has the file system all to itself. What makes this different from a traditional "chroot" is that this sort of thing is duplicated for everything. However unlike a VM, you are not duplicating the entire OS, just changing the OS services.
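The "private view of a shared thing" idea can be sketched with a toy Python class (purely illustrative and hypothetical -- real containers do this in the kernel via mount namespaces, not in user code): each "container" is handed its own subtree as if it were the whole filesystem.

```python
import os.path

class ToyChroot:
    """A toy stand-in for a mount namespace: every path the app asks for
    is silently resolved under this container's private root, so the app
    believes it owns the whole filesystem. Real containers do the
    kernel-level equivalent for paths, PIDs, network interfaces,
    hostnames, and so on."""

    def __init__(self, root):
        self.root = root

    def resolve(self, path):
        # "/etc/passwd" inside the container is really
        # "<root>/etc/passwd" on the host.
        return os.path.join(self.root, path.lstrip("/"))

# Two "containers" on the same host (paths are made up for the example).
web = ToyChroot("/var/lib/containers/web")
db  = ToyChroot("/var/lib/containers/db")

# Both apps think they're writing to the same well-known path,
# but they can never touch each other's files.
print(web.resolve("/etc/passwd"))  # /var/lib/containers/web/etc/passwd
print(db.resolve("/etc/passwd"))   # /var/lib/containers/db/etc/passwd
```

The point of the comment above is that a real container system repeats this trick for every shared resource, not just the filesystem, which is what separates it from a plain chroot.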
With VMs you can run different operating systems versions or even completely different operating systems altogether. With containers there is only one OS present. This means that despite the advantages of containers, traditional VMs aren't going to go away completely. The two complement each other rather than being direct replacements.
Unix type operating systems have had container-like features forever. However, containers are not an all or nothing type of thing. They're a matter of degree and the details matter. What makes Linux containers (and Solaris zones) different is all the work they put into tracking down all the details where one process can "see" the presence of another. It involves design changes deep in the innards of the OS kernel so it's not an add-on package.
Docker is not providing containers. Containers are part of the OS itself. What it is doing is providing what I would call a "cloud-lite" layer on top of containers for deploying and managing server applications. I'm calling it "cloud-lite" because proper Docker apps are written like cloud services, but you can run them on your own hardware (although you could run them on a public cloud if you wanted to). If "cloud-lite" doesn't really explain it for you, I'll simply say that the app format is restricted in what it can do so that it can be started up, shut down, and moved around easily.
Windows no doubt has a few container-like features already. However, if they had a full-blown container system they would say so and people would be using it. Keep in mind that containers are not an all or nothing affair. The big question when the next server version comes out is going to be what the limitations of their containerization will be.
Microsoft App-V is something else altogether. It's a means of streaming applications to clients rather than installing them directly. There is a certain amount of redirection involved in this, but it's really a different thing intended for a different and very specialized market. Ignore the AC marketroids.
Docker sits on top of the underlying OS container system. It's the OS kernel which must provide the actual container capabilities for Docker to use. It sounds like what Microsoft is mainly interested in is using the Docker management tools because they don't have anything comparable.
However, containers aren't a single "thing". I've been following the progress of containerization in Linux, and there are a lot of pieces involved which were implemented gradually over the years. Each addition meant that more container capabilities became available. There are still new capabilities going in even today.
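On a reasonably recent Linux kernel you can actually see those pieces: each container capability corresponds to a namespace type the kernel exposes under /proc/&lt;pid&gt;/ns (mnt, pid, net, uts, ipc, user, ...). A quick sketch -- Linux-only, and the exact set of entries depends on your kernel version, which is itself a nice demonstration of the "implemented gradually" point:

```python
import os

def kernel_namespaces(pid="self"):
    """List the namespace types a process belongs to, as exposed by the
    kernel under /proc/<pid>/ns. Each entry (mnt, pid, net, uts, ipc,
    user, ...) was added to Linux separately over the years, which is
    why 'container support' arrived piecemeal rather than all at once.
    Returns an empty list on non-Linux systems."""
    ns_dir = "/proc/{}/ns".format(pid)
    if not os.path.isdir(ns_dir):
        return []
    return sorted(os.listdir(ns_dir))

print(kernel_namespaces())
```

Tools like Docker combine whichever of these the running kernel offers, which is why the same Docker release can behave differently on older kernels.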
I'm getting the impression from the vagueness of Microsoft's hand waving that their container system isn't all that mature yet. That is, I suspect that if your application starts poking around in the innards of Windows it will find that it's not as completely isolated from other applications as users might have hoped. As a result, you may be limited in what you can do with it on Windows without running into problems.
It is quite likely that Microsoft suddenly realized that containers are now the hot new thing and that they've missed the boat yet again. They were so busy patting themselves on the back on how well Hyper-V was doing that they didn't see that hypervisors were becoming passé in many applications. They now need to throw something vaguely container-shaped out there in order to make it look like they're still in the game when it comes to servers. I won't be surprised though if this first release of their own container system isn't all that useful. It may be several release cycles before it's really all "there".
One problem that isn't about to go away on its own is that Windows 10 doesn't have it. Development is going to be real fun if you can't do any development and testing without either running Windows Server in a VM or connecting to a remote machine.
If you're using Ubuntu 14.04, then Docker is in the Software Centre and you can install it with a couple of mouse clicks. If you want to try it out or become familiar with it, the best way right now is probably to install Linux.
"It sounds like what Microsoft is mainly interested in is using the Docker management tools because they don't have anything comparable."
App-V + SCCM already significantly exceed the capabilities of Docker as far as I can see - and App-V is a far more mature technology that has been around a lot longer. Microsoft are probably just adding Docker as another option as it won't cost them much, if anything.
...but a Linux version, of which MS has now also got a Windows version?
If so, then everything is really cool until you realise you do want some apps to be able to talk to other apps. So you put in rules allowing certain bubbles to talk to other bubbles. Before you know it you have Microsoft asking if they can take a copy of your config because they've never seen one as complicated - yet working - before.
Lazy downvoting because I mentioned Microsoft without the obligatory "sucks" after it?
App-V can run local, or stream sandboxed applications, which from reading this article sounds just like what Docker does. Perhaps the article needs a bit more beefing up with detail for those of us who haven't embraced *nix yet? Rather than downvoting me, how about you post a comment explaining the difference and benefit(s), and then we can all upvote you?
In a begrudging way, you have to admire this attempt. MS typically engages in catch-ups like this by adding yet more layers to their products, and sometimes somehow gets them to work after a fashion. This contrasts with Linux's ability to strip down for purposes such as running containers, which really is a far easier achievement. I know which I'd prefer if I could think of a use case for containers, which I suspect is more limited than the hype would suggest, but from an application vendor's point of view, I suppose it sounds attractive.
It's fun to sit and watch two camps arguing over Windows copying Linux and noting how oblivious/blinkered the Linux camp are to the fact that containers are a Sun invention and have been around for years. So exactly who is copying whom?
Just accept that it makes sense to build other people's good ideas into your own products.
Well, one of the WinNT features from the early days is the POSIX and OS/2 subsystems (the latter finally removed). The mechanism is already there. MS could add a full-on Linux subsystem to run existing Docker containers. That might save Windows Server as an OS in the cloud era.
Disclaimer - I'm really not a fan of the above as I think it's sub-optimal; just mentioning it is possible.
"Well, one of the WinNT features from the early days is the POSIX and OS/2 subsystems (the latter finally removed). The mechanism is already there. MS could add a full-on Linux subsystem to run existing Docker containers."
1) The POSIX support wasn't actually POSIX compliant.
2) Containers are totally different in their scope, design and application.
> Or at least PCs not infected with SecureBoot
fuck me, not this FUD again. SecureBoot is a UEFI feature, not an MS technology. In order to get Windows certification, you must be able to switch it off.
The only MS-based devices where you cannot switch it off are Surface RT tablets, and big fucking deal, you can't install Linux on an iPad either.
Stop it. Just stop it. It's gone beyond stupid and way into the realm of deliberate lies now.