Microsoft devises new way of making you feel old: Windows NT is 25

Windows NT has hit an important milestone. Its launch is now closer to the first Moon landing than it is to today. With its debut in July 1993, Windows NT ushered in a gloriously pure 32-bit future, free of the 16-bit shackles of the past. While the majority of PCs at the time were running MS-DOS, often with Microsoft's …

  1. HmmmYes

    To be honest, the NT kernel is not too bad - bit slow and clunky.

    But the rest .....

    And don't get me started on the network stack. Jesus.

    1. 404

      NT video drivers... 0 resolution with 0 bit color... I didn't mind NT - did the upgrade from NT 4.03 to Windows 2000 AD at an ISP all by my lonesome, took six months and two Solaris (first pizza, then x86) boxen for DNS duties...

      Was good times, much better than today's Internet O'Shite

      1. MAH

        quote- I didn't mind NT - did the upgrade from NT 4.03 to Windows 2000 AD -

        I did that once with a 4-hour upgrade and a 5-minute client outage. Took the BDC for the domain, put it on an isolated network, promoted it to PDC, upgraded it to Windows 2000 AD, then took a workstation off the prod network, put it on the isolated network and logged in as if nothing had happened.

        Shut down the production NT PDC, powered on the Windows 2000 AD server in production, and voila, upgraded to AD from NT4...

        Of course, if it didn't work I was mostly fubar'd because there was no real backout... once the client machines had connected to the 2000 AD and logged in, only a drop from the domain and a rejoin back to the NT 4 PDC would have recovered the system if I'd had to back out.

        Was praying it wasn't going to be a brown underwear day :)

        1. 404

          30k users, 1200+ websites - zero downtime. Thus the six months - I was under quite a bit of pressure, might have aged a few years during that time ;)

    2. J. R. Hartley

      It was still shite compared to the Amiga. Only XP finally changed that.

    3. CrazyOldCatMan Silver badge

      And dont get me started on the network stack

      I think they eventually re-wrote it, using code stolen^W 'inspired by' BSD..

      And on the HAL front - that's something OS/2 also did. In fact, OS/2 effectively took over the whole BIOS too - which is why there were so many odd errors early on - the IBM programmers expected manufacturers to actually design their hardware and BIOS according to the spec..

  2. Alan J. Wylie

    DEC's Dave Cutler worked on VMS. Is it just co-incidence that WNT is to VMS as HAL is to IBM?

    1. TRT Silver badge

      Dave. What are you doing, Dave?

    2. big_D Silver badge

      It is a shame he didn't bring the VMS/DCL command syntax with him. That was one of the most beautiful operating systems to work on.

      1. Chavdar Ivanov

        Why, oh, why would one downvote this?

      2. Dave Pickles

        He brought just about everything else. At one time it was reckoned that the best tutorial guide to writing a device driver for NT was the VAX/VMS Device Driver manual.

        https://www.itprotoday.com/management-mobility/windows-nt-and-vms-rest-story

        https://everything2.com/title/The+similarities+between+VMS+and+Windows+NT

        1. Andrew Commons

          @ Dave Pickles

          When Win NT was first released we were doing side-by-side comparisons with VMS, and two things that Cutler didn't bring over were puzzling. First up, Logical Names, which gave you a level of indirection for practically everything, along with protected name spaces. Then there was the big omission of Installed Images, which allowed privileges to be assigned to trusted pieces of code (amongst other things: sharing, fast startup etc.), removing the need for users to have privileges. Both were probably out of place in a PC operating system.

      3. Yet Another Anonymous coward Silver badge

        >It is a shame he didn't bring the VMS/DCL command syntax with him.

        Powershell is trying. DCL used to give you a grammar lesson in every error message.

        1. CrazyOldCatMan Silver badge

          Powershell is trying

          Indeed. Very, very trying. As in "trying to be a unix shell but failing miserably".

          Sure, it's an elegant concept but it's utterly inconsistent in usage.

      4. Anonymous Coward
        Anonymous Coward

        "It is a shame he didn't bring the VMS/DCL command syntax with him"

        Maybe it would have ended up in court for copyright infringement, or worse?

    3. Anonymous Coward
      Anonymous Coward

      "DEC's Dave Cutler worked on VMS. Is it just co-incidence that WNT is to VMS as HAL is to IBM?"

      Vms up 1 letter = wnt.

      Ibm down 1 letter = hal.

      Oooh :-)
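      The letter gag really is the same +1/-1 shift that turned HAL into IBM in 2001; a throwaway Python check (the helper name is mine):

      ```python
      def shift(name, delta):
          """Caesar-shift each letter of an uppercase name by delta places."""
          return "".join(chr((ord(c) - ord("A") + delta) % 26 + ord("A")) for c in name)

      print(shift("VMS", 1))   # VMS up one letter -> WNT
      print(shift("IBM", -1))  # IBM down one letter -> HAL
      ```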

  3. big_D Silver badge

    POSIX

    The POSIX subsystem lasted a long time... It was replaced in XP and Server 2003 by SFU, which was dropped in Windows 8 / Server 2012. Its successor today is WSL, though it works a bit differently.

    1. Anonymous Coward
      Anonymous Coward

      Re: POSIX

      Except the POSIX subsystem was the most minimal implementation they could do to get past government purchasing requirements. Trying to actually implement against it was usually greeted by "Function not implemented".

      1. Yet Another Anonymous coward Silver badge

        Re: POSIX

        It did nothing, but did it in a POSIX approved manner.

        Rather like having an ISO9001 procedure saying that you respond to all customer requests with "fsck-off", as long as you audit that you actually do so..

      2. bombastic bob Silver badge
        Meh

        Re: POSIX

        Interix/SFU/SUA/whatever - I tried, REALLY TRIED, to make it work so I could build things with it. But X11R5 was just TOO out of date, and autotools didn't have the capability of handling the lack of compatibility. And writing those changes myself proved to be a frustrating (if not impossible) task.

        Didn't even have 'tar' - only 'pax', which pathetically didn't support a lot of things (like compression).

        I gave up on it. Cygwin just works better.

        As for Windows NT needing 16Mb of RAM: compare that to Win-10-nic, which seems to run poorly with 100 times as much RAM... especially 'the Metro' / UWP garbage.

        I had the unfortunate experience of having to create a VirtualBox VM running Win-10-nic to test an application on. I had an easier time installing the latest ReactOS. Multiple crashes, and looping in the 'OOBE WELCOME' menu thing. After (effectively) disabling audio (switched to the AC'97 driver, which apparently isn't supported at ALL), I was actually able to install it. Then I went to give MS feedback on how pathetically something worked, and the performance of the UWP text box was SO bad, I could type LITERALLY! TWICE! AS! FAST! as the text rendering of what I'd typed. Meanwhile, CPU on _TWO_ _CORES_ was being _MAXED_ _OUT_ the _ENTIRE_ _FREAKING_ _TIME_!!!

        I'd take that old clunky NT 3.1 *ANY* day over CRAP-WARE like Win-10-nic.

        Oh, and my windows application ran JUST FINE (even though I had to test it to make sure). I'll need to fire that thing up occasionally to test it out AGAIN and AGAIN, of course. But I think I'll "unplug" the virtual network drivers when I do it, so that it doesn't spend unnecessary time and bandwidth UPDATING itself whenever I do it... and use a diskette or CD image to transfer the application EXE file whenever I test it. Heh.

  4. Locky

    16MB?

    I used to dream of 16MB...

    All we had was himem.sys and a cardboard box in the middle of the motorway

    1. big_D Silver badge

      Re: 16MB?

      Luxury! We had 1KB RAM and counted ourselves lucky!

      1. Anonymous Coward
        Anonymous Coward

        Re: 16MB?

        Opulence!

        We had 10 bits of memory, 20 if you took off your shoes.

        1. 404

          Re: 16MB?

          What the hell?

          I had 8MB in my Windows 95 machine! Watched the Hindenburg crash many times on the free encyclopedia CD on my 2X CD-ROM that came with it.

          Did 98, 98SE, straight to Windows 2000 Pro (we don't talk about ME), XP, Win7, Win 8.1 and stopped before Windows 10...

          Makes you wish OS/2 Warp won, doesn't it?

          1. Anonymous Coward
            Anonymous Coward

            Re: 16MB?

            Watched the Hindenburg crash many times on the free encyclopedia CD on my 2X CD-ROM that came with it.

            Anyone remember the Weezer video from Happy Days on a CD-Rom?

            1. Jay 2

              Re: 16MB?

              I recall that was the Win 95 CD.

          2. CrazyOldCatMan Silver badge

            Re: 16MB?

            Makes you wish OS/2 Warp won, doesn't it?

            It does indeed - except for one thing - I strongly doubt that IBM would have been any more pleasant to deal with than Microsoft. After all, just look how they treated OS/2 once they finally decided they couldn't be bothered with it - they didn't let anyone else have it for *years*

            And even then, they charged so much for it that all the follow-ons (eComStation et al.) have been unaffordable.

          3. MJI Silver badge

            Re: 16MB?

            I went

            WFW -> 98OSR2 -> XP -> XP -> 7

            boss went

            WFW -> ME -> Vista -> 8.x

            Support chap (more recent)

            XP -> 7 -> 7 -> 10

            Who has had the least pain?

            My 7 PC hangs on well

            1. Adam 1

              Re: 16MB?

              > boss went WFW -> ME -> Vista -> 8.x

              Sorry to hear. No-one deserves that.

              1. MJI Silver badge

                Re: 16MB?

                And he is getting a new 10 PC today

        2. Anonymous Coward Silver badge
          Trollface

          Re: 16MB?

          That's where Norfolk has the advantage - shoeless people had 24 bits of memory (which is conveniently 3 bytes when using the usual 8-bits/byte)

          1. MyffyW Silver badge

            Re: 16MB?

            Pre NT it didn't so much matter how much memory you stuffed in but rather how well you managed that first 640K. Lots of LOADHIGH statements, judicious use of memory managers and, frankly, witchcraft marked my career as a neophyte sysadmin back in the mid '90s. Happy days, seen through the lens of comfortable middle-age.

          2. Doctor Syntax Silver badge

            Re: 16MB?

            "which is conveniently 3 bytes when using the usual 8-bits/byte"

            And even more conveniently 4 bytes if you have 6-bit bytes. ICL 1900 anyone?

        3. apveening Silver badge

          Re: 16MB?

          "We had 10 bits of memory, 20 if you took off your shoes."

          Should be enough for all normal purposes, you can reach 1,048,576 with that.

        4. John Brown (no body) Silver badge

          Re: 16MB?

          "We had 10 bits of memory, 20 if you took off your shoes."

          You had shoes?

      2. Anonymous Coward
        Anonymous Coward

        Re: 16MB?

        Luxury! We had 1KB RAM and counted ourselves lucky!

        And we could play a game of chess with that!

        1. Mark Dempster

          Re: 16MB?

          >Luxury! We had 1KB RAM and counted ourselves lucky!

          And we could play a game of chess with that!<

          Still the only computer chess game that I've actually beaten...

    2. anothercynic Silver badge

      Re: 16MB?

      Ahhh, the black magic that was HIMEM.SYS... and EMM386.EXE... the amount of fun we had trying to shoehorn a *lot* of stuff into extended memory with DOS 5/6! Amazing how changing the loading order of stuff could make a huge difference in memory usage!

      1. big_D Silver badge

        Re: 16MB?

        I remember two key applications we had that used himem.sys and qemm386.sys respectively. That meant diddling around with batch files to copy 2 different config.sys files back and forth and rebooting, so that you could easily switch from one to the other.
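        For what it's worth, MS-DOS 6's MultiConfig later removed the need for that copy-and-reboot dance: a [menu] block in CONFIG.SYS let you pick the driver set at boot. A minimal sketch (the paths and menu labels here are made up for illustration):

        ```
        [menu]
        menuitem=HIMEM, App that wants HIMEM.SYS
        menuitem=QEMM,  App that wants QEMM

        [HIMEM]
        device=C:\DOS\HIMEM.SYS
        dos=high,umb

        [QEMM]
        device=C:\QEMM\QEMM386.SYS RAM

        [common]
        files=40
        buffers=30
        ```

        Lines under [common] load for every choice, so one file replaces the two you'd otherwise shuffle around with batch files.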

      2. error 13

        Re: 16MB?

        oh the good old days. Seemed to get a lot better when the ODI drivers came out. I eventually managed, with much cursing, to get a PC simultaneously running IPX, NDIS and DECnet stacks - with the icing on the cake being getting enough up in high memory for Windows to start. Getting Windows to start was the benchmark...

        Of course if you wanted to add an extra device like a scanner you were stuffed and had to enter the world of Quarterdeck. QEMM me up baby!

      3. alexmcm

        Re: 16MB?

        Spent 3 weeks trying to cram a certain UK bank's upgraded DOS front-end teller system onto their PCs using every trick I could think of. It finally came down to the load order of the different modules. Finally got it to load reliably on every PC.

        Gave myself a big pat on the back and was moved on to a different project. I was only told later that they didn't use it, and were instead upgrading to XP. I think they were running out of compatible network cards for their 15 year old branch PCs, and decided to bite the bullet and upgrade the hardware.

    3. Daniel von Asmuth
      Windows

      Re: 16MB?

      Windows NT took so long to build because Gates insisted on compatibility with MS-DOS, MS-Windows and (16-bit) OS/2. Compared to MS-Windows it was stable and performed half decently if you had lots of RAM. I once ran, err, crawled, NT 3.51 and Exchange Server in 16 MB.

      MS-Windows 4.00 was usually packaged with MS-DOS 7.00 and sold as Windows '95. That crap was hastily launched as 32-bit OS/2 started to gobble up MS Windows market share, and NT had too-high hardware requirements (not to mention the selling price) for the unwashed masses until the Home Edition of NT 5.1, a.k.a. XP. It took Redmond until 2000 to create a usable server edition. Compared to Unix it still lacks (pseudo-)terminal support.

      Funny how NT 4.00 complained about the presence of a disc in the CD-ROM drive when it was labeled C:, but not after renaming it to H:.

      1. Anonymous Coward
        Anonymous Coward

        "It took Redmond until 2000 to create a usable server edition."

        Sure, with AD it was far better for larger networks, and could compete with Netware.

        Still, before it, your options for a server were expensive Unix licenses (and the hardware to run them), or Netware - but developing for Netware was more complex, you couldn't reuse your Windows skills, and there was less software available.

        NT4 was OK if the network wasn't large. It just made more sense with NT4 clients; Win9x wasn't really designed for networks.

        OS/2 remained a small niche.

        1. MJI Silver badge

          Re: "It took Redmond until 2000 to create a usable server edition."

          Netware was brilliant, Windows networking was such a let down afterwards.

          How do you make a Windows NT server 4x quicker? Stick Netware on it.

          1. Ken Hagan Gold badge

            Re: "It took Redmond until 2000 to create a usable server edition."

            "How do you make a Windows NT server 4x quicker? Stick Netware on it."

            But in doing so you make it 4x more expensive. Not that NT was cheap, but Netware was pretty pricey and the difference was enough to pay for a substantial hardware upgrade.

            1. Mark Dempster

              Re: "It took Redmond until 2000 to create a usable server edition."

              >But in doing so you make it 4x more expensive. Not that NT was cheap, but Netware was pretty pricey and the difference was enough to pay for a substantial hardware upgrade.<

              The genius thing that MS did to beat NetWare was not to bother enforcing user licence counts...

              You could run 100 or more users on NT with only a 5 CAL setup as long as you weren't worried about the legalities - and many companies weren't. NetWare enforced the user count very strictly, so you HAD to buy the appropriate number of (very expensive) licences.

              That's why you rarely see Netware any more.

      2. caitlin.bestler

        Re: 16MB?

        It's easy to forget how long it took for NT to make its public debut.

        Does anyone else remember when "NT" allegedly stood for "Not There"?

        1. Anonymous Coward
          Anonymous Coward

          Re: 16MB?

          "Does anyone else remember when "NT" allegedly stood for "Not There"?"

          NT meant "Not Tested" in my neck of the woods.

          1. Jude Bradley

            Re: 16MB?

            Or "Nicked Technology".

    4. Anonymous Coward
      Anonymous Coward

      Re: 16MB?

      .... and if you had one of those video capture cards, you lost a MB!

      QEMM for expanded, extended and conventional memory shenanigans!

  5. Anonymous Coward
    Anonymous Coward

    25 years and still a PITA

    They should have got the effing thing sorted out by now but... nope.

    Everything after Windows 7 is a step backwards if you ask me.

    1. Doctor Syntax Silver badge

      Re: 25 years and still a PITA

      "Everything after Windows 7 is a step backwards if you ask me."

      I'd go back to W2K. Missed XP. W7 just filled up its VM space doing almost nothing but updates and I couldn't be bothered to give it more.

      1. asdf

        Re: 25 years and still a PITA

        >I'd go back to W2K.

        I did for one of my old laptops (a very old Thinkpad) since I needed a true 9-pin COM port I could lug around easily on site. Since the software also needs the VB6 runtime, double the reason not to upgrade. Of course that laptop is used for instrumentation testing only and never goes on the real network. Still runs like a champ, and I even shoehorned an old version of Cygwin on there.

        1. asdf

          Re: 25 years and still a PITA

          Meant to say 9-pin serial COM port, as the USB-to-serial nonsense is a dog's breakfast and not worth the hassle, especially when diagnosing comms issues with old equipment, where you want fewer variables, not more. Went W2K instead of XP due to lower memory usage, which was sometimes an issue before I increased the laptop's memory.

  6. Anonymous Coward
    Trollface

    Obviously...

    Wow, peeps have been hating for 25 long years.

    1. Flocke Kroes Silver badge

      Re: Obviously...

      I did notice the icon, but that is not how things were at the time. Although the Linux kernel pre-dates NT, practical Linux distributions arrived at a similar time. PCs came with DOS/Windows/95/98/ME bundled. NT cost extra, and if you wanted Linux you near enough had to assemble your own PC from supported components. To start with, Penguins were few and far between and likely to have dual-boot machines.

      I think the hatred (from both sides) started around the time of XP. Microsoft decided that home users would use a cheap DOS-based OS and that business users had to buy XP. Linux was a minor irritation because it came with all the software you needed for free, and had a proper CLI and a better GUI, but few people had even heard of it. Microsoft knew they had to crush it so they could charge monopoly prices for XP.

      For me it was no contest: my legacy DOS software would not work with XP but did work with DOSEMU on Linux. Microsoft trying to trash the boot loader to prevent dual-boot systems was not appreciated. The hate came from Microsoft forcing OEMs to bundle their OS with new computers. That could be avoided by building your own desktop, but Linux laptops (when you could find them) had the extra cost of Windows without the price reduction from crapware. The FUD and lies from Microsoft got tiresome really fast, went on for years and became SCO vs World.

      At some point, Microsoft noticed they were fighting the wrong battle and their real problem was Android - Linux with the best bit (the license) circumvented. My own hatred for the Microsoft tax faded years ago because I no longer need a desktop, Vista meant I got my laptop cheap and now that it is starting to fall to pieces I am building a sturdy wooden modular replacement from components. (That project is more because of OEMs deciding that I have to buy thin and fragile than anything I can blame Microsoft for.)

      1. PhilipN Silver badge

        Re: The hatred runs deep

        And originates from long before XP, and even before 95. 'Cos, for example, the favoured word processor early on became WordPerfect, and whilst its series of owners tried to stay ahead of, and then match, Word feature for feature, MS built undocumented APIs into each iteration of Windows to make WordPerfect explode.*

        By comparison OS/2 had an excellent basic set of workhorse programs (Faxworks for example had a built in graphics subsystem so you could amend and annotate faxes, add signatures etc on the screen 20 years ago or more) then the developer’s website would suddenly and “inexplicably” disappear, the supposition being that the developer, understandably, succumbed to the Microsoft shilling.

        *Remember there was an OS/2 version (5.2 if memory serves) of WordPerfect but at a time when cross-platform compatibility was essential (that is, until MS succeeded in making compatibility unnecessary) it was ill-formed. Basically a pig’s ear.

        But then there was a Linux version too, which did not justify the name WordPerfect. Not just the ear - the whole pig!

        1. AndrueC Silver badge
          Happy

          Re: The hatred runs deep

          Remember there was an OS/2 version (5.2 if memory serves) of WordPerfect

          Ah but why would anyone use that when there was DScribe? Possibly the best Word processor I've ever used. Hey ho.

      2. Anonymous Coward
        Anonymous Coward

        Re: Obviously...

        What alternative reality do you come from - or maybe you were born around 2000? Your post is full of factual errors.

        Just to start, when XP was released, DOS and the Win 9x line were killed - both consumer and business users were to use an NT kernel. Anyway, MS had been hated for a long time already by then.

        The way it moved to crush competition in the 1990s brought a lot of it - mostly deserved - but then the competition were the likes of IBM, DR, Lotus, WordPerfect and Borland, not the then-unknown Linux, which became usable for generic users only towards the end of the decade (i.e. the KDE project was started only in 1996), when MS had already achieved its goals and then ran into the antitrust investigations.

        Don't know what DOS software you were running, because thanks to the Virtual 8086 mode of the CPU, supporting DOS software was much easier - and I don't know about NT 3.x, but it was easier to support DOS games (in 1994 I was running LucasArts' "Tie Fighter" without issues in an OS/2 3.x DOS box, and Turbo Pascal 7) than Win9x ones, which required DirectX support NT didn't have (NT4 supported only DirectX 3) and which would come later in 2000 and especially in XP.

        Direct access to the hardware was not an issue because Virtual 8086 mode trapped it and gave the OS a way to emulate the operations. Of course devices not emulated could have been an issue.

        Android would come much, much later, Linux was able to erode the lucrative server OS market much earlier, as soon as it became a viable replacement for more expensive Unixes on powerful enough hardware.

        1. Flocke Kroes Silver badge

          Re: Obviously...

          Microsoft have been shits for a very long time, but the comment I was replying to was about 25 years of hate. I started with TᴇX on Unix before OS/2 existed. One large project with Microsoft Word sent me running back to TᴇX, and these days I prefer python/reportlab. I was aware of the OS/2 and WordPerfect battles but as they did not affect me directly I do not know if they are a good match for the 25-year time frame. OS/2 started 31 years ago and the last release was 16 years ago. Perhaps it is a reasonable fit if MS started their attack 25 years ago and you are really persistent at holding a grudge. WordPerfect started 39 years ago and much to my surprise is still going. Did Microsoft's hate against WordPerfect start 25 years ago? Have they done anything about WordPerfect in the last five years?

          The software I had problems with were cross compilers. I do not know why they did not work with NT and XP, but Microsoft's technical support was particularly unhelpful. They said: "God hates you."

          Oops: dinner time ... got to AFK

      3. david 12 Silver badge

        Re: Obviously...

        In 1989, Seattle was split into two camps: the overpaid and self-important MS employees, and the underpaid and self-righteous university (unix) community. At parties they would meet and ostentatiously ignore each other.

    2. vtcodger Silver badge
      Unhappy

      Re: Obviously...

      Not hatred exactly, but I thought at the time that NT -- no matter what its technical merits -- was a dubious idea. The problem I anticipated was that NT was never likely to be the server OS that Unix was even back then, and migrating the user OS away from a small, minimal core (i.e. MS-DOS) would mean that when the next generation of low-end devices came along, Microsoft wouldn't have an ecosystem that could be shoehorned into them.

      Pretty much what happened. Your cell phone doesn't run an NT-derived system because by the time the hardware became capable enough to support one, other OSes owned that market. And neither does all the annoying IoT stuff -- largely for the same reason.

      1. Anonymous Coward
        Anonymous Coward

        " because by the time the hardware became capable enough to support one"

        Actually by the time the hardware was capable of running Android, it could run an NT kernel as well. Windows Phone 8 was less resource hungry than Android.

        DOS was so limited, and so often bypassed, I wouldn't call it a core nor a kernel.

        Neither Unix nor Linux nor Windows is a microkernel. The Windows kernel is not large - it's just that, until a few years ago, you couldn't make a "compact" install of Windows; it installed a lot of code and services anyway, which you could only disable later. For example, Windows Server has full Active Directory support installed, and you can make a Domain Controller just by running the dcpromo utility.

        It is true you can often run Linux on older, less powerful hardware - but for simpler tasks as well. As soon as you have similar needs, the hardware is more or less the same.

        1. CrazyOldCatMan Silver badge

          Re: " because by the time the hardware became capable enough to support one"

          As soon as you have similar needs, the hardware is more or less the same

          My last place, we had a number of webservers - two linux boxes (main and failover) and one IIS box. The IIS box cost 4 times more than the linux boxes because of the spec 'required' to run IIS rather than Apache & OpenCMS..

          1. Anonymous Coward
            Anonymous Coward

            "because of the spec 'required' to run IIS"

            Just IIS alone? Or the application needing it, and its runtime/framework? Comparing apples with oranges?

            For example, just install any Java application in a Linux box, and your hardware needs skyrocket...

    3. Doctor Syntax Silver badge

      Re: Obviously...

      I can tell you when my hatred started. Late '90s some complete and utter eejit in their advertising pestering department decided on a gimmick. They would get a magazine publisher to put a gob of the sticky stuff used to attach floppies between two pages, with the slogan "Don't get stuck with Microsoft".

      I suppose said eejit in his idiocy thought it would simply peel off with no harm done. It didn't always do that on magazine covers and stood no chance of being got off the flimsy pages without tearing. The eejit also hadn't realised the slogan was ambiguous. As a reward for such an arrogant tampering with what I'd paid good money for (and to the other advertisers who'd paid good money to buy space on the same pages) I decided to take the meaning they didn't intend and avoid getting stuck with them as far as possible in the future.

      Back in the early days I had their FORTRAN for CP/M which seemed a bit of a miracle although I suppose even a Z80 box had more memory and storage than I was allotted on the University mainframe a few years earlier. And Windows itself was quite welcome when it first arrived: I could run an X-server on it to connect to the HP-UX boxes I was responsible for or, later, just multiple terminal sessions.

      But Microsoft, over the years, have brought the hate on themselves through the sheer arrogance of their behaviour.

    4. caitlin.bestler

      Re: Obviously...

      What makes you think people had to wait for its release to hate it?

  7. Herring`

    NT

    I moved (briefly) from developing on DOS/Win3.x to a stint on OS/2 2.1. I thought it was lovely - and didn't need as much RAM as NT. The only downside was the massive stack of floppies for an install.

    1. anothercynic Silver badge

      Re: NT

      You've never tried to install OS/2 Warp then... Jesus Mary and Joseph... 21 3.5" disks, and the installer insisted on *every single one*.

      1. Herring`

        Re: NT

        I've done a few Netware 2.x installs entirely from floppy. I think that wins. That uses a bunch of disks several times.

        From a development perspective, OS/2 2.1 was sooo much nicer than Win 3.x

        1. error 13

          Re: NT

          yeah, Netware 2 was on 5 1/4 inch real floppies - at least Netware 3 and above came mostly on solid ones :)

          MS Office used to come on floppies too. That was a tedious installation...

      2. Anonymous Coward
        Anonymous Coward

        "OS/2 Warp"

          Warp came on a CD too - I got it that way - although IIRC it still required five or six floppies to boot before it could read the CD. But IIRC the update packs, or whatever they were called, came on floppies only - or I was never able to get them on CDs.

        1. 2Nick3

          Re: "OS/2 Warp"

          You could copy the contents of the floppies to a directory and install from there. I ran an OS/2 2.11 farm and would copy the floppies to one server then run the install across the network from there. Saved eons of time having to install a FP on 80 servers.

          Developed that out to a repository for all of the packages (Notes, CM/2, ADSM, etc) and fixes on the servers. Once I was done with the base OS install on a new machine I didn't have to touch a floppy or CDROM again.

        2. anothercynic Silver badge

          Re: "OS/2 Warp"

          Yes... the CD... CD-ROMs were a novelty when I first laid my hands on Warp!

        3. CrazyOldCatMan Silver badge

          Re: "OS/2 Warp"

          Warp came on a CD too I got it that way - although IIRC it required still five-six floppy to boot before it could read the CD

          Two floppies were all that you needed unless you had some *really* exotic hardware..

          (My copy of OS/2 Warp came bundled with a SB16 sound card and a CD-Rom drive to hook up to it).

          1. AndrueC Silver badge
            Boffin

            Re: "OS/2 Warp"

            (My copy of OS/2 Warp came bundled with a SB16 sound card and a CD-Rom drive to hook up to it).

            That reminds me of one of the fixes I saw while browsing through the bug list. It stuck in my mind because it showed how much of an effort IBM was making to ensure VDM<->DOS compatibility. And it sounds like 'cool geeky programmer stuff' :)

            There was a very good golf game for DOS. For its time, graphically excellent. Digitised images for the course and contours for the greens. While you were playing there'd be bird song and occasionally running water. The fix I remember seeing was for the sound card. Apparently the game was causing problems because it tried to send the samples to the card 10,000+ times a second and the VDM couldn't service the interrupts that fast. I think the fix IBM implemented was to have OS/2 take over controlling the sound card so that the VDM didn't need to raise interrupts. So presumably they emulated the SB hardware for the VDM. Cool stuff :)

            Mind you, I also remember them getting snippy because so many joysticks of the time were not programmable and that broke their driver model. A similar problem was that they expected all printers to be connected using Centronics cables with the Acknowledge pin wired up and functional. They seemed quite offended when they discovered that most printers of the time had that feature disabled and/or the owner was using a cheap cable that didn't have the pin connected.

            1. Charles 9

              Re: "OS/2 Warp"

              "There was a very good golf game for DOS. For its time, graphically excellent. Digitised images for the course and contours for the greens. While you were playing there'd be bird song and occasionally running water."

              Sounds like Access Software's Links. Played it quite a bit in the 486 days along with its successor Links 386Pro (which allowed SuperVGA resolutions). Eventually acquired by Microsoft and rolled into Microsoft Golf, came and went (though I don't blame Microsoft for this--the push to realtime 3D golf rendering by the 5th console generation rendered the Links engine obsolete IINM).

              1. GruntyMcPugh Silver badge

                Re: "OS/2 Warp"

                @Charles 9 "Sounds like Access Software's Links."

                That sounds familiar,... I was an operator during this era..... there were periods of the night shift where we were under utilised,.. ahem.

                1. AndrueC Silver badge
                  Thumb Up

                  Re: "OS/2 Warp"

                  Yes, it was. I remember the way the greens drew themselves (2:46 in). A good game.

                  Although its portrayal of a golf swing appears to have a bit of a reverse 'C' going on so not to be emulated unless you enjoy visiting a chiropractor :)

      3. Doctor Syntax Silver badge

        Re: NT

        "21 3.5" disks, and the installer insisted on *every single one*."

        I don't remember what Xenix used but I don't think it was quite that many.

        I had a SCO install which came on a CD but needed a floppy to boot. It wouldn't install on VirtualBox even if you could get a copy of the floppy onto it - it didn't like the emulation. I had a few clients with Informix on SCO (the staple of a lot of small businesses at one time), so having that on a laptop was quite useful. About the time laptops no longer had floppies, Linux became mature enough to use without spending more time fiddling with it than doing actual work (KDE 5 is making me think that things are going backwards).

        1. Destroy All Monsters Silver badge

          Re: NT

          > KDE 5 is making me start thinking that things are going backwards

          Oh, so I'm not the only one.

          (And stop banning my comments, El Reg, shit's worse than liberal lily webzines like HuffPo)

        2. hopkinse

          Re: NT

          I threw out a set of SCO Xenix/386 install disks not that many years ago that had been used in anger circa 1991/92. They were on 5.25" disks and there were shitloads of them - can't remember quite how many but I remember them plus the manuals in their boxes took up a couple of shelves in a big bookcase in our office.

      4. AndrueC Silver badge
        Happy

        Re: NT

        Lol, reminds me of the one time I raised a support ticket with IBM. I noticed that on the UK version of Warp there was a solitary full stop below the copyright message. Being young and naive and therefore a fanboi(*) I reported it. Two months later, out of the blue, I got a parcel: a Jiffy bag with the latest service pack on floppy disks and a note saying that the SP included a fix for my issue. To this day I don't know whether to consider that extremely good customer service or a pointless waste of IBM's resources :)

        (*)I am no longer so naive as to be a fanboi (a good thing) but also not as young (a bad thing) :)

  8. Doctor Syntax Silver badge

    "NT 4, in 1996, is peak Windows as far as this grizzled hack is concerned, before NT was retooled for consumers with the launch of Windows XP in 2001."

    You missed W2K?

    I migrated my W2K VM from my old to new laptop this morning. It runs the one application I can't get running under Wine and couldn't find a decent replacement for under Linux. I'm trying to decide whether to migrate the W7 VM. Probably not.

    1. Dan 55 Silver badge

      Just out of interest, does ReactOS run the software?

      1. Doctor Syntax Silver badge

        "Just out of interest, does ReactOS run the software?"

        Not tried but probably not. It actually fails to install properly as far as I can tell and the bastard vendors had no interest whatsoever in fixing it. It needs to contact their servers to register although IIRC there was a means to register it by contacting them off-net. But it's a long time since I bought it and I don't know if I could even register a re-install so the easiest thing is simply to keep it on a VM where it's registered and working.

  9. Admiral Grace Hopper

    NT4

    NT4 was where MicroSoft lost the plot. Moving the GDI to Ring 0 was my first WTF moment with Windows. Were they really prepared to compromise the stability of a server OS to keep the GUI running? They were.

    Oh dear.

    1. Anonymous Coward
      Anonymous Coward

      Re: NT4

      Just look at where Linux graphics code runs now - the same ring. Since a lot of processing now happens in the graphics card's acceleration hardware, and that's strictly tied to the driver, which needs to run in the kernel to talk to the hardware, there's little choice but to run most of the graphics code there unless you like slow performance - ring transitions can be very costly. Just look at how the fixes for Meltdown slowed things down... and why OSes took the shortcut of mapping kernel code into user space, protecting it only with the paging mechanism - *and all OSes did it*, Linux included.

      Stability depends mostly on the quality of the driver, and especially in the past a lot of cheap graphics cards came with bad drivers. The real downside is security, because things like font processing may be used as attack vectors.

      1. Charles 9

        Re: NT4

        So what about microkernel environments, where as little of the kernel is exposed as possible? And virtualization, which necessarily involves a lot of time in Userland? Hasn't there been something of a push to pull more performance-intensive stuff (including graphics and low-latency networking) into Userland to avoid the costly context switching and insulate against rogue processes?

        1. Anonymous Coward
          Anonymous Coward

          "So what about microkernel environments"

          Ask yourself why microkernels like MINIX or Hurd went nowhere, and the only commercial implementations are at best "hybrid" ones like macOS.

          Intel envisioned something like a microkernel with its four rings (0 - kernel, 1 - I/O, 2 - OS services, 3 - applications), but the cost of so many transitions was too high. AFAIK, nobody used such an architecture (which was also not portable, since most CPUs had only two privilege levels). More separation and slower "communication" mechanisms across levels mean slower performance, albeit better security.

          Modern virtualization allows a guest OS to run in ring 0 for performance (avoiding emulation, which itself runs in ring 0) - and that's why "ring -1" was added.

          The problem with graphics and networking is they need to talk to the hardware. In the Intel architecture, you can access I/O ports directly from any ring <= the IOPL setting (but that's set to 0 in most OSes), or you can touch memory-mapped hardware if you can access the physical memory it's mapped to - and that's again usually something only the kernel can do.

          If, and only if, you can do most processing in user space and then move the processed data to the kernel to be sent to the device, you can avoid most switches - otherwise you need to keep crossing back and forth.

          Take font display: every time you need to create a glyph you need to compute its image, antialias it, etc. You can send just the glyph "code" to the kernel and do all the processing there, or you'll need to go back and forth to the kernel to compute the correct image (because you may not want to duplicate a lot of kernel state in user space, which would anyway need to be kept in sync...), and then send the whole image to be displayed.

          Networking, on the other hand, if most of the protocol data can be computed in user space, may just need to hand the bytes to be transmitted to the driver in the kernel, which, depending on the transport protocol, may do little more than send them to the hardware to be transformed into electrical or optical signals.
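          The trade-off described above - do the expensive work in user space, then cross into the kernel once - can be sketched with a toy cost model. All the cycle counts below are made-up assumptions purely for illustration, not measurements of any real CPU:

```python
# Toy cost model (not real kernel code): compare one kernel round trip
# per glyph against rasterising everything in user space and handing
# the finished buffer over in a single call. Cycle counts are invented.

RING_SWITCH_COST = 1000   # assumed cycles per user->kernel->user round trip
RASTER_COST = 200         # assumed cycles to rasterise one glyph

def per_glyph_in_kernel(n_glyphs):
    # one ring round trip per glyph, rasterisation done kernel-side
    return n_glyphs * (RING_SWITCH_COST + RASTER_COST)

def batched_user_space(n_glyphs):
    # rasterise everything in user space, then a single call to hand
    # the finished buffer to the kernel/driver
    return n_glyphs * RASTER_COST + RING_SWITCH_COST

line = 80  # glyphs in one line of text
print(per_glyph_in_kernel(line))   # 96000
print(batched_user_space(line))    # 17000
```

          Whatever the real numbers, the batched approach pays the ring-transition cost once instead of once per glyph - which is the point being made above.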

          1. kventin

            Re: "So what about microkernel environments"

            """microkernels like MINIX [...] went nowhere"""

            Intel Management Engine runs MINIX 3

            what about L4 and QNX -- are they unsuccessful, too?

            re your glyph example: if TeX can use precomputed images, why can't font displays?

            1. Anonymous Coward
              Anonymous Coward

              "what about L4 and QNX -- are they unsuccessful, too?"

              I was speaking about generic desktop/server OS - embedded and other niche uses may be better served by other architectures, where the issues that may make them bad on a generic desktop/server OS may not exist at all, while they can solve others.

              You can't easily pre-compute each font glyph (at any size?) - because its actual display may depend on what's before, after and below it. And often, it needs to happen in real time. Vector font formats are very complex, and are designed for high-end typography needs. Maybe overkill for many users, but there is also the DTP crowd and the like - and you may want a PDF to be displayed as designed. You zoom, and glyphs have to be re-computed.

              1. Charles 9

                Re: "what about L4 and QNX -- are they unsuccessful, too?"

                "You can't easily pre-compute each font glyph (at any size?) - because its actual display may depend on what's before, after and below."

                But what about computing font glyphs in userland THEN pushing it on to the kernel for compositing like a layer? I can see things like 3D rendering necessarily being kernelland because it's the GPU that does the actual lifting; same with video acceleration. But fonts?

                1. Anonymous Coward
                  Anonymous Coward

                  "But what about computing font glyphs in userland THEN"

                  Too many ring switches, I'm afraid. For example, antialiasing needs to know what's behind the glyph (which may not be known by the application, if it's not under its control). Font antialiasing may use "hints" inside the font data to stop glyphs looking wrong.

                  Also, font rendering may be hardware-accelerated as well. With the increased size and resolution of display devices, even displaying 2D objects had to be accelerated. Windows' old GDI is often too slow for some tasks, and that's why the hardware-accelerated Direct2D was introduced.

                  Still, you can rasterize fonts in user space and send the result to the kernel for display - AFAIK there are some libraries that do that - but it's simpler when you can pre-render a whole static output (i.e. a PDF page) than when you need to manipulate a dynamic output.

                  It is true that maybe the new wave of flat, solid color designs may not need it, but remember Windows 7 enabled the "aero" interface only if the underlying hardware was good enough, and some effects could be disabled if there's not enough power.

                  1. Charles 9

                    Re: "But what about computing font glyphs in userland THEN"

                    "It is true that maybe the new wave of flat, solid color designs may not need it, but remember Windows 7 enabled the "aero" interface only if the underlying hardware was good enough, and some effects could be disabled if there's not enough power."

                    I believe the key requirement here is GPU compositing. That's why Aero automatically turns off if you use something like a screen mirror driver (like DFMirage, recommended for use with VNC on Windows): the screen buffer has to be in main memory for a mirror driver to work properly.

          2. Charles 9

            Re: "So what about microkernel environments"

            "Ask yourself why microkernels like MINIX or Hurd went nowhere, and the only commercial implementations are at best "hybrid" ones like macOS."

            What about QNX, then, used in BB10? Even the memory manager in QNX is a separate process, yet it doesn't seem to have performance issues..

            1. Anonymous Coward
              Anonymous Coward

              Re: "So what about microkernel environments"

              Not exactly a successful example <G>.

              Still, it's a phone - it doesn't have the multiprocessing/multitasking needs of a server. I think Canon's DRYOS is also a microkernel, and it runs many millions of cameras - but again, it's a specific OS for very specific needs.

              1. Charles 9

                Re: "So what about microkernel environments"

                "Still ut's a phone, it doesn't have the multiprocessing/multitasking needs of a server. I think Canon's DRYOS is also a microkernel one, and it runs many millions of cameras - but again, it's a specific OS for very specific needs."

                But it having a GUI means it has to tackle one of those bug-a-boos: graphics performance. And based on what I've read, a BB10 phone CAN do some decently-demanding stuff like 1080p video. Either BB10 breaks the QNX microkernel segregation or they found a way to get good hardware-accelerated performance out of a microkernel. Which?

                1. Anonymous Coward
                  Anonymous Coward

                  "Either BB10 breaks the QNX microkernel segregation or they found a way"

                  It's much easier when you have a single application taking the whole screen. Windowing makes processing heavier. Even with DirectX or the like, performance is often better when you switch to full screen than when running the application in a window, especially for games.

                  Also, it runs on ARM, not Intel. Intel ring transitions imply a lot of security checks, structure lookups and loads - it's one of the reasons they are slow. ARM has a simpler model, and it could be faster.

                  That's why faster instructions like SYSCALL/SYSENTER were introduced later - far less versatile, but created to support the way OSes actually use the CPU to call into kernel code. Also, if you can't pause, or give far lower priority to, other processes/threads, switches happen far more often.

                  Anyway, from the Wikipedia page (don't know how reliable it is): "Later versions of QNX reduce the number of separate processes and integrate the network stack and other function blocks into single applications for performance reasons."

    2. The Average Joe

      Re: NT4

      But they moved print drivers out of the kernel - or at least the default was out; you could still run old ones and kill your terminal server with just one print preview!

      1. Charles 9

        Re: NT4

        "but they moved print drivers out of the kernel, or the default was out, you could still run old ones and Kill your terminal server with just 1 print preview!"

        As you say, but apparently one of the chief complaints about early NT was terrible graphics performance, even for the most basic stuff. And that was because graphics were originally kept in Userland as much as possible, BUT because of the realities of graphics hardware, that caused massive context thrashing. It's not like Microsoft was entirely to blame for moving graphics drivers back into Kernelland - they were under pressure to get performance back up to speed or people would stick with the old Windows, even at a time when 2K represented the final push before XP deprecated 9X.

      2. Flakk

        Re: NT4

        But they moved print drivers out of the kernel - or at least the default was out; you could still run old ones and kill your terminal server with just one print preview!

        Assuming a shop used HP printers (back when HP still made printers, as opposed to today's cheap consumables vacuums), you only needed two drivers to support all of the terminal users: the HP LaserJet II PCL driver and the HP LaserJet II PS driver. HP printers through at least LaserJet 6 could use those two drivers just fine. Didn't have Terminal Server print driver problems (though I did have my fair share of other problems).

      3. Anonymous Coward
        Anonymous Coward

        Re: NT4

        Printer drivers are not "interactive" - unless you still use line printers and like to see a character at a time. If you're generating PS/PCL or the like, maybe directly from data already stored in user space, and then send it to the printer, there's little need to do it in kernel.

        "Interactive" computation like performing a 3D model transformations in real time (games... but not only), may work better if you send everything in kernel, and then just tell what transformation you need each time you need it displayed, maybe several times per second...

    3. CrazyOldCatMan Silver badge

      Re: NT4

      Were they really prepared to compromise the stability of a server OS to keep the GUI running

      After all, why does a server need a GUI? Everything is done via a CLI!

      (Unless you use point 'n drool)

      1. Anonymous Coward
        Anonymous Coward

        "After all, why does a server need a GUI? Everything is done via a CLI!"

        CLIs, like GUIs, are excellent at some tasks, and dreadful at others. I prefer to have both - so I can use the best for each task. While back then GUIs were more fashionable, until Linux made the CLI sexy again...

    4. Snake Silver badge

      Re: NT4

      Thank you, I was wondering when / if someone else would mention this! OLE to Ring 0 calls?! Really??!

      At the time I sent the full technical details of that change, plus plenty of writeups from technical journals, to the U.S. Navy Training Support Center in San Diego, CA. Knew a flyer stationed there and it turns out that they were educating [him] on the new NT4 and had no idea about the compromised kernel! Got them straightened out in a hurry! :p

  10. Christian Berger

    Well cutting the old cruft didn't really work

    One of the main selling points for any version of any product from Microsoft was that you could run your software from previous versions.

    That's why Windows NT still contained the incredibly messy WinAPI which, because it had no way of generalizing things, had a function for every feature imagined by the creators, as well as data structures where the things you were interested in were declared "reserved, do not use". The API was so bad that people resorted to reading the stack in callback functions to get more information from the system.

    Then you had features deliberately put in to harm your competition by making it harder for them to implement them. SMB is said to have quite some feature duplication, apparently developers didn't read their own code.

    The problem for Microsoft is that they cannot get rid of this. Any change means losing backwards compatibility. Any loss in backwards compatibility means that Wine and ReactOS will look like better alternatives.

    1. Anonymous Coward
      Anonymous Coward

      "SMB is said to have quite some feature duplication, apparently developers"

      SMB has its roots in a lot of IBM code... and dates back to before Windows, when IBM designed a protocol that could work over NetBIOS/NetBEUI; back then it had to work on non-TCP/IP transport protocols as well (especially IPX).

      Some of the "reserved" stuff was "reserved" because it could change in later versions. The fact that people messed with them just made Win32 even more messy when MS had to cope with compatibility - read Raymond Chen's "The Old New Thing" blog for many examples.

      But it is true it was also used as a competitive advantage.

    2. cornetman Silver badge

      Re: Well cutting the old cruft didn't really work

      > The problem for Microsoft is that they cannot get rid of this. Any change means losing backwards

      > compatibility. Any loss in backwards compatibility means that wine and ReactOS will look like better

      > alternatives.

      Actually, we're already there.

      An old 16-bit Windows game that we tried to run under Windows 7 - ah, no 16-bit subsystem - runs perfectly under Wine on Linux.

      Admittedly, small potatoes yet, but Microsoft are shackled to compatibility as one of their key marketing advantages.

      1. Anonymous Coward
        Anonymous Coward

        "ah, no 16-bit system"

        Only in 64-bit versions. And only because AMD removed Virtual 8086 mode from the CPU when it's running in 64-bit mode. So you'll need to use some kind of emulation or virtualization.

  11. jeffdyer

    Microsoft have never had a capital S in soft, sorry.

    1. Sgt_Oddball

      I think people get too used to seeing Micro$oft...

      1. Admiral Grace Hopper

        Guilty as charged. I seem to have formed a false memory on this one. They have had some striking logos along the way but none with capital S.

        1. Mage Silver badge

          some striking logos along the way

          That article has some glaring errors. DR GEM, IBM, MS & Apple all copied Xerox more than each other. It was the Lisa (the pilot Mac :)), not the Apple II, that represented any real step forward in HW & SW. The Apple II was dreadful and a success mostly due to VisiCalc. I had one, as well as, later, an RM380Z, ACT Sirius 1, original IBM and Apricot.

          1. Daniel von Asmuth
            Gimp

            Re: some striking logos along the way

            "The Apple II was dreadful and a success mostly due to Visicalc."

            The Apple ][ had its problems, but the Macintosh was really dreadful... if Gates had intended us to use a mouse, He would have given us three hands.

        2. Dan 55 Silver badge

          In the early days (i.e. their quick and dirty BASIC that they flogged to all and sundry), Micro-Soft was a thing in copyright notices but as someone mentioned above the logo was all in capitals.

    2. Christian Berger

      It is also spelled MikeRoweSoft.

    3. sinsi

      Technically, they did - the first logo was all capitals.

    4. Sanguma

      capital S in soft

      IIRC, in their very first years when they were hawking their Altair BASIC, they were Micro Soft.

  12. Tom 38

    Hmm

    NT 4 was great compared to 95/98, but 2000 (NT5) was a pretty decent upgrade to NT 4 - somewhat better driver support, for one.

  13. 0laf
    Thumb Up

    We still had an NT box doing actual work until very recently (i.e. last 18 months).

    Coincidentally I found a genuine copy of W2k just the other day when I was tidying out some storage. I couldn't throw it away. I remember 2k very fondly having used it with my first proper job after uni in about 2001.

    1. Version 1.0 Silver badge
      Thumb Up

      I think I still have the NT 4.0 kit - it's all on 3.5" floppy disks

  14. Anonymous Coward Silver badge
    Linux

    "NT 4, in 1996, is peak Windows as far as this grizzled hack is concerned"

    NT5 for me. AKA Windows 2000. The first (somewhat) stable desktop OS I encountered, and it didn't slow down quite like XP (NT5.1) did.

    1. Mage Silver badge

      it didn't slow down quite like XP (NT5.1) did

      NT 4.0 was running fine as a server for us with about 20MB of RAM. But the screen was only 800 x 600 @ 8 bits.

      XP needed about 90MB of RAM, and each SP needed more. Double-buffering a 1600 x 1200 screen @ 24 or 32 bits adds a lot more RAM usage and slows it down.
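      The framebuffer claim above is easy to put rough numbers on (back-of-envelope only - this ignores row pitch, alignment and any extra surfaces the driver keeps around):

```python
# Back-of-envelope framebuffer sizes for the resolutions mentioned above.
def framebuffer_bytes(width, height, bits_per_pixel):
    # one full screen buffer, no padding assumed
    return width * height * bits_per_pixel // 8

mb = 1024 * 1024
# 800 x 600 @ 8-bit (the NT4 server screen mentioned)
print(framebuffer_bytes(800, 600, 8) / mb)        # ~0.46 MB
# 1600 x 1200 @ 32-bit, double-buffered
print(2 * framebuffer_bytes(1600, 1200, 32) / mb) # ~14.6 MB
```

      Roughly half a megabyte versus nearly fifteen - a real chunk of a 90MB machine.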

      AV software or crapware slowed XP; otherwise it didn't "slow". I have an XP laptop, bought in 2002 and re-installed once in 2003, that I only stopped using regularly 18 months ago. It never "slowed down". Its 1.8GHz P4 & 1600 x 1200 screen are still superior to the average supermarket laptop sold with Win 10.

      I have some Win10 gear, but everyday use is now Linux Mint, Mate desktop and customised TraditionalOK theme on Lenovo E460.

    2. FrankAlphaXII

      Quite agreed.

      I never actually used XP on any of my machines - I kind of hated it - but I had to support it on friends', other Soldiers' and family members' computers for about five years. I still think it looks like a butt-ugly Fisher-Price OS unless it's being run with the classic theme.

      My Windows 2000 desktop system ran like a dream until we had a weird January thunderstorm and lightning strike which nailed my apartment building, resulting in a massive hardware failure the same damned day Vista came out. After getting new hardware I couldn't find my Windows 2000 install media, which had been lost in one of the Army's famous Permanent Changes of Station when I was junior enlisted and active component. It's probably still in some Army warehouse in Texas or California.

      I hated Vista even more than I had hated Windows XP, until Windows 7 came out.

      1. Doctor Syntax Silver badge

        "we had a weird January thunderstorm and lightning strike which nailed my apartment building, resulting in a massive hardware failure the same damned day Vista came out."

        So that's where it happened. The heavens had to protest somehow and thunderbolts are the standard way.

    3. commonsense

      "NT5 for me. AKA Windows 2000. The first (somewhat) stable desktop OS I encountered, and it didn't slow down quite like XP (NT5.1) did."

      Ditto. I used to dual boot NT4 and 95/98, until 2000 came along, with decent hardware support (eventually), DirectX support, multiple monitor support without the need for a specialist card, a satisfactory level of stability. Nothing but fond memories.

  15. The Average Joe

    25 years and the clustering is still not as good as VMS...

    what a waste. Clustering today sucks compared to VMS. Dave had the chance and he blew it.

    1. Yet Another Anonymous coward Silver badge

      Re: 25 years and the clustering is still not as good as VMS...

      Or a versioned filesystem that a regular user could use to get data back (yes I know ZFS is very clever but shadow copy is not the same as "file.txt;1" )

      1. CrazyOldCatMan Silver badge

        Re: 25 years and the clustering is still not as good as VMS...

        I know ZFS is very clever but shadow copy

          ZFS isn't available on Windows... I think you mean NTFS (or possibly VSS)

        1. Ross 12

          Re: 25 years and the clustering is still not as good as VMS...

          Also NTFS does have 'streams', so you'd just do 'file.txt:1'

          or perhaps move the current version to file.txt:previous before saving file.txt

          It was a badly supported feature though and as far as I know, barely ever used

        2. Yet Another Anonymous coward Silver badge

          Re: 25 years and the clustering is still not as good as VMS...

          ZFS isn't available on Windows.. I think you mean NTFS (or possibly VFS)

          I meant that ZFS has some of the capabilities of the VMS filesystem, 30 years later, plus a bunch of more complicated improvements. But you still don't have such an easy way for a user to get back previous versions of a file.
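          The convenience being missed can be mimicked (crudely) in user space - a toy sketch of VMS-style ";N" version numbering on an ordinary filesystem, nothing like the transparent, filesystem-level versioning Files-11 actually provided:

```python
# Toy sketch of VMS-style "file.txt;1" version numbering, done in user
# space on an ordinary filesystem. Purely illustrative - VMS did this
# inside the filesystem, transparently to every program.
import os
import re

def save_versioned(name, data, directory="."):
    """Write data as name;N, where N is one past the highest existing version."""
    pattern = re.compile(re.escape(name) + r";(\d+)")
    versions = [int(m.group(1))
                for f in os.listdir(directory)
                if (m := pattern.fullmatch(f))]
    n = max(versions, default=0) + 1
    path = os.path.join(directory, f"{name};{n}")
    with open(path, "w") as fh:
        fh.write(data)
    return path
```

          Saving "file.txt" twice yields "file.txt;1" then "file.txt;2", and getting an old version back is just opening the lower-numbered file - the bit that shadow copy and ZFS snapshots never made that easy for a regular user.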

  16. Wolfclaw

    NT4 was nice to use compared to W3.x. W10 needs to kill pre-W7 support, then pre-W8 a year later, and then finally pre-W10 in another 2 years - and 64-bit only!

    1. Charles 9

      But one of the key selling points of Windows is backwards compatibility. I doubt Microsoft will be willing to kill that nest egg.

  17. AndrueC Silver badge
    Happy

    That was one of a handful of things that OS/2 did a lot better. OS/2 VDMs were almost hypervisors - so low-level that you could actually boot them off a floppy disc. They could run just about any DOS application you cared to, including games, and still get crash protection. I remember playing Geoff Crammond's first F1 simulator while downloading from CompuServe in the background.

    The other thing it did better (at least in concept) was having an object oriented shell. The implementation was a bit rough but conceptually a very powerful idea.

    I also thought its memory management was better, being similar to that of Unix. RAM was just the fastest form of storage and no attempt was made to keep it free. Unfortunately it led to a lot of support queries from people wondering why they never had any free RAM but I prefer the idea of letting RAM backing 'evolve' rather than the original Windows idea of continually trying to trim working sets.

    And of course OS/2 had REXX.

    Ah well - that was then and this is now :)

    1. CrazyOldCatMan Silver badge

      And of course OS/2 had REXX.

      Which started out on their mainframes - I remember using Rexx in VM/CMS back well before OS/2..

      And it's now available on Linux.

      1. rnturn

        Rexx?

        I know it was available but it never seemed to be able to displace EXEC/EXEC2 among us FORTRAN-G/H programmers back in those days.

  18. Anonymous Coward
    Anonymous Coward

    NT4 uptime measured in years

    My personal file server is running NT4 Server (SP6a) and has been since 1998 or so. It has only ever been shut down due to hardware failure (fans, drives) or extended power outages. Uptime is normally measured in years. It just sits in the corner quietly doing its job of domain controller, DNS, DHCP, VPN, file sharing and printer sharing without complaining. It previously hosted a website getting 100,000+ visitors a day, and the only issue with that was MDAC falling over because (shocking) Access DBs aren't designed to be hammered like that. I keep thinking I will shut it down and replace it with something more modern, then I think "It works, why downgrade it to something that needs to be rebooted every second Tuesday for updates?".

    1. Mage Silver badge

      Re: NT4 uptime measured in years

      NT 3.51 was better, no GDI in Kernel. There was even a preview version of the Explorer shell. But MS wanted people to buy upgrades.

      That's also why there was no retail SP with USB support for NT 4.0. I had a preview of the cancelled SP in which USB worked. MS was worried an SP with USB would hurt Win2K sales, so it was never completed. XP is the finished version, but it was rushed and by SP3 got bloated. Also some stupid, gratuitous GUI / location changes in W2K and then XP.

      W7 is simply an SP-level fix of Vista, because by 2003 (NT 5.2) they'd lost the plot on Vista (NT6) development.

      Now NT (aka Win 10 - really Win 7.2, as Win 8 is really Win 7) is pointless.

    2. katrinab Silver badge

      Re: NT4 uptime measured in years

      My FreeBSD server hasn't been around quite that long, but I've had it for about 6 years now. Last reboot was 203 days ago to replace the motherboard. My highest uptime was about 4 years.

      1. Strange Fruit

        Re: NT4 uptime measured in years

        "Last reboot was 203 days ago to replace the motherboard."

        Trigger's broom?

  19. Mage Silver badge

    Exceptional HW & incompatibility?

    Why not compare with Xenix, OS/2, BSD Unix, AT&T UNIX and VMS resources?

    Comparison with MS-DOS/PC-DOS, Concurrent CP/M, CP/M 80, CP/M 86, Intel ISIS II, Apple II DOS etc is pointless.

    Curiously, there was an MS OS/2 which included MS LAN Manager in 1989 - is that why NT starts at 3.x? Also, NT ran OS/2 text-mode (console) programs on an OS/2 subsystem, MS-DOS COMMAND.COM instead of NT's cmd on an NTVDM, and 16-bit Win 3.x programs using WOW translation of the 16-bit WinAPI to the 32-bit NT API plus an NTVDM. So NT 3.51 & NT 4.0 ran mixed 16-bit/32-bit Win 3.x & Win95 programs faster on the Pentium Pro than Win95 did. Win9x killed the Pentium Pro.

    NT was held back for 10 years by the success of Win9x and by badly written Windows programs that ignored the security APIs and needed Admin rights. Properly written Win32 programs, even ones written for NT 3.1, worked fine on NT 4.0, Win2K, XP and Vista/Win7 without being Admin.
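
    The "properly written" point above largely comes down to where a program stores its data: per-user locations (the user profile, HKCU) stay writable without Admin rights, while the program's install directory (e.g. under Program Files) does not. A minimal sketch of the idea in Python rather than Win32 C, with a non-Windows fallback for illustration - the directory name `nt_demo_app` is made up:

    ```python
    import os

    def per_user_config_dir(app_name):
        """Return a per-user config directory that is writable without Admin rights.

        On Windows this is %APPDATA%\\<app_name> (the classic per-user
        location); elsewhere we fall back to ~/.config/<app_name>.
        Writing here works for a plain user account, unlike writing next
        to the executable under Program Files, which needs elevated
        rights on NT-family systems.
        """
        base = os.environ.get("APPDATA")  # set by Windows for every user session
        if base is None:
            base = os.path.join(os.path.expanduser("~"), ".config")
        path = os.path.join(base, app_name)
        os.makedirs(path, exist_ok=True)  # create it on first run
        return path

    # A well-behaved app writes its settings here, not into its own folder:
    settings = os.path.join(per_user_config_dir("nt_demo_app"), "settings.ini")
    with open(settings, "w") as f:
        f.write("[ui]\ntheme=classic\n")
    ```

    Programs that instead hard-coded a path next to their own executable were the ones that forced users to run as Admin for a decade.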

    1. CrazyOldCatMan Silver badge

      Re: Exceptional HW & incompatibility?

      there was an MS OS/2 which included MS LAN Manager

      Which was itself a clone of the IBM OS/2 LAN Manager (we used that as our file server back in the old token-ring days).

      It worked quite well (for those days) and only fell over twice in about 5 years - once when the aircon broke and the room it was in hit 55°C, and the second time when I hit the power button by mistake. (The monitor was on top of the server and had exactly the same power button, just 4cm away from the server's. While going out the door, I pushed what I thought was the monitor power button but, before I took my finger off, realised that the texture under my other fingers wasn't the monitor's. I stood there for an age while my colleagues went around the office telling people to save their work. I think they did it as slowly as possible in order to teach me a lesson. We taped over the server power button after that.)

      1. Anonymous Coward
        Anonymous Coward

        Re: Exceptional HW & incompatibility?

        IIRC LAN Manager was a 3Com and Microsoft collaboration running on OS/2 - when MS and IBM were still collaborating on it. IBM resold it as well.

        That's why Windows had (and partially still has) many "LM" features, including the infamous hash for password storage.
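
        The LM hash earned that reputation through its construction: the password is upper-cased, padded to 14 bytes, split into two independent 7-byte halves, and each half is used as a DES key to encrypt a fixed string. Because the two halves can be attacked separately, the effective search space collapses. A back-of-the-envelope comparison (the 69-symbol character set - what remains of the printable ASCII set once lower case is folded into upper case - is the commonly quoted figure):

        ```python
        # LM upper-cases the password and splits it into two independent
        # 7-character halves, so an attacker cracks each half separately.
        CHARSET = 69                   # printable chars left after upper-casing

        full_14 = CHARSET ** 14        # one 14-char search, had LM not split it
        lm_work = 2 * (CHARSET ** 7)   # two independent 7-char searches instead

        print(f"14-char search space:  {full_14:.2e}")
        print(f"LM effective work:     {lm_work:.2e}")
        print(f"attacker's speed-up:   {full_14 / lm_work:.2e}x")
        ```

        The NT hash that replaced it (MD4 over the case-preserving Unicode password) avoids both the upper-casing and the split.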

  20. Anonymous Coward
    Anonymous Coward

    DEC Alpha NT 4.0

    I cut my IT teeth on DEC Alpha servers running NT 4.0. Even had an NT 4.0 cluster with Exchange 5.5 on DEC Alpha.

    The first 64-bit platform to run Windows.

    Have very fond memories of the Digital kit.

    1. Yet Another Anonymous coward Silver badge

      Re: DEC Alpha NT 4.0

      Especially since DEC used to sell them below cost to push the NT license and to get people off VMS - for reasons that made sense to somebody, you could buy an Alpha with NT for about half the price of the same HW with VMS!

      So we bought the NT machines and installed Linux, almost 2x the power of a Sparc for 1/2 the price.

  21. Version 1.0 Silver badge

    So no anniversary edition?

    I was hoping MS would release an anniversary edition - just the current system with the NT interface. I could use that.

    1. Roland6 Silver badge

      Re: So no anniversary edition?

      You mean a release of Win 10 with the NT UI/UX?

      Now that would make you feel old: go from the Win10 UI/UX madness, reboot to install updates, and get confronted with an NT3 UI/UX.

  22. Anonymous Coward
    Facepalm

    IBM's OS/2 foundered on the rocks of Microsoft's Windows.

    Windows NT .. Originally intended as a successor for IBM's OS/2, before the collaboration between the two companies foundered on the rocks of the success of Microsoft's Windows

    Haaa .. you're a funny guy :]

    --

    “Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt”, Donald J. Trump

    1. CrazyOldCatMan Silver badge

      Re: IBM's OS/2 foundered on the rocks of Microsoft's Windows.

      collaboration between the two companies foundered on the rocks of the success of Microsoft's Windows

      Commonly known as "the triumph of marketing over technology".

      1. Anonymous Coward
        Anonymous Coward

        Commonly known as "the triumph of marketing over technology"

        No. It was another example of how much applications availability matters - people need and use applications, not operating systems.

        Windows 3.x - which was also cheaper - soon got many applications, while OS/2 didn't.

        IBM was too slow and clumsy at filling that gap, even after it bought Lotus, and its development tools were also inferior (VisualAge, anybody?)

        Also, for a while, especially before Warp, IBM was still dreaming of using OS/2 to promote its own hardware at the expense of clones, while Windows supported whatever you liked. Often you couldn't find OS/2 drivers for non-IBM hardware.

        OS/2 was probably a triumph of *bad marketing* from IBM, together with an ill-conceived strategy: they didn't own the PC market, and failed to understand it.

        1. PhilipN Silver badge

          Re: Commonly known as "the triumph of marketing over technology"

          Slight correction. IBM's desktop salesmen understood the market very well. They were desperate to sell boxen, which meant Windows, and they had no incentive to push OS/2.

          AND MS would wield their monopolistic power by threatening to withhold the information about the latest version of Windows needed for it to actually run properly, unless IBM turned its back on OS/2. Ergo market opportunities for Compaq et al, and the rest is history.

  23. FrankAlphaXII

    >>Nothing less than a 386-class processor (for the Intel iteration) would do, and running it in less than 16MB would make for a very sub-par experience – astonishingly excessive for the time.

    Considering the 486 had been around for almost 4 years at that point, I really don't think it was that crazy to ask for a 386. Now the 16 MB of RAM was a little on the WTF side, because memory was bonkers expensive back then, but asking for a 386 by 1993 wasn't at all excessive. Especially when considering that NT was marketed for workstations and servers and not for general-purpose consumer level stuff like 3.1 and the 9x versions of DOS-wrapper "Windows"

    Just my two cents.

    1. Anonymous Coward
      Anonymous Coward

      My two cents is that if you want to pretend that the 9.x versions were a "DOS-wrapper", then the rest of your comments are probably equally worthy.

    2. Anonymous Coward
      Anonymous Coward

      "but asking for a 386 by 1993 wasn't at all excessive"

      Especially since you needed one to run Windows 3.x decently as well - it could run on a 286, but with many limitations, especially for running DOS programs. I don't remember anybody running it on a 286.

      Even some advanced DOS programs using a DOS Extender often required a 386.

      Still, desktop machines with more than 4MB were rare - and many low-end 386s probably didn't have enough power to run the more demanding NT well.

      1. Fading
        Unhappy

        Re: "but asking for a 386 by 1993 wasn't at all excessive"

        I was still running an 8086 in 1993 (Amstrad 1640).

        1. Anonymous Coward
          Anonymous Coward

          "I was still running an 8086 in 1993 (Amstrad 1640)"

          I ran a 286 (and DOS) until 1994 - but only because I was a student with no income, and my parents couldn't afford a new one. Not really the target for NT.

          Despite their tremendous success in the "consumer market" as well, in the early 1990s PCs were still essentially business-oriented machines - most software was also expensive.

          And NT was aiming at the high-end market, 3.x would do for the low-end one.

          As soon as I got a job, I got a Pentium and 8MB of RAM...

  24. Anonymous South African Coward Bronze badge

    And yet the WorkPlace Shell trumps Windows' GUI...

    I miss it. It was so flexible. You could have a dozen folders, each with its own colour scheme and fonts.

    And you could have workspaces. Assign apps etc. to a workspace (folder) - open it, and all the apps/docs associated with that folder open.

    The only weak point was the .INI files... if one got corrupted, you had a jolly time recovering from it.

    NT4 SP6 was very stable, but not secure. In contrast, Win10 is secure (better than NT4, at least), but stability is a 50/50 affair, what with the latest tomfoolery from Redmond in insisting it MUST be updated every so often.

    I sometimes wonder what the world would have been like had OS/2 and Novell gained serious traction - Novell for file servers and OS/2 for desktops/servers...

  25. Anonymous Coward
    Anonymous Coward

    I installed NT Advanced Server 3.1 from floppy

    3 times in a row before I remembered the pw I used. It was about 15 floppies.

    That was the first time I got pissed off with it. Even though in this case it was me at fault... or so I suspected at the time, anyway!

  26. TonyB

    Unless I missed it, I'm surprised no one has mentioned the book Show Stopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft, (G.Pascal Zachary), a great read.

  27. Chris Johnson 1

    I run systems which house several VMs. Since nothing is demanded of the guest OS other than that it supports the programs I am running, I use the smallest one I can get away with - Win2k. If NT4 had included USB support I would still be using it, since I always found it a reliable OS.

  28. Ross 12

    When I built my first PC in '97, NT4 Workstation went straight on it. This was because I was an A-level student and MS were doing an offer where you got Win95/NT4wks for ~£35, I think it was. The dumb thing, though, was that the Windows 95 license was upgrade-only, and my machine had no OS to upgrade, so I went for NT4 instead - and it was pretty damned good. Although I did have to throw RAM upgrades at it pretty sharpish.

  29. rnturn

    NT showed promise early on.

    I was actually a little excited by what I'd heard and read about the guts of NT - as an old VMS hand, a lot of it felt familiar. But I never had a chance to actually use it until after the ill-considered decision to move the video drivers into the kernel. I had to warn users every time I needed to make a change to the network user database running on one of the company's NT systems - there was a better than 50/50 chance that clicking "Save" would cause the database application to crash and bring the system down with it. Eventually, we decided that even simple changes like this could only be made after normal business hours. All that solid-as-a-rock VMS lineage, wiped out by one silly decision.

  30. intrigid

    Peak Windows

    Windows 7 is currently Peak Windows, and it may be the only Windows to gain and never relinquish that title.

  31. John Ellin
    FAIL

    MS SQL Server on NT

    Many years ago, I worked for a major financial institution on the European mainland. We were evaluating RDBMS solutions. We were a Netware house and wanted the solution to run as an NLM.

    When we contacted our local MS rep for the MS SQL Server NLM, they told us that they were discontinuing support, but we could take the NT version for a spin. When I said 'we don't have NT,' he said 'you can take NT for a spin as well.'

    The very next day, he showed up with a stack of 37 3.5" floppies. I started the install. Cluster error on disk 12. A day later, I had a new disk 12. Cluster error on disk 14. A day later, I had a new disk 14. Eight days later, I called the rep and suggested he 'stuff it.' The next day, he arrived with three floppies and a CD-ROM - I'll never understand why he didn't give us the CD option on day one.

    I inserted boot floppy #1 and rebooted. The machine whirred and asked for floppy #2 and 'hit any key'. Then floppy #3. And finally, I received the message: 'Please insert the Windows NT CD-ROM into Drive A: and hit any key to continue.' - and there was no way to edit the drive letter. So I did just what it said and inserted the CD-ROM into drive A: a bit at a time.

    Ahhh memories.

  32. timhowarduk

    Reboot anyone?

    Remember how touching almost anything in the network settings (including IP address) required a server reboot? Tell the kids of today that, and they won't believe you....

    I still have rather fond memories of this era though, supporting NT server, Exchange 5.5 (isinteg -patch forever ingrained), Proxy Server etc. when the more glamorous sites had these revolutionary ISDN lines...

    These days servers are so dumbed down and 'user friendly' that you can't work out which of the sugar-coated answers to a wizard leads to the configuration you know you want, and you live in perpetual fear of the unwanted side effects it will decide you must surely want....

  33. JBowler

    Acorn looked at the white paper pre-1993

    I worked there then, and was asked to look at the MS white paper on NT; what I said was that it had all the right words, but there wasn't enough info to tell whether the code worked the same way. The sub-geniuses (to be polite) at Acorn didn't follow up, so the next thing I heard was when an ex-Acorn employee was working on the 68k port, sometime after I had left the sinking ship. The ARM port could, and should, have happened at the start of the '90s; if the management had actually followed up as opposed to BS'ing, it WOULD have happened then, because MS really did have the shyte. Back then, I cared.

    So far as I can tell (I later worked for MS, but not in the OS division) NT seems to have lost and found its way several times since then. It is a damned good micro-kernel but it is beset by the *F*F*F* shell; Windows Explorer (apparently a pseudonym for DOS 3.0) takes it down every time. BUT that is an application. You can do everything you want (and quite a lot that any sane OS vendor doesn't want) if you escape from the Win32 API.

    Then there is NTFS. I LOVE NTFS. Sorry, I probably shouldn't say that in public.

    John Bowler jbowler acm.org

  34. Erebus

    Long live the dinosaur

    I love the way people here reminisce about NTVDM as if it's extinct. Spare a thought for those wretched slaves in corporate hell who are still running 32-bit operating systems because their masters love their 16-bit apps SO much.

    1. Charles 9

      Re: Long live the dinosaur

      So why couldn't those apps be shunted into VMs and be done with them?
