Now Microsoft ports Windows 10, Linux to homegrown CPU design

Microsoft has ported Windows 10 and Linux to E2, a homegrown processor architecture it has spent years developing mostly in secret. As well as the two operating systems, the US giant's researchers say they have also ported Busybox and FreeRTOS, plus a collection of toolkits for developing and building applications for the …

  1. TheVogon

    Interesting. They are apparently ahead of rivals in quantum CPU technology too:

    https://www.bloomberg.com/news/articles/2018-03-28/microsoft-edges-closer-to-quantum-computer-based-on-elusive-particle

    1. Multivac

      They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

      1. bazza Silver badge

        They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

        Perhaps, but only if you are not observing it, so you'd have to be a Linux / Mac user for Windows to be in that state. As soon as you observe it, you'll see it either working or not.

        1. Teiwaz

          They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

          Perhaps, but only if you are not observing it, so you'd have to be a Linux / Mac user for Windows to be in that state. As soon as you observe it, you'll see it either working or not.

          Nope, you merely have to look away for a moment or step away from your desk and it'll flip states*

          * I think it's a hunting trick to attract unwary technical support personnel close to it so it can suck the life from their souls.

          1. aks

            They have souls?!!

    2. Anonymous Coward
      Anonymous Coward

      M$ uses Chinese ESP32 CPU as surveillance chips (Clipper chip v2)

      M$ also uses the ESP32 (a cheap Chinese CPU) as the basis for its new surveillance chips – exactly like the NSA's "Clipper chip" spying effort from the 1990s, so they're trying it again with M$ as the shopfront.

      "The Clipper chip was a chipset that was developed and promoted by the United States National Security Agency (NSA) as an encryption device that secured “voice and data messages" with a built-in backdoor. It was intended to be adopted by telecommunications companies for voice transmission. It can encipher and decipher messages."

  2. TheSkunkyMonk

    Bet they come with some great marketing wank/spyware built right into the CPU

  3. King Jack
    Big Brother

    Computer says "No"

    Only M$-approved programs allowed anywhere near your computer. Expect this chip to phone home every time you take a shit. Reminds me of an ad M$ was running 2+ years ago about how they would stamp out computer crime by spying on everything you do.

    1. bombastic bob Silver badge
      Meh

      Re: Computer says "No"

      along with that, consider ONLY running apps from "The Store", mandatory updates, and the worst possible extreme incarnation of 'management engine' you can possibly think up, in your worst nightmares...

      Oh, and DRM. Of course.

      I admit the tech sounds interesting, but this almost sounds like using microcode as an instruction set, the same *kind* of way RISC made CPUs lightweight but not necessarily faster (ARM still catching up to x86 tech).

      But it may have a niche market for cell phones and other 'slab' devices. It almost sounds too late because windows phone is officially _DEAD_.

      1. FIA Silver badge

        Re: Computer says "No"

        I admit the tech sounds interesting, but this almost sounds like using microcode as an instruction set, the same *kind* of way RISC made CPUs lightweight but not necessarily faster (ARM still catching up to x86 tech).

        Isn't this kind of backwards? Remember ARM were an order of magnitude faster than X86 when they debuted, it's just the focus switched to low power/embedded when Intel's focus remained on high performance.

        Microcode was the way of getting the benefits of RISC on the horrific X86 instruction set.

        It wasn't until the advent of superscalar and OOO stuff around the P5/P6 era that the Intel stuff really took off speed-wise; by that point Acorn was almost dead and the market for ARMs was phones and PDAs and the like. Ironically the 'everything can be conditional' approach ARM took with their instruction set didn't lend itself well to the OOO and superscalar stuff.

        A modern ARM core, especially a 64-bit one tuned for speed, can be quite quick too. See the CPUs coming out of Apple for example.

        1. bombastic bob Silver badge
          Devil

          Re: Computer says "No"

          "Microcode was the way of getting the benefits of RISC on the horrific X86 instruction set."

          Actually, microcode has been around since CPUs existed. It's how they work internally. From what I was told (in a college class) the IBM 360 had microcode on plastic cards with holes in them, which were read by changes in capacitance using sense wires running along the top and bottom (rows and columns). That way you could re-program the IBM 360's microcode by swapping in a different stack of cards.

          The concept of RISC (as I recall) was to get closer to the microcode to streamline the instruction pipeline and reduce the size (etc.) of the core, though it's not the same thing as BEING microcode.

          I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's. Maybe it was, maybe it wasn't. I don't remember hearing that. I'm pretty sure it was the other way around.

          1. ThomH

            Re: Computer says "No" @bombastic bob

            Benchmarking is a fool's game, of course, but the ARM at introduction was sufficiently faster than the then high-end x86, the 386, that for a while Acorn sold it on an ISA card for use as a coprocessor.

            The marketing puff is here; a PCW review is here. Though it fails to come to a definitive conclusion on ARM v 386, it makes statements like "The 8MHz ARM processor is one of the fastest microprocessors available today" and "A fairer [price] comparison would perhaps be with other fast coprocessor boards for the IBM PC, such as the 80386, the 68020 and the Transputer boards", which certainly seems to bracket it with those others.

          2. TheVogon

            Re: Computer says "No"

            "I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's"

            DEC Alpha sure was.

          3. MacroRodent
            Boffin

            Re: Computer says "No"

            The concept of RISC (as I recall) was to get closer to the microcode to streamline the instruction pipeline and reduce the size (etc.) of the core, though it's not the same thing as BEING microcode.

            The original RISC architectures like MIPS and SPARC did not use microcode at all. In that sense, the user-visible instructions were the microcode. Being able to avoid microcode is the main benefit of having simple and regularly encoded fixed-length instructions. I am not sure if any later implementations of these architectures use it.

            I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's.

            In the 1990s, RISC processors generally ran circles around Intel's 486 and the original Pentium. I am not sure when the tables turned. Maybe around the time the Pentium III was introduced. Should dig up old benchmarks.

            1. bazza Silver badge

              Re: Computer says "No"

              MacroRodent said:

              In the 1990s, RISC processors generally ran circles around Intel's 486 and the original Pentium. I am not sure when the tables turned. Maybe around the time the Pentium III was introduced. Should dig up old benchmarks.

              True, but then again SPARC, Alpha or MIPS-based machines cost a huge amount of money, whereas PCs were pretty cheap. A Silicon Graphics workstation was the object of envy that never sat on my desk at work...

              The writing was on the wall by 1995, and by 2000 there was nothing to justify picking anything other than x86, except in particular cases for particular purposes.

              The 400MHz PPC7400 PowerPC chip from Motorola was quicker at certain types of maths (DSP) than a 4GHz Pentium 4, largely because Intel hadn't bought into the SIMD idea. It's only quite recently that Intel finally added an FMA instruction to SSE / AVX, which meant it was no longer hamstrung. Not adding this was apparently a deliberate policy to make Itanium (which always had an FMA) look good.
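
              For illustration, here's a minimal FMA sketch in C (hedged: it assumes an FMA-capable x86 and GCC/Clang built with -march=haswell or -mfma; the function and array names are invented for the example):

                  #include <immintrin.h>

                  /* d[i] = a[i] * b[i] + c[i], eight floats at a time, using the
                     FMA3 instructions Intel added with Haswell: one instruction
                     and one rounding step instead of a separate multiply and add.
                     Assumes n is a multiple of 8 to keep the sketch short. */
                  void madd(const float *a, const float *b, const float *c,
                            float *d, int n)
                  {
                      for (int i = 0; i < n; i += 8) {
                          __m256 va = _mm256_loadu_ps(a + i);
                          __m256 vb = _mm256_loadu_ps(b + i);
                          __m256 vc = _mm256_loadu_ps(c + i);
                          _mm256_storeu_ps(d + i, _mm256_fmadd_ps(va, vb, vc));
                      }
                  }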

              Even today there's a lot of radar systems based on PowerPC.

              The Cell processor from IBM (think Sony PS3) was quicker (for some specific purposes) than anything Intel had; in fact it took about six to eight years for Intel to get close to beating the Cell, and only comparatively recently have they matched its memory bandwidth. Experiencing the full might of a proper Cell processor (i.e. not one inside a PS3) was a mind-bending moment; they were truly unbelievably good. Hard to program for, unless one had grown up on Transputers. It's a real pity IBM canned it, because a Cell processor from a modern silicon fab with the SPE count dialled up to max would still eat everything else for breakfast. Though with the idea of today's JavaScript generation getting to grips with something as complex as Cell being laughable, perhaps IBM took the right commercial decision.

              Interestingly, the modern CPUs from Intel and especially AMD are resembling Cell more and more (lots of cores on a fast internal bus, lots of IO bandwidth); the only substantive difference is that their cores are full-on x86 cores with (annoying) caches, whereas all an SPE was good for was maths and nothing else, run out of its very handy SRAM, with no cache engine to get in the way.

          4. Torben Mogensen

            Re: Computer says "No"

            "Actually, microcode has been around since CPUs existed. it's how they work internally."

            True, but in the beginning microcode interpreted the instructions, whereas modern processors compile instructions into microcode. This gets rid of the interpretation overhead (which is considerable).

          5. Nigel Campbell

            Re: Computer says "No"

            ARM was definitely faster than anything available from Intel at the time the Archimedes came out. BYTE published a series of Dhrystone benchmarks showing it was faster than anything but a Sun 3/80. This was before any of the workstation vendors such as Sun had moved off the M68K family to RISC architectures.

            Sun, SGI, HP and various other parties brought out RISC architectures in the latter part of the 1980s, typically coming to market a couple of years before Intel brought out the 486. These machines were 2-3x the speed of a 386 on integer benchmarks, and typically had much faster graphics and I/O than mainstream PC architectures of the day.

            RISC architectures were still appreciably faster than the 486 and Pentium until the Pentium Pro came out, although without the 2-3x advantage they used to enjoy. However, by the mid-1990s the majority of the I/O bottlenecks and other architectural baggage on the x86 family had been solved, and Wintel/Lintel was a viable architecture for a general-purpose workstation.

            Linux and Windows NT made the ordinary PC a viable workstation platform, although NT didn't really mature as a platform until Windows 2000 came out. By 2000 or thereabouts the writing was on the wall for proprietary workstation architectures as they had stopped providing a compelling performance advantage over commodity PCs. RISC workstations hung on for a few more years, mostly running legacy unix software or meeting requirements for certified platforms.

            From 2003, AMD's Opteron brought 64-bit memory spaces to commodity hardware and removed the last major reason to stick with RISC workstations, which by then had ceased to provide any compelling performance advantage for the better part of a decade. The last RISC vendors stopped making workstations by about 2009, and by then most of their collateral was about running platforms certified for stuff like aerospace.

            IBM's POWER8/9 are still significantly faster than Intel's offerings, although whether that translates into a compelling advantage is still up for debate. Raptor Computing Systems is getting some press for their workstations, but their main schtick is security rather than performance.

        2. bazza Silver badge

          Re: Computer says "No"

          FIA said:

          Isn't this kind of backwards? Remember ARM were an order of magnitude faster than X86 when they debuted, it's just the focus switched to low power/embedded when Intel's focus remained on high performance.

          Well, I was around at the time the first Acorn Archimedes computers came out. They were quick, but they weren't quicker than a high-end PC, especially one with an x87 coprocessor. They were cheaper though, so whilst 386-equipped PCs did exist and would blow the pants off an Archimedes, no one could afford a PC like that. Even a 286- or 8086-equipped PC was an expensive, comparatively rare item back then.

          A modern ARM core, especially a 64-bit one tuned for speed, can be quite quick too. See the CPUs coming out of Apple for example.

          The guys behind the K computer in Japan are considering ARM for their next supercomputer. ARMs have always been excellent choices where the core is used to marshal other units (video decompressors, maths units, etc.), which is what's going on inside a phone.

          Making an ARM as fast as a big Intel chip is mostly a matter of achieving the same DRAM bandwidth, having big caches, etc. This all takes a lot of transistors, just like it does in x86s. The advantage ARM has is that they don't have to translate a CISC instruction set (x86) into RISC-like internal operations prior to execution, which is what's going on inside modern x86 processors. By not doing this, ARM saves a lot of transistors.

          Intel's current range of big CPUs have main memory bandwidths of 100GByte/sec; that's a huge amount, and is unmatched by any ARM. This allows Xeons to really chomp through an awful lot of processing very quickly indeed, keeping their cores fully fed. Same with AMD's current processors; they're monsters.

      2. Handle123456

        Re: Computer says "No"

        Waitasecond ... you are saying ... Microsoft will act like Apple?

    2. Deadly_NZ

      Re: Computer says "No"

      To say nothing about it running at a high 3.1MHz and taking 16 years to update its code

  4. TReko

    Good description, Register

    Well written explanation with background. Thank you

  5. eldakka

    > The code is also annotated by the compiler to describe the flow of data through the program, allowing the CPU to schedule instruction blocks accordingly.

    Wasn't this one of the downfalls of Itanic? That the compiler became too complex – sorta like the software on the F-35 – and took so long to develop that it couldn't keep up with hardware changes?

    1. Flocke Kroes Silver badge

      Itanic was wildly successful ...

      The announcement alone ended development of architectures it was supposed to compete with. When the first implementation was ready for evaluation it achieved its second main goal: it needed so many transistors that no-one else could build a compatible product. It could sensibly compete on power usage with a toaster, despite getting worse benchmark scores than cheaper, cooler CPUs available at the time. After years of delays, when a product did reach the market, the final main goal was achieved (after a fashion): the price was at the high end of a monopoly product. The only problem was that (large NRE)/(small sales) made the cost about equal to the price.

      Having the compiler make all the decisions about instruction ordering in advance sounds really cool until you remember some real-world problems: do you order the instructions based on the time required to fetch data from the cache or from DRAM? Guess wrong and data is not in the right place at the right time, and all the scheduling decisions made by the compiler become garbage. What if the hardware is optimised to spot multiply/divide by 0 or ±1? Again, the result arrives ahead of schedule and the CPU has to re-order everything.
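
      The same compile-time guesswork shows up in software prefetching. A minimal C sketch (hedged: the function name is invented; __builtin_prefetch is the GCC/Clang builtin, and PF_DIST is the latency guess baked in at compile time):

          #include <stddef.h>

          /* PF_DIST encodes an assumed miss latency: fetch this many elements
             ahead of use. Guess too small and the data still isn't there in
             time; guess too large and it's evicted before it's needed. */
          #define PF_DIST 64

          double sum(const double *a, size_t n)
          {
              double s = 0.0;
              for (size_t i = 0; i < n; i++) {
                  if (i + PF_DIST < n)
                      __builtin_prefetch(a + i + PF_DIST);
                  s += a[i];
              }
              return s;
          }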

      I am not surprised it took Microsoft years to come up with something worth committing to silicon. I would expect more years to pass before they get performance/watt close to a modern X86/ARM.

      1. Bryan Hall

        Re: Itanic was wildly successful ...

        Not to mention that thing called multi-tasking. So what if each app is optimized when the scheduler then dumps you out to run the next thing in queue? THAT was the main technical problem with the EPIC approach as far as I could see. Great product for a serial processing mainframe... :-)

        1. bazza Silver badge

          Re: Itanic was wildly successful ...

          Not to mention that thing called multi-tasking. So what it each app is optimized when the scheduler then dumps you out to run the next thing in queue.

          Sun made a similar mistake with SPARC's register windows (sliding stack frames); a neat idea, until the scheduler comes along and schedules something else.

          Itanium was popular with people developing DSP, because it had an FMA instruction. X86 didn't until, what, 5 years ago now? I took the addition of FMA to AVX / SSE as being the signal from Intel that Itanium was finally, irrevocably dead.

        2. Nigel Campbell

          Re: Itanic was wildly successful ...

          They were quite popular in numerical computing for a while, and not the only VLIW architecture to get some traction in that space (look up the Multiflow TRACE for example). VLIW also makes an appearance in DSPs, but it's not terribly efficient for general-purpose computing. Now that modern CPUs can do dependency analysis at runtime, they get multiple-pipeline scheduling that can adapt in real time.

  6. 89724102372714582892524I9755670349743096734346773478647852349863592355648544996313855148583659264921

    Still at the FPGA stage after all these years?!?!?!?!???

    Intel must have said no.

    1. Zippy's Sausage Factory

      Re: Still at the FPGA stage after all these years?!?!?!?!???

      You can absolutely guarantee Intel said no. This represents Microsoft turning into an Apple-ish end-to-end factory, where they make the hardware and the software.

      I half expect Windows to become confined to Microsoft hardware only, like Apple. No more tinkering. No more buying whichever graphics card you feel like and slapping it in a tower and having it work. No more writing your own Windows apps without an MSDN account.

      Great for Microsoft, but if that happens the days of open computing are over. To quote the Great Sage* himself, "they're selling hippy wigs in Woolworths, man. The greatest decade in the history of mankind is over. And as Presuming Ed here has pointed out, we have failed to paint it black."

      * Bruce Robinson, natch.

      1. Teiwaz

        Re: Still at the FPGA stage after all these years?!?!?!?!???

        Great for Microsoft, but if that happens the days of open computing are over.

        Nah!!

        Really is at such odds with their 'we like OSS now' talk.

        Question is, would it sufficiently 'sour the milk' for the terminally MS-dependent to finally make the effort to get off the nipple?

        And as to "No more buying whichever graphics card you feel like and slapping it in a tower and having it work" - that really depended on how much effort the manufacturer put into the driver and how much MS borked the last update.

      2. Mark 110

        Re: Still at the FPGA stage after all these years?!?!?!?!???

        "Great for Microsoft, but if that happens the days of open computing are over."

        That must be why they ported Linux onto it . . .

      3. TheVogon

        Re: Still at the FPGA stage after all these years?!?!?!?!???

        "I half expect Windows to become confined to Microsoft hardware only, like Apple."

        Windows has a fair history of cross-platform CPU options – Alpha, MIPS, ARM, etc. – although only x86 / x64 were a success.

        1. This post has been deleted by its author

      4. Adam 1

        Re: Still at the FPGA stage after all these years?!?!?!?!???

        Totally agree with these tin foil hat comments. I mean, if Microsoft were to own a large number of data centres, where very small percentage improvements in work per watt, or in operations per second without increasing power, can multiply out to a major $$ windfall, it would make sense. But clearly this is about killing openness.

  7. Allan George Dyer
    Trollface

    Application Support?

    So Windows boots, but does it run Clippy and the BSOD?

    1. Steve Davies 3 Silver badge
      Happy

      Re: Application Support?

      More importantly...

      Does it run Crysis?

      1. Fungus Bob

        Re: Application Support?

        All I need is Edlin.

  8. Anonymous Coward
    Anonymous Coward

    Well, it's Microsoft

    Sooner or later it'll lose interest in its 'skunkworks' projects, distracted by something else.

    1. Paul Crawford Silver badge

      Re: Well, it's Microsoft

      Just like Google?

    2. JDX Gold badge

      Re: Well, it's Microsoft

      That is rather the point - they are speculative R&D projects that may throw out some interesting results even if the project itself is never marketable. Like concept cars.

      1. GrumpyOldBloke

        Re: Well, it's Microsoft

        Patents, patents, patents!

  9. Tchou

    It's been a long time..

    ... since we've seen an MS product and dropped jaws.

    I bet we won't see another anytime soon. I call vaporware.

    1. Mike Lewis

      Re: It's been a long time..

      Windows 10 was pretty jaw dropping. They managed to turn Comic Sans into an operating system.

      1. handleoclast
        Pint

        Re: It's been a long time..

        They managed to turn Comic Sans into an operating system.

        Quote of the month. Maybe quote of the year. Have a pint.

        1. Destroy All Monsters Silver badge

          Re: It's been a long time..

          Comic Sans and his trusty sidekick, Windows Defender.

        2. Kristian Walsh Silver badge

          Re: It's been a long time..

          Hating on Comic Sans is a strong indicator of someone who knows a little, but not very much more, about typography, so the comparison is valid.

          1. handleoclast

            Re: It's been a long time..

            Hating on Comic Sans is a strong indicator of someone who knows a little, but not very much more, about typography

            Where, on your scale of mastery, does knowing that, in later life, Jan Tschichold recanted his insistence on using sans-serif faces in body text fall?

            Does the name "Mickey Mouse" come to mind when you think of comics? Would Mickey Mouse Sans be a better or worse name for the face?

            Illumination may come to you. Especially if you contemplate modern modems.

  10. Steve Davies 3 Silver badge
    Thumb Up

    Is MS Qualcomm's saviour?

    Given all the legal problems that QC is having, which are hitting their bottom line, I wonder if the QC 'C Level' execs see MS as their way to that retirement payoff?

    If MS want to take this into the mainstream with things like their Surface Product line where they will have total control over everything that runs on the kit, then a control-freak company like MS may well have to consider buying QC.

    Will MS want to take over a company such as QC, which seems on the face of it to be a bit of a pariah at the moment? We don't know all the details but it sure seems that way on the surface.

    It is nice to see a different CPU architecture appear every so often. Without the backing of MS it stands no chance of getting into the mainstream. Look what happened to the last one: VLIW didn't make a heck of a lot of difference in the long term.

    Only time will tell.

    Good article El Reg and for that you get a thumbs up

  11. Anonymous Coward
    Trollface

    GIGO

    A conveyor belt of garbage into the CPU - the perfect analogy for M$FT's Windoze and Orifice software!

    1. Zippy's Sausage Factory

      Re: GIGO

      Anyone downvoting this care to tell us what the weather's like in Redmond today?

      1. defiler

        Re: GIGO

        Remember that Office has >90% market share. There are decent alternatives too.

        By that reasoning, either the world is on fire, or Office just isn't all that bad. (And yes, I also dislike the whole ribbon thing...)

        1. bazza Silver badge

          Re: GIGO

          Remember that Office has >90% market share. There are decent alternatives too.

          The alternative often pointed to is Libre / OpenOffice. And whilst chunks of LibreOffice are OK, some parts are pretty shockingly bad. I find their spreadsheet program to be astonishingly slow in comparison to Excel. Like, really, really slow when dealing with large amounts of data on a sheet.

          Office is good because Office, well, just works. Ok, it's not "perfect" and there's a ton of quirks, but as a tool for just getting stuff done it's pretty hard to beat. A corporate Outlook / Exchange setup is a very powerful thing, and now there's nothing else to compete with it.

          Apart from Visio; that is a hateful thing, truly ghastly. I wish it had never been born.

      2. Kristian Walsh Silver badge

        Re: GIGO

        Frankly, I've no idea, nor any connection to Microsoft except that I use their OS and software sometimes (I am actually writing this on a Mac, though), but I'm sick to the back teeth of the pack of morons that come out to comment every time an article about Microsoft is published here.

        This article describes something that is a genuinely interesting development, and normally you'd expect some of the more knowledgeable commenters to chip in with additional information or informed critique. But, because it's something that Microsoft has developed, the comments have immediately been hijacked by the crack band of fucking idiots who, labouring under the misapprehension that their thousand or so prior posts on the subject were insufficient to convince the world that they don't like Microsoft, feel it necessary to enter the breach, pausing only to switch off their forebrain on the way...

        1. Anonymous Coward
          Anonymous Coward

          Re: GIGO

          <chip in...>

          I see what you did there!

        2. King Jack
          Thumb Down

          Re: I'm sick to the back teeth of the pack of morons...

          Blame M$. If they had not behaved like criminals with Windows 10 and not adopted the techniques of spyware, then I'm sure they would not get the amount of flak whenever their name appears. They seem not to care about anybody and do whatever suits them whenever they want. They may well come up with the best chip design, but because they are M$ they will use it to shaft as many people as they can. Sony, with their rootkits, feature removal and proprietary crap, get the same hatred, as they too will never change until bankruptcy looms. If M$ just wrote an OS and software that didn't try to own you then these comments would fade away.

          1. Kristian Walsh Silver badge

            Re: I'm sick to the back teeth of the pack of morons...

            I stopped reading at the dollar sign. Was the rest of it any good?

          2. 2Nick3
            Childcatcher

            Re: I'm sick to the back teeth of the pack of morons...

            "Blame M$."

            You can't control how others behave, but you can control how you react to it.

            So yeah, the forum being flooded by posts that distill down to "I hate MS" can be laid at the feet of those making the posts. No one at MS forced them to do it, it was their own free will, let them be responsible for it.

          3. Someone Else Silver badge

            @King Jack -- Re: I'm sick to the back teeth of the pack of morons...

            Blame M$. If they had not behaved like criminals with Windows 10 Windows 3.1 and "Undocumented Windows" [...]

            There, FTFY....

        3. Claptrap314 Silver badge

          Re: GIGO

          I'll tell you why the more knowledgeable people aren't saying much – Dunning-Kruger. This design is a really, really big change. It has obvious comparisons to Itanic, but recall that Itanic operated on bundles of three 41-bit instructions with a few template bits to inform the processor about parallelism safety. This is not that, at all. From a hardware/architecture design standpoint, I have little to say because I know that I'm unqualified to evaluate it. There are some business issues that appear fascinating, however.

          I did not pay a great deal of attention at the time, but my recollection is that the problem was lack of support for the idea of moving more or less the entire install base to a new architecture. Microsoft appears to be attacking that problem by porting the major OSes, which implies their software build tooling.

          The comment, "Will it run Crysis"? is probably the most important one on this article. It took me a year or so after joining AMD to realize that no one buys CPUs. They buy applications. The hardware (all of it), & the OS are nothing but overhead. A perfect architecture without applications is worthless. The most hideous beast of an architecture that is running 80% of the world's software today can be expected to run 85% in five years, barring a major disruption to the market.

          I think the timing of this announcement is important. Microsoft has been working on this for almost a decade. When you are doing this sort of research, you can make a big announcement at any point. I think they're looking at Spectre as the sort of disruption that might create an opening to overturn x86. It's certainly due.

          Microsoft drove Apple to 5% market share by running a completely open platform for developers. Yes, it's been a long time, but it seems unlikely to me that they have completely forgotten this fact. Certainly, they would like to drive DRM into the core of the system, but I'm doubting they will do this to the detriment of non-M$ applications.

        4. Anonymous Coward
          Anonymous Coward

          Re: GIGO

          MICRO$HAFT $UX BALLZ

  12. Hans 1
    Windows

    Time will tell, but I doubt this is gonna come to anything – the return of Windows RT, which will fail for the exact same reasons it failed before. Linux, on the other hand, will not have trouble with a new architecture; everything is a recompile away, hopefully ...

    I also doubt MS will have the patience to push this platform until it gets traction, and given MS' recent and past track-record, nobody will bank on it anyway.

    I cannot see how MS can pull this one off.

    1. Anonymous Coward
      Anonymous Coward

      I also doubt MS will have the patience to push this platform until it gets traction, and given MS' recent and past track-record, nobody will bank on it anyway.

      A view I share. As described the technology sounds very interesting, but over and above the "it's Microsoft", I have a further challenge: why have none of Intel, AMD, ARM or Samsung developed a similar approach, or bought this particular technology in from academia? You can argue that Intel, and to an extent ARM, are victims of their own success and would dismiss it as "not invented here", but AMD could certainly do with a technology break-out.

      Do those companies know something that MS don't?

      1. Anonymous Coward
        Anonymous Coward

        I suspect they have. There was a lot of work in the late 90s on "alternative architectures" like asynchronous, clockless CPU stuff. Most of it seems to have been abandoned in the early 00s though. Perhaps it was just too far ahead of its time.

      2. Brewster's Angle Grinder Silver badge

        "Why have neither Intel, AMD, ARM or Samsung developed a similar approach, or bought this particular technology in from academia?"

        That was pre Spectre/Meltdown. I bet it looks a whole lot more attractive now people have taken an interest in the side channels CPUs create.

      3. Paul Crawford Silver badge

        It is probably much, much simpler, and it's why x86 persists, why Windows RT was doomed, and why practically all phones use ARM chips: software.

        No one really wants to recompile, test (yes, I know it's a novel concept), debug and support existing software for a new product hardly anyone uses. And so the new product remains hardly used.

      4. FIA Silver badge

        Why have none of Intel, AMD, ARM or Samsung developed a similar approach, or bought this particular technology in from academia? You can argue that Intel, and to an extent ARM, are victims of their own success and would dismiss it as "not invented here", but AMD could certainly do with a technology break-out.

        Do those companies know something that MS don't?

        Do Samsung have a history of innovating in the CPU space? I get that they're a core ARM licensee, but they do more 'mass market' stuff, don't they?

        Intel couldn't be seen to do anything to significantly de-stabilise X86 as that's where the money comes from. Although they have tried a few times.

        ARM are stuck with ARM, and AMD doesn't have the thing that Microsoft have lots of... spare cash. (Remember, whilst we don't care any more, Windows is still the dominant computer OS, and still makes Microsoft a lot of money).

        Companies with lots of money can spend it doing R&D, hence this, and why Apple do their own CPUs and GPUs now.

      5. DanceMan

        Re: Why have AMD not developed a similar approach

        AMD don't have the money, while MS does.

      6. doublelayer Silver badge

        There's a chance, but not a big one

        The way I see it, microsoft have chosen a good time to think about switching over, as they are at a relatively pivotal point. This is similar to the many stories about their thoughts of running windows 10 on arm. I don't see a reason this has to fail, but I can see lots of ways it could. The last time microsoft tried it, for example, they got windows RT and it didn't succeed. They will need to realize that very little is going through the windows store and that the rest needs to be available. That means either getting devs to recompile a lot of things or making a compatibility layer. However, if they manage that, I see no reason this couldn't be a new architecture.

        However, given microsoft's track record with this and their current software base, I doubt it will happen. Apple could switch to arm because their low-end users get their software from the appstore, and their high-end users use software made by companies that have enough money to recompile and test the new code to death. Linux can switch to most things because the software can be recompiled by anyone and patches provided by anyone with the knowledge and inclination. If microsoft makes this available, and things start to break, it may fail at that point. They aren't really providing something that we couldn't get before, so it will need to be very good for it to get the chance to become better.

      7. Anonymous Coward
        Anonymous Coward

        The more interesting question...

        > Why have neither Intel, AMD, ARM or Samsung developed a similar approach

        ...is why Microsoft *did* do it, only to apparently shut it down "since the research questions have been answered" (by the looks of it, in the positive).

        That, along with the last sentence about not upsetting their "silicon partners" reads to me like one important strategic goal of this project would have been precisely to put pressure on those partners by saying, "look, you're not indispensable, in fact we could really upset your boat if we wanted to, so now about those prices..."

        PS: I must stress, I am talking from a position of complete ignorance about these things. Take the above more as a question than as any form of assertion.

    2. TheVogon

      "the return of Windows RT which will fail for the exact same reasons it failed before."

      Or maybe not:

      https://www.theregister.co.uk/2017/05/12/microsofts_windows_10_armtwist_comes_closer_with_first_demonstration/

    3. whitepines
      Big Brother

      Unless of course chips made with this new architecture look for a Microsoft signature to even run programs...good luck sidestepping that without a prison sentence.

  13. Dan 55 Silver badge

    MS and getting it right first time

    Hopefully they can do for their CPU what they're having trouble doing for their laptops and operating systems.

  14. Anonymous Coward
    Anonymous Coward

    Interesting stuff

    Good article and, just for a change, I'm now interested in something that Microsoft are working on. Something to keep an eye on.

    As an aside, I can see one additional reason for that page to have been taken down : it lists two 'People' and 15 'Interns'. Interns are people too, you know.

  15. DrXym

    The processor really shouldn't matter to applications these days.

    Operating systems are more than capable of compiling a portable application instruction set like LLVM bitcode into whatever the native instruction set is.

    The article suggests MS might be doing that, although it's unclear if devs have to do that last step as part of packaging, or if the OS or an intermediate packaging / app store does it. Come to think of it, I wonder why Linux distros like Red Hat & Ubuntu aren't doing this too - experimenting with building some of the higher-level apps in a noarch format and compiling them to binary during installation.

    1. bombastic bob Silver badge
      Devil

      Re: The processor really shouldn't matter to applications these days.

      "building some of the higher level apps in a noarch format and compiling them to binary during installation."

      Already being done, by FreeBSD in fact. Check out FreeBSD's "ports" system - build from SOURCE, baby! OK they have packages too, but the option to EASILY build from source departs from the Linux distros I've seen [where loading all of the correct 'source' and '-dev' packages can become tedious]. FBSD is designed for compiling, so the headers and stuff will all be there whenever you install a library.

      The downside: it can take several hours to build Firefox or LibreOffice or any version of gcc, like for a cross-compiler, or because the dweebs that wrote 'whatever' software _DEMAND_ the bleeding-edge version (or a specific version) of the compiler. So yeah, best to install THOSE as pre-compiled binary packages. And having the option is worth it.

      /me points out that kernel modules are typically built from source, for obvious reasons.

      (also worth pointing out, I understand that java p-code is 'compiled' into native code when you install Android applications on any version since [I think] Android 5)

      1. DrXym

        Re: The processor really shouldn't matter to applications these days.

        I don't mean built from source. I mean the rpm / deb contains the compiled binary as LLVM bitcode, and then during installation the binary is turned into a native executable. This is a relatively quick step to perform and doesn't require an entire development toolchain.

        From an app's perspective it means shipping a single package that could be used on any supported architecture. It means the product could be instrumented with additional security checks configured by the administrator, modified to run in a sandbox / virtual environment and other things of that nature.
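
        As a rough sketch of how that flow could look with today's LLVM tools (hedged: the file name is invented; clang and llc are the stock LLVM commands):

            /* app.c -- the developer ships this once, as portable bitcode:
             *     clang -O2 -emit-llvm -c app.c -o app.bc
             * and the installer lowers it to native code on the target:
             *     llc app.bc -o app.s && clang app.s -o app
             * The same app.bc can be lowered for another architecture by
             * passing e.g. -mtriple=aarch64-linux-gnu to llc. */
            #include <stdio.h>

            int main(void)
            {
                printf("lowered to native code at install time\n");
                return 0;
            }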

        1. Kristian Walsh Silver badge

          Re: The processor really shouldn't matter to applications these days.

          A problem particular to Linux is that the system-call numbers are different on different architectures (notably between x86 32-bit and 64-bit). I'm sure it's not an insurmountable issue, but fixing it does require the final-stage compiler to know a little more about the bitcode than a straight compiler would.

          The reason why adoption would be slow is that storage space is cheap, and posting binaries allows the maintainers to keep compatibility promises: basically if there's an ARM binary up, you can be sure the maintainer has at least run the package on ARM. Right now, there are only three or four major architectures in use (which three or four depends on whether the package is used in embedded or server/desktop applications), so it's not too much hassle for a package maintainer to build 3 versions of a package and post them to a repo. The alternative saves a little time, but then the maintainer will have to deal with the possibility of someone using their package on an architecture they've never built it on themselves.

          1. FIA Silver badge

            Re: The processor really shouldn't matter to applications these days.

            A problem particular to Linux is that the system-call numbers are different on different architectures (notably between x86 32-bit and 64-bit). I'm sure it's not an unsurmountable issue, but fixing it does require the final-stage compiler to know a little more about the bitcode than a straight compiler would.

            This would only affect static binaries and the odd thing that actually makes system calls directly. Most stuff links against libc, so it wouldn't be an issue.
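
            To make that concrete, a minimal Linux-only C sketch (assuming glibc; the libc wrapper is portable, the raw number is what differs per architecture):

                #define _GNU_SOURCE
                #include <unistd.h>
                #include <sys/syscall.h>

                int main(void)
                {
                    /* Portable: libc maps this to the right syscall number
                       for whatever architecture it was built for. */
                    write(1, "via libc\n", 9);

                    /* Not portable: SYS_write is 1 on x86-64, 4 on 32-bit x86
                       and 64 on AArch64 -- the mapping a final-stage compiler
                       or static binary has to get right. */
                    syscall(SYS_write, 1, "via raw syscall\n", 16);
                    return 0;
                }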

  16. Steve Channell
    Windows

    CPU meets EPIC / GPGPU hybrid

    All this talk suggests a soft CPU designed to avoid the problems of Itanic, which pushed all the parallel logic into the compiler but never envisioned multi-core processors that make parallel execution quicker with OS threads than with explicit parallel instructions. Once vendors eventually got hold of silicon and compiler kits it was too late to fix the flaws in the EPIC design.

    Borrowing the GPGPU paradigm of small kernels of code that can be executed in parallel seems like a better approach, and operating systems seem a very good test because each SYSCALL is designed to be atomic.

    If it can be made to run effectively, it should provide breathtaking performance.

    1. Torben Mogensen

      Re: CPU meets EPIC / GPGPU hybrid

      Actually, the description reminded me a lot of EPIC/Itanium: Code compiled into explicit groups of instructions that can execute in parallel. The main difference seems to be that each group has its own local registers. Intel had problems getting pure static scheduling to run fast enough, so they added run-time scheduling on top, which made the processor horribly complex.

      I can't say if Microsoft has found a way to solve this problem, but it still seems like an attempt to get code written for single-core sequential processors to automagically run fast. There is a limit to how far you can get on this route. The future belongs to explicitly parallel programming languages that do not assume memory is a flat sequential address space.

  17. Michael H.F. Wilkinson Silver badge
    Coat

    So have they ported Edge to this EDGE architecture?

    Sorry, just couldn't resist

    I'll get me coat. The one with "Get thee to a punnery" in the pocket please

    More seriously, this might well be an interesting development in CPU design. I will wait and see.

    1. bombastic bob Silver badge
      Trollface

      Re: So have they ported Edge to this EDGE architecture?

      PUN-ishment. Heh.

      1. adam 40 Silver badge

          That wasn't punny


    2. Doctor Syntax Silver badge

      Re: So have they ported Edge to this EDGE architecture?

      Maybe Microsoft made the names coincide deliberately. Edge becomes MS branding across a range of product types?

  18. Peter Gathercole Silver badge

    Garbage recycling analogy

    Whilst your analogy is clever, it doesn't really describe mainstream modern processors.

    What you've ignored is hardware multi-threading like SMT or hyperthreading.

    To modify your model, this provides more than one input conveyor to try to keep the backend processors busy, and modern register renaming removes a lot of the register contention mentioned in the article. This allows the multiple execution units to be kept much busier.

    The equivalent to the 'independent code blocks' are the threads, which can be made as independent as you like.

    I've argued before that putting the intelligence for keeping the multiple execution units busy into the compiler means that code becomes processor-model-specific. This is the reason why it is necessary to put the code annotation into the executable: to try to allow the creation of generic code that will run reasonably well on multiple members of the processor family.

    Over time, the compiler becomes unwieldy and late compared to the processor timelines, blunting the impact of hardware development. But putting the instruction-scheduling decision making into the silicon, as in current processors, increases the complexity of the die, which becomes a clock speed limit.

    I agree that this looks like an interesting architecture, and that there may be a future in this type of processor, but don't count out the existing processor designs yet. They've got a big head start!
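
    To make the "keeping execution units busy" point concrete, a hedged C sketch (nothing E2-specific; the function name is invented): splitting one dependency chain into four lets a superscalar core - or a static scheduler - overlap the multiplies and adds.

        #include <stddef.h>

        /* One accumulator serialises every add behind the previous one.
           Four independent accumulators expose work that multiple
           execution units can run in parallel. */
        double dot(const double *a, const double *b, size_t n)
        {
            double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            size_t i;
            for (i = 0; i + 4 <= n; i += 4) {
                s0 += a[i]     * b[i];
                s1 += a[i + 1] * b[i + 1];
                s2 += a[i + 2] * b[i + 2];
                s3 += a[i + 3] * b[i + 3];
            }
            for (; i < n; i++)   /* leftovers */
                s0 += a[i] * b[i];
            return (s0 + s1) + (s2 + s3);
        }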

  19. Giles Jones Gold badge

    Wintel anti-competitive practices killed off a lot of decent tech. So part of me hopes Microsoft suffers the same fate.

    I can't forget the Sony laptop that was using a Transmeta CPU being cancelled because of Intel intervening.

    1. Kristian Walsh Silver badge

      Transmeta...

      Transmeta failed because they couldn't deliver the cost/performance ratio that they'd promised to customers. Toshiba (it was Toshiba, not Sony) cancelled that laptop product because they couldn't rely on Transmeta delivering the CPUs for it. Intel didn't impede them, because Intel didn't have to - they messed up themselves.

      Intel pretty much ignored Transmeta - at this time, it was capturing Apple's business from Motorola and IBM: getting the last remaining non-Intel PC maker on board was something that was much more valuable to Intel.

      And in any case, Intel's bullying isn't Microsoft's. If the two companies were so joined at the hip as you imply, then Microsoft would not have supported AMD's 64-bit Opteron chips at their launch back in 2003, and Opteron competed with Intel much more directly than Transmeta did.

  20. karlkarl Silver badge

    It's a shame. I wish that they would force developers to use their stupid CrapStore for this platform (requiring more than just a UNIX-like recompile). That way we can watch this platform flop too :)

  21. steviebuk Silver badge

    No doubt....

    ...they'll put DRM on the chips which will detect pirate copies of Windows 10. Much like they've wanted for years. Ignoring the fact that such systems can and will fail at points, locking people out of a PC and software they've paid for.

    A tiny bit like the DVD and Blu-ray warnings we all get before the movie. That pirating it is illegal, blah blah, "Would you steal a hand bag?", "Would you steal a car?". Ignoring the fact that I haven't FING stolen it – I've purchased it, but now have to watch your shitty advert. Yet IF I'd actually pirated it, I wouldn't have to watch this, as the pirates cut that bit out. It's a sad day when you sometimes get better service from pirates than the official studios that released it.

    1. TheVogon

      Re: No doubt....

      "they'll put DRM on the chips which will detect pirate copies of Windows 10."

      They don't need DRM on chips for that. They already detect pirate copies and it just displays an irritating warning.

  22. Unicornpiss
    Meh

    I guess we'll see..

    Microsoft is such a mixed bag these days of surprising innovation and stupefying failures. I often don't think all of their divisions are on speaking terms with each other. It would be nice to see a new architecture out there, but I'm not holding my breath just yet.

  23. I Am Spartacus
    Linux

    Real opportunity to move forwards

    Today's chips are based on the old x86 and 286 instruction sets. The whole way they page large memory sets makes them unnecessarily complex. We can see this in the way branch prediction and optimistic instruction execution have caused the Spectre and Meltdown bugs. These are bugs fabricated in the silicon, making them very difficult to correct. In fact, it will take Intel and AMD years to come up with CPU designs that eliminate them completely.

    Add to this the number of legacy instructions in the hugely complex x86 chipset that have to be maintained, because somewhere, some bit of code, might just make use of them.

    If Microsoft really have come up with a parallel execution model that allows predictive execution to be handled not at the CPU but at the compiler level, then this just might be something new. It is early days, I know, but it sounds interesting. If they can do this whilst making the context switches efficient, then this might be a way forward to a whole new chipset. If that can be done whilst making motherboards simpler, then so much the better.

    I do take on board that Microsoft may very well embed hardware instructions that detect illegal use of Windows; that is their absolute right. (Last week I sat in a lounge in a major airport in the Middle East, amusing myself that the display panels had "This version of Windows is not authenticated" in the corner!) Why does this bother people? If you want to run Windows, then you should pay Microsoft for your copy.

    They may very well include DRM. Again, is this necessarily bad? If half the people who pirate films, games, music actually paid for it, the net cost would be driven down. (And yes, I do believe in Santa Claus, thank you.) Again, personally, I don't see this necessarily as bad.

    But note the article - they already have Linux running on it. So, you can have your FOSS system, with all the access you want to the hardware. You can roll your own pirate scheme for you and your friends if you want.

    The summary is that Microsoft may choose to break the Intel/AMD monopoly with a chipset and instruction set that is designed for the 21st century. And that sounds to me like a good thing.

    This is from someone who is very anti-microsoft and runs Linux everywhere he can.

    1. Doctor Syntax Silver badge

      Re: Real opportunity to move forwards

      "They may very well include DRM. Again, is this necessarily bad?"

      If it comes up with false positives, yes. How do you know the version running the display panels was hooky? Could it simply be that they were isolated from the internet & couldn't phone home to be verified? You do not want your radiotherapy machine to lock up in the middle of your treatment because it suddenly decided it wasn't running legit Windows.

      1. I Am Spartacus

        Re: Real opportunity to move forwards

        @Doctor Syntax:

        Very true. This reminds me of the thread yesterday on security of SCADA systems when you have air-gapped PCs. Yes, Redmond do need to find a proper way of doing DRM confirmation. But for a company like Microsoft, this is not beyond our technology to solve.

        By the way, the screens I saw were displaying rolling news, so clearly there was an internet connection!

  24. Dominic Sweetman

    Old MIPS hacker

    Early RISC CPUs (late-80s MIPS and SPARC) really were a lot faster than contemporary x86. That was because the instruction set was redesigned to pipeline nicely. Once out-of-order CPUs were worked out and built successfully (though perhaps on the graves of burnt-out CPU designers...) the RISCs had only a small edge on x86, which Intel's better design/silicon easily overwhelmed.

    The approach described is an attempt to create an instruction set which better uses an out-of-order core. This is an interesting idea. Probably only Microsoft, Apple and Google have a big enough codebase and captive audience to develop such a thing.

  25. Anonymous Coward
    Anonymous Coward

    I'm sure it will be THE BEST PLATFORM

    for running Windows

  26. Anonymous Coward
    Anonymous Coward

    OPCODE: BSCR,1

    Blue screen opcode, of course. And forced cpu architecture extension updates. And only compatible with OneDrive opcodes to read data.

  27. Daniel von Asmuth
    Windows

    Hail the future monopoly

    Soon Microsoft can do without Intel, AMD, Dell, HP, Lenovo, & co (and we still get Linux)

    1. TheVogon

      Re: Hail the future monopoly

      "Soon Microsoft can do without Intel, AMD, Dell, HP, Lenovo, & co (and we still get Linux)"

      Yep, Linux runs under the Windows kernel now too.

  28. Anonymous Coward
    Anonymous Coward

    Microsoft parallelism is the CPU equivalent of the chatty 'salesperson' coming to your front door and distracting you whilst his mates have broken in round the back and are now stealing your valuables.

  29. John Sanders
    WTF?

    Obvious questions...

    Why?

    What for exactly?

    What problem does it solve? (That isn't solved already)

  30. Robert Sneddon

    Experiment(al)

    I don't see MS building commercial silicon based on this experimental work -- the Fine Article mentioned that after several years of work the current silicon is Xilinx FPGAs running at 50MHz. They're a long way from a taped-out, tested and optimised design coming off a 10nm fab somewhere in quantity 100,000. The rest of the project is cycle-accurate simulations, Matrix-style "silicon" that isn't available off-the-shelf to anyone.

    I can see them using what they've learned from developing this CPU design in future compiler and OS releases running on existing commodity silicon like x86 and ARM, assuming it can be made to work on that sort of classic CPU architecture and makes a noticeable difference in speed, security etc.

  31. Anonymous Coward
    Anonymous Coward

    "in conflict with our existing silicon partners."

    That could have come straight from the mouth of the current president of the US of A.

  32. dnicholas

    Curiously removed...

    "Microsoft's website doesn't have a lot of information about E2 – and what was online now isn't. Last week, it curiously removed this page about the work, leaving the URL to redirect to an unrelated project."

    Nothing unusual about this, Microsoft's websites are a garbage fire

  33. PJ_moi
    FAIL

    Microsoft ports... successfully?

    Microsoft haven't built *any* hardware that both works and is long-lasting for more than 20 years. Its last hardware success was - and still is - the Microsoft Mouse. All other hardware projects have fallen by the wayside (Zune, Kinect, Windows Phone...) or, if still (barely) around, riddled with problems. (Xbox? Surface range?) As a result, the idea of MS successfully porting *anything* to a home-grown processor architecture has as much chance of flying long-term as I do.

  34. -tim

    Another level of exploits to defend against?

    These systems have yet another level of security problems. They are much better targets for Return Oriented Programming, and the information flow system also has a state-machine-like system that can be exploited to make use of the unused or deferred data flow states to move data around.

  35. John Savard

    TRIPS

    Apparently the original TRIPS project went quite far: a 500 MHz prototype chip achieved half the goal - 500 GFLOPS. That was on a 130nm process.

    Since Microsoft has now killed the project to allay fears on the part of their "existing silicon partners", though, it looks like this may not see the light of day. Of course, a massively parallel chip that runs some benchmarks impressively but is slower at running any real software - which is what they could easily have ended up with - is no great loss. Research projects don't always achieve their hoped-for goals, so AMD and Intel may perhaps end up not switching to making the Microsoft chip under license.
