Now Microsoft ports Windows 10, Linux to homegrown CPU design


Interesting. They are apparently ahead of rivals in quantum CPU technology too:

https://www.bloomberg.com/news/articles/2018-03-28/microsoft-edges-closer-to-quantum-computer-based-on-elusive-particle


They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

Anonymous Coward

M$ uses Chinese ESP32 CPU as surveillance chips (Clipper chip v2)

M$ also uses the ESP32 (a cheap Chinese CPU) as the basis for their new surveillance chips. Exactly like the NSA's spying "Clipper chip" from 1993, so they are trying it again with M$ as their shopfront.

"The Clipper chip was a chipset that was developed and promoted by the United States National Security Agency (NSA) as an encryption device that secured "voice and data messages" with a built-in backdoor. It was intended to be adopted by telecommunications companies for voice transmission. It can encipher and decipher messages."


They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

Perhaps, but only if you are not observing it, so you'd have to be a Linux / Mac user for Windows to be in that state. As soon as you observe it, you'll see it either working or not.


They have been the world leaders in quantum computers for years, even as far back as Windows 2000 the OS was both working and not working simultaneously.

Perhaps, but only if you are not observing it, so you'd have to be a Linux / Mac user for Windows to be in that state. As soon as you observe it, you'll see it either working or not.

Nope, you merely have to look away for a moment or step away from your desk and it'll flip states*

* I think it's a hunting trick to attract unwary technical support personnel close to it so it can suck the life from their souls.


Bet they come with some great marketing wank/spyware built right into the CPU


Computer says "No"

Only M$-approved programs allowed anywhere near your computer. Expect this chip to phone home every time you take a shit. Reminds me of an ad M$ was running 2+ years ago about how they would stamp out computer crime by spying on everything you do.


Re: Computer says "No"

along with that, consider ONLY running apps from "The Store", mandatory updates, and the worst possible extreme incarnation of 'management engine' you can possibly think up, in your worst nightmares...

Oh, and DRM. Of course.

I admit the tech sounds interesting, but this almost sounds like using microcode as an instruction set, the same *kind* of way RISC made CPUs lightweight but not necessarily faster (ARM still catching up to x86 tech).

But it may have a niche market for cell phones and other 'slab' devices. It almost sounds too late because windows phone is officially _DEAD_.

FIA

Re: Computer says "No"

I admit the tech sounds interesting, but this almost sounds like using microcode as an instruction set, the same *kind* of way RISC made CPUs lightweight but not necessarily faster (ARM still catching up to x86 tech).

Isn't this kind of backwards? Remember ARM were an order of magnitude faster than X86 when they debuted, it's just the focus switched to low power/embedded when Intel's focus remained on high performance.

Microcode was the way of getting the benefits of RISC on the horrific X86 instruction set.

It wasn't until the advent of superscalar and OOO stuff around the P5 and P6 that the Intel stuff really took off speed-wise; by that point Acorn was almost dead and the market for ARMs was phones and PDAs and the like. Ironically, the 'every instruction can be conditional' approach ARM took with their instruction set didn't lend itself well to the OOO and superscalar stuff.

A modern ARM core, especially a 64-bit one tuned for speed, can be quite quick too. See the CPUs coming out of Apple for example.
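To illustrate the 'every instruction can be conditional' point above, here is a toy model (deliberately simplified, not real ARM semantics): each instruction carries a predicate, so a short if/else can be encoded without any branch, but the outcome of every instruction now depends on the flags, which is exactly what makes life harder for an out-of-order core.

```python
# Toy sketch of ARM-style predicated execution. Instructions are
# (condition, register, value) tuples; an instruction only takes
# effect if its condition matches the current flags.

def execute(instrs, flags):
    """Run a list of predicated instructions against a flag state."""
    regs = {}
    for cond, reg, val in instrs:
        # 'al' means always-execute; otherwise check the named flag
        if cond == "al" or flags.get(cond, False):
            regs[reg] = val
    return regs

# r0 = 1 if the 'eq' flag is set, else r0 = 2 -- no branch needed
program = [("eq", "r0", 1), ("ne", "r0", 2)]

assert execute(program, {"eq": True}) == {"r0": 1}
assert execute(program, {"ne": True}) == {"r0": 2}
```

The branch-free encoding is compact, but note that every instruction's effect is contingent on the flags, so the dependency chain through the flag state is what complicates aggressive reordering.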


Re: Computer says "No"

"Microcode was the way of getting the benefits of RISC on the horrific X86 instruction set."

Actually, microcode has been around since CPUs existed; it's how they work internally. From what I was told (in a college class), the IBM 360 had microcode on plastic cards with holes in them, which were read via changes in capacitance by sense wires running along the top and bottom (rows and columns). That way you could re-program the IBM 360's microcode by swapping in a stack of cards.

The concept of RISC (as I recall) was to get closer to the microcode to streamline the instruction pipeline and reduce the size (etc.) of the core, though it's not the same thing as BEING microcode.

I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's. Maybe it was, maybe it wasn't. I don't remember hearing that. I'm pretty sure it was the other way around.


Re: Computer says "No" @bombastic bob

Benchmarking is a fool's game, of course, but the ARM at introduction was sufficiently faster than the then high-end x86, the 386, that for a while Acorn sold it on an ISA card for use as a coprocessor.

The marketing puff is here; a PCW review is here, though it fails to come to a definitive conclusion on ARM v 386 it makes statements like "The 8MHz ARM processor is one of the fastest microprocessors available today" and "A fairer [price] comparison would perhaps be with other fast coprocessor boards for the IBM PC, such as the 80386, the 68020 and the Transputer boards" which certainly seems to bracket it with those others.


Re: Computer says "No"

"I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's"

DEC Alpha sure was.


Re: Computer says "No"

The concept of RISC (as I recall) was to get closer to the microcode to streamline the instruction pipeline and reduce the size (etc.) of the core, though it's not the same thing as BEING microcode.

The original RISC architectures like MIPS and SPARC did not use microcode at all. In that sense, the user-visible instructions were the microcode. Being able to avoid microcode is the main benefit of having simple and regularly encoded fixed-length instructions. I am not sure if any later implementations of these architectures use it.

I don't recall MIPS or ARM _EVER_ being faster than the high-end x86's.

In the 1990s, RISC processors generally ran circles around Intel's 486 and the original Pentium. I am not sure when the tables turned. Maybe around the time when the Pentium III was introduced. Should dig up old benchmarks.
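The "no microcode needed" point about simple, regularly encoded fixed-length instructions can be made concrete with a toy decoder (an illustrative encoding, not any real ISA): with fixed-width instructions, boundaries are trivial arithmetic and many decoders can work in parallel, whereas a variable-length ISA must partially decode each instruction before it even knows where the next one starts.

```python
# Toy contrast between fixed-length and variable-length instruction
# decoding. Neither encoding is real; the point is the shape of the
# decode loop.

def decode_fixed(blob, width=4):
    # Instruction k starts at byte k*width: boundaries are known
    # up front, so decode slots are independent of one another.
    return [blob[i:i + width] for i in range(0, len(blob), width)]

def decode_variable(blob):
    # Here the first byte of each instruction gives its total length,
    # so decoding is inherently serial: instruction N+1 cannot be
    # located until instruction N has been at least partially decoded.
    out, i = [], 0
    while i < len(blob):
        length = blob[i]
        out.append(blob[i:i + length])
        i += length
    return out

fixed = bytes([1, 2, 3, 4, 5, 6, 7, 8])
assert decode_fixed(fixed) == [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8])]

variable = bytes([2, 0xAA, 3, 0xBB, 0xCC])
assert decode_variable(variable) == [bytes([2, 0xAA]), bytes([3, 0xBB, 0xCC])]
```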


Re: Computer says "No"

FIA said:

Isn't this kind of backwards? Remember ARM were an order of magnitude faster than X86 when they debuted, it's just the focus switched to low power/embedded when Intel's focus remained on high performance.

Well, I was around at the time the first Acorn Archimedes computers came out. They were quick, but they weren't quicker than a high-end PC, especially one with an '87 coprocessor. They were cheaper though, so whilst 386-equipped PCs did exist and would blow the pants off an Archimedes, no one could afford a PC like that. Even a 286- or 8086-equipped PC was an expensive, comparatively rare item back then.

A modern ARM core, especially a 64-bit one tuned for speed, can be quite quick too. See the CPUs coming out of Apple for example.

The guys behind the K computer in Japan are considering ARM for their next supercomputer. ARMs have always been excellent choices where the core is used to marshal other units (video decompressors, maths units, etc.), which is what's going on inside a phone.

Making an ARM as fast as a big Intel chip is mostly a matter of achieving the same DRAM bandwidth, having big caches, etc. This all takes a lot of transistors, just like it does in X86s. The advantage ARM has is that they don't have to translate a CISC instruction set (X86) into a RISC instruction set prior to execution. That's what's going on inside modern X86 processors. By not doing this, ARM saves a lot of transistors.

Intel's current range of big CPUs have main memory bandwidths of 100GByte/sec; that's a huge amount, and is unmatched by any ARM. This allows Xeons to really chomp through an awful lot of processing very quickly indeed, keeping their cores fully fed. Same with AMD's current processors; they're monsters.
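The "translate a CISC instruction set into a RISC instruction set prior to execution" step mentioned above can be sketched with a hypothetical micro-op cracker (the instruction names and micro-op format here are invented for illustration): an x86-style instruction with a memory operand gets split into load / compute / store micro-ops, a front-end stage a load-store RISC design never needs.

```python
# Hypothetical sketch of CISC-to-micro-op "cracking". An instruction
# is an (opcode, destination, source) tuple.

def crack(instr):
    op, dst, src = instr
    if op == "add_mem":  # add a register into a memory location
        return [("load", "tmp", dst),    # tmp <- mem[dst]
                ("add", "tmp", src),     # tmp <- tmp + src
                ("store", dst, "tmp")]   # mem[dst] <- tmp
    return [instr]                       # simple ops pass through unchanged

uops = crack(("add_mem", "0x1000", "eax"))
assert len(uops) == 3 and uops[0][0] == "load"
assert crack(("mov", "r1", "r2")) == [("mov", "r1", "r2")]
```

The transistor saving the comment refers to is essentially this stage (plus the variable-length decode feeding it) not existing at all on a RISC front end.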


Re: Computer says "No"

MacroRodent said:

In the 1990s, RISC processors generally ran circles around Intel's 486 and the original Pentium. I am not sure when the tables turned. Maybe around the time when the Pentium III was introduced. Should dig up old benchmarks.

True, but then again SPARC-, Alpha- or MIPS-based machines cost a huge amount of money, whereas PCs were pretty cheap. A Silicon Graphics workstation was the object of envy that never sat on my desk at work...

The writing was on the wall by 1995, and by 2000 there was nothing to justify picking anything other than x86, except in particular cases for particular purposes.

The 400MHz PPC7400 PowerPC chip from Motorola was quicker at certain types of maths (DSP) than a 4GHz Pentium 4, largely because Intel hadn't bought into the SIMD idea. It's only quite recently that Intel finally added an FMA instruction to SSE / AVX that meant it wasn't hamstrung. Not adding this was apparently a deliberate policy to make Itanium (which always had an FMA) look good.

Even today there's a lot of radar systems based on PowerPC.

The Cell processor from IBM (think Sony PS3) was quicker (for some specific purposes) than anything Intel had; in fact it took about 6-8 years for Intel to get close to beating the Cell, and only comparatively recently have they matched its memory bandwidth. Experiencing the full might of a proper Cell processor (i.e. not one inside a PS3) was a mind-bending moment; they were truly unbelievably good. Hard to program for, unless one had grown up on Transputers. It's a real pity IBM canned it, because a Cell processor from a modern silicon fab with the SPE count dialled up to max would still eat everything else for breakfast. Though given that the idea of today's JavaScript generation getting to grips with something as complex as Cell is laughable, perhaps IBM took the right commercial decision.

Interestingly, the modern CPUs from Intel and especially AMD resemble Cell more and more (lots of cores on a fast internal bus, lots of IO bandwidth); the only substantive difference is that their cores are full-on x86 cores with (annoying) caches, whereas all an SPE was good for was maths and nothing else, run out of its very handy SRAM, with no cache engine to get in the way.


Re: Computer says "No"

"Actually, microcode has been around since CPUs existed. it's how they work internally."

True, but in the beginning microcode interpreted the instructions, whereas modern processors compile instructions into microcode. This gets rid of the interpretation overhead (which is considerable).
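The interpret-versus-compile distinction above can be shown with a toy (nothing like real microcode, but the overhead pattern is the same): an interpreter performs a dispatch (table lookup) for every instruction on every run, while translating once ahead of time resolves all lookups up front, leaving only the micro-operations themselves to execute.

```python
# Toy model: interpretation of instructions via a microcode table vs.
# one-time translation into directly executable operations.

MICROCODE = {"inc": [lambda s: s + 1],
             "dbl": [lambda s: s * 2]}

def interpret(program, state=0):
    """Look up each instruction's micro-ops on every execution."""
    dispatches = 0
    for instr in program:
        for micro_op in MICROCODE[instr]:  # dispatch overhead each time
            dispatches += 1
            state = micro_op(state)
    return state, dispatches

def translate(program):
    """Resolve the microcode lookups once, ahead of execution."""
    return [m for instr in program for m in MICROCODE[instr]]

prog = ["inc", "dbl", "inc"] * 100
state, dispatches = interpret(prog)

state2 = 0
for op in translate(prog):   # straight-line run, no per-step lookup
    state2 = op(state2)

assert state == state2
assert dispatches == 300     # one dispatch per instruction, per run
```

If the program runs many times, the translated form pays the lookup cost once, which is the "considerable" overhead the comment says modern processors get rid of.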


Re: Computer says "No"

Waitasecond ... you are saying ... Microsoft will act like Apple?


Re: Computer says "No"

ARM was definitely faster than anything available from Intel at the time the Archimedes came out. BYTE published a series of Dhrystone benchmarks showing it was faster than anything but a Sun 3/80. This was before any of the workstation vendors such as Sun had moved off the M68K family to RISC architectures.

Sun, SGI, HP and various other parties brought out RISC architectures in the latter part of the 1980s, typically coming to market a couple of years before Intel brought out the 486. These machines were 2-3x the speed of a 386 on integer benchmarks, and typically had much faster graphics and I/O than mainstream PC architectures of the day.

RISC architectures were still appreciably faster than the 486 and Pentium until the Pentium Pro came out, although not the 2-3x advantage they used to enjoy. However, by the mid 1990's the majority of the I/O bottlenecks and other architectural baggage on the x86 family had been solved and Wintel/Lintel was a viable architecture for a general purpose workstation.

Linux and Windows NT made the ordinary PC a viable workstation platform, although NT didn't really mature as a platform until Windows 2000 came out. By 2000 or thereabouts the writing was on the wall for proprietary workstation architectures as they had stopped providing a compelling performance advantage over commodity PCs. RISC workstations hung on for a few more years, mostly running legacy unix software or meeting requirements for certified platforms.

Around 2005-6 AMD's Opteron brought 64 bit memory spaces to commodity hardware and removed the last major reason to stick with RISC workstations, which by then had ceased to provide any compelling performance advantage for the better part of a decade. The last RISC vendors stopped making workstations by about 2009 and by then most of their collateral was about running platforms certified for stuff like aerospace.

IBM's Power 8/9 are still significantly faster than Intel's offerings, although whether that translates into a compelling advantage is still up for debate. Raptor Systems is getting some press for their workstations, but their main schtick is security rather than performance.


Re: Computer says "No"

To say nothing about it running at a heady 3.1MHz and taking 16 years to update its code.


Good description, Register

Well written explanation with background. Thank you


> The code is also annotated by the compiler to describe the flow of data through the program, allowing the CPU to schedule instruction blocks accordingly.

Wasn't this one of the downfalls of Itanic? That the compiler became too complex, sorta like the software on the F-35: development took so long that it couldn't keep up with hardware changes?
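As a rough sketch of what "annotated by the compiler to describe the flow of data" could mean in practice (a toy model only, not Microsoft's actual E2/EDGE encoding): each instruction in a block names its inputs, so the hardware can fire any instruction the moment all its inputs have arrived, with results flowing directly to their consumers instead of dependencies being rediscovered at runtime.

```python
# Toy dataflow-scheduled instruction block. Each instruction is
# name -> (list of input names, function computing its result).

def run_block(instrs):
    """Fire instructions as soon as all of their inputs are ready."""
    ready = {}                 # name -> computed result
    pending = dict(instrs)
    order = []                 # record actual firing order
    while pending:
        for name, (inputs, fn) in list(pending.items()):
            if all(i in ready for i in inputs):
                ready[name] = fn(*[ready[i] for i in inputs])
                order.append(name)
                del pending[name]
    return ready, order

block = {
    "a":   ([],           lambda: 2),
    "b":   ([],           lambda: 3),
    "mul": (["a", "b"],   lambda x, y: x * y),
    "add": (["mul", "a"], lambda x, y: x + y),
}
results, order = run_block(block)
assert results["add"] == 8                       # (2*3) + 2
assert order.index("mul") < order.index("add")   # dependency order respected
```

The compiler's job in such a design is producing those input annotations; the Itanic comparison in the comment is about how hard it is to do that well, statically, for all workloads.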


Itanic was wildly successful ...

The announcement alone ended development of architectures it was supposed to compete with. When the first implementation was ready for evaluation it achieved its second main goal: it needed so many transistors that no-one else could build a compatible product. It could sensibly compete on power usage with a toaster, despite getting worse benchmark scores than cheaper, cooler CPUs available at the time. After years of delays, when a product did reach the market, the final main goal was achieved (after a fashion): the price was at the high end of a monopoly product. The only problem was that (large NRE)/(small sales) made the cost about equal to the price.

Having the compiler make all the decisions about instruction ordering in advance sounds really cool until you remember some real-world problems: do you order the instructions based on the time required to fetch data from the cache or from DRAM? Guess wrong and data is not in the right place at the right time, and all the scheduling decisions made by the compiler become garbage. What if the hardware is optimised to spot multiply/divide by 0 or ±1? Again, the result arrives ahead of schedule and the CPU has to re-order everything.

I am not surprised it took Microsoft years to come up with something worth committing to silicon. I would expect more years to pass before they get performance/watt close to a modern X86/ARM.
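The cache-versus-DRAM guessing problem above can be put in (made-up) numbers with a toy model: a static schedule leaves a fixed number of empty slots after a load, sized for the assumed latency; if the load actually misses to DRAM, every later instruction slips by the difference, whereas an out-of-order core would fill that gap with other work.

```python
# Toy cost model for a statically scheduled (VLIW/EPIC-style) block.
# Latencies are invented for illustration.

CACHE_HIT, DRAM_MISS = 3, 100   # cycles

def static_schedule_cycles(load_latency, assumed=CACHE_HIT, rest=10):
    """Total cycles for a block: the compiler left 'assumed' slots
    after the load, plus 'rest' cycles of remaining instructions.
    If the load takes longer than assumed, the whole block stalls."""
    stall = max(0, load_latency - assumed)
    return assumed + rest + stall

assert static_schedule_cycles(CACHE_HIT) == 13    # guessed right: no stall
assert static_schedule_cycles(DRAM_MISS) == 110   # guessed wrong: 97-cycle slip
```

The asymmetry is the point: the compiler must pick one latency at build time, while actual latency varies per access, per run, and (as the next comment notes) per context switch.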


Re: Itanic was wildly successful ...

Not to mention that thing called multi-tasking. So what if each app is optimized when the scheduler then dumps you out to run the next thing in the queue? THAT was the main technical problem with the EPIC approach as far as I could see. Great product for a serial-processing mainframe... :-)


Re: Itanic was wildly successful ...

Not to mention that thing called multi-tasking. So what if each app is optimized when the scheduler then dumps you out to run the next thing in the queue.

Sun made a similar mistake with sliding stack frames (register windows) in SPARC; neat idea, until the scheduler comes along and schedules something else.

Itanium was popular with people developing DSP, because it had an FMA instruction. X86 didn't until, what, 5 years ago now? I took the addition of FMA to AVX / SSE as being the signal from Intel that Itanium was finally, irrevocably dead.


Re: Itanic was wildly successful ...

They were quite popular in numerical computing for a while, and not the only VLIW architecture to get some traction in that space (look up the Multiflow TRACE for example). VLIW also makes an appearance in DSPs but it's not terribly efficient for general purpose computing. Now that modern CPUs can do dependency analysis in realtime they get a multiple pipeline scheduling capability that can adapt in realtime.


Still at the FPGA stage after all these years?!?!?!?!???

Intel must have said no.


Re: Still at the FPGA stage after all these years?!?!?!?!???

You can absolutely guarantee Intel said no. This represents Microsoft turning into an Apple-ish end-to-end factory, where they make the hardware and the software.

I half expect Windows to become confined to Microsoft hardware only, like Apple. No more tinkering. No more buying whichever graphics card you feel like and slapping it in a tower and having it work. No more writing your own Windows apps without an MSDN account.

Great for Microsoft, but if that happens the days of open computing are over. To quote the Great Sage* himself, "they're selling hippy wigs in Woolworths, man. The greatest decade in the history of mankind is over. And as Presuming Ed here has pointed out, we have failed to paint it black."

* Bruce Robinson, natch.


Re: Still at the FPGA stage after all these years?!?!?!?!???

Great for Microsoft, but if that happens the days of open computing are over.

Nah!!

It really is at odds with their 'we like OSS now' talk.

Question is, would it 'sour the milk' enough for the terminally MS-dependent to finally make the effort to get off the nipple?

And as to No more buying whichever graphics card you feel like and slapping it in a tower and having it work: that really depended on how much effort the manufacturer put into the driver and how much MS borked the last update.


Re: Still at the FPGA stage after all these years?!?!?!?!???

"Great for Microsoft, but if that happens the days of open computing are over."

That must be why they ported Linux onto it . . .


Re: Still at the FPGA stage after all these years?!?!?!?!???

"I half expect Windows to become confined to Microsoft hardware only, like Apple."

Windows has a fair history of cross-platform CPU options, although only x86 / x64 were a success. For instance Alpha, MIPS, ARM, etc.


This post has been deleted by its author


Re: Still at the FPGA stage after all these years?!?!?!?!???

Totally agree with these tin foil hat comments. I mean if Microsoft were to own a large number of data centres where very small percentage improvements in work per watt or operations per second without increasing power can multiply out to a major $$ windfall it would make sense. But clearly this is about killing openness.


Application Support?

So Windows boots, but does it run Clippy and the BSOD?


Re: Application Support?

More importantly...

Does it run Crysis?


Re: Application Support?

All I need is Edlin.

Anonymous Coward

Well, it's Microsoft

Sooner or later it'll lose interest in its 'skunkworks' projects, distracted by something else.


Re: Well, it's Microsoft

Just like Google?

JDX

Re: Well, it's Microsoft

That is rather the point: they are speculative R&D projects that may throw out some interesting results even if the project itself is never marketable. Like concept cars.


Re: Well, it's Microsoft

Patents, patents, patents!


It's been a long time..

... since we've seen an MS product and dropped jaws.

I bet we won't see another anytime soon. I call vaporware.


Re: It's been a long time..

Windows 10 was pretty jaw dropping. They managed to turn Comic Sans into an operating system.


Re: It's been a long time..

They managed to turn Comic Sans into an operating system.

Quote of the month. Maybe quote of the year. Have a pint.


Re: It's been a long time..

Comic Sans and his trusty sidekick, Windows Defender.


Re: It's been a long time..

Hating on Comic Sans is a strong indicator of someone who knows a little, but not very much more, about typography, so the comparison is valid.


Re: It's been a long time..

Hating on Comic Sans is a strong indicator of someone who knows a little, but not very much more, about typography

Where, on your scale of mastery, does knowing that, in later life, Jan Tschichold recanted his insistence on using sans-serif faces in body text fall?

Does the name "Mickey Mouse" come to mind when you think of comics? Would Mickey Mouse Sans be a better or worse name for the face?

Illumination may come to you. Especially if you contemplate modern modems.


Is MS Qualcomm's saviour?

Given all the legal problems that QC is having, which are hitting their bottom line, I wonder if the QC 'C-level' execs see MS as their way to that retirement payoff?

If MS want to take this into the mainstream with things like their Surface product line, where they will have total control over everything that runs on the kit, then a control-freak company like MS may well have to consider buying QC.

Will MS want to take over a company such as QC, which seems on the face of it to be a bit of a pariah at the moment? We don't know all the details, but it sure seems that way on the surface.

It is nice to see a different CPU architecture appear every so often. Without the backing of MS it stands no chance of getting into the mainstream. Look what happened to the last one: VLIW didn't make a heck of a lot of difference in the long term.

Only time will tell.

Good article, El Reg, and for that you get a thumbs up.


GIGO

A conveyor belt of garbage into the CPU - the perfect analogy for M$FT's Windoze and Orifice software!


Re: GIGO

Anyone downvoting this care to tell us what the weather's like in Redmond today?


Re: GIGO

Remember that Office has >90% market share. There are decent alternatives too.

By that reasoning, either the world is on fire, or Office just isn't all that bad. (And yes, I also dislike the whole ribbon thing...)


Re: GIGO

Frankly, I've no idea, nor any connection to Microsoft except that I use their OS and software sometimes (I am actually writing this on a Mac, though), but I'm sick to the back teeth of the pack of morons that come out to comment every time an article about Microsoft is published here.

This article describes something that is a genuinely interesting development, and normally you'd expect some of the more knowledgeable commenters to chip in with additional information or informed critique. But, because it's something that Microsoft has developed, the comments have immediately been hijacked by the crack band of fucking idiots who, labouring under the misapprehension that their thousand or so prior posts on the subject were insufficient to convince the world that they don't like Microsoft, feel it necessary to enter the breach, pausing only to switch off their forebrain on the way...

The Register - Independent news and views for the tech community. Part of Situation Publishing