RiscOS
man that CPU is sure going to run RiscOS sweetly! no more lag when I fire up Otter Browser (Netsurf is only 32bit IIRC) ;-)
Putting aside its legal battles and lawsuits for a few hours, Qualcomm today said it is shipping the Centriq 2400 – its ARM-based server-grade processor, and the world's first 10nm data-center CPU. "This chip is now shipping for revenue," said Anand Chandrasekher, the boss of Qualcomm Datacenter Technologies, adding that the …
I don't think I've ever been more efficient in terms of how quickly I coded and produced real results than I did back in the day with my expanded BBC Micro Model B, followed by my Archimedes running RiscOS. Odd, considering how long things took to load, looking back.
I put it down to those excellent classic assembly/architecture System User Guide manuals.
A switch to Windows became a life of faffing and clunky updates, with most of my time seemingly spent firefighting problems. For something that was sold as boosting productivity, it's sure not looking that way of late for Windows.
Windows 10 'best version ever'* does seem to be finally settling into place, but then MS have been saying that for the last 20 years. *Microsoft's interpretation not mine.
Unfortunately the chip has no 32 bit mode, so won't run RISC OS, which still contains a vast amount of 32 bit ARM assembler. There are also a lot more important issues which need development time than porting to ARMv8, so I don't see it happen anytime soon.
It could run on the Centriq 2400 via emulation though, leaving 47 cores available to the host OS.
I tend not to read as much into process geometry these days, although in this case it does look like it's made a difference to the sheer amount of cache on the chip - which is a good thing. It also shaves a cycle off cache/memory and branch latencies here and there compared with the competition. The instruction issues/cycle look well balanced and they've made an interesting choice in pipeline lengths as well - superficially it looks like they've put a lot of effort into minimizing latency. Can't wait to see some SPEC & SPECrate results - I'm not expecting top marks but I reckon the Centriq has a fair chance of achieving respectable SPEC / cubic metre (and watt) figures - which would be exciting. :)
I think I'd be fine with some server-grade ARM as a basis for a Linux/whatever desktop. The multitude of cores, provided they have tolerable IPC characteristics (interprocess comms, not quite as concerned with instruction rates with that many cores), would likely jive well with modern browsers' multi-process designs.
It's a while since I looked in detail at this stuff, but back in the late 20th century, SPECint and SPECfp weren't really most sensible people's benchmark of choice any more, surely, given that workloads had changed since SPEC came in, and address spaces had grown, and other similar technotrivia changes.
Depending on what you want to do with these things, maybe CoreMark might have been a good starting point in the early 21st Century, maybe an application-specific benchmark?
Or have things reverted to 20C-style benchmarks since I left the building?
[Alternatively maybe it's too early in the morning for me to be posting]
The trick with SPEC numbers is to identify the sub-tests that represent one's application.
In other words, benchmark your application on systems that you can get your hands on, and then pick out the SPECint/SPECfp cases that have published results on those systems, displaying the same relative performance.
That subset of the test suite can then be used to get a rough idea of how that application might perform on other systems.
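The matching step above can be sketched in a few lines of Python - every machine name, sub-test name and score below is an invented placeholder, purely to illustrate picking the sub-tests whose relative performance best tracks your own application:

```python
# Hypothetical sketch: rank SPEC sub-tests by how well their published scores
# across machines you have access to correlate with your application's
# measured performance. All numbers are made up for illustration.

# Measured runtime of *your* application (lower is better, so invert to a score).
app_runtime = {"machine_a": 120.0, "machine_b": 90.0, "machine_c": 150.0}
app_score = {m: 1.0 / t for m, t in app_runtime.items()}

# Published per-sub-test SPEC scores for the same machines (invented values).
spec_scores = {
    "401.bzip2":      {"machine_a": 25.0, "machine_b": 33.0, "machine_c": 20.0},
    "429.mcf":        {"machine_a": 30.0, "machine_b": 20.0, "machine_c": 35.0},
    "462.libquantum": {"machine_a": 48.0, "machine_b": 50.0, "machine_c": 52.0},
}

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

machines = sorted(app_score)
app_vec = [app_score[m] for m in machines]

# Sub-tests whose scores move with the application, best match first.
ranked = sorted(
    spec_scores,
    key=lambda t: pearson([spec_scores[t][m] for m in machines], app_vec),
    reverse=True,
)
print(ranked[0])  # -> 401.bzip2 (the sub-test that best tracks the app here)
```

With real data you would of course want more than three machines, but the shape of the calculation is the same.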
> It's a while since I looked in detail at this stuff, but back in the late 20th century, SPECint and SPECfp weren't really most sensible people's benchmark of choice any more, surely, given that workloads had changed since SPEC came in, and address spaces had grown, and other similar technotrivia changes.
> ...
> [Alternatively maybe it's too early in the morning for me to be posting]
It is.
SPEC CPU is very much relevant for some application domains - including those I am interested in. I find the 2006 version of SPECfp to correlate very well with our typical workloads [there are too few submissions for the 2017 version of the benchmark to say anything yet - but I am looking forward to seeing more, not least because it has an OpenMP component].
If a vendor does not submit a SPEC CPU score, this tells me that either:
a. Their hardware is really not particularly suited for my workloads
-or-
b. They do not care about my workloads and about having me as a customer.
Either way, I will most likely look elsewhere.
Naturally, your workloads may be different, and SPEC CPU may not be relevant for you. YMMV, you know?
Poor Chipzilla is in for a very interesting time now... what with Intel chippery getting pwned any way you like...
It will not be surprising if Intel chipset sales fall while other chipset sales start to rise.
"Certainly Google, which buys chips by the boatload, is seemingly eager to deploy anything-but-Intel in its warehouses of computers, which should at least leave Chipzilla a little worried."
Seemingly is the key word here. With an estimated 80 billion in the bank, we can probably assume Google can afford to deploy whatever it likes. There are non-x86 server-class chips available if they really wanted non-x86, and AMD would have offered a non-Intel option.
What Google want is a credible(ish) alternative to bludgeon better prices out of Intel, but I see little evidence they are actually making moves to jump ship. Unlike Apple who clearly ARE developing an alternative option.
> Other folks who have got their hands on the hardware, we're told, include: Alibaba, LinkedIn, Cloudflare,...
.... and the Raspberry Pi Foundation?
btw - I've checked: At 20mm × 20mm the die would fit quite nicely, but I'm a bit worried about the 120 W heat dissipation.
And USB 3 at last!
Err, sorry, just daydreaming ;-)
I was reading this Cloudflare blog article yesterday, some interesting benchmarks: https://blog.cloudflare.com/arm-takes-wing/
Interesting.
Will the main advantage over Intel be the price?
What will Qualcomm's royalty/licensing ploy be? They love to charge a royalty on every chip they design and sell (does ANYONE else do this?). They like to charge a SECOND royalty based on value of product the chip is in (does ANYONE else do this?).
They buy and shutter innovative companies purely to get the IP and then add that any way possible to a new product (Flarion 4G to LTE 4G). They lobby to get their IP part of standards (Dumbing down of 3G before first release to use CDMA, of which Qualcomm is a major IP holder).
The chip maker equivalent of Oracle or Zombie SCO?
Yes, nice technology, shame about how you licence IP.
Icon, because I'm not sure why I'd want a 120W ARM CPU?
"Yes, nice technology, shame about how you licence IP."
Agreed. :(
"Icon, because I'm not sure why I'd want a 120W ARM CPU?"
I am hoping it's because it will pack more densely into racks, and deliver good enough aggregate throughput in production to allow you to squeeze more bang for buck out of your data centres. IMO Intel have dropped the ball on verifying and testing their designs - the errata sheets have been horrific for a few generations of Xeons now. There would be some advantage to having less buggy chips - firmware/hardware bugs & work-arounds get tiresome and very costly at scale... :)
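The rack-density argument can be sketched with a toy calculation - the rack power budget, TDPs and per-socket relative throughput below are all invented placeholders, not measured figures:

```python
# Toy "bang per rack" comparison. Every number here is an assumption for
# illustration: real deployments are constrained by far more than CPU TDP
# (cooling, memory, NICs, PSU efficiency, ...).
rack_power_budget_w = 10_000  # hypothetical usable power per rack

socket_tdp_w = {"centriq_2400": 120, "typical_xeon": 165}
relative_throughput = {"centriq_2400": 0.9, "typical_xeon": 1.0}  # invented

for chip, tdp in socket_tdp_w.items():
    sockets = rack_power_budget_w // tdp          # power-limited socket count
    aggregate = sockets * relative_throughput[chip]
    print(f"{chip}: {sockets} sockets, {aggregate:.1f} units of throughput")
```

Under these made-up numbers the lower-TDP part wins on aggregate throughput per rack even with lower per-socket performance - which is exactly the bet being described above.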
What makes you think Qualcomm will be better than Intel with regards to buggy chips? If Intel chips were so buggy there would be a lot of people complaining, and there doesn't seem to be (outside of some vocal people complaining about that AMT stuff). I certainly haven't been alarmed by any recent Intel bugs, and I certainly don't think I am in the minority (though I keep my HP servers fairly up to date with Proliant Service Packs so they get whatever HP may put in there to fix issues).
The Intel f00f bug was a bad one, as was the FDIV bug.
When it comes to existing Qualcomm CPUs, one of their biggest markets I'd assume is phones/tablets, and there seems to be at least as many complaints about Qualcomm in that space. Looks like several root exploits against qualcomm CPUs released last year.
AMD Epyc sounds interesting though it seems to have quite limited availability at the moment from OEMs. I remember being very excited about Opteron 6000 when it came out and still have a bunch in production even today (HP DL385 G7s).
Oh, Intel are terrible.
You really don't want to use their chips if you can avoid them.
Some are terrible. It's impossible to talk to someone for proper technical support - the teams are broken up before the silicon even gets used.
Then the best you get is "Oh, we'll fix it in the next family" - which will be pin-incompatible.
"What makes you think Qualcomm will be better than Intel with regards to buggy chips ?"
I think they have a better chance because the target ISA is so much simpler - better defined, peer-reviewed etc. Qualcomm could still screw it up of course, but the problem domain *should* be a lot smaller than verifying an x86-64 design - so they have a better chance of making a good fist of it.
I don't think it's actually possible to produce a formal model of the Intel ISA, and I feel safe throwing that out there because I very much doubt anyone will ever produce a complete formal model of it and prove me wrong. :)
"The Intel f00f bug was a bad one, as was the FDIV bug."
The current errata are somewhat worse in my view, but don't take my word for it, you should take a look yourself and make your own call - Intel do publish them.
"If Intel chips were so buggy there would be a lot of people complaining"
I'm complaining - but I clearly don't qualify as a lot of people. Few people look at the errata, when a box is a bit flakey folks tend to (naively) assume the CPU is OK, and look elsewhere at stuff like firmware, memory, PSUs, or OS bugs. They might even find problems in those areas too - but for whatever reason few people choose to look at the CPU errata - my guess is that many simply don't understand the language & concepts in the errata sheets and so ignore them outright.
I am no Qualcomm fanboy - I would rather someone else punted this gear. ;)