Qualcomm is shipping next chip it'll perhaps get sued for: ARM server processor Centriq 2400

Anonymous Coward

RiscOS

Man, that CPU is sure going to run RiscOS sweetly! No more lag when I fire up Otter Browser (NetSurf is only 32-bit IIRC) ;-)

Anonymous Coward

Re: RiscOS

Not even a RiscOS guy but I clicked through to the comments thinking exactly the same thing. Beaten to it.

Anonymous Coward

Re: RiscOS

I don't think I've ever been more efficient, in terms of how quickly I coded and produced real results, than I was back in the day with my expanded BBC Micro Model B, followed up by my Archimedes running RiscOS. Odd, considering how long things took to load, looking back.

I put it down to those excellent classic assembly/architecture System User Guide manuals.

A switch to Windows became a life of faffing and clunky updates, with most of my time seemingly spent firefighting problems. For something that was sold as increasing productivity, it's sure not looking that way of late for Windows.

Windows 10, 'best version ever'*, does seem to be finally settling into place, but then MS have been saying that for the last 20 years. *Microsoft's interpretation, not mine.

Silver badge

Re: RiscOS

Yes, RISCOS sometimes feels like a breath of fresh air when I use it. I just wish the really good productivity stuff wasn't so expensive.

Silver badge
Unhappy

RISC OS not RiscOS

Unfortunately the chip has no 32-bit mode, so it won't run RISC OS, which still contains a vast amount of 32-bit ARM assembler. There are also a lot of more important issues needing development time than a port to ARMv8, so I don't see it happening anytime soon.

It could run on the Centriq 2400 via emulation though, leaving 47 cores available to the host OS.
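For what it's worth, whether a given ARMv8 machine still implements the 32-bit (AArch32) execution state can be checked from Linux. A minimal sketch, assuming util-linux's lscpu is installed and reports its usual "CPU op-mode(s)" field:

```python
# Minimal sketch: does this Linux/arm64 box report 32-bit (AArch32) support?
# Native RISC OS code would need that; the Centriq reportedly lacks it.
# Assumes util-linux's `lscpu` is available on PATH.
import subprocess

def supports_aarch32() -> bool:
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("CPU op-mode(s)"):
            return "32-bit" in line
    return False  # field missing: assume 64-bit only

print("AArch32 supported:", supports_aarch32())
```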

Silver badge

Ringbus

So it is a ring-bus chip... not for heavy loads, but rather for plenty of light loads.

Silver badge

Comparison with Thunder-X and X-Gene would be useful.

Please can I have one with perhaps just 8 or 16 cores?

Silver badge

+1 from here!

I would really like alternatives to 64-bit x86 that were easy to buy and supported by more than one OS (even though it would probably just be Linux for the projects I have in mind).

Competition is good! Much more so if the alternative is not Oracle!

Roo
Silver badge

There really is no solid basis for comparison at first glance. The pin bandwidth seems to be in a different league, and the "ring bus" does look quite different to what Intel were punting in Xeons; it looks a lot closer to a contemporary datacentre chip than Thunder-X and X-Gene do.

Silver badge

And also AMD's abandoned ARM chips.

A significant factor is the fab process; these are 10 nm, while the AMD chips and X-Gene were on 40 nm. That's a huge difference. Not sure about Thunder-X.

Roo
Silver badge
Windows

I tend not to read as much into geometry these days, although in this case it does look like it's made a difference in the sheer amount of cache on the chip - which is a good thing. It also shaves a cycle off here and there compared to the competition in cache/memory latency and branching. The instruction issues per cycle look well balanced, and they've made an interesting choice in pipeline lengths as well - superficially it looks like they've put a lot of effort into minimizing latency. Can't wait to see some SPEC and SPECrate results - I'm not expecting top marks, but I reckon the Centriq has a fair chance of achieving respectable SPEC per cubic metre (and watt) figures, which would be exciting. :)
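Since "SPEC per cubic metre (and watt)" is just throughput divided by a resource budget, here is a minimal sketch of the arithmetic. Every number is a hypothetical placeholder, not a measured Centriq or Xeon result:

```python
# SPEC throughput per watt and per cubic metre of rack space.
# All inputs are made-up placeholders for illustration only.
def density_metrics(spec_rate: float, watts: float, volume_m3: float) -> dict:
    return {
        "spec_per_watt": spec_rate / watts,
        "spec_per_m3": spec_rate / volume_m3,
    }

# e.g. a 1U box roughly 44.5 mm x 483 mm x 600 mm deep is about 0.013 m^3
print(density_metrics(spec_rate=1000.0, watts=120.0, volume_m3=0.013))
```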

Anonymous Coward

Bling!

Those prices! I hadn't realised how much they cost! Wow!

Hey, have any of these companies tried for celebrity associations? One or more of these chips as chain drops around a notable throat ought to bring favorable notice. Maybe Qualcomm for Swift, AMD for Hilton, Intel for Messi?


"suitably fat caches"? maybe not quite

The slide shows the Intel parts with 1.375 MB of L3 per core, while the Qualcomm ones have 1.25 MB per core. Advantage to Intel.

https://regmedia.co.uk/2017/11/08/qualcomm_skus_sm.jpg?x=648&y=345&infer_y=1
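The per-core figures are easy to sanity-check. A quick back-of-envelope calculation, assuming the Centriq 2460's widely reported 60 MB of shared L3 across 48 cores:

```python
# Back-of-envelope L3-per-core check. The 60 MB / 48-core Centriq 2460
# figures are assumed from public spec sheets; Skylake-SP's 1.375 MB/core
# is the ratio quoted on the slide.
centriq_l3_mb, centriq_cores = 60, 48
print(f"Centriq 2460: {centriq_l3_mb / centriq_cores:.3f} MB L3 per core")  # 1.250
print("Skylake-SP:   1.375 MB L3 per core")
```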


Re: "suitably fat caches"? maybe not quite

Huh? Your argument does not make sense.

64 cores in a single package vs four 16-core or three 24-core parts... apples vs oranges. When Intel comes up with a 64-core package, maybe; but L3 is a shared resource, so the comparison doesn't make sense.

Bronze badge

Test Drive ?

Qualcomm should offer test-drive servers to developers. I guess this would be a great marketing tool, along with an opportunity for developers and technical managers to benchmark the CPUs.

Otherwise, the PR talk will be forgotten by 1600 GMT tomorrow.

Silver badge

One qualifier

With respect to CloudFlare, Prince did have to qualify that they use Go quite a bit, and Go is only in beta for ARM. The rest of their requirements compile well enough; optimization in the compilers is a work in progress, but acceptable enough.

Anonymous Coward

I think I'd be fine with some server-grade ARM as a basis for a Linux/whatever desktop. The multitude of cores, provided they have tolerable IPC characteristics (interprocess comms; I'm not quite as concerned with instruction rates with that many cores), would likely jibe well with modern browsers' multi-process designs.

Anonymous Coward

Where are the SPEC numbers?

In order to take these beasties seriously, I'll need to see the SPEC numbers. For the workloads I care about, that would be SPECint and SPECfp.

Absence of those tells me everything I need to know. YMMV.

Anonymous Coward

Re: Where are the SPEC numbers?

It's a while since I looked in detail at this stuff, but back in the late 20th century, SPECint and SPECfp weren't really most sensible people's benchmark of choice any more, surely, given that workloads had changed since SPEC came in, address spaces had grown, and other similar technotrivia had changed.

Depending on what you want to do with these things, maybe CoreMark would have been a good starting point in the early 21st century, or maybe an application-specific benchmark?

Or have things reverted to 20C-style benchmarks since I left the building?

[Alternatively maybe it's too early in the morning for me to be posting]

Anonymous Coward

Re: Where are the SPEC numbers?

The trick with SPEC numbers is to identify the sub-tests that represent one's application.

In other words, benchmark your application on systems that you can get your hands on, and then pick out the SPECint/SPECfp sub-tests whose published results on those systems show the same relative performance.

That subset of the test suite can then be used to get a rough idea of how that application might perform on other systems.
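A minimal sketch of that selection trick: rank the sub-tests by how well their published scores track your own measurements across the boxes you benchmarked. All machine names and scores below are hypothetical; real published SPEC results would stand in for the `spec` table:

```python
# Find the SPEC sub-tests whose published scores best track your own
# application's measured performance across machines you can test.
# All names and numbers here are hypothetical placeholders.
from statistics import correlation  # Python 3.10+

# Your app's throughput on three boxes you benchmarked yourself.
app_score = {"box_a": 100.0, "box_b": 140.0, "box_c": 180.0}

# Published per-sub-test SPEC scores for the same boxes (made-up numbers).
spec = {
    "401.bzip2":      {"box_a": 20.0, "box_b": 29.0, "box_c": 36.0},
    "429.mcf":        {"box_a": 15.0, "box_b": 16.0, "box_c": 14.0},
    "462.libquantum": {"box_a": 30.0, "box_b": 41.0, "box_c": 55.0},
}

boxes = sorted(app_score)
app = [app_score[b] for b in boxes]
ranked = sorted(
    ((correlation(app, [scores[b] for b in boxes]), name)
     for name, scores in spec.items()),
    reverse=True,
)
for r, name in ranked:
    print(f"{name:16s} r = {r:+.2f}")
# Sub-tests with r close to +1 are the ones worth watching in other
# systems' published results.
```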

Anonymous Coward

Re: Where are the SPEC numbers?

"It's a while since I looked in detail at this stuff, but back in the late 20th century, SPECint and SPECfp weren't really most sensible people's benchmark of choice any more, surely, given that workloads had changed since SPEC came in, address spaces had grown, and other similar technotrivia had changed."

...

"[Alternatively maybe it's too early in the morning for me to be posting]"

It is.

SPEC CPU is very much relevant for some application domains - including those I am interested in. I find the 2006 version of SPECfp correlates very well with our typical workloads [there are too few submissions for the 2017 version of the benchmark to say anything yet, but I am looking forward to seeing more, not least because it has an OpenMP component].

If a vendor does not submit a SPEC CPU score, this tells me that either:

a. Their hardware is really not particularly suited for my workloads

-or-

b. They do not care about my workloads and about having me as a customer.

Either way, I will most likely look elsewhere.

Naturally, your workloads may be different, and SPEC CPU may not be relevant for you. YMMV, you know?

Silver badge

Poor Chipzilla is in for a very interesting time now... what with Intel chippery getting pwned any way you like...

It will not be surprising if Intel chip sales fall while other vendors' sales start to rise.

Silver badge

Where are nVIDIA on this front?

Actually, FWIW, I wonder why Team Green aren't exploring this market with something like their Pascal (Tegra) chips. This almost feels like something they were aiming for before being shot down by both Intel and Team Red (AMD/ATi).

Silver badge

Re: Where are nVIDIA on this front?

They are pursuing a different path - massive parallelism with thousands of simple but fully programmable cores and their own architecture. You can see some of this at https://www.youtube.com/watch?v=86seb-iZCnI


"Certainly Google, which buys chips by the boatload, is seemingly eager to deploy anything-but-Intel in its warehouses of computers, which should at least leave Chipzilla a little worried."

Seemingly is the key word here. With an estimated 80 billion in the bank, we can probably assume Google can afford to deploy whatever it likes. There are non-x86 server-class chips available if they really wanted non-x86, and AMD would have offered a non-Intel option.

What Google want is a credible(ish) alternative to bludgeon better prices out of Intel, but I see little evidence they are actually making moves to jump ship. Unlike Apple, who clearly ARE developing an alternative option.

Pint

Raspberry Pi 5?

> Other folks who have got their hands on the hardware, we're told, include: Alibaba, LinkedIn, Cloudflare,...

.... and the Raspberry Pi Foundation?

btw - I've checked: at 20 mm x 20 mm the die would fit quite nicely, but I'm a bit worried about the 120 W heat dissipation.

And USB 3 at last!

Err, sorry, just daydreaming ;-)

Anonymous Coward

Re: Raspberry Pi 5?

And a $2K price tag...


Benchmarks

I was reading this Cloudflare blog article yesterday; it has some interesting benchmarks: https://blog.cloudflare.com/arm-takes-wing/

Silver badge
Paris Hilton

A power draw of up to 120 watts

Interesting.

Will the main advantage over Intel be the price?

What will Qualcomm's royalty/licensing ploy be? They love to charge a royalty on every chip they design and sell (does ANYONE else do this?). They like to charge a SECOND royalty based on the value of the product the chip ends up in (does ANYONE else do this?).

They buy and shutter innovative companies purely to get the IP, and then add it any way possible to a new product (Flarion 4G into LTE 4G). They lobby to get their IP into standards (the dumbing down of 3G before first release to use CDMA, of which Qualcomm is the major IP holder).

The chip-maker equivalent of Oracle or zombie SCO?

Yes, nice technology; shame about how you license the IP.

Icon, because I'm not sure why I'd want a 120 W ARM CPU.

Anonymous Coward

Re: A power draw of up to 120 watts

"I'm not sure why I'd want a 120W ARM CPU?"

To use in an internet-connected kettle, of course.

Roo
Silver badge
Windows

Re: A power draw of up to 120 watts

"Yes, nice technology, shame about how you licence IP."

Agreed. :(

"Icon, because I'm not sure why I'd want a 120W ARM CPU?"

I am hoping it's because it will pack more densely into racks and deliver good enough aggregate throughput in production to let you squeeze more bang for buck out of your data centres. IMO Intel have dropped the ball on verifying and testing their designs - the errata sheets have been horrific for a few generations of Xeons now. There would be some advantage to having less buggy chips - firmware/hardware bugs and workarounds get tiresome and very costly at scale... :)

Silver badge

Re: A power draw of up to 120 watts

What makes you think Qualcomm will be better than Intel with regards to buggy chips? If Intel chips were so buggy there would be a lot of people complaining, and there doesn't seem to be (outside of some vocal people complaining about that AMT stuff). I certainly haven't been alarmed by any recent Intel bugs, and I certainly don't think I am in the minority (though I keep my HP servers fairly up to date with ProLiant Service Packs, so they get whatever HP may put in there to fix issues).

The Intel f00f bug was a bad one, as was the FDIV bug.

When it comes to existing Qualcomm CPUs, one of their biggest markets I'd assume is phones/tablets, and there seem to be at least as many complaints about Qualcomm in that space. It looks like several root exploits against Qualcomm CPUs were released last year.

AMD Epyc sounds interesting, though it seems to have quite limited availability from OEMs at the moment. I remember being very excited about Opteron 6000 when it came out, and I still have a bunch in production even today (HP DL385 G7s).

Silver badge

Re: A power draw of up to 120 watts

Oh, Intel are terrible.

You really don't want to use their chips if you can avoid them.

Some are terrible. It's impossible to talk to someone for proper technical support - the teams are broken up before the silicon even gets used.

Then the best you get is "oh, we'll fix it in the next family" - which will be pin-incompatible.

Silver badge

Re: A power draw of up to 120 watts

@Nate, have you seen the recent Intel errata?

Roo
Silver badge
Windows

Re: A power draw of up to 120 watts

"What makes you think Qualcomm will be better than Intel with regards to buggy chips ?"

I think they have a better chance because the target ISA is so much simpler - better defined, peer-reviewed, etc. Qualcomm could still screw it up, of course, but the problem domain *should* be a lot smaller than verifying an x86-64 design - so they have a better chance of making a good fist of it.

I don't think it's actually possible to produce a formal model of the Intel ISA, and I feel safe throwing that out there because I very much doubt anyone will ever produce a complete formal model of it and prove me wrong. :)

"The Intel f00f bug was a bad one, as was the FDIV bug."

The current errata are somewhat worse in my view, but don't take my word for it - take a look yourself and make your own call. Intel do publish them.

"If Intel chips were so buggy there would be a lot of people complaining"

I'm complaining - but I clearly don't qualify as a lot of people. Few people look at the errata; when a box is a bit flaky, folks tend to (naively) assume the CPU is OK and look elsewhere, at things like firmware, memory, PSUs, or OS bugs. They might even find problems in those areas too - but for whatever reason few people choose to look at the CPU errata. My guess is that many simply don't understand the language and concepts in the errata sheets and so ignore them outright.

I am no Qualcomm fanboy - I would rather someone else punted this gear. ;)

