Amazon's homegrown 2.3GHz 64-bit Graviton processor was very nearly an AMD Arm CPU

Amazon Web Services' customized Graviton processor, revealed this week, was very nearly an Arm-based chip from AMD, The Register has learned. Up until 2015, Amazon and AMD were working together on a 64-bit Arm server-grade processor to deploy in the internet titan's data centers. However, the project fell apart when, according …

  1. Chewi

    32-bit?

    Few are likely to want it but anyone know if these can do 32-bit? I gather the Cortex-A72 usually can but I'm unsure whether it's true in this context. Cavium ThunderX is (usually?) 64-bit only, which means no one else offers 32-bit ARM in the cloud, unless it's lower spec 32-bit-only bare metal.
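
    (For anyone who wants to check from inside an instance: a rough Python sketch, assuming a Linux guest with util-linux's lscpu installed. "32-bit, 64-bit" in the op-mode line would mean AArch32 is exposed to userspace; "64-bit" alone means AArch64 only.)

        import subprocess

        # Ask lscpu which execution modes the cores advertise to the OS.
        out = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith("CPU op-mode(s)"):
                print(line.strip())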

  2. john.jones.name

    how clueless

    I bet the team at AWS / Annapurna love this:

    "It does poorly benchmarking our website fully deployed on it: Nginx + PHP + MediaWiki, and everything else involved. This is your 'real world' test. All 16 cores can't match even 5 cores of our Xeon E5-2697 v4."

    complete and utter garbage...

    how many optimizations does the compiler emit for this Arm chip vs the number it emits for a Xeon? NONE

    Same with the Geekbench results, it's all garbage... until AWS / Annapurna actually get GCC to emit and optimise for basic things like AES they don't have a chance, and you can't get that into the mainline tree until you want to announce the chip. So let's see the code...
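
    (Whether the compiler uses them is one thing; whether the silicon even advertises the ARMv8 crypto extensions is easy to check. A rough sketch, assuming a Linux aarch64 guest:)

        # Look for the crypto flags in the Features line of /proc/cpuinfo;
        # if they're missing, no amount of compiler tuning will help.
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("features"):
                    flags = line.split(":", 1)[1].split()
                    print({flag: flag in flags for flag in ("aes", "pmull", "sha1", "sha2")})
                    break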

    So the question is how much it has been optimised for floating point, and what the IO bandwidth is like.

    IF, and it's a BIG IF, they have decent IO speeds that can compete with Intel's Xeons, THEN it will be more than a negotiating tactic with Intel.

    john jones

    1. HmmmYes

      Re: how clueless

      If I were a shaven-headed interweb billionaire, sitting in my old volcano HQ, thinking out a CPU for my cloud farm, then I would be looking at the following:

      - Processing/Wattage.

      - How well the chipset/ISA plays with VMs.

      - Big caches.

      - IO performance.

      Compiler performance is a no-brainer. I can hire LLVMers to tweak that.

      1. JLV

        Re: how clueless

        +1

        and kickstarting interest by having real-life silicon, priced cheaply.

        chicken/egg. not rocket science, is it, @john.jones?

        wonder what the performance/vm pricing tradeoff looks like, given this current benchmark?

        and, how much is this going to increase ARM compatibility/interest in general across the commodity FOSS stuff like nginx, postgres, redis?

        not good for Intel, in the long term. esp if it coincides with macOS moving to ARM as well.

        pity the guys @ Intel in charge of 10nm. they must be getting tons of pressure internally.

  3. Old Used Programmer

    Interesting comparison...

    So it's twice as fast as a Pi3B+... Let's see: about 1.6x the clock speed (2.3GHz vs. 1.4GHz). Four times the core count (16 vs. 4). Later design (Cortex-A72 vs. Cortex-A53). And you probably can't buy one for $70 (twice the cost of a Pi3B+), let alone the SoC and all the support circuitry.

    If I had designed this chip, I'd cringe at that comparison.

    1. Peter Gathercole Silver badge

      Re: Interesting comparison...

      SciMark is inherently a single-threaded benchmark, so it really measures single-core performance, which would make sense given 2x performance from a ~1.6x clock speed and an architectural bump.

      Once you factor in the four times core count, it will be much more useful in a datacentre environment with real-world workloads.

      It's interesting that it's a non-NUMA design. This normally causes memory bus contention issues with multi-core designs, so I wonder what they've done to allow 16 cores to access the same memory without blocking.

      1. ToddRundgrensUtopia

        Re: Interesting comparison...

        Peter Gathercole

        I had assumed they meant it isn't capable of working in a dual-CPU configuration, i.e. no coherent CPU-to-CPU interface. It must be NUMA on chip as they talk about 4 x 4-core clusters, yes?

        1. Peter Gathercole Silver badge

          Re: Interesting comparison... @ToddRundgrensUtopia

          Throwing terms like NUMA around in multicore systems without sufficient qualification can be completely misleading (and this is separate from the abomination that is the term "non-Non-Uniform-Memory-Access" used here).

          NUMA is normally used not at a chiplet level but at a complete system level. I can certainly see that each quad-core 'cluster' chiplet could have Uniform Memory Access (see what I did there) to its local memory for each of its 4 cores, but at a system level (or even a chip level) this will almost certainly be a NUMA architecture.

          I spent some time working with IBM Power 575 and 775 systems, and know that as processor count increases, coherent cache and memory access becomes exponentially more difficult.
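
          (Easy enough to see what the instance actually presents to the OS - a rough sketch, assuming a Linux guest with the standard sysfs layout; a single node0 spanning all 16 CPUs is what the "non-NUMA" presentation would look like.)

              import glob

              # List the NUMA nodes the kernel exposes and the CPUs in each.
              for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
                  with open(node + "/cpulist") as f:
                      print(node.rsplit("/", 1)[-1], "cpus:", f.read().strip())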

    2. druck Silver badge

      Re: Interesting comparison...

      They are talking about single core performance.

    3. Paul J Turner

      Re: Interesting comparison...

      Even allowing for the single-threaded tests, I bet the Graviton wasn't using medieval LPDDR2 RAM like the Pi. As you say, a very poor showing.

  4. John Smith 19 Gold badge
    WTF?

    Let's recall the other definition of MIPS

    Misleading Information Provided by Suppliers.

    Twice as fast as a Raspberry Pi? Clocked at 2GHz?

    Underwhelming.

    Which either means this is

    a) A bargaining tactic to get Intel to lower their prices to Amazon before they dump it.

    b) Version 0.9 to get it on the cloud and accessible to real users before they start the serious tweaking.

    I'd like to think Amazon are serious about supporting ARM. But designing chips is not their core business, unless it gives them a significant advantage (like some companies running graphics SW on their custom processors back in the day).

    1. Anonymous Coward
      Anonymous Coward

      Re: Let's recall the other definition of MIPS

      > But designing chips is not their core business, unless it gives them a significant advantage (like some companies running graphics SW on their custom processors back in the day).

      Advantage is not measured in raw MIPS alone; power consumption matters too. It came to light a few years ago that, even for mobile phone operators, the power drawn by all their base stations was a large enough cost that they had to cut it.

      Google has designed TensorFlow (TPU) chips, ISP chips (for the Pixel phones) and networking chips (Lanai), and who knows what else they have hidden away. When you are as large as a FAANG you start looking into all forms of cost control, simply because the vast size of the operation provides equally large potential for cost savings.

    2. ToddRundgrensUtopia

      Re: Let's recall the other definition of MIPS

      I'd like to think Amazon are serious about supporting ARM. But designing chips is not their core business,

      Correct, that's why they bought a company that does design CPUs.

    3. Charlie Clark Silver badge

      Re: Let's recall the other definition of MIPS

      Twice as fast as a Raspberry Pi? Clocked at 2GHz?

      Underwhelming.

      Not really. As others have mentioned, it depends what you're doing with it and what else the silicon does. I'm sure that if Broadcom were still developing silicon they might have had a chance at the contract, because the RPi CPUs are such a known quantity. But they're not, and the options for server chips in volume are limited.

      But the workload for these chips is likely to be anything running on the Lambda service. These are low-latency, low-workload, low-power services where ARM makes more sense than x86_64.

    4. Zippy´s Sausage Factory
      Paris Hilton

      Re: Let's recall the other definition of MIPS

      I'd like to think Amazon are serious about supporting ARM. But designing chips is not their core business, unless it gives them a significant advantage

      There's a significant advantage for Amazon here, and it's not just cost control (the opportunities for savings are huge).

      In short, they have the advantage of leverage. Intel are a huge supplier. They now have to cut their own costs to the bone or face a major customer moving to a rival platform. Worse, Amazon moving to Arm would be a major signal to the marketplace that Arm has come of age and is now a significant threat to Intel.

      That's leverage that can get Intel to bend to Amazon's will, should they want it. They could, in theory, have a significant impact on the way Intel develop their processors over the next few years. Or they could go their own way and, if they chose to, make a significant chunk of people decide x64's day in the sun is over.

      My thoughts anyway. And Paris, because what do I know?

  5. werdsmith Silver badge

    From what I can see, this is more expensive than on demand T instances.

    1. Phil Endecott

      > this is more expensive than on demand T instances.

      Yes, for some reason they’ve not scaled these ARM instances down to the t micro/nano sizes.

      That may be temporary; I think the same is true of the AMD instances at present.

  6. Mikerr

    Twice as fast as a Raspberry Pi 3? Not as fast as an Asus Tinkerboard then....

  7. Spazturtle Silver badge

    Makes sense AMD gave up.

    The purpose of switching to ARM was better density. With a Xeon server you can saturate a 1Gbps Ethernet connection (or even 10Gbps) and still have processing power to spare, but Intel are stingy with PCI-e lanes, so you can't just add more and more ports. If 80% of your CPU power is going spare, why do you even need it? An ARM server can be much denser: you can fit 10 servers in a 1U space, each with its own Ethernet port.

    But now, with AMD's Zen design, they have lots of PCI-e lanes and lots of cores, so you can have 1 system running 10 VMs. This is even better than having 10 separate machines in the same space, as with VMs you can over-provision resources.

  8. Robert Sneddon

    Power consumption

    There was no mention of how the power consumption of these chips compares to, say, a glow-in-the-dark 165W Xeon. That's been a major factor in data centre siting and operations for a while now. If AWS are saving money on their power bills while still being able to deliver enough data processing capacity with these chips, then it's probably a commercial win for them. I don't think Intel are going to be losing much sleep, or many Xeon orders from server builders, over this though.

  9. EnviableOne

    This has to be a PoC; they'll then drop in A76s and knock spots off Chipzilla.

  10. James Hughes 1

    Most odd

    Those performance figures do not add up at all. I would expect an A72 at 2.3GHz to be at least 3-4 times as fast as the A53 at 1.4GHz on the Pi3B+. It should be about twice as fast just from the architecture upgrade, then add on the extra clocking.
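
    (Back-of-the-envelope for that expectation - the ~2x per-clock uplift for A72 over A53 is the assumption above, not a measured number:)

        # Expected single-core speedup = assumed IPC uplift x clock ratio.
        a53_clock_ghz = 1.4   # Pi 3B+
        a72_clock_ghz = 2.3   # Graviton
        ipc_uplift = 2.0      # assumed A72-over-A53 gain

        print(round(ipc_uplift * a72_clock_ghz / a53_clock_ghz, 1), "x expected, vs the ~2x reported")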

    As for the website comparison - what utter bollocks. That's more related to memory and networking than to CPU speed.

  11. tentimes

    Was looking good up until the bit...

    .... where we hear that all 16 cores are slower than 5 cores of a Xeon. Boom! I am an admirer of ARM but they have to do better - they have been getting an easy ride of it in the phone market for too long.

    1. Down not across

      Re: Was looking good up until the bit...

      .... where we hear that all 16 cores are slower than 5 cores of a Xeon.

      Not everything is about speed. It is quite common for cores not to be that highly utilised, for example.

    2. Anonymous Coward
      Anonymous Coward

      Re: Was looking good up until the bit...

      ARM have to do better? How do you figure that? It is the implementor's job to do better. Consider the Apple Ax series of chips: other ARM-based phone processors do not even get close, and they can hold their own against x86/x64 processors as well, the only flaw being thermal/power constraints. I would love to see a server-based Apple A12 and how it would compare to Xeons. ARM itself is not the problem.

  12. Anonymous Coward
    Anonymous Coward

    Expensive? We'll never know.

    > "AMD failed at meeting all the performance milestones Amazon set out."

    I notice that no one actually says that Annapurna managed to meet the same performance requirements. So this might have been a very expensive way of discovering that AMD weren't actually that bad[1] after all.

    [1] Apart from a salesman over-claiming on the performance they hoped to achieve.[2]

    [2] Citation required. :-)

    1. msroadkill

      Re: Expensive? We'll never know.

      "no one actually says that Annapurna managed to meet the same performance requirements." - yes I posted similarly a bit after you. I don't know when it was said, but they bought Annapurna in ~jan 2015 for ~$350m. They may have realistically realised "so that's what arm yields, & we still wanna play with it, but we will buy our own test kitchen where we have full control". Annapurna may have other revenues/prospects, which would make the price chump change vs their intel cpu spend. If it adds intel leverage, it pays for itself on that alone.

  13. Erik Backup2aws

    It is necessary to have the product at the right time; in January 2016 the market was not ready

    Your article looks strangely like this one I saw just before yours...

    https://www.linkedin.com/feed/update/urn:li:activity:6473112468829802496

  14. MJI Silver badge

    Graviton

    Hmm

    Lance?

    Does it shoot black holes?

    Hmmm

  15. Andy00ff00

    is that legal?

    Sooo.... an AWS VEEP talks down the abilities of non-Intel CPUs around the time they're about to acquire a company that makes them.

    Isn't that called "manipulation of markets" or some such?

  16. msroadkill

    I declare I am an inexpert AMD stockholder but, hopefully objectively, I agree with others here that it's just a plan B for AWS to use as an "arm twister".

    They said AMD's Arm chip didn't live up to expectations, but nor does this, it seems.

    It bears noting Annapurna cost ~$350-370m back in 2015:

    https://www.extremetech.com/computing/198140-amazon-buys-secretive-chip-maker-annapurna-labs-for-350-million

    I think they just got a better deal this way than getting much the same arm-twister from AMD. $350m isn't huge considering their presumed CPU spend - they just bought their own design team & Arm licence.

    It's just another logical step in the very slow continuum of the Arm solution finding a problem to solve at exascale.

    Instead of using AMD's workshop & team for their experiments, they got a bargain on their own setup, team & Arm licence. Their own ~test kitchen, just as the Arm instances seem to be for their clients to experiment on.

    Nobody seems to be saying Arm just took a big leap vs x86.

  17. BoomHauer

    Dipping their Toes..

    Laugh now, but consider this their first entry into the market. It's a swing and a miss overall, except for a few very narrow use cases. They'll learn and get better as AWS marches further into full vertical integration. In 10 years, look out!
