EU plans for domestic exascale supercomputer chips: A RISC-y business

The European Union's consortium to develop European microprocessors for future supercomputers has taken a few more steps towards its goal of delivering a locally made exascale chip by 2025. What is the EPI? The European Processor Initiative (EPI) has a €120m framework funding agreement and involves 23 partner organisations …

  1. nuked
    Terminator

    Is that when we don't need the humans anymore?

  2. HmmmYes

    It's software that you need.

    Hardware only exists to run software.

    In fact, I welcome the day when software generates the hardware as some very last stage of the compiler process.

    1. herman

      Ah, well, that is what 3D printers are for...

  3. Anonymous Coward
    Anonymous Coward

    We can watch it from the UK

    Thanks to Brexit.

    1. HmmmYes

      Re: We can watch it from the UK

      You'll be watching it from whatever country you reside in.

      Billions will pour into connected companies - big ones - and university research.

      15+ years later, assuming anything is created, it'll be something like a 2GHz 6502.

      Yes I know ARM32 is a tarted up 6502.

      If the EU want the hardware then they just need to pay the Taiwanese fabs to create it.

      If they want EU based fabs then they need to copy a Taiwanese fab.

      They are pissing money away because they don't know that the value-add is the software.

      1. Tomato42

        Re: We can watch it from the UK

        yes, Europeans are really bad at big scientific projects, they never deliver, just look at the Large Hadron Collider, I think you can read about it on their fledgling technology called "The Web" (talk about ridiculous naming) that came from the same institution /s

        > 15+ years later, assuming anything is created, it'll be something like a 2GHz 6502.

        nobody is saying they have to start from scratch

        > If the EU want the hardware then they just need to pay the Taiwanese fabs to create it.

        oh, so the foundries in Dresden closed recently? when was that?

        > They are pissing money away because they don't know that the value-add is the software.

        WTF are you talking about? All of HPC runs Linux. Just because you don't hear about Googles and GitHubs from the EU doesn't mean the EU does not make a lot of software. There's a difference between technology aimed at consumers and technology aimed at corporations.

        1. HmmmYes

          Re: We can watch it from the UK

          Dresden is owned by GlobalFoundries, a US-resident company fully owned by a UAE oil slush fund.

          'The Web' is nothing more than a dumbing-down of SGML and a crappy protocol (HTTP) to deliver it.

          HPC clusters run Linux because you need an OS and network stack to coordinate the number crunching.

          The software and hardware requirements for consumers, corps, and research bodies are not the same, although all benefit from economies of scale.

          For its size, the EU (even when you throw in the UK) does not make a lot of software. It really does not.

          There's SAP .......

        2. maffski

          Re: We can watch it from the UK

          'yes, Europeans are really bad at big scientific projects, they never deliver, just look at the Large Hadron Collider'

          The Large Hadron Collider is a great example of what public funding can do. Because it's a public good. It's knowledge for the sake of knowledge. It's exactly the kind of thing private enterprise is unable to deliver.

          A really fast computer is a great example of what public funding should never do. Because it's a product people will pay to use. It's exactly the kind of thing private enterprise excels at.

          Look at the size of Intel; that alone proves public funding shouldn't be spent on computing design.

          1. Anonymous Coward
            Anonymous Coward

            Re: We can watch it from the UK

            "A really fast computer is a great example of what public funding should never do."

            And yet, the 10 fastest known computers all have "national" in their name - as in, using public funding.

            https://www.top500.org/lists/2018/06/

            I believe you're missing that a huge part of the use for those things is simulating nuclear weapons, and other such projects with very, very low direct ROI.

            1. ToddRundgrensUtopia

              Re: We can watch it from the UK

              @AC I'm sure plasma physics (weapons simulation) is a huge part of most HPC. Most in the UK would be done at AWE on their own cluster and shared memory systems. It's the same in France (can't remember the facility, but it's off the périphérique, not far from Orly). Most HPC clusters (particle physics and bioinformatics are not in my view HPC, but are instead massively parallel) are used for CFD (e.g. everything we make either holds a gas or liquid or has a gas or liquid moving over it or through it) and various computational chemistry problems, such as density functional theory.

      2. Destroy All Monsters Silver badge

        Re: We can watch it from the UK

        > Yes I know ARM32 is a tarted up 6502.

        LOLWUT

    2. Anonymous Coward
      Anonymous Coward

      Re: We can watch it from the UK

      ARM is British, so in the spirit of Galileo, the EU can go and develop their own fucking hyperscale CPU.

    3. Daniel von Asmuth

      Re: We can watch it from the UK

      So long Britannia, farewell ARM, goodbye INMOS.

      But the EUil Empire still has Signetics (part of NXP), with their old 2650 microprocessor, the NE555 chip and WOM chips (write-only memory). After 60 years of promotion by the Common Market, the European IT industry still cannot compete, not even on its home turf.

  4. ExampleOne

    Given the licensing restrictions around ARM, it strikes me as a strange choice for a major investment on this scale: OpenPOWER would seem a more obvious fit, with current licensing policies.

    1. David Lester

      Since ARM are involved -- as partners -- in many of these EU HPC initiatives, I think the licensing costs are the last thing to worry about.

      Besides, an Exascale machine will need at least 1MW of power; that comes to about €1 million per year in running costs. The pennies per processor licensing costs will be dwarfed by the running costs.
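
      A quick back-of-the-envelope check of that €1 million figure (the ~€0.11/kWh tariff is an assumption for illustration, not something stated here):

```python
# Rough sanity check of the "1MW ~ EUR 1M per year" claim above.
# The electricity price is an assumed industrial tariff, not a quoted figure.
power_mw = 1.0                  # continuous draw in megawatts
hours_per_year = 24 * 365       # 8,760 hours
price_eur_per_kwh = 0.11        # assumption: ~EUR 0.11 per kWh

energy_kwh = power_mw * 1_000 * hours_per_year       # ~8,760,000 kWh
annual_cost_eur = energy_kwh * price_eur_per_kwh     # ~EUR 960,000

print(f"{energy_kwh:,.0f} kWh/year -> roughly EUR {annual_cost_eur:,.0f}/year")
```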

      1. HPCJohn

        David, I believe the target for a realistic Exascale machine is 30MW - not as in a target to get above 30, but to get below it.

        Yes indeed - having worked for several HPC integrators, I know the running costs are just as important as the hardware. So any innovative schemes to improve the PUE, or indeed to reduce the power per transistor switch cycle, are of interest.

        1. David Lester

          You're probably right, though I think I heard Valero suggesting a power budget of just 10MW.

          Me?

          I have the following constraint from the Head of Department: "You have a machine room with a 100kW supply".

          "Can I have any more?"

          "No. Any more and the CS Department will need a new electricity sub-station!"

          Luckily SpiNNaker self-powers down when no one is using it, so I don't think the electricity bills will be noticed for a while.

        2. MacroRodent

          30MW

          Sounds like it needs its own nuclear power plant to run. Has this been factored into the costs?

      2. ExampleOne

        "The pennies per processor licensing costs will be dwarfed by the running costs."

        The problem is less the cost of the license and more the whole "who owns what?" aspect of working with ARM licensing. AIUI a clean-room implementation of the ARM ISA is likely to end up with you having problems with ARM; OpenPOWER explicitly allows anyone to use the ISA now.

        1. David Lester

          If you want an open source ARM, there's always the Cortex-M0? But why would you? Either the core you've selected is up to the job (in which case licensing is easy), or it isn't (in which case you add an accelerator, which probably doesn't impact the ARM IP, and so is easy, too).

          The M0 is not my first choice core, but it does have a very low power-per-MIPS figure.

          Now, the Cortex-M4F: that's what we've selected for our 10,000,000-core machine, with a few accelerators for the key features we need. I'm quite pleased with my five-cycle exponential function, and then there's Marsaglia's JKISS-64 in just one cycle (a pseudo-random number generator for Monte Carlo simulations).

          1. ToddRundgrensUtopia

            @David Lester, well said. You won't get to exascale with ARM or Intel-based CPUs; you need large numbers of GPGPUs, and the last time I looked they were made by Nvidia and AMD!

  5. Anonymous Coward
    Anonymous Coward

    What are they building in there?

    What's it for?

  6. Anonymous Coward
    Anonymous Coward

    Why is Europe fixated on low power?

    This project will be a disaster, largely due to Valero's misunderstanding of how to get to the Exascale. The EC have ploughed millions into the Mont Blanc projects, focussed on the use of low-power SoCs based around the Arm core. When you do the sums (sketched below) you realise that to get to the Exascale you end up with far too little grunt per core and therefore need far too much parallelism (hundreds of millions if not billions of threads). Valero continues pushing this approach when every chip manufacturer knows he's wrong. Only the EC continue to listen because nobody dares challenge him.

    Just look at Cavium's ThunderX2 processor - multicore Arm (32 cores) - reasonable per-core performance, but a 180W TDP in the top-end model. This is much more than a Xeon of equivalent performance. If you look at the Fujitsu Arm processor announced for the Post-K system in Japan, it is very similar. None of these designs are low power because at the end of the day a transistor is a transistor.

    The EC should think very carefully about ploughing money into the EPI initiative. It doesn't seem to involve people who understand the basics of how to get to the Exascale with available silicon technology, its timelines are way off (by several years if you talk to any silicon engineer) and the funding is probably out by a factor of 2. The ThunderX2 took 5 years and almost $500 million to bring to market.
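
    To put very rough numbers on those sums (the per-core throughput figures are illustrative assumptions, not taken from this thread):

```python
# Rough illustration of the parallelism problem: how many cores does an
# exaflop machine need at a given (assumed) sustained throughput per core?
EXAFLOPS = 1e18  # 1 exaflop/s target

for label, gflops_per_core in [("low-power mobile-class core", 10),
                               ("beefy server-class core", 50)]:
    cores = EXAFLOPS / (gflops_per_core * 1e9)
    print(f"{label}: ~{cores:,.0f} cores at {gflops_per_core} GFLOPS each")

# At ~10 GFLOPS per core that is ~100,000,000 cores; with several threads
# per core you are quickly into hundreds of millions of threads.
```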

    1. HPCJohn

      Re: Why is Europe fixated on low power?

      > None of these designs are low power because at the end of the day a transistor is a transistor.

      Well said. There is work going into power consumption per instruction: 'picowatts per flop'.

      I saw one comment that the advent of GPU computing taught a generation that double-precision floating point is not needed for everything. I think there will be more effort put into choosing the appropriate precision for calculations, saving power by making the actual algorithms more power-aware.
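
      A minimal sketch of that precision trade-off, using NumPy (the array and its size are purely illustrative):

```python
import numpy as np

# The same data held in double and single precision: half the bytes to
# store and move, which is where much of the power goes on real machines.
n = 1_000_000
a64 = np.random.rand(n)            # float64 by default
a32 = a64.astype(np.float32)       # deliberately reduced precision

print(a64.nbytes, "bytes as float64")   # 8,000,000
print(a32.nbytes, "bytes as float32")   # 4,000,000

# Whether float32 (or float16) is good enough depends on the algorithm;
# making that judgement per calculation is the "power-aware" choice above.
print("max rounding difference:", np.abs(a64 - a32.astype(np.float64)).max())
```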

      1. Destroy All Monsters Silver badge

        Re: Why is Europe fixated on low power?

        Maybe it's time to throw the IEEE Floating Shit out and replace it with the Unum. A really long shot, but worth a try, eh?

        Not sure whether legit:

        "Most people don't realize that it is the data movement that is most expensive thing in a processor... It takes 100 picojoules to do a double precision (64 bit) floating point operation, but a humongous 4200 picojoules to actually move the 64 bits from DRAM to your registers. The really crazy thing is that around 60% of that power used to move the data is wasted in the processor itself, in the logic powering the hardware cache hierarchy. My startup (http://rexcomputing.com) is solving this with our new processor, and are working with John Gustafson in experimenting with unum for future generations of our chip.

        A load/store to a cores local scratchpad (Our software managed and power efficient version of a traditional L1 cache) is 1 cycle, compared to 4 cycles for an Intel processor. Add in the fact that we have 128KB of memory per scratchpad (compared to 16 to 32KB L1 D$ for Intel), you don't need to go to DRAM as much, greatly increasing performance/throughout on top of the 10x+ efficiency gain.

        ... Even in the case of a core access accessing another cores local scratchpad when they are on opposite corners of the chip, it takes only one cycle per hop on the Network on Chip... meaning for our 256 core chip, you can go all away across the chip (and access a total of 32MB of memory) in 32 cycles... Less than the ~40 cycles it takes to access L3 cache on an Intel chip.
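
        Taking the figures quoted there at face value, the arithmetic behind "data movement is the expensive part" is simple enough (a small sketch, not a claim about any particular chip):

```python
# Energy figures as quoted above: 100 pJ per 64-bit floating point
# operation vs 4200 pJ to move 64 bits from DRAM into a register.
FLOP_PJ = 100
DRAM_MOVE_PJ = 4200

ratio = DRAM_MOVE_PJ / FLOP_PJ
print(f"One DRAM word fetch costs as much energy as ~{ratio:.0f} flops")

# So unless an algorithm does roughly 42+ operations per word fetched from
# DRAM, most of its energy budget goes on data movement, not arithmetic.
```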

  7. Anonymous Coward
    IT Angle

    I'm a little skeptical about this kind of industrial policy...

    Unless the paranoia about sourcing potentially compromised/trojan'd hardware from outside your legal jurisdiction is a big issue (and these days I am not saying that it absolutely shouldn't be), I kind of think the EU is fighting the last war here. Unless you can make that "national security" argument, I think that anything the EU develops will probably be at best a "me too" product by the time you complete development and get the semiconductors to market.

    1. David Lester

      Re: I'm a little skeptical about this kind of industrial policy...

      The important thing to understand is that Supercomputing is nowadays about taking stock components and putting lots of them together.

      Historically, since custom Mainframes died, that's always meant using Intel processors.

      But the bread and butter for Intel are the chips in the laptop I'm using at the moment. Likewise for ARM it's the Washing-Machine Chip de nos jours.

      Supercomputers are just for fun!

      1. ToddRundgrensUtopia

        Re: I'm a little skeptical about this kind of industrial policy...

        @ David Lester.

        "Historically, since custom Mainframes died, that's always meant using Intel processors."

        Not quite right. Contemporary with mainframes we had bespoke supers from Cray, CDC, NEC and IBM SP2/3. Post those we had oddities like Thinking Machines and ELXSI, and then the first ccNUMA super from Convex, which you can still buy today as an HP something or other. Then, just before we got to Intel-based clusters, we had DEC Alpha as the main HPC cluster processor of choice. Once Intel developed the Pentium Pro there was only going to be one $/flop winner, and that was x86.

        On the stock component side, you can't use Ethernet due to latency, so the cost is in the interconnect, such as InfiniBand (which is by far the biggest). Earlier high-speed, low-latency interconnects included SCI and Myrinet.

        1. David Lester

          Re: I'm a little skeptical about this kind of industrial policy...

          Absolutely right about the cost and complexity of the interconnect (and thanks for the historical correction; DEC Alpha supercomputers, eh? There's a blast from the past!)

          Nevertheless, I thought InfiniBand now counted as "stock" -- if somewhat expensive -- hardware?

  8. Lomax
    Mushroom

    ATOS

    > "involves 23 partner organisations with ATOS and the Barcelona Supercomputing Centre (BSC) in the driving seats."

    That bodes well...

    https://en.wikipedia.org/wiki/Atos#Controversy
