HPC botherer DDN breaks file system benchmark record

Enterprise HPC storage vendor DDN has run the SPEC SFS 2014 benchmark 25 per cent faster than an E8 NVMe storage system using Intel Optane 3D XPoint drives. NVMe-over-Fabrics fanboy startup E8 had previously smashed the benchmark in January this year. DDN used its SFA 14KX array, fitted with 72 x MLC (2 bits/cell) SSDs, 400GB …

  1. Steve Chalmers

    Sometimes those results are about how much money you're willing to spend

    I'm very impressed by the DDN result, but would point out that DDN was running 72 SSDs inside vs E8 running just 24.

    The last time I ran a team doing a benchmark of this kind, about 20 years ago, (1) we had to tie up over a million dollars' worth of equipment for months, and (2) we had a competitor who had quietly made it clear that if anyone beat their number, they'd just come back with more equipment and win again. This looks like a much saner benchmark :)

    1. Anonymous Coward

      Money is no object?

      I'm finding this particular benchmark to be pretty frustrating, to be honest. Increasingly it seems that the top result will belong to whoever is willing to gather up the most hardware (although kudos to DDN for doing it with a stack of SSDs).

      For all its flaws, at least the old TPC-C benchmark attempted to factor in cost per TPS.
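
      For the curious: TPC-C's price/performance metric divides the cost of the whole priced configuration by the throughput, so piling on hardware shows up in the score. A toy sketch of that calculation in Python, with made-up numbers (nothing below comes from an actual TPC-C result):

        # TPC-C reports price/tpmC: total priced system cost divided by
        # throughput in transactions per minute. Both figures here are
        # hypothetical, purely for illustration.
        system_cost_usd = 1_500_000   # made-up total configuration cost
        tpmc = 500_000                # made-up throughput in tpmC

        price_perf = system_cost_usd / tpmc
        print(f"${price_perf:.2f} per tpmC")   # -> $3.00 per tpmC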

  2. Secta_Protecta

    Speed is all well and good but....

    Interesting that the top two and others are using IBM Spectrum Scale; when I tested it last year it kept falling over: nodes dropped out of the cluster and refused to be re-added, etc. Not much use having all that speed if the software "defining" the storage is unreliable.

    1. Anonymous Coward

      Re: Speed is all well and good but....

      Oh yeah, I totally agree on the unreliability: it's so unreliable that ORNL chose it for the 2018 fastest computer. Obviously they can afford to lose their 250PB of data, 30 billion files and 30 billion directories, and Spectrum Scale only has to cope with their easy-to-reach requirements:

      - 2.5 TB/sec single stream IOR

      - 1 TB/sec 1MB sequential read/write

      - Single Node 16 GB/sec sequential read/write

      - 50K creates/sec per shared directory

      - 2.6 million 32K file creates/sec

      - 2 and 2.3PB file dumps (the whole cluster memory)

      ;-)
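
      To put those dump figures in perspective, a back-of-the-envelope sketch (my own arithmetic, taking the larger 2.3PB figure and assuming the 2.5 TB/sec rate quoted above applies to the whole dump, which the list doesn't actually state):

        # Rough checkpoint-time estimate from the figures quoted above.
        DUMP_SIZE_PB = 2.3          # whole-cluster memory dump
        WRITE_BW_TB_PER_S = 2.5     # stream rate quoted in the list above

        dump_size_tb = DUMP_SIZE_PB * 1000           # 1 PB = 1000 TB
        seconds = dump_size_tb / WRITE_BW_TB_PER_S   # time = size / bandwidth

        print(f"{dump_size_tb:.0f} TB at {WRITE_BW_TB_PER_S} TB/s "
              f"= {seconds:.0f} s, about {seconds / 60:.0f} minutes per dump")
        # -> 2300 TB at 2.5 TB/s = 920 s, about 15 minutes per dump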

    2. dannyk96

      Re: Speed is all well and good but....

      Nodes repeatedly dropping out of a Spectrum Scale cluster indicates network problems, not a problem with Spectrum Scale itself.

      1. curiiousguy

        Re: Speed is all well and good but....

        Yes, 90% of software-defined storage issues are network-related.

  3. daurtanyn

    DDN architecture helps

    If the DDN architecture holds to their previous kit, the Fibre Channel-connected RAID controllers have 10 FC loops out the backside. DDN's secret sauce has been the ability to stripe data across those 10 loops in parallel. A parallel connection system of SSDs could be how they are outperforming NVMe device subsystems.

    NVMe isn't magical; it still has limits. It appears DDN is just being agile in leveraging their architectural strengths.
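
    To make that parallelism concrete, here's a minimal sketch of round-robin striping across back-end loops. The loop count comes from the comment above; the chunk size, names and everything else are invented for illustration and are not DDN's implementation:

      # Deal fixed-size chunks round-robin across N back-end loops so they
      # can be written concurrently; aggregate bandwidth then scales with
      # the number of loops rather than with a single device.
      from itertools import cycle

      NUM_LOOPS = 10        # loops per controller, per the comment above
      CHUNK_SIZE = 1 << 20  # 1 MiB stripe unit (arbitrary for this sketch)

      def stripe(data: bytes, num_loops: int = NUM_LOOPS):
          """Split data into chunks and queue each on a loop, round-robin."""
          chunks = [data[i:i + CHUNK_SIZE]
                    for i in range(0, len(data), CHUNK_SIZE)]
          queues = [[] for _ in range(num_loops)]
          for loop_id, chunk in zip(cycle(range(num_loops)), chunks):
              queues[loop_id].append(chunk)
          return queues

      queues = stripe(bytes(32 * CHUNK_SIZE))
      print([len(q) for q in queues])  # -> [4, 4, 3, 3, 3, 3, 3, 3, 3, 3]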

    1. Anonymous Coward

      Re: DDN architecture helps

      On the 14K they have 10 (well, 12, but 10 is the optimal use case) SAS loops per controller out to the enclosures.

      And it's an SFA14KXE, so it's got GPFS servers embedded in the controllers, meaning EDR IB or 100GbE out the back to the clients (or OPA if you really want). They have 4 x IB/Ethernet ports per controller out the back, for a total of 8 per array.
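
      A quick aggregate-bandwidth sanity check on those ports (both EDR InfiniBand and 100GbE signal at 100 Gbit/s; real throughput will be lower after protocol overheads):

        # Raw front-end bandwidth implied by 8 x 100 Gbit/s ports per array.
        PORTS_PER_ARRAY = 8     # 4 per controller x 2 controllers, as above
        LINK_GBIT_PER_S = 100   # EDR IB / 100GbE line rate

        aggregate_gbit = PORTS_PER_ARRAY * LINK_GBIT_PER_S
        aggregate_gbyte = aggregate_gbit / 8   # bits -> bytes

        print(f"{aggregate_gbit} Gbit/s = {aggregate_gbyte:.0f} GB/s raw")
        # -> 800 Gbit/s = 100 GB/s raw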

      You are correct in saying it's the architecture helping.
