Flash array startup E8 whips out benchmarks, everyone will complain

NVMe over Fabrics flash array startup E8 says its box outperforms Dell EMC and Pure arrays by up to 20 times. E8 is now selling and shipping a 2U, 24-slot NVMe SSD array, accessed over NVMe over Fabrics, with dual controllers and some logic agents in the accessing servers. It provides a claimed 10 million IOPS with 100 microsecond …

  1. Anonymous Coward

    >With 256K BW MBps Infinidat achieved 10,024 and E8 37,197 – 37X higher.

    You might want to check your arithmetic.
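
    For what it's worth, a quick sanity check on the two quoted figures (both straight from the article):

        # 256K bandwidth figures as quoted, in MBps
        infinidat_bw = 10_024
        e8_bw = 37_197

        print(f"E8 / Infinidat = {e8_bw / infinidat_bw:.1f}x")  # ~3.7x, not 37x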

    Anyway, IOPS figures are meaningless, especially as they don't state how big the box is. I can easily double the IOPS of storage model X by simply adding another one, i.e. 2X.

    What's important here is one figure: the steady-state latency of 700 µs. But even that is hard to judge without the details.

    There are also no details regarding the price. Is that before or after the usual unrealistic data reduction claims? How does data reduction affect performance? Did they include it in the test?

    That said, there's no denying the abilities of NVMe. I do wish that someone independent would run these tests though.

    1. mishagreen

      Disclaimer: E8 employee here.

      $2/GB to $3/GB is for usable capacity, and there are no funky dedupe factors in play. It actually is stated how big the box is: a 2U unit with 24 NVMe SSDs. You get the same performance whether you use 1.6TB, 3.2TB or 6.4TB SSDs, so usable capacity will be 35TB, 70TB or 140TB depending on the SSD size.
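
      For the curious, a back-of-the-envelope sketch of how those usable figures line up with raw capacity. The "roughly two drives' worth of overhead" factor is an assumption made to fit the quoted numbers, not a published protection scheme:

          # Back-of-the-envelope usable capacity for a 24-SSD 2U box.
          # The ~2-drives-of-overhead factor is assumed, not an E8 spec.
          DRIVES = 24
          OVERHEAD_DRIVES = 2  # assumed parity/spare equivalent

          for ssd_tb in (1.6, 3.2, 6.4):
              raw = DRIVES * ssd_tb
              usable = raw * (DRIVES - OVERHEAD_DRIVES) / DRIVES
              print(f"{ssd_tb}TB SSDs: raw {raw:.1f}TB, usable ~{usable:.1f}TB")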

      1. spinning risk

        JBOD?

        Is there any data reduction? What is the RAID persona and what is the overhead? Can you scale them out?

        Trying to find the value here. Thin provisioning? Any guarantees at all?

  2. Anonymous Coward
    Anonymous Coward

    Pure numbers are still nonsense

    The Pure data was a fabrication when it was first published by EMC, and it was debunked as nonsense back then. Having other manufacturers leverage the same "alternative facts" as a benchmark doesn't make the original data any less of a fabrication.

    1. Anonymous Coward

      Re: Pure numbers are still nonsense

      You appear to have skin in this game.

    2. Throatwarbler Mangrove

      Re: Pure numbers are still nonsense

      [citation needed]

  3. Anonymous Coward

    DSSD

    Pity DSSD was abandoned; that really was the array Dell EMC had in this space.

    Why go up against Unity rather than XtremIO?

    It really is an apples-to-oranges comparison.

    1. klaxhu

      Re: DSSD

      Why?! Because it suits their agenda, and for the non-techie it has become too cumbersome to keep up with the world of acronyms.

      When I left EMC some years ago I thought I was making a mistake; now I know I did well to leave.

      This has become the most boring segment of IT.

    2. Anonymous Coward
      Anonymous Coward

      Re: DSSD

      DSSD was abandoned because the market for such a device was so very small. Thankfully that technology will trickle down into the rest of the Dell EMC portfolio.

  4. briancarmody

    Hoisted with our own petard!

    Customers with compact, ultra-low latency use cases really need to look at E8. They are doing today what EMC and Pure have in their Future Vision pitch decks. Great product, great management team. Watch these guys.

    1. Anonymous Coward

      Re: Hoisted with our own petard!

      You got the job then?

  5. spinning risk

    Looks like good performance

    NVMe is pretty compelling on the E8

    Does anyone know if E8 can scale up, scale out, or both?

    Any data reduction capabilities?

  6. Anonymous Coward

    They forgot to make a business plan. Nobody cares. They forgot to take note that, as it stands, most orgs are barely getting their first SSD-based array in house, let alone moving (light)years ahead to something that needs to be orders of magnitude faster than SSD.

    Why? There isn't one good reason. "Because it's cool" isn't a business plan. Where are my data services? Integrated backups? Cloud integration? Multi-protocol support? God forbid, a hybrid SSD / spinning rust strategy (because SAS is dead but high-capacity SATA for archive is not).

    I correct myself. Their business plan is "do enough (impressive?) PowerPoints to technology executives until somebody buys my company and integrates my tech into their 2020-version storage array"...

  7. TheSolderMonkey

    Luvleeee

    Great product, but what's the real world use case?

    It's a JBOD; there is no data protection, no tiering, no thin provisioning, no compression, no encryption, no applications written to make use of it, no sun, no moon! No morn, no noon! No dawn, no dusk, no proper time of day. No viable market.

    I can think of maybe three customers...

    * CERN, for capturing data from those big collectors.

    * Lawrence Livermore and other extreme compute farms.

    * Nope, really can't think of a third. Not even Watson could make use of this.

    There are very few use cases that need this many IOPS over a relatively small dataset and can accept the lack of data protection.

    Really great product though, let's hope it's not E8's November.

  8. Anonymous Coward

    What is an AFA? (according to SNIA)

    Follow the link to the SNIA doc shared by E8 and you'll find none of the systems listed are 'real' AFAs anyway! ;-)

    ...

    What is an AFA?

    - All-Flash-Array

    - Not SSD (form-factor flash drives)

    - Built from Function-designed flash modules

    ...

    By that standard everyone listed is disqualified for not being a SNIA AFA! But this might just be because the doc was authored with a focus on the non-SSD IBM FlashSystem.

    So, forgiving SNIA the editing miss, they do a pretty good job of defining workload tests, attempting to simulate mixed workloads with a variety of read/write ratios and a wide distribution of block sizes. However, many real-world applications tend to have different block sizes for reads and writes (I'd say always, for mixed-workload environments), which is very hard to simulate precisely with a synthetic test, but still less risky for most than chucking a new system into production to find the real answer.
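
    As a rough illustration (my own sketch, not SNIA's test definition or any vendor's harness), this is what a mixed generator with separate read and write block-size distributions looks like; the specific sizes and weights are made up:

        import random

        # Toy mixed-workload generator: 80/20 read/write split with separate,
        # assumed block-size distributions for reads and writes.
        READ_RATIO = 0.8
        READ_BLOCKS = [(8_192, 0.6), (16_384, 0.4)]    # (bytes, weight), assumed
        WRITE_BLOCKS = [(4_096, 0.5), (32_768, 0.5)]   # assumed

        def next_io():
            """Return (op, block_size) for the next synthetic I/O."""
            if random.random() < READ_RATIO:
                op, blocks = "read", READ_BLOCKS
            else:
                op, blocks = "write", WRITE_BLOCKS
            sizes, weights = zip(*blocks)
            return op, random.choices(sizes, weights=weights)[0]

        print([next_io() for _ in range(8)])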

    EMC, Infinidat and E8 state that they used an 80 per cent read workload for the published results at 8KB and 16KB. I'd wager these weren't extracted from the mixed-block SNIA workload run across the board: "80/20 read/write ratio, all sequential, mixed block sizes matching your actual distribution, 5:1 reducible data, some I/O banding with drift". So the results here appear to be more of a hero test for a specific IO profile than a meaningful real-world comparison, though perfectly valid for a single Tier-0 app with that IO profile. Hmm, what's E8's sweet spot again?

    Last point: one person's 5:1 reducible data is another person's 7:1 or even 10:1 reducible data. Pretty much every vendor uses different data reduction secret sauce, and some are more granular and efficient than others, achieving higher reduction rates for the same application data. For a given workload profile, a higher reduction ratio can mean higher performance on a system, so testing with a 5:1 reducible dataset across the board still doesn't give you a real-world, apples-to-apples comparison.
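
    To make that concrete, a toy calculation under the (big) simplifying assumption that backend flash write bandwidth is the bottleneck and the reduction itself costs nothing; the numbers are illustrative, not measured:

        # If the flash-side write budget is fixed, achievable host write
        # throughput scales with the data reduction ratio. Illustrative only.
        backend_write_budget_mbps = 2_000  # assumed flash-side limit

        for ratio in (5, 7, 10):
            print(f"{ratio}:1 reduction -> up to ~{backend_write_budget_mbps * ratio} MBps at the host")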

    Ignore the vendor spin and test as close to the real world as you can, for both your data and your system behaviour!
