AI, AI, Pure: Nvidia cooks deep learning GPU server chips with NetApp

NetApp and Nvidia have introduced a combined AI reference architecture system to rival the Pure Storage-Nvidia AIRI system. It is aimed at deep learning and, unlike FlexPod (Cisco and NetApp's converge …

  1. Anonymous Coward

    But do I care?

  2. SPGoetze

    Not by much?

    "It appears from these charts, at least, that NetApp Nvidia RA performs better than than AIRI but, to our surprise, not by much, given the NetApp/Nvidia DL system's higher bandwidth and lower latency – 25GB/sec read bandwidth and sub 500μsec – compared to the Pure AIRI system – 17GB/sec and sub-3ms"

    From the *numbers* I see, it's ~220-280% of the performance. Maybe if you scaled the graphs the same, it wouldn't "surprise" you so much... And from the looks of it, it's probably NetApp's latency advantage, and we haven't reached bandwidth/throughput saturation yet (in these configurations).
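
    A quick ratio check on just the figures quoted above (a rough sketch only; the per-chart images/sec values SPGoetze is working from aren't reproduced here, so this covers nothing but the headline specs):

    ```python
    # Ratios from the bandwidth/latency figures quoted in the article.
    # These are the only numbers available here; the benchmark chart
    # values themselves are not reproduced in this thread.
    netapp_bw_gbps, pure_bw_gbps = 25.0, 17.0   # read bandwidth, GB/s
    netapp_lat_s, pure_lat_s = 500e-6, 3e-3     # "sub-500 microsec" vs "sub-3 ms"

    print(f"bandwidth ratio : {netapp_bw_gbps / pure_bw_gbps:.2f}x")    # ~1.47x
    print(f"latency ratio   : {pure_lat_s / netapp_lat_s:.1f}x lower")  # ~6x
    ```

    On those specs alone the bandwidth gap is modest (~1.5x) while the latency gap is large (~6x), which is consistent with the latency-advantage reading rather than a bandwidth-saturation one.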

  3. sanjeevsharma

    I think that if you are comparing external storage system performance, you should compare AlexNet model results. From what I have read, ResNet models are GPU bottlenecked.

    Also, the bandwidth consumed by the benchmarks is nowhere near the published maximum throughput/bandwidth numbers for the Pure and NetApp arrays, so that can't be the only comparison criterion when looking at results.
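
    As a rough illustration of that point, here is a back-of-the-envelope sketch of the read bandwidth an ImageNet-style training run might actually pull from storage; the images/sec and image-size figures below are assumed round numbers, not values taken from either vendor's benchmark report:

    ```python
    # Hedged estimate: storage read bandwidth needed to feed an
    # ImageNet-style training benchmark at an assumed throughput.
    images_per_sec = 5000            # assumed aggregate training throughput
    avg_image_bytes = 110 * 1024     # ~110 KB average JPEG (approximate)

    required_gbps = images_per_sec * avg_image_bytes / 1e9
    print(f"~{required_gbps:.2f} GB/s of reads")   # ~0.56 GB/s, far below 17-25 GB/s array maximums
    ```

    Even with generous assumptions, the data feed is a small fraction of what either array can deliver, which is why raw array bandwidth alone says little about these results.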

  4. Anonymous Coward

    RAID0 NetApp?

    The NetApp benchmark was RAID0 on 4 disks?

    Might as well do it on internal drives. Or did NetApp just not disclose its A700 configuration?

    Anyway, I don't trust benchmarks that hide data.
