AI, AI, Pure: Nvidia cooks deep learning GPU server chips with NetApp
NetApp and Nvidia have introduced a combined AI reference architecture system to rival the Pure Storage-Nvidia AIRI system. It is aimed at deep learning and, unlike FlexPod (Cisco and NetApp's converge …
COMMENTS
-
Tuesday 5th June 2018 11:08 GMT SPGoetze
Not by much?
"It appears from these charts, at least, that the NetApp Nvidia RA performs better than AIRI but, to our surprise, not by much, given the NetApp/Nvidia DL system's higher bandwidth and lower latency – 25GB/sec read bandwidth and sub-500μs – compared to the Pure AIRI system – 17GB/sec and sub-3ms."
From the *numbers* I see, it's roughly 220-280% of AIRI's performance. Maybe if you scaled the graphs the same, it would not "surprise" you so much... And from the looks of it, it's probably NetApp's latency advantage at work; these configurations don't appear to reach bandwidth/throughput saturation yet.
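To put that point in numbers: a minimal back-of-the-envelope sketch using only the headline figures quoted above (25GB/sec vs 17GB/sec bandwidth, sub-500μs vs sub-3ms latency). The actual benchmark results come from the article's charts, which aren't reproduced here, so this only shows why bandwidth alone can't explain a ~220-280% gap.

```python
# Headline figures quoted in the article (not measured benchmark results).
netapp_bw, pure_bw = 25.0, 17.0        # GB/s, vendor read bandwidth
netapp_lat, pure_lat = 0.0005, 0.003   # s, quoted latency ceilings

# Bandwidth alone gives NetApp only a ~1.47x edge...
bw_ratio = netapp_bw / pure_bw

# ...while the latency ceilings differ by up to ~6x, which is more in
# line with the 2.2-2.8x gap the commenter reads off the charts.
lat_ratio = pure_lat / netapp_lat

print(f"bandwidth ratio: {bw_ratio:.2f}x")  # ~1.47x
print(f"latency ratio:   {lat_ratio:.1f}x")  # ~6.0x
```

If the observed gap exceeds what the bandwidth ratio allows, latency (or something other than raw throughput) must be doing the work, which is the commenter's argument.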
-
Tuesday 5th June 2018 20:05 GMT sanjeevsharma
I think that if you are comparing external storage system performance, you should compare AlexNet model results. From what I have read, ResNet models are GPU-bottlenecked.
Also, the bandwidth consumed by the benchmarks is nowhere near the published maximum throughput/bandwidth numbers for either the Pure or NetApp arrays, so that can't be the only comparison criterion when looking at the results.