Quick poll
Has anyone actually sat around waiting for a typical task on a mainstream PC because the SSD did not have enough IOPS?
Intel has launched new, lower-end versions of its Optane solid-state drives based on 3D XPoint non-volatile memory. The new Optane SSD 800P comes in 58GB and 118GB variants, both in the M.2 2280 form factor and requiring an NVMe PCIe 3.0 x2 interface. Intel advises the product can be used "as a standalone SSD, in a dual …
Not exactly, but I know my Samsung 950 Pro equipped PC feels a lot snappier than an equivalent SATA SSD equipped one. I multitask very heavily and like the way it keeps up with anything I throw at it.
But then, I consider myself more a power user than a typical consumer.
A .pst file repair (admittedly not very often) runs a lot faster on the 950 Pro than the average SATA SSD. But nothing beats running that in a RAM Drive :-D
> Has anyone actually sat around waiting for a typical task on a mainstream PC because the SSD did not have enough IOPS?
Typical for whom?
Visual Studio SQL Server Data Tools database projects build their models on-disk because Microsoft's Visual Studio team steadfastly refuse to compile their own software in 64-bit mode.
One particularly complex model which I work with frequently can take actual minutes to build on SSD.
(When I was using spinning rust, it could lock up the entire machine for hours.)
So, yes.
After seeing two Intel SSDs get bricked simply due to unexpected power loss, guess what? I purchase only non-Intel.
SanDisk, Kingston and more recently Western Digital mainly.
Crucial and OCZ are other makes I actively avoid too.
Reliability rates higher than small differences in performance for most users. A fact that seems to elude some manufacturers.
I've had good luck with Crucial and OCZ, unlike many others, but I'm religious about my systems being more than capable of protecting themselves against a universe that is out to get me through them. "I believe in a personally malignant universe." Which almost certainly accounts for the difference here. I guess Intel would work in my context, not that I intend to try. A 10-to-40Gbps upgrade is currently on my agenda.
Can I ask what you're doing that would justify upgrading from 10-Gbps to 40-Gbps? I recently started moving some fairly large customers back down to 1Gbps since we stopped using VMware and simply were wasting too much money on maintaining a data center network. So now that we're 100% container and FaaS, we don't need a data center network anymore. We just use a lot of small cheap nodes instead and if we lose a node, a switch, an entire location, who cares. We can throw it away and add a new one for $1000.
It's amazing: if you took the money you're wasting on 40GbE and spent it instead on building the systems running on it properly, you'd often find as much as a 10,000x resource waste (not a percentage... an actual 10,000 times).
Ask yourself... how many resources do you think you really need to run a 10-megabyte database (100MB if your company is really big) like Active Directory?
And how many resources do you need to handle a million banking transactions an hour, each taking about 5-10ms to run on a Raspberry Pi?
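The arithmetic behind that claim is easy to check. A quick back-of-envelope sketch, taking the 5-10ms per-transaction figure above at its worst case:

```python
import math

# A million transactions per hour, expressed per second.
TX_PER_HOUR = 1_000_000
arrival_rate = TX_PER_HOUR / 3600        # ~278 transactions/second

SERVICE_TIME_S = 0.010                   # worst case from above: 10 ms each
per_worker_tps = 1 / SERVICE_TIME_S      # one sequential worker = 100 tx/s

# Sequential workers needed just to keep pace (no queueing headroom):
workers = math.ceil(arrival_rate / per_worker_tps)

print(f"{arrival_rate:.0f} tx/s arriving, {workers} workers needed")
```

About 278 transactions a second, coverable by three sequential workers; hardly data-center-network territory, which is the commenter's point.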
I've been lucky with my two OCZs as well. Bought in 2011, right before the company went downhill, I feared the worst, but they're still chugging away, reliable as ever.
Okay, I have all my computer equipment on a UPS with current smoothing (or whatever that tech is called). That may count for something.
Then again, I've seen colleagues' SSD laptops die suddenly in the span of ten minutes, so maybe I'll look into buying some replacements soon, just in case.
Edit: just checked, I can't find any OCZ drives to buy anymore - so that's one problem gone.
Here is an interesting discussion on power-loss capacitors in SSDs. They are the reason I only buy "enterprise grade" SSDs, and only after having double-checked the specs for capacitors and their function. Both Intel and Crucial make some good, enterprise-grade SSDs, but you have to make your choices wisely. I would definitely not trust OCZ, though.
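For anyone wondering what those capacitors actually protect: even a correctly flushed write only survives a power cut if the drive honors cache-flush commands, which is what the capacitors back up. A minimal sketch of a flushed write in Python (the file path is illustrative):

```python
import os
import tempfile

def durable_write(path: str, data: bytes) -> None:
    """Write data and ask the OS to push it to stable storage.

    Note: os.fsync only guarantees durability if the drive honors
    flush commands. Consumer SSDs without power-loss capacitors can
    still lose data sitting in their volatile write cache.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)   # flush file data through the OS page cache
    finally:
        os.close(fd)

# Illustrative usage with a temp file:
path = os.path.join(tempfile.gettempdir(), "demo.bin")
durable_write(path, b"hello")
```

The application-side code is identical either way; the difference between an enterprise drive and a cheap consumer one is whether that final flush actually means anything when the lights go out.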
The reason you got PCIe is because SATA isn't cutting it, or you just wanna show off. You pay the premium for the extra speed it offers, for your games or software.
I can't imagine anyone wanting to pay a premium for such a tiny SSD. Well, unless Apple decide to brand them and make them essential for the mac experience..
> Why would anyone want this Intel product ?
Flash has a 128K erase/write block size, so it performs poorly when doing small in-place updates (of, say, 4K blocks).
Hence the people who want this are people who are doing tens of thousands of database updates per second. On their home PC or laptop. Hmm... pretty small market then.
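To put a number on that erase-block penalty: a naive in-place 4K update forces a read-modify-write of the whole erase block. This is a deliberately simplified model (real flash translation layers remap writes to dodge exactly this, at the cost of garbage collection later), using the 128K figure from above:

```python
ERASE_BLOCK = 128 * 1024   # erase block size assumed above
PAGE_UPDATE = 4 * 1024     # one small in-place database update

# Worst case: rewrite the entire erase block to change one 4K page.
write_amplification = ERASE_BLOCK // PAGE_UPDATE
print(f"worst-case write amplification: {write_amplification}x")
```

A worst-case 32x amplification on small random writes is the gap XPoint's byte-addressable media is meant to close, hence the pitch at heavy-update workloads.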
I tried the previous 900P, the larger-capacity direct SSD competitor, and noticed very little difference in day-to-day tasks, even IOPS-demanding ones; the difference is so marginal I didn't notice it until I played back the recordings side by side.
What I would say is that we need tech like this to keep coming out, it's pushing boundaries even if we're not sure we need that to happen, otherwise the markets all become stale and innovation is simply not valued.
To expand on your second paragraph, the blocking factor is almost always CPU/RAM/network/disk in a never-ending cycle. I would certainly rather they innovate for their next swing of the cycle than wait until it becomes the blocking factor to start looking for improvements.
When we first heard mention of XPoint, the obvious market was in high-IO environments. But there was also mention of the idea of using this stuff in low-end hardware to replace the typical RAM + Flash.
I suspect the price is still too high for that to make sense right now, but who knows. Is that still a likely thing?