What do people want? If we're talking mainstream enterprise SATA SSDs, reliability, chirps Micron

Micron is looking to boost its high-fidelity cred. The vendor has refreshed its three-model 5100 SATA SSD line with a two-variant 5200, increasing reliability from two million to three million hours mean time between failures (MTBF). The 5100 SATA 2.5-inch SSD, built from 32-layer TLC 3D NAND, came in three flavours: Eco (high …
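For scale, an MTBF figure can be turned into a rough annualized failure rate (AFR), assuming a constant failure rate; a minimal sketch using the two and three million hour figures above:

```python
HOURS_PER_YEAR = 24 * 365.25  # 8766 hours

def afr(mtbf_hours: float) -> float:
    """Annualized failure rate implied by an MTBF figure.
    Assumes a constant (exponential) failure rate, so AFR ~ hours/year / MTBF."""
    return HOURS_PER_YEAR / mtbf_hours

print(f"5100 (2M hours MTBF): {afr(2_000_000):.2%}")  # ~0.44% per year
print(f"5200 (3M hours MTBF): {afr(3_000_000):.2%}")  # ~0.29% per year
```

In other words, the headline jump from two to three million hours shaves the implied yearly failure probability by about a third.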

  1. Naselus

    "The 5200, which replaces it, uses 64-layer TLC 3D NAND, and yet there is no change in maximum capacity, at 7.68GB."

    Do they intend to sell them to 1998, or do you mean 7.68TB?

    1. Anonymous Coward

      Man I would have been all over a 7.68GB SSD in 1998, though finding a SATA card would have been a bit of an issue...

  2. Anonymous Coward

    Pfft

    Shaine said it represented an opportunity for Micron to differentiate its products

    Which, translated from double-glazing salesman speak, means price gouging

  3. Nate Amsden

    reliability and endurance good to see

    I just checked again: the oldest SSDs in my org's first AFA are from 10/2014, cMLC media, and the array is still reporting 95% of wear life left (2TB media), with an average of ~80% write workload to the controllers. This tech has been a lot more durable than my expectations.
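    A back-of-the-envelope extrapolation from those numbers, assuming linear wear and roughly 3.3 years in service from 10/2014 to the time of the comment (both assumptions mine, not from the array):

    ```python
    years_in_service = 3.3   # assumption: Oct 2014 to early 2018
    wear_consumed = 0.05     # 5% of wear life used (95% reported remaining)

    # Linear extrapolation: if 5% took 3.3 years, 100% takes 20x that.
    projected_lifetime_years = years_in_service / wear_consumed
    print(f"~{projected_lifetime_years:.0f} years at the current write rate")  # ~66 years
    ```

    On that crude projection the media outlives the array, the controllers, and probably the data centre.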

    1. joed

      Re: reliability and endurance good to see

      I bet that TLC media won't last this long (and manufacturers surely hope so too).

    2. Alan Brown Silver badge

      Re: reliability and endurance good to see

      "This tech has been a lot more durable than my expectations."

      Ditto. I have managed to kill some SSDs - but only low end consumer-grade MLC devices and only by writing well beyond their stated endurance.

  4. Anonymous Coward

    Why do people bother with SSDs?

    Ordered a new i7 and it arrived with a 128GB SSD. Sure, the OS boots in less than 9 seconds, but why the hell would I trade 10 years of hard-disk-based rock-solid reliability for obsolescence at a multiple of the price? Threw a swap file on the SSD; when it starts to fail in about a year I'll sling it in the bin.

  5. CheesyTheClown

    Reliability and endurance isn't really necessary in an enterprise

    As soon as you stop using legacy storage systems like SAN and DAS, the fact is I'd much rather have cheap and, maybe, fast.

    Using modern file systems, you can easily scale out. So, having many small servers managed by a system like Ubuntu MaaS delivering scale-out is far better than having a few massive servers with lots of storage that if something fails kills everything.

    If you're running SQL servers, then scale out. Drive, system, data center failures don't really matter. There's always at least 3 copies of every piece of data. If there's ever 2, then the system makes a 3rd automatically.

    If you're running NoSQL... you would never ever ever run a SAN or NAS. It is possibly the worst idea in history to do so.

    If you're running Blob... you would scale out and use a file system which shards copies and guarantees at least 3 copies at all times.

    If you're running log storage, you'd use a system like FluentD which would scale out using sharding.

    If you absolutely have to use something like NFS or SMB, you'd use scale-out servers via pNFS or Windows Scale-Out File Server.

    If you absolutely have to run some block storage system, you can use scale-out iSCSI such as StarWind or Datera. In fact, StarWind is great because it can give you scale-out NFS on Windows Server.

    It's far smarter these days to use strictly scale-out systems, since technologies such as NVMe and Fibre Channel can no longer deliver the performance; when they can, it comes at a ridiculous cost which makes no sense.

    So....

    Big is nice, but not really important.

    Fast is nice, but once you get away from keeping everything you ever owned on a single SAN, it's not that important.

    Reliable... not really important, since hard drive failures in modern storage don't matter.

    Cheap... this is the thing. $50-$300 of storage per storage node is pretty good.

    I'm standardizing now on 120GB mSATA drives for the enterprise. The prototype has 9 Banana Pi nodes with one drive each and gigabit Ethernet in between. I'm hoping to manage 100,000+ users on the platform. Since we're moving to 100% transactional, we're considering a few more nodes with 12TB spinning disks as cold storage. If we need more performance we might add a few more nodes, but I have no idea how we'll ever use that much capacity. We've already over-provisioned by at least threefold.
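    The "always at least three copies" invariant described above can be sketched as a simple repair loop. A hypothetical illustration (the `placement` map, node names, and function names are mine, not any particular product's API; the actual copy step is elided):

    ```python
    TARGET_REPLICAS = 3

    def under_replicated(placement: dict) -> dict:
        """Given {shard: [nodes holding it]}, return shards below the
        replica target and how many extra copies each needs."""
        return {shard: TARGET_REPLICAS - len(nodes)
                for shard, nodes in placement.items()
                if len(nodes) < TARGET_REPLICAS}

    def repair(placement: dict, all_nodes: list) -> dict:
        """Re-replicate each under-replicated shard onto nodes that
        don't already hold a copy of it."""
        for shard, deficit in under_replicated(placement).items():
            candidates = [n for n in all_nodes if n not in placement[shard]]
            placement[shard].extend(candidates[:deficit])
        return placement

    # A shard drops to 2 copies (say a drive died); the repair pass restores the third.
    state = {"orders": ["node1", "node2"], "users": ["node1", "node3", "node4"]}
    state = repair(state, ["node1", "node2", "node3", "node4"])
    print(len(state["orders"]))  # back to 3 replicas
    ```

    This is why the commenter can shrug off individual drive failures: the system's job is to notice the deficit and re-replicate, not to keep any single device alive.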

    1. Anonymous Coward

      Re: Reliability and endurance isn't really necessary in an enterprise

      Holy f*ck, who the hell do you work for and what do you do?

    2. Anonymous Coward

      Re: Reliability and endurance isn't really necessary in an enterprise

      Storage guys always have too much time on their hands...

    3. Anonymous Coward

      Re: Reliability and endurance isn't really necessary in an enterprise

      Thanks, that was a very interesting read and quite well informed. You're obviously one of those thorough and knowledgeable older storage techies who know lots but do less, apart from management, which means doing less.

  6. Anonymous Coward

    No moore's law with ssd

    The electronics industry can mass-produce chips in huge volumes, which used to increase capacity and reduce prices, but now the price does not seem to change much. As a technology, SSDs have matured: reliability and performance have improved and sales have increased, but capacity and cost have not improved at the rate I would expect.

    Why are all SSDs only available as 2.5" drives and not 3.5" drives? The 3.5" size would allow significantly more space for storage, allowing larger-capacity drives built from smaller-capacity chips.

    A 500GB SSD appears to sell for around £125 but an HDD sells for less than £40; I would have thought manufacturing an HDD is more complex than an SSD.
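    The gap in the quoted prices is easy to put in per-gigabyte terms (figures taken from the comment above; rough street prices, not authoritative):

    ```python
    def price_per_gb(price_gbp: float, capacity_gb: int) -> float:
        """Simple cost-per-gigabyte comparison."""
        return price_gbp / capacity_gb

    ssd = price_per_gb(125, 500)  # £0.25/GB
    hdd = price_per_gb(40, 500)   # £0.08/GB
    print(f"SSD costs {ssd / hdd:.1f}x the HDD per GB")  # ~3.1x
    ```

    So at these prices the SSD carries roughly a 3x per-gigabyte premium, which frames the "twice the price" tipping point discussed in the reply below.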

    1. Alan Brown Silver badge

      Re: No moore's law with ssd

      "Why are all SSDs only available as 2.5" drives and not 3.5" drive? "

      Demand and heat.

      You can get 3.5" SSDs, but no one's buying them. If you stuff a case that size full of NAND it gets pretty toasty. 2.5" drives are just right for a single PCB, which is much easier to produce than a multi-board device - and why put a single PCB in a 3.5" case when you can put it in a 2.5" one and let those who need 3.5" buy an adaptor?

      Lots of smaller chips is more expensive to manufacture than fewer larger ones, and the price/GB is usually lower for the larger package sizes, so there's not much sense in taking the "more, smaller chips" route - on top of that, the cumulative power consumption of the smaller packages tends to be higher, which means more heat to get rid of.

      "A 500GB SSD appears to sell for around £125 but a HDD sells for less than £40"

      A 500GB TLC SSD is much faster and has more endurance than any £40 500GB HDD - and in a laptop or other portable device it doesn't break when you drop it - which means SSDs are about at the knee point where it's worth paying the extra money. In smaller capacities it's a no-brainer. HDDs are still outselling SSDs in larger sizes unless someone actually needs the speed (I put 12 2TB SM863s in a machine two years ago for that reason), but as soon as they come down to about twice the price of HDDs the market will be all over them like a badly fitting shirt.

      What's keeping the price of NAND up at the moment is not technology but that demand is vastly outstripping supply. That won't last forever as new fabs come online.
