Micron wheels out 'highest density' SATA SSD on the market

Micron has introduced a 5100 line of data centre flash drives and promises it will be the highest-density SATA SSD on the market. There are three models; all use TLC (3 bits/cell) 3D flash and have a 6Gbps SATA interface:

5100 ECO – 480GB, 960GB, 1.9TB, 3.84TB and (coming) 7.68TB
5100 PRO – 240GB, 480GB, 960GB, 3. …

  1. defiler

    Very nice, but no SAS?

    At these speeds / densities / prices I'd have expected dual-port SAS. Maybe that's just me being greedy.

    1. Ian Michael Gumby

      Re: Very nice, but no SAS?

      Why SAS?

      NVMe or the M.2 interface is going to be much faster.

      Now if they can get 7.2TB in that small a form factor, I'm in.

      1. Sandtitz Silver badge

        Re: Very nice, but no SAS? @Mr Gumby

        "Why SAS?"

        Dual-port SAS would be nice for redundancy in the server room. SAS would also offer 12Gbit speeds - double that of SATA. SATA has been a bottleneck for SSDs for years. (And so are SAS and even NVMe.)

        "NvME or the M.2 interface is going to be much faster."

        Yes, NVMe can be several times faster than SATA/SAS.

        M.2 is an interface standard, but the drive can still be SATA. M.2 SATA drives are just as fast as regular SATA drives.

        M.2 can't be hot-swapped at all, and NVMe hot swapping (or hot adding) is in its infancy.
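
        To put rough numbers on the interface comparison above, here is a back-of-envelope sketch (an illustration only: it assumes the usual published line rates and encoding overheads - 8b/10b for SATA 3 and SAS 3, 128b/130b for PCIe 3.0 - not figures from Micron):

          # Rough usable-throughput comparison of the interfaces discussed above.
          def usable_mb_per_s(line_rate_gbit, encoding_efficiency):
              """Convert a raw line rate (Gbit/s) to usable MB/s after encoding overhead."""
              return line_rate_gbit * 1000 / 8 * encoding_efficiency

          sata3   = usable_mb_per_s(6, 8 / 10)          # SATA 6Gb/s, 8b/10b  -> ~600 MB/s
          sas3    = usable_mb_per_s(12, 8 / 10)         # SAS 12Gb/s, 8b/10b  -> ~1200 MB/s
          nvme_x4 = 4 * usable_mb_per_s(8, 128 / 130)   # PCIe 3.0 x4 NVMe    -> ~3940 MB/s

          print(f"SATA 6Gb/s:      ~{sata3:.0f} MB/s")
          print(f"SAS 12Gb/s:      ~{sas3:.0f} MB/s")
          print(f"NVMe (PCIe3 x4): ~{nvme_x4:.0f} MB/s")

        Which is roughly why even 12Gb/s SAS still caps what a modern flash controller can push, while a PCIe/NVMe link raises that ceiling considerably.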

        1. Ian Michael Gumby

          @ Sand titz (Silicone sacks?) Re: Very nice, but no SAS? @Mr Gumby

          Even at 12Gb/s you're still slower than what the underlying SSD can push.

          Yes, M.2 isn't hot-swappable, and for the most part not many server motherboards have an M.2 slot.

          But let's rethink our servers.

          Are we talking about building out an Iceberg (storage array), or are we talking about fast compute/response? Virtual machines? High density (blades)?

          If you're looking forward... you would consider building a blade that has more DIMM slots and hopefully more than one M.2 slot, so that you could take advantage of NVMe and XPoint (or equivalent) tech along with DRAM. This is short of a custom ASIC/flash combo for warp-speed access, which is at a price point only Google/FB and governments can afford. But for COTS, you could create blades that take advantage of M.2 for local app storage.

          You could also throw in a SAN for longer-term storage.

          I mean there's more to the design and the options, but in all of them SAS is less of a requirement because it's too slow.

          The key here is being able to redesign motherboards, increase density on the chips, and get XPoint, memristors, etc. to work.

  2. Lee D Silver badge

    Tell me again - why is anyone bothering to make hard drives any more?

    1. Anonymous Coward
      Anonymous Coward

      > Tell me again - why is anyone bothering to make hard drives any more?

      Because hard drives are still 1/10th of the cost of SSD (per TB).

      Huge amounts of data are written once and accessed infrequently or never - from backups to videos of cats. It is not economical to store them on SSD.

    2. N13L5

      Re: Tell me again - why is anyone bothering to make hard drives any more?

      I don't know about everybody else's reasons, but they are far more reliable as backup drives to put into your bank deposit box.

      An SSD you don't continually use has a strange propensity to quit. I forget the reason that was given for this, but I've lost two SSDs just from being away for a year: when I tried to start up my desktop on my return, the SSD was dead. Twice, with different brands - one OCZ and one Samsung.

      My HDD backup drives have lived forever so far. As long as you only start them to do a backup (or to self-refresh the drive), they've literally had a 0.0% failure rate, and I have some drives from the '90s still sitting as a backup of a backup of older data :)

  3. jms222

    Aren't these things all much of a muchness now? At least until somebody does realistic benchmarks, such as when you've mostly filled the thing.

    At least physical disks have completely predictable performance when full, which is one reason you should adjust the stated capacity when comparing them to SSDs.

    1. Lee D Silver badge

      SSDs have entirely predictable performance throughout their lives.

      You can even schedule when blocks are culled (TRIM) if you want.

      Physical disks, however, have physical characteristics that make them inherently less predictable - they can literally "skip" a read and have to wait for the disk to spin around again, even under normal operation.

      SSDs, in general use, are O(1) devices - request a block, it appears in constant time. Request a file of n blocks and it takes O(n) time to return. Hard disks - that's NOT true.

      When they are full, SSDs are, if anything, even more predictable. Hard drives fragment REALLY quickly when full, and each fragment adds more seek time depending on how many fragments there are and where they sit. SSDs? O(n) time again, whether the file is in a billion pieces or one.

      And SSDs are kicking physical disks' backsides in every metric but price lately. Even longevity is no longer an issue, except on exceptionally high-write workloads, and even there, there are special types made for that.

      But SSDs are - if anything - MORE predictable than disks. If for no other reason than they don't have a single component that works in any way differently depending on temperature or shock applied to the device.
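
      To make the O(1)-versus-seek argument above concrete, here is a toy latency model (the figures are assumed for illustration only, not measurements of any particular drive):

        # Toy model: an SSD read costs a roughly constant time per request,
        # while an HDD pays a seek plus rotational delay for every fragment
        # it has to jump to.
        AVG_SEEK_MS = 8.5                      # assumed average seek, 7200rpm-class drive
        HALF_ROTATION_MS = 60_000 / 7200 / 2   # average rotational latency, ~4.2 ms
        SSD_REQUEST_MS = 0.1                   # assumed constant per-request latency

        def hdd_read_ms(fragments):
            return fragments * (AVG_SEEK_MS + HALF_ROTATION_MS)

        def ssd_read_ms(requests):
            return requests * SSD_REQUEST_MS   # O(n) in requests, no seek penalty

        for n in (1, 10, 100):
            print(f"{n:>3} fragments: HDD ~{hdd_read_ms(n):.0f} ms, SSD ~{ssd_read_ms(n):.1f} ms")

      The gap is entirely down to the mechanical terms; the SSD column scales with the number of requests and nothing else.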

  4. Anonymous Coward
    Anonymous Coward

    > SSDs have entirely predictable performance throughout their lives.

    Possibly true, but the other thing which matters to people is failures. SSDs and hard drives both fail, but in different ways. And there's no substitute for real-world measurements:

    http://www.zdnet.com/article/ssd-reliability-in-the-real-world-googles-experience/

    1. Lee D Silver badge

      Drive failures are, by definition, failures.

      SSDs and HDDs both fail in detectable ways (i.e. they fail checksums on a RAID or filesystem, or they simply stop working). That's all you need to care about because, from that point on, you need to replace them. You, quite literally, cannot reliably predict failures in either model, as any number of studies have shown.

      As the article itself says: The SSD is less likely to fail during its normal life, but more likely to lose data.

      It doesn't matter. Because ANY failure in any sufficiently necessary system is basically treated like total failure and the device replaced.

  5. jms222

    Ignoring what your filesystem layer does, if you write a bunch of blocks in a particular area of a disk, that's where they will be. Reading time is a predictable seek time plus some rotation.

    Do the same on an SSD and they could be physically sprinkled about. Opening lots of blocks on NAND doesn't take zero time, though it is small. The only way of getting the free space back is to relocate the contents of WHOLE erase blocks (which are typically 128k). How long this actually takes is complicated, but it also involves moving the data that shares the same erase blocks.
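
    As a rough sketch of that relocation cost (assuming the 128k erase blocks mentioned above and, hypothetically, 4k pages), the number of pages that must be copied out before a block can be erased - and hence the write amplification - climbs steeply as more of the block is still valid:

      # Sketch: to reclaim free space, every still-valid page in an erase block
      # must be rewritten elsewhere before the block can be erased.
      ERASE_BLOCK_KB = 128
      PAGE_KB = 4                                   # hypothetical page size
      PAGES_PER_BLOCK = ERASE_BLOCK_KB // PAGE_KB   # 32 pages

      def write_amplification(valid_pages):
          """Total pages written per page of space actually reclaimed."""
          freed = PAGES_PER_BLOCK - valid_pages
          return PAGES_PER_BLOCK / freed if freed else float("inf")

      for valid in (0, 16, 28, 31):
          print(f"{valid:>2} valid pages: copy {valid} pages first, "
                f"write amplification ~{write_amplification(valid):.1f}x")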

    "and in a dangerous way" is what Google also said.

    Now the UBER (uncorrectable bit error rate) situation is more interesting. The current policy of taking entire devices out can't go on, and we need to make better use of tolerant layers on top, such as the ZFS filesystem, which can correct and heal errors (up to a point, of course).

  6. Disk0
    Joke

    I'll have two!

    ..for my laptop, and two for my other laptop, and two for the missus' laptop, and two for the kids' laptops, and two for the media player laptop, and two for backups, and then finally we can download the Entire Internet. Is that so wrong?
