Re: Is the world ready for a 30TB Failure Domain?
Erasure coding has its place for large devices because the larger transfers that come with larger disks raise the risk of glitches: silent corruptions, like double-bit flips, that still pass on-the-fly checks such as simple parity. With erasure codes in place, you can correct those glitches after the fact.
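To make the "passes parity" point concrete, here's a toy sketch of my own (not from the original post): a single even-parity bit catches any odd number of flipped bits, but a double-bit flip sails right through.

```python
def parity_bit(data: bytes) -> int:
    """Even parity over all bits of the payload: 0 if the popcount is even."""
    total = sum(bin(b).count("1") for b in data)
    return total % 2

original = bytes([0b10110010, 0b01101100])
stored_parity = parity_bit(original)

# Flip two bits (a "double-bit flip") in the first byte.
corrupted = bytes([original[0] ^ 0b00000110]) + original[1:]

# The corruption changed the data but not the parity, so the check passes.
print(parity_bit(corrupted) == stored_parity)  # True: silent corruption
```

An erasure code like Reed-Solomon keeps enough extra information to not only detect such a flip but reconstruct the original bytes.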
Now, for whole-device (i.e. controller) failures, yes, you need redundancy. But also recall that reconstruction time scales with transfer rate, and one thing SSDs have in spades over rust is transfer rate, especially over four lanes of PCI Express. That greatly reduces rebuild time, which in turn reduces the risk of a second failure during the vulnerable reconstruction window. Perhaps because of those faster rebuilds you can get away with just two backups where you would've needed three with rust. Besides, at some point you have to say enough is enough: if a major event nails, say, FOUR of your devices at once (AND maybe even all your backups, including the offsite one; think a major earthquake), you're into Act of God (aka Crap Happens) territory, where all you can do is pray.
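A quick back-of-the-envelope on that rebuild window (the rates below are my own illustrative assumptions, not measurements from any particular drive):

```python
TB = 10**12  # decimal terabyte, as drive vendors count

def rebuild_hours(capacity_bytes: int, rate_bytes_per_s: float) -> float:
    """Best-case sequential rebuild time, assuming the link is the bottleneck."""
    return capacity_bytes / rate_bytes_per_s / 3600

hdd_rate = 250e6   # ~250 MB/s sustained, typical for a large HDD ("rust")
nvme_rate = 6e9    # ~6 GB/s, ballpark for a Gen4 x4 NVMe SSD

print(f"HDD : {rebuild_hours(30 * TB, hdd_rate):.1f} h")   # roughly 33 h
print(f"NVMe: {rebuild_hours(30 * TB, nvme_rate):.1f} h")  # roughly 1.4 h
```

More than a day of exposure versus under two hours: that's the gap that might let you drop from three backups to two.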
That's why I use BOTH strategies, though at a smaller scale (because the data I'm backing up is less critical): two complete copies, each with its own PAR2 set. The PAR2 files provide the erasure codes that deal with glitches, while the second copy (normally kept offline to reduce wear; the two are rotated periodically) provides a failsafe in case one goes kaput.
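For anyone wanting to try the PAR2 half of this, here's a minimal sketch. The `create` and `verify` subcommands are real par2cmdline commands, as is the `-rN` redundancy option; the filenames and the 10% figure are just my example choices.

```python
def par2_create_cmd(archive: str, redundancy_pct: int = 10) -> list[str]:
    """Command line to build a PAR2 recovery set covering `archive`."""
    return ["par2", "create", f"-r{redundancy_pct}", f"{archive}.par2", archive]

def par2_verify_cmd(archive: str) -> list[str]:
    """Command line to check `archive` against its PAR2 set."""
    return ["par2", "verify", f"{archive}.par2"]

print(par2_create_cmd("backup-2024.tar"))
print(par2_verify_cmd("backup-2024.tar"))
# Run with e.g. subprocess.run(par2_create_cmd("backup-2024.tar"), check=True);
# if verify reports damage, "par2 repair <archive>.par2" uses the recovery
# blocks to reconstruct the file.
```

The 10% redundancy means the set can repair up to 10% of the archive's blocks, which is plenty for scattered glitches; whole-copy loss is what the second, offline copy is for.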