Primary Data's metadata engine 'speeds up' NAS, saves 'millions'. Leaps burning buildings, too?

Primary Data has updated its storage silo-converging DataSphere product, and says it’s ready for mainstream enterprise use, not just test and dev. The new release can save millions of dollars, handle billions of files and accelerate Isilon and NetApp filers. The v1.2 product gives enterprise customers, Primary Data says, the …

  1. TRT

    Those are quite some claims!

    I hope they are borne out in practice.

  2. jamesb2147

    From the company that brought you years of losses...

    ...comes the Brooklyn Bridge! Only $10k, people! Once in a lifetime opportunity!

    In short: I'll believe it when I see it. Before I bother taking the time, though, I'd love to see a review from you, Senor Mellor...

  3. Anonymous Coward

    Reminds me of a TV infomercial: we know you have several home training appliances in the garage that didn't work... but this one really works. It provides an abs-layer between your determination and your efforts.

    Why pin your knowledge workers and managers down on determining what you really need, when buying layers is so much more fun? :-)

  4. Androgynous Cow Herd

    One thing about those multi-petabyte customers...

    They are usually extremely eager to change platforms and workflows for new file service products, and they can absolutely turn on a dime to adopt a new solution for something that is not really a problem.

    </sarcasm>

  5. l8gravely

    Been there, done that. Failed

    I've been there and done this, using Acopia for NFS tiering of data. It's great until you run into problems with A) the sheer number of files blowing out the cache/OS on the nodes, and B) your backups becoming a horrible, horrible, horrible mess to implement.

    Say you have directory A/b/c/d on node 1, but A/b/c/e on node 2. How do you do backups effectively? You can either use the backend storage's block-level replication, or use NDMP to dump quickly to tape. But god help you when it comes time to restore... finding that one file across multiple varying backends is not an easy case.
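
    To make that concrete, here's a rough sketch of what "find that one file across the backends" ends up looking like. It's my own toy illustration, not anything Acopia or NDMP actually ships; the mount points and paths are made up:

        import os

        # Hypothetical mounts of each backend filer's export, as seen from
        # whatever box you're running the restore hunt on.
        BACKEND_MOUNTS = {
            "node1": "/mnt/backend1",
            "node2": "/mnt/backend2",
        }

        def find_file(relative_path):
            """Return every backend that actually holds a given virtual path."""
            hits = []
            for name, mount in BACKEND_MOUNTS.items():
                candidate = os.path.join(mount, relative_path)
                if os.path.exists(candidate):
                    hits.append((name, candidate))
            return hits

        # A/b/c/d might live on node1 while A/b/c/e lives on node2, so one
        # restore request can mean walking every backend's dump or snapshot.
        print(find_file("A/b/c/d"))

    Now imagine doing that against NDMP dumps sitting on tape instead of live mounts, and you can see why restores took forever.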

    And the link to cloud storage is a joke when they're talking terabytes, much less petabytes, of information.

    This is completely focused on scratch volumes, and on spreading the load from rendering and large compute farms that generate heavy IO loads of temp files. Once the data settles down, it's obviously moved elsewhere for real backup and retention.

    Then, once we kicked Acopia to the curb (don't get me wrong, it was magical when it worked well!) we moved to using CommVault's integrated HSM/backup solution, which worked better... but still not great. God help you if the index to the files got corrupted, or if the link between CommVault and the NetApp that intercepted accesses to files moved to tape or other cheap disk storage got delayed or just broke. Suddenly... crap took a while to restore by hand.
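
    For anyone who hasn't lived with HSM: the whole scheme hangs off stub files plus an index mapping them back to the real data, which is why a corrupted index or a broken intercept hurts so much. A toy sketch of the general idea (nothing to do with CommVault's actual internals; the file layout here is invented):

        import json
        import os
        import shutil

        def migrate(path, archive_dir):
            """Move the real data to cheap storage and leave a stub behind."""
            dest = os.path.join(archive_dir, os.path.basename(path))
            shutil.move(path, dest)
            with open(path, "w") as stub:
                json.dump({"stub": True, "archived_to": dest}, stub)

        def recall(path):
            """Read the stub and pull the data back; if the stub or the
            archive copy is wrong, you're restoring by hand."""
            with open(path) as stub:
                meta = json.load(stub)
            shutil.copy(meta["archived_to"], path)

    The real products intercept the access on the filer itself, but the failure mode is the same: lose the mapping and you've lost easy access to the data.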

    I've seriously come to believe that this is truly a hard problem to solve. Now if you can do what the people above are doing, targeting large scratch volumes where people want to keep stuff around for a while but don't care as much whether backups are done (or can spend the money to tier off to cheap, cheap, cheap disk)... maybe it will work.

    But once you have a requirement to go offsite with tape, you're screwed. Tape is still so cheap and understood and effective. Except for time-to-restore, which sucks.

    Too many companies believe that sending fulls offsite each night will magically give them DR when they only have one datacenter. Gah!!!

    Sorry, had to get that off my chest. Cheers!

    1. ptbbot

      Re: Been there, done that. Failed

      Ahhh, Acopia. Or rather: ARRRGH, the feckin' ARX has fallen over again.
