* Posts by Captain Dan

4 publicly visible posts • joined 6 Mar 2013

Do containers stack up as data storage building blocks?

Captain Dan

GlusterFS is already containerized

Gluster exists as a containerized implementation on top of Kubernetes, including management support via the Kubernetes storage API. The idea is to re-use the same orchestration layer the actual application uses, so storage eventually gets deployed as just another function alongside the application stack. That way no separate scheduling / high-availability / orchestration mechanism needs to be implemented for storage, and accessing data this way is potentially very fast, as it does not have to cross several layers of hypervisors and hyper-convergence.
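As a rough sketch of what that looks like from the application side, a pod or script can claim Gluster-backed storage through the very same Kubernetes API it uses for everything else. The StorageClass name "glusterfs", the namespace and the size below are assumptions for illustration, not a definitive setup:

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod

# Ask Kubernetes for a volume provisioned by the containerized Gluster layer.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],      # Gluster volumes can be shared across pods
        storage_class_name="glusterfs",      # assumed class backed by containerized Gluster
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)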

I believe this will be the most elegant form of containerized storage.

Attack of the clones: Oracle's latest Red Hat Linux lookalike arrives

Captain Dan

"They can keep their existing RHEL servers in place and simply switch to Oracle support, and they can install RPMs from Oracle's repositories on their RHEL servers without a hitch."

It's worth noting that - unless you want to pay for support twice - this would violate the all-or-nothing clause. If a RHEL system is out of support you need to shut it down, otherwise all of your other currently deployed RHEL instances fall out of support too. So no, you can't just leave the RHEL installation in place and switch to Oracle support.

One day later: EMC declares war on all-flash array, server flash card rivals

Captain Dan

Performance numbers

Interesting... just a while ago I was testing one of these 1.2TB ioDrive2 MLC cards in a 2P x86 box on Linux with fio... an 8k 70/30 mix at 32 outstanding I/Os yielded around 90k combined IOPS out of the box, not the 60k shown in the figure. And that was with an older version of the Fusion-io driver from 2012. Judging from the past, performance has increased with every driver release. Not that much of a gap anymore to the alleged 120k IOPS from XtremSF...
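For reference, roughly the kind of run described above, as a sketch (the device path, runtime and libaio engine are assumptions; the exact original command line isn't given here):

import json
import subprocess

# 8k blocks, 70/30 read/write mix, 32 outstanding I/Os, direct I/O.
cmd = [
    "fio",
    "--name=iodrive2-mixed",
    "--filename=/dev/fioa",      # assumed ioDrive2 block device
    "--ioengine=libaio",
    "--direct=1",
    "--rw=randrw",
    "--rwmixread=70",            # 70% reads / 30% writes
    "--bs=8k",
    "--iodepth=32",              # 32 outstanding I/Os
    "--runtime=60",
    "--time_based",
    "--output-format=json",
]

result = json.loads(subprocess.run(cmd, capture_output=True, text=True).stdout)
job = result["jobs"][0]
print("combined IOPS:", round(job["read"]["iops"] + job["write"]["iops"]))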

So everyone's piling into PCIe flash: Here's a who's who guide

Captain Dan

Just a word on EMC vs Fusion-io...

Just a couple of words on the first one: EMC is very late with this. As a storage-only vendor they are competing against the storage portfolios of ALL the major server makers except Cisco (well, actually Cisco is not a major server vendor, and that partnership is starting to show some cracks), so it's very questionable whether the Dells, HPs, IBMs or Fujitsus of this world will ever officially support XtremSF cards in their boxes. The same goes for XtremSW. Customers really don't like messy support situations where something fails in a box and vendors start finger-pointing.

XtremIO is very interesting though, and an easier sell to EMC's established audience and installed base.

However, the XtremIO architecture has not changed since the company was still a startup: x86 boxes tied together with InfiniBand, virtualizing direct-attached SSDs? This is not how you leverage the density and energy-consumption advantages of NAND. Look at RamSan or Fusion-io ION, where the same or double the performance is delivered in an equal or smaller footprint.

Thin provisioning, dedupe and snapshots are very good arguments for the traditional storage guy. But if you look at where AFAs are actually sold today, these are not a must. You don't need any of that for the shared redo logs in a RAC cluster, for instance, nor would it make sense there.

Most of this stuff is sold to companies that either fit that scenario or run very sophisticated software that intrinsically provides these features already. Fusion-io is selling mostly to that kind of hyperscale and innovative shop today, and EMC will have a hard time attacking their position. The Fusion guys are way ahead of them, offering memory-like access APIs to their storage plus specialised I/O stacks and filesystems, where EMC is just supplying a sheet of metal and NAND.

It was quite funny to watch yesterday when EMC was beating a dead horse about supplying 200k vs. 120k IOPS against brand F, while brand F has shown well over 9M 'IOPS' (yes, that's an M for million) with the ioDrive acting as a memory tier. Imagine that being used in modern hyperscale software like Hadoop or database engines and you see this is not quite playing in the same ballpark.
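To give a feel for what "memory-like access" means in practice, here is a generic memory-mapping sketch - not Fusion-io's actual API, and the file path and record layout are made up for the example. The point is that a record gets pulled straight out of a mapped region with load/store semantics instead of going through the block I/O path:

import mmap
import os
import struct

RECORD = struct.Struct("<q32s")  # 8-byte key + 32-byte payload, example layout only

fd = os.open("/mnt/flash/records.dat", os.O_RDWR)   # assumed file living on the flash device
view = mmap.mmap(fd, os.fstat(fd).st_size)

# Load-style access: fetch record #1000 directly from the mapping,
# no read() syscall and no application-managed buffer in between.
offset = 1000 * RECORD.size
key, payload = RECORD.unpack_from(view, offset)
print(key, payload.rstrip(b"\x00"))

view.close()
os.close(fd)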

Just my 2 cents