Did EMC buy XtremIO to fend off NetApp?

This topic was created by Chris Mellor.

  1. Chris Mellor

    Did EMC buy XtremIO to fend off NetApp?

    Regarding EMC's XtremIO acquisition, Piper Jaffray analyst Andrew Nowinski reckons: "I think many investors assumed the acquisition was targeted at Fusion-io. In my opinion, EMC views Fusion-io as a mosquito they could squash at any time and have no fears of losses to them.

    In the large and mid-size enterprise market, EMC has a two-headed monster. The VNX is a “scale-up” architecture that offers good database performance, good data management functionality and can be used in VMware environments, though it is complex to manage as there are still two underlying operating systems. They also offer Isilon as a “scale-out” architecture. The problem is that Isilon offers no data management functionality, poor performance, and block sizes that are simply too large to run in a VMware environment.

    NetApp’s 8.1 essentially combines these two platforms and offers a scale-out architecture, good performance via PAM cards and great data management functionality, all in the same, easy-to-manage OS.

    Rather than combine Isilon and VNX somehow, EMC acquired XtremIO. XtremIO offers scale-out, great data management and great performance. In fact, their subsystem was built specifically for flash, whereas flash was an afterthought for NetApp (they still leverage an HDD-optimized subsystem)."

    What do you think?

  2. MDebelic

    Isilon slow?

    According to Mr Nowinski, Isilon has "poor performance". Their storage can stream multiple TV-broadcast-quality video files, but that's still - slow? I am curious to learn what is "fast" according to the financial community? ;)

    1. Anonymous Coward

      Re: Isilon slow?

      Yes, for random workloads (VMs, DBs, etc.), as opposed to video streaming, which is mostly sequential I/O.

      1. InsaneGeek
        Meh

        Re: Isilon slow?

        The Isilon has two series: the X series geared towards sequential video streams (like NetApp's LSI arrays) and the S series geared towards random I/O.

        From the SPEC SFS benchmark, which does test a good chunk of random workload: the Isilon S200 series hits 1.6 million and the largest 24-node NetApp 8.1 cluster hits 1.5 million. So I'd say that fully populated, either platform can do an "oh my god" level of random I/O at an "oh my god" price point.

        1. dikrek
          Megaphone

          Re: Isilon slow?

          No, the highest published Isilon result is 1,112,705 SPEC SFS 2008 NFS ops.

          Not 1.6 million.

          Linky:

          http://bit.ly/s1IFH6

    2. Anonymous Coward

      Re: Isilon slow?

      Even EMC's reps say that Isilon is not currently a good choice for serving VMware because of high latency, although they expect to fix that Real Soon Now.

  3. Nate Amsden

    NetApp scale out is not scale out

    their "cluster" is little more than a hack it seems like. I drilled what I think was a NetApp employee on some of the finer points of their cluster (given I have not used it, and there seems to be a lot of hype about the release) and was kind of surprised and dissapointed by the results --

    http://datacenterdude.com/netapp/netapp-dataontap-81-reponse/

    I don't know about Isilon's performance; perhaps they are 'slow' in IOPS, but it really seems the system is built for throughput rather than IOPS. I agree it doesn't seem like an ideal platform to run VMware directly on top of. I assume (hope) that EMC didn't buy Isilon for that market segment. It makes a lot more sense to have Isilon as a scale-out NAS where you put your data (directly accessing it via guest OS-based NFS), versus your VM images, which you put on more traditional storage like a VNX or V-MAX or whatever. The amount of data for the images usually pales in comparison to the amount of data the applications are using, by orders of magnitude.

    The impression I get is that NetApp continues to bolt stuff onto their system, which just makes it more complicated, instead of really addressing the core issues of scalability and scale-out. They probably just have too much invested in the current system to be able to really, truly fix it (much like Cisco). But it seems NetApp is still years away from having what most would consider a real cluster, if they ever get there.

    Now how they are able to market the thing and get customers to buy into it is another matter. It wouldn't surprise me if they can sell a few more systems with this, but from a purely technical point of view, as a cluster - NetApp isn't there yet.

    Look no further than the lack of ability to stripe a volume across more than a single controller node, even in cluster mode. I mean, come on. Take NetApp's 24-node SPEC SFS results that they released around the time 8.1 came out: you're basically having to MANUALLY manage 48 different storage systems (because a volume can live on only one controller). If you have a perfectly optimized workload like SPEC SFS you can distribute your data over everything, but if you're a more traditional user I can imagine it keeping the administrator up at night (unless they are massively over-provisioned), because the system can't even automatically move a volume to another controller in the event I/O goes up. And even if it did (Compellent has this ability), there is quite a large overhead in moving TBs of data around between systems. A real cluster should be balanced from the get-go, and able to move finer units of storage around - e.g. sub-LUN auto-tiering between arrays.
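
    To make that concrete, here's a toy sketch (made-up numbers, nothing NetApp-specific): with volume-granular placement, one hot volume saturates a single controller no matter how many nodes the cluster has, whereas striping would spread the same load evenly.

    ```python
    # Toy model with made-up numbers: volume-granular placement vs. striping.
    # A volume lives on exactly one controller, so a hot volume can only be
    # rebalanced by moving the whole volume (and its TBs of data) elsewhere.
    import random

    random.seed(1)
    NODES = 24
    volumes = {f"vol{i}": random.randrange(NODES) for i in range(48)}
    iops = {vol: 1_000 for vol in volumes}
    iops["vol7"] = 200_000  # one workload spikes

    # Volume-granular placement: the spike lands entirely on one node.
    load = [0] * NODES
    for vol, node in volumes.items():
        load[node] += iops[vol]
    print("hottest node, volume-granular:", max(load))  # ~200,000+ IOPS

    # If volumes could stripe across all nodes, the load would even out.
    print("per-node load if striped     :", sum(iops.values()) // NODES)  # ~10,000
    ```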

    And as you might expect, that data management stuff you speak of (of which I bet part is deduplication) doesn't apply across cluster nodes either. So are you now going to try to optimize deduplication by moving volumes with like data onto subsets of your cluster? Manually?

    For all of the hype this release seems to have, if I were a customer I would feel let down - this is all they get after so many years of trying to integrate that Spinnaker stuff?

    Also, if XtremIO is built from the ground up to be flash-based, I don't see why it would compete with a hybrid NetApp PAM/HDD system - unless EMC wants to try to integrate XtremIO into their existing line-up, versus keeping it as a standalone product (would that take them many years, like it took NetApp with Spinnaker?). Perhaps the acquisition was to fend off the likes of Violin (maybe XtremIO came at a much better price)? I don't know; I'm not too familiar with that market space.

    I just really don't see, from a technical standpoint, how XtremIO vs NetApp stacks up; they appear to be two completely different approaches to solving different problems. Though that won't stop salespeople from using even more force to fit square pegs into round holes.

    1. schubb
      Pint

      Re: NetApp scale out is not scale out

      I would make sure you are talking to a NetApp employee. Whoever you talked to doesn't know that 8.0 7-Mode is scale-up, whereas 8.0 Cluster-Mode is designed for scale-out, and you can run either on the same hardware platform. Try running DART on a Symmetrix, or FLARE on a data mover... try running a VNX without either of these (sorry, Unisphere does not count; it is a shell that glues the two together, and if things go bad there, you're right back to DART and Navisphere)... VNX is simply Celerra 2.0, and I dreaded dealing with Celerras.

      From the ONTAP 8.1 Cluster-Mode documentation:

      "Each node in the cluster can access the same volumes as any other node in the cluster. The total file-system namespace, which comprises all of the volumes and their resultant paths, is global across the cluster."

      You manage one point, not a bunch of individual systems... You are describing ONTAP 8.0 7-Mode, not Cluster-Mode.
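
      For what it's worth, here's a minimal sketch of the distinction being argued (my own hypothetical model, not ONTAP code): clients see one global namespace and one management point, while each volume underneath is still served by a single node.

      ```python
      # Hypothetical sketch (not ONTAP code): one global namespace whose
      # volumes are junctioned into a single tree, while each volume is
      # still owned and served by exactly one node.
      namespace = {
          "/vol/home":     "node1",
          "/vol/projects": "node3",
          "/vol/scratch":  "node7",
      }

      def resolve(path):
          """Any node can answer a request: map the path to the owning node."""
          for junction, owner in sorted(namespace.items(),
                                        key=lambda kv: len(kv[0]), reverse=True):
              if path.startswith(junction):
                  return owner
          raise FileNotFoundError(path)

      # One tree and one management point for clients (schubb's point)...
      print(resolve("/vol/projects/report.doc"))  # -> node3
      # ...but a given volume's data is still served by exactly one
      # controller (Nate's point about volume-granular placement).
      ```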

      1. Nate Amsden

        Re: NetApp scale out is not scale out

        Well, I didn't ask him directly, but the blog, now that I checked, says he is a "Virtualization Solutions Architect for NetApp"... and he seemed to have a lot of knowledge of the platform. There was a discussion in the link's comments about spanning volumes across controllers, and there was a specific comment that the feature was removed in 8.0 (not even 8.1, but 8.0).

        The cluster mode I was describing was the hypervisor analogy the guy was using, where you can move volumes between controllers in a more transparent/easy fashion. The main points of automatic cluster load balancing, data distribution, etc. don't seem to be present in the 8.1 cluster mode. If you haven't read his posts I suggest you do; if they are incorrect you should help him fix the information, for NetApp's own sake (assuming you care, of course!).

        He even covered the ability to move volumes between arrays pre-8.1 cluster mode, which I specifically asked him about. I also asked him whether or not there was a 3PAR-like persistent cache feature, which would allow one set of nodes to mirror cache for another pair of nodes that was missing a node due to software/hardware failure or change. Though I didn't mention 3PAR by name (despite my 3PAR background I do try to stay neutral when possible), this capability should be possible when dealing with a real cluster.

        Another line of thinking, around cluster mode and data movement and PAM specifically: last I recall, the PAM/Flash Cache/whatever it's called today was not mirrored in any way. So if you happen to move a volume that is heavily benefiting from the flash cache from one controller to another (or one array to another), you may take a serious performance hit until the flash on the destination controller can be primed for that volume.
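
        To illustrate the priming concern, here's a generic cold-cache model (assumed behaviour, not NetApp's actual Flash Cache internals): after the move, the destination's cache starts empty, and the read hit rate only recovers as the working set gets re-read.

        ```python
        # Generic cold-cache warm-up model (made up, not NetApp internals):
        # after a volume move, the destination's read cache starts empty,
        # so reads miss and go to disk until the working set is re-read.
        import random

        random.seed(1)
        WORKING_SET = 10_000   # hot blocks belonging to the moved volume
        cache = set()          # destination controller: cold after the move

        hits = 0
        for i in range(1, 100_001):
            block = random.randrange(WORKING_SET)
            if block in cache:
                hits += 1
            else:
                cache.add(block)  # prime the cache on each miss
            if i % 25_000 == 0:
                print(f"after {i:>7,} reads: hit rate {hits / i:.0%}")
        # The hit rate climbs from ~0% back toward ~100% only as the
        # destination cache re-primes -- the performance hit described above.
        ```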

        It sounds like an improvement over the traditional 7-mode but still seems to be a far cry from a real cluster.

        The blog posts were informative, technical, and honest; he seems like a straight shooter. Not a lot of those folks out there these days.

  4. dikrek
    Stop

    Apples and Oranges

    Hi all, Dimitris from NetApp here.

    @Nate:

    Indeed, ONTAP running in Cluster-Mode and Isilon aren't really designed in similar ways, as I'm explaining towards the end of the article here: http://bit.ly/uuK8tG

    Isilon will be strong for high throughput, yet there are other ways to get much higher throughput: http://bit.ly/zqqJ3G

    Ultimately, there is no single system that "does it all" - meaning one that gets you all the protocols, and dedupe, and fancy app integration, and fancy snaps, and fancy replication, and can stripe a volume across umpteen controllers at low latency for random I/O.

    If you want to run DBs or VMs at low latency and take advantage of all the cool features mentioned above and be able to get some flexibility with the cluster for migrations and upgrades at zero downtime, you'd be better off using ONTAP Cluster-Mode than Isilon, SONAS, etc.

    If you have a workload that needs to be in a single gigantic container bigger than 100TB, with extremely high throughput requirements (not IOPS), and you can't break that workload up into chunks smaller than 100TB, then, for now, there are alternative solutions.

    Remember that ONTAP Cluster-Mode is designed to be used in general-purpose scenarios.

    D

    1. Nate Amsden

      Re: Apples and Oranges

      Yeah, I agree completely that NetApp is a better platform for DBs or VMs at low latency than the likes of Isilon or SONAS - I would hope that wouldn't be a hard question to answer, without even having to do any POCs or anything like that. Isilon and SONAS are more for unstructured stuff.

      Even with cluster mode, NetApp is not my storage platform of choice (especially when my current workload is 90% write), but it's still a good product for a bunch of folks out there; I have a hard time knocking it, unlike others such as EqualLogic.

    2. Nate Amsden

      Re: Apples and Oranges

      Just read over your post on the 1TB/sec. Sure, you can get that kind of throughput with a cluster file system and scale-out nodes; I was sort of expecting a more traditional NetApp approach, rather than taking the LSI stuff you acquired and putting Lustre on top of it, which is of course what many folks have been doing for a while as well.

      Reminds me of this

      http://www.technologyreview.com/computing/38440/page2/

      (IBM building a GPFS-based system with 200,000 disks and 120PB)

      Those sorts of systems are obviously more complex to manage (for a traditional enterprise-type environment) than a cluster that presents its data to the clients over a more standard protocol like NFS/iSCSI/FC. But at the same time, the typical traditional enterprise-type environment doesn't need throughput reaching hundreds of gigabytes/second anyway.

      As you say, everything has its trade-offs; right tool for the job, right?

  5. Anonymous Coward

    NetApp has another ace

    NetApp's underlying WAFL file system fits rather well with how flash storage should be utilized. So while EMC needed to acquire XtremIO to compete with NetApp on flash, NetApp lucked out with WAFL. WAFL could still be optimized and tailored for flash (possibly by simplification), but it is pretty close to what you want to begin with.

    1. InsaneGeek
      WTF?

      Re: NetApp has another ace

      What??? Pretty much everybody in the storage industry is saying the exact opposite: NetApp doesn't have a flash/SSD solution and EMC does. Just look at the blogs: NetApp has been talking about how they don't need a flash/SSD solution because their PAM cards are all that's needed... but they are now changing that tune. In fact, EMC already has an all-SSD array shipping that you can purchase. Rather than saying EMC needed it to compete with NetApp, the reality is more that EMC purchased it to keep NetApp from competing with them, delaying NetApp's entry into a market EMC is already in and shipping a product for. You gotta have deep, deep pockets, but the fact is they have a product and NetApp doesn't.

      The way WAFL works is that it always does full RAID-stripe writes, to avoid the penalty of having to read in existing data from disk. The entire point of WAFL is to avoid the partial-stripe-write latency of rotating rust; flash removes the rotating-disk latency, so the value of WAFL on flash becomes pretty much nil. If you have enough cache in the array, any array can do full-stripe writes just like NetApp (that cache might have to be infinitely large, though); WAFL just allows it to write "anywhere" in an aggregate, so blocks generally are not sequentially close to each other. If you go to an advanced NetApp performance class, the instructor (at least ours did) will directly say that NetApps are designed for high write performance, not read performance, because of this. I've found with our NetApps that reads generally run at the speed of a drive seek; for a random read workload that doesn't matter much, but if your workload has any sequential component you can drive-seek yourself into very slow performance.
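
      A back-of-the-envelope model of that seek argument (all numbers assumed, nothing vendor-specific): a sequential read pays roughly one HDD seek per physical fragment, so a write-anywhere layout can turn a streaming read into thousands of seeks, while the same layout on flash costs almost nothing.

      ```python
      # Back-of-the-envelope model (assumed numbers) of seek amplification:
      # a sequential read pays ~1 HDD seek per physical fragment.
      BLOCKS = 10_000   # 4 KiB blocks in the file (~40 MiB)
      SEEK_MS = 8.0     # assumed average seek + rotational delay on HDD
      XFER_MS = 0.04    # assumed per-block transfer time

      def read_time_ms(fragments):
          return fragments * SEEK_MS + BLOCKS * XFER_MS

      print("contiguous layout:", read_time_ms(1), "ms")       # ~0.4 s
      print("fully fragmented :", read_time_ms(BLOCKS), "ms")  # ~80 s
      # On flash, SEEK_MS is effectively zero, so the same fragmented
      # layout costs almost nothing -- the trade-off changes on SSD.
      ```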

      1. Anonymous Coward

        Re: NetApp has another ace

        NetApp not having an SSD solution is a flawed management decision, not a technical problem in the file system's underpinnings.

        As you said: "NetApps are designed for high write performance not read performance because of this" - yes, in the rotating-disk world. In random-access flash, the distribution of blocks all over the disks is no longer problematic for reads, so you get fast writes and fast reads.

        WAFL writes could be simplified by relaxing the write constraints that are mindful of rotating disks, but those constraints do provide a nice form of inherent wear leveling, which could reduce the need for firmware wear leveling on the SSDs themselves. That would improve write latency (this is somewhat speculative on my part, but I think it's a real possibility that NetApp would be foolish not to investigate thoroughly).
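
        A toy model of that wear-leveling intuition (speculative, as said above; a generic model, not WAFL internals): write-in-place keeps erasing the same few flash blocks under hot data, while a write-anywhere allocator spreads the rewrites across all free space.

        ```python
        # Speculative toy model (generic, not WAFL internals): compare how
        # rewrites of a few hot logical blocks wear the flash underneath.
        import itertools, random

        random.seed(1)
        FLASH_BLOCKS = 1_000
        HOT_LBAS = 50   # a handful of logical blocks rewritten constantly
        writes = [random.randrange(HOT_LBAS) for _ in range(100_000)]

        in_place = [0] * FLASH_BLOCKS   # LBA n always maps to block n
        for lba in writes:
            in_place[lba] += 1

        anywhere = [0] * FLASH_BLOCKS   # each rewrite goes to the next free block
        cursor = itertools.cycle(range(FLASH_BLOCKS))
        for _ in writes:
            anywhere[next(cursor)] += 1

        print("worst-case erases, write-in-place :", max(in_place))  # ~2,000
        print("worst-case erases, write-anywhere :", max(anywhere))  # ~100
        ```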

        1. J.T

          Re: NetApp has another ace

          They can't afford to "relax" their writes; they are already randomizing even sequential data, leading to read performance loss.

          1. dikrek
            Stop

            Everyone is a WAFL expert all of a sudden... :)

            J.T. - I don't think you quite understand how ONTAP works. It doesn't write randomly - it actually takes great care in meticulously selecting where writes will go. When writing to flash, we don't need to be quite as meticulous. But for HDD it helps a lot.

            In addition, there are important technologies like read reallocation, which will sequentialize, upon read, data that was written randomly.

            At a block level.

            Amazing for databases, where frequently people will write stuff randomly into the DB, and then there will be a job that needs to read the data sequentially (sequential read after random write).

            I'm not aware of any other disk array that will do this optimization for the end user and leave the blocks optimized for the future (this has nothing to do with caching and read-ahead).
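
            As a rough sketch of what's being described (a simplified model of mine, not ONTAP's actual implementation): when a sequential logical read finds its blocks physically scattered, rewrite them contiguously so the next sequential read needs no seeks.

            ```python
            # Simplified sketch of the read-reallocate idea (my model, not
            # ONTAP's implementation): fix fragmentation on the way out.
            def is_fragmented(physical):
                """True if consecutive logical blocks aren't physically adjacent."""
                return any(b - a != 1 for a, b in zip(physical, physical[1:]))

            def sequential_read(block_map, lbas, next_free):
                physical = [block_map[lba] for lba in lbas]
                if is_fragmented(physical):
                    for i, lba in enumerate(lbas):  # rewrite contiguously
                        block_map[lba] = next_free + i
                return [block_map[lba] for lba in lbas]

            # LBAs 0-4 were written randomly, then a job reads them in order.
            block_map = {0: 812, 1: 17, 2: 403, 3: 998, 4: 55}
            sequential_read(block_map, [0, 1, 2, 3, 4], next_free=2000)
            print(block_map)  # now contiguous at 2000+, optimized for reruns
            ```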

            Not to mention insane new stuff coming in the next few months.

            Unfortunately, way too many people think ONTAP is still where it was 20 years ago. Or maybe fortunately, in the case of competitors, since it's so easy to discredit them... :)

            The write allocator has been rewritten multiple times in the last 20 years :)

            Not to mention everything else, including the entire kernel.

            Very relevant with respect to competitor documentation - I often see stuff from them that was maybe a little valid 10 years ago, especially from the smaller vendors that can't afford the resources necessary to understand other people's gear.

            This is IT, folks. I'd argue that if you stop intimately understanding a technology for more than 2 years, your knowledge of it is completely obsolete, to the point of being dangerously so.

            Here's some fun reading:

            http://bit.ly/IVr0Xy

            D

  6. Diskcrash

    Close but not quite right

    The financial analyst makes a good point but is basically wrong. Yes, I would agree that EMC is focused on NetApp and not too worried about Fusion-io. But I disagree that EMC is very worried about ONTAP 8.1. It has some good points as far as software features are concerned, and in consolidating some odd NetApp design choices, but it still has very clunky underpinnings and can be a management mess.

    The Isilon boxes are not ideal for all performance workloads, but looking at the software and hardware choices that were made to build the system, it would appear there is a lot of room for improvement, and I would expect EMC is busy working on that. The trick is to keep the features and performance while maintaining decent pricing and margins.

    Acquiring XtremIO would seem to play into their vision of global storage domination more than being a direct response to any single competitor. If they can wrap up all the bits and pieces into a single coherent storage environment, then they could quite easily push everyone to the margins.

    1. Hoosier Storage Guy
      Thumb Up

      Re: Close but not quite right

      Agree with Diskcrash. This analyst is comparing apples and oranges; the XtremIO acquisition has nothing to do with 8.1 or Fusion-io.

      And did I really just see D from NetApp say "there isn't one system that does it all"?!?!

  7. J.T

    I had a much longer post, but then I realized it's not worth it. The problem is the analyst in question seems to need NetApp stock to go up in a rather major way, enough that it's almost getting to the point where someone should examine his bank account for NetApp payouts, or his portfolio for how much NetApp stock he has.

    He also released a report saying NetApp will outperform EMC, based on the assumption that EMC won't release as many products as it did last year, and on NetApp's release of 8.1 coming late in the year.

    He's just a financial analyst who has been handed the storage world, and he is showing he doesn't understand it.

  8. Assumed Name
    FAIL

    This ain't about NetApp

    The real story here is that the best-educated, most well-heeled potential buyer of flash start-ups just picked its date to the prom. How that affects the rest of this cast of characters remains to be seen. Apparently the experts evaluating these companies didn't give a lot of weight to how bright your sign might be, or how close to the 101 freeway you put your HQ. Lots of private investors seem to think those are important factors to consider.

    So, will private equity and VC dollars keep showering down upon anyone with a box of flash? Does this make the leftovers more valuable? Like the last drunk girl leaning against the bar at 2:01 AM?

    All we need now is a billion dollars in "green subsidies" for a couple of these companies. Then we have a good movie script.

    This is going to be interesting.

  9. unredeemed
    FAIL

    A financial analyst at an investment bank, making deep technical analysis of products he's never managed in his life. Where do I sign up for his newsletter? He sounds uber smart.

    I see no way in hell this is an accurate assessment of technology.
