Dell EMC man: Hyperconverged is love, hyperconverged is life, but won't kill SAN yet

Hyperconverged infrastructure appliances (HCIAs) are ready to take on the bulk of data centre x86 workloads but won't necessarily kill off the SAN. That's the conclusion drawn from a conversation with Chad Sakac, president of Dell EMC's converged platform division, at Dell EMC World in Las Vegas, where hotels hyperconverge …

  1. returnofthemus

    "Sakac, what exactly is this Hyperconverged Bollox?"

    I mean, do you really expect us to believe that it's anything more than a fancy collection of boxes and wires, based on the same machine-clustering principles used in so-called Supercomputing?

    1. Anonymous Coward
      Anonymous Coward

      Re: "Sakac, what exactly is this Hyperconverged Bollox?"

      Yes and No...

  2. Naselus

    Guaranteed that HCI will kill SAN within 18 months then, judging by how well Dell's similar proclamations about Cloud have played out.

  3. Anonymous Coward
    Anonymous Coward

    So Google and others need large 'bit buckets' (storage clusters) and have applications which are super sensitive to latency jitter (e.g. Google search). No one needs ancient VMAX software; EMC is just trying to pretend VMAX is still relevant. All these HCI systems are basically supposed to be a Google or FB style architecture for those that don't want to move to cloud because they get a kick out of buying generators. Why is Google able to work with zero SANs whereas DellEMCVmwareVCE seems to be unable to work at scale with latency breakdowns or asymmetric scale-out to create storage pools?

    1. Anonymous Coward
      Anonymous Coward

      "Why is Google able to work with zero SANs whereas DellEMCVmwareVCE seems to be unable to work at scale with latency break downs or asymmetric scale out to create storage pools?"

      Because they built their applications from scratch to work that way with all the resiliency built into the SW.

      Most large (and medium) enterprises run big, monolithic, fickle, legacy apps like Oracle, SAP... that have ridiculous infrastructure requirements and need all the resiliency, low latency and high performance at the HW layer. "Scale" is a very relative term. Web scale is hundreds of thousands of nodes with exabyte storage capacity. Large Enterprise scale is an order of magnitude (or two) smaller than that. Completely different requirements and design considerations. People get themselves into trouble when they try to make broad-based generalizations just like this.

      1. Anonymous Coward
        Anonymous Coward

        "People get themselves into trouble when they try to make broad-based generalizations just like this"

        Yeah, but, as you write, HCI doesn't work well for the big, monolithic workloads either, as they are not built to massively scale out... that's why they are monolithic. I understand why people use scale-up and SAN, because they have legacy apps. I don't understand why HCI makes sense, basically ever. It isn't the way the true hyper-scalers build either. It's not mainframe/big-iron scale-up with SAN and it's not Google-style hyper-scale. Unless you have a workload that happens to use CPU, memory and storage in the proportions that Dell happened to put in their VxRail nodes, just by blind luck, the HCI architecture is going to be inefficient... as you are going to be buying cores you don't need to get the storage, or storage you don't need to get the cores, or cores you don't need to get the memory, etc... and you'll need to do much better than 10GbE over TCP/IP to avoid latency issues.
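
        To put rough numbers on that stranding argument, here is a back-of-the-envelope sketch; the node spec and workload figures below are invented for illustration and are not real VxRail SKUs:

```python
import math

# Hypothetical fixed-ratio HCI node -- illustrative numbers, not a real SKU.
CORES_PER_NODE = 28
TB_PER_NODE = 20.0

def size_cluster(cores_required: int, tb_required: float) -> dict:
    """Size a fixed-ratio cluster and report what gets stranded."""
    nodes = max(math.ceil(cores_required / CORES_PER_NODE),
                math.ceil(tb_required / TB_PER_NODE))
    return {
        "nodes": nodes,
        "stranded_cores": nodes * CORES_PER_NODE - cores_required,
        "stranded_tb": round(nodes * TB_PER_NODE - tb_required, 1),
    }

# A storage-heavy workload: modest compute, lots of capacity.
print(size_cluster(cores_required=64, tb_required=400.0))
# -> {'nodes': 20, 'stranded_cores': 496, 'stranded_tb': 0.0}
```

        Flip the ratios (compute-heavy, little storage) and the waste simply moves to the other column; only a workload that happens to match the node's built-in ratio avoids it.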

    2. Anonymous Coward
      Anonymous Coward

      spot on...

      "supposed to be" a Google or FB style architecture... which would be a disaggregated rack/pod where the compute, RAM and storage elements are designed to scale independently?

      Where HCI tries its best to force a one-size-fits-all node, until it doesn't anymore and you have to add dedicated storage, compute or RAM-heavy appliance nodes. HCI might work well for relatively small, and reasonably predictable, environments, but at true hyper scale the disaggregated storage looks a lot like a SAN for the rest of us.

      1. Anonymous Coward
        Anonymous Coward

        Re: spot on...

        "Where HCI tries it's best to force a one size fits all node, until it doesn't anymore and you have to add dedicated storage, compute or RAM heavy appliance nodes. HCI might work well for relatively small, and reasonably predictable environments, but at true hyper scale the disaggregated storage looks a lot like a SAN for the rest of us."

        Exactly, that is what I was getting at. Dell and other HCI vendors push this as "cloud", i.e. just like the big boys only in your own data center, but it is actually nothing like the hyper-scaler environments... well, it's something like them, in the sense that they scale out, shard and cluster, but they don't use HCI at Google, fb or the like because it doesn't make sense to scale CPU, memory and storage in a uniform manner when workloads will use them in varying amounts.

    3. JohnMartin

      Perhaps it's because Google etc. only use hyper-converged configurations for the workloads where that makes sense? The whole "Google only uses hyper-converged because that's hyper scale" argument came out of the following use-case:

      "Files are divided into fixed-size chunks of 64 megabytes, similar to clusters or sectors in regular file systems, which are only extremely rarely overwritten, or shrunk; files are usually appended to or read."

      So... how many of your workloads look like that?

      Do you think that for other workloads they might have highly engineered and dedicated storage systems connected via high-speed networks for large portions of their infrastructure? If you look at many large-scale supercomputing implementations you'll see that even though you can implement things like Lustre or GPFS in a hyper-converged configuration, compute and storage are often scaled separately, using node configurations that are optimised specifically for that purpose. Even large-scale Hadoop-style big-data analytics like EMR and Spark increasingly pull their data from network-attached storage (via S3). The whole "local disk = better" argument only stacks up so far from an economics and management point of view, especially at scale.

      Part of the reason for this is that if you can eke out even just an additional 1% efficiency by using some hardware designed and dedicated specifically for storage and data management, then for hyperscalers who spend a few billion on infrastructure every year, that makes a lot of sense. Those optimised systems are called "storage arrays", and the networks (virtual, software-defined or otherwise) which connect them to other parts of the infrastructure are non-blocking, fabric-based "Systems/Storage Area Networks"... so yeah, storage arrays and SANs are likely to be with us in one form or another for the foreseeable future.

      Hyperconverged currently makes a lot of sense, but it depends on the notion of a 2U server with a chunk of CPU and memory with "local" SAS/NVM/SATA attachments over a PCI bus (keep in mind PCI is now over 20 years old)... what happens when we start to see the next generation of computing architecture built on memory interconnects like OmniPath or Gen-Z, or stuff like HP's "The Machine"? Until then, having a general-purpose, inexpensive building block for a majority of your tier-2 and tier-3 storage and compute makes a lot of sense, just don't expect it to do everything for you.

      1. Anonymous Coward
        Anonymous Coward

        "Hyperconverged currently makes a lot of sense, but it depends on the notion of a 2U server with a chunk of CPU and memory with "local" SAS/NVM/SATA attachments over a PCI bus (keep in mind PCI is now over 20 years old)"

        Provided you don't mind wasting/stranding resources... as you are unlikely to have a workload which perfectly fits the building blocks pre-defined by the HCI vendors... The other issue here is that commercial networking, e.g. Cisco, is probably going to struggle to keep up on latency if you try to build a large cluster running a workload that requires data consistency... or struggle to keep up on throughput if you are moving data around that cluster at a regular interval, which, due to the sharding process, you are likely to be. Google got around this by building their Jupiter networks, which approach 2 petabits/s bi-directional bandwidth, and writing their own protocol which gets around IP's poky nature.

        The best solution is to just use a public cloud which is the best of both worlds - No stranded resources, a network which can keep up (maybe not Azure, but the real hyper scalers) and the ease of management of HCI.

        1. JohnMartin

          wasting/stranding resources

          Most IT provisioning practices are pretty wasteful, so that's not a particularly new problem with HCI, and I've seen some horrific utilisation rates from LUN-based provisioning, so while in theory you can be a lot more efficient with separately scalable compute, storage and network, in practice there's still lots of wastage. Also, while we're talking about theory: with a large enough number of smallish workloads, and a half-decent rebalancing algorithm, the law of large numbers should fix the HCI efficiency problem over time too.
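
          As a toy illustration of that law-of-large-numbers point (the workload sizes and counts below are made up), the aggregate demand of many small, independent workloads varies much less relative to its mean than any single workload does, so a shared pool can run with less headroom:

```python
import random
import statistics

random.seed(42)

def relative_spread(n_workloads: int, trials: int = 2000) -> float:
    """Std-dev of aggregate vCPU demand as a fraction of its mean,
    for n_workloads independent workloads of 1-16 vCPUs each."""
    totals = [sum(random.randint(1, 16) for _ in range(n_workloads))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (10, 100, 1000):
    print(n, round(relative_spread(n), 3))
# Spread shrinks roughly as 1/sqrt(n): ~0.17 -> ~0.05 -> ~0.02
```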

          The place where the HCI approach most clearly justifies itself isn't in raw efficiency or performance or price numbers. HCI is often more expensive in terms of $/GB and $/CPU even when factoring in all those "overpriced SANs"... its main saving is in operational simplicity. Some of this comes from having simple building blocks with repeatable, automated ways of scaling and deployment, but IMHO a lot more comes from collapsing the organisational silos between the storage, network, and virtualisation teams. The purchasing and consumption models for HCI are also a big improvement on building everything around the 3-4 year big-bang purchasing cycle that the big building block / scale-up solutions promote.

          Having said that, HCI isn't the only way of simplifying the purchasing, deployment, provisioning and other infrastructure lifecycle tasks, but it's been one of the more elegant implementations I've seen over the last couple of decades, especially for on-premises equipment.

          Of course you're right that the simplest way of reducing operational complexity around infrastructure lifecycle is to use a public cloud offering, but there are valid reasons why people will keep a portion of their infrastructure on site, and ideally that infrastructure will have comparable levels of simplicity, automation and elastic scalability to its public cloud counterparts.

          CI and HCI, or their eventual evolution into composable infrastructure, still seem like a good way of achieving that, and they elegantly solve a decent number of IT problems today.

          1. Anonymous Coward
            Anonymous Coward

            Re: wasting/stranding resources

            "there are valid reasons why people will keep a portion of their infrastructure on site"

            Fewer and fewer all the time though. There used to be compliance issues, but now all of the Big 3 public cloud providers certify for HIPAA, PCI, FedRAMP, FINRA, everything under the sun. There is really no workload which cannot run in public cloud for compliance reasons. AWS, and especially Google, have security capabilities which are just not available elsewhere... like Google using custom protocols, not IP, in their data centers.

            I can see workloads remaining on prem, but because of legacy constraints... like having a mainframe workload which is going to take time to rewrite for a modern platform. HCI is generally aimed at pretty modern workloads though.

            1. JohnMartin

              Re: wasting/stranding resources

              I mostly agree with you, but even if you use some of the more aggressive adoption assumptions, there's still going to be a multi-billion dollar spend on on-premises architecture for at least the next five to ten years, though that will be a shrinking market overall. Even then, based on the data I have, it will probably reach a steady state between 2025 and 2030. The reason for that is that there are economic and technical benefits to keeping kit "on-site" that don't have anything to do with legacy applications; examples include latency in real-time control systems, failure domain resiliency, network and API access costs, exchange rate variations, commercial risks, prudential regulation and a bunch of other things. The question isn't an either/or of public vs private but what is the right mix today, and how do you change that over time as technology and economics change. There's also a changing definition of "on site": arguably your pocket is a "site", and increasingly a lot of processing and other supporting software infrastructure will migrate to the edge, leaving the traditional datacenter looking increasingly lonely.

              That reminds me of something I saw the other day: an AWS guy wearing a t-shirt saying something like "Friends don't let friends build datacenters"... it's kind of hard not to chuckle at the truth of that.

              1. Anonymous Coward
                Anonymous Coward

                Re: wasting/stranding resources

                No doubt there will always be some "on prem". Control systems are a good example. They will likely always be running inside the manufacturing facility because you do have to worry about connectivity killing production, or at least have local failover. That is true today as well, though. GM doesn't run its manufacturing out of a data center, even an on-prem data center; they do it locally at the plants. I think this will be the end of the big corporate data centers, although there will always be some gear on prem for special cases.

                To some extent, it isn't even in the hands of infrastructure teams or IT in general. If the lines of business want any commercial app created in the last 15 years, or anytime in the future, e.g. Salesforce, ServiceNow, Workday, etc., it is all SaaS with no on-prem option.

                I see mobility (smartphones) taking over as the primary end-user device as all the more reason to move to cloud. If all of your users were on beige PCs 100 meters from the data center, there would be some argument for local performance. As more users are off who-knows-where accessing data from their pockets, Google or AWS (especially Google) have a much better global delivery network than any business, and the performance will be much better via cloud than an on-prem data center.

    4. Nate Amsden

      Not certain about Google. But am certain the likes of Amazon, Azure and even Facebook make huge use of enterprise storage arrays internally. I'd wager Google does too. Certainly not everywhere, but I'd wager they have 10s of PBs of storage on enterprise systems.

      1. Anonymous Coward
        Anonymous Coward

        "Not certain about Google. But am certain the likes of amazon, azure and even facebook make huge use of enterprise storage arrays internally."

        Google uses a separate scale-out, shard-and-cluster set of servers (basically storage nodes) which are geographically dispersed for storage... they detail it in their Google Cloud Storage object storage docs. fb does the same. Amazon is in the same camp. No idea what Azure does, but probably a less sophisticated version of AWS or Google Cloud... just based on Azure in general. So it depends on your definition of "enterprise storage array". If you mean VMAX or the like connected to Fibre Channel, none of them do that. If you mean separating the storage clusters from the compute clusters, all running on a super-high-throughput interconnect network, that is definitely in use and makes sense.

        1. RollTide14

          For their cloud... no, none of them use enterprise arrays, but if you think that Google/Microsoft/Amazon don't have millions upon millions of dollars of EMC/NetApp/HDS in some of their environments then I've got a bridge to sell you.

          1. Anonymous Coward
            Anonymous Coward

            "millions upon millions of dollars of EMC/NetApp/HDS in some of their environments then I've got a bridge to sell you"

            I know Google doesn't use any of the above. They build their own hardware, everything, from the CPU up.

            It may be true of Azure. It doesn't make sense, so it wouldn't surprise me if MSFT does it. I'm sure MSFT has a bunch of that gear sitting around for on-prem certification.

            1. JohnMartin

              "I know Google doesn't use any of the above"

              Google, Apple, Facebook, AWS, Microsoft etc. all have plenty of custom-made gear, but it doesn't form 100% of their environment. The people who buy and install cloud infrastructure inside the hyper-scale data centres don't disclose what goes into it, and neither do the people who sell stuff to them. In short, anybody who really "knows" isn't giving away the details.

  4. Anonymous Coward
    Anonymous Coward

    What? The guy responsible for developing and selling _____ says that ____ are the future? Gee.

  5. Anonymous Coward
    Anonymous Coward

    Chad does make some good points: any time you have a latency-sensitive app you need to keep the hops as low as possible, and that's a difficult proposition in a scale-out cluster. However, he breaks his own rule once again by going negative on the competition.

  6. Anonymous Coward
    Anonymous Coward

    I would so hate ...

    ... to be a storage SE these days. They can already see the end of their careers and, as a last-ditch effort, now have to sell servers with SSDs and try to convince customers it's hot sh!t.

    And before anyone tells me that storage vendors make money from software: ask the sales reps how they get compensated... It's tin.

    Storage has for a long time been software on commodity, yet overpriced hardware.

    Data management has moved back to the host/hypervisors and storage vendors have lost that battle.

    There haven't been any new developments in the storage market in recent years - with the exception of flash. And flash is not an achievement of the traditional storage vendors.

    Storage these days is about as exciting as comparing prepaid mobile phone plans.

    1. Anonymous Coward
      Anonymous Coward

      Re: I would so hate ...

      I think all of the open systems will be in public cloud. People will either take this HCI intermediate step or not; more often than not, probably not. It's a matter of when, not if, those workloads go to public cloud though. Just a matter of people getting over the FUD about security and whatnot.

  7. Anonymous Coward
    Anonymous Coward

    "Some workloads must have shared arrays because they need:

    Consistent response being very sensitive to latency jitter"

    - So HCI cannot deliver consistent response in terms of latency?

    Sakac: "Yes and No...."

  8. Anonymous Coward
    Anonymous Coward

    As the storage industry shifts away from "buy storage array X along with software to manage it, licensing, and physical/virtual nodes to unlock functionality you need, only to be forced into throwing it all away in 3-5 years due to maintenance extortion, then rinse and repeat with product X ver 2", it's completely natural that Dell/EMC, Nutanix, etc. all push for HCI, because it allows them to continue the same 3-5 year buying cycle but in a new, "web scale" format!

    Why sell a better product or business model when you can brainwash everyone into the new religion of "HCI" and continue the same games you played in the past? Now storage is tied to compute, essentially forcing customers to rip, replace, and re-buy every node at 3-5 year intervals, because not only has the storage aged and been replaced with new technology, but so have the microprocessor, RAM, and front-end connectivity.

    HCI is just a play to keep customers on the rip and replace money train that storage companies have enjoyed for decades.

    1. Anonymous Coward
      Anonymous Coward

      Agree. It is a way of trying to continue their extremely profitable on-prem business model... and, even if Dell wanted to, they can't build a cloud to rival AWS or Google. Not enough capital, and no financial scenario where it makes sense. They certainly can't do it when they are already $67b in debt from the EMC acquisition.

      You kind of need to be a Google or Amazon to play the public cloud game over time because of the large subsidies they receive from already having huge infrastructure build outs for their consumer businesses.

  9. Hard Will

    Container cattle

    Is HCI good for container cattle?

    Isn't JBOServers with Docker and Kubernetes better than trying to align to the specifics required for a supported HCI?

  10. virtualgeek

    Disclosure - Dell EMC employee here – namely the interviewee.

    I suppose it's inevitable that with an El Reg article a pile of snarky anonymous commenters would pile on. If you’re confident in your point of view – put your name on it.

    These demand some form of response - and I hope to add to the dialog (though I'm sure I'll get a whole bunch of snark in return).

    I'm on an airplane, so have time - and I've watched all the movies, so here goes. WARNING - if you want a trite soundbite, I'm not the place to go, so stop reading now, and fling the poop you want. If you want deeper dialog, be willing to invest minutes + brain cycles.

    ---

    HCI and CI are being used in managed service providers, yes (and I gave examples) – but in Telco NFV / SaaS / Hyperscale clouds, they tend NOT to use HCI or CI.

    Those customers tend to build their own stack via commodity HW components and SDS stacks (commercial or open source, or proprietary internal stacks they develop/sustain) because there's value in it, they can sustain it, and it's their business.

    Conversely, anyone in an enterprise trying to build/maintain/sustain their own stack is wasting their time.

    People who play with IT build a small test environment and say "look, it can be done"... It's so cute :-) Try running that for 3 years, in a full production environment, through multiple updates and waves of your staff turning over - then please come back and comment; at that point your feedback will be intelligent.

    This is why HCI is seeing the growth that it is. It represents an easy button, so customers can get on to the business of doing something that matters.

    Then there is the reaction to my comments about the place for external storage.

    My main point was simple: we're already at the point where SDS stacks can support the majority of workloads. Period. The transition will take time (IT has inertia) – but it is happening. But there are clear places where this won’t happen.

    When I say "latency jitter" - it's nigh impossible (with current state of the art) to have a distributed storage stack over an Ethernet fabric that can deliver consistently sub millisecond response times, with latency jitter less than 200 microseconds. Most workloads are perfectly fine with that – but some aren’t – that’s my “yes and no” answer. 10GbE port to port forwarding times are on the order of 500ns - 1us + a few more microseconds for SFP+/10Gbase-T interfaces and cables. Doesn't sound like much - except that all SDS stacks are distributed in some fashion, and the software stacks add their own distributed latencies (a single IO can and will hop between multi nodes, needing multiple ACKs. This isn't magic – it is the nature of persistence. There are even computer science theorems that govern this. Learn something - look up the CAP and PACELC theorems.

    I laugh right out loud at the ignorant comments in this thread that lump together object stores like S3 and transactional persistence models – you’re flashing your ignorance.

    We have customers running hundreds of GBps of bandwidth and tens of millions of IOPS on ScaleIO systems deployed at 85PB+ - and that's at a **single customer**. We have mission-critical SAP and EPIC use cases on vSAN. And this isn't about our tech - but rather a point: "this architectural model is ready for customers".

    I don't expect the SDS stacks to duplicate SRDF-like data services. Want thousands of consistency groups? Want multisite sync or async replication, with thousands of consistency groups? Need multi-initiator, multi-target active-active? Need all of that and more? Those workloads will be with us for decades to come.

    Yes, the hyper-scale players build their own “bit-bucket” object stores – but, just like my point for the on-premises world, they don't run on generalized servers; they have dedicated, very proprietary dense enclosures, and don't even use off-the-shelf media.

    Other comments were along the lines of "HCI linear scaling = bad" and "public cloud for everything" - those are just silly.

    Do you know how many different compute/memory/persistence ratios there are in any HCI worth its salt? Thousands.

    VxRack FLEX (which has "flexible" right in the name) can literally have any combo of compute/memory/storage in a cluster - and we have customers with literally hundreds (closing in on a thousand) nodes in a cluster.

    Saying “HCI linear scaling is bad” = advertising ignorance.

    Re “public cloud” - of **course** more workloads will be on public cloud tomorrow than today.

    Workload and data growth measured as CAGR in the public cloud currently outstrips on-premises stacks by a huge factor (100% vs. largely flat). Furthermore, it shows no sign of stopping.

    But, anyone that thinks that means that all workloads belong in public cloud is ignorant, and doesn't talk to a lot of customers - apparently spending time posting anonymously on El Reg :-)

    On-premises and off-premises choices have multiple decision criteria:

    1) Economics: highly variable or unknown workload vs. steady state + compute/data ratios + nature of ingress/egress - and other factors

    2) Data gravity: compute **tends** to co-locate with the data against which it is computing - and moving data is hard - note that this doesn't apply to workloads that are not latency sensitive, or have no long term persistence needs - like a lot of recursive risk/ML algorithms;

    3) Governance: data sovereignty and attestation needs are real. Note this is NOT the same as “security”. Being on- or off-premises has no bearing on security - and all the clouds are getting long lists of certifications.

    The market is speaking on this topic, much more than any single voice (certainly mine included) – the answer is hybrid.

    None of this is a "punchy sound bite" - but is the intellectual, data driven, scientific reality.

    If you've made it this far, thanks for reading!

    Feedback welcome of course – including anonymous snark ... but thought-provoking public dialog = better.

  11. Anonymous Coward
    Anonymous Coward

    Well, that was very impressive and will surely justify a huge paycheck. I'm certain this information is technically all accurate and can be backed up by stats and numbers.

    One shouldn't be surprised, though, that there will be snarky comments in exchange for free infomercial airtime on El Reg.

    Now, with all this in-depth knowledge about storage and stacks, do you expect that your average customer understands - or even cares to understand - these numbers or acronyms? By average customers I mean the small-to-medium disk array customers that cannot afford entire teams to build their own stack.

    Isn't that one of the main reasons that HCI is so attractive for service providers? They can hire 1 or 2 guys to decipher acronyms and cut through the BS (snake oil) so that the end customer doesn't have to deal with complexities anymore. Complexities that are, and should be treated as, a commodity. We're talking about buckets for data.

    In order to become sticky, the traditional storage array vendors tried to move more and more of the data management function onto the array. That way the customer had to come back after 3 years to purchase the upgraded version, which has a larger CPU and a bit more RAM (inexpensive commodity hardware), for outrageous prices. They didn't have a choice, because by that time they'd have been struggling with performance issues for the previous 6 months and the customer would be quite happy to sign the check.

    Customers have woken up to the fact that hypervisor vendors now provide data management, and that small all-flash startups, as well as startup HCI players, offer cost-effective alternatives (without the on-array bells-and-whistles data management).

    Once the traditional storage array vendors realized they wouldn't sell new controller heads and disk shelves every year, they quickly jumped on the bandwagon (some not so quickly).

    Now to sum up the El Reg article: "New technology (HCI) is almost ready to replace old technology (traditional arrays), except where it can't".

    I suggest we stop treating storage as Alchemy, so that customers can focus their time and energy on their business.

  12. rnr

    Interesting argument about latencies and throughput for on-prem storage systems, but it's probably too self-comforting.

    Your customers are already moving to the cloud at full speed, and monsters like SAP are forced to accommodate.

    They already have instances with 4-16TB of memory on the roadmap...

    https://www.reddit.com/r/aws/comments/6bdi3k/ec2_instances_with_416_tb_of_memory_are_on_the/?st=J2R9HYX6&__s=v5t5rubgou2kspq2nnkq

    The conversation revolves around whether on-prem types of customers will continue to use legacy monster apps that require all these SAN arrays and infra. They might ignore the trend, yes. But they're going to miss the boat... It's not about on-prem or cloud; it's about how fast you ship and how easy it is to change your product. If they can't figure out how to embed reliability and agility into their own software, they'll probably be out of business.

  13. virtualgeek

    Thanks for the comments Anonymous and rnr!

    1) I appreciate that you think my comments were impressive, but they aren't a function of a huge paycheck :-) They are a function of passion for this topic and this area, and frustration with anonymous, ignorant comments that are demonstrably false.

    You're right - the acronyms are irrelevant, but the facts matter.

    For a small customer that needs easy - that market is moving to SaaS (not generally IaaS public clouds) the most quickly, and the "easy button" that HCI (or native functions in the OS/hypervisor) represents is very compelling for the things that stay on premises.

    2) Storage isn't alchemy - I AGREE. Like all things - it is science. I can't state it enough - I'm not "defending" the SAN domain. Actually, if you think about it, it's fascinating to have the market leader in this category stating that SDS/HCI models are ready for the majority of workloads. I'm just pointing out the intrinsic places where it's unlikely that SDS/HCI models will displace things that look like "external storage", and that those are important for many, many people.

    3) You're right that for vendors that don't disrupt themselves (both in the tech, but also in the economic/consumption models), things are going to go from "hard" to "impossible" to "death". As a leader in Dell EMC, I have a duty to our customers and our employees to make sure that doesn't happen.

    4) rnr - I believe you're right - new in-memory models (particularly as we hit the next wave of HW platforms and the early days of the Next-Generation Non-Volatile Memory (NGNVM) wave) can have a huge effect. An interesting observation is that the highest-performance transactional systems that power hyper-scaled applications already use distributed memory caches (things like memcached, GemFire, and others) that front-end a persistence layer. This has been the case since well before the SAP HANA wave - but that is one of the things bringing in-memory approaches into the mainstream consciousness. This will move, over time, from the weird fringe to a more broadly applied approach.
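
    As a minimal sketch of that cache-in-front-of-persistence pattern (a plain dict stands in for memcached/GemFire and another dict for whatever RDBMS or NoSQL store sits behind it; the keys and values are made up for illustration):

```python
import time

# Stand-ins: in a real deployment these would be a memcached/Redis client
# and an RDBMS or NoSQL driver, not in-process dicts.
cache: dict = {}
database = {"user:42": {"name": "Ada", "plan": "enterprise"}}

CACHE_TTL_SECONDS = 300

def get_record(key: str):
    """Cache-aside read: try the in-memory tier, fall back to persistence."""
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]          # cache hit: no storage I/O at all
    value = database.get(key)          # cache miss: go to the backing store
    if value is not None:
        cache[key] = {"value": value,
                      "expires": time.time() + CACHE_TTL_SECONDS}
    return value

def update_record(key: str, value) -> None:
    """Write to persistence first, then invalidate the cached copy."""
    database[key] = value
    cache.pop(key, None)

print(get_record("user:42"))  # miss -> read from the 'database'
print(get_record("user:42"))  # hit  -> served from memory
```

    The point of the pattern is that reads are absorbed in memory while every durable write still lands on the persistence layer behind it, which is why the back-end storage characteristics still matter.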

    For perspective though - some of the coolest, most scaled SaaS platforms that millions use every day still sit on an oldy-timey RDBMS that in turn sits on massive SANs.

    Those customers would LOVE to move to a distributed in-memory front end sitting in front of a distributed SDS/HCI back-end supporting a modern NoSQL data layer. In that process, they would replace the RDBMS and the SAN - and make a huge quantum leap. But they cannot do it in one step. We're working to help them - but it's not a light switch. They need to modernize what they have, while they look at how to rearchitect their whole application stack.

    **AS A GENERAL AXIOM** if you have a dollar to spend, the BEST place to spend that is at the application layer. If it's not a generic app (in which case - go SaaS) - the best dollar is to rebuild the app stack around modern 12-factor app principles. If you do that - infrastructure dependencies go away, you have great workload portability. That axiom - while true, isn't the end. That axiom has a corollary: if you can't change the app - make the infrastructure as simple as possible.

    It's fascinating to me, the power of "brand association" you can see coming through this comment thread - it's a reflection of the unconscious mental image that brands carry.

    "EMC" is brand associated with "big iron storage", "Dell" is brand associated with "laptops and client computing". "VMware" is brand associated with server virtualization/consolidation.

    Yes, we are those things, and sometimes that's all we are, the only way we engage with customers. When we do that - it's us at our worst.

    Each part of the business is so much more than that - and when both come to the surface, that's us at our best.

    We power Spring Framework (in Pivotal) - the most popular developer framework for how people transition apps, downloaded tens of millions of times per month.

    We (in Dell EMC) are leading the transition from component level assembly into CI, to HCI, and to the things that are the evolution of HCI.

    We are leading the open networking (in Dell EMC) and SDN (in VMware) transition.

    We are pushing the Kubernetes efforts forward with Kubo (in Pivotal), with core persistence contributions like Rex-Ray (in Dell EMC) in Kubernetes 1.6 (along with other container/cluster managers).

    We have the best Developer-Ready Infrastructure stack (Pivotal/VMware/Dell EMC).

    We are partnered with Red Hat on their cloud stack, and also with Microsoft on Azure Stack.

    And... yes, we are also leading in traditional servers and storage for the huge masses for whom all that stuff doesn't apply.

    All true - just not "sound bite simple" :-)

    1. Anonymous Coward
      Anonymous Coward

      Just for the record

      Just for the record, anyone saying that EPIC is running on vSAN is blatantly lying, luring, or lacks knowledge of the EPIC application stack. You may have vSAN supporting the VDI (XenDesktop/XenApp/Horizon) portion of the stack or even ancillary services such as Kuiper and printer services, but no customer is running production Cache or Clarity on vSAN or any other HCI platform today.

      1. virtualgeek

        Re: Just for the record

        @Anonymous - thanks for the comment. You're right, but you missed one other option: that the person was oversimplifying. Yup - in a 1000-word comment (count 'em up - I needed to cut a lot to fit into the El Reg comment maximums) I'm completely guilty of OVER-simplifying, not over-complicating.

        All traditional app stacks (and EPIC has been around for a LOOOONG time) have multiple components, and trying to say "SAP runs on" or "Oracle runs on" or "EPIC runs on" is an over-simplification. It's also an oversimplification to say that "mission critical workloads don't run on SDS/HCI".

        @anon is right - EPIC is a complex EMR system with many components (Cache/Clarity, but also all the services around them). BTW - to nurses and doctors, the EMR stack as a whole is mission critical - the end-user portion of the stack being down means "EMR is down".

        In fact, that's one of the reasons the vast majority of EPIC deployments are deployed on VMAX and HDS systems - specifically for the reason I pointed out that "SANs aren't going away anytime soon".

        Even if SDS/HCI is ready for the majority of workloads - there are workloads that are low in COUNT but very important. I'm also intimately aware of the number of healthcare customers on CI - a ton use VxBlocks. That's also a reason why the "HCI vs. CI" debate is a silly one from where I sit.

        Those workloads depend on the resilience, availability, serviceability - and data services (including complex compound replication engines that can manage multiple related devices in consistency groups). SDS/HCI models can be very highly performant now, very resilient now - but those data services are something I don't anticipate SDS/HCI replicating any time soon - not because it's not possible, but because it's really hard, and it's not the majority of workloads, or where there is the most workload growth.

        I'll find out if any of the deployments are public, and provide further detail about exactly what is used where.

        @Anon - thank you. It's a first that I've been correctly tagged for not being long-winded enough or explicit enough :-)

        1. Anonymous Coward
          Anonymous Coward

          Re: Just for the record

          Please - no more. My eyes hurt. Just enjoy your 'passion' as long as it lasts - or until the next hot thing comes out that you can feel even more passionate about.

          Did you know that electric cars will eventually supersede diesel and petrol? Same discussion. I'm sure some individual will have all the data and be willing to present it passionately, especially if his/her employer is a manufacturer of electric cars.

          Most customers just want to get from A to B.

        2. Anonymous Coward
          Anonymous Coward

          Re: Just for the record

          Blah blah, you got busted, Chad.

          You claimed EPIC runs on vSAN and it definitely doesn't. Let's be honest for a moment: VxRail was scraping the bottom of the barrel. VCE had to launch an HCI product and all there was were those crappy Quanta VxRack appliances and vSAN. The rest is marketing.

          You should have taken ScaleIO or virtualized the XtremIO code to build a proper HCI/SDS, but that wasn't in the parts bin, so you ended up with a rehashed VSPEX Blue/EVO:Rail.

          VSAN is already shitting the bed left and right, with performance problems mounting in many customer accounts; at least that's what your resellers are saying. Is it because it's crap, or because your salesforce under-configured the solution to win the deal? How will you deal with Purley and ever-increasing VM density when you are already cracking now? When can we expect a state-of-the-art storage subsystem with inline dedupe and linear scale-out? (Please save yourself a diatribe; I know your roadmap and the challenges.)

          This is a disgrace. VSAN was never meant to be more than a science project but got elevated due to internal EMC/VMware politics. Gelsinger had to have his own SDS, didn't he? Please spare us the squid-ink cloud of butthurt; some of us have been in this industry for decades and know the score.

  14. rsingh1

    Great response!
