Looks like someone got the EMC marketing packet
And regurgitated it.
Hyper-converged systems integrate compute, storage and networking into a single purchasable entity that is easier to deploy, operate and manage than traditional best-of-breed component systems. They are a step up from converged systems that integrated just storage and compute. That's the simple story – but the definitions are …
And it was the NEXT article I read after I wrote a comment on another article saying the reason I read the Reg was that they didn't regurgitate press releases without comment, like half the IT sites do. Yes, I used that very word! Hopefully this isn't a new trend, or I'll have to find another IT site :)
I remember the days of being hyper-converged back in the '80s... Big mainframes carved up for multiple customers, who connected in from remote locations, running reports and making enquiries into their databases... If only we'd thought of a snappy name rather than "main frame computer processing bureaux"...
like "cloud computing"? Not much difference, really...
True, in the sense that both the service bureau model and the cloud model are examples of utility computing - you pay to have data processed and stored, and the vendor handles all the actual hardware and, generally, the generic software (OS and the like).
Cloud providers emphasize features like on-demand capacity upgrades and geographic co-location. Some, but certainly not all, service bureaus provided those; they were less important as selling points.
For several years I worked for a small software company that had some mainframe software products, and we used a service bureau for all our mainframe computing. It was actually in the same building, so when we needed to cut a tape to ship to a customer, we'd walk over and hand it to the operator on duty. It was a good arrangement.
We bought IBM AS/400s with integrated everything, and we LOVED it!
Yeah, and the AS/400 came in on the tail end of the minicomputer era (in 1988). The low-end ones were slow - I remember builds taking an order of magnitude longer on the '400 we had (at the time, the very smallest one you could buy) than the same software product took to build on a '386-equipped PS/2 Model 80.¹ And the PDM development environment was awkward and limited, not nearly as capable as, say, the VMS TPU editor or CMS XEDIT (much less something like ISPF or the UNIX collection of tools).
But, man, the problems we did not have with that thing. Hardware and software were rock-steady. Built-in UPS and modem that, if you provided a phone line, would dial home if it detected a problem. Bug in your code cause an application job to crash? You'd get a nice message in your terminal message queue with fault and backtrace information. Every command had menu and prompt modes.
I never used the '400's ancestors - the System/38 and System/36 - but I gather they were similar, without the recycled Future Systems aspects of OS/400. I did use VAXes pretty extensively at school, and they were similarly reliable and non-frustrating. And the little time I spent with PDP-11s was also pleasant. Never got to use DG's machines or the other famous minis, alas.
¹The source base wasn't identical, of course. This was circa 1990. The software in question was mostly written in C, with platform-specific layers for OS/2, Windows, and UNIX. The only C available at the time for the '400 was EPM C, a rather interesting beast (the later System/C and ILE C were less idiosyncratic), and a bunch of functionality couldn't be implemented in it and had to be done in COBOL, Pascal, or CL, with various odd OS/400 constructs. But in terms of number of modules and lines of code all the variants were pretty close.
One thing that has specifically changed to make large dual-controller SANs less interesting is that random IOPS are now essentially a commodity item. In the pre-SSD era, the only way to provide thousands of random IOPS to a host was to lash hundreds of spinning disks together. This was a) expensive and b) hard to do, requiring dedicated hardware and software. A modern Intel DC-class SSD (e.g. the S3700) can provide up to 70K random read IOPS. You would need around 350 SAS drives (assuming 200 IOPS per drive) to match that IOPS capacity.
As a result, we no longer need specialized hardware to drive high IOPS rates.
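The back-of-the-envelope maths above can be sketched in a few lines (figures are the rough ones from the comment, not vendor specs):

```python
# Rough comparison: random-IOPS capacity of one datacentre SSD
# vs. an array of spinning SAS drives. Numbers are ballpark
# assumptions, not measured or vendor-guaranteed figures.
ssd_random_read_iops = 70_000  # e.g. an Intel DC-class SSD such as the S3700
sas_drive_iops = 200           # assumed random IOPS per spinning SAS drive

# How many spindles you'd have to lash together to match one SSD
drives_needed = ssd_random_read_iops // sas_drive_iops
print(f"SAS drives needed to match one SSD: {drives_needed}")  # 350
```

Which is why a couple of commodity SSDs now do the job that used to take a rack of spindles plus the dedicated controller hardware to drive them.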
Full Disclosure: I work for Nutanix, previously at NetApp.