* Posts by Hard_Facts

8 publicly visible posts • joined 22 May 2012

IBM runs OLTP benchmark atop KVM hypervisor

Hard_Facts

3x faster transaction response time in TPC over a VM

While 8-10% lower transaction volume is understandable when benchmarked on a VM, how come this TPC run on a VM delivers almost 3x better transaction response time? With all the virtual I/O latency in a VM, I thought transaction response time would be slower.
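One possible explanation (my own guess, not anything from the benchmark report): at 8-10% lower transaction volume the system runs at lower utilisation, and queueing delay falls off sharply near saturation, which can outweigh the added virtual-I/O latency. A minimal sketch, assuming each transaction path behaves like a simple M/M/1 queue and using made-up numbers purely for illustration:

```python
# Back-of-the-envelope M/M/1 response-time check.
# R = S / (1 - U): S = bare service time, U = utilisation.
def response_time(service_ms: float, utilisation: float) -> float:
    """Mean response time of an M/M/1 queue."""
    return service_ms / (1.0 - utilisation)

# Hypothetical numbers for illustration only.
bare_metal = response_time(service_ms=10.0, utilisation=0.97)   # ~333 ms near saturation
virtualised = response_time(service_ms=12.0, utilisation=0.88)  # ~100 ms at lower load,
                                                                # despite 20% slower service
print(f"{bare_metal:.0f} ms vs {virtualised:.0f} ms")           # roughly a 3x gap
```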

3PAR goes all-flash, shaves hefty wodge off price tag

Hard_Facts

Today's traditional storage does not have the back-end bandwidth or the architecture to support as many SSDs as it does traditional spindle-based disks; I presume this is a well-known fact. Having said that, 512 SSDs vs. 1,900 traditional disks is commendable, though with fewer SSDs HP may have been able to deliver more IOPS per GB.

I presume storage from any vendor will max out at 128-256 SSDs with the interconnect architectures available today.

How you look at these statistics is important.

Also, the practical usefulness of such an SSD configuration comes down to this:

1. While 512 100/200GB SSDs give 6-11x less storage capacity, they also translate to 6-11x faster IOPS per GB for applications using an "SSD-only 3PAR" -- useful for applications that need that kind of performance from enterprise-class storage. Pure flash appliances can deliver even faster IOPS, but they don't have the features required for mission-critical data. The rough numbers (see the quick arithmetic sketch after this list):

3PAR with 1,900 × 300GB disks -- 0.78 SPC-1 IOPS/GB

3PAR with 512 × 100GB SSDs -- 8.79 SPC-1 IOPS/GB (assuming HP used 100GB SSDs)

3PAR with 512 × 200GB SSDs -- 4.39 SPC-1 IOPS/GB (assuming HP used 200GB SSDs)

2. It doesn't make sense to say "70% less cost with this SSD-based configuration", as cost per GB is much higher (as it would be for any storage with SSDs). Whoever needs 6-11x the performance will pay for it.
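For transparency, here is the trivial arithmetic behind the IOPS/GB figures above; a sketch assuming the V800's published ~450K SPC-1 IOPS result and the SSD sizes noted:

```python
# SPC-1 IOPS per GB of raw capacity for the three 3PAR configurations above.
SPC1_IOPS = 450_212  # 3PAR V800's published SPC-1 result

configs = {
    "1900 x 300GB disks": 1900 * 300,  # raw GB
    "512 x 100GB SSDs":   512 * 100,   # assumed SSD size
    "512 x 200GB SSDs":   512 * 200,   # assumed SSD size
}

for name, raw_gb in configs.items():
    print(f"{name}: {SPC1_IOPS / raw_gb:.2f} SPC-1 IOPS/GB")
# -> 0.79 / 8.79 / 4.40, matching the figures above to rounding
```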

NetApp: Steenkin' benchmarks – we're quicker than 3PAR

Hard_Facts

What is NetApp trying to prove to prospective storage buyers?

RamSan and the like have shown how these results get boosted. The same goes for vendors publishing benchmarks on storage with SSD-only configurations...

Getting these results and lopsided comparisons vis-a-vis other benchmarks comes at a cost:

1. "Single NetApps FAS6240 Storage" Benchmark: Total 193TB storage on the Netapps (70TB unused) & 1TB Flash to assist the 123TB actual storage used in this configuration for the benchmark.

2. "Single 3PAR V800 Storage" Benchmark: Total 573TB storage on 3PAR (83TB Unused) & 0.768TB of Cache to assist 490TB actual storage used in this configuration for the benchmark.

3. "A storage Cluster of 16 IBM V7000" Benchmark: Total 281TB storage across 16 Node V7000 (81TB unused) & 0.448TB of cache (192GB on 8 SVC + 256GB across 16 V7000) to assist 200TB actual storage used in this configuration for the benchmark.

All three deliver 3-7ms response times, but:

One uses 57% of all raw storage (123TB usable) with 1TB of Flash to deliver under 3ms response time @ 90% load.

One uses 84% of all raw storage (490TB) and delivers under 7ms response time @ 90% load.

One uses 70% of all raw storage (200TB) and delivers under 6ms response time @ 90% load.

Pretty clear what the "pains for the supposed gains" are for such performance:

1. Use all of the storage you buy and get optimal performance?

2. Don't use a half to a third of the storage you buy?

3. Pay for flash-assisted performance by putting in more GB of Flash and cache per TB of storage you intend to use?

As a buyer, I would rather compare cost per total usable storage at an optimal performance of, say, sub-10ms response time:

Array            SPC-1 IOPS   Physical (P, TB)   Usable (U = ASU+DP, TB)   Price ($)    $/TB (P)   $/TB (U)
------------------------------------------------------------------------------------------------------------
NetApp FAS6240   250,039      193                98                        1,672,602    8,666      17,067
IBM V7000        520,044      282                237                       3,598,956    12,762     15,185
3PAR V800        450,212      573                528                       2,965,892    5,176      5,617
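For what it's worth, the last two columns are just the published price divided by the two capacity figures; a quick sketch of that arithmetic (figures copied from the table above):

```python
# Price per TB of physical (P) and usable (U) capacity, from the table above.
results = [
    # (array,           price_usd,  physical_tb, usable_tb)
    ("NetApp FAS6240",  1_672_602,  193,  98),
    ("IBM V7000 x16",   3_598_956,  282, 237),
    ("3PAR V800",       2_965_892,  573, 528),
]

for array, price, p_tb, u_tb in results:
    print(f"{array}: ${price / p_tb:,.0f}/TB physical, ${price / u_tb:,.0f}/TB usable")
# -> the 3PAR comes out cheapest on both measures
```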

If I need sub-3ms response time for a specific application with a 5TB usable-capacity need, I may consider a RamSan or a smaller storage array with lots of SSD and Flash/cache.

Is the Store Once Catalyst/B6200 8-node cluster a single system?

Hard_Facts

Re: Is the Store Once Catalyst/B6200 8-node cluster a single system?

It is probably one more example of the benchmarks all vendors indulge in to outsmart each other.

I say this because I wonder how many clients will need a dedupe system that can do 100TB/hr.

Having said that, I read comments on this article questioning the "single namespace, multiple indexes" design in this HP benchmark vis-a-vis other benchmarks, including EMC's latest one, where they have a "single namespace, single/global index"...

I believe there is merit in having multiple dedupe indexes versus one single large global index, especially as we see more and more of these massive dedupe/disk-based backup systems backing up multitudes of terabytes.

Back when we had limited capacity per node and people needed a sort of "federated multi-node" solution for more storage, having a single namespace and global index was a logical choice.

Today there is massive storage capacity on each node (the B6200 maxes out at 192TB raw per couplet, which is really big), and that translates to 768TB raw across 4 couplets... What would the performance be with a single global index over all of that? I presume it doesn't need second-guessing.

There are other downsides to having one single large global index: you can use few or no tricks to speed up specific backup jobs...
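To illustrate the trade-off (a minimal sketch of my own, assuming content-hash routing; not HP's actual implementation): with per-node indexes, each fingerprint lookup touches one small index instead of one huge shared one, at the cost of missing duplicates that land behind a different index.

```python
import hashlib

NUM_NODES = 8  # e.g. 4 couplets x 2 nodes; my assumption for illustration

# One small fingerprint index per node instead of one global index.
node_indexes = [dict() for _ in range(NUM_NODES)]

def node_for(fp: str) -> int:
    """Route a fingerprint to a fixed node, so each lookup stays node-local."""
    return int(fp[:8], 16) % NUM_NODES

def store_chunk(chunk: bytes) -> bool:
    """Returns True if the chunk was deduplicated, False if newly stored."""
    fp = hashlib.sha1(chunk).hexdigest()
    idx = node_indexes[node_for(fp)]
    if fp in idx:
        idx[fp] += 1   # duplicate: bump the reference count, write nothing
        return True
    idx[fp] = 1        # new chunk: it would be written to that node's store
    return False
```

Each index stays roughly 1/N of the size of a global one, so lookups stay fast as capacity grows, which seems to be the point of the HP design.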

So, somehow I find this particular HP benchmark does make technical sense, though in practice there may not be many customers needing 768TB in a single namespace doing 100TB/hr.

Many senseless benchmarks by many vendors come to mind, not necessarily limited to storage, but mostly in the systems benchmark space...

1. The most recent is the VMAX 40K mammoth with 2,400 drives.

2. HP & IBM started the "TPC-C" benchmark competition way back -- Trying to out-do each other as to who can do the most millions of tpmc ona Single System/Single OS.

---- Probably if they partitioned their Superdomes and p595/p795 & achieved higher tpmC aggregates across multiple partitions, just benckmarks would have had real life relevance

3. So, when one vendor commits a crime, another gets infected too, and Oracle does a 30-million-tpmC benchmark across a cluster of 27 SPARC servers. TPC is not HPC, so why run a transactional workload benchmark on a massive SuperCluster?

The list of such gross insensibilities is long. We users (who buy and run businesses on these technologies) have come to the point of taking a holistic view of all this.

IBM parks parallel file system on Big Data's lawn

Hard_Facts
Linux

Re: IBM's realised file systems are key.

Another product riding on the IT buzzword frenzy "Big Data" and trying to capitalize on it.

Yes, there is data growth and there are big-data challenges to handle, but the answer is not simply bigger storage and larger filesystems with efficient data stores...

What is needed is an efficient algorithm that stores more data, occupies less space, and speeds up access to it.

We need IBM, "the most valuable brand", to bring value differently, to solve problems and make "A Smarter Planet".

IBM to park mainframes on the cloud

Hard_Facts
Linux

Re: Everything old is new again

IBM SmartCloud on an IBM mainframe... indeed a smart move to make some smart money riding on the crazy cloud frenzy in the industry.

At its most basic, cloud computing is pools of compute, storage, and networking resources that can be shared by multiple applications and managed collectively using policy-driven processes: converged systems, unified computing for a dynamic infrastructure. So it doesn't necessarily require a mainframe to be "always available, always running".

The Incredible 4PB Hulk: EMC monsterises VMAX

Hard_Facts
Linux

Re: XIV really?

Hope the VMAX 40K catches up with the 3PAR's 450K SPC-1 IOPS (achieved with 1,920-odd disks)...

The VMAX 20K probably couldn't come close enough, and hence we didn't see any SPC-1 result on the 20K...

Here comes the big fat VMAX 40K with 3,400 disks. I don't know how big the market would be for this mammoth, but this unrealistic configuration may at least help it match up/catch up on benchmark numbers and make a "speeds and feeds" statement.