* Posts by Casper42

10 publicly visible posts • joined 25 Feb 2012

You've been served: Market rakes in $22bn, Dell does rather well – IDC

Casper42

Re: HPE playing for profit

Not what you think.

There were deals where HPE sold servers to Microsoft and other cloud providers at a slight loss on the hardware, just so they could make some money on attached services and longer-term support contracts.

It's not like they suddenly started gouging the average customer; they just walked away from many of the Tier 1 service providers.

Casper42

Spectre and Meltdown don't affect the Enterprise nearly as much as they do the big cloud providers.

It's also going to be several years before all those issues make it from design to tape-out to full production. Enterprise simply can't wait that long.

Lastly, the newer boxes with patches are still faster than the older boxes, even without patches (which often lag on release)

HPE burns offering to Apollo 6500, unleashes cranked deep learning server on world+dog

Casper42

Re: guesses

The PSUs are front mounted because there was no room in the back, and the cables are on the front because an existing PSU design was 99% recycled (with the fans reversed).

The internal design noted in this article is horribly wrong.

The GPUs all sit in the top rear above the system board, so their hot air goes straight out the back, and the SXM2 connection between GPUs (think SLI bridge from the GeForce cards) can easily link multiple GPUs.

The Apollo 6500 Gen10 QuickSpecs is now live and has more details.

Casper42

Tesla

They are both named after Nikola Tesla, who was a brilliant scientist and made a ton of discoveries and advancements in electricity. Hence the car company's name...

It saddens me that people don't know who he was.

Tesla is Nvidia's compute lineup, whereas GeForce is for gaming and Quadro is for professional workstations (CAD, 3D animation, etc.).

Nvidia GPUs have been named after Mathematicians and Scientists for a while now.

V = Volta

P = Pascal

M = Maxwell

K = Kepler

HPE server firmware update permanently bricks network adapters

Casper42

Once again El Reg likes to post incomplete information.

For all you people blaming lack of testing, the combination that bricks the NIC is using a brand-new driver with firmware that is 2+ years old.

If you follow the DOCUMENTED recipe for drivers and firmware, you'll be fine.

The image and SPP were pulled to prevent customers who don't RTDM from hurting themselves.

Casper42

Re: HP are getting good at this

HP != HPE, and even if they were the same company, the idea that the same people would be working on both is comical.

HP Ink shrinks workstations to puckish form factor

Casper42

HP Ink - is that a Freudian slip?

HP goes off VMware's EVO:RAIL, stops selling sole appliance

Casper42

Why bother with Nutanix? You can still get the HC200 with StoreVirtual for WAY less than the EVO:RAIL config, with or without SSDs depending on your needs.

Besides, this time next year, Nutanix might be nothing more than a software company, and probably owned by Cisco :shudder:

HP freezes out SAN fabric

Casper42

SAN is integrated, not eliminated

I heard the way this works under the sheets is to simply enable a traditional FC switch inside the FlexFabric module that's already there and currently just running in NPIV mode.

I believe that switch ASIC is made by Qlogic in the current model.

So in essence, you are not eliminating the SAN switch; rather than having a large central pair of SAN switches, you are moving the switch out to the edge. Then you use Virtual Connect's own GUI to manage the zoning by simply attaching a "Fabric" to the server profile, as you already do today.

And as far as other storage vendors eventually being supported:

One of the things that makes this possible is that even a moderate 3PAR T400 supports up to 64 host ports (the ports facing the SAN, as opposed to the disk shelves).

Maxing out the FlexFabric module with FC, that's only 8 connections per enclosure.

Which means you can hang a minimum of 8 enclosures off a single T400.

NetApp and EMC arrays generally have fewer than 16 host ports, which would support what, maybe 2 enclosures? The EMC VMAX 20K can grow to 128 host ports but only does 16 ports per 20K engine, so it could perhaps work in this design but would also cost an arm and a leg.

So it's not just vendor lock-in by design; simply comparing the competitors' architectures shows they probably wouldn't work well in this design.
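The port math above can be sketched in a few lines. This is just a rough model of the arithmetic in the comment (host ports divided by FC uplinks per enclosure); the function name and constant are my own, and the port counts come straight from the figures quoted above.

```python
# Sketch of the host-port arithmetic above. Port counts are the ones
# quoted in the comment; the names here are illustrative, not product terms.
ENCLOSURE_FC_UPLINKS = 8  # FlexFabric module maxed out with FC per enclosure

def max_enclosures(array_host_ports: int) -> int:
    """Enclosures an array supports when each uses all 8 FC uplinks."""
    return array_host_ports // ENCLOSURE_FC_UPLINKS

print(max_enclosures(64))   # 3PAR T400 (64 host ports) -> 8 enclosures
print(max_enclosures(16))   # typical NetApp/EMC midrange -> 2 enclosures
print(max_enclosures(128))  # EMC VMAX 20K fully grown -> 16 enclosures
```

Fewer uplinks per enclosure would stretch each array further, which is why 8 enclosures is the floor for the T400, not the ceiling.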

Cisco's 3-ring circus: Xsigo CEO on bait and switches

Casper42

"Open"?

Full disclosure, I work for HP and work with Blades and Virtual Connect every day.

I am curious how HP VC is not considered "Open" but Xsigo is?

Xsigo is a box that sits between the Server and the Network/SAN

VC is the same but happens to fit in the back of an HP Chassis.

Xsigo can connect to any upstream Network equipment

VC can as well.

Xsigo can connect to any upstream SAN environment

VC can too, as long as your SAN supports NPIV.

So is it not open because it only works with HP's blades?

If HP was to partner with Xsigo, how would that change anything that either company does today?

HP already offers IB adapters and switches for all their rack and blade servers; there's nothing preventing someone from using those with Xsigo today, is there?

I fail to see how this partnership would benefit anyone but Xsigo.

Not to mention it's yet another thing I have to manage.

Something Cisco's UCS platform commonly uses as an attack point on the competition.