The 2TB vs 1TB difference is typically a matter of what's available at launch versus six months in, plus greater DIMM density.
Dell goes on Epyc server journey with AMD
Dell is producing one- and two-socket rackmount servers using AMD Epyc processors alongside its Xeon SP server family. The Epyc processor is said to be faster than equivalent Xeons. There are three PowerEdge AMD-powered servers: the R6415, R7415, and R7425. These accompany the PowerEdge 14G R640 and R740 Xeon SP servers in …
COMMENTS
-
Tuesday 6th February 2018 18:41 GMT Daniel von Asmuth
Not Lying
NL-SAS is no Dutch tradition; it seems to be a consumer-grade disk with a SAS interface...
https://www.techrepublic.com/blog/data-center/how-sas-near-line-nl-sas-and-sata-disks-compare/
2 TB? Guess you need to switch to SPARC or POWER for some main memory.
If AMD is faster than Intel, is that because AMD's heaviest processor has 32 against 28 cores or are they comparing same-price models (for similar parts, Xeon is slightly faster, but AMD a lot cheaper)?
-
Tuesday 6th February 2018 19:06 GMT Anonymous Coward
Re: Not Lying
If AMD is faster than Intel, is that because AMD's heaviest processor has 32 against 28 cores or are they comparing same-price models (for similar parts, Xeon is slightly faster, but AMD a lot cheaper)?
With Meltdown fixes applied, it's probably faster in more than one comparison.
Looking at the HP benchmarks, which are absolute and pre-Meltdown, Intel was 11% faster per core. The Meltdown fixes then cut Intel's numbers by up to 20% for real-life transactional, cloud and web loads.
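A quick back-of-envelope check on those two figures (a sketch only; the 11% and 20% numbers are taken from above, and "up to 20%" is treated as the worst case):

```python
# Intel's pre-Meltdown per-core lead, discounted by the worst-case fix penalty.
intel_percore_lead = 1.11   # Intel 11% faster per core, pre-Meltdown
meltdown_penalty = 0.20     # up to 20% lost on transactional/cloud/web loads

post_fix = intel_percore_lead * (1 - meltdown_penalty)
print(f"Intel per-core vs Epyc after fixes: {post_fix:.2f}x")  # prints 0.89x
```

So in the worst case the 11% lead flips to roughly an 11% deficit per core.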
-
-
Tuesday 6th February 2018 21:38 GMT Nate Amsden
Power usage
I must've missed the article that talked about HP's DL38x Epyc. I was curious about power usage because I had been reading mixed messages on Epyc's power draw (most of that revolved around Epyc's SoC design, so you couldn't do an apples-to-apples comparison with Intel and its extra chipset power).
Looking at this online HP Power advisor calculator https://paonline56.itcs.hpe.com/?Page=Index
I was just comparing most basic of specs (CPU + RAM).
----------------------------------------------------------------
Spec 1 - what my org currently uses (for vsphere 5.5 enterprise+)
DL380 Gen9, 2x 22-core, 24x 16GB 1Rx4 (technically my systems report as 2-rank, but the power advisor says the 2-rank DIMMs they list are not compatible with those processors)
Idle power: 61W 50% usage power: 230W 100% usage power: 395W
Fan loss operation: 540W
~8.97W / core @ 100% utilization
----------------------------------------------------------------
Spec 2 - High end Intel DL380
DL380 Gen10 2x28 core 24x16GB 2Rx8
Idle power: 64W 50% usage power: 282W 100% usage power: 500W
Fan loss operation: 644W
~8.92W / core @ 100% utilization
----------------------------------------------------------------
Spec 3 - High end Epyc
DL380 Gen10 2x32 core 24x16GB 2Rx8
Idle power: 174W 50% usage power: 422W 100% usage power: 675W
Fan loss operation: N/A (I assume this system can survive fan failure??)
~10.54W / core @ 100% utilization
----------------------------------------------------------------
I thought the fan-loss operation metric was interesting; it's something I have never seen before. It's also striking that idle power is almost triple on the AMD system.
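The W/core figures above are just the 100%-load number divided by total core count; for anyone wanting to rerun it (figures copied from the Power Advisor output quoted above):

```python
# Watts per core at 100% utilization, from the HP Power Advisor figures above.
specs = {
    "DL380 Gen9  2x22-core Xeon": (44, 395),   # (total cores, watts @ 100%)
    "DL380 Gen10 2x28-core Xeon": (56, 500),
    "DL380 Gen10 2x32-core Epyc": (64, 675),
}
for name, (cores, watts) in specs.items():
    print(f"{name}: {watts / cores:.2f} W/core")
```

The two Xeon generations land within a few hundredths of a watt per core of each other, while the Epyc box comes out roughly 18% higher per core on these numbers.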
Taking one of my systems at random (lightly loaded) and looking at what iLO reports as power usage over the past 24 hours (22-core / 24x 16GB, WITH 2 dual-port PCIe 10G NICs and 1 dual-port PCIe Fibre Channel HBA):
Average: 162W Maximum: 259W Minimum: 160W
Another random system over the past 24 hours (identical hardware):
Average: 251W Max: 335W Min: 239W
I have about 40 DL3x0 systems and have had 2 fan failures in the past 5 years (both on the same DL360 server, an HP StoreOnce system). The main point is that I'm not worried about frequent fan failures when accounting for power usage.
Having more cores is nice, but you have to balance that against other factors as well, of course. I find it interesting that power per core on the newer Intel chips is basically identical.
I don't know whether the numbers from the HP Power Advisor are accurate or not - I haven't come across any other system-level numbers myself yet (though I haven't spent a lot of time looking).
I remember being super excited about the Opteron 6000 when it first came out (I still have 15 DL385 G7s in operation), but so far I'm not nearly as excited about Epyc. To be fair, though, I can see its biggest strengths aren't in the market segment that matters most to me (two-socket VMware hosts). If you need massive I/O and PCIe lanes (I don't), they look awesome.
-
Wednesday 7th February 2018 01:46 GMT Anonymous Coward
Sell superior AMD systems or lose sales
Dull has little choice, as Epyc servers will dominate sales for years to come while AMD continues to provide superior performance for the price. Losing its lucrative enterprise market to AMD will hurt Intel's revenues, on top of the massive security issues affecting all of its CPU models. AMD and consumer/enterprise buyers are the winners.
-
Wednesday 7th February 2018 14:14 GMT ColonelClaw
NVMe RAID
Here's what I don't understand - why would you buy the 24xNVMe chassis option, when there's no support for hardware RAID? In fact, why are there no hardware RAID cards that support NVMe (that I can find)? Are you meant to use a software RAID? Or just use the 24 drives separately?
What am I missing?
-
-
Wednesday 7th February 2018 18:01 GMT vogie563
Re: NVMe RAID
NVMe slots like this are for "software-defined" storage roles, like vSAN. You either have software RAID, or you have object-level replication doing your data protection instead of a RAID card. Putting current RAID-type cards between these drives and the PCIe bus would probably be a bottleneck.
-
Monday 9th April 2018 07:06 GMT pbuschman
Re: NVMe RAID
Software RAID (or some other resiliency model entirely) is better for an all-NVMe system. There is no way a RAID controller can keep up with the performance of even 2 drives, let alone 24. Dedicate some of those many cores to doing RAID or, even better, skip RAID altogether and create a scale-out cluster with resiliency above the level of the individual server.
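For the curious, the plain-Linux version of that looks something like this (a sketch only; the device names are examples, and md RAID-10 is just one of the resiliency options mentioned above):

```shell
# Assemble an md (software) RAID-10 across four NVMe namespaces.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

mkfs.xfs /dev/md0          # any filesystem; XFS is a common choice here
mdadm --detail /dev/md0    # check array state and layout
cat /proc/mdstat           # watch the initial resync progress
```

The RAID work then runs on the host CPU, which is exactly the "dedicate some of those many cores" trade-off.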
-
Thursday 19th April 2018 21:31 GMT Levente Szileszky
Re: NVMe RAID
If you really want to try you can use Highpoint's new SSD7120 NVMe RAID HBA which also gives you (4) full 4x U.2 connectors on a single x16 card: http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
Where it has some merit is not top sequential reads (as others pointed out, software will beat it there), but I can imagine it might beat a software approach when it comes to random writes across (4) NVMe drives...
...that being said, my main interest in this ~$3xx HBA is the ability to add U.2 drives (which come in much larger sizes than M.2 sticks) to pretty much any host. :)
-
-
Thursday 8th February 2018 10:18 GMT Crypto Monad
Check out the pricing
http://www.dell.com/en-uk/work/shop/servers-storage-and-networking/sc/servers/poweredge-rack-servers
R640: Starting at £1,937.74 (for 10-core Xeon, dual CPU socket, 16GB RAM)
R6415: Starting at £4,438.88 (for 8-core Epyc, single CPU socket, 8GB RAM)
That is one hell of a premium for getting a Meltdown-free system :-(
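Putting numbers on "one hell of a premium" (prices as quoted above):

```python
# Dell UK starting prices quoted above, in GBP.
r640_xeon  = 1937.74   # R640: 10-core Xeon, dual socket, 16GB
r6415_epyc = 4438.88   # R6415: 8-core Epyc, single socket, 8GB

premium = r6415_epyc - r640_xeon
ratio = r6415_epyc / r640_xeon
print(f"Epyc premium: £{premium:.2f}, {ratio:.1f}x the Xeon starting price")
```

So the Epyc box starts at roughly 2.3x the Xeon box, and with less RAM and one socket to boot.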