Dell goes on Epyc server journey with AMD

Dell is producing one- and two-socket rackmount servers using AMD Epyc processors alongside its Xeon SP server family. The Epyc processors are said to be faster than equivalent Xeons. There are three AMD-powered PowerEdge servers: the R6415, R7415, and R7425. These accompany the PowerEdge 14G R640 and R740 Xeon SP servers in …

  1. BoomHauer

    2TB vs 1TB is typically the difference between what's available at launch and the higher-density parts that ship six months in.

  2. Voland's right hand Silver badge

    Finally, it is long overdue to have some competition

    There is finally some glimmer of competition in the server segment. Hallelujah.

  3. johnnyblaze

    Go AMD. We've got our first 2 socket EPYC server arriving this week. First of many if things go to plan.

  4. Daniel von Asmuth

    Not Lying

    NL-SAS is no Dutch tradition, but seems to be a consumer disk with a SAS interface...

    https://www.techrepublic.com/blog/data-center/how-sas-near-line-nl-sas-and-sata-disks-compare/

    2 TB? Guess you need to switch to SPARC or POWER for some main memory.

    If AMD is faster than Intel, is that because AMD's heaviest processor has 32 cores against Intel's 28, or are they comparing same-price models (for similar parts, Xeon is slightly faster, but AMD is a lot cheaper)?

    1. Anonymous Coward

      Re: Not Lying

      If AMD is faster than Intel, is that because AMD's heaviest processor has 32 cores against Intel's 28, or are they comparing same-price models (for similar parts, Xeon is slightly faster, but AMD is a lot cheaper)?

      With Meltdown fixes applied, AMD is probably faster in more than one comparison.

      Looking at the HP benchmarks, which are absolute and pre-Meltdown, Intel was 11% faster per core. The Meltdown fixes then cut Intel's numbers by up to 20% for real-life transactional, cloud, and web loads.
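
      Back-of-the-envelope, those figures put the flagship parts roughly here (a sketch in Python; Epyc per-core throughput is normalised to 1.0, and the worst-case 20% penalty is assumed to apply in full):

      ```python
      # Figures quoted above: Xeon 11% faster per core pre-Meltdown,
      # fixes cost Intel up to 20%; flagship core counts are 28 vs 32.
      EPYC_CORES, XEON_CORES = 32, 28

      epyc_per_core = 1.00     # normalised baseline
      xeon_pre = 1.11          # +11% per core, pre-Meltdown
      xeon_post = 1.11 * 0.80  # worst case: -20% with fixes applied

      print(f"Epyc socket:            {EPYC_CORES * epyc_per_core:.1f}")  # 32.0
      print(f"Xeon socket (pre-fix):  {XEON_CORES * xeon_pre:.1f}")       # 31.1
      print(f"Xeon socket (post-fix): {XEON_CORES * xeon_post:.1f}")      # 24.9
      ```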

      1. P. Lee

        Re: Not Lying

        Faster overall, but if you're licensed per core, you may have problems winning MS SQL Server gigs, unless the throughput is better balanced and you're shovelling more data out of the interfaces.
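
        A quick sketch of the per-core licensing maths (the $7,000/core figure below is purely illustrative, not a real quote):

        ```python
        # Per-core licensing punishes many slower cores: for the same
        # delivered throughput, the bill is bigger. Price is an assumption.
        LICENSE_PER_CORE = 7_000  # hypothetical annual cost per core

        def license_cost(cores: int) -> int:
            return cores * LICENSE_PER_CORE

        print(license_cost(28))  # 28 faster cores -> 196000
        print(license_cost(32))  # 32 slower cores -> 224000
        ```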

        Open source is the way! :)

  5. Ian Baker

    Presumably 1 TB with a single processor installed and the maximum of 2 TB when both are present?
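
    If so, the arithmetic works out under Epyc's eight memory channels per socket and two DIMMs per channel (the 64GB DIMM size below is an assumption; the article doesn't say which parts produce the headline figure):

    ```python
    # Capacity sketch: 8 channels x 2 DIMMs x 64GB per socket.
    CHANNELS, DIMMS_PER_CHANNEL, DIMM_GB = 8, 2, 64

    per_socket_gb = CHANNELS * DIMMS_PER_CHANNEL * DIMM_GB
    print(per_socket_gb)      # 1024 -> 1 TB with one CPU installed
    print(per_socket_gb * 2)  # 2048 -> 2 TB with both sockets populated
    ```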

  6. Nate Amsden

    Power usage

    I must've missed the article that talked about HP's Epyc-based DL38x. I was curious about power usage because I'd been reading mixed messages on Epyc's draw (most of which revolved around Epyc's SoC design, so you couldn't do an apples-to-apples comparison with Intel and its extra chipset power).

    Looking at the online HP Power Advisor calculator: https://paonline56.itcs.hpe.com/?Page=Index

    I was just comparing the most basic of specs (CPU + RAM).

    ----------------------------------------------------------------

    Spec 1 - what my org currently uses (for vSphere 5.5 Enterprise Plus)

    DL380 Gen9, 2x 22-core, 24x 16GB 1Rx4 (technically my systems report as two-rank, but the power advisor says the two-rank DIMMs it lists are not compatible with those processors)

    Idle power: 61W; 50% usage power: 230W; 100% usage power: 395W

    Fan loss operation: 540W

    ~8.97W / core @ 100% utilization

    ----------------------------------------------------------------

    Spec 2 - High end Intel DL380

    DL380 Gen10 2x28 core 24x16GB 2Rx8

    Idle power: 64W; 50% usage power: 282W; 100% usage power: 500W

    Fan loss operation: 644W

    ~8.92W / core @ 100% utilization

    ----------------------------------------------------------------

    Spec 3 - High end Epyc

    DL380 Gen10 2x32 core 24x16GB 2Rx8

    Idle power: 174W; 50% usage power: 422W; 100% usage power: 675W

    Fan loss operation: N/A (I assume this system can survive a fan failure?)

    ~10.54W / core @ 100% utilization

    ----------------------------------------------------------------

    I thought the fan-loss operation metric was interesting; it's something I have never seen before. It's also interesting that idle power is almost triple on the AMD system.

    Taking one of my systems at random (lightly loaded) and looking at what iLO reports as power usage over the past 24 hours (22-core / 24x 16GB, WITH two dual-port PCIe 10G NICs and one dual-port PCIe Fibre Channel HBA):

    Average: 162W; Maximum: 259W; Minimum: 160W

    Another random system over the past 24 hours (identical hardware):

    Average: 251W; Maximum: 335W; Minimum: 239W

    I have about 40 DL3x0 systems and have had two fan failures in the past five years (both on the same DL360 server, an HP StoreOnce system). Main point being, I guess, that I am not concerned about frequent fan failures when accounting for power usage.

    Having more cores is nice, but of course you have to balance that against other factors. I find it interesting that power per core on the newer Intel chips is basically identical.
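
    For anyone who wants to reproduce the W/core figures above from the Power Advisor numbers (both boxes are dual-socket; power is the quoted 100%-utilization draw):

    ```python
    # (cores, watts at 100% utilization) per the Power Advisor figures above
    specs = {
        "DL380 Gen9, 2x 22-core Xeon":  (2 * 22, 395),
        "DL380 Gen10, 2x 28-core Xeon": (2 * 28, 500),
        "DL380 Gen10, 2x 32-core Epyc": (2 * 32, 675),
    }

    for name, (cores, watts) in specs.items():
        print(f"{name}: {watts / cores:.3f} W/core")
    # -> 8.977, 8.929, 10.547 (the ~8.97 / ~8.92 / ~10.54 quoted above)
    ```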

    I don't know if the numbers from the HP Power Advisor are accurate or not; I haven't come across any other system-wide numbers myself yet (though I haven't spent a lot of time looking).

    I remember being super excited about the Opteron 6000 when it first came out; I still have 15 DL385 G7s in operation. So far I'm not nearly as excited about Epyc, though I can see its biggest strengths aren't in the market segment that is most important to me (two-socket VMware hosts). If you need massive I/O and PCIe lanes (I don't), they look awesome, though.

  7. Anonymous Coward

    Sell superior AMD systems or lose sales

    Dull has little choice, as Epyc servers will dominate sales for years to come while AMD continues to provide superior performance for the price. Losing its lucrative enterprise market will hurt Intel's revenues, on top of the massive security issues affecting all of Intel's CPU models. AMD and consumers/enterprises are the winners.

  8. Anonymous Coward

    DC power

    I think I see a DC power option.

  9. ColonelClaw

    NVMe RAID

    Here's what I don't understand - why would you buy the 24xNVMe chassis option, when there's no support for hardware RAID? In fact, why are there no hardware RAID cards that support NVMe (that I can find)? Are you meant to use a software RAID? Or just use the 24 drives separately?

    What am I missing?

    1. Anonymous Coward

      Re: NVMe RAID

      Hardware RAID is too slow for NVMe. It would add latency where you don't need it.

      Yes, software RAID is the way to go.

    2. vogie563

      Re: NVMe RAID

      NVMe slots like this are for "software-defined" storage roles, like VSAN, etc. You either have software RAID, or you have object-level replication doing your data protection instead of a RAID card. Putting current RAID-type cards between these drives and the PCIe bus would probably be a bottleneck.
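
      The bottleneck arithmetic is stark if you assume roughly 1 GB/s of usable bandwidth per PCIe 3.0 lane (an approximation) and an x4 link per U.2 drive:

      ```python
      # 24 NVMe drives vs one RAID card's uplink, rough PCIe 3.0 maths.
      GB_PER_LANE = 1.0  # approx usable GB/s per PCIe 3.0 lane

      drives, lanes_per_drive = 24, 4
      aggregate = drives * lanes_per_drive * GB_PER_LANE  # ~96 GB/s of drives

      card_uplink = 16 * GB_PER_LANE  # best case: card in an x16 slot

      print(f"drives: ~{aggregate:.0f} GB/s, card uplink: ~{card_uplink:.0f} GB/s")
      # Everything behind the card funnels through that single x16 link.
      ```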

      1. ColonelClaw

        Re: NVMe RAID

        Thanks for the answers, Vogie and AC!

        Not quite sure why I got downvoted for asking a question

    3. pbuschman

      Re: NVMe RAID

      Software RAID (or some other resiliency model entirely) is better for an all-NVMe system. There is no way a RAID controller can keep up with the performance of even two drives, let alone 24. Dedicate some of those many cores to doing RAID or, even better, skip RAID altogether and create a scale-out cluster with resiliency above the level of the individual server.

    4. Levente Szileszky

      Re: NVMe RAID

      If you really want to try, you can use Highpoint's new SSD7120 NVMe RAID HBA, which also gives you four dedicated x4 U.2 connectors on a single x16 card: http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm

      Where it has some merit is not top sequential read (as others have pointed out, software will beat it there), but I can imagine it might beat the software approach when it comes to random writes across four NVMe drives...

      ...that being said, my main interest in this ~$3xx HBA is the ability to add U.2 drives (which come in much larger sizes than M.2 sticks) to pretty much any host. :)

  10. msroadkill

    As we know, buyers have different priorities, but I think the largest group is focused on I/O, and in this respect Intel is lame on single-socket rigs. AMD's 128 lanes win hands down.

    The most exciting improvement in IT has been NVMe, but the drives are very lane-hungry.
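
    A rough lane budget shows why, taking the commonly quoted per-socket totals (128 for Epyc, 48 for Xeon SP) and reserving some lanes for NICs and boot media (the reservation is an assumption):

    ```python
    # How many x4 NVMe drives fit on one socket's lane budget?
    def max_nvme_drives(total_lanes: int, reserved: int = 16) -> int:
        return (total_lanes - reserved) // 4  # 4 lanes per drive

    print(max_nvme_drives(128))  # Epyc:    28 drives
    print(max_nvme_drives(48))   # Xeon SP:  8 drives
    ```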

  11. Crypto Monad Silver badge

    Check out the pricing

    http://www.dell.com/en-uk/work/shop/servers-storage-and-networking/sc/servers/poweredge-rack-servers

    R640: Starting at £1,937.74 (for 10-core Xeon, dual CPU socket, 16GB RAM)

    R6415: Starting at £4,438.88 (for 8-core Epyc, single CPU socket, 8GB RAM)

    That is one hell of a premium for getting a Meltdown-free system :-(
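
    Per core, the gap is even uglier (base-config list prices as quoted above; the RAM differs, so it's a crude comparison):

    ```python
    # Price per core from the quoted starting prices.
    r640_price,  r640_cores  = 1937.74, 10  # 10-core Xeon config
    r6415_price, r6415_cores = 4438.88, 8   # 8-core Epyc config

    print(f"R640:  £{r640_price / r640_cores:.2f}/core")    # £193.77
    print(f"R6415: £{r6415_price / r6415_cores:.2f}/core")  # £554.86
    ```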

    1. anoncow

      Re: Check out the pricing

      Something is wrong with your price information: "The R6415 starts at $2,179.00". Maybe you are looking at pricing for a 32-core unit? In that case, £4,438.88 is not too bad. It would be less now, because the previously rare 32-core parts are in better supply.
