10 Gigabit Ethernet still too expensive on servers

It may not be entirely clear what is holding back the x86 server racket in the most recent quarter – round up the usual suspects – but networking market watcher Dell'Oro says it knows what isn't helping as much as expected: 10 Gigabit Ethernet. In a report casing the 10GE server adapter market, it is a bit hard to figure out …

COMMENTS

This topic is closed for new posts.
  1. Steve Todd
    Stop

    Kind of forgetting the cost of the switches there aren't they

    It's no good having cheap 10G Ethernet cards in your servers if the switches still cost a bloody fortune.

    1. Ian Michael Gumby

      Re: Kind of forgetting the cost of the switches there aren't they

      I'm not sure who you're looking at, but ToR 10GbE switches aren't that bad. Arista seems to have the lead in this space. Cisco? Seems to be $$$.

      Not that bad for enterprises... small companies... maybe still a pain point.

  2. A Non e-mouse Silver badge

    Servers need switches

    I've been having a look around at 10Gb kit. The adapters aren't the problem, IMHO. It's the switches. You either get a 1Gb switch with one or two 10Gb uplinks (too few), or a 24-port 10Gb switch (with no other ports). 24 ports is way too many - especially when you can't plug a 1Gb device into a 10Gb port.

    If switch manufacturers can produce smaller and cheaper switches, I'd be much more likely to move towards 10Gb. Until then, I'll just sit and wait.

    1. Andre Carneiro

      Re: Servers need switches

      " especially when you can't plug a 1Gb device into a 10Gb port."

      That's odd, I could have sworn 10GBASE-T was supposed to be compatible with 1000 and 100?

      1. Anonymous Coward
        Anonymous Coward

        Re: 10GBASE-T

        Obviously the down voter is well versed in tech. /sarcasm

    2. Captain Save-a-ho
      Boffin

      Re: Servers need switches

      You can connect 1Gbps to 10Gbps ports, but the cost of a 10Gbps switch and its associated SFP+ optics makes that a really stupid decision. Why waste the 10Gbps real estate on 1Gbps when a smaller 1Gbps switch is much cheaper?

      One item not addressed in the article is that the uptake of 10Gbps has mostly been reserved for either server virtualization or interswitch/internetwork links. There aren't many instances I know of where my clients have elected to spend on 10Gbps for any specific application. All the ones that come to mind are HPC-related, and none of those represent the sort of volume that would start to shift the industry as a whole.

      1. Ian Michael Gumby
        Boffin

        @Captain Save a ho... HUH ? Re: Servers need switches

        "You can connect 1Gbps to 10Gbps ports, but the costs of 10Gbps switch and its associated SFP+ optics make that a really stupid decision. Why waste the 10Gbps real estate on 1Gpbs when a smaller 1Gbps switch is much cheaper?"

        First, you don't have to go with SFP+ connectors.

        Second, you can go 10GbE at the switch, keep your legacy hardware running, and add in new 10GbE kit as you build out the rack.

        Third, you should also be able to upgrade your 1GbE kit by adding a 10GbE card, provided that you have an open PCIe slot available. (Some 1U boxes only house a single PCIe slot, which may already be in use.)

        Beyond HPC there's this thing called 'big data' and Hadoop...

        -Just saying

  3. Jeff 11
    Stop

    £400 10Gig adapters aren't the issue. They cost roughly three times as much as high quality Gigabit adapters. Spending that much on organic tech growth is pretty much discretionary.

    By contrast, the switches cost around eight times as much as decent Gigabit switches; they're all managed affairs (which isn't particularly desirable in a lot of backbone networks), and an £8k barrier to entry is not an easy pill for a lot of smaller businesses to swallow. The market is not going to take off until this becomes more balanced.

  4. Jim McDonald

    +1 for having 4-8 10GbE ports whether on their own or as upgraded ports on a larger switch.

    Until the price comes down we're stuck with link aggregation.

  5. justincormack
    FAIL

    problem is...

    ... there is no 100Gb Ethernet upstream. No point having 10Gb on the server without a faster uplink.

    1. James 100

      Re: problem is...

      For a small network (20-60 machines?), 10Gb for the server going into one switch, then another 10Gb to the other switches if they're not stackable, would make perfect sense. 1 Gb can be a bottleneck these days: one fast workstation could saturate it, or a network backup, or syncing two servers up, but a well-placed 10Gb link or two could make a big difference without any need for 100Gb anywhere.
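
      As a rough illustration of the bottleneck point (a sketch only - the 2 TB backup size and the ~70% effective utilisation figure below are assumptions, not measurements), the transfer-time arithmetic looks like this in Python:

      # Hours to move a backup over 1Gb vs 10Gb Ethernet.
      # The 2 TB size and 70% effective utilisation are illustrative assumptions.
      def transfer_hours(data_tb, link_gbps, efficiency=0.7):
          data_bits = data_tb * 1e12 * 8               # terabytes -> bits
          effective_bps = link_gbps * 1e9 * efficiency
          return data_bits / effective_bps / 3600

      for speed_gbps in (1, 10):
          print(f"{speed_gbps:>2} Gb/s: {transfer_hours(2, speed_gbps):.1f} h for a 2 TB backup")

      # Roughly 6.3 h at 1Gb/s against 0.6 h at 10Gb/s - one well-placed 10Gb
      # link fixes the overnight-window problem without 100Gb anywhere.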

      Meanwhile, at work, our enterprisey IT department wanted four figures for a single GbE port ... having just rolled out FE everywhere, they said it would mean putting in an all-gigabit access switch specially, which would then need its own fibre links to the aggregation switch too...

      1. Callam McMillan

        Re: problem is...

        My university made use of 10Gbps links between the core switches, and that worked well. Cisco now offer 40Gbps, as I am sure other network vendors do. The problem is cost - for a laugh I quickly priced up a Cisco 6500 with 16 x 10Gbps ports and 2 x 40Gbps uplinks:

        The chassis will set you back £2800

        The power supplies will set you back another £3200

        The supervisor is £9700

        The pair of 40Gbps cards cost £32400

        Two eight port 10Gbps cards cost £38400

        That comes to a nice £86500. So by the time you've added in your optics etc. you're probably looking at £100K retail for a switch. Obviously that's including the VAT, and if you're buying that kind of kit you'll have a discount relationship with Cisco, but even so, when you're talking multiple thousands per port, it just goes to support what everybody else is saying with regard to the cost of the switchgear.
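
        For what it's worth, the arithmetic above can be sanity-checked with a few lines of Python (list prices as quoted; treating each 40Gbps port as four 10Gbps-equivalents is just a convenience to get a per-port figure):

        # List prices (GBP) as quoted above for the Cisco 6500 build-out.
        parts_gbp = {
            "chassis": 2800,
            "power supplies": 3200,
            "supervisor": 9700,
            "2x 40Gbps cards": 32400,
            "2x 8-port 10Gbps cards": 38400,
        }
        total = sum(parts_gbp.values())                     # 86500
        ports_10g, ports_40g = 16, 2
        # Count each 40G port as four 10G-equivalents purely for a per-port figure.
        per_10g_equiv = total / (ports_10g + 4 * ports_40g)
        print(f"hardware total: £{total}")
        print(f"per 10G-equivalent port: £{per_10g_equiv:,.0f}")  # ~£3,604 before optics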

        1. Schmeelster

          Re: problem is...

          Simple response here is: don't use Cisco. There is no benefit in using the 6500 (aka "Trigger's Broom") as it is a jack of all trades and does not offer the simple 10GbE switching you would need. Look at all the others (in alphabetical order) - Arista, Brocade, Dell (Force10), Gnodal, HP, Mellanox... possibly missed some others... all would be keen to get anyone away from Cisco, with kit that can be considerably less costly to buy and to run and may actually also offer considerably more benefits in the long run.

  6. Herby

    Or...

    We can all go back to token ring. Needs to be reliable, eh?

    Bits at 10G speeds aren't far apart (a little over an inch), and the circuitry to figure it all out isn't easy to get right. Given that, it may be easier to just have point-to-point links between things in your server farm and forgo the switches entirely. So, your server may need a bunch (half a dozen) of 10G ports, but the rest (to talk to the rest of the world) can be a bit slower.

    Decisions, decisions....

  7. naive

    It is a bit strange that 10GBit is taking so long to become an affordable standard solution.

    100MBit came in the late 90s and was succeeded quite quickly by 1GBit, a 10-fold increase over 100MBit, in the period 2000-2005.

    The hardware manufacturers also play a role. I suspect that most of them held it back because, until recently, their PCI buses were way too slow for 10GBit. Also, the servers of some premium brand (starting with an H) would often blue-screen when other vendors' 10GBit cards were used, because the PCI bus timing is bad. (These cards did great on all the other brands.)

    Since the networking industry still charges heavily for 10GBit connectivity kit, due to a lack of competition because nearly everybody buys the same brand, 10GBit remains a distant promise.

  8. Anonymous Coward
    Anonymous Coward

    $629?!?

    Where can I buy? Sounds like the perfect NIC for my cluster nodes. As everyone says, switches are way too expensive still.

    1. Tom Womack
      Boffin

      Re: $629?!?

      For cluster nodes I'm not quite sure why you wouldn't use InfiniBand; colfaxdirect.com will sell you an 8-port QDR switch for $2000 and adapter cards for $600, and it's four times the speed of 10Gb Ethernet.

      (They used to do SDR cards, which are the equivalent of 10Gb, for $125 with a $750 switch, but those have mysteriously disappeared.)
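
      To put some hedged numbers on that (prices as quoted in this thread, the switch cost split evenly across eight nodes, and QDR treated as roughly 32Gbps usable - all approximations):

      def per_gbps_per_node(adapter_usd, switch_usd, switch_ports, gbps):
          # Adapter plus an even share of the switch, per usable gigabit.
          return (adapter_usd + switch_usd / switch_ports) / gbps

      ib_qdr = per_gbps_per_node(600, 2000, 8, 32)
      print(f"QDR InfiniBand: ~${ib_qdr:.0f} per Gb/s per node")     # ~$27
      print(f"10GbE NIC alone: ~${629 / 10:.0f} per Gb/s per node")  # ~$63, before any switch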

  9. the spectacularly refined chap

    10GbE is still a bit TOO quick...

    Consider the situation with previous Ethernet speedups. Even in the early 90s a relatively mundane desktop PC could easily saturate a 10Mbit link. My first job after graduation in 1998 was still using a Mac SE/30 running A/UX as a fileserver. It was slow even then, but more than fast enough to go as fast as the network fabric. When people switched to 100Mbit, a regular desktop could almost immediately fill at least half of it.

    With gigabit it was less clear-cut, depending on what date you pick for comparison (there are still a lot of 100Mbit machines out there even now), but if we say perhaps five years ago, your typical desktop may have managed 30MBps real-world (not benchmark) performance from the hard drive. Even now many machines using real disks struggle to hit 100MBps in real-world conditions. Even if they can go faster, it is probably still under 200MBps - you are not going to get the same immediate five or six-fold performance increase.

    Sure, servers are servers and desktops are desktops, but that held just as true in previous generations, and unless there is a lot of bandwidth concentration going on it is end-to-end single-connection speed that still matters in many cases. Without the large performance boost there is little incentive for early adopters to take on 10GbE while the pricing is so unattractive, which creates a catch-22 - no-one will adopt it since the benefits don't outweigh the cost, and the cost won't come down until people adopt it. That will only change when cost-effective storage speeds up.
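
    To make the storage argument concrete (using the ballpark MB/s figures above, not benchmarks), the fraction of a link that a single disk stream can fill is easy to work out:

    def link_fill(disk_mbytes_per_s, link_gbps):
        # Fraction of an Ethernet link one disk stream could fill (payload only,
        # capped at 100% where the disk can outrun the link).
        return min(1.0, (disk_mbytes_per_s * 8e6) / (link_gbps * 1e9))

    for disk in (30, 100, 200):       # MB/s: older desktop, typical disk, fast disk
        for link in (1, 10):
            print(f"{disk:>3} MB/s disk on {link:>2}GbE: {link_fill(disk, link):5.1%}")

    # A 100 MB/s disk fills ~80% of gigabit but only ~8% of 10GbE, which is why
    # single-host storage speed, not the NIC, caps the visible benefit.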

    1. DavidRa
      Boffin

      Re: 10GbE is still a bit TOO quick...

      While that's absolutely true, there are a couple of factors that many (including me) thought would force adoption faster.

      First and foremost, virtualisation. When you have 30 VMs on a single host (not hard to do whether you're running ESX, Xen, KVM or Hyper-V), even a 4Gbps channel averages out at under 135Mbps per VM (and you're hoping the peaks on one VM cancel the troughs on another for higher throughput).

      Secondly, iSCSI storage, where 4-8Gbps total from the array might be OK for smaller environments but isn't enough for larger ones, especially during backup.

      Thirdly, server backup. Aggregation only helps so much, as many (most?) aggregations split data across the connections based on source and/or destination IPs, so a single stream is limited to 1Gbps.

      All those scenarios would fare better with 10Gbps, especially if all the vendors start doing the funky network/bandwidth splitting like HP - where each 10GbE can be logically separated into 4 different virtualised adapters for the OS.

      In those cases, 2 x 10Gbps connections provide similar connectivity and throughput to 12 x 1Gbps connections. If 10Gb is only 5x the price, it's CHEAPER than doing 1Gbps.
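
      The arithmetic behind those numbers, sketched in Python (the 5x price ratio is the hypothetical above, and the 1Gbps per-port cost is just an arbitrary unit price for comparison):

      vms_per_host, agg_1g_links = 30, 4
      print(f"per-VM average on {agg_1g_links}x1Gb: {agg_1g_links * 1000 / vms_per_host:.0f} Mbps")  # ~133 Mbps

      cost_1g_port = 1                  # arbitrary unit price for a 1Gb port
      cost_10g_port = 5 * cost_1g_port  # the hypothetical 5x ratio
      print("12 x 1Gb ports cost:", 12 * cost_1g_port)    # 12 units for 12Gb total
      print(" 2 x 10Gb ports cost:", 2 * cost_10g_port)   # 10 units for 20Gb total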

    2. James Ashton

      No Desktop Need to Push Volume

      10Gbps is way faster than desktops need: 1Gbps is plenty for almost any application. So there goes the volume for 10Gbps hardware, and it will stay, expensively, in the server room.

      In contrast, 1Gbps ports appeared on desktop motherboards very quickly after the standard was released. The data rate was a good match for what a hard drive could manage, so it made sense. All-gigabit switch prices quickly became very affordable too. None of this is going to be true this time around.

      1. Epobirs

        Re: No Desktop Need to Push Volume

        I agree with the first part about sales volumes at the client not being there to drive demand like previous generations.

        But you skipped the better part of a decade between the standard appearing and it becoming typical in new desktop systems. At the time Gigabit Ethernet showed up, most PCs didn't have a bus that could drive it properly. You had to have the extended PCI slot found only in high-end workstations until PCIe started replacing PCI in consumer desktops. Until then the biggest value for gigabit in most networks was to relieve backbone congestion.

        It hasn't been that long since Intel motherboards included third-party controllers for gigabit networking support. Gigabit is now cheap enough that it's used throughout my entire household network, from the router on down to the switch in the entertainment center, although the PS3 is the only item in there that does gigabit. (The Wii U lacks a wired network port entirely, much to my annoyance.)

        Gigabit had a reasonable evolution but there is still plenty of 100 Mb gear being sold, especially on the consumer side. It's 100 Mb that had it really easy. Most of the world never dealt with networking before 100 Mb became the rule. In fact, I find I cannot recall ever seeing 10 Mb embedded on a motherboard.

  10. Rob F

    I am working on an education VDI deployment

    on a large greenfield campus. Their preferred vendor was Dell and their core switching is Brocade.

    Pricing up the Dell blades was a strange one, because the M620 comes with 10Gb on the LOM. Additional 10Gb Broadcom cards were the same price as 1Gb cards, and that is before the insane discounts that the education sector gets.

    The M1000e was kitted out with 6x IO Aggregators, which have QSFP+ on board for 40Gb uplinks. Unless you start spending silly money trying to convert the QSFP+ to optical, you need an SFP+ or QSFP+ capable switch. The one that made sense was the Force10 S2410, but it doesn't exist any more, so we had to go for the S4810, which comes in at $30k, and then there is the price of the cables. Again, education prices made these dramatically cheaper.

    Where it got really expensive was the Brocade MLXe 24-port SFP+ modules. Unfortunately the Brocade MLXe doesn't support 40Gb QSFP+, but their top-of-rack switch, the ICX 6615, does. Also, Dell were being really fussy about what they would support, so this became a no-go.

    So really, this reiterates what others have said: switch prices and options are the problem. I am struggling to justify having 96 SFP+ ports on a two-switch redundancy design, but it was really the only option given to us. When the environment is fully kitted out with SANs, uplinks and connections to the blade chassis, we will only need to use 4 per blade chassis, 1 per iSCSI SAN and maybe 4 to the core.
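
    A quick port-count sketch of what that means in practice (the per-item port figures are from above; the counts of blade chassis and iSCSI SANs are assumptions purely for illustration):

    ports_available = 96                   # 2 x 48-port SFP+ switches, per the design above
    chassis, sans, core_uplinks = 2, 2, 4  # assumed counts, not from the actual deployment
    ports_used = chassis * 4 + sans * 1 + core_uplinks
    print(f"{ports_used} of {ports_available} SFP+ ports used "
          f"({ports_used / ports_available:.0%})")         # 14 of 96, ~15%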

    Datacentres have a different problem, but smaller environments really need some smaller switch options, or a reduced price point, to justify 10Gb.
