PCIe speed to double by 2019 to 128GB/s

The Peripheral Component Interconnect Special Interest Group (PCI-SIG) has revealed a roadmap for PCI 5.0 to debut in 2019 at 128GB/s. And that's before it finalises PCI 4.0 at half that speed. The SIG met last week for its annual DevCon. The basic message from the event is that I/O bandwidth needs to double every three years, …

  1. A Non e-mouse

    R/F Design

    There must be some serious R/F design going on with those link speeds.

    1. jab701

      Re: R/F Design

      Yeah, there is.

      I was working for a company on 400Gbps Ethernet solutions last year. The 50Gbps transceivers used on the link (eight are bonded together) are running PAM-4 modulation. :)

      I am wondering how long it will be before PCI-E will require optical fiber. It was thrown about as an idea for PCIe 4, but maybe that's where they will be going for PCIe 5.
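
      (A rough check of the lane arithmetic above, as a minimal Python sketch. The figures are just the nominal ones from the comment -- eight lanes at 50 Gb/s -- and FEC/line-coding overhead is ignored; PAM-4 carries two bits per symbol, so it halves the symbol rate compared with NRZ.)

        # Rough sketch of the lane arithmetic above. Nominal figures only;
        # FEC and coding overhead are ignored.
        LANES = 8
        LANE_RATE_GBPS = 50          # nominal bit rate per lane, Gb/s
        PAM4_BITS_PER_SYMBOL = 2     # PAM-4 encodes 2 bits/symbol; NRZ encodes 1

        aggregate = LANES * LANE_RATE_GBPS                         # 400 Gb/s
        pam4_symbol_rate = LANE_RATE_GBPS / PAM4_BITS_PER_SYMBOL   # 25 GBd
        nrz_symbol_rate = LANE_RATE_GBPS / 1                       # 50 GBd

        print(f"aggregate link rate: {aggregate} Gb/s")
        print(f"PAM-4 symbol rate per lane: {pam4_symbol_rate} GBd")
        print(f"NRZ symbol rate per lane:   {nrz_symbol_rate} GBd")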

    2. Christian Berger

      there already is

      PCI-E already has symmetrical lines, which makes the problem a bit easier, but those lines are properly engineered HF transmission lines. As far as I know the lanes don't even have to be the same length electrically, as each lane has its own sync signal. So although this is HF engineering, a lot has been done to make the problem simpler and therefore more solvable.

      However, the problem is much more complex with memory interfaces, as you have more lines and you need all of them to have the same delay.

  2. Anonymous Coward

    But Can It Run Crysis?

  3. Christopher Reeve's Horse

    One bus to rule them all?

    Looks like more and more things are going to consolidate to using PCI-E connections in a system, whether via an internal slot or via high-speed external connections like USB-C. Storage is increasingly going that way, and as Optane starts to blur the lines between storage and memory, I wonder if DRAM will end up going that way too, once things get fast enough?

    1. Christian Berger

      Re: One bus to rule them all?

      Well, external PCI-E on a PC is a very bad idea, as it gives an external attacker really easy access to your PC.

      However, PCI-E is already common in some more exotic places like routers. High-performance routers (the kind that need kilowatts just to drive the fans) already use PCI-E to connect the interfaces. They use a fabric of PCI-E switches there. This might even be the first area where we see these new interconnects.

  4. Matthew 17

    I wonder what the Peripheral Component Interconnect Special Interest Group do for fun?

    And is there a casual interest group also?

  5. Sgt_Oddball

    the mind boggles

    Just what sort of throughput could be achieved, either with graphics cards or flash cards? Even better when you start linking them (flash RAID, anyone? Maybe 3-way SLI?). That's a lot of bits to push.

  6. algotr

    128 GB/s, is that BOTH directions?

    Is it a maximum of 64 GB/s in ONE direction?

  7. Lusty

    Shoddy

    Half of the article refers to PCI rather than PCIe.

    Bandwidth won't be 128GB/s, it'll be 4GB/s per lane. 16x is just 16 lanes, but you could easily use 32 lanes and get 256GB/s.

    The whole point of PCIe is that it uses lanes, and you configure lanes in the mobo/chassis to your requirement. Network cards are usually not 16x, for instance.
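
    (To put numbers on the lane arithmetic in this sub-thread, a minimal back-of-the-envelope Python sketch follows. It assumes PCIe 5.0 keeps the 128b/130b encoding of Gen 3/4 and ignores protocol overhead; the headline ~128GB/s then comes out as the x16 figure counted across both directions.)

      # Back-of-the-envelope PCIe bandwidth per generation and lane count.
      # Assumes 128b/130b encoding for Gen 3/4/5 and ignores protocol overhead.
      GEN = {
          # generation: (GT/s per lane, line-coding efficiency)
          3: (8.0, 128 / 130),
          4: (16.0, 128 / 130),
          5: (32.0, 128 / 130),   # assumed here to keep 128b/130b
      }

      def gb_per_s(gen, lanes):
          """Approximate one-direction bandwidth in GB/s for a given link width."""
          gt_s, efficiency = GEN[gen]
          return gt_s * efficiency * lanes / 8   # bits -> bytes

      for gen in (3, 4, 5):
          for lanes in (1, 4, 16):
              one_way = gb_per_s(gen, lanes)
              print(f"Gen{gen} x{lanes}: ~{one_way:.1f} GB/s each way, "
                    f"~{2 * one_way:.1f} GB/s both directions")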

    1. Charles 9

      Re: Shoddy

      Not really, as the slot design isn't long enough to make a 32-lane slot. GPUs are the traditional use case for something that needs all that bandwidth, because the GPU chews through tons of data when running full out, so it's understood that the max bandwidth quoted is for a max-sized (16-lane) slot and provides a consistent metric. And while most network cards wouldn't need 16 lanes, an adapter for the emerging Ethernet standards probably would. Also, the trend in CPU and motherboard tech is to provide more of these lanes to accommodate more devices using them, such as NVMe solid-state drives (these currently top out at 4x via the U.2 connector, but a future spec may expand this to improve performance).

      1. Lusty

        Re: Shoddy

        And nobody ever made a dual-slot card, I suppose?

        The point was that most systems have a mix of 4, 8 and 16 lane slots in order to balance the number of devices against bandwidth to devices. A lane or two is usually taken up by motherboard functionality too, such as USB, so my point stands that PCIe 5 is 4GB/s per lane and can be aggregated, while the article said something quite different in order to pointlessly sensationalise a quite bland subject.

        1. Anonymous Coward

          Re: Shoddy

          Except that it's not just El Reg. MOST journos quote the x16 speed, as that has been the benchmark ever since PCIe v1.0, where the aggregate bandwidth was the main selling point to wean buyers off AGP.

          1. Lusty

            Re: Shoddy

            Maybe in gamer mags. In the data centre 16x is pretty irrelevant for most use cases.

  8. talk_is_cheap

    AMD interconnects

    It starts to become very clear why AMD is happy to just use PCIe links as the interconnect for its new server chips. Currently, the design uses 64 PCIe 3.0 links to connect two processors, which is as far as their published designs go. With PCIe 5.0 they could greatly increase the overall interconnect speed, or allow a four-processor design using 16 links* between each pair with the same overall performance as now. I guess PCIe 4.0 will provide a shorter-term compromise, with four possible processors connected at a higher overall speed but lower processor-to-processor speeds.

    * A four-processor system would only need 48 PCIe links per processor (16 to each of the other three processors). This would allow the possibility of hypercube designs (many processors not directly connected to each other) if AMD designs the correct protocols for cross-processor communications.
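
    (The link-count arithmetic in the footnote is easy to check with a small Python sketch; the socket counts and link widths below are just the numbers from the comment, not published AMD specifications.)

      # Lanes each socket must dedicate to reach every other socket directly,
      # in a fully connected mesh of `sockets` sockets with `width`-lane links.
      def lanes_per_socket(sockets: int, width: int) -> int:
          return (sockets - 1) * width

      print(lanes_per_socket(2, 64))   # 64 -- the two-processor example above
      print(lanes_per_socket(4, 16))   # 48 -- the hypothetical four-processor design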
