Internet be nimble, internet be QUIC, Cloudflare shows off new networking shtick

Cloudflare has put its weight behind a new internet protocol that should make mobile browsing faster and more secure. The content delivery network has launched a new test site for Quick UDP Internet Connections (QUIC), complete with a variety of blog posts outlining the protocol's promise in both general and technical terms …

  1. Anonymous Coward

    cellular roaming

    "you will be constantly shifting your network address as you move from cell tower to cell tower"

    If the cellular connection drops completely for longer than the provider's timeout, a new address can be expected, but when roaming between cells one's IP (v4 and/or v6) address should remain the same.

    An address change would usually be expected when roaming between WiFi and Cellular.

    For a "simple" test, open a SSH session from a mobile device to somehwere else, and roam between celular and WiFi to see what happens when your address changes, then while only on cellular go on a walk/bus/taxi journey.

    1. Jamie Jones Silver badge

      Re: cellular roaming

      Aye.... Back in the 90's I held open a telnet session (back in the days where packet sniffing hadn't been invented :-) ) via 2G from West London to East Essex...

  2. Ian Mason

    Shome mishtake shirley?

    Would it be too much to ask that El Reg, when doling out work, gives the stuff about network protocols to people who have even a vague grasp of how they work?

    The relationship of UDP to underlying IP addresses is exactly the same as TCP's relationship to underlying IP addresses, as indeed is QUIC's. So saying "And the reason is that TCP intrinsically assumes you will stay at the same address on the network while you are sending and receiving information." as if the very same thing doesn't apply to UDP or QUIC is a huge clue that the writer is outside the zone of their competence.

    An "explainer" article should do that, not muddy the waters further.

    1. Steve 53

      Re: Shome mishtake shirley?

      Yes, Jesus wept at this article... A cursory check of Wikipedia would have spotted half the issues.

      It will also mean saying goodbye to the protocol that effectively made the internet possible: TCP.

      TCP will continue to be a fallback, not least because there is no support for UDP tunnelling through an HTTP proxy.

      "And the reason is that TCP intrinsically assumes you will stay at the same address on the network while you are sending and receiving information. As soon as you starting moving around however, that address shifts. If you leave your house and your home Wi-Fi to join a 4G network, that's one shift."

      Yes, that would be one. At which point you'd have to tear down the old TCP connection and build a new one. But UDP, despite being stateless, is likely still going through NAT / a Gi firewall, so you'll still need to keep sending packets to keep receiving traffic.
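      Put differently, a client behind NAT has to keep the UDP mapping warm just as it would a TCP one. A keepalive sketch (illustrative only; quic.example.com:443 is a made-up peer, and real QUIC does this with PING frames rather than bare datagrams):

          import socket, time

          PEER = ("quic.example.com", 443)    # hypothetical QUIC-speaking server
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.connect(PEER)

          while True:
              s.send(b"\x00")                 # contents don't matter; refreshing the NAT mapping does
              time.sleep(25)                  # typical NAT UDP timeouts start around 30 seconds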

      "If you get on a bus or a train to head to work in the morning, or if you stroll home at the end of the day, you will be constantly shifting your network address as you move from cell tower to cell tower."

      Handoff between cells generally keeps the same IP. Not for all subscribers, but for the vast, vast majority.

      "This modern use of the internet has already led to plenty of other changes and improvements to existing internet protocols – for example, the shift from HTTP 1.1 to HTTP 2.0 was largely because people now use multiple applications at the same time and expect each to be able receive data."

      Jesus wept. HTTP 2.0 allows multiple streams of data to a single service, not to multiple services, and not from multiple applications. With HTTP 2.0 you'll still establish a new TCP connection for each app to each destination, or, with QUIC, a UDP one.
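      In other words, one app talking to one origin multiplexes many streams over a single connection; different apps or different origins still mean separate connections. A quick sketch of the former (assuming Python 3 and the third-party httpx library with its optional HTTP/2 extra; the URLs are placeholders):

          import asyncio, httpx

          async def main():
              # Three requests to the same origin, multiplexed as separate
              # streams over one HTTP/2 (TCP) connection.
              async with httpx.AsyncClient(http2=True) as client:
                  responses = await asyncio.gather(
                      client.get("https://example.com/a"),
                      client.get("https://example.com/b"),
                      client.get("https://example.com/c"),
                  )
                  for r in responses:
                      print(r.url, r.http_version)    # "HTTP/2" when negotiated

          asyncio.run(main())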

      "What's more, if you are moving around from network address to network address, this UDP approach should end up much faster because it pulls out TCP's checking mechanism, speeding things up."

      Checksums are offloaded to hardware, so the "effort" is minimal. With UDP over IPv4 checksumming is technically optional, but if you skip it you still have to zero the checksum field, so you don't reclaim any bandwidth. Under IPv6 it's mandatory anyway, as skipping the checksum makes no sense. Besides, you need to hash for DTLS regardless.

      What's faster is that you have direct control of the congestion control algorithms, fewer round trips to bring up a "connection", etc.
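      For reference, the checksum being talked about is the bog-standard one's-complement sum (RFC 1071 style) that TCP and UDP both use. A sketch of it, assuming Python 3 (real UDP checksums also cover a pseudo-header of the IP addresses):

          def inet_checksum(data: bytes) -> int:
              # One's-complement sum of 16-bit words, carry folded back in,
              # then complemented. Skipping it under IPv4 still leaves a
              # zeroed 16-bit field on the wire, so no bandwidth is saved.
              if len(data) % 2:
                  data += b"\x00"
              total = 0
              for i in range(0, len(data), 2):
                  total += (data[i] << 8) | data[i + 1]
                  total = (total & 0xFFFF) + (total >> 16)
              return ~total & 0xFFFF

          print(hex(inet_checksum(b"hello world")))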

      "And that's what first Google and now the IETF internet engineers have been working on: how to add TCP-style encryption and loss detection to UDP. It will also add in the latest standards like TLS 1.3."

      TCP doesn't have encryption. TLS only runs over TCP, true, but DTLS (which runs over UDP) has been around for a very long time.

      "It will create problems for people using NAT routers as a way to handle the painfully slow move from IPv4 to IPv6. NAT routers track TCP connections to work seamlessly and since QUIC doesn't use TCP, its connections through such networks could well drop out."

      Bollocks. NAT routers track UDP "connections" in more or less the same way as TCP ones. Plus QUIC clients fall back to TCP in case of issues.

      "Likewise, if a network is using Anycast or ECMP routing – both used for load-balancing - the same problem will likely occur."

      Anycast and ECMP break TCP too, and TCP requires more work to re-establish.

      1. Michael Wojcik Silver badge

        Re: Shome mishtake shirley?

        It will also mean saying goodbye to the protocol that effectively made the internet possible: TCP.

        TCP will continue to be a fallback, not least because there is no support for UDP tunnelling through an HTTP proxy.

        And because many HTTP clients and servers will never implement QUIC, and legacy software tends to stick around far longer than some people think.

        And because the Web is not the Internet. I don't foresee QUIC ever being very popular outside the HTTP domain.

  3. Herby
    Joke

    "Check one two"

    Proves the maxim: Sound guys can only count to two. Then there are those that understand binary.....

    1. EnviableOne

      Re: "Check one two"

      This is only used by those idiots who think they know what they are doing.

      The one-two transition checks the low range; you need the "two" for the hard T, and the two-three transition to check the high range too. So the "check 1, 2" brigade are as clueless as the author!

  4. onefang

    Dropping TCP and trying to layer the same sort of reliability of transport on top of UDP is what SecondLife tried long ago. That didn't work out so well: constant lag, "What does it look like at your end?" *, having to rebake a lot, etc. The advantage of using TCP protocols to transport content around is that we have been doing it for a very long time, we know how to do it, and there are many tools to work around the kinks.

    * Though I'll note something that not many people know, the texture used on terrains is procedurally generated at the client end, with some randomness, each time you visit a place. So not only will that look different to different people, it'll look different next time you look. Nothing to do with UDP v TCP.

    1. Jamie Jones Silver badge

      Aye, and streaming was all UDP (with TCP an unfavoured fallback) at one time.

    2. Anonymous Coward

      Don't blame poor little UDP for second life.

      Linden Labs and its code is to blame for the lag, not UDP : )

      I mean if my house falls down, I don't blame the hammer right?

      UDP should probably lawyer up and seek damages for emotional distress and a toxic work environment for being forced to act as a beast of burden for Second Life's flying todger clouds.

      Seriously though, as TCP hits all sorts of limits that cause less-than-ideal behavior under scale/load/latency, people have been deploying "UDP + whatever I needed from TCP without the other stuff" for ages (a toy version of the pattern is sketched at the end of this comment). The issue is really that none of these "I rolled my own" protocols has been able to convince the rest of the world to use their unfiltered cigarette solution en masse. (My bet is the naive substitution of FEC for reliable delivery or re-transmission windows played a part there, YMMV.)

      So the tools are cowboy country compared to the allure of TCP and its ubiquitous and pervasive library support. TCP is a default assumption that people just make until it starts to break things.

      That leaves the discussion mostly to the small community of people who have probably been wrangling things like UDP and multicast for decades, who by the tone of this thread aren't being wowed out of their socks.
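      For the curious, the rolled-your-own pattern usually looks something like this stop-and-wait toy (purely illustrative: made-up framing, a hypothetical peer, and none of the sliding windows or congestion control a real protocol, QUIC included, needs):

          import socket, struct

          PEER = ("reliable.example.net", 9000)   # hypothetical ack-echoing server
          TIMEOUT = 0.5                           # seconds before retransmitting

          def send_reliably(sock, seq, payload):
              packet = struct.pack("!I", seq) + payload
              while True:
                  sock.sendto(packet, PEER)
                  try:
                      ack, _ = sock.recvfrom(4)
                      if struct.unpack("!I", ack)[0] == seq:
                          return              # acknowledged, move on
                  except socket.timeout:
                      pass                    # lost data or lost ACK: retransmit

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.settimeout(TIMEOUT)
          for seq, chunk in enumerate([b"first", b"second", b"third"]):
              send_reliably(sock, seq, chunk)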

      1. onefang

        Re: Don't blame poor little UDP for second life.

        "Linden Labs and it's code is to blame for the lag, not UDP : )"

        As a developer that has had years of experience working with Linden Labs code, and the OpenSim variety, I'll happily admit it's all seriously crap. I've said so often enough in the past. I was saying so to someone I gave a demo to today when my avatar refused to lie down on the beach towel. But even they have replaced some of their home grown UDP mess with standard HTTP. Sure, UDP isn't to blame, but as you mentioned, LL's crappy re-implementation of bits of TCP was, and their other crap code.

    3. Michael Wojcik Silver badge

      Dropping TCP and trying to layer the same sort of reliability of transport on top of UDP is what SecondLife tried long ago.

      Reinventing TCP with UDP is common enough that it was a major trope on comp.protocols.tcp-ip. Hardly a week would go by without a regular pointing it out to some newbie.

  5. tip pc Silver badge
    FAIL

    VPNs ALREADY USE UDP

    No one wants a lost encrypted packet retransmitted, as it could feed a replay attack: deliberately drop packet x, over and over, and watch how it gets re-encrypted to try to work out the key. That's why VPNs use UDP.

    Space comms use interesting error-correcting, UDP-like protocols simply because the round-trip times make a TCP-like protocol prohibitive.

    Any chance El Reg can get some journalists who actually understand networking to write, or at least proofread, these articles? Network types are very much underserved, yet everything anyone does is totally reliant on us getting our shit to work.

  6. RAM Raider

    I could tell you a joke about UDP...but you may not get it. Oh wait!

    1. Francis Boyle Silver badge

      Yor riht

      I do't undrssnd yur jke.

  7. stiine Silver badge

    If they want to do away with round-trip times

    All they have to do is bring back multicast. We used to multicast the quarterly all-hands broadcasts on our LAN back in the 1990s.
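    On a LAN that really is all there is to it. A minimal sketch (assuming Python 3; 239.1.1.1:5004 is an arbitrary administratively-scoped group):

        import socket, struct

        GROUP, PORT = "239.1.1.1", 5004

        # Sender: one datagram, any number of listeners, no per-listener round trips.
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the LAN
        tx.sendto(b"all-hands starts in 5 minutes", (GROUP, PORT))

        # Receiver: run on each listening machine.
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        rx.bind(("", PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        print(rx.recvfrom(1500))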

  8. LondonDario

    "(UDP) It's your fun but unreliable uncle" best networking protocols metaphor I have heard in 19 years in IT! ;-)

  9. Mario Becroft
    Go

    There is a real point to QUIC though (my inferences only; I haven't read the draft specs).

    The big one is maintaining connection state while moving between networks. It's great when the PPP session to your mobile provider hands over cleanly from cell to cell, but although this can happen, as others noted, it can by no means be relied upon (especially if you are dropping in and out of coverage), and certainly not when moving between wifi networks.

    Now, to my mind this might all be best solved by IPv6 and appropriate routing protocols. But we don't have those deployed in a way an average (or even technically savvy) person can even remotely use. So, pragmatically, the onus of fixing this increasingly falls on the application layer.

    The dream is that no matter where you are, whichever networks you happen to be connected via, and whatever the inadequacies of the mobile infrastructure, your connections should retain their state and continue to work. If you suspend your laptop or shut down your phone's radios, then 10 hours later and in another timezone the connections should remain established and working the instant you pick up your phone again. They should also have differential service, so e.g. your html, css and js must be received fully and in order at the expense of possible retransmissions, while the RTP traffic as you talk can trade packet loss for reduced latency. Techniques not used at this layer before, such as FEC, could reduce the impact of retransmission.

    Not to mention revamping TCP's strategies for handling packet loss to recover much more rapidly in a fast changing network environment.
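    The survival-across-addresses part is the bit QUIC actually nails down: the endpoints identify the session by a connection ID carried in every packet, not by the source address. A toy server showing just that idea (a sketch with made-up framing, no crypto or path validation; port 4433 is arbitrary):

        import socket

        sessions = {}    # connection ID -> per-connection state

        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(("0.0.0.0", 4433))

        while True:
            data, addr = srv.recvfrom(2048)
            conn_id, payload = data[:8], data[8:]   # first 8 bytes name the connection
            state = sessions.setdefault(conn_id, {"count": 0})
            state["count"] += 1
            state["addr"] = addr                    # client roamed? just follow the new address
            srv.sendto(conn_id + b"ack %d" % state["count"], state["addr"])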

    Everybody wants this. Apple and Google have been hammering at the problem internally and implemented some proprietary solutions.

    For UN*X users, the mosh project has implemented the same concepts for some years now. It is breezy. I can have a dozen ssh sessions on my laptop, suspend it, wake up in another country, unsuspend, and in no more than 1 to 2 seconds all my ssh sessions continue uninterrupted, with any output produced while I was offline immediately refreshed.

    This is the end game, and it is ever more compelling. I will be interested to learn more about QUIC and see how it performs in the real world.
