IETF protects privacy and helps net neutrality with DNS over HTTPS

The Internet Engineering Task Force has taken the first steps towards a better way of protecting users' DNS queries and incidentally made a useful contribution to making neutrality part of the 'net's infrastructure instead of the plaything of ISPs. The Register first noticed the technology in this article by Mark Nottingham ( …

  1. Christian Berger

    Now this would be a great idea...

    ...if HTTPS weren't built on TLS, for these reasons:

    1. TLS is too complex to be implemented without security-critical bugs, so in the end this may enable all kinds of attacks. Remember Heartbleed?

    2. TLS is based on CA infrastructures, which are only safe when every single CA is safe. Though you could work around this by having a dedicated CA for DNS.

    It just seems like DNSSec would solve most of those problems with far less effort.

    1. Charles 9

      Re: Now this would be a great idea...

      The article itself notes that DNSSEC doesn't help if the ISP is willing to block DNSSEC at its level by port-checking (this is also how ISPs can enforce their own DNS even above self-chosen resolvers: by hijacking the port wholesale). The ONLY solution against such a determined adversary is to "piggyback" it on something the ISP can't block without complaints. Since the connection is encrypted, the ISP can't tell what the connection is calling (unless it's an enterprise-level secure proxy, in which case you were screwed before you started).

      1. Eugene Crosser

        Re: Now this would be a great idea...

        What @Charles 9 said, plus DNSSEC unfortunately only guarantees authenticity, not secrecy. So it stays open to data harvesting (user profiling and surveillance), and to blocking at the per-domain level (which governments love to do).

        On the other hand, tunneling DNS in a TLS session is only practical when you already have a persistent connection; otherwise the latency will be unbearable. And that means this approach only works inside the browser, when you are looking at a website with lots of external links that need resolving. Meaning, all the surveillance that your ISP or government did on you is moved to Google and Facebook.

        Whether this is a good or bad thing is up to you to decide.

        1. Doctor Syntax Silver badge

          Re: Now this would be a great idea...

          "Meaning, all the surveillance that your ISP or government did on you is moved to Google and Facebook."

          This is the real problem. The bottom line might be that you'd have to take a paid service from a provider in a country that takes privacy very seriously. DNS, email, storage hosting; eventually a small country is going to realise that this could be a nice little earner - just like running a tax haven, and maybe a prerequisite for one.

          1. Anonymous Coward

            Re: Now this would be a great idea...

            You could send different DNS requests to different providers - at least then no single entity would have all your requests. Would have thought a simple plug-in could do this automatically.
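
            A rough sketch of that plug-in idea in Python: hash each hostname to one of several upstream resolvers, so a given name always goes to the same provider (which keeps caching useful) while no single provider sees your whole history. The resolver list and the dnspython (2.x) dependency are illustrative assumptions, not part of any proposal.

            import hashlib
            import dns.resolver   # from the dnspython package (2.x); illustrative choice

            # Illustrative public resolvers run by different organisations
            UPSTREAMS = ["8.8.8.8", "9.9.9.9", "208.67.222.222"]

            def pick_upstream(hostname):
                # Hash the name so a given host always uses the same provider,
                # which keeps caching effective while splitting up the history.
                digest = hashlib.sha256(hostname.encode()).digest()
                return UPSTREAMS[digest[0] % len(UPSTREAMS)]

            def lookup(hostname):
                resolver = dns.resolver.Resolver(configure=False)
                resolver.nameservers = [pick_upstream(hostname)]
                return [rr.address for rr in resolver.resolve(hostname, "A")]

            print(lookup("www.theregister.co.uk"))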

      2. Christian Berger

        Re: Now this would be a great idea...

        "The article itself notes that DNSSEC doesn't help if the ISP is willing to block DNSSEC at its level by port-checking"

        Considering that in most countries where ISPs block DNSSEC or external DNS queries, they also likely break HTTPS, I don't think it's much of an advantage.

        1. Charles 9

          Re: Now this would be a great idea...

          "Considering that in most countries where ISPs block DNSSEC or external DNS queries, they also likely break HTTPS, I don't think it's much of an advantage."

          More and more sites are going HTTPS-ONLY, meaning you'd be shutting your people out of popular services like Facebook. Like I said, that's going to start raising complaints.

    2. A Non e-mouse Silver badge

      Re: Now this would be a great idea...

      TLS is too complex to be implemented without security-critical bugs, so in the end this may enable all kinds of attacks.

      ALL cryptography is complex*. There are so many ways to get it wrong, many of them non-obvious. That's why the mantra is: Don't ever try and invent or write your own implementation as you're almost guaranteed to get it wrong.

      *Except double-ROT13

      1. Charles 9

        Re: Now this would be a great idea...

        "That's why the mantra is: Don't ever try and invent or write your own implementation as you're almost guaranteed to get it wrong."

        But hasn't another mantra emerged, too? "Don't rely on other people's work because you can't be sure they got it right (or worse, were subverted without your knowledge)."

        So basically, if you want something done right, you MUST do it yourself, only you practically CAN'T do it yourself because Encryption is HARD and most people can't handle it right. Does that mean we're basically screwed either way?

        1. Anonymous Coward

          Re: Now this would be a great idea...

          Isn't "Trust but verify" an answer to that?

          And isn't that essentially what open source (and its descendants) is about?

      2. Arthur the cat Silver badge

        Re: Now this would be a great idea...

        ALL cryptography is complex*.

        *Except double-ROT13

        Some people would get that wrong as well.

      3. Lysenko

        Re: Now this would be a great idea...

        Don't ever try and invent or write your own implementation as you're almost guaranteed to get it wrong.

        That depends on the value of "wrong". All commonly used industry-standard encryption is "broken" in a mathematical sense. It is fully deterministic and the maths needed to break encryption keys is mostly a solved problem (integer factorisation, etc.). It is the *scale* of the problem that provides the security, not the principle.

        I've knocked up algorithms and implementations for fun and one of them is actually in use for live data (again, for fun). I am certain that the NSA is closer to cracking AES than they are to cracking my cypher, not because of the quality of my code or my maths ability, but because one-time pads cannot be broken except by reconstructing the random number generator that created the pad, and that isn't feasible retrospectively for a pad generated from RF static.

        Once you have a secure pad, you can implement your own substitution cypher pretty much any way you like and security isn't compromised in any way. If I were directing anything criminal I would issue my confederates with 128GB USB sticks full of random noise and cease to worry about spying. The only way to break in is to get a copy of the pad, and that is the exact same problem as getting a copy of any other password that is too long to memorize (that's all an OTP is after all - a giant password).
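
        A minimal sketch of that in Python, assuming both ends already hold the same pad and agree on how much of it has been consumed. The pad here is generated with os.urandom purely so the example runs; in the scheme described above it would be the shared noise on the USB stick. The crucial rules are that the pad is truly random, kept secret, and no byte is ever reused.

        import os

        # One-time pad: XOR each message byte with a fresh pad byte.
        # Encryption and decryption are the same operation.
        def otp_xor(message, pad, offset):
            if offset + len(message) > len(pad):
                raise ValueError("pad exhausted - never reuse pad bytes")
            return bytes(m ^ p for m, p in zip(message, pad[offset:offset + len(message)]))

        pad = os.urandom(1 << 20)   # stand-in for the shared pad of random noise

        ciphertext = otp_xor(b"meet at dawn", pad, offset=0)
        plaintext = otp_xor(ciphertext, pad, offset=0)
        assert plaintext == b"meet at dawn"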

        1. Doctor Syntax Silver badge

          Re: Now this would be a great idea...

          "The only way to break in is to get a copy of the pad"

          That's also your weakness. The recipient of a message also needs a copy of the pad. That means that you have to have a secure method of distributing the pads.

          1. Lysenko

            Re: Now this would be a great idea...

            That's also your weakness. The recipient of a message also needs a copy of the pad. That means that you have to have a secure method of distributing the pads.

            Oh, I agree, but if you're establishing any sort of long-term relationship with the peer then there are established mechanisms for that just as there are for distributing physical credit cards. In my case, they're remote SBCs programmed by me and then deployed in the field.

            1. A Non e-mouse Silver badge

              @Lysenko Re: Now this would be a great idea...

              The reason public key cryptography was invented was to remove the burden of distributing one-time pads.

              1. Lysenko

                Re: @Lysenko Now this would be a great idea...

                The reason public key cryptography was invented was to remove the burden of distributing one-time pads.

                I'm aware of that; however, for some use cases OTP distribution isn't a significant problem. A perpetual problem with IT security is cargo-culting "best practices" without really understanding them and imagining that this eliminates the need to model your attack surface and threat environment. Sometimes TLS/RSA/AES isn't the best answer[1] and "roll your own" encryption can be a superior solution.

                [1] IoT leaf nodes, for example. You often don't have the MIPS, RAM or power budget for anything like TLS but you might have enough flash to store an OTP large enough to last for the battery lifetime of the unit.

      4. Christian Berger

        Re: Now this would be a great idea...

        "ALL cryptography is complex*. There are so many ways to get it wrong in lots of non-obvious ways."

        Yes, but putting ASN.1 into it certainly doesn't make it easier.

    3. bombastic bob Silver badge

      Re: Now this would be a great idea...

      aside from the "pay the toll" CA cert assignments for https, it's not a bad idea. It WOULD help to prevent MITM attacks on DNS. Not sure what DNSSec does to mitigate THAT one.

      but yeah, why invent a NEW protocol if an EXISTING one does the job? Let's just implement the existing one first, and see if that needs to be fixed/updated/enhanced/whatever...

      1. Crypto Monad Silver badge

        Re: Now this would be a great idea...

        There is a bigger problem to consider: SNI (Server Name Indication), which allows a single webserver / IP address to support multiple certificates.

        Over time, SNI has become more and more prevalent, and it means that someone sniffing a TLS connection can see what hostname you're trying to connect to. TLS itself is now so leaky that DNS is no longer the issue.

        To prove this to yourself:

        1. In one terminal, type:

        tcpdump -c10 -i eth0 -nn -s0 -A '(host 104.20.250.41 or host 104.20.251.41)'

        (Windows users: do the Wireshark equivalent)

        2. In another terminal, type

        curl https://www.theregister.co.uk/

        (or just use a web browser which hasn't recently connected to El Reg). At around the fourth packet in the exchange you'll see:

        13:18:11.210438 IP x.x.x.x.55010 > 104.20.250.41.443: Flags [P.], seq 1:206, ack 1, win 8192, length 205

        E.....@.@.-.

        ...h..)......_.....P. .i>.............Z2z.H(...<...._.....K...'.>.~..b..D...,.+.$.#.

        . ...0./.(.'...........k.g.9.3.......=.<.5./.

        .............W.........www.theregister.co.uk.

        .......................................................

        There it is - in clear text - the name of the secure site you're connecting to.

        SNI is a result of IP address depletion, but actually the problem would still be there without it.

        Even if every TLS website had a unique IP address (and SNI were disabled), you could still easily build a database of hostname to IP address mappings, just by taking logs from any heavily-used DNS cache.

        Generating random IPv6 target addresses for each distinct client might help.

        1. Blotto Silver badge

          Re: Now this would be a great idea...

          @crypto

          No need to build a DB of hostnames to IPs - that's what a DNS server is for. Or am I missing something here?

        2. Jaybus

          Re: Now this would be a great idea...

          "Even if every TLS website had a unique IP address (and SNI were disabled), you could still easily build a database of hostname to IP address mappings, just by taking logs from any heavily-used DNS cache."

          Nevertheless, it doesn't affect what I think is the principal feature of DOH. Sniffing out what sites are being visited by hosts on your network is possible, but DOH would prevent redirecting those hosts by altering the DNS replies that they see.

  2. Herby

    This proves it...

    There was an old adage (from when Usenet was going strong) that went something like "The net sees censorship and routes around it".

    Now we know for sure!

    1. Charles 9

      Re: This proves it...

      You meant it TRIES to route around it. But, like with a crevasse, you eventually reach an impasse (and that impasse can come with ISPs blocking encryption wholesale at most levels).

      1. Doctor Syntax Silver badge

        Re: This proves it...

        "and that impasse can come with ISPs blocking encryption wholesale at most levels"

        The points about ossification and greasing made in the linked article ( https://blog.apnic.net/2017/12/12/internet-protocols-changing/ ) are worth a read. But in this case encryption of HTTP is now so prevalent that an ISP who tried blocking that would be out of business PDQ. That's why initiatives such as DOH use HTTP.

        1. Charles 9

          Re: This proves it...

          "But in this case encryption of HTTP is now so prevalent that an ISP who tried blocking that would be out of business PDQ."

          Not necessarily, if they're (a) working under a government mandate, meaning they're dead if they DON'T do it, or (b) ALL the ISPs are working in a cartel to ensure data harvesting.

    2. Anonymous Coward

      Re: This proves it...

      If by "routing around it" they meant "turn everything into HTTP/HTTPS because that's all you can count on firewalls letting through" then I guess they were right.

      1. Nick Kew

        Re: This proves it...

        Golly, is it really ten years since I wrote this?

  3. theloon

    the devil is in the implementation

    Since your average user blindly accepts their LAN DHCP settings from their provider, and per-device / per-network config is an effort unlikely to be taken by said user... deployment seems only for the 'advanced'... at best.

    Likely this is the same set of folks already using services like dnscrypt.

    Today's now increasingly redundant IETF once again looking for a horse which bolted 5-10 years ago.

    1. Charles 9

      Re: the devil is in the implementation

      Except with an implementation like this, router makers can take control back from the ISPs by defaulting to the likes of OpenDNS, confident the ISP can't hijack it back. That kind of approach would even protect the Stupid User.

      1. Sven Coenye

        Re: the devil is in the implementation

        The same batch of router makers who seem to be unable to get their equipment to perform even the basic functions without screwing something up? And who can't be bothered to ever fix that because it would cost them money?

        How long before they discover they can make money by sending your DNS traffic to the highest bidding profiler?

      2. Blotto Silver badge

        Re: the devil is in the implementation

        @charles

        It'll be the browser makers that'll do DNS lookups in the browser over HTTPS instead of whatever they are doing now. I think browsers can already ignore local DNS servers and instead use their own; will need to have a play to confirm, but I seem to remember Safari and Chrome working fine when my DNS server was restoring last time round.
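
        As a rough illustration of what an in-browser resolver would be doing, here's a minimal Python sketch that resolves a name over HTTPS and ignores the local DNS server entirely. Google Public DNS's JSON endpoint is used purely as an example; any DoH resolver the client trusts would do.

        import requests

        def doh_lookup(hostname, record_type="A"):
            # Ask a DNS-over-HTTPS resolver directly; the local DNS server
            # handed out by DHCP never sees the query.
            resp = requests.get("https://dns.google/resolve",
                                params={"name": hostname, "type": record_type},
                                timeout=5)
            resp.raise_for_status()
            return [answer["data"] for answer in resp.json().get("Answer", [])]

        print(doh_lookup("www.theregister.co.uk"))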

  4. Nick Kew

    DNS scales ...

    Everything about DNS is designed to scale to billions of connected devices.

    HTTP is intrinsically more heavyweight, and would need some careful design work to have a hope of scaling like that (HTTP "edge" devices do some of that - including their own DNS resolution in at least some cases).

    And HTTPS is completely off the scale, not so much in the crypto work where one might invoke Moore's Law, but because it precludes regular HTTP cacheing. That's a whole nother kettle of ballgames (damn, my metaphors are getting as confused as the idea), and when someone implements a cacheing DNS-over-HTTPS agent that'll make a juicy target for blackhats attacking regular HTTPS.

    Sure, there could be uses for this. But to replace regular DNS? What could possibly go wrong with so many new layers of overhead and complexity?

    1. bombastic bob Silver badge

      Re: DNS scales ...

      (from the article)

      If, for example, you're trying to download a Web page which embeds a dozen external links, that's a dozen DNS lookups slowing down the load.

      As I recall you can look them all up simultaneously with a single DNS query to a non-authoritative (cacheing) name server if your DNS request packet has multiple "questions" in it, but it's highly likely that most web clients don't actually do that (or won't) because they tend to split processes up into WAY too many pieces (like some kind of a coding bureaucracy), and so the pieces don't know what the other ones are doing...

      But THAT particular problem (the 12 DNS lookups for a single web page) is REALLY between the web developer's chair and the keyboard.

      read: WHY in the HELL must YOUR web page query 12 COMPLETELY DIFFERENT SERVERS in order to display YOUR content?

      ^^^ put THAT blame where it belongs

      1. Claptrap314 Silver badge

        Re: DNS scales ...

        One dozen? You poor summer's child. Try turning on NoScript or uMatrix. Be prepared for a bout of depression.

        Beer, 'cause you're gonna need one.

      2. Anonymous Coward

        Re: DNS scales ...

        "As I recall you can look them all up simultaneously with a single DNS query to a non-authoritative (cacheing) name server if your DNS request packet has multiple "questions" in it,"

        That feature isn't supported by any major DNS resolver. You probably could, however, send multiple queries at once with HTTP pipelining.
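
        A rough sketch of that idea in Python, reusing a single HTTPS connection for several lookups. This relies on plain HTTP keep-alive rather than true pipelining, and the JSON DoH endpoint and hostnames are chosen purely for illustration.

        import requests

        names = ["www.theregister.co.uk", "forums.theregister.co.uk", "regmedia.co.uk"]

        # One TLS handshake, many queries: the Session keeps the underlying
        # connection open between requests instead of reconnecting each time.
        with requests.Session() as session:
            for name in names:
                resp = session.get("https://dns.google/resolve",
                                   params={"name": name, "type": "A"},
                                   timeout=5)
                answers = [a["data"] for a in resp.json().get("Answer", [])]
                print(name, answers)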

    2. michael.moon

      Re: DNS scales ...

      I agree with you in principle. Perhaps caching the DNS responses might offload the initial hump of all the SSL setup: DNS addresses do not change on a regular basis, so if one can cache the answers for much longer, most things shouldn't break - as long as the DNS server (preferably in your router) implements this and receives updates when bugs are detected in the service.

      It offers better privacy and net neutrality.

      For all readers who say "if you are doing nothing wrong you have nothing to hide", I say: great, please send me a copy of your passport, birth cert, address, bank security questions, a copy of your bank card would be nice along with the PIN code, oh, and all the naked selfies you have ever taken. Those not willing to do this, I assume, now understand WE ALL HAVE THINGS WE SHOULD HIDE - also because, for naked selfies of me, you would need bleach on the brain to get the image out of your head :-).

    3. handleoclast

      Re: DNS scales ...

      @Nick Kew

      Sure, there could be uses for this. But to replace regular DNS? What could possibly go wrong with so many new layers of overhead and complexity?

      The one that immediately springs to mind is the need to hardwire the IP address of the web server you get your DNS from, in order that you can get DNS from it.

      The one that comes to mind next is what happens when that webdns server is down.

      The one that comes to mind next is how does the webdns server get IP addresses (and all sorts of other DNS info)? Does it use ordinary DNS to query ordinary DNS servers, in which case you've not eliminated your trust problem, just pushed it a little further away? Or does it use webdns, where we end up with every IP address hard-wired into gazillions of webdns servers and a horrendous scaling problem?

      The thought that comes next is that DNS is (well, was) a very lightweight protocol. It used a primitive compression to keep packet size down whilst not requiring excessive hardware resources. It used UDP (for ordinary queries) rather than TCP. Webdns appears to be more of a sumoweight protocol.

      There are probably many other things I lack the knowledge, experience and wit to think of.

      Apart from that, it seems like a wonderful idea. Even if the phrase "D'oh!" keeps going around in my head.

    4. Blotto Silver badge

      Re: DNS scales ...

      @nick

      HTTP/2 permits the reuse of an existing HTTP(S) session, so a remote web server could serve all the resources of the page over its existing session without additional HTTPS renegotiation.

  5. Arthur the cat Silver badge

    Chickens and eggs?

    DNS over HTTP. HTTP needs DNS to get the address for the HTTP request, which it gets via DNS over HTTP ...

    1. A Non e-mouse Silver badge

      Re: Chickens and eggs?

      What's wrong with

      https://8.8.8.8/dns?q=www.theregister.co.uk
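
      The approach being standardised is roughly along those lines: the client encodes an ordinary DNS query in wire format and sends it over HTTPS. A minimal Python sketch, assuming the dnspython library and a resolver that accepts wire-format POSTs; the endpoint URL and media type here are illustrative, not a statement of the final spec.

      import dns.message   # from the dnspython package
      import requests

      # Build an ordinary DNS query, ship it over HTTPS, parse the binary reply.
      query = dns.message.make_query("www.theregister.co.uk", "A")
      resp = requests.post("https://cloudflare-dns.com/dns-query",   # any DoH endpoint you trust
                           data=query.to_wire(),
                           headers={"Content-Type": "application/dns-message"},
                           timeout=5)
      answer = dns.message.from_wire(resp.content)
      for rrset in answer.answer:
          print(rrset)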

      1. Nick Kew

        @the d-rat (non e-mouse)

        What's wrong with ...

        Now tell us how you propose to scale that to serve a few billion devices.

        1. Warm Braw

          Re: @the d-rat (non e-mouse)

          It's pretty difficult to scale anything if you can't trust your ISP - and moving the "scaling" functions (such as caching and load balancing) away from the supplier you're paying to provide them to a provider who can only get their income from elsewhere is not necessarily in your long term interests.

          There's plenty of opportunity for mischief if a malicious site I happen to visit also provides the DNS translation of the various domains linked from the same page - or can potentially influence the translation of domain names of sites I have yet to visit.

        2. A Non e-mouse Silver badge

          Re: @the d-rat (non e-mouse)

          Now tell us how you propose to scale that to serve a few billion devices.

          That's a very valid question (And I'd be interested in the answer) but it wasn't the question that was asked ;-)

          1. John Robson Silver badge

            Re: @the d-rat (non e-mouse)

            >> Now tell us how you propose to scale that to serve a few billion devices.

            >That's a very valid question (And I'd be interested in the answer) but it wasn't the question that was asked ;-)

            Given that there is a reasonable likelihood that you'll want a handful of queries, you could do a couple of things to make it much more efficient...

            You don't drop the https connection straight away - you hold it open for a second or two, so that subsequent queries can be sent over the existing connection.

            Maybe even have a scan through the page you load and fire off all the queries in one question as well.

            The point isn't individual scale at this point - it's getting a 'known' and (for some level of the word) trusted provider to give you DNS results.

      2. PyLETS

        What's wrong with https://8.8.8.8/

        I don't think CAs trusted by any browser currently issue certificates per IP address. I'd also guess it would be insecure for them to do so unless they only issued these for addresses known to be static for the future lifetime of the certificate, and I'd guess also that the PTR reverse mapping would need to point back to a domain which also participates in the same ownership-establishment protocol. Could possibly be done in the IN-ADDR.ARPA domain using DNSSEC.

    2. bombastic bob Silver badge

      Re: Chickens and eggs?

      ok nice logic puzzle except that the DNS server is usually supplied as an IP address, not a name.

      made me think a bit, though.

      1. Doctor Syntax Silver badge

        Re: Chickens and eggs?

        "made me think a bit, though."

        Not too long, I hope. The reminder was in A Non e-mouse's reply.

  6. MacroRodent

    Unwarranted optimism

    > That's where DOH reaches into the 'net neutrality debate. For example, if a network provider is using DNS to identify sources it wants to discriminate against, it will be defeated by the encryption.

    I'm afraid that is too optimistic. The evil network provider could simply block all https requests towards known DOH servers. Or manage to deep-inspect the packets to detect DOH.

    1. Charles 9

      Re: Unwarranted optimism

      But those DOH servers can ALSO be legitimate (not to mention POPULAR) web destinations such as Google and Facebook. Any ISP that tries to block Google and Facebook is likely to start getting complaints.

  7. Daggerchild Silver badge

    "Nobody would be THAT evil!" she said naively.

    So this all relies on the ISP being unable to stop you using end-to-end encrypted 443 traffic?

    Up next: All Comcast customers must include a Comcast root cert to access the web.

  8. Richard Conto

    How wonderful for scammers!

    This sounds like a great way for a scammer to take over your browser.

    And the premise of the article is a variant of "not invented here" - or "I don't want to drive my father's Internet".
