World celebrates, cyber-snoops cry as TLS 1.3 internet crypto approved

A much-needed update to internet security has finally passed at the Internet Engineering Task Force (IETF), after four years and 28 drafts. Internet engineers meeting in London, England, approved the updated TLS 1.3 protocol despite a wave of last-minute concerns that it could cause networking nightmares. TLS 1.3 won …

  1. Will Godfrey Silver badge
    Black Helicopters

    Round we go again

    Well, I'm happy to see this, but can't help wondering how long it will take for someone to find a hole.

    1. Oh Homer
      Headmaster

      Re: Round we go again

      This is normal. Security, whether digital or physical, can in principle never be absolute, so it's futile and indeed dangerous to expect it to be.

      A door without a lock is more secure than no door, but a door with a lock can still be forced open with a battering ram. A security door can't be kicked in, but it can still be breached with a plasma cutter. Some vaults are resistant to plasma cutters, but even they can be breached with a drill and a well-placed explosive.

      The point of security is never to stop incursions outright; it's to discourage opportunistic attempts, and to slow down planned ones long enough for mitigation (e.g. escape).

      You need to start with not merely the suspicion, but the absolute certainty that your security will fail, because only then will you have the foresight to plan what to do about it when it does. Your security only gives you time to execute that plan. This is the best you should reasonably hope for.

      In cryptography, that time may be years rather than minutes, but the principle remains the same.

      1. Charles 9

        Re: Round we go again

        Which still means squat if they can break in TRACELESSLY. You can't mitigate something you don't know about.

    2. JeffyPoooh
      Pint

      Re: Round we go again

      "...until they figure out a way to crack this new protocol. At which point the IETF will start on TLS 1.4."

      They're doing it wrong. The IETF should immediately switch mental gears and try to replicate the approaches that the miscreants will employ, to try to stay ahead or keep up. Waiting is obviously a suboptimal approach.

      1. Amos1

        Re: Round we go again

        Given that those are fairly intelligent people who have undoubtedly learned the lessons of OpenSSL and other bad implementations, I'm certain that as much of that as could be done was done during the threat-modelling portion of TLS 1.3's development. That's the value, and the danger, of an open protocol: people can rip into it before it comes out.

        1. Michael H.F. Wilkinson Silver badge

          Re: Round we go again

          In cryptography you tend to assume that the adversary knows exactly what your encryption algorithm is. It should be (relatively) safe even then. Security through obscurity doesn't work for long.

      2. Afernie
        Trollface

        Re: Round we go again

        "They're doing it wrong. "

        I expect if you write them a letter telling them all about how much better you could do it, they'd hang on your every word.

      3. Michael Wojcik Silver badge

        Re: Round we go again

        "They're doing it wrong. The IETF should immediately switch mental gears and try to replicate the approaches that the miscreants will employ, to try to stay ahead or keep up."

        Why do crypto stories bring out the sophomoric posturing?

        The IETF does not, as a body, perform cryptographic research. That's done by independent researchers, alone or (more typically) in teams. The IETF is a standards body.

        Security researchers have been looking at all aspects of TLSv1.3 since they were published or presented. It's not like 1.3 was a big secret until it was finally approved. All of the algorithms, primitives, and protocols have been under constant scrutiny. And they will continue to be.

        Many of the vulnerabilities in previous versions of SSL and TLS were published by white-hat researchers before any exploits were seen in the wild. That doesn't prove they hadn't been used surreptitiously, of course; but they weren't widespread. It's an arms race, and both sides have been racing all along.

        And by the same token, people are always discussing what might be in the next version of TLS. 1.3 does fix (for various values of "fix", and in some cases controversially) a number of issues with older versions of the protocol, though, and importantly adding new suites doesn't require a protocol rev.

    3. TheVogon

      Re: Round we go again

      Well at least Exchange finally supports TLS 1.2. I wonder how many years it will take for 1.3 support.

    4. Mark 65

      Re: Round we go again

      The backdoor that was defeated - that was just a diversion. The real backdoor is 0-RTT Resumption. Sure, it relies on access to the machine, but when has that caused the security agencies an issue? There are plenty of zero-days to go around for that. This is undoubtedly their "in". Does it provide access to prior conversations as well as continuing ones? Who knows, but I'm sure they'll be all over this.

      It never ceases to amaze me how this shit continues to happen. Backdoor argued for. Discussion ensues. Concept defeated. Much praising. Security bods spot another issue. Swept under the carpet for reason "X". Likely flawed security protocol enacted.

      FFS when will we learn that convenience is the enemy of security?

  2. Anonymous Coward
    Anonymous Coward

    The client says hi

    If it doesn't say hi like Natalie Portman in Leon: The Professional, it needs a redesign.

    1. Anonymous Coward
      Anonymous Coward

      Re: The client says hi

      What like a 12 year old girl?

    2. GnuTzu

      Re: The client says hi

      Voting up: "Leon: The Professional". Meh on the joke.

  3. Lorribot

    At which point the IETF will start on TLS 1.4.

    Err, if it takes 4 years to get to a standard they should start now. Or get a better project manager who's not on a day rate.

  4. Anonymous Coward
    Anonymous Coward

    Great article! Security = effort, simple..

    I'd like to compliment the author for their very keen choice of words: "while banks and similar outfits will have to do a little extra work to accommodate".

    And my favorite: "In short, it's a win-win but will require people to put in some effort to make it all work properly.".

    I think it hits the nail right on the head: keyword being effort. The problem is that plenty of people would rather take the easy way out, but if you truly care about security then you'll man up and work your way around it. Providing good security takes effort, plain and simple.

    I welcome our new TLS 1.3 overlords :P

    1. Charles 9

      Re: Great article! Security = effort, simple..

      But effort takes resources. How do you secure something on a shoestring budget?

      1. Graham Dawson Silver badge
        Coat

        Re: Great article! Security = effort, simple..

        If that thing is a pair of shoes...

        1. Charles 9

          Re: Great article! Security = effort, simple..

          ...with only ONE shoestring...

      2. P. Lee

        Re: Great article! Security = effort, simple..

        >How do you secure something on a shoestring budget?

        It is actually really easy.

        You don't mix secure client systems with insecure client systems. No general internet access from a company system.

        Then you offer to subsidise BYOD personal mobile internet, on the basis that internet access is required for some work purposes.

        You get rid of massive complexity and ongoing corporate cost.

        1. Charles 9

          Re: Great article! Security = effort, simple..

          And if someone over your head objects?

        2. Amos1

          Re: Great article! Security = effort, simple..

          P.Lee, your definition of "shoestring budget" and management's definition might be a wee bit different. :-)

          There is a massive cost and complexity to subsidising anything owned and operated by someone else. The subsidy is a direct cost and the support is an indirect cost but they're both costs.

          "You don't mix secure client systems with insecure client systems. No general internet access from a company system."

          That's a great start but be prepared to hear the whining from the admins about how hard you've made their jobs.

  5. JakeMS
    Thumb Up

    Nice!

    This is what I've been waiting for before beginning to implement TLS 1.3 on our small business servers and e-commerce website.

    While currently it's ticking along nicely scoring an "A+" on Qualys (For both IPv6 and IPv4) I've been looking into TLS 1.3 to strengthen the service that bit further.

    Now it's been officially approved, the final draft should start finding its way into the various software packages needed to make the services run, which means I can start enabling it on those services where possible :-D.
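
    For anyone else doing the same, the switch-on is tiny once your stack ships OpenSSL 1.1.1 or later. A minimal sketch in Python (3.7+), with placeholder cert/key paths rather than our real config:

      import ssl

      # Server-side context: allow TLS 1.3, refuse anything older than 1.2.
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2
      ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths

      # TLS 1.3 is negotiated automatically when both ends support it.
      print(ssl.HAS_TLSv1_3)  # True if the underlying OpenSSL offers 1.3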

    As for not being able to inspect data that is within encrypted traffic.. well isn't the whole point of encrypted traffic to stop just that?! Thankfully we don't try to do that anyway.

    1. Amos1

      Re: Nice!

      "As for not being able to inspect data that is within encrypted traffic.. well isn't the whole point of encrypted traffic to stop just that?! Thankfully we don't try to do that anyway."

      The adage "you can't find what you don't look for" may apply here. I work for a bank and we do crack TLS for browser (and other) traffic between the Internet and our systems. We've been graphing the number of legitimate but compromised HTTPS web sites and it just keeps climbing. We used to see just a few each day, but now that the ad networks are slowly moving their ads to HTTPS it's a few dozen each day.

      If we were not doing this we would have far more malware incidents than we do now. On average we have to re-image one or two PCs a year for suspected malware, out of 3,000+. I have friends working for law firms with fewer PCs who have multi-person teams re-imaging PCs full-time due to confirmed malware.

      This is not a personal attack and it's something we hear from other companies all the time. "Oh, the horror! Our employees have a right to privacy!"

      Yeah, well, guess what? The customers who have entrusted your company with their personal information also have a right to privacy and their right is legislated. My employer needs to protect the data of all our customers, not just yours.

      For most corporations (not spies), these traffic inspection processes are fully automated. Traffic is decrypted, inspected and re-encrypted. There is no storage and no retention unless something triggers an inspection rule, and then only the part that fired the rule is retained.

      And if you got hit by the Equifax breach, one of the things they apparently did correctly was to have a full-packet-capture device on their internal network. Like the spies, these devices capture, decrypt and store 100% of all network traffic. That's how they were able to provide definitive detail on what happened. Without that device they could have claimed "we have no evidence that any data was stolen" and you might not even have been notified. So yeah, they bought a device that testified against them as to their problems. :-)

      TLS 1.3 is just a ho-hum event. We'll still use TLS 1.2 between our employees and the proxy servers while allowing TLS 1.3 between the proxy servers and the Internet. Once TLS 1.3 is really ready, we'll allow TLS 1.3 from our employees to the proxies. Their HTTPS connections all terminate on the proxy server so they won't even notice. Even HSTS won't save them because HSTS doesn't have certificate issuer information.

      As far as web sites go, everyone is using a reverse proxy or load balancers acting as reverse proxies anyway. The only companies that TLS 1.3 will negatively impact are those using span ports or taps to sniff the traffic off the wire as it goes past. The web sites that have no traffic inspection won't be impacted at all, just infected.

      If you plan on replying about your super-dooper endpoint product that catches APTs and everything, feel free, but you're wrong. Each endpoint is a single point of failure, and unless you have a NAC (network access control) system that will not permit the PC onto the network if its anti-malware is malfunctioning or not installed, you're just playing the compliance game and not the information protection game. Endpoint protection is needed, but relying on it solely is...

      1. JakeMS

        Re: Nice!

        Well, our networks are completely different.

        Firstly, we have no Windows computers, so that eliminates a significant number of threats; granted, running Linux exclusively does not guarantee a lack of threats. Any machine can be compromised.

        We're a small business so we do not have a network full of computers that a bunch of random people use.

        Primarily we have email servers and web servers and the physical retail store in which the EPOS system is also Linux based, but said machine has no access to customer data and nor should it.

        The web servers run, among other things, an intrusion detection system (monitoring system binaries, system configuration, website files etc.), an application firewall (and of course a system firewall) and SELinux, all configured to instantly send alerts if something changes that should not (logging is kept outside the servers). All of this is alongside being checked manually by myself on a daily basis for any oddities.

        These are the only systems which hold any customer data and backups are kept encrypted. The only "remote login" feature of any of the systems is SSH which is setup to use keys instead of passwords (though the keys used do have passwords, so even if you lift my key you have to crack the (very, very, very long) password before you can use it, but I change the keys once per month so by the time you've cracked it, it may be useless).

        There is of course the ability to manage products and such however to access that you need to come from a specific network before you can even attempt to access the page (IP Spoofing may work here, but you'd need to know which IPs to use, the login details, have access to a 2FA device for verification and of course know where said page is located).

        Only three people (myself included) have access to any customer data; all three use Linux devices, are not likely to click on malicious links and don't easily fall for scam emails.

        The email servers are set up similarly, but do not hold customer data. They use various blacklists, reject unknown hostnames and employ pretty much every other non-conflicting antispam technique to minimize the amount of spam received (it's not perfect, but it's better than nothing), reducing the likelihood of someone clicking a spam link. And of course HTML email is disabled system-wide.

        If someone is unsure whether an email is legitimate or not they are instructed to let me see it before going any further with it, in which case I'd be checking the email's headers and source.

        I mean, sure, there is still the risk of SQL injection attempts and such, but our code base is pretty solid and we sanitize all user input. Users have the ability to add 2FA to their accounts and of course there are restrictions on how weak a password can be.
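
        To be concrete about the sanitizing: the pattern that matters is parameterized queries, so user input is bound as data instead of being pasted into the SQL text. A minimal sketch with Python's built-in sqlite3 module and a stand-in table; the same idea applies with any driver:

          import sqlite3

          conn = sqlite3.connect(":memory:")  # stand-in for the real customer DB
          conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
          conn.execute("INSERT INTO customers VALUES (1, 'Alice', 'alice@example.com')")

          hostile = "x'; DROP TABLE customers;--"  # classic injection attempt
          # The ? placeholder binds the value as data; the SQL never changes shape.
          cur = conn.execute("SELECT id, name FROM customers WHERE email = ?",
                             (hostile,))
          print(cur.fetchall())  # [] - the injection arrives as inert data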

        While we are a small company we do our best to ensure everything is as secure as possible. It would take a fair amount of effort and need to be a targeted attack to get into customer data. But we do our best to prevent that from happening and deploy as many security measures as we can. I'm also always looking to see if I can find new ways to make it even more secure.

        I also conduct my own pen-tests against our systems to see how far I can get seeing as I know the systems inside and out.

        Could it be broken into with a ton of effort in a targeted attack? Probably, yes.

        Would it be undetectable? Probably not.

        Is it perfect? Probably not. But I do my best to make it as secure as possible.

        On the plus side, the absolute worst-case data breach is loss of customer names, addresses, phone numbers and order history. We do not store their debit/credit cards or financial data (payments are processed through Stripe or PayPal, depending on which the customer selects). So in a sense it's a minimal loss[1] if it were to occur, which thankfully it has not yet in the five years we've been running *touch wood*.

        We do not run any ad networks or any other junk like that (so no google analytics or other spying networks, or even any services like cloudflare). So the only way to add a "card lifting" script or such like would be to alter one of our website files. But if you do that, you'll trip an alarm and I'd know about it.

        But yeah, as I said: totally different, but I do my best.

        [1] Minimal compared to say the Equifax breach.

        1. Anonymous Coward
          Anonymous Coward

          Re: "Well, our networks are completely different."

          And that sounds like an excellent challenge.

      2. Anonymous Coward
        Anonymous Coward

        Re: Nice!

        @amos1 & Jake Thanks for the interesting take from your perspectives. This is the kind of thing ElReg does best.

        1. Amos1

          Re: Nice!

          Agreed. Thanks for taking the time to write that. One item that would concern me is this:

          "Primarily we have email servers and web servers and the physical retail store in which the EPOS system is also Linux based, but said machine has no access to customer data and nor should it."

          Actually card data is customer data. If you're soliciting it, it's considered your data if it's lost. At least in the USA. That statement coupled with this one:

          "The only "remote login" feature of any of the systems is SSH which is setup to use keys instead of passwords (though the keys used do have passwords, so even if you lift my key you have to crack the (very, very, very long) password before you can use it, but I change the keys once per month so by the time you've cracked it, it may be useless)."

          leads me to ask who supports your point-of-sale systems. For all small businesses it's a third party, and most POS breaches nowadays occur because of weak controls and weak remote access on the third party's side. As you know, PCI now requires a certified installer to install those systems, so it's pretty much a given that you have a third party involved somehow. Even if you're strictly online you still have some liability via the use of an iframe or similar. Not nearly as much as with physical terminals, though.

          "If someone is unsure whether an email is legitimate or not they are instructed to let me see it before going any further with it,"

          When you rely on unreliable humans the control will fail. Perhaps you're small enough that the number of people with email is tiny.

          "Users have the ability to add 2FA to their accounts" - If you're not requiring it, particularly if you're using that cesspool called "Office 365", you probably have a problem there.

          "I also conduct my own pen-tests against our systems to see how far I can get seeing as I know the systems inside and out."

          You seem reasonably knowledgeable but pen tests, particularly application pen tests, take an entirely different set of skills. You may have them; I do not. We once eval'd a proposed banking vendor who told us they had passed their last pen test with "flying colors". We always ask to see a copy of the full results but very, very few will do that. They did. They sent us a two-page printout of a NMAPv3 beta scan with safe checks enabled and with application checks disabled. After NMAP v4 was out. That raised a lot of alarm, we dug further and suffice it to say our customers are a lot safer because we declined to do business with them.

          "We do not run any ad networks or any other junk like that (so no google analytics or other spying networks,..."

          If your employees, even the execs, have Internet browsing capabilities you've got them even if you're not running them on your systems.

          None of this is meant as criticism. You seem to know your business and risks and that is kind of rare. Perhaps I've given you a few things to look over before bad things happen to you.

          1. JakeMS

            Re: Nice!

            I understand your concerns so I'll do my best to address them :-).

            "Actually card data is customer data. If you're soliciting it, it's considered your data if it's lost. At least in the USA. That statement coupled with this one:"

            Agreed. However, our EPOS system doesn't process the payment per se. The card machine is not linked to any of our systems bar the store's firewall via ethernet. You have to type the price into it manually and then the customer will enter the card or "tap" it if it's contactless.

            The machine then (on its own) connects directly to Barclays Bank, and Barclays processes the payment.

            Naturally to prevent someone (who shouldn't be) from just connecting to it remotely there is a firewall between it and the world wide web which prevents any and all remote connections to it. There is also a firewall between it and the other systems on the network, it is effectively isolated.

            The card machine still works fine and updates itself regularly (it connects outbound rather than having incoming connections).

            In addition, because I wasn't sure, I ran wireshark to check whether it encrypts its communications, which thankfully it does, so that helps prevent snooping.

            I did that by temporarily wiring it through my laptop (ethernet) for a single transaction, using my own card, to see what it did; it was then connected back to the regular network. Doing this I was able to confirm how the device acted and adjust the firewall settings accordingly. I'm not sure of the legalities of that, but I felt it was necessary to know what a device is doing on my network.

            As far as I'm aware, all card data is then dropped and forgotten after a transaction is complete. Whether Barclays retain it on their end I'm not entirely sure; when I asked them that question they dodged it, telling me I didn't need to know and just kept trying to assure me the machine was "secure" and that I should just wire it up and forget about it (hence the test above).

            But we also have to physically check it as well to ensure it hasn't been tampered with. Sometimes people may add an "additional device" to a card machine to steal data that way. So we have to check for those too.

            "leads me to ask who supports your point-of-sale systems. For all small businesses it's a third-party and most POS breaches nowadays occur because of weak controls and weak remote access controls by the third-party. As you know, PCI now requires a certified installer to install those systems so it's pretty much a given that you have a third-party involved somehow. Even if you're strictly online you still have some liability via the use of an iframe or similar. Not nearly as much as physical terminals though."

            I installed the point-of-sale systems; they run Arch Linux (a rolling distro, so I do not have to turn them off to upgrade versions). But as mentioned above they do not obtain or hold any card data. Granted, they hold sales and product data, but that is not tied to any customers; it's merely numbers and dates. No customer data on the machine at all. Technically speaking the till/EPOS system itself is PCI-exempt because it does not hold or process card or customer data (the system will never, ever see a customer's contact or credit card details). This allows us to run our own custom point-of-sale systems, saving a boatload of money in the process.

            There are no third parties with allowed access to our systems; no other party holds any SSH keys or otherwise. If someone wants access to our systems they have to go through me. I may issue a key if necessary; however, that key would only be valid for the duration they actually need it. I've not yet had to do this.

            As for online systems again they do not process the card data. We have two methods of payment:

            Method 1: PayPal. Once the customer is at the point of payment he/she will be redirected to PayPal's website, at which point they may enter payment data there. We do not see or process that data.

            Method 2: Stripe. This one is higher risk; however, it uses Stripe's API to provide card payment forms on the checkout page using Stripe Elements. But once again we still do not see or process the card data.

            You can see more about how stripe payments are handled here:

            https://stripe.com/docs/stripe-js/elements/quickstart

            And info on all the PCI stuff for Stripe here:

            https://stripe.com/docs/security

            This of course runs the risk that if someone is able to inject another script into our page then they may be able to "lift" the data as it is typed. For example, for websites which run Magento (which we do not) this has become a problem:

            https://www.theregister.co.uk/2016/02/16/one_magento_patch_seals_shoplift_hole_the_other_loots_credit_cards/

            Thankfully, we don't run that software. Among other things, though, we do use a Content Security Policy to ensure the scripts we have enabled are the only scripts that run. "Just in case".
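
            If it helps anyone, the CSP part is only a few lines. A minimal sketch assuming a Flask app; the header name and directives are standard CSP, the Stripe host follows their documentation, and the exact policy is illustrative rather than our real one:

              from flask import Flask

              app = Flask(__name__)

              @app.after_request
              def add_csp(response):
                  # The browser refuses any script not served by us or by
                  # Stripe's loader, so an injected script tag is inert.
                  response.headers["Content-Security-Policy"] = (
                      "default-src 'self'; "
                      "script-src 'self' https://js.stripe.com"
                  )
                  return response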

            "When you rely on unreliable humans the control will fail. Perhaps you're small enough that the number of people with email is tiny."

            This is true; sadly it will never be 100% foolproof. All I can do is advise people and ensure that only a minimal amount of spam ever reaches them.

            "If you're not requiring it, particularly if you're using that cesspool called "Office 365", you probably have a problem there."

            I think there's been a slight misunderstanding here.

            Firstly, nope we do not have Office 365, or any other Microsoft software/cloudware for that matter :-).

            For the last 15 years I've been almost entirely Linux-exclusive, so I avoid running Microsoft on my network at all costs as I consider it one of my skill-set "weak points": I could easily misconfigure a Microsoft operating system and cause it to be insecure, so to avoid that situation I simply don't run the software at all. Stick with what you know and all that.

            Thus, we use a local copy of LibreOffice for any tasks we need in that regard and of course, Linux. It's a bit of a cross between CentOS/RHEL (servers), Fedora (workstations, mainly my computer), Linux Mint for those new to Linux and finally Arch Linux for the point-of-sale system. I tend to use whichever distro is best for the task at hand.

            For access to customer data from an administrative point of view (aka able to see and manage all) then 2FA is absolutely required and they are issued a YubiKey, it's all pre-configured so they cannot say no or disable it.

            This is accessed solely via a "control panel" on the web-server and it cannot be exported (bar a database dump, which only I could do, but you'd have to SSH into the system to do that).

            (Including myself, well technically I *could* disable 2FA for myself, I am root after all, but I don't, heck I'm not even exempt from the automatic logout after 5 minutes of inactivity!)

            Now, in the case of optional 2FA that was referring to customers with their online logins to the website. They would be using one time codes generated by a Mobile App for that. So they have the ability to add that if they wish. However, it is not mandatory.

            I should have probably been more clear between "users" and "customers" however. Now I re-read it I can understand how that would have been confusing.

            "You seem reasonably knowledgeable but pen tests, particularly application pen tests, take an entirely different set of skills."

            This is true, but I do my best; I do a ton of research to make myself better at it all the time. I also have some experience on the other side of the hat from my younger years, so I have a good guesstimate as to what someone will try.

            If we were to start storing or processing more sensitive customer data I would pull in a third party just to be safe.

            "If your employees, even the execs, have Internet browsing capabilities you've got them even if you're not running them on your systems."

            I am one of the "execs", this is a business partnership and I am one of the partners. So it is in my interest to ensure it's all secure. Because if it is not, it all falls on my head.

            However, others run uBlock Origin and Privacy Badger on their systems to help reduce the amount of scripts running on their systems. In addition Firefox runs in its own little sandbox called "FireJail"[1] which again helps reduce the impact of a dodgy website on the system.

            And of course, no one runs as root/admin for web browsing. That would be silly.

            "None of this is meant as criticism. You seem to know your business and risks and that is kind of rare. Perhaps I've given you a few things to look over before bad things happen to you."

            Actually, I prefer to have someone criticise or question my actions. It helps keep me on my toes, thus at least then if I've screwed up I'd much prefer someone to outright say it so I can go back and correct the mistake if it exists. I'm always trying to learn more or do better.

            While I do my best to prevent mistakes and make sure it all runs correctly, I'm still only human and I can still screw up just like anyone else.

            So I thank you for your comments :-).

            [1] FireJail is actually pretty cool; it works for more than just Firefox and can be a handy way of ensuring applications can only access what they are supposed to. I'd highly recommend anyone who hasn't heard of it to check it out. It may come in useful.

            Link:

            https://github.com/netblue30/firejail

            PS: Please excuse any typo mistakes in this post, I wrote this after having just woken up on my first coffee :-).

            1. Amos1

              Re: Nice!

              You have no idea how much I wish you were one of our vendors.

    2. Michael Wojcik Silver badge

      Re: Nice!

      "As for not being able to inspect data that is within encrypted traffic.. well isn't the whole point of encrypted traffic to stop just that?!"

      No, it is not. If you don't want to "inspect data that is within encrypted traffic", just emit random data instead. For encryption to be useful, it has to be decrypted under authorized conditions.

      Many organizations have good reason to inspect traffic, typically for exfiltration detection, fraud detection, or malware detection. And if it's running over their network, they typically have a legal and ethical right to do so.

      Security involves not only preventing misuse of a system, but also enabling proper use - and proper use is defined by the owner of the system. (Another version of this maxim is the "CIA" model: Confidentiality, Integrity, and Availability. Confidentiality and Integrity, which are what cryptographic systems generally aim for, are of limited use without Availability.) If the owner says "data should be concealed from everyone except three parties: sender, recipient, and monitoring system", then that's the proper use.

  6. Crypto Monad Silver badge

    "Unfortunately, that self-same ability to decrypt secure traffic on your own network can also be potentially used by third parties to grab and decrypt communications."

    Only if they have the private key, which sits on your webserver. The weakness is that if someone captures and stores the encrypted traffic, and then later (perhaps years later) retrieves the private key from your webserver, they can retrospectively decrypt all the stuff they captured.

    The introduction of PFS means that a temporary key is created for the session and discarded at the end. The private key is only used to prove your identity; it cannot be used for decryption.
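
    The ephemeral-key idea is easy to see in miniature. A sketch using Python's cryptography package, with X25519 standing in for whichever group the handshake actually negotiates; demo values, not a protocol implementation:

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      # Each side generates a throwaway key pair for this session only.
      client_eph = X25519PrivateKey.generate()
      server_eph = X25519PrivateKey.generate()

      # Both ends derive the same shared secret from their own private key and
      # the peer's public key; the long-term certificate key is never involved.
      shared = client_eph.exchange(server_eph.public_key())
      session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                         salt=None, info=b"demo session").derive(shared)

      # Discard the ephemeral keys and nothing recorded off the wire can
      # reproduce session_key: that is forward secrecy.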

    However a side effect of the old approach was that you could install your web server's private keys in your firewall/IDS system, and it could passively decrypt the traffic looking for things like SQL injection attacks.

    With TLS 1.3 this won't be possible. The firewall will have to act as a "man in the middle", decrypting and re-encrypting traffic, with different random keys to the end-user and the web server. Or possibly people will switch back to using plain HTTP between firewall and web server, which opens up other weaknesses.

    Security is a balance, and it depends who your adversaries are. Today the enemy du jour is the NSA slurping your web traffic for potential later decryption.

    Remember that if the PFS stuff is based on Diffie-Hellman, that's 1970s crypto. The NSA could have cracked it for all we know, in which case they will be delighted by this new protocol.

    1. Amos1

      "Or possibly people will switch back to using plain HTTP between firewall and web server, which opens up other weaknesses."

      Not "possibly". That is the common practice between the HTTPS-terminating load balancers and the back-end web servers. It allows far better automatic failover between the back-end web servers resulting in a better customer experience.

      Why? Because if you run HTTPS between the load balancer and the web server you need to enable "stickiness" because the encryption is done between the load balancer and that web server only. So if the web server experiences problems, there's no way to seamlessly move you to another one and you get to fill in that ten-page form all over again (because you've never negotiated session keys with the other web server).

      That HTTP connection is generally bullet-proof on both a physical layer and a logical layer. But if you're trying to run the back-end web servers in multiple physical locations the risk does go up.

      1. Anonymous Coward
        Anonymous Coward

        Amazing

        Amazing, every word of what you just said was wrong.

      2. Charles 9

        "That HTTP connection is generally bullet-proof on both a physical layer and a logical layer."

        But wouldn't that still mean they'll just deploy the equivalent of a .30-06 (penetrates bulletproof material)?

        1. Amos1

          Bulletproof in my world includes physically separate switches, physically separate wiring and physically separate hosts. The term is "poka-yoke", which translates to mistake-proofing. Many, many incidents occur because of a misconfiguration caused by a human: someone doing something "just to see if it fixes the problem", doing something from memory, etc.

          By physically separating the environment you remove many of the possible misconfigurations and the possibility of pivoting from an infected host to an isolated one. No more ACL problems, no more misconfigured VLANs, etc. They can still misconfigure things, but the chances of that error causing a breach are dramatically reduced.

          That being said, you also need to remove the human element as much as possible. On one PCI penetration test I'm familiar with, the company's internal network was 100% compromised in short order but the testers were absolutely unable to penetrate the PCI network. Right up until one tester said "I wonder what the chances are that some administrator used the same username and password in both the production and PCI environments for convenience?" Yes, game over.

          PCI now mandates that true two-factor authentication be used for access to the PCI cardholder data environment. They finally caught up to where we've been for years.

          "Bulletproof" does not mean "following what the minimum controls are". It means performing threat modeling and admitting that one of the threats are your own people whether intentional or not. Believe it or not, some managers and HR people have a real problem with that type of thinking. :-)

      3. Anonymous Coward
        Anonymous Coward

        > Not "possibly". That is the common practice between the HTTPS-terminating load balancers and the back-end web servers.

        Not necessarily. If the company has policies that prohibit confidential data from being on the wire unencrypted, then you cannot run HTTP between the load-balancer and web servers. This is sometimes mandated by B2B contractual agreements and cannot be avoided.

        Luckily, many load-balancers these days allow you to decrypt HTTPS traffic, inspect and modify the HTTP payload, and then re-encrypt it back to HTTPS before sending it onto the web servers. This allows you to use session stickiness features while still keeping traffic encrypted on the wire.

  7. Steve Graham

    Geography lesson

    "London, England" eh? Oh, THAT London.

    1. Gnomalarta

      Re: Geography lesson

      Five states in the US are home to cities named London: Kentucky, Ohio, Arkansas, Texas and West Virginia.

      1. Amos1

        Re: Geography lesson

        And Ontario. They're kind of cousins.

      2. jchevali
        Trollface

        Re: Geography lesson

        There's also a district called London in a city called England, in the United States.

  8. Adam 1

    "The client then says which encryption system it plans to use for the weaker, session key – which allows data to be sent much faster because it doesn't have to be processed as much"

    That's a bit misleading. The session key allows data to be sent faster because it uses a symmetric cipher. That is AES these days, and this is computationally as simple as bit shifting and XORing.

    Asymmetric encryption is usually done with an elliptic-curve* variant of the Diffie-Hellman algorithm. In ballpark terms, that costs about 5,000x more CPU time for the same payload. The real question is why not just use symmetric encryption? Spoiler alert: symmetric encryption requires both parties to know the shared secret (the session key). How are two parties to communicate this without "Eve" learning it too? By using asymmetric encryption to send the session key you get throughput close to symmetric alone, but without the problem of how to share that key without another party discovering it.

    *There is nothing wrong with elliptic curves, just don't use the parameters that NIST (read: the NSA) was pushing.
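
    And the symmetric half really is that cheap. A minimal sketch of the bulk cipher using AES-GCM from Python's cryptography package; key and nonce are throwaway demo values:

      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      session_key = AESGCM.generate_key(bit_length=128)  # the shared secret
      aead = AESGCM(session_key)

      nonce = os.urandom(12)  # must be unique per message under one key
      ciphertext = aead.encrypt(nonce, b"card number goes here", None)

      # Only a holder of session_key can reverse this, hence all the
      # asymmetric ceremony just to agree on that one value.
      print(aead.decrypt(nonce, ciphertext, None))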

    1. tom dial Silver badge

      Two observations about the NSA EC parameter bogeyman:

      It is true that the NSA could have generated the NIST parameters with knowledge of secret related parameters that would provide them (and anyone else who obtained access to the secrets) with a back door. As far as I have seen reported, it has not been established that they did so, and it is also possible that the NSA generated its parameters as described in the NIST document appendix, as anyone with the resources can do.

      I have not seen reports, either, of attempts to generate EC parameters using the procedure, and I have not done so myself. It occurs to me, however, that different possible values of the randomly generated 'P' and 'Q' parameters might differ substantially in security, and it seems possible that the values provided in SP 800-90 and its revisions (until Dual EC DRBG was removed in 2016) were selected as the best of many generated correctly as described in Appendix A of that document. It may be that using Dual EC DRBG with homegrown P and Q would be expected to be less secure than using it with NSA's parameters.

      That said, uncertainty about a back door in Dual EC DRBG is only one reason to avoid its use. Others, independent of that, are its high computational cost (even after the bit generator was weakened) and the fact that it produced biased output. These were widely known to specialists in 2005 or 2006, and reason enough to select other types of DRBG.

  9. Will 28

    Handshake?

    I'm confused. As I read the "reduced" list of conversation requirements that are stated, it just seems to indicate that there's not a handshake anymore.

    Could someone clarify, please? Is the handshake indicated in the second, "reduced" list of steps there because of some assumed process that would involve a handshake? Or have we found a nicer, simpler way around a security handshake?

    Is it to do with the random letters thing? Not trolling, genuinely confused.

    1. Charles 9

      Re: Handshake?

      It's a simple streamlining. The handshake takes place at the same time as the Hello. Instead of Hi, Hi, I can grok this, I'll take that; it's Hi I can grok this, Hi I'll take that.

    2. Anonymous Coward
      Anonymous Coward

      Re: Handshake?

      Obviously it's turned into a *secret* handshake. Hence not being described.

  10. CheesyTheClown

    Strong protocols, weak implementations

    Great, we have a major rewrite of a security protocol. This of course is good. But consider that we started implementing TLS about 20 years ago as SSLv2. It has changed a lot, but never as fundamentally as it has now.

    Even today, TLS 1.2 isn't jacked nearly as often through algorithmic weaknesses as through weak implementations.

    Consider that most implementations are written in C by people focused on performance. Look at OpenVPN, for example... they have great encryption and do a great job on the protocol, but whenever I need to hack OpenVPN, I just attack the ASN.1 parser, which generally has endless problems with buffer handling. The same code in C++ with a buffer class, or in Rust, Java or C#, would never have those problems.

    Then there’s hardware acceleration. Writing code which can access and manage hardware cores from user space is almost impossible to write securely.

    My guess is that it will take 20 years to have a reasonably secure implementation of TLS 1.3. Don't get me wrong, it should be used now by every internal server. But I would wait 3-5 years before touching any 1.3 code written in C and/or VHDL.

    But I’m not a security expert... so what do I know? :)

  11. Lusty

    This isn’t progress

    Adding TLS 1.3 doesn’t make anything more secure. Disabling 1.2 (and earlier) might. History shows that admins never disable old insecure stuff and it’s left for a vendor security update to take care of.

    Older encryption algorithms could be disabled on 1.2 so the point in the article about older encryption would be meaningless if admins did their jobs effectively. Humans just aren’t built that way though :)

  12. Anonymous Coward
    Happy

    Wooohooo

    Can't wait

    Many others above are correct: holes abound on the net, appearing in all kinds of products, and we must expect them to fail. But it sure is good to have that 'nice and secure' feeling back for a little while.

    hurry up, hurry up, hurry up!

  13. GnuTzu
    Stop

    Block The Laggards

    I vote the Internet starts blocking those that fail the Qualys SSL Labs server test (https://www.ssllabs.com/ssltest/). I'm using this now for every security review I do, and it's amazing how many sites doing business or health-care-related services on the Internet get bad scores there. Everything from 3DES and SHA1 to lack of secure renegotiation and lack of forward secrecy. It's unforgivably lame.
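
    For a crude first pass you can even check a host yourself. A minimal sketch in Python that reports the negotiated protocol and cipher; Qualys obviously probes far more than this:

      import socket
      import ssl

      def probe(host):
          # Connect with library defaults and report what was negotiated.
          ctx = ssl.create_default_context()
          with socket.create_connection((host, 443), timeout=5) as tcp:
              with ctx.wrap_socket(tcp, server_hostname=host) as tls:
                  return tls.version(), tls.cipher()

      # e.g. ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256))
      print(probe("www.example.com"))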

    1. EnviableOne

      Re: Block The Laggards

      Can't do that; you'd not be able to work with the .gov or the banks, or anyone else who should really know better.

  14. Chronos
    Facepalm

    And yet still...

    ...SNI is still done in the clear. You go to all that trouble to stop the ISP redirecting/snooping your DNS queries, and then simply hand them the hostname on a plate in the first useful packet.

    1. Anonymous Coward
      Anonymous Coward

      Re: And yet still...

      But what good is that if they don't have the corresponding key?

      1. elip

        Re: And yet still...

        They have enough metadata to infer the content on *many* occasions. I don't have the URL handy, but MS researchers did some nice work on keystroke reconstruction based on TLS traffic metadata alone, back around 2009/2010 I believe.

        1. Charles 9

          Re: And yet still...

          How can they reconstruct individual keystrokes when most web traffic is batch in nature?

  15. Anonymous Coward
    Anonymous Coward

    Back to basics - inbound or outbound

    For newbies, don't try and get your head around this until you understand the fundamental differences between inbound and outbound traffic.

    Inbound is traffic to services you host that are used by your customers and partners over the internet. You own the keys and manage the crypto, you control the traffic flows, and you can decrypt the data where you want/need and manage and inspect it as you see fit. To show your A+ certificate to your customers you should look to implement TLS 1.3 ciphers as appropriate. You should ensure that your web services/applications are not vulnerable to common attacks - read up on OWASP. TLS 1.3 doesn't change that, and it's your biggest risk for inbound services of the HTTP(S) kind.

    Outbound is traffic to externally hosted websites like that there Google, Faceache, Salesforce, Office 365. This is where all the phishing and bad shit occurs and where you're most likely to have to manage day-to-day pain through dumb users, malware and its consequences. You don't manage the crypto but you can control the endpoint and in some cases manage traffic through MITM devices like web proxies that can themselves terminate and inspect traffic for badness. Talk to your vendors to work out if any of your existing web security mechanisms will be borked by TLS 1.3.

    Oh and lastly, why does anyone on a corporate network, using a corporate device expect a right to privacy? You have a personal phone, tablet or laptop for booking appointments at the VD clinic.

  16. EnviableOne
    Mushroom

    Still think it should be TLS 2.0

    It's a major overhaul of the entire protocol, not the simple patch-and-perfect a dot release is meant to be, but that led to more infighting and options than you can imagine.

    Suggestions on the mailing list came wide and varied:

    You can't have TLS 2.0 as it's too close to SSL 2.0.

    You should call it 4.0 so people know it's better than SSL 3.0.

    You should call it 3.4 if you take it back to the start of SSL.

    I like 2.0.

    It shouldn't be TLS 2.0, it should be TLS/2, like HTTP.

    A bunch of whiners copped out and stuck with 1.3.

    It's TLS, not SSL; it's a new version, so increment the main number and return the sub-version to zero.

    </rant>

  17. Michael Wojcik Silver badge

    0-RTT

    "That will make connections much faster but the concern of course is that someone malicious could get hold of the '0-RTT Resumption' information and pose as one of the parties."

    That's not the issue with 0-RTT. The problem with 0-RTT is that it has no replay protection.

    TLS servers that allow 0-RTT Resumption are instructed to ensure the request is idempotent so that there's no vulnerability if an attacker replays it. That pushes a critical security responsibility up to the application layer, which a number of people (myself included) believe is a Bad Idea.

    0-RTT is one of the optimizations that big HTTPS sites (Google, CDNs) pushed for, because it makes a measurable difference to their costs. Everyone else should avoid it like the plague.
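
    In application terms the guard looks something like the sketch below. Illustrative only: via_early_data is a made-up attribute standing in for however a given TLS stack reports that a request arrived as 0-RTT data. The status code is real, though; RFC 8470 defines "425 Too Early" for exactly this case.

      from collections import namedtuple

      # Hypothetical request object; real stacks surface 0-RTT status differently.
      Request = namedtuple("Request", "method via_early_data")

      SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}  # idempotent, safe to replay

      def handle(request):
          if request.via_early_data and request.method not in SAFE_METHODS:
              # A replayed 0-RTT POST could, say, submit a payment twice.
              return (425, "Too Early: retry after the full handshake")
          return (200, "processed")

      print(handle(Request("POST", True)))  # rejected until handshake completes
      print(handle(Request("GET", True)))   # safe to serve from early data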
