TLS developers should ditch 'pseudo constant time' crypto processing

More than five years after cracks started showing in the Transport Layer Security (TLS) network crypto protocol, the author of the "Lucky 13" attack has poked holes in the fixes that were subsequently deployed. Back in 2013, Royal Holloway, University of London professor Kenneth Paterson popped the then-current version of TLS …

  1. Anonymous Coward

    Obviously, their code 'Review and Approval' processes need some work...

    These issues are primarily a result of supervision and management process failures.

    1. Anonymous Coward

      Re: Obviously, their code 'Review and Approval' processes need some work...

      Oh? The article notes the code was formally verified. What does that say?

      1. Adrian 4

        Re: Obviously, their code 'Review and Approval' processes need some work...

        Formal verification shows the code to be mathematically correct. That's not the same as 'having no out-of-band signature'.
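
        As a minimal C sketch of that distinction (the function names are ours, not from any affected library): both routines below return the same answer for every input, so a proof of functional correctness holds for either, but only the second runs in time independent of where the buffers differ.

        ```c
        #include <stddef.h>
        #include <stdint.h>

        /* Functionally correct: returns 0 iff the buffers match. A prover can
         * confirm that property, yet the early return leaks how many leading
         * bytes matched through execution time. */
        int leaky_compare(const uint8_t *a, const uint8_t *b, size_t len)
        {
            for (size_t i = 0; i < len; i++) {
                if (a[i] != b[i])
                    return 1;       /* exits sooner for earlier mismatches */
            }
            return 0;
        }

        /* Same input/output behaviour, but the loop always runs to the end,
         * so timing no longer depends on where (or whether) the buffers differ. */
        int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
        {
            uint8_t diff = 0;
            for (size_t i = 0; i < len; i++)
                diff |= a[i] ^ b[i];
            return diff != 0;
        }
        ```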

      2. Anonymous Coward

        Re: Obviously, their code 'Review and Approval' processes need some work...

        AC asked, "...code was formally verified. What does that say?"

        It reinforces the point made in the post title. i.e. Their code 'Review and Approval' processes need some work.

        Down votes do not excuse inept management. Those that disagree may write themselves up as deserving of a 20% pay cut.

        Why not hire these "researchers" in advance to assist with code Review and Approval? They could provide the adult supervision that is clearly required.

        1. Loyal Commenter Silver badge

          Re: Obviously, their code 'Review and Approval' processes need some work...

          Down votes do not excuse inept management

          Meanwhile, in the real world, the people managing those who implement complex code are, almost by definition, going to be less technically minded than those whose code they will be checking. This means not only being less technically adept at writing code (because they spend at least some of their time managing, rather than honing their coding muscles), but also being less familiar with the technical reasoning behind the code. Because otherwise, they'd be doing their job AND the developer's job. This concept is known as Division of Labour and dates back to the industrial revolution.

          Unless a manager (or more likely a peer in a code review situation) was explicitly briefed to look for this type of vulnerability in the code they are reviewing, why would you expect them to find it? It is, after all, a side-channel attack. Laying the blame at the feet of the manager is not exactly reasonable. A lot of things in life can be blamed on bad management, but this is not one of them. It's very easy with hindsight to say, "someone should have thought of that" - but would you have?

          It is worth remembering two important adages of security (paraphrasing Bruce Schneier): security is hard, and just because you can't find a way to break it doesn't mean it can't be broken.

          1. Claptrap314 Silver badge

            Re: Obviously, their code 'Review and Approval' processes need some work...

            The primary job of management is to decide who is in the room when decisions get made. You don't need the expertise yourself, you need to identify, hire, and retain the experts. You also have to listen to them when they tell you there is a problem. The fact that other implementations got this right means that it was not an exceptionally hard thing to get right. So, get it right. And don't dodge responsibility.

      3. Claptrap314 Silver badge

        Re: Obviously, their code 'Review and Approval' processes need some work...

        I'm putting on my "formally trained mathematician" hat here. It is meaningless to state that a piece of software has been "formally verified". Formally verified as to what? Completing without crashing? Freedom from deadlock? Completing in a given amount of time, assuming certain things about the caches? Computing the correct result? Requiring only a certain amount of memory? Being reentrant? Being position-independent? Compiling at all?

        "Formal verification" means that some sort of automated process, (the checker) has been used to generate a proof that some proposition holds true. The operator is still required to select that proposition. If one is not careful, it is entirely possible to generate a vacuous proof, by, say, having a faulty translation layer between the actual code output by the compiler and the code that the checker uses. Does this sound bizarre? Suppose that the checker is in language x, which does not happen to be machine code. Something has to do the translation. Suppose that instead the checker ingests the source code natively, but that there is a compiler bug.

        And, as mentioned, assuming that the code being checked actually matches the code being run, you can still be checking meaningless things.
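
        To make that concrete, here is a hedged C sketch (hypothetical names, not from any real checker): both lookups below satisfy the proposition "the result equals table[idx]" for every input, so a checker proving only that proposition passes both, even though the first leaks the secret index through its memory access pattern.

        ```c
        #include <stdint.h>
        #include <stdio.h>

        static const uint8_t table[256] = {0};  /* contents irrelevant to the point */

        /* Direct lookup: which cache line gets touched depends on the secret
         * index, which a cache-timing observer may be able to recover. */
        uint8_t lookup_fast(uint8_t idx)
        {
            return table[idx];
        }

        /* Scans the whole table and masks out everything but the wanted entry,
         * so every call touches the same addresses in the same order. */
        uint8_t lookup_ct(uint8_t idx)
        {
            uint8_t out = 0;
            for (int i = 0; i < 256; i++) {
                uint8_t mask = (uint8_t)-(i == idx);   /* 0xFF when i == idx */
                out |= table[i] & mask;
            }
            return out;
        }

        int main(void)
        {
            /* Identical results for every index; only the access patterns differ. */
            for (int i = 0; i < 256; i++)
                if (lookup_fast((uint8_t)i) != lookup_ct((uint8_t)i))
                    printf("mismatch at %d\n", i);
            return 0;
        }
        ```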

      4. Adam 1

        Re: Obviously, their code 'Review and Approval' processes need some work...

        > The article notes the code was formally verified. What does that say?

        That it is a hard problem that even a reviewer or 10 can miss.

        Imagine an ancient city under siege. The defenders must cut off each and every attack against their stronghold, be it through the city gates, over the walls, under the walls, or earthworks outside to collapse those walls; every vector, every time. If they fail once, the city is at risk of capture.

        Now imagine the attacking army. They get to choose how to attack: whether to sneak one person through to sabotage the defences, or to block off the water supply and wait for surrender. They may notice a stretch of wall that is not visible from the defensive ramparts and start digging or climbing there. They may observe the sentries' patrol pattern and learn when they have a 30-minute window.

        That's the equation here too. One step wrong and you are exposed. If it's not a timing attack then it could be some other vector to act as an oracle. It's serious, sure. But let's be realistic.

  2. martin__r

    Lucky 13 is an *INSIDER* attack, not an attack against true properties of TLS

    The security properties of TLS are described in rfc5246 Appendix F.

    Lucky 13 is not an attack against TLS itself, but an attack against a rogue endpoint that is aiding the attacker in performing an insider chosen-plaintext attack against TLS. This is only possible with negligently designed endpoints, such as web browsers, or anything that needlessly multiplexes data from different sources (including from the attacker) over a single TLS-encrypted channel, such as "SSL-VPNs".

    There is an explicit warning in rfc5246:

    Any protocol designed for use over TLS must be carefully designed to deal with all possible attacks against it. As a practical matter, this means that the protocol designer must be aware of what security properties TLS does and does not provide and cannot safely rely on the latter.

    but it seems web browser suppliers cannot read.

    Rather than respecting TLS's clearly stated design limits (a network-based attacker), they want TLS to protect against whatever stupid behaviour they come up with.

    1. Lee D Silver badge

      Re: Lucky 13 is an *INSIDER* attack, not an attack against true properties of TLS

      It doesn't matter.

      Any modern algorithm that can't survive a chosen-plaintext attack is useless in the modern era. Literally, it's a core requirement.

      There is no distinguishing a "rogue endpoint" from a valid one if you're performing services over the Internet. You should not be able to do ANYTHING that recovers messages or a key in any form, no matter how hard you try.

      TLS can't explicitly defend against flaws in a sub-protocol, no. But it shouldn't be giving even the slightest hint about the content of its messages to anybody, not even those authorised to see them. They have the key and can recover them; everything and everyone else should see something approaching random noise.

      That OpenSSL (and presumably LibreSSL) and others have fixed this with a small tweak means it's important enough to worry about. It really doesn't matter what the context is - it's not a secure transport layer if you can determine ANY information about the content, certainly not if that information aids in breaking the encryption entirely.
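
      The shape of that tweak, as a much-simplified sketch (hmac() and ct_memeq() here are toy stand-ins of our own; OpenSSL's real constant-time CBC handling, ssl3_cbc_digest_record, is considerably more involved): do the same MAC work and the same comparison whether or not the padding was valid, and fold both checks into one undifferentiated result.

      ```c
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Toy stand-in for a real HMAC (assumes a 16-byte key), present only
       * so the sketch compiles; the point is how it is *called*, not what
       * it computes. */
      void hmac(const uint8_t *key, const uint8_t *buf, size_t len, uint8_t tag[32])
      {
          for (size_t i = 0; i < 32; i++) tag[i] = key[i % 16];
          for (size_t i = 0; i < len; i++) tag[i % 32] ^= buf[i];
      }

      /* Tag comparison without an early exit. */
      int ct_memeq(const uint8_t *a, const uint8_t *b, size_t len)
      {
          uint8_t diff = 0;
          for (size_t i = 0; i < len; i++)
              diff |= a[i] ^ b[i];
          return diff == 0;
      }

      /* Whether or not the CBC padding checked out, run the same MAC
       * computation over the same number of bytes and combine the results
       * without branching on either, so neither the code path taken nor
       * the amount of hashing depends on the secret padding. body_len is
       * assumed to be derived in constant time from the record length. */
      int check_record(const uint8_t *key, const uint8_t *rec, size_t body_len,
                       const uint8_t recv_tag[32], int pad_ok)
      {
          uint8_t tag[32];

          hmac(key, rec, body_len, tag);       /* same work on every path */
          return pad_ok & ct_memeq(tag, recv_tag, 32);
      }

      int main(void)
      {
          uint8_t key[16] = {0}, rec[64] = {0}, tag[32];
          hmac(key, rec, 48, tag);
          /* The valid-padding and invalid-padding paths cost the same. */
          printf("%d %d\n", check_record(key, rec, 48, tag, 1),
                            check_record(key, rec, 48, tag, 0));
          return 0;
      }
      ```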

      1. EnviableOne

        Re: Lucky 13 is an *INSIDER* attack, not an attack against true properties of TLS

        The issue isn't the algorithm, or more accurately the protocol (as KRACK was for WPA2); it's the implementation of the protocol (TLS). OpenSSL, BoringSSL and LibreSSL fixed it properly; wolfSSL, Amazon and all the others mentioned didn't.

        If you are creating a patch for a problem, check that it fixes not just the specific instance but the whole class of problem.
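
        One way to chase the class rather than the instance is a timing regression test in the spirit of tools like dudect: time the routine on two input classes and check that the distributions don't separate. A rough sketch in C (a serious harness would use a statistical test such as Welch's t-test, not two averages):

        ```c
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        /* Routine under test: a constant-time equality check. */
        static int ct_memeq(const uint8_t *a, const uint8_t *b, size_t len)
        {
            uint8_t diff = 0;
            for (size_t i = 0; i < len; i++)
                diff |= a[i] ^ b[i];
            return diff == 0;
        }

        static volatile int sink;  /* keeps the optimiser from deleting the loop */

        static long long time_many(const uint8_t *a, const uint8_t *b, size_t len)
        {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < 100000; i++)
                sink = ct_memeq(a, b, len);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            return (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
        }

        int main(void)
        {
            uint8_t a[32], equal[32], differ[32];
            memset(a, 0xAB, sizeof a);
            memcpy(equal, a, sizeof equal);
            memcpy(differ, a, sizeof differ);
            differ[0] ^= 1;                  /* mismatch in the first byte */

            /* If the two figures diverge consistently across runs, the
             * routine's timing depends on its (secret) inputs. */
            printf("equal inputs:     %lld ns\n", time_many(a, equal, sizeof a));
            printf("differing inputs: %lld ns\n", time_many(a, differ, sizeof a));
            return 0;
        }
        ```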
