Encryption can be cracked if one is willing to throw enough resources and time at it; how much of each the attacker has is key. The major goal of encryption is not to be uncrackable but to be difficult enough to break that any information gleaned is worthless by the time it is recovered.
So what's the internet community doing about the NSA cracking VPN, HTTPS encryption?
Now that the cat is firmly out of the bag, and it's clear that the NSA has cracked the encryption behind, potentially, a huge amount of internet traffic, the question inevitably turns to: what are internet engineers going to do about it? Clearly the experts at the Internet Engineering Task Force (IETF) have pondered the same …
COMMENTS
-
-
Saturday 24th October 2015 09:01 GMT Velv
Don't know why you've been down voted, so have an upvote.
You cannot prove a negative. The "unsinkable" Titanic sank, "unlimited" broadband isn't unlimited, and safes are designed to make it difficult and time-consuming for criminals to rob you, but ultimately, with time and resources, criminals do break into most safes.
And no encryption is "uncrackable". If you believe the uncrackable nirvana exists, be prepared for a surprise.
-
-
Monday 26th October 2015 21:00 GMT Michael Wojcik
Not quite true: XOR encryption with a random single-use key is mathematically uncrackable
God forbid we ever make it through a thread on cryptography without someone mentioning the one-time pad. Maybe we could have an icon for "obligatory OTP citation"?
The OP didn't say "mathematically uncrackable". You've argued a different claim.
No cryptography is uncrackable because a ciphertext qua ciphertext has to be decryptable by someone or something, and that recipient can be coerced, suborned, etc. A ciphertext that isn't decryptable is no longer a ciphertext (if it ever was).
Distinguishing between an OTP-encrypted message and key exchange is also a dodge. You transmit the pad over one channel and the message over another. You have to protect both channels, so the OTP just defers the problem. There's no real distinction between "key exchange" and "message exchange" for the OTP, except the circumstances of the particular case.
(Also, you left out some constraints. The pad has to be random in a strong sense - all possible values equally probable, for one - and must be at least as long as the plaintext.)
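For illustration, the one-time pad being discussed can be sketched in a few lines of Python. This is toy code, not a secure implementation: a real pad must come from a true random source, be at least as long as the message, be used exactly once, and then be destroyed.

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte.

    XOR is its own inverse, so the same function both encrypts and decrypts.
    """
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # random, single-use key material
ciphertext = otp(message, pad)
recovered = otp(ciphertext, pad)         # applying the pad again restores the plaintext
```

Note that the security argument collapses entirely if the pad is reused or predictable, which is exactly why, as the comment says, the OTP merely defers the key-distribution problem rather than solving it.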
-
-
Monday 26th October 2015 20:54 GMT Michael Wojcik
Don't know why you've been down voted
Perhaps because this sort of commonplace about security gets bandied about regularly, instead of more meaningful statements about work factors and threat models and the like? It was (relatively speaking) a lot of words with little substantive content.
I didn't downvote the post, but it says nothing you wouldn't get from the introduction to a security primer. Everyone either knows it already, or isn't teachable on the subject.
In this particular case, there's a huge difference between a 1024-bit DH prime and a 2048-bit one. Handwaving statements about "you just need resources" are not particularly useful when the resources in question are ~300 orders of magnitude beyond what's feasible for a major nation-state.
Even if the NSA have a really big quantum computer at their disposal (putting them very, very far beyond the published state of the art), that machine would still be facing a work factor about eight times what they need to crack a DH group based on a 1024-bit prime, and about ten times as many qubits. Yes, that's "only" a linear increase (for this implausible magical-technology scenario), but a linear increase in cost means a linear decrease in the number of decryptions, or equivalently a linear increase in the bar the value of your encrypted communications have to reach before someone will pay to break them.
And, in fact, this article is wrong. Few organizations have any pressing need to move to larger DH groups. Just switching to a DH prime that's not well-known is sufficient, because it's the widespread use of the same primes that made it economically feasible to break them. If everyone generates and uses their own set of DH parameters, then the cost of breaking goes way up, and only the most valuable communications will be targeted.
That is what is important here.
-
Sunday 25th October 2015 15:08 GMT Anonymous Coward
How about if Obama issued an Executive Order telling the NSA, CIA, and all the other agencies that work for him to stop violating people's privacy and follow the U.S. Constitution?
How about if Obama stopped persecuting Snowden because Snowden pointed out that the U.S. is violating the Constitution?
-
Monday 26th October 2015 21:06 GMT Michael Wojcik
How about if Obama issued an Executive Order telling the NSA, CIA, and all the other agencies that work for him to stop violating people's privacy and follow the U.S. Constitution?
How about if the NSA, CIA, and other agencies just ignored him, because why wouldn't they? Who's going to audit their behavior? Who's going to enforce this dictum?
Even if 1) such spying were to be redefined as, say, treason (which would take a constitutional amendment), and 2) an agency were caught doing it, I am sure that among the thousands of dedicated zealots working for such agencies they could find a few willing to fall on their swords. So even if there were punishment, nothing would change.
Bottle open, djinn out.
Now, that doesn't mean that I'm opposed to any of the three branches slapping the TLAs and telling them to cut it out; and I'm certainly not opposed to attempts to roll back the PATRIOT Act and related shenanigans. Curbing the police state is never more than a temporary and partial fix, but that doesn't mean it isn't worth doing. But no wave of the presidential fairy wand is going to make everything right again.
-
-
-
-
Saturday 24th October 2015 02:13 GMT Old Man - Grey Fleece
Re: Questions
A rainbow table would be impractically large and a nightmare to build in terms of CPU cycles. If you look up the distribution of primes you can get a reasonable approximation to the number of primes in any range, and we are talking about 10^100-ish. (Yes, I haven't checked my number theory book for a few years.) The bigger the prime, the more it costs per prime to find it - see other Vulture articles. Unless we get serious quantum computing.
Why use primes? Because they are what gives unique decryption (hairy group and number theory, from what little I remember), and you probably have to check the specific algorithm to see why.
-
Saturday 24th October 2015 03:38 GMT John H Woods
Re: Questions
"There are a finite number of prime numbers that use 2048 or less bit" -- Wade Burchette
Finite yes, but also ENORMOUS.
The number of primes less than x, pi(x), is approximated by x / (log x-1) or more roughly, but more conveniently, x / (log x). For 1024 bits, x = 2^1024 which is about 10^308.
pi( 2^1024) ~= 10^308 / 1024 ~= 10^305. As there are probably only about 10^80 atoms in the universe, give or take a power of 10, no such list can exist, even for primes of 1024 bits. For 2048 bits you'd be looking at > 10^600!
So although you have to use primes (otherwise the encryption wouldn't work), "the finiteness" of the number of primes is not a problem. But I thought it was a reasonable question, so if you do get any downvotes, they weren't from me :-)
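The back-of-envelope estimate above can be checked with the prime-counting approximation pi(x) ~ x / ln(x), which Python's `math` module handles even for 1024-bit integers:

```python
import math

# Estimate pi(2^1024), the number of primes below 2^1024,
# using the prime-counting approximation pi(x) ~ x / ln(x).
bits = 1024
ln_x = bits * math.log(2)  # ln(2^1024), roughly 709.8

# Work in log10 to avoid constructing the astronomically large estimate itself:
# log10(pi(x)) ~ log10(x) - log10(ln(x))
digits = bits * math.log10(2) - math.log10(ln_x)

# digits comes out at about 305, i.e. roughly 10^305 primes below 2^1024 --
# vastly more than the ~10^80 atoms in the observable universe,
# so no exhaustive list of them can ever exist.
```

The same arithmetic with `bits = 2048` gives an estimate around 10^613, matching the "> 10^600" figure in the comment.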
-
-
Wednesday 24th February 2016 12:13 GMT Benchops
Re: Questions
But surely everyone uses the 5*10^307-th prime?
I mean, no-one would use the first prime (2) as that's too obvious. Similarly the largest prime less than 2^2048 would be too obvious. So then the next primes you would obviously not use are the 2nd prime (3) and the 2nd to last prime below 2^2048. etc. Everyone decides to use the middle prime.
-
-
Saturday 24th October 2015 08:54 GMT Anonymous Coward
Re: Questions
Perhaps, but the defenders and the attackers have to start from the same position: finding those primes in the first place, and finding one by chance is going to be difficult. So usually some kind of algorithm is used to hunt down these primes more quickly. Only one catch: the attacker can use the same techniques, trading space for time, knowing the defender is operating under the same constraints.
-
-
Saturday 24th October 2015 03:46 GMT Anonymous Coward
Re: Questions
A lot of the basic security in these sorts of protocols comes from some mathematical proposition.
Typically when primes are mentioned, it's exploiting the fact that computing the product P = p*q of two primes is far easier than factoring P to recover p and q. If the numbers are prime, then only one (p, q) pair will produce a given P.
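The asymmetry the comment describes is easy to demonstrate: multiplying two primes is a single machine operation, while recovering them from the product (here by naive trial division, purely for illustration; real factoring attacks use far cleverer methods) takes work that grows rapidly with the size of the primes.

```python
# Two small primes, for illustration only -- real cryptographic primes
# are hundreds of digits long.
p, q = 10007, 10009
n = p * q  # the easy direction: one multiplication

def trial_factor(n: int):
    """Recover a factor pair of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return None  # n is prime

# The hard direction: thousands of divisions even at this toy size.
assert trial_factor(n) == (10007, 10009)
```

At 1024-bit sizes this naive search is hopeless, which is the whole point: the gap between multiplying and factoring is what the security rests on.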
-
Saturday 24th October 2015 11:16 GMT Doctor Syntax
Re: Questions
AFAIK* Diffie-Hellman works by agreeing on a prime; each party then performs some computation on it, they exchange the results, and from those they mutually calculate a key. The crack** actually works by calculating*** a sort of rainbow table for a given prime. By observing the exchange between the parties, including the prime they agree on, the attackers can calculate the key for themselves. The weakness of implementations of D-H is that, rather than searching for a large prime at run time, they have a limited number built in, which makes it feasible to calculate a few tables that will be sufficient to attack most sites.
*And I'm not a mathematician.
**Nobody knows for sure what the NSA do but some boffins worked out that this is how they might have done it.
***This is a humungous calculation but is now achievable by throwing enough CPU cycles at it.
-
Sunday 25th October 2015 19:32 GMT Anonymous Coward
Re: Questions
>> Diffie-Hellman works by agreeing on a prime
No it doesn't. Please play the video in the article.
The number of primes in 2^1024 space (around 10^300) is significantly more than the number of atoms in the universe (around 10^80) so it could be a little challenging to build a rainbow table ...
-
-
This post has been deleted by its author
-
-
-
-
-
Saturday 24th October 2015 08:45 GMT allthecoolshortnamesweretaken
Re: The NSA just recommended dropping ECC
No, it was Sir Francis Walsingham...
ELINT goes back to the days of the first telegraph. SIGINT (in a broader sense) goes back so far that you can't really date it. Spying really is 'the second-oldest profession'.
This is about computer networks, and Bletchley Park didn't spy on them because there weren't any yet. The NSA, however...
... anyway: pedant mode OFF, have a nice weekend, a pint and an upvote. Bletchley Park references always get one (I think it's in the forum rules somewhere).
-
Sunday 25th October 2015 06:57 GMT Mike Pellatt
Re: The NSA just recommended dropping ECC
"This is about computer networks"
I'd make the case that this isn't about computer networks, but about communication networks. In which case, they most certainly did exist. With exactly the same challenge to address as today's networks - how to communicate securely over an insecure medium.
In WW2, the medium was morse code transmitted over HF radio. This was easily intercepted with a sufficient number of skilled operators at sufficient receiving stations. These operators are amongst the unsung heroes - accurately transcribing random characters is far harder than plain language.
-
Monday 26th October 2015 16:43 GMT filter_guy
Re: The NSA just recommended dropping ECC
Morse code is not random characters, although it can be used to transmit random characters. Yes, morse code may seem like gibberish to those who cannot understand it. For amateur radio operators, who communicate using morse code, it is understood as a valid language, and not just "random characters".
-
Friday 1st January 2016 01:51 GMT Anonymous Coward
Re: The NSA just recommended dropping ECC
Morse code is no longer used, or at least it is no longer officially recognised as a language of communication.
OH ... .... .. _ ! I must be giving my age away.
An old shipmate, a former long-serving submariner whose ears were flat against the side of his head from years of wearing headphones, reckoned that he could tell exactly who was sending messages, probably from each operator's unique rhythm when keying.
I can understand that, as when on shore leave I listened to morse code on my old Murphy B40 Communications Receiver, which was in use by the RN during the 1960s.
-
-
-
-
-
Saturday 24th October 2015 10:15 GMT Anonymous Coward
NSA as the original black hat
No. Cracking Enigma like Bletchley Park did, or the NSA cracking whatever codes the Soviets were using back during the Cold War, wasn't evil, it was just ordinary spycraft.
Turning that capability on their own citizens was when it became evil. And the NSA didn't used to be evil, at least not completely. Remember, they were the ones who strengthened DES when IBM developed it in the 70s with some flaws the NSA knew about back then but the security research world didn't figure out until the 90s! Until those techniques were discovered, many had assumed the changes the NSA had IBM make to DES were to weaken it.
-
-
-
Saturday 24th October 2015 07:12 GMT Chris Miller
Is this a legacy problem
As I understand it, the weakness is in the implementation of Diffie-Hellman. When it was first introduced, it was impractical to calculate even a 512-bit prime in a reasonable amount of time, so the software came with a list of such numbers that you could use, and it's this list that (it is claimed) the NSA has broken with something like a rainbow table. But today, calculating a 2048-bit prime on a PC or decent smartphone takes a couple of seconds*. So we could, if we wanted, produce a unique prime each time and the NSA's advantage would go away.
I suppose the obvious answer is that it's easier just to move from 1024-bit to 2048-bit, even though (a) many people won't; and (b) there'll be lots of backward compatibility attacks like logjam.
* 2 seconds would be a long time to wait for each TLS handshake, but we could always pre-calculate keys at start-up. And, in a few years the time needed will drop to milliseconds.
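The "couple of seconds per prime" claim can be tried out with a standard sketch of prime generation: pick random odd candidates of the desired bit length and run Miller-Rabin until one passes. This is illustrative code, not a production implementation (for real D-H you'd want safe primes, which tools like ssh-keygen generate); 512 bits is used here so it runs quickly, but the same code handles 2048 bits, just more slowly.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(bits: int) -> int:
    """Search random candidates until a probable prime turns up."""
    while True:
        # Force the top bit (correct length) and the bottom bit (odd).
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

p = random_prime(512)
```

By the prime number theorem roughly one in every ln(2^2048)/2, i.e. about 710, odd 2048-bit candidates is prime, which is why the search stays in the seconds range on modern hardware.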
-
Saturday 24th October 2015 11:26 GMT Doctor Syntax
Re: Is this a legacy problem
"2 seconds would be a long time to wait for each TLS handshake, but we could always pre-calculate keys at start-up. And, in a few years the time needed will drop to milliseconds"
As you say, the weakness is in using very few built-in primes everywhere. One remediation, even without going to elliptic curves, would be frequent, say monthly, updates with new and maybe larger sets of built-in primes. According to the times given in the paper this should enable users to keep ahead of the NSA. Another would be to have servers running a background task searching for new primes, so each server would be able to offer a different prime each time it was contacted.
-
Saturday 24th October 2015 12:43 GMT calumg
Re: Is this a legacy problem
Distributing primes, even regularly, would be a problem, because priority targets would have all their encrypted comms archived. (At least, that's what I'd do if I were GCHQ.) So even if it takes longer than a month, once the prime is cracked, all the archives can be decrypted. A reminder that encryption keys should be long enough to prevent cracks by *future* computing power.
-
Saturday 24th October 2015 13:42 GMT Anonymous Coward
Re: Is this a legacy problem
"A reminder that encryption keys should be long enough to prevent cracks by *future* computing power."
Except sometimes future computing power involves a paradigm shift such that the fundamental method becomes vulnerable. In this case, D-H exchange becomes vulnerable if someone can produce a working quantum computer that can subject the shared secret to Shor's Algorithm (the scary part being it may already exist but no one knows about it because it's a "black" project). The more qubits the quantum computer can process, the larger a shared secret it can crack. That's why you can't really future-proof an encrypted communication that has unlimited useful life. Eventually, computing should be able to catch up, if not by sheer force, then by working around the problem.
As for archiving, couldn't you address this issue by some form of forward secrecy so that decrypting one message doesn't open the door to all the rest?
-
Saturday 24th October 2015 23:42 GMT Doctor Syntax
Re: Is this a legacy problem
The argument put forward in the paper is that the processing for a single prime is mathematically feasible but computationally very expensive. But what the researchers realised was that once you'd done it the additional work to crack any eavesdropped exchange was trivial and that as the same prime was being used by a large number of sites for a long period the cost could be spread out over an enormous amount of traffic. It was this which made it economically feasible.
If a larger number of primes were used for shorter periods, the economics would work against it and only a small proportion of the traffic would ever be decrypted. As to being concerned about future computing power, you have to remember that in order to make decryption worthwhile you have to do it whilst the messages are still relevant.
Nevertheless, a switch needs to be made to better algorithms or much longer primes.
-
-
-
-
Saturday 24th October 2015 08:58 GMT John Sager
A little clarification
The recent issue with Diffie-Hellman is that the standards, and a lot of implementations, use one specific 1024-bit prime known as 'Oakley Group 2'. The conjectured hack is to calculate a lot of specific data from this prime, which can then be used to rapidly break any shared keys generated by D-H using this particular prime if the D-H message exchange is observed. The counter is not to use that particular prime. More modern implementations tend to use Elliptic Curve Cryptography (ECC) which, as far as is known publicly, is secure with large enough fields over which the calculation is done. For example, OpenSSH has for a while preferred an elliptic curve algorithm called Curve25519, which is supposed to be about as hard to break as 128-bit AES, i.e. impractical currently.
Note: Although there has been a recent debacle over the NIST Dual_EC_DRBG random number generator, which uses elliptic curves in a specific way conjectured to be backdoored, this has no bearing on the general security of ECC.
-
Saturday 24th October 2015 09:15 GMT Alan J. Wylie
Re: A little clarification
At last, a post that clarifies that the recent issue is that a small number of 1024 bit primes are very commonly used, and that the NSA is theoretically capable of analysing each such prime within its budget and a timescale short enough to return a dividend.
https://freedom-to-tinker.com/blog/haldermanheninger/how-is-nsa-breaking-so-much-crypto/
For the most common strength of Diffie-Hellman (1024 bits), it would cost a few hundred million dollars to build a machine, based on special purpose hardware, that would be able to crack one Diffie-Hellman prime every year.
...
Breaking a single, common 1024-bit prime would allow NSA to passively decrypt connections to two-thirds of VPNs and a quarter of all SSH servers globally. Breaking a second 1024-bit prime would allow passive eavesdropping on connections to nearly 20% of the top million HTTPS websites.
-
This post has been deleted by its author
-
Saturday 24th October 2015 10:07 GMT Anonymous Coward
Re: A little clarification
And this is where the security guys really screwed up and screwed us. Even if they thought "it would take many years to crack this one prime" why the HELL did they use a single prime? Security isn't about doing 'just enough'...what happened to the idea of "defense in depth"?
Would it really have been that much harder to have it choose from a list of thousands of 1024-bit primes, a list that could be updated when the software was patched? It was sheer laziness, poor assumptions and stupidity on the part of the security community, along with a failure to recognise how difficult a task it would be to update all this software when the time came, that caused this, not the NSA's evilness.
-
Saturday 24th October 2015 20:26 GMT Bronek Kozicki
Re: A little clarification
Here is an explanation: the people implementing the small details of encryption/cryptography programs do not necessarily consider themselves part of the "security community". These are often "regular" developers who sometimes make poor choices, e.g. assuming that if a single prime is good, it can be reused over and over again. Why would a "good prime" matter? You cannot just use any prime in D-H, because some of these numbers would render your encryption incredibly weak. You need "good" numbers, and how do you find those? Well, for one, you can use those "known to everyone to be good".
-
-
Sunday 25th October 2015 13:17 GMT Bronek Kozicki
Re: A little clarification
In D-H key exchange there is no such thing as a "private key prime". The two numbers from which key exchange starts are called the generator (a very small number, usually 2, 3 or 5) and the modulus (which is very large; the currently recommended length is at least 2048 bits - OpenSSH randomly selects these numbers from the /etc/ssh/moduli file), and both are exchanged in public. Our focus is on the modulus. And yes, you cannot use just any prime number as a modulus; some of them are not suitable and will result in very poor encryption. Unfortunately, how that happens is beyond me. Luckily, openssh ships a tool called ssh-keygen which can be used to generate suitable prime numbers (which you can call "private" since you generated them yourself, but they are not private in the cryptographic sense).
As for the real private part, that's your random number, which has to be generated in a cryptographically secure manner and is never exposed at any point of the key exchange. A weak (i.e. predictable) random number generator is another possible weakness of D-H, but it is not discussed in the article. The focus is on a modulus (or moduli, plural) potentially compromised by the NSA, which is deemed possible for small, e.g. 1024-bit, primes.
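The roles described above (public generator, public modulus, per-party secret exponents) can be sketched in a few lines of Python. The modulus here is a tiny 32-bit prime chosen purely so the example runs instantly; as the comment says, real use needs a 2048-bit-plus modulus (or an elliptic-curve group).

```python
import secrets

# Public parameters, exchanged in the clear.
p = 2**32 - 5  # 4294967291, a prime -- far too small for real use, toy only
g = 2          # the generator: a very small public value, as noted above

# Each party's private exponent: a cryptographically random number
# that is never transmitted.
a = secrets.randbelow(p - 3) + 2  # Alice's secret
b = secrets.randbelow(p - 3) + 2  # Bob's secret

# Public values, sent over the wire and visible to any eavesdropper.
A = pow(g, a, p)  # Alice -> Bob
B = pow(g, b, p)  # Bob -> Alice

# Each side combines the other's public value with its own secret;
# both arrive at the same shared key, which never crosses the wire.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, and precomputing discrete-log tables for one widely shared p is exactly the attack the article describes.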
-
-
-
-
-
Saturday 24th October 2015 09:30 GMT Primus Secundus Tertius
Who is that sending?
Diffie-Hellman, as described in the article, does not guarantee who it is at the other end of the line. For a fixed landline circuit that is fine, but not on the packet-switched Internet. There, for something approaching a guarantee, you need public/private keys.
Then Able sends Baker a proposed key encrypted with Baker's public key. Baker returns that key encrypted with Able's public key. So each has confirmed the identity of the other.
Or have I misunderstood?
-
Saturday 24th October 2015 09:40 GMT Chris Miller
Re: Who is that sending?
I agree, and if there were a universal registry of trusted public keys and everyone had one, that would be fine. But the purpose of D-H (and similar protocols) is to generate a shared secret key using open messages over a public network, in a situation where at least one party has not implemented asymmetric encryption. It's also useful where one end would like both security and anonymity, often the case on public web services.
-