And Lo!
The only thing on this morning's updates: libgcrypt...
Linux users need to check with their distributions to see if a nasty bug in libgcrypt20 has been patched. The software fix, which has landed in Debian and Ubuntu, addresses a side-channel attack published last week. The researchers published their work at the International Association for Cryptologic Research's e-print archive …
...however, for most people the attack is rather theoretical: the attacker would have to execute code on your computer to exploit it. Of course, with more and more stupid things like WebAssembly forced upon us, this does become more and more likely to be exploitable.
Why would WebAssembly be any different to anything else? Do you even understand what it is and does?
WebAssembly is nothing more than a cut-down interpreted VM, like Java used for decades, in a very limited scope, executing only with the privileges given to a normal HTML page. Just because it has the word "Assembly" in there, don't fool yourself into thinking that it's actually executing anything natively. It's still just a compressed version of a very limited instruction set, subject to the browser security controls (which are a lot stricter now than they were in the days of plugins - Java plugins were basically given full run of the machine, while WebAssembly can't even open a network socket as you would expect - it gets encapsulated as a WebSocket that will only communicate on web ports).
You need to run software. In the absence of direct execution, that means running an interpreted restricted-instruction-set language. Whether or not you want to run software from a particular website? Well, that's a browser control. But WebAssembly has NOTHING to do with executing code on your processor, certainly not one that interacts in any way with local shared libraries, and certainly not one that can just execute routines and pass stuff off to your OS.
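To illustrate the sandboxing argument above with a minimal sketch (not code from the article): a WebAssembly module can only reach what the host explicitly hands it through its import object. The hand-assembled module below exports a single `add` function and declares no imports at all, so it can compute but has no path to the network, the filesystem, or any local shared library:

```javascript
// A hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// It declares no imports, so the host grants it nothing beyond pure
// computation -- no route to the OS or to local shared libraries.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

const module = new WebAssembly.Module(bytes);
// The second argument is the import object: everything the module can
// reach from outside comes through here.  Empty = no capabilities.
const instance = new WebAssembly.Instance(module, {});

instance.exports.add(2, 3); // 5
```

Anything more capable (DOM access, fetch, WebSockets) only happens because the embedding page passes those functions in, and they remain subject to the same browser controls as ordinary script.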
"WebAssembly is nothing more than a cut-down interpreted VM, like Java used for decades". I know nothing about WebAssembly, but people have certainly demonstrated side-channel attacks using JavaScript (which fits that description). The whole *point* of side-channel attacks is that they need no privilege to perform. I see no intrinsic reason to suppose they can't be implemented in WebAssembly too.
In which case we're all dead, because HTML is just the same - any website is potentially a complete compromise by that reasoning.
However, in practical terms, WebAssembly is low down on the list of possible avenues, as is modern JavaScript (though its scripting of ActiveX etc. was always its main problem, not the JavaScript itself). Just above HTML, and probably just below JavaScript.
And it's REALLY difficult to execute any kind of local attack utilising local C-written shared libraries that have nothing to do with the browser by any of those. Honestly, those are not the issues to worry about.
You'd be measurably safer if all your application writers recompiled their apps to WebAssembly and you only accessed them via a browser. However, you'd also lose a lot of functionality in the process - e.g. opening local files, network communication etc. - because of the browser security model that would be imposed on them by doing so.
> You'd be measurably safer if all your application writers recompiled their apps to WebAssembly and you only accessed them via a browser.
I am happy to believe you. But you would be much safer still if the apps remained on the websites where they belong and the browser was just using HTML.
Obviously not everything could be done that way, but the answer is not to make it easier to create pages which do a lot of processing locally, particularly processing which is not easily inspected by human beings.
If you've got infinite server capacity that's fine. If you're concerned about server load then using the processing power of the client machines is a sensible way to scale out. Why would you call the server to do an animation, or filter a list that's already in memory as the user types? You can do a round trip per keystroke, but if all you're doing is string matching it's pretty inefficient.
There are certain functions that should live on the server (authentication and data persistence being two of the main ones), but there's another set of application functions that makes much more sense running on the client.
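The filter-as-you-type case above can be sketched in a few lines (names and data are illustrative, not from any particular site): one server call fetches the list, and every keystroke after that is a local substring match with no round trip at all.

```javascript
// Filter an in-memory list as the user types, entirely client-side.
// The server was called once to fetch the list; each keystroke then
// does a local, case-insensitive substring match -- no round trip.
const items = ["alpha", "beta", "gamma", "alphabet"];

function filterItems(list, query) {
  const q = query.toLowerCase();
  return list.filter(item => item.toLowerCase().includes(q));
}

filterItems(items, "alph"); // ["alpha", "alphabet"]
filterItems(items, "MM");   // ["gamma"]
```

In a real page this would be wired to an input's event handler; the point is simply that the matching itself needs nothing from the server.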
"Why would you call the server to do an animation"
Why the fuck would most people want to see an animation for animation's sake?
If it actually genuinely adds value to whatever's going on, fair enough, but the number of those is massively outnumbered by the number of pointless animations added because some dumb presentation layer person liked the look of it in the office.
> data leak was tolerated because it was believed only part of a key was recoverable
Such assumptions do grate with me. Every bit you allow to leak literally halves the keyspace. With large key sizes it may still remain practically secure (half of unimaginably massive is still unimaginably massive), but I would hope that they would be uncomfortable about it and there is some Todo on it.
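The arithmetic behind "every leaked bit halves the keyspace" is easy to check (a sketch with illustrative figures, not numbers from the paper):

```javascript
// n leaked bits shrink a k-bit keyspace from 2**k to 2**(k - n),
// i.e. each leaked bit halves it.  BigInt keeps the values exact.
const keyspace = (keyBits, leakedBits) => 2n ** BigInt(keyBits - leakedBits);

const full  = keyspace(2048, 0);   // 2^2048 candidate keys
const leaky = keyspace(2048, 100); // 2^1948 -- still unimaginably massive

full / leaky === 2n ** 100n; // true: 100 leaked bits cost a factor of 2^100
```

Which is exactly the "half of unimaginably massive" point: the remaining space is astronomically large, yet every leaked bit is a real, measurable loss.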
> Such assumptions do grate with me.
Fortunately, you can always set up a bounty to entice the right people to work on it.
> I would hope that they would be uncomfortable about it and there is some Todo on it.
"They"?
git clone git://git.gnupg.org/gnupg.git
It's as much "theirs" as it is ours, you know. With FOSS, we share a responsibility for making the software as good as we want it to be.
Perhaps we could have an icon for "the old fix-it-yourself chestnut", which would be ideal for comments like this. I'd draw one myself but have a headache, and am a bit busy trying to recreate paracetamol from the open-source description of the molecule. I have no expertise in the area, but all you need is the source, right?
@AC, sorry, not following your point. I am not railing against the failure of some open source project to implement some feature that I want. I do not personally use GnuPG, but no doubt some information about me is at some point encrypted using this library, so I am an indirect stakeholder. Or do you personally also check up on OpenSSL, MS crypto, and the dozens of others that other people handling your data may be using?
This isn't even a complaint that they got something wrong. Good crypto is freaking hard at the best of times. But what has happened here, according to TFA, is that they *knew* of the partial compromise but believed the keyspace was big enough that they could get away with it. As with the OP above, that sort of "near enough is good enough" attitude, bred through an entire codebase of a security product, is concerning. All I said was that I hope that when they concluded "not practically crackable because of the massive key sizes at play" they nonetheless had an expectation that they needed to get back and fix it properly anyway.
Which TBH I doubt many people would have considered important.
Although that's in the open literature.
Who knows what various TLAs have investigated.
It's a library specifically labelled for cryptography. It's likely to have been high on their study list.
> It's a library specifically labelled for cryptography. It's likely to have been high on their study list.
And they probably found it and would have known that their counterparts in <insert name of currently despised foreign power> would probably have also found it. But rather than protect us by having a quiet word with the GnuPG maintainers they chose to not tell anyone -- presumably hoping to break crypto on messages.
One does wonder which side the TLAs are on? The general population, or some shadowy masters?
Which makes this a pretty valuable result, whether or not it compromises this particular algorithm.
Proving once again that "Crypto is tricky."
However, those who bought Sky TV's premium packages can rest assured their content will not be pirated, as they use at least 2048-bit RSA keys for their encryption.
Your content will continue to remain exclusive, and not to be enjoyed by the riff-raff unwilling to pay the "Murdoch tax."
Rather than enumerate two distributions, it's more informative, I think, to name the version of the library where the problem is fixed, so people can quickly know whether their fully-patched machine is, or is not, safe.
The issue was fixed in libgcrypt 1.7.8. If you have that version, you have a fixed libgcrypt.
The release announcement for that version is here:
https://lists.gnupg.org/pipermail/gnupg-announce/2017q2/000408.html
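One small trap when acting on "fixed in 1.7.8": comparing dotted version strings needs a numeric, component-wise comparison, because plain string ordering would wrongly rank "1.7.10" below "1.7.8". A minimal sketch (the helper name is mine, not from any libgcrypt tooling):

```javascript
// Compare dotted version strings numerically, component by component.
// Lexicographic string comparison fails here: "1.7.10" < "1.7.8".
function versionAtLeast(installed, required) {
  const a = installed.split(".").map(Number);
  const b = required.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0; // missing components count as zero
    const y = b[i] ?? 0;
    if (x !== y) return x > y;
  }
  return true; // versions are equal
}

versionAtLeast("1.7.8", "1.7.8");  // true
versionAtLeast("1.7.10", "1.7.8"); // true: 10 > 8 numerically
versionAtLeast("1.6.9", "1.7.8");  // false
```

The same rule applies whichever way you obtain the installed version number from your distribution's package manager.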
> Rather than enumerate two distributions, it's more informative, I think, to name the version of the library where the problem is fixed, so people can quickly know whether their fully-patched machine is, or is not, safe.
Exactly, which is also why the CVE is useful (for those of us who read the update logs anyway).