Re: Not to downplay the security hole....
Plus, in 30 seconds you could probably just replace the device with an identical-looking one that's bugged. Or you could implant a bug into one of them.
"In a world where most people aren't developers, most people will always run someone else's code."
You're completely missing the point. Of course you won't have to security-audit all the code you run yourself, but you can get code from other trusted sources. Just like people now replace their Windows XP or Windows 8 with some Linux distribution, or replace their manufacturer-branded Android with CyanogenMod, being able to choose what software runs on those devices is a good thing.
Just imagine Google deciding to "upgrade" the software to display ads, or to sell off the data those devices collect. Just because Google doesn't do this today doesn't mean it never will: the company could one day get into financial trouble and be sold to someone with other ideas. In the 1990s nobody would have thought IBM would sell off its PC division.
And seriously, how is the mentioned "security hole" even a security hole? If you have 10 seconds alone with such a device, you could also simply replace it with an identical-looking one, or stick additional hardware onto it.
I wouldn't want to run such devices with Google software designed to spy on me, but with software from a source I trust. In fact, since the task is rather simple, I'd want to be able to write my own software for those devices.
It's not a security vulnerability, it's a security feature. Running your own code means that you can get rid of all the security problems the manufacturer put in there.
We must stop seeing "running your own code" as a security problem. "Code is law", and only if you can decide what code a device runs do you truly own it. With more and more devices acting against the will of the person who paid for them, that really matters.
I disagree. From all I've heard, the specification itself is already far too complex to ever be implemented correctly. I mean, the reference implementations are already larger than the Linux kernel... and those implementations don't include any drivers.
It just seems like a heck of an overhead for booting and hardware support. Open Firmware did the same job much more cleanly, with much less code.
Maybe we should stop comparing EFI with the IBM BIOS and instead compare it to something that actually was "state of the art" at some point.
Well, it doesn't matter whether they _want_ to do it. Because of information asymmetry, they can simply be blackmailed into such positions.
This is why the Chaos Computer Club heavily advises against any sort of such cooperation. There simply is no way you can win in such a situation.
The Blackphone went down the wrong route. It's just a slightly modified standard phone.
The problem with that is complexity. Mobile operating systems are orders of magnitude too complex to be secure. More complexity means more errors, and more errors mean more security-critical errors.
Another problem on those devices is the several instances of "binary blobs": code running with very high privileges, facing the outside world, yet having never gone through any sort of security audit.
If you actually want a secure device, you need to design it differently. One important step is to spread the hardware across different components connected via simple interfaces. Today's mobile phones often have their GSM/UMTS/LTE baseband connected via shared memory or USB, which means that once the baseband is compromised it can plausibly attack the application processor, and therefore read out all the keys... or just fake the display.
If you instead had a simple high-speed serial port running a much simpler protocol like PPP, such an attack becomes so hard it is implausible.
You could have each function of your mobile phone handled by an independent microcontroller. The software running on each of those would be simple enough to be essentially bug-free, so it wouldn't need to be updated. Simple protocols could reduce the attack surface even more.
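To give a feel for how simple such a link protocol can be, here's a sketch of SLIP-style framing (along the lines of RFC 1055), shown in Python for readability; the special byte values are the real SLIP ones, everything else is illustrative. A parser this small can be audited line by line on a microcontroller.

```python
# SLIP-style byte-stuffed framing: two special bytes, two escapes.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(payload: bytes) -> bytes:
    """Wrap a payload in a frame, escaping the two special bytes."""
    out = bytearray([END])
    for b in payload:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    """Inverse of slip_encode; raises on a malformed escape."""
    payload = bytearray()
    it = iter(frame)
    for b in it:
        if b == END:
            continue              # frame delimiter, skip
        if b == ESC:
            nxt = next(it)
            if nxt == ESC_END:
                payload.append(END)
            elif nxt == ESC_ESC:
                payload.append(ESC)
            else:
                raise ValueError("malformed escape sequence")
        else:
            payload.append(b)
    return bytes(payload)
```

The whole attack surface of the link layer is those two functions; compare that to a shared-memory interface into the application processor.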
Without any need to update your software, you could just embed the electronics in transparent resin with a bit of glitter. That would even make the hardware tamper-evident.
Then you could greatly simplify the software architecture. Since it will always be possible to get keys out of your device, and since the CA concept of TLS is severely broken, you could just limit the device's communication to a single server you own yourself. Since you can exchange the key in advance, you can simply use symmetric cryptography. Securing a server is much easier than securing a device that lives in your pocket.
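As a rough sketch of the pre-shared-key idea, here is the authentication half using only Python's stdlib HMAC; a real device would additionally encrypt the payload with an AEAD cipher from a vetted library. The wire format (counter | payload | tag) and all names are made up for illustration.

```python
import hashlib
import hmac
import os

# Pre-shared key, exchanged in advance (e.g. flashed into the device
# once, offline). os.urandom here is just a stand-in for provisioning.
PSK = os.urandom(32)

def seal(counter: int, payload: bytes) -> bytes:
    """Attach a MAC over a monotonically increasing counter plus payload.
    The counter defeats replay. Hypothetical wire format:
    8-byte counter | payload | 32-byte HMAC-SHA256 tag."""
    msg = counter.to_bytes(8, "big") + payload
    tag = hmac.new(PSK, msg, hashlib.sha256).digest()
    return msg + tag

def open_sealed(blob: bytes, last_counter: int) -> bytes:
    """Verify tag and counter; return the payload or raise."""
    msg, tag = blob[:-32], blob[-32:]
    expected = hmac.new(PSK, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time compare
        raise ValueError("bad MAC")
    counter = int.from_bytes(msg[:8], "big")
    if counter <= last_counter:
        raise ValueError("replayed message")
    return msg[8:]
```

No certificate chains, no CAs, no negotiation: both sides already know the one key that matters.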
They claim security benefits from encrypting RAM, implemented by a "secure hypervisor" running from CPU cache. That is hard enough to do well, yet they don't seem to have any actual security credentials.
The way they try to get around the obvious "boot another OS" attack is by using bootloaders that only run signed code... something that may sound good in theory until you realize it typically depends on certificate chains, which have failed in so many places and are regularly exploited on the Internet. It's not designed to protect the user, but to protect business models.
In essence, they are running more code, which means more bugs and therefore more security-critical bugs. There's very little security benefit in that.
The "GSM" baseband is very complex, adding layer upon layer of code to implement standards which are in part badly designed.
Added to that is the assumption that the network is always trustworthy, so those implementations were never tested against malevolent networks.
What makes this a really big problem is that some mobile phone manufacturers use shared memory to have the baseband talk to the application processor. So if you take over the baseband CPU you'll likely be able to compromise the rest of the system.
The problem is that we are increasingly cutting people off from what's below the shiny surface. In fact, on many mobile devices you don't even get root access by default.
Compare that to the home-computer era. Sure, most people used them to play games, but once you turned one on, you were presented with a fully fledged command prompt in the form of a BASIC interpreter.
I don't see how that would work. TCP is rather good at streaming data over long-latency connections: you just push in your data and it comes out with the latency of the line. Having a bit more or less data wouldn't change the latency... Besides, there are WebSockets for that kind of thing.
We all know that SSL is broken in so many ways that we should just abandon it and replace it with something more like SSH. Mandating SSL will only slow that process down, and it'll cause lots of problems besides.
I don't see the point of compressing headers. The web isn't slow because we use an uncompressed text-based protocol. The web is slow because idiotic web designers spread their content across dozens of domains (causing DNS queries) and bloat the headers with cookies.
Well, BlackBerry is needed to lure people onto what is probably the easiest platform for a large attacker to get access to. I mean, they even sent the e-mail passwords to a BlackBerry server. The intended use case involves a "backend server" which runs on Windows with SYSTEM rights.
It's just like saying "Google Mail is bad, let's all switch to De-Mail".
I mean, we already let computers make decisions which are bad for society, for example in high-speed trading. As long as this is not explicitly forbidden, corporations will keep doing it.
Corporations themselves are like machines. Although the individual parts are humans, the whole behaves like a being of its own. That is why corporations must never be half-treated as people, as is done now in the US, where corporations can do nearly everything people can but cannot be sent to jail. If you send an individual member of a corporation to jail, the corporation simply works around that missing part.
...that Cisco actually fixes their bugs.
The strategy of the NSA is not to do the bare minimum to get to the data, but to do everything they can. So they probably knew about such bugs, but still added hardware... just because they can and they want to have redundancy.
Well, of course the US pushed the DMCA. However, if you go to a politician outside of the US, they will always refer to the international agreements.
For a US politician, international agreements are not an argument; the US just uses them to make life worse in other places and ignores them when they become problematic.
Unfortunately the US is probably the only country where this is possible, since braindead international treaties are used by other countries to argue against abolishing their local DMCA equivalents. In the US, nobody cares about international law.
C is a powerful tool in the hands of capable people. Its natural environment is UNIX and simple systems.
One should note that good C programmers don't program complex things directly in C. This may sound paradoxical, but what they actually do is write a small "interpreter" which interprets data structures containing the actual logic, thus creating something like a domain-specific language. C, with its data and function pointers, makes this very simple. This is the true strength of C.
Apparently that is not what people have been doing here: they literally programmed complicated things directly in C, making their lives unnecessarily hard and risking serious problems if they mess up.
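To illustrate the table-driven "small interpreter" pattern: here is a toy sketch, written in Python for brevity; in C the table entries would be function pointers in a struct array, but the shape is the same. The opcodes and the stack machine are invented for this example.

```python
# The logic lives in data; the code just walks the table.
def op_add(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def op_mul(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

# The dispatch table: in C, a struct array mapping opcode -> function
# pointer. Adding a feature means adding a table entry, not new control
# flow in the interpreter core.
OPS = {"add": op_add, "mul": op_mul}

def run(program, stack=None):
    """Interpret a program given as a list of opcodes and literals.
    Literals are pushed; opcodes are looked up and dispatched."""
    stack = stack if stack is not None else []
    for instr in program:
        if instr in OPS:
            OPS[instr](stack)
        else:
            stack.append(instr)
    return stack
```

For example, `run([2, 3, "add", 4, "mul"])` evaluates (2+3)*4 without the arithmetic ever appearing in the interpreter loop itself: the complicated part stays in a tiny, easily audited core.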
Well, the point of that new OS is that the code has been proven not to suffer from certain kinds of bugs. Since such a proof is very hard to do, they only did it for very little code, hence a microkernel. The hope is that a "secure" microkernel can then secure the rest of the system... which is not necessarily true.
However it is a big step towards security.
AV companies started their products in the 1990s, back when hardly anybody was good at programming, at least not the people who programmed for Windows.
Then they kept putting layer upon layer of complexity on top. First they only scanned files, then they scanned archives, and they continue to mess around with ever more complex programs. If a team dedicated to one compression algorithm cannot get it right, why should a team also responsible for lots of other things get a whole bunch of compression algorithms right?
Among security people, AV is seen as snake oil. It cannot work in principle, so they won't work on such projects.
Lastly, to answer the question why browsers are harder to exploit than AV software: browsers have been mostly open source for more than a decade now, and are actively researched and attacked by a large variety of people. Compare that to AV software, which nobody who knows about security cares about.
...that there don't appear to be any simple, published standards. Sure, there are standards in use to get the multicast streams from your ISP, but you'll never know which ones they use or whether there's DRM in there.
What we need is a public standard for IPTV: something DRM-free which I can ask for in a store, like DVB-T or DVB-S. This is the basis for interoperability, particularly for open source solutions, which become more and more important as commercial solutions increasingly act like malware.
I have to say you are still lucky if you have a full-blown Linux system, as there you at least have a chance. We had to work with Nucleus, an operating system which had its own "Ping of Death" bug. To be fair, trying to respond to a 64k ping when you only have 30k of RAM left is a fairly futile task. Then again, the code was so bad that every DNS query leaked 512 bytes of RAM. Again, you won't notice that on short test runs when you have megabytes of RAM.
The really big problem is that lots of people who have no idea of secure, or even practical, software design are now being swept into positions where they have to build complex embedded systems.
Yes, on high-latency connections this could bring a considerable improvement. However, it would require a new protocol, a kind of TCPwR (Transmission Control Protocol with Redundancy).
There are two problems with this:
1. It won't go through unmodified NAT.
2. It can be hard to implement.
The first problem is particularly bad with the "carrier-grade NAT" you commonly find on high-latency mobile connections, or mid-latency consumer connections.
The second one is evident if you look at real-life implementations of TCP/IP stacks. There are still ones, particularly in embedded systems, with severe problems. For example, the Nucleus stack just tends to drop connections without telling the application about it. Adding more complexity will cause lots of problems.
Maybe one sensible way of doing it would be to extend TCP in some way so connections could easily fall back to standard behaviour.
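For illustration only, here's what the redundancy idea boils down to, simulated in a few lines of Python; TCPwR is hypothetical, and the function, parameters, and loss model here are all made up. Each segment is sent k times over a lossy link and the receiver deduplicates by sequence number, so a single lost copy costs no round trip.

```python
import random

def send_with_redundancy(segments, k=2, loss=0.3, rng=None):
    """Simulate sending each numbered segment k times over a link that
    drops each copy independently with probability `loss`, then
    reassembling on the receiver side. Duplicates are discarded by
    sequence number. Illustrative simulation, not a wire protocol."""
    rng = rng or random.Random(42)
    received = {}
    for seq, data in enumerate(segments):
        for _ in range(k):                      # redundant copies
            if rng.random() >= loss:            # this copy survived
                received.setdefault(seq, data)  # duplicates ignored
    # Deliver in order; in a real protocol, gaps in the sequence
    # numbers would trigger a conventional retransmission.
    return [received[s] for s in sorted(received)]
```

With k copies and independent loss p, a segment is only lost with probability p^k, which is exactly why this helps on high-latency links: retransmissions that cost a full round trip become rare.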
If there weren't any DRM, services like Netflix could easily be cached via transparent caches and we wouldn't have this problem.
Other than that, it seriously makes me wonder how bad infrastructure in the US must be if ISPs actually cannot get proper, affordable upstream bandwidth.
...at least not in anything "communications grade". Of course it does in cheap S/PDIF-like systems, but if you want to reach high speeds you run into a rather simple problem: the parts of the light that bounce around take considerably longer to arrive than the parts that go straight through the middle. This may not sound like a lot, but it adds up. On a 100 km cable, paths a percent longer or shorter can really spoil your bandwidth.
Instead, fibre-optic cables actually work more like microwave waveguides, providing an environment where, ideally, only the wave you are interested in can exist. This involves lots of math.
Apple is one of the few companies that don't give out their source code. What other reason, except betraying the user, can there be for this?
I think we should ban binary-only software. It's not just too much of a security risk; it's also a question of consumer rights. If I buy a car or a vacuum cleaner, I have every right to modify it in any way I want. Why don't I have that right with software? Why can't I just patch out features I don't like, or patch in features I'd like to have?