IETF wants packets to prove where they've been, to improve trust
Virtualization changes everything – and in the case of the routers that keep the Internet working, it's not always in a good way. Over the years, the IETF has accepted a variety of proposals that let network admins stipulate where their packets will go, under working groups like Internet Traffic Engineering and Service …
COMMENTS
-
Monday 4th June 2018 05:35 GMT Pascal Monett
So each node adds its part of the key
What about routing around failures ? The Internet is supposed to remain flexible, there is no single path from source to destination because if there were, a connection failure along the line would mean packets don't arrive any more at all.
How do they take that into account ?
-
Monday 4th June 2018 12:33 GMT Robert Carnegie
BGP misuse
I think it's called that... it's possible for a national Internet provider in Russia or China to announce itself as the quickest route to Google or Apple or TSB or whatever, so that traffic for much of the Internet is sent to that provider. When this has happened, and I understand it has, it usually appears to be by accident, but you never know.
I assume that this secure traffic mechanism does allow more than one secure route to be declared permissible, so that if one of your data centres suffers a power cut or a tactical nuclear missile strike then your network keeps running, but I don't know about it.
-
-
-
Monday 4th June 2018 06:06 GMT Anonymous Coward
For reference
Shamir secret sharing forms the basis of most of the backdoor scenarios proposed by various security services. I can't see this ending at all well, anyway. Too often we've seen side-channel attacks of one sort or another that result in "known only by ... and ..." no longer being true. The NSA, it seems, makes a specialty of getting at things it swears up and down are secure, but ....
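For anyone who hasn't met it: Shamir's scheme hides a secret as the constant term of a random polynomial over a prime field, hands out points on that polynomial as shares, and recovers the secret by Lagrange interpolation once enough points come back. A minimal Python sketch (toy prime and made-up function names, purely illustrative, not a hardened implementation):

```python
import random

PRIME = 2**61 - 1  # toy Mersenne prime field; real deployments use vetted parameters

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it (degree k-1 poly)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
print(recover(shares[:3]))  # any 3 of the 5 shares recover 123456789
```

Fewer than k shares reveal nothing at all about the secret, which is exactly why k-of-n escrow keeps getting floated in key-recovery proposals.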
-
Monday 4th June 2018 13:03 GMT handleoclast
Ugh
We appear to be heading to the point where even a trivial UDP packet of minimal length will end up sized in kilobytes because of all the overhead it accumulates on the way. It was bad enough with all the shit IPsec added, but this on top is even worse.
Hmmm, just how will this variable-length header (it grows with every hop the packet traverses) coexist with IPsec? I shudder to think what the answer is going to be.
-
-
Monday 4th June 2018 19:11 GMT Claptrap314
What am I missing?
I fail to see the point here. From my (limited) understanding of the net, I thought that the entire point of at least a couple of layers of protocols is that, as an end user, you do NOT need to know or care about what nodes are actually being transited. Authentication and encryption are endpoint concerns.
Yes, when you get to the level of evading traffic analysis, then more effort is needed. But the v2 remailers figured this out around 1995. I've not followed the thread that carefully, but I assume that this is what TOR does.
-
Monday 4th June 2018 20:21 GMT Anonymous Coward
Re: What am I missing?
The difference, so far as I understand this, is that the purpose of TOR is to obscure the path which encrypted packets take between source and destination devices. Think of that as multiple layered VPNs between the two.
What we have here is an attempt to attest that the only path(s) taken was/were between approved devices. At the destination you validate, using Shamir secret sharing, the share contributed by each node; if the path does not meet your requirements (only certain devices along the way), you drop the packet and request it again. Hopefully this time it travels only on the data streets and freeways you require. That becomes an interesting process in itself. I need to go back and see if I can figure out how you handle reliability issues, ACKs, ....
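To make that accumulate-then-validate-or-drop flow concrete, here is a deliberately simplified sketch. It uses plain XOR (additive) secret sharing instead of Shamir's polynomial scheme, and all the names (provision, forward, validate) are invented for illustration:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def provision(n_nodes: int):
    """Controller hands one random share to each node on the approved path;
    the destination gets the XOR of all shares as the expected value."""
    shares = [secrets.token_bytes(16) for _ in range(n_nodes)]
    return shares, reduce(xor, shares)

def forward(proof: bytes, share: bytes) -> bytes:
    """Each transit node folds its share into the packet's proof field."""
    return xor(proof, share)

def validate(proof: bytes, expected: bytes) -> bool:
    """Destination: accept only if every approved node contributed."""
    return proof == expected

shares, expected = provision(3)
proof = bytes(16)             # proof field starts zeroed at the source
for share in shares:          # packet transits each approved node
    proof = forward(proof, share)
print(validate(proof, expected))      # True: every approved node was visited
print(validate(bytes(16), expected))  # False: missing shares -> drop, re-request
```

A packet that skipped (or detoured around) any provisioned node arrives with the wrong accumulated proof, so the destination drops it, which is the "go back and request the packet again" step above.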
The best real-world example I can come up with would be having direct, surveilled links over dedicated copper, fiber-optic, or maser/microwave systems. Exactly as is supposed to be the case between, say, the Pentagon and the White House. You can be damned certain you don't get to the White House by some other path, say any other network of whatever type. That really does happen In Real Life. Tack attestation of the path onto that, rather than just protection by observation, and you pretty much have this.
I wouldn't be at all surprised if that model/technique is behind this. Not a new problem, more an old one where public networks are perhaps involved to give it a new spin.
-
Monday 4th June 2018 20:38 GMT Claptrap314
Re: What am I missing?
I think you're missing my points.
TOR (and the v2 remailers) is not about obscuring the path of travel per se. It is about obscuring the connection between endpoints. To achieve that, it obscures the path of travel between servers. BUT... packet routing still has to happen, and to achieve this, the sender includes a message for each server along the route stating where the packet goes next. This message is encrypted with the public key of that server, so only that server can read it. In this scenario, the fact that the receiver got the message at all attests that the series of servers the sender chose did in fact receive the packet.
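That layered wrapping can be sketched as follows. Caveat: real onion routing encrypts each layer to a server's public key; the hash-derived XOR keystream below is only a dependency-free stand-in, and the node names and keys are invented:

```python
import hashlib
import json

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream. Stand-in ONLY: real onion
    routing uses public-key cryptography per hop, not this."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

def wrap(payload: bytes, route):
    """Sender builds layers innermost-first; each hop can only ever see
    the next hop's name, never the whole route."""
    blob = toy_cipher(route[-1][1],
                      json.dumps({"next": None, "body": payload.hex()}).encode())
    for i in range(len(route) - 2, -1, -1):
        layer = {"next": route[i + 1][0], "body": blob.hex()}
        blob = toy_cipher(route[i][1], json.dumps(layer).encode())
    return blob

def unwrap(blob: bytes, key: bytes):
    """One node peels one layer: learns the next hop and an opaque inner blob."""
    layer = json.loads(toy_cipher(key, blob))
    return layer["next"], bytes.fromhex(layer["body"])

route = [("A", b"key-a"), ("B", b"key-b"), ("C", b"key-c")]  # invented names/keys
hop = wrap(b"meet at dawn", route)
next_hop = None
for name, key in route:
    next_hop, hop = unwrap(hop, key)  # each node learns only its successor
print(hop)  # the plaintext emerges only at the exit node
```

Note what this buys and what it doesn't: each node can prove it was chosen by the sender (it can decrypt its layer), but nothing stops a node from relaying the inner blob via some unexpected intermediary, which is exactly the gap in the next paragraph.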
Who else got it? What if server X tries to forward a packet to server Y, but the direct link is down? It sends it to server Z with instructions to send it to Y. Furthermore, there is nothing that an endpoint can do to require X to only send directly to Y. As long as Y accepts connections from Z, then the packet is going through, because Y cannot know if the packet was supposed to come from Z or not. And Z cannot tell if the packet was supposed to go through Z, because X can rewrap the packet.
OTOH, if you own all the servers in the network you are using, then the standard routing information that gets added, for instance in our email headers, works just fine.
So... I'm still missing what the need for this thing is.
-
-
-
Tuesday 5th June 2018 00:25 GMT eldakka
If the IETF instead mandate end-to-end encryption, i.e. no MITM-type appliances/network infrastructure in between the services at either end, and each service does end-to-end encryption between itself and other sub-services (e.g. web server to database server, or web server -> application server -> database server, each element encrypted, and where server = software stack service, not 'a box'), then what care (beyond latency impacts) do we have what path has been taken?