
Posts by Crypto Monad

73 posts • joined 14 Dec 2017


Keen for much-hyped quantum computing to finally land? Don't expect it for a decade

Crypto Monad

> And land. Let's not forget landing. Any idiot can take off and fly and largely be successful at avoiding other things in the air. But getting safely back on the ground again? That's the REALLY hard part.

Landing at an airport is actually pretty easy - commercial planes have had fully automated landing systems for years. It's very much following rules.

Landing safely in someone's garden wouldn't be that hard if the flying car has drone-like flying characteristics. Choosing a sensible landing spot might be harder, but at worst nominated "safe places" could be marked up on a map. Not landing on top of another flying car or human is probably the most difficult part, but that's comparable to the job a self-driving car has to do.

Crypto Monad

> although crack AI, and you have driverless cars anyway

That depends on what you actually mean by "AI", which changes decade by decade.

Firstly it meant tree-searching algorithms (a machine that can play chess).

Then it meant fuzzy logic and expert systems (a machine that can diagnose disease).

Then it meant pattern matching and neural networks (a machine that can recognise faces).

None of these is anything like the public perception of AI, which is more along the lines of "I, Robot" or "Ex Machina": a fully self-aware, "living" machine.

If we get the latter, then it will be able to drive your car. Whether it chooses to or not, is another matter.

If you ever felt like you needed to carry 4TB of data around, Toshiba's got you covered

Crypto Monad

Re: Eggs. Basket.

Proper backups allow point-in-time recovery. What you have is periodic replication of the most recent state only.

Consider the following sequence:

1. you accidentally delete (or corrupt) a file, but don't realise immediately

2. you make your regular "backup"

3. you realise what you did in step (1)

You now have no way to recover the data.
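
A minimal sketch of one way to get point-in-time recovery with ordinary tools (paths and dates are made up): rsync's --link-dest option hard-links unchanged files against the previous snapshot, so each dated directory is a complete, browsable copy while only changed files take up new space.

$ rsync -a --link-dest=/mnt/backup/2018-12-01 /home/ /mnt/backup/2018-12-02/
$ cp "/mnt/backup/2018-11-30/docs/report.odt" ~/docs/   # recover the file you deleted in step (1)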

Google: Psst, hey kid, want a new eSIM? Our Fi has one right here

Crypto Monad

Re: Overpriced?

"Pixels can be provisioned by Deutsche Telekom in Germany, EE in the UK"

So: one provider in each big marketplace.

Wake me up when there is a drop-down menu that lets me choose plans between multiple providers. Until then, the hassle of obtaining and plugging in another SIM is still going to work out far cheaper.

Huawei MateBook Pro X: PC makers look out, the phone guys are here

Crypto Monad

Re: I rather like it, but for one detail

The IPv6 Buddy suggestion was tongue-in-cheek - but if you google "usb numeric keypad", you'll find a ton of standard-layout ones, many for under a tenner - and bluetooth options too.

Crypto Monad

Re: I rather like it, but for one detail

You could always plug in an IPv6 Buddy

Boeing 737 pilots battled confused safety system that plunged aircraft to their deaths – black box

Crypto Monad

> We don't tolerate autopilot for trucks or chartered busses

Trucks have been suggested as one of the first applications for self-driving vehicles.

A rumble in Amazon's jungle: AWS now rents out homegrown 64-bit Arm server processors

Crypto Monad

> The world needs competition to Intel for over 15 years now.

There has been a resurgence of rumours of Apple moving to ARM too.

Apple have done this twice before: Motorola to PPC, and PPC to Intel. Of course, you are best placed to do this if you own the OS and have good influence over the application ecosystem.

I don't see Dell and HP pushing ARM while they are so reliant on Microsoft; and all Microsoft's ARM products to date have been so awful, you'd think they did it on purpose just to keep Intel happy.

But Apple could tip it. Once people are happy with ARM on a laptop, an ARM Mac Mini could be the breakthrough into desktop and/or small server environments.

The other big selling point ARM have is the trustworthiness (or lack of it) of Intel chips.

Well now you node: They're not known for speed, but Ceph storage systems can fly

Crypto Monad

Re: 6ms+ w NVMe

> Maybe Ceph is short for Cephalopod

Err, yes it is. The company was called "Inktank" before being bought by RedHat.

Crypto Monad

Re: 6ms+ w NVMe

The article wrongly states that the reference architecture requires 3TB (!) of RAM in each node.

If you read the document, you find that the servers are *capable* of 3TB, but the reference configuration uses 12 x 32GB DIMMs = 384GB.

(Still quite a lot though)

A 5G day may come when the courage of cable and DSL fails ... but it is not this day

Crypto Monad

Re: 46.2Mbps fiber?

> VDSL2 gets up to 200-300Mbps.

Actually VDSL2 is what we use in the UK, but because we use profile 17a, the maximum speed is 100M (capped to 80M by OpenReach)

Some countries use profile 35b, which could do up to 300M in the best case. Unfortunately, OpenReach decided to do G.fast instead.

G.fast is crippled by skipping over the VDSL2 17a lower frequency bands, to avoid interference. But those are the frequencies which propagate better over longer distances. As a result, beyond about 500m, G.fast is actually *slower* than VDSL2.

Plus: because there are LLU providers with their own ADSL modems in exchanges, OpenReach run VDSL2 with a reduced power level to avoid ADSL interference. Again that reduces the speeds obtainable on VDSL2.

Behold, the world's most popular programming language – and it is...wait, er, YAML?!?

Crypto Monad

Re: No and yes [Was: HTML-only calculator?]

LISP is a programming language, and LISP is written in S-expressions; YAML is comparable to S-expressions.

Sadly, there are a bunch of programming languages which are indeed programmed in YAML. Two examples are Ansible and OpenStack Mistral. They are both excellent examples of Greenspun's Tenth Rule.

But that doesn't make YAML itself a programming language.

Cathay Pacific hack: Airline admits techies fought off cyber-siege for months

Crypto Monad

Re: Flight Pattern

> The lucky ones have excellent IT teams and hardware and appropriate budgets and can defend themselves to a certain point.

Or at least they have logs and/or other ways of detecting attacks.

The others have probably been attacked but just don't know it.

Lucky, lucky, Westminster residents: Who better to look after your housing benefits than Capita?

Crypto Monad

Automation and robotics?

ED-209 turns up at your door.

"Your council tax is overdue.

You have 20 seconds to comply."

Mourning Apple's war against sockets? The 2018 Mac mini should be your first port of call

Crypto Monad

Re: Macs typically have a longer usable life than Windows PCs ...

> Is any linux distribution from 2007 still supported?

Yes.

RHEL 5 (from March 2007) still has "Extended Life-cycle Support" available until November 2020. This "delivers certain critical-impact security fixes and selected urgent priority bug fixes and troubleshooting for the last minor release" - for a price.

See https://access.redhat.com/support/policy/updates/errata#Life_Cycle_Dates

RHEL 4 (from Feb 2005) is technically still in its "Extended Life Phase", but since support has ended, this doesn't count for much. "No bug fixes, security fixes, hardware enablement or root-cause analysis will be available during this phase". You just get access to the documentation and knowledgebase.

Crypto Monad

Definitely not trash.

If you want a powerful server that you can stick in your rucksack or airline carry-on bag, there's not much to match this currently.

The Intel Skull Canyon NUC is similar size and weight by the time you've included the PSU brick, but is limited to 32GB RAM and 4 cores (Mac Mini does 64GB and 6 cores). The NUC does have two replaceable PCIe SSD slots though.

It's been a week since engineers approved a new DNS encryption standard and everyone is still yelling

Crypto Monad

Re: Not one to nitpick but...

> what is stopping DoT from using port 443 too?

Because HTTP and DNS are different protocols with different payload formats. The whole point of a well-known port is that you know in advance which protocol you are supposed to speak, when you open or accept a connection.

> block known DoH server IPs

That's called whack-a-mole, and it doesn't work.

Remember that the first provider of DoH services is CloudFlare. They could enable DoH on *all* their front IP addresses. In that case, it would be impossible to block DoH without also blocking all sites hosted on CloudFlare (including El Reg)
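
To illustrate the point: a DoH lookup is just an ordinary HTTPS request to port 443, indistinguishable on the wire from fetching a web page. For example, against Cloudflare's documented JSON endpoint:

$ curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=theregister.co.uk&type=A'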

> I cannot see the logic in involving HTTP. ... why it makes any more sense to do so with DNS?

In other words: why are some people pushing for DoH rather than DoT?

Well, DNS is a request-response protocol which maps quite well to the HTTP request-response cycle (unlike SMTP or FTP).

But the real reason is because it makes DoH almost impossible to block. Your site's DoH traffic is mixed in with your HTTPS traffic and it's very difficult to allow one but not the other. That makes it a real pain for network operators, who may use DNS query logs to identify virus-infected machines (calling home to C&C centres), or to filter out "undesirable" content such as porn.

It's a question of whose rights prevail. Consider a university campus network. Does the network operator (who pays for the network) have the right to enforce an AUP, which says you can't use university resources for browsing porn? Or is this trumped by the rights of the student to use the Internet for whatever they like?

This has national policy implications too. In the UK, large ISPs are required to provide "family-friendly" filters, and this is generally done by DNS filtering. If the mainstream browsers switch to DoH, those filters will be completely bypassed. The ISPs can switch to blocking by IP, but there will be much collateral damage as one IP address can host thousands of websites - and if the undesirable site is hosted on a CDN like Akamai or CloudFlare, this sort of filtering may be impossible.

(Today you can also filter on TLS SNI, but SNI encryption is also on the near horizon)

GitHub lost a network link for 43 seconds, went TITSUP for a day

Crypto Monad

Re: re: Why did GitHub take a day to resync

What you also need is a mechanism which *guarantees* that there is no split-brain scenario: a provably-correct consensus protocol like Paxos or Raft. You want writes to be committed everywhere or not at all.

Some databases like CockroachDB integrate this at a very low level; whether it is fast enough for Github's use case is another question.

If you haven't already patched your MikroTik router for vulns, then if you could go do that, that would be greeeeaat

Crypto Monad

Re: Would anyone...

who regularly reads here, admit to owning a MicroTik router?

Yes!

The Mikrotik CCR1036-8G-2S+ is a rackmount box with 8 x 1G and 2 x 10G ports, and costs under £1K, with no charge for software upgrades or for turning on features.

A Cisco 4431 will cost you upwards of £5K once you've paid for the "performance licence" to unlock it from 500M to 1G. Plus you pay software maintenance every year on top of that.

If you want 10G ports in a Cisco you're talking at least an ASR1001-X at £12K+ (and that is locked to 2.5Gbps until you pay more)

There are a few foibles in RouterOS, but equally there are some very nice aspects to it as well. Cisco are just having a laugh with their 1990's pricing.

It's a cert: Hundreds of big sites still unprepared for starring role in that Chrome 70's show

Crypto Monad

""My guess for why organisations haven't replaced these certificates at this late stage only comes back to them not knowing the change is coming"

More likely it's that they don't even know what certificates they've deployed and where.

If you're very lucky, somebody might have a calendar entry for when they expire.
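
If you want to check what a given site is actually serving right now - who issued it and when it expires - something like this does the job (hypothetical hostname, stock OpenSSL):

$ echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -issuer -dates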

A web where the user has complete control of their data? Sounds Solid, Tim Berners-Lee

Crypto Monad

Would have been helpful...

...to link to any details about what Solid actually is or how it works.

Here you go:

https://solid.inrupt.com/

https://github.com/solid/solid-spec

There doesn't seem to be a huge amount to it: basically it's a web server with a complicated ACL mechanism. The social parts like "friends" and "followers" are not done yet.

How an over-zealous yank took down the trading floor of a US bank

Crypto Monad

Re: Unplugging the keyboard = kernel panic ?

Because when confronted with a message on a screen, people's understanding becomes astonishingly literal.

Like people who phone the helpdesk saying that they can't find the "Any" key.

https://www.theregister.co.uk/2003/09/25/compaq_faq_explains_the_any/

Crypto Monad

Re: Unplugging the keyboard = kernel panic ?

Almost as good as the infamous IBM PC boot error:

"Keyboard not found. Press F1 to continue"

You'll never guess what you can do once you steal a laptop, reflash the BIOS, and reboot it

Crypto Monad

Re: Physical Access

"But encryption keys aren’t stored in the RAM when a machine hibernates or shuts down. So there’s no valuable info for an attacker to steal."

Maybe not - but if they can reflash the firmware, they can put in a keylogger or whatever trojan nonsense they want.

The missing laptop is "found", "handed in" to the hotel, returned to its owner, gets used again, and is p0wned forever more. This is the well-known Evil Maid attack.

Official: Google Chrome 69 kills off the World Wide Web (in URLs)

Crypto Monad

Re: Are they keeping HTTP(s)?

> I'm sure http://www.<mydomain> and http://<mydomain> are technically different - I recall cases where one would work and not the other though please someone feel free to explain why?

1. In the DNS, "www.example.com" and "example.com" are two different names. They can point to two different IP addresses - that is, the user would end up connecting to two completely different servers. Or: one name might have an IP address and the other does not, in which case trying to use the other name would give a DNS error.

2. Even if both names point to the same IP address, the web browser sends a "Host" header containing the hostname part of the URL. The web server may respond with different content depending on which host was requested. It might not be configured with one of the names and return a page not found error instead.

(A fairly common example where you want different content is when "www.example.com" is the real site, and "example.com" just returns a redirect to the real site)

3. For HTTPS sites, the certificate might have been issued to "www.example.com" only. This would mean that a request to https://example.com/ would be flagged as insecure, because the certificate name doesn't match.

You can have a certificate which contains two subjectAlternativeNames - or you can have two different certificates and use Server Name Indication to select which one to use. But not everyone remembers to do this.
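
If you want to check a particular domain against points (1) to (3) above, something along these lines works (hypothetical domain, standard dig and openssl):

$ dig +short example.com A
$ dig +short www.example.com A
$ echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'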

Crypto Monad

Good news for the owner of www.com!

$ whois www.com

Domain Name: WWW.COM

Registry Domain ID: 4308955_DOMAIN_COM-VRSN

Registrar WHOIS Server: whois.uniregistrar.com

Registrar URL: http://www.uniregistrar.com

Updated Date: 2014-09-23T18:24:31Z

Creation Date: 1998-11-02T05:00:00Z

Registry Expiry Date: 2024-09-20T04:16:04Z

Registrar: Uniregistrar Corp

Now they can create subdomain "paypal.www.com", add a LetsEncrypt certificate, and have it display as "paypal.com" with the green Secure flag.

Strewth! Aussie ISP gets eye-watering IPv4 bill, shifts to IPv6 addresses

Crypto Monad

Re: Has anyone truly made the switch?

> I suspect that the average CIO confronted with the spectre of finding money to replace/reconfigure every router and switch in their network, and reconfigure every computer in the building(s), and probably do something cute and costly with some expensive custom gear -- all without shutting down operations for more than a holiday weekend

If it were possible to *switch* from IPv4 to IPv6, this would be perfectly feasible. You'd run dual-stack for a week or a month or however long you needed, and be left with a pure IPv6 network at the end, job done. Dual stack, in fact, would be an excellent tool for this sort of transition.

But that's not feasible, because you'd disconnect yourself from the IPv4 Internet. You still need *some* IPv4: including for inbound connections such as VPN (I've never stayed in a hotel which provides IPv6)

So you have three choices:

1. Run IPv4 and IPv6 dual stack across your whole network indefinitely. This gives you double the number of firewall rules, and hard-to-debug problems when a particular device becomes reachable over v4 but not v6, or vice versa. Increased on-going expense and pain, for no business benefit.

2. Migrate to IPv6 and use NAT64/DNS64 - in other words, IPv6 replaces your RFC1918 private IPv4 addresses. Some places are experimenting with this approach, even Microsoft themselves. But you will still have islands of dual-stack required, and lots of pain with legacy devices, in particular legacy applications which can only listen on an IPv4 address. You end up doing nasty things like NAT464. Again, little obvious business benefit to demonstrate.

3. Stay on IPv4 just as you are today, which works as it always did, and avoid all the pain.

Guess which option almost everyone chooses.

What I'd like to see is that at least for "green field" networks, they could be built single-stack IPv6. This doesn't work today unless you're happy to build your own NAT64 infrastructure (*). And even if you do, your NAT64 still needs an IPv4 address from your ISP, so you may as well just do NAT44 instead.

(*) A few ISPs today do provide NAT64/DNS64 for those who want to try it (e.g. AAISP).

Crypto Monad

Re: Has anyone truly made the switch?

> my pick is somewhere between 18 months to 2 years for IPv6 to move from 40% to 90% of connectivity and traffic

"traffic" and "connectivity" are two very different things. Anecdotally, a dual-stack network already gets about half its traffic over IPv6 - because much of the traffic volume comes from a handful of huge content providers like Google (YouTube) and Facebook. But in terms of the proportion of sites reachable over IPv6, it's still tiny.

As for migration, the low-hanging fruit has been picked already - things like mobile networks (heavily CG-NAT already) and university networks (where they have the time to play with IPv6), and it will only get slower now. Some university networks have even turned it off, as the ongoing costs of running two networks in parallel become apparent.

The solution I've proposed for a long time is for the big CDNs - e.g. Cloudflare, Akamai, Google - to offer a public NAT64 service. Then it would be possible to build a single-stack IPv6 network at the edge and still access the vast majority of the Internet.

Crypto Monad

Re: Has anyone truly made the switch?

You are right. Only a tiny, tiny fraction of the Internet is reachable via IPv6. Turning off IPv4 would be equivalent to disconnecting yourself from the Internet.

So it's not an either/or choice. You still need IPv4 addresses to talk to the vast majority of the Internet.

What this provider is doing is using CG-NAT to make multiple users share the same IPv4 address. Separately from that, they will run IPv6 alongside; then at least traffic to Google/YouTube and Facebook will bypass the CG-NAT, for those customer-side devices which support IPv6.

The other option is to do NAT64, but that's messy. You have to spoof DNS responses with DNS64; it doesn't play well with DNSSEC. And you are still doing NAT, and you are still sharing IPv4 addresses. On top of that, the NAT64 solution forces *all* devices at the customer site to be IPv6-capable; if you've got an old IoT device or games console which doesn't do IPv6, then it's completely useless.
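
You can see DNS64 synthesis in action against a public DNS64 resolver - this example assumes Google's dns64.dns.google, and uses ipv4only.arpa (RFC 7050), a name which deliberately has only A records, so any AAAA you get back has been synthesised into the well-known 64:ff9b::/96 NAT64 prefix:

$ dig +short AAAA ipv4only.arpa @dns64.dns.google   # needs IPv6 connectivity to reach the resolver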

So basically the title of this article should be "Strewth! Aussie Broadband gets IPv4 bill, decides to do IPv4 address sharing"

Google cracks down on dodgy tech support ads

Crypto Monad

How many ads?

Google said that last year it took down more than 3.2 billion ads that violated its advertising policies.

I seriously doubt this is 3.2 billion distinct ads.

32 ads, each of which had already been served 100 million times before being taken down? More likely. But in that case, the damage has already been done.

London's Gatwick Airport flies back to the future as screens fail

Crypto Monad

Nobody has yet asked the obvious:

Does Gatwick have an online departures board? You know, the sort of thing that people could access with those mobile screens that they carry around with them?

And was it affected by this outage, or not?

Drama as boffins claim to reach the Holy Grail of superconductivity

Crypto Monad

> Interesting how the immediate response without seeing any supporting evidence at all was 'this is clearly bullshit'.

Not exactly. The immediate response upon seeing that the supporting evidence is obviously faked is "this is clearly bullshit".

Australia's Snooper's Charter: Experts react, and it ain't pretty

Crypto Monad

Re: Still Puzzled!

> How does this legislation, with or without backdoors, help the so called "good guys" gain access to their messages?

AFAICS, it doesn't. It would apply only if a "service provider" were helping them to keep their messages secret: such as the vendor of the equipment they were using, or some managed encryption service they were using.

If they build their own devices and write their own software, then it seems they are not affected.

However, if they provide these devices and software to others, then they become service providers and so may be required to add law enforcement back doors (even though they're not called "back doors")

The assumptions seem to be:

1. Most people are lazy and/or don't have the skills to build this stuff themselves

2. There won't be a black market in genuinely secure devices for use by criminals

(1) is a reasonable assumption, (2) rather less so.

If manufacturers or distributors of secure devices refuse to comply with back door requirements, I guess they will be in violation of the law. But what does that do for open-source crypto apps? Does github need to be blocked?

Crypto Monad

Two options

It seems to me there are only two options to give law enforcement access to cleartext messages:

1. Find and exploit unintended vulnerabilities in devices and/or algorithms

2. Get manufacturers to add specific mechanisms to allow law enforcement access

If 2 isn't adding a "backdoor", I don't know what is.

Put WhatsApp, Slack, admin privileges in a blender and what do you get? Wickr

Crypto Monad

Obligatory XKCDs

Two which are particularly relevant:

https://xkcd.com/927/

https://xkcd.com/1810/

Things that make you go hmmm: Do crypto key servers violate GDPR?

Crypto Monad

Re: Removal breaks replication

> As another poster has said, part of the point of PGP is non-repudiation

The non-repudiation aspect of PGP does not depend on the existence of keyservers.

The signature itself is inherently bound to a public key.

In order to verify the signature, you need a trusted copy of the correct public key. Some random public key you find on a keyserver is not worthy of any trust, unless it has been signed by someone you in turn trust as an introducer.

The user ID on the key is worthless as assurance. It's something which *may* look like an E-mail address (it does not have to), but has not in any way been confirmed to correspond to that E-mail address.

Therefore: if it is important to you whether a specific signature is valid or not, then it's up to you to possess the correct public key which corresponds with the private signing key.

Unless it's signed by someone you trust, you should go to some trouble to ensure that the public key is the right one (e.g. by having a phonecall and exchanging fingerprints). Then you can sign the key yourself to remind yourself in future that you verified it.
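
As a concrete sketch (hypothetical key file and address), the whole dance with stock GnuPG looks something like this:

$ gpg --import alice.asc                   # key obtained directly from Alice
$ gpg --fingerprint alice@example.com      # read the fingerprint back to her over the phone
$ gpg --lsign-key alice@example.com        # record locally that you verified it
$ gpg --verify report.pdf.sig report.pdf   # the signature now checks against a key you trust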

Crypto Monad

> consent requires a "clear affirmative action" by the data subject

Uploading your key to a keyserver and requesting it to be published is pretty clear affirmative action.

The problem is when someone else uploads your key without your permission - or worse, a different key which claims to be yours.

That is why I don't use keyservers: anyone can upload any random key with any random label. There's no assurance it's the right one, unless (a) I got the fingerprint from a trusted source (in which case I could have got the key from that source too); or (b) the key happens to be signed by someone in my web-of-trust, which is pretty small.

Therefore, in general I get keys directly from whoever I'm corresponding with: it's much easier to make a value judgement over whether it's the right key or not.

Back to GDPR: there is an assumption baked into PGP that public keys are, well, public. Simple answer: get rid of keyservers. These days you can publish OpenPGP keys securely in DNS/DANE instead.
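
For example, recent GnuPG (2.1.x onwards, if I remember rightly) can emit the RFC 7929 OPENPGPKEY records ready to paste into your own zone file - hypothetical address shown:

$ gpg --export-options export-dane --export alice@example.com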

One other thing: can anyone give me a good reason why a keyserver should *not* remove a key on request, if the request is signed by that key?

Pass gets a fail: Simple Password Store suffers GnuPG spoofing bug

Crypto Monad

Re: The real problem is...

Agreed. Exit status would be reasonable for this sort of thing, but as you've found they don't bother to set it in important situations, nor even document the exit codes in the manpage.

Have a look at gpgme for the API, although it's probably not as high-level as you'd like.

Crypto Monad

The real problem is...

... parsing unstructured output from the stdout of a command-line tool is not what you could call a robust API.

If the tool had a mode to output JSON or XML, that might be better - as long as you parse it with a correspondingly robust library. But here we're talking about a shell script parsing the output of some other command, which is a recipe for security disaster.

In my experience, most shell scripts I come across are littered with errors waiting to explode. The most common is the unquoted variable expansion:

rm $filename

instead of

rm -- "$filename"

The former doesn't work if $filename contains a space. But it could also do very nasty things if the filename is, say, "-rf --no-preserve-root /"
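
A quick demonstration in an empty scratch directory (file names made up):

$ touch -- 'important file' '-rf'
$ filename='important file'
$ rm $filename          # word-splits into "important" and "file" - neither exists, so rm just complains
$ rm -- "$filename"     # removes exactly the file you meant; "--" also stops a name like "-rf" being parsed as options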

Stern Vint Cerf blasts techies for lackluster worldwide IPv6 adoption

Crypto Monad

Re: Analogy Units

"if all IPv4 addresses were contained inside a smartphone, IPv6 would fill a container the size of the Earth"

Sadly, this is nonsense. Because of the stupid and wasteful way that IPv6 addressing works, each LAN needs a /64 prefix (burning 2^64 addresses for typically a few dozen devices). And because it can't be subnetted on a longer prefix boundary, each subscriber who might need two or more subnets needs a larger allocation than that.

What it means is that in practice, an IPv6 /56 prefix is the same as a single IPv4 address with NAT - i.e. the unit that an ISP must give out to a "small" subscriber. Since the first three bits are fixed, in practice there are 2^53 usable /56 allocations. This is 2 million times (2^(53-32) = 2^21) more than IPv4; still a lot, but not mind-bogglingly vast.

The original plans assumed giving a /48 prefix to each subscriber. This would have meant that the IPv6 address space was only 2^13 times more than IPv4. IPv6 address depletion panic set in even before there were any users.

A few years ago, a single ISP - France Telecom - managed to get assigned a /19 of IPv6 address space. Remembering that the top 3 bits are fixed, this means they own 1/65,536 of the entire IPv6 unicast address space. And there are more than 65,536 autonomous systems making up the Internet today.

Clearly not everybody can justify a /19, but every member of RIPE gets a minimum of a /32, and can get a /29 on request with no questions asked.
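
The arithmetic above, spelled out (assuming one /56 per subscriber and the fixed 2000::/3 global-unicast prefix):

$ echo $(( 2 ** (56 - 3) ))    # usable /56s: 2^53 = 9007199254740992
$ echo $(( 2 ** (53 - 32) ))   # ratio to IPv4's 2^32 addresses: 2^21 = 2097152, about 2 million
$ echo $(( 2 ** (19 - 3) ))    # a /19 is 1/65536 of the whole 2000::/3 space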

IPv6 growth is slowing and no one knows why. Let's see if El Reg can address what's going on

Crypto Monad

Re: Privacy issues with IPv6?

> SLAAC will never exist on anything I control if I have any say in the matter

Then you're not using any Android devices, which still don't support DHCPv6

I agree that SLAAC sucks, but there is one valid use case: on home networks with dynamic addressing. If the line drops and reconnects, you need to renumber your devices very quickly, which would mean extremely short lease times if using DHCPv6.

Now you say, surely IPv6 has enough address space to give everyone a static allocation? It certainly does, but dynamic addressing is not due to shortage of address space: it's due to route aggregation.

When a user disconnects and reconnects, their session may terminate on a different BRAS. To avoid route flaps in the ISP's core network, the subscriber gets an address out of a larger pool which is routed to each BRAS. So if you change your BRAS, you must get a different address.

For business customers who require static addresses, this usually involves L2TP-tunnelling the session to another BRAS. This makes providing static IP services more expensive (additional equipment).

Crypto Monad

Re: Simples

> How, exactly? Invent some more numbers?

Options might include:

* An IPv4 address extension header. When a client talks to a server which doesn't support the address extension option, it falls back to stateful PAT.

That's the sort of approach which should have been taken in the first place.

If instead we want to complete the IPv6 transition:

* A comprehensive, global NAT64 infrastructure is put into place. It could be hosted by the existing CDNs (e.g. Akamai, Cloudflare, Google), and would treat the whole IPv4 Internet as a pool of content to be served to IPv6-only clients. It would be run as a public service, like public DNS resolvers.

Access providers could then start providing IPv6-only connections, releasing the chokepoint of IPv4 supply at client side.

As usage ramps up, content providers have an incentive to make their content available via IPv6: (a) to get better logs, (b) to serve content faster to this increasing pool of IPv6-only users.

Crypto Monad

Re: Simples

"Does it really matter if we live in a 25 percent IPv6 / 75 per cent IPv4 world?"

Maybe not, but there is another possibility: we could have reached peak IPv6. The proportion of IPv6 could start to decline if IPv4-only networks grow faster than those with IPv4+IPv6. At that point, the people who held back from IPv6 deployment will smile smugly as they say "I told you so", and the decline becomes self-reinforcing.

This is not great for society - scarcity of IPv4 addresses entrenches the market power of the existing big players. Which, erm, makes it rather likely that the big players would like it to play out this way after all.

Now, I imagine Google + Facebook between them have enough clout to ensure that IPv6 remains in some form rather than vanishing entirely, and indeed they probably quite like running their own private Internet, but there is a risk it could become increasingly irrelevant.

At that point it's back to the drawing board. IETF: please make a way to *extend* the address space of the Internet incrementally, not replace the Internet with a new one running alongside it.

If not, then sooner or later someone is going to propose a 64-bit port number option for TCP and UDP. When that happens, NAT is entrenched forever.

Git push origin undo-my-last-disaster

Crypto Monad

> There was a system do that using CVS for Cisco and other network gear as far back as 1999.

Rancid - the HP bug should now be fixed. Or there's Oxidized.

But this isn't really "gitops". It's just sucking down the configs: if you make a screw-up, it's up to you to upload or apply the right config changes to bring it back into sync. Nobody likes rebooting routers and switches.
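
A minimal sketch of the "suck down the configs" part (hypothetical hostname and paths - RANCID and Oxidized do this far more robustly, with diffing and device-specific scrubbing):

$ ssh backup@router1.example.net 'show running-config' > configs/router1.cfg
$ cd configs && git add router1.cfg && git commit -m 'router1: nightly config snapshot'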

Project Lightning, you say? Virgin Media's fibre rollout is pretty glacial

Crypto Monad

Re: Same here, maybe cancelling

> IPv6... yes, that's annoying. I grant you that one. But there's no change there, almost no ISP in the UK offers it

Erm, apart from the minnows Sky and BT?

If you want a *static* IPv6 block, yes that's more difficult. You'll likely need a business service. Check if Zen do this.

We just wanna torque: Spinning transfer boffins say torque memory near

Crypto Monad

Re: Intriguing....

A single, unified, freely-rewritable storage space might not be such a good idea. For instance, what would happen when something went wrong with data in the OS working memory? At the moment, you can just reboot because it's the RAM contents that's wrong, not the disk. With a unified storage space, you'd have to reinstall the OS. Either that, or you'd need twice as much memory so that you can always have a "last known good" image.

Old minicomputers with core store were exactly like this. You could turn them off, turn them on, and they would continue running.

If the OS got corrupted, then you would toggle in a small bootstrap loader in binary via the front panel switches, which in turn would reload the OS from paper tape.

I am just old enough to have done this :-)

Apple unleashes FoundationDB as an open source project

Crypto Monad

Re: Can someone explain in simple terms what this is and why I'd want to use it?

If they had open sourced this immediately, it would have stood a chance.

As it is, they've given CockroachDB a 3 year head start (as well as commercial offerings like FaunaDB).

Aside: to build the thing you need both Mono *and* Java. That combination should put off 90% of open sourcers at a stroke :-)

OK, this time it's for real: The last available IPv4 address block has gone

Crypto Monad

Re: @boxplayer - "Nobody uses it..."

> it will getting more expensive as time passses by

Citation Required™.

Running a dual stack network costs more than running a single stack network. If there were a defined end-point for running dual stack, then the cost could be quantified and limited. But there is no defined end-point, so the cost is unbounded.

Dual stack is not a "transition technology" in the true sense, which is to make a change to your own network. In other words: "I want to migrate my network from IPv4 to IPv6. So I will run IPv4 and IPv6 in parallel for (say) a month while I do the changes, and at the end I'll have an IPv6-only network". Dual stack would work very nicely for that.

Unfortunately, ending up with an IPv6-only network is a non-starter. You would be disconnecting yourself from the vast majority of the Internet. There is no clear benefit to starting the transition if you cannot finish the transition; but there is a very clear cost.

Of course, one day, it may be possible to actually transition a network from IPv4 to IPv6. When that time comes, those who waited will find a wider choice of less buggy IPv6 implementations to work with.

Despite what some people have said here, people aren't stupid when it comes to their own money. They *will* happily invest their own money if it will save them money. But that's not something IPv6 offers today. Just repeatedly saying that it will doesn't make it true.

Crypto Monad

Re: I've been trying to get this happening

> Won't happen, don't stress over it. When the first site that's gotta be visible for some application is available on IPv6 only, then you'll get what you need to go IPv6 :)

And that ain't ever going to happen, not in my lifetime anyway.

No *business* is going to put their content on IPv6 only and have it visible to only a fraction of the world, when for a few dollars more it can be visible to the whole world. Perhaps once 99% of the users have IPv6 access then IPv6-only sites will start to appear.

There is no IPv4 shortage at the *content provider* side of things. You can share IPv4 addresses via CDNs, reverse proxies, load balancers, HTTP virtual hosts, SNI etc; this has been going on for years.

Even if a content-provider business *does* need their own IPv4 address for a service, and suppose it cost $10,000, they would still pay it just to make their service usable to everyone; they often pay more just for a cool domain name.

Things are different at the access side (i.e. users / customers). There, the shortage of IPv4 addresses is acute (at least in some regions). But unfortunately, deploying IPv6 does nothing to reduce the shortage, because it doesn't remove the need for IPv4 source addresses to access most of the Internet. If you don't have enough IPv4 addresses to give each customer one, then you are forced to use some sort of NAT, whether it be NAT44 or NAT64.

I see one solution: connecting the IPv6 and IPv4 Internets with a giant NAT64. This could be done by the existing content providers (e.g. Akamai, Cloudflare, Google): each of them could treat the whole IPv4 Internet as a big pool of user content and NAT64 to it as a public service. Then an end-user could have an IPv6-only connection, but still reach the whole IPv4 Internet (at least over TCP and UDP).

