One IP address, multiple SSL sites? Beating the great IPv4 squeeze

We're fresh out of IPv4 addresses. Getting hold of a subnet from your average ISP for hosting purposes is increasingly difficult and expensive; even the public cloud providers are getting stingy. While we wait for IPv6 to become usable, there are ways to stretch out the IPv4 space. There are several big problems with IPv6 that …

  1. A Non e-mouse Silver badge

    El Reg is a great place for IT news and opinion. I'm not sure it's the place to have a hand-holding tutorial on how to configure something.

    1. Missing Semicolon Silver badge
      Thumb Up

      Don't care (@ A Non e-mouse)

      What you say may be true, but this is Gold. I shall save this page in my "stuff I must know" folder, for when I need to implement a reverse proxy.

      I can see this becoming relevant at both home and work quite soon.

      1. Anonymous Coward
        Thumb Up

        Re: Don't care (@ A Non e-mouse)

        Yep. As soon as I saw "Do a minimal CentOS 7 install, disable SELinux, and follow the basic steps outlined here", I was saving the page and bookmarking the page. I can already see my future doing the arcane here, and arcane it is.

        1. Anonymous Coward
          Joke

          Re: Don't care (@ A Non e-mouse)

          @Jack of Shadows: "Yep. As soon as I saw "Do a minimal CentOS 7 install, disable SELinux, and follow the basic steps outlined here", I was saving the page and bookmarking the page. I can already see my future doing the arcane here, and arcane it is."

          Yea, what we need is more articles about DevOps and Continuous Deployment :)

        2. e^iπ+1=0

          Re: Don't care (@ A Non e-mouse)

          'disable SELinux'

I somewhat sympathise with this suggestion, but ...

        3. Vic

          Re: Don't care (@ A Non e-mouse)

          Do a minimal CentOS 7 install, disable SELinux

SELinux can be a bit daunting, but it is exceptionally useful. Disabling it is usually a mistake.

          Perhaps I should write an article for ElReg...

          Vic.

          1. Anonymous Coward
            Anonymous Coward

            Re: Don't care (@ A Non e-mouse)

            Agreed, many shy away from SELinux, but once you get into it, it's not that hard. I lose a lot of respect for anything that starts "Disable SELinux" - it usually means the author doesn't know SELinux and just wants it out of the way for the purposes of their guide, which likely isn't best security practice for whatever it is they're guiding you through setting up.

            Spend the time to learn SELinux and bake the config into your guide. At most here I reckon you'd need to set the context on the caching data directories and perhaps allow nginx to listen on unusual ports.
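
            For instance, assuming a hypothetical cache directory of /var/cache/nginx-proxy and a non-standard listen port of 8443 (both purely illustrative), the SELinux side of such a guide might look something like this:

            ```shell
            # Label the cache directory so nginx (running as httpd_t) may write to it
            semanage fcontext -a -t httpd_cache_t "/var/cache/nginx-proxy(/.*)?"
            restorecon -Rv /var/cache/nginx-proxy

            # Permit listening on a non-standard HTTPS port
            semanage port -a -t http_port_t -p tcp 8443

            # Permit nginx to open outbound connections to the backend servers
            setsebool -P httpd_can_network_connect on
            ```

            Hardly arcane once written down - and far better than setenforce 0.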

    2. Anonymous Coward
      Anonymous Coward

      > El Reg is a great place for IT news and opinion. I'm not sure it's the place to have a hand-holding tutorial on how to configure something.

      It was like reading a hobbyist computer magazine from the 1980s - interesting article, turn the page and suddenly reams of Basic to type in. :-)

      1. Anonymous Coward
        Anonymous Coward

        The only time in history when I enjoyed adverts as much as the articles.

        Full page ads for a 1Mb ram upgrade, forever out of reach of my pocket money.

      2. Anonymous Coward
        Anonymous Coward

        only now

        you can cut-n-paste rather than all that typing and wondering if the new line is a CR or just the end of the column width, or if the missing bracket on line 1652 was a print error or your own typo...

        them were the days...

    3. Anonymous Coward
      Anonymous Coward

      Ummm, "Sysadmin blog", ummm....

  2. Anonymous Coward
    Anonymous Coward

    We'd have plenty of IPs

    If there weren't massive swathes of unused blocks assigned to universities and the military.

    Universities have enormous blocks of IPs most of which are likely to be unused.

    Also thanks for the nginx tut, but we're all familiar already thanks.

    Anyone here that read that and learnt something new, I'm deeply concerned for you and the firm you work at.

    1. Andy Tunnah

      Re: We'd have plenty of IPs

      Not everyone who reads this site works in tech, not everyone who works in tech works in networking.

      Don't be a snob. Or without the S

    2. Kiwi
      FAIL

      Re: We'd have plenty of IPs

      Anyone here that read that and learnt something new, I'm deeply concerned for you and the firm you work at.

      So you know every command for every piece of software written for every OS? No? I feel concerned for whatever firm is sad enough to employ you, then.

      I know it surprises you, you being such a leet ex-spurt and all, but there is other software out there that does the same job as nginx, and other non-Red Hat OSes as well. Afraid I've only run stuff from the MS, Apple and Debian branches of the OS world, with some dabbling in a couple of the BSDs (and a Devuan VM I haven't got round to doing much with). Have never run anything from RH and have never run nginx, so had plenty to learn from this. Much grats to the author.

      1. Anonymous Coward
        Anonymous Coward

        Re: We'd have plenty of IPs

        No, I don't know every command (who does?) but you can be damned certain that for any tech I use in production I have more than the basic knowledge in this article.

        I support one of the biggest websites in the UK, pretty much single-handedly (it's me and a dev), as well as countless other sites where I'm in a similar situation.

        It's easy to say you don't have the time or luxury to properly learn something, but it's entirely different when you don't have the luxury of trawling Google for solutions.

        It's not snobby to expect your peers to work to a good standard. I personally build things with my peers in mind. I won't be at my clients forever; someone at some point will have to take over, so I have to assume a certain baseline of knowledge and experience. I like to assume that anyone taking over from me is likely to be better and more knowledgeable than I am, rather than crapper and dumber than I am.

        If we as an industry constantly assume that our proteges are less knowledgeable, and document / build with this in mind, we will drive the quality of our successors down, as they won't have to be as smart or knowledgeable.

        I talk to all engineers as engineers; I don't talk to engineers as if whatever they're being handed is the first work they've ever done.

        Think about how absurd it would be if other types of professionals overused Google.

        A pro golfer googling which club to use for his next shot.

        A builder googling how to build a wall.

        A high end chef googling recipes.

        A pilot googling how to get the landing gear down.

        ...a network tech googling how to set up a simple reverse proxy.

        1. Kiwi
          FAIL

          Re: We'd have plenty of IPs

          No, I don't know every command (who does?) but you can be damned certain that for any tech I use in production I have more than the basic knowledge in this article.

          I call bullshit. So you know every bit of web server software out there? Every trick with networking? Bullshit.

          And that's what those of us thanking the writer are speaking about. Something that may be useful later on when we look to try something new.

          I support one of the biggest websites in the UK, pretty much single-handedly (it's me and a dev), as well as countless other sites where I'm in a similar situation.

          That all? I support the entirety of all websites and the entire internet for the whole Andromeda galaxy! As well as several smaller galaxies. (Oh, and something stinks about your statement... Aside from the AC handle...)

          It's not snobby to expect your peers to work to a good standard. I personally build things with my peers in mind. I won't be at my clients forever; someone at some point will have to take over, so I have to assume a certain baseline of knowledge and experience. I like to assume that anyone taking over from me is likely to be better and more knowledgeable than I am, rather than crapper and dumber than I am.

          And yet you put so many people down who are probably far more knowledgeable than you are.

          But if you really do have any more clients than mum's home network, I pity them. I've met arrogance like yours in the workplace, and it often means that while you're spouting off about how great you are and how you support such big clients etc etc, your network really is a badly fucked-up mess and the clients would be better off hiring the CEO's 3rd cousin's former roommate's 3yo niece - much more likely to do a decent job.

          If we as an industry constantly assume that our proteges are less knowledgeable, and document / build with this in mind, we will drive the quality of our successors down, as they won't have to be as smart or knowledgeable.

          If you really worked in the industry you'd have long ago realised that there is so much software out there, so many different ways of doing things that are all very different yet all just as right as each other. It's impossible for any team of people to have in-depth knowledge of even 1/10th of what is out there in use today. I've used software for jobs that you probably don't even know exists; likewise, if you're really in the industry, there's stuff you consider run-of-the-mill and use in your day-to-day work that I've yet to come across.

          Professionals should know where to go to find out the answers they need when they need them. Those who think you can know it all should be avoided. There is absolutely nothing wrong with going to a search engine (Google or otherwise) to find a solution to a problem. Knowing how to get the best out of the results counts; putting people down for looking up a tut or how-to simply marks you as a class-1 wanker who spends way too much time alone in mum's basement.

          I talk to all engineers as engineers, I dont talk to engineers as if whatever they're being handed is the first work they've ever done.

          That's fine. But bear in mind that they may've been too busy with x to have yet looked at y.

          Think about how absurd it would be if other types of professionals overused Google.

          "Use" and "over-use" are very different things. If you were so great as you claim, you'd know that.

          A pro golfer googling which club to use for his next shot.

          Er, they do. OK, not while in a game, but some use such tools to start familiarising themselves with courses they're expecting to play at in the near future. Their caddy, however, is someone who is supposed to have extensive knowledge of the course and who advises them on which club to use next. So instead of Google they have someone there with them all the time to tell them what to do next.

          A builder googling how to build a wall.

          Lots do. Admittedly it's often materials research or looking at new ideas in construction. And sometimes they have an unusual case they want to look at other solutions for, or they want to (re)check building codes for something. Did you know that there are lots of new ideas in construction and materials every year? No, of course not - you're such an expert in everything!

          A high end chef googling recipes.

          I wouldn't be surprised, if they're making something they've never made before. Why would that be an issue?

          A pilot googling how to get the landing gear down.

          They don't use Google. They use flight sims and extensive training. I guess in one of those movie-style emergencies where something knocks out the entire flight crew and the only help is a Cessna pilot or something... Oh, did you know there's a shitload of difference between single-engined, twin-engined, prop vs jet, light planes etc? Did you know that the cockpits of different brands and models are quite different? A person who can fly an A300 would struggle with a 747. Oh, and what about all the checklists that they go through, the ones that tell them what they need to do for each plane at each airport? Yes, landing a 747 at Wellington is different to landing at Sydney, and without the notes that a pilot normally uses their chances of a successful landing are reduced. Google? No. Detailed instruction sheets for each stage of flight in each craft and for each airport? You betcha.

          ...a network tech googling how to set up a simple reverse proxy.

          So you set up secure reverse proxies every day? I doubt it. If you do, then you can't be any good at your job - obviously what you build doesn't stay up very long and needs rebuilding so often you can learn every bit of it by rote. Only it's no good, because it doesn't stay running and you have to do it again. Maybe you should re-learn it?

          When I first set up Apache to handle multiple sites I looked on Google (or whatever other search engine was around at the time) for a decent tutorial. Having found one, I set it up. Next time I had to add a site I could simply copy things over from the first site.

          Before setting up Apache (or any web server) for the very first time I had built a fairly extensive VPN for the firm I worked for (to allow other branches access to the main databases and other software), and had built a number of other nets, including a wired version of a mesh network for the neighbourhood I lived in (at a time when internet connections were not a household thing, but you might have 3 or 4 homes in a block with one), which was done as a prototype and test case for something that didn't happen due to the advent of much cheaper internet. IOW, before I first set up a web server I had done a fair bit of network stuff. There's a first time for everything.

          I've not yet set up a reverse proxy with Nginx, never ever looked at it. If I ever decide to I will either remember that this article is there, or I will turn to DDG or Google or some other search engine to familiarise myself with what is involved, and then decide if it is the right tool for the job, worth doing, and within my abilities. Like any real professional would.

          Now, toddle off to your dreamland where you're not really living in mum's basement but you run the entire internet!!!!

          (Speaking of network experts - El Reg can you please get the captcha sorted out so when we have to use it we don't lose the entire content of the post while jumping through the hoops needed to post from an IP we've only been using for 20 seconds? Some of us still have dynamic IP's that change often. Especially annoying when it's someone who is already logged into El Reg!)

          1. OGquaker

            Re: We'd have plenty of options

            So: my father, who patented R.A.M. and computerized image analysis (US2933008) in the 1950s; my brother, a Ham at 17 with 15 years as chief metrologist in aerospace; my nephew, now with 10 patents at Microsoft; and I, with a year (1977) rebuilding R2D2 in my garage - we spent an hour trying to recharge and/or jump-start the kid's '56 Buick. Everyone knew the others were getting it all wrong.......

    3. Evil Auntie

      Re: We'd have plenty of IPs

      Most of those Class A allocations were converted to Class B or C years ago when the first IPv4 shortfall occurred. NAT is now the standard for internal corporate use, as it is the basis for first-level firewalling. It is pretty common for an international corporation to run a 10.x.y.z network with a different x for each country, a different y for each site and a unique z for each node. VPN is used to tunnel through the Internet.

      1. ravenstar68

        Re: We'd have plenty of IPs

        "NAT is now the standard for internal corporate use as it is the basis for first level firewalling."

        While I don't disagree with the statement, NAT was not designed to be a firewall. It was designed to make the internet last longer.

        The term "NAT Firewall", was I suspect coined by marketers.

  3. Andy Tunnah

    Nice

    This is excellent. I'm a bit behind on stuff like this, and this is both a great article and a brilliant guide.

    Nice work. Would love to see an El Reg guide section, for us lads who simply /pretend/ to know what the shit you lot are talking about most of the time.

  4. Anonymous Coward
    Anonymous Coward

    Doesn't a proxy defeat the purpose?

    This is exactly why I'm not a big fan of the sudden push for HTTPS: it brings back the issue of dedicated IP addresses, which is much less of a problem with plain HTTP and name-based virtual hosting.

    Sure, a reverse proxy can help, but doesn't it also basically create a new weak link? What is to stop attackers from focusing their attention on the proxy so that they can use it as leverage to gain access to the rest of the traffic? It's not as if we haven't been down that path before...

    1. Tom Chiverton 1

      Re: Doesn't a proxy defeat the purpose?

      What dedicated IP? TLS has SNI.
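
      For anyone who hasn't seen it: with SNI, nginx serves multiple certificates from one IP and port, picking the right one from the hostname in the TLS handshake. A minimal sketch, with illustrative hostnames and certificate paths:

      ```nginx
      # Two HTTPS vhosts on the same IP; nginx selects the cert via SNI
      server {
          listen 443 ssl;
          server_name site-one.example.com;
          ssl_certificate     /etc/nginx/ssl/site-one.crt;
          ssl_certificate_key /etc/nginx/ssl/site-one.key;
      }

      server {
          listen 443 ssl;
          server_name site-two.example.com;
          ssl_certificate     /etc/nginx/ssl/site-two.crt;
          ssl_certificate_key /etc/nginx/ssl/site-two.key;
      }
      ```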

    2. Anonymous Coward
      Anonymous Coward

      Re: Doesn't a proxy defeat the purpose?

      It can reduce the exposure of your backend servers so it's not a loss to security or reliability. If you've got redundancies in your backend you can have redundancies in your frontend too. The backend redundancy can be easier in fact, as the reverse proxy can handle monitoring and failover.

      Many commercial load balancers act as reverse proxies and many of those have failover built in. I'm not sure if any support SNI though as I haven't looked at any since IE6 was still supported - IE6 didn't support SNI.

    3. Warm Braw

      Re: Doesn't a proxy defeat the purpose?

      the dedicated IP addresses which is less of a problem with HTTP

      Well, since SNI, there isn't any difference in the number of IP addresses you need for HTTP or HTTPS virtual hosting.

      The encryption/decryption load, though, can be very significant once you swap to HTTPS and using a reverse proxy is probably not a solution if you want to pack a number of heavily-used sites onto a single IP address: dedicated hardware appliances may be a better bet under those circumstances.

      1. DaLo

        Re: Doesn't a proxy defeat the purpose?

        "The encryption/decryption load, though, can be very significant once you swap to HTTPS"

        [Citation needed]

        I'll give you a head start https://www.maxcdn.com/blog/ssl-performance-myth/

        1. Warm Braw

          Re: Doesn't a proxy defeat the purpose?

          [Citation needed]

          Thanks; I'm clearly behind the times...

      2. Yes Me Silver badge

        Re: Doesn't a proxy defeat the purpose?

        Particularly, it's not a solution if you're also load balancing, since you can end up losing session persistence... making the affected user most unhappy.

        1. Anonymous Coward
          Anonymous Coward

          Re: Doesn't a proxy defeat the purpose?

          That's not correct. You either set up a shared session system or you use a load balancer that pins a visitor to a server for the duration of their visit.

          The benefit of a shared session system is that the user won't even notice if you reboot the backend server from underneath them.

        2. chuBb.

          Re: Doesn't a proxy defeat the purpose?

          Nope: either put your reverse proxy in front of the load balancer and have redundant RPs, or share session state between app servers using memcached or Redis etc. Or combine reverse proxy and load balancing into a single role, as nginx is capable of load balancing too.

          My current favoured approach is to distribute session state, meaning I can spin up app servers, add them to the pool and not really care about maintaining affinity between them - i.e. any server can handle any request. Then I use a redundant cluster of nginx images to reverse proxy ports 80 and 443 only to the app pool, making use of the load balancer in nginx. Management of the pool is done via VPN to the management LAN of the cluster, with the only publicly accessible entry points being the ports open on the nginx box.

          It sounds like a complex setup, which is true in terms of initial deployment, but it's 99% less work from an operational point of view: security largely comes down to app design and sensible coding rather than masses of network policy. Any traffic coming in from the net on a port which isn't 80 or 443 just gets logged and sinkholed, while app traffic is easily monitored using off-the-shelf tools, logging and other insight frameworks.

          This approach isn't just for web/http, with a few port swaps a very similar config underpins the voip platform at the day job...
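
          A minimal nginx sketch of that combined reverse-proxy-and-load-balancer role (addresses, hostnames and cert paths here are made up):

          ```nginx
          # Pool of interchangeable app servers; no session affinity needed
          # when session state lives in a shared store (Redis, memcached, ...)
          upstream app_pool {
              server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
              server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
          }

          server {
              listen 443 ssl;
              server_name app.example.com;
              ssl_certificate     /etc/nginx/ssl/app.crt;
              ssl_certificate_key /etc/nginx/ssl/app.key;

              location / {
                  proxy_pass http://app_pool;
                  proxy_set_header Host $host;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              }
          }
          ```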

      3. plugwash

        Re: Doesn't a proxy defeat the purpose?

        It is possible to reverse proxy TLS/SNI without decrypting it. Just grab the hostname from the initial packet the client sends, then proxy at the TCP level.

        Doing that does have the downside that you can't use "x-forwarded-for" but there is an alternative called "proxy protocol" that at least some backend servers support.
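
        A rough sketch of that approach using nginx's stream and ssl_preread modules (hostnames and backend addresses are illustrative, and the backends must be configured to accept the PROXY protocol header):

        ```nginx
        # Route TLS by SNI at the TCP level, without terminating it here
        stream {
            map $ssl_preread_server_name $backend {
                site-one.example.com 10.0.0.11:443;
                site-two.example.com 10.0.0.12:443;
            }

            server {
                listen 443;
                ssl_preread on;        # peek at the ClientHello for the SNI name
                proxy_protocol on;     # pass the real client IP to the backend
                proxy_pass $backend;
            }
        }
        ```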

    4. Brewster's Angle Grinder Silver badge

      Re: Doesn't a proxy defeat the purpose?

      The former method is like an office block where there's a public lobby and each office has its own individual key. If an attacker breaks in, they only get one office. But there are lots of doors to secure, and possibly somebody forgets to lock one, or it gets damaged and nobody notices.

      Trevor's method puts a single guarded door on the entrance to the building, but doesn't lock any of the doors within. So the attackers only have one door to focus on; but the defenders only have one door to monitor. Swings and roundabouts. That said, if the attackers get in, they have full access. But there's no reason you couldn't encrypt the traffic between the proxy and the backend servers.

      1. Anonymous Coward
        Anonymous Coward

        Re: Doesn't a proxy defeat the purpose?

        Not really. Your reverse proxy doesn't need any special access to the backend servers - just HTTP. It also has a much smaller attack surface. You aren't running Wordpress on your reverse proxy server for example - you're running that behind it.

        If you managed to break into the reverse proxy server, the only bad thing you'd be able to do is to sniff all the traffic. While that's admittedly bad, your reverse proxy server is going to be considerably more secure than your backend servers due to the lack of anything running on it.

            You can also make that more difficult by using TLS between your frontend and backend servers (although if nginx can decrypt it, a determined attacker can too).
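
            A sketch of what that frontend-to-backend TLS might look like in nginx, assuming an illustrative internal hostname and an internal CA bundle:

            ```nginx
            location / {
                # Re-encrypt towards the backend and verify its certificate
                proxy_pass https://backend.internal;
                proxy_ssl_verify on;
                proxy_ssl_trusted_certificate /etc/nginx/ssl/internal-ca.pem;
                proxy_ssl_name backend.internal;
            }
            ```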

        1. patrickstar

          Re: Doesn't a proxy defeat the purpose?

          End-to-end SSL (applies to other encryption as well of course) comes with its own security risks.

          Basically it means you have no reasonable way to inspect the traffic except on the host itself. Which, if the host is compromised, means that it might very well be flat out lying to you. Even if it's not, you'd have to instrument or trace the server software to do it, which in the best of scenarios is a major hassle and slows incident response tremendously. Worst case you end up having to do something insane like MITMing the traffic yourself. Either isn't very good for long-term passive observation, plus alerts the attacker that you're on to him.

          So the attacker might be exfiltrating lots of juicy data from your backend, or working on compromising the rest of your network, and all you'd see is SSL traffic with no way to tell the contents. While the attacker can (and often will) mask his traffic as somewhat legit HTTP requests, it's easy to fool a computer pattern-matching the contents and another thing entirely to fool a human observer consistently.

          Not saying it's universally a bad idea or anything, just that it's something that has to be considered, and weighted against the risk of terminating SSL somewhere else than the actual web server.

    5. PyLETS

      Re: Doesn't a proxy defeat the purpose?

      No particular reasons not to run the proxy on the same host for low traffic multiple domain name sites, allowing more modular webserver configuration. Then no part of the link between the proxy and the back end web server becomes any less secure than the host OS. In most cases the threat model being defended against with HTTPS in preference to HTTP isn't likely to concern the link between the proxy host and the backend host if these are running on different hardware within the same secured LAN anyway.

  5. Anonymous Coward
    Anonymous Coward

    Even putting aside the inability to get IPv6 addresses directly from the ISPs on consumer lines, getting an IPv6 subnet for use with business fibre connections can often be a nightmare of justification forms and bureaucratic nonsense.

    Wow - even in rural Alabama, fiber is IPv6. Comcast, AT&T and Verizon are rolling out IPv6. Almost all .gov sites have an IPv6 address.

    1. John Sager

      Yup. I'm just a home user & my ISP just handed me a /48 no problem - the equivalent of a Class B in old money. Not that I need that much (yet...).

    2. Ken Hagan Gold badge

      I noticed that, too. I wonder how much of Trevor's hostility to IPv6 would disappear if the ISPs that he is forced to work with (and I mean forced, since they are chosen by his clients and presumably that decision isn't one that El Trev can easily overturn) suddenly offered working IPv6 with no restrictions on its use.

      As things stand, the choices appear to be:

      No IPv6. Nada. Fuggeaboutit.

      Well, maybe we could fix you a tunnel through some provider and you could try to funnel all of your client's network presence through that tunnel (and hope that the tunnel provider never fails).

      Well, maybe we could give you IPv6 but it would have to be on our "special plan for lab-rat customers" whilst we figure out ourselves how to do it.

      Well, maybe but we won't provide you with any level of 4/6 interop, so you'll have to implement your client's operations twice -- once in IPv4 and once again in IPv6.

      Faced with those choices, and knowing that the extra leg-work would be different for every client I have, I'd probably be like Trevor: work out an IPv4 workaround for everything and ignore IPv6 until the ISPs of the world get their fucking fingers out.

      1. Trevor_Pott Gold badge

        I use SixxS tunnels. They randomly stop working and cause problems. I'm not a fan.

        Even if they did work, however, there's still the renumbering problem, which was never solved. Every other complaint I have aside, renumbering is a massive problem that you simply can't get around without 1:1 NAT, something which causes the purists to ooze out of the wall and start wailing about how the world isn't fair and we're trying to take away their toys.

        Which means, of course, that you have to choose between downtime and a :lot: of administrative effort whenever you need to fail over between links (because you don't get BGP access for SMB internet connections) or you have to very carefully pick your software such that it doesn't require some stupid end-to-end configuration because there's some gods-be-damned IPv6 purist working as a dev at the wretched urine factory that made the app you want to use.

        So you know what? Not so fond of IPv6. Maybe if it wasn't drafted by, and subsequently lorded over by, a bunch of elitist fuckbaloons that don't give a rat's ass about anyone who can't stump up a few million a year in internet connectivity, I might care. But since the poxy whoresons decided to just abandon the majority to the wolves because we "don't matter", I'm not particularly inclined to give them a free ride.

        1. Notas Badoff
          Joke

          Terminology

          I'll add these exciting new terms to my educational materials. Double thanks!

        2. Orv Silver badge

          Something very much like 1:1 NAT exists in IPv6. It's called Network Prefix Translation. It can solve pretty much the problem you're talking about.

          1. Trevor_Pott Gold badge

            NPT *is* 1:1 NAT, and IPv6 purists hate the ever-living crap out of it, with many refusing to code for it, add support for it, etc.

            I even wrote about it in the article I linked to...

            1. Gerhard Mack

              Wrong.

              "NPT *is* 1:1 NAT, and IPv6 purists hate the ever-living crap out of it, with many refusing to code for it, add support for it, etc.

              I even wrote about it in the article I linked to..."

              It would have helped if the article you linked to wasn't completely full of crap. What IPv6 purists hate is 1-to-many NAT. NPT on IPv6 is easy and has been supported for years (I've used it), and support is firewall-based, so it's application-independent.

              Don't even get me started on the bits about IPv6 doing away with static IPs - it was actually DHCP they wanted an alternative to. On public servers, you will want to renumber anyway if the ISP changes your address. On private servers, you will want to assign them a local (non-routable) IPv6 range and either 1:1 NAT at the gateway, or use the local IPv6 addresses internally and allow the machine to auto-assign the external IPs for internet access. Again, IPv6 makes this easy.

              1. Yes Me Silver badge

                Re: Wrong.

                "it was actually DHCP they wanted an alternative to."

                Historically, no. DHCP wasn't even there when IPv6 autoconfiguration was invented, modelled on Novell NetWare IPX. DHCPv6 was an add-on some years later, after DHCPv4 saved IPv4 from configuration collapse.

                (While I'm here, NAT44 wasn't there either, in terms of actual products, when IPv6 was invented. NAT44 saved IPv4 from an early grave, but *after* IPv6 was already designed.)

                1. Trevor_Pott Gold badge

                  Re: Wrong.

                  And it took 20 years to get the bastards to admit we needed Network Prefix Translation, and it will be 20 more before it's widely supported enough for use. NAPT in IPv4 scared the IPv6 purists enough for them to fight a generation-long war against the simple idea that ease of use matters for someone other than developers, universities flush with grant money and large corporations.

                  1. Gerhard Mack

                    Re: Wrong.

                    "And it took 20 years to get the bastards to admit we needed Network Prefix Translation, and it will be 20 more before it's widely supported enough for use. NAPT in IPv4 scared the IPv6 purists enough for them to fight a generation-long war against the simple idea ease of use matters for someone other than developers, universities flush with grant money and large corporations."

                    Again, it has been supported and completely usable since before you wrote the original article in 2012.

                    You are like the Breitbart of the tech world.

                    1. Trevor_Pott Gold badge

                      Re: Wrong.

                      An RFC existing doesn't make anything supported or usable. Being incorporated into working products does. Having applications not coded to expect end-to-end and having them not die when there's a prefix change does.

                      In short: years and years of IPv6 "support" has to be completely undone and redesigned. NPT hadn't been done then, and is still incredibly rare today. Of course, we could always use the traditional IPv6 purist answer: everyone should throw away everything they have and buy the most expensive possible new everything and just hope it supports what you need. Just do that regularly and you'll clearly be fine.

                      Or, you know, not use IPv6 until everyone gets their shit together.

                      RFCs are only "usable" once broadly implemented. Still fucking waiting...

                      1. Orv Silver badge

                        Re: Wrong.

                        I've used NPT successfully in pfSense, and they tend to be on the trailing edge. I'd be pretty surprised if the megabuck network equipment from the likes of Cisco and Juniper that "real" sites use didn't have support.

                        1. Trevor_Pott Gold badge

                          Re: Wrong.

                          @Orv: then you'd clearly be surprised at the number of network equipment vendors still shipping models today that don't support it. Let alone any of the midmarket, SMB or consumer-level stuff, which are the folks that really need it. You know, because of renumbering. We're still a decade away from NPT reaching the folks who need it. And judging from the reactions of IPv6 purists here in this very thread, we might have to wait more than a decade before the purists decide they'll support NPT in the software they develop.

                          Awesome. And just think, had the IPv6 elites not been stubborn asshats for 15 years, we could have solved all of this ages ago and could be using it today in a manner that met everyone's needs. But people suck.

                          1. Orv Silver badge

                            Re: Wrong.

                            I will admit that consumer-level stuff mostly doesn't support NPT. But consumers are generally not running static IPs to begin with. If their prefix changes they just get the new prefix via router discovery and carry on.

                            I can't speak to SMB equipment. But for the cost of a low-end server you can use pfSense, which does support it, and has for quite a while now. That code originated with OpenBSD, and no one has ever accused them of being insufficiently pure. ;)
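
                            For the curious, an NPTv6 rule in pf looks roughly like this (the prefixes are illustrative, and internal and external prefixes must be the same length):

                            ```
                            # pf.conf sketch: statelessly map an internal ULA prefix to the
                            # ISP-assigned prefix on the way out, and back on the way in
                            match out on egress inet6 from fd12:3456:789a::/48 to any \
                                binat-to 2001:db8:abcd::/48
                            ```

                            pfSense's NPt feature generates much the same thing behind its GUI.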

                    2. billse10

                      Re: Wrong.

                      "You are like the Breitbart of the tech world."

                      There's no need for that sort of language around here.
