TLS proxies? Nah. Truthfully Less Secure 'n' poxy, say Canadian infosec researchers

Enterprises buying TLS proxies to improve their network security could easily be making things worse, according to Canadian research out this week. The analysis is depressing enough on its own, but it comes from a group with a long history of demonstrating …

  1. Anonymous Coward
    Anonymous Coward

    no Forcepoint

    wow, what a blow to Raytheon for buying out Websense... their appliances didn't even make the test.

  2. Nate Amsden

    lesser threat

    I don't use these appliances (I haven't done corp IT work since 2002), but I can certainly understand security folks seeing whatever vulnerabilities they have as a lesser risk than letting people connect externally to things they can't inspect. I'd wager many companies would even be fine with SSL termination on the proxies and just providing HTTP internally (that is quite common in application load balancing setups anyway, something I have been doing for about 15 years now).

    Now, it would certainly be good if these vulnerabilities were fixed, though I don't agree with disabling support for older protocols, not without a graceful failure mode of some kind. It drives me insane that browsers and people push to completely disable stuff without any sort of graceful failure mode. I've been saying for years now: treat those sites as if they were using self-signed certs. Provide a warning, and a way for the user to continue past the warning if they deem the risk is acceptable, or if they just don't care. The same level of threat exists with self-signed certs as it does with weak(er) encryption. Well, I guess technically non-trusted certs are much easier to deploy and so a much greater risk than weaker encryption.

    But as it stands, the issues in the article seem to be far less of a threat than what companies would face if they removed the appliances altogether.

    I had discussions with one guy who is really good at security; at his previous company he didn't allow any outbound communications from the servers to the internet unless they went through a proxy. Which on paper sounds fine, but unless you're doing SSL interception on that proxy there's still a very wide open door for stuff to get out that you can't see (in the earlier days at his company this was long before widespread HTTPS adoption). With more and more external services dependent upon large crappy clouds that have wide swaths of IP addresses that change often (without notice), it's not really practical to try to lock down communications to IPs (at least to ones based in those clouds), and often even more difficult to determine what is on a given IP.

    I recently had to diagnose a network issue in this situation, and fortunately the remote IPs were serving regular HTTPS and their SSL certs were specific enough to let me identify the organization running the service on those IPs (a provider the company I am with does business with, so I recognized it).
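
    For what it's worth, that sort of cert-based identification can be sketched in a few lines of Python; the IP below is a placeholder rather than the actual provider, and the 'cryptography' package is assumed to be available:

        import socket
        import ssl

        from cryptography import x509  # assumed available; used only to parse the cert

        def who_runs(ip: str, port: int = 443) -> str:
            """Grab the TLS cert presented on ip:port and return its subject."""
            ctx = ssl.create_default_context()
            ctx.check_hostname = False          # we only want to *read* the cert,
            ctx.verify_mode = ssl.CERT_NONE     # not decide whether to trust it
            with socket.create_connection((ip, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=ip) as tls:
                    der = tls.getpeercert(binary_form=True)
            cert = x509.load_der_x509_certificate(der)
            return cert.subject.rfc4514_string()

        print(who_runs("192.0.2.10"))  # placeholder IP only

    The subject alternative names in the cert are often even more specific than the subject, if the subject alone isn't enough.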

    1. Flakk
      Pint

      Re: lesser threat

      Exactly my thinking. I am grateful to these researchers for their excellent work, and am hopeful that these companies are mindful of the results and will fix their kit. Nevertheless, not using the gear seems like the bigger risk. There are no absolutes in risk analysis/management, but not having visibility into encrypted ingress/egress traffic would keep me awake at night.

    2. Tomato42

      Re: lesser threat

      > provide a warning, and a way for the user to continue past the warning if they deem the risk is acceptable, or if they just don't care.

      We had this kind of behaviour in browsers; it is exactly the reason why BEAST was exploitable.

      And showing an HTML error that the user can click through is way, way too late: the authentication cookies have already been sent over the insecure channel.

      1. Nate Amsden

        Re: lesser threat

        Not quite the same, I think. Well, it could be; I'm not sure how browsers behave in the background when this happens. What I'm referring to is the big warning dialog that pops up saying "this cert is not trusted", and says why. Then you can override and continue connecting if you wish (unless the site uses HSTS or whatever that thing is called). I don't expect browsers to submit data until that exception is granted, but they might; I haven't checked myself.
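
        At least for plain TLS libraries the ordering is clear. Here's a minimal Python sketch (using the badssl.com test hosts purely for illustration) showing that certificate validation happens during the handshake, before any request data or cookies can leave the client:

            import socket
            import ssl

            def try_fetch(host: str) -> None:
                ctx = ssl.create_default_context()  # default validation, like a browser with no overrides
                try:
                    with socket.create_connection((host, 443), timeout=5) as sock:
                        with ctx.wrap_socket(sock, server_hostname=host) as tls:
                            # Only reached once the handshake (and the cert check) has succeeded.
                            tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
                            print(host, "->", tls.recv(32))
                except ssl.SSLCertVerificationError as err:
                    # Handshake aborted: no request, headers or cookies were ever sent.
                    print(host, "-> rejected before anything was sent:", err)

            try_fetch("self-signed.badssl.com")  # untrusted cert: nothing is sent
            try_fetch("example.com")             # trusted cert: the request goes out

        Whether a browser that has been told to add an exception behaves the same way in the background is, as I say, a separate question.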

        I recall back in the 2004 or so time frame, the company I was at had tons of SSL certs, so many that we had a special portal to Verisign's site where I could issue certs without ordering them each time, and they would invoice us (something like $90,000 a year in certs). It was also my first (and probably only) experience using client-side SSL certs for authentication to a website.

        Anyway, in one case there was a cert error that I saw and one of the support folks wasn't seeing. He wasn't the smartest guy in the company, but he was a good support person. He was conditioned, I guess you could say, to just click past SSL errors (in this case I think it was IE, with a pop-up dialog box and a one-click bypass). I went to his desk and talked him through the steps to reproduce the error. The error popped up and he instinctively clicked "continue" (or whatever the button was called); it didn't even register with him. I laughed and said STOP, the error was RIGHT THERE. We went through it again and he spotted it at that point.

        So certainly people can be conditioned to click past the errors, but as long as "untrusted" certs can be allowed in browsers (and if browsers some day decide to stop that, I'll perhaps just get off the internet entirely), the risk of an untrusted cert intercepting data is far greater than that of a MITM decrypting data because of weak(er) encryption.

        But at the end of the day, the whole SSL CA system is flawed security-wise anyway, since the list of trusted CAs seems to go on forever and there don't seem to be good enough controls on how certs are issued. Obviously there have been several incidents over the years where certs were issued to the "wrong" people for big domains.

        But go beyond browsers and think of all the server-side applications that use SSL: server-to-server communications, whether API endpoints, email services, or other proprietary protocols. Maintaining SSL versions and ciphers is, honestly, black magic in many cases. Something as simple as the ordering of the ciphers can throw everything off.
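
        As a tiny illustration of the ordering point, in Python's ssl module (which wraps OpenSSL) the server's preference is literally the order of the string you pass in; the cipher string here is just an example, not a recommendation:

            import ssl

            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE   # honour the server's order, not the client's
            # OpenSSL-style cipher string: order expresses preference, '!' removes entries.
            ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5")
            print([c["name"] for c in ctx.get_ciphers()])

        Drop or reorder one entry and an old client that only speaks that one suite simply stops connecting.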

        A few months ago I upgraded some of our internal systems, and when we hit production a critical external endpoint was simply failing. It had worked fine prior to the upgrade, but not after. It was working in test only because test had been configured to use HTTP; in test HTTPS would fail because the vendor's cert had expired years ago, so it failed validation, and in production HTTP was not allowed (on their end). After some investigation I determined they were using ciphers on their site that are now considered very insecure, and OpenSSL (or GnuTLS, I forget which) refused to connect to the site no matter what. Strangely enough, whichever of OpenSSL or GnuTLS refused to connect, the other one worked fine (so if OpenSSL was failing, GnuTLS would work, or vice versa; I forget which worked and which did not). I ran an SSL Labs diagnostic on the site and it got a grade of "F". I ended up building a system on an older OS for that API call until the vendor could fix their stuff.
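
        That sort of protocol probing can be sketched like this; the hostname is a placeholder rather than the actual vendor, and verification is switched off so the expired cert doesn't get in the way of the protocol question:

            import socket
            import ssl

            VERSIONS = [
                ("TLS 1.0", ssl.TLSVersion.TLSv1),
                ("TLS 1.1", ssl.TLSVersion.TLSv1_1),
                ("TLS 1.2", ssl.TLSVersion.TLSv1_2),
                ("TLS 1.3", ssl.TLSVersion.TLSv1_3),
            ]

            def probe(host: str, port: int = 443) -> None:
                for name, version in VERSIONS:
                    ctx = ssl.create_default_context()
                    ctx.check_hostname = False        # ignore the (expired) cert for this test
                    ctx.verify_mode = ssl.CERT_NONE
                    ctx.minimum_version = version     # pin the handshake to exactly one version
                    ctx.maximum_version = version
                    try:
                        with socket.create_connection((host, port), timeout=5) as sock:
                            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                                print(f"{name}: accepted ({tls.cipher()[0]})")
                    except (ssl.SSLError, OSError) as err:
                        print(f"{name}: refused ({err.__class__.__name__})")

            probe("api.vendor.example")  # placeholder endpoint

        Note that a sufficiently new OpenSSL build at its default security level won't offer TLS 1.0 or 1.1 at all, which may well be the "refuses no matter what" behaviour I was hitting.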

        Fortunately, for HTTPS-based sites there is the SSL Labs testing site; without that I don't know what I'd do.

        As for BEAST, I don't recall the details much, but I do recall putting an easy workaround in place on my NetScaler load balancers a few years ago, back when we were prevented from upgrading the load balancer code to something that supported newer than TLS 1.0 due to an unrelated bug in the platform, which took a good two years to get resolved.

        The whole dumbing down of the internet is quite annoying to me. Present the user with choices and let them choose what they want to do (I have no problem with default choices, just let users override them if they desire). Browser vendors, in particular Chrome and Firefox, have been absolutely, positively terrible in this regard (I say this running the Pale Moon browser; I clung to Firefox for as long as I could).

        1. Anonymous Coward
          Anonymous Coward

          Re: lesser threat

          > HSTS or whatever that thing is called

          Respectfully, it doesn't sound particularly reassuring that in a discussion about TLS proxies someone would not be familiar with HSTS.

          > I don't expect browsers to submit data until that exception is granted, but they might; I haven't checked myself.

          Nor does this sound reassuring, but nonetheless, thank you for being honest about your own capabilities.

    3. Anonymous Coward
      Anonymous Coward

      Re: lesser threat

      Absolutely correct, Nate. We've used them in decryption mode (Raytheon, a.k.a. Forcepoint, f.k.a. Websense) for six years. That article is garbage because it doesn't address exactly what you mentioned: the true risk these appliances can mitigate.

      For a ~1,000-employee company we see HTTPS stops many times on a weekly basis. As more traffic moves to HTTPS it will only get worse for companies that don't decrypt, because that garbage makes it all the way to the endpoint.

      In the past two years we've reimaged precisely one PC for a suspected malware hit: an AV detection but no application-whitelisting hit. It was purely precautionary.

      I know some same-sized and smaller law firms who have a person or two dedicated to reimaging PCs because the partners won't permit decryption. One does about 1% of their PCs each week. Seriously.

      As for what the article reported on, I can tell you that Forcepoint released patches for many of those issues a year or more ago, but if your company culture is to not pay attention, you're going to get burned regardless of what products you buy.

    4. Anonymous Coward
      Anonymous Coward

      Re: lesser threat

      > can certainly understand security folks seeing whatever vulnerabilities they have as a lesser risk than letting people connect externally to things they can't inspect.

      This is exactly how not to do a security assessment.

  3. ecarlseen

    Unfortunately, there can be some good reasons for this.

    As someone who occasionally manages such devices, I've run into situations where we needed to offer support for poor-quality encryption in order to enable business to function with outside organizations that are not up to snuff. And before the ZOMG screams for regulatory intervention begin, I would note that nearly all of the organizations we have to make accommodations for are governmental or government-appointed monopolies (exclusive rights to provide services for government agencies). We had one the other week whose Internet-facing web server was still running on Windows 2003. They plan to upgrade eventually, when they get around to it. As far as they're concerned, as long as browsers connect, they give precisely zero fucks (and this in an area where private businesses are tightly regulated due to presumed terrorism risks).

    And here's the other thing: while solid encryption is critical for protecting many sorts of information, there are other areas that just aren't important. Ironically, the drive to encrypt everything to the eyeballs seems to be largely driven by Google, who then hoovers up so much information about everyone, which, in turn, is available to various governments upon request (and, if their Dragonfly project has any meaning, preemptively). Since encrypted transit across the Internet is mainly a protection against spying by nation-states (until non-state criminal organizations are able to tap Internet backbones), the whole thing seems immensely overblown.

    In my mind a more rational response would be to have the browsers do a better job of indicating the relative strength of encryption on any given site. This should be done in a manner that is continuously obvious to the user as they use the site (frame the window in red or something), but does not require additional action on their part. If a site doesn't have encryption, then indicate it but go no further. Browsing some online brochures is usually not a secret worth protecting. We'll get further with shaming poorly-secured sites than we will with the current trend of giving users so many click-through warnings that they just ignore them all.

    1. Adam 1

      Re: Unfortunately, there can be some good reasons for this.

      Do you honestly believe that nation-states are the only ones who MitM? The hardware to MitM an open WiFi access point is on the order of $100-$200, complete with YouTube instructions. Injecting coinhive.js into any HTTP-delivered page is beyond simple. It runs on batteries and is small enough to be discreetly hidden in your bag, some even in your pocket (the range you want determines how big the antenna is). In terms of complexity, this is "interview question for a junior infosec position" level. As in, not even a theoretical test, but rather: here is a device, do it.

      And coinhive is at the lighter end of a criminal payload.

      But take even your example of browsing some online brochure, which you deem perfectly adequate over HTTP. When you click the 'buy it' link, I'm sure you would agree that it should jump to HTTPS. The site may even put the redirect in for you, so that's nice. Unfortunately, since the page was delivered insecurely, the MitM can intercept it and replace the form submit target. Awkward.

      1. J. Cook Silver badge

        Re: Unfortunately, there can be some good reasons for this.

        Do you honestly believe that nation-states are the only ones who MitM? The hardware to MitM an open WiFi access point is on the order of $100-$200, complete with YouTube instructions.

        Yep. There's even a commercial product that does it. (WiFi Pineapple)

  4. Anonymous Coward
    Anonymous Coward

    Preventing good security practice

    My biggest grievance with SSL/TLS proxies is that they prevent good security practice on the client. Fundamentally, the end user is the person best placed to assess how secure a particular session needs to be. Whilst I might be completely OK with accepting a domain-validated (even self-signed) certificate when reading The Register, I would certainly not do my internet banking over such a connection. The fact that browsers by and large hide this information behind a blanket lock logo, and most users will never look any deeper, doesn't mean it's OK to remove the information entirely. Lastly, unless I am wildly underestimating their capability, this proxying (man-in-the-middling) completely breaks certificate pinning on the client, something that generally seems a good thing to encourage.
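
    To make the pinning point concrete, here's a rough Python sketch of the idea (the pinned hash is a made-up placeholder and the 'cryptography' package is assumed): the client remembers the hash of the public key it expects, and a proxy that re-signs traffic with its own key can never produce a match.

        import base64
        import hashlib
        import socket
        import ssl

        from cryptography import x509
        from cryptography.hazmat.primitives import serialization

        EXPECTED_SPKI_SHA256 = "replace-with-the-real-pin"   # placeholder pin

        def spki_sha256(host: str, port: int = 443) -> str:
            """Base64 SHA-256 of the public key the server actually presented."""
            ctx = ssl.create_default_context()
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    der = tls.getpeercert(binary_form=True)
            cert = x509.load_der_x509_certificate(der)
            spki = cert.public_key().public_bytes(
                serialization.Encoding.DER,
                serialization.PublicFormat.SubjectPublicKeyInfo,
            )
            return base64.b64encode(hashlib.sha256(spki).digest()).decode()

        def connect_if_pinned(host: str) -> None:
            seen = spki_sha256(host)
            if seen != EXPECTED_SPKI_SHA256:
                raise ssl.SSLError(f"pin mismatch for {host}: got {seen}")
            # ...only proceed with the real request once the pin matches...

    Behind a TLS proxy the connection is terminated and re-signed by the proxy's own CA, so the hash seen above is the proxy's key and the check fails every time.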

    1. Anonymous Coward
      Anonymous Coward

      Re: Preventing good security practice

      The client, particularly when it's enhanced by the person at the keyboard, is the greatest risk.

      Certificate pinning is being deprecated because it mitigates precisely one problem and can cause an enterprise-wide outage when something goes awry. In the last decade I've experienced precisely three vendors using certificate pinning at all. It's the Betamax of TLS security.

  5. Anonymous Coward
    Anonymous Coward

    Risk acceptance. That's the actual end goal.

    I.e., what level of risk is the business willing to accept? A MiTM appliance that pulls multiple duties (HTTPS proxy plus content filtering AND security filtering), or an entirely open connection and an infosec group ten times its size to deal with the increased number of security incidents, the massive productivity hit from over half the company looking at facebook/youtube/[insert time-wasting site]/adult sites, and the over-utilization of an internet connection that also handles little things like payment processing and our phone system?

    My company chose the MiTM appliance. It's a pain in the butt to keep on top of: there's a list of sites we've had to manually whitelist as long as my... well, it's pretty long, and there are interesting rendering issues with some sites due to the HTTPS proxy. (It seems Java does not play nicely with the internal CA we used to issue a subordinate issuer certificate to the appliance, which was itself a pain in the butt to do.)

    anon for reasons

  6. jessehouwing

    Microsoft TMG and TLS support

    Older SSL versions and newer TLS versions can be controlled through Windows policies, and after tweaking the registry you have far more control over which hashing algorithms and protocols to support.

    https://serverfault.com/a/685278/154975

    But Microsoft has long made it known that its Windows-based network management tools are considered deprecated and that it's moving out of that business. It would have been nice to mention this in the article.

    http://techgenix.com/there-still-life-left-forefront-threat-management-gateway-tmg-2010/

    1. Amos1

      Re: Microsoft TMG and TLS support

      TMG effectively went EOL years ago. If your company is still using it they are not interested in securing their data.

  7. ysth

    TLS proxies for security?!?

    I thought TLS proxies were for spying on users; would anyone really buy one to *increase* security?

    1. Spamfast
      FAIL

      Re: TLS proxies for security?!?

      I thought TLS proxies were for spying on users; would anyone really buy one to *increase* security?

      The private citizen in me totally agrees, and I hate the idea of breaking end-to-end encryption, but IT departments do have a duty to protect end users and the infrastructure from attack, and a (properly configured!) TLS proxy does allow this.

      What concerns me is that companies don't make it clear to employees that they MITM every HTTPS connection and that a rogue IT bod could syphon off sensitive info when employees use work computers for online banking and the like. Companies should be obliged to explain clearly to new employees that the 'lock' symbol on their browser does not provide privacy on the corporate LAN. Ideally, explicit written consent should be required.
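
      For the curious employee, checking whether you are being intercepted is straightforward: look at who issued the certificate your machine actually received. A rough Python sketch (the hostname is just an example, and the 'cryptography' package is assumed):

          import socket
          import ssl

          from cryptography import x509  # used only to parse the certificate

          def issuer_of(host: str, port: int = 443) -> str:
              ctx = ssl.create_default_context()   # same trust store the OS/browser uses
              with socket.create_connection((host, port), timeout=5) as sock:
                  with ctx.wrap_socket(sock, server_hostname=host) as tls:
                      der = tls.getpeercert(binary_form=True)
              return x509.load_der_x509_certificate(der).issuer.rfc4514_string()

          # A public CA here is what you'd expect at home; your employer's internal
          # CA showing up instead means every HTTPS connection is being proxied.
          print(issuer_of("www.example.com"))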

      As Uncle Ben said, "With great power comes great responsibility." Any IT staff who have access to the key hardware need to be subject to criminal record and security checks as rigorous as those for military, law enforcement or child-care roles. I'm damn sure that very few are.

      Actually, even if not malicious, most IT staff I've met don't have the competence to maintain good security policy, yet are all still given administrative access to security infrastructure. I'd never connect any of my own sensitive devices directly to a company LAN: far too insecure!

      1. Anonymous Coward
        Anonymous Coward

        Re: TLS proxies for security?!?

        "The private citizen in me totally agrees and I hate the idea of breaking end-to-end encryption but IT departments do have a duty to protect end users and the infrastructure from attack and a (properly configured!) TLS proxy does allow this."

        We do it (as a bank) and we do not do it for spying on end users. We do it to protect the mass of data we hold about you, our customers.

        I really don't care whether a single person's transactions are compromised; that happens every day for many reasons and is usually customer-caused. What I care about is losing the mass of data and causing a lot of people a lot of pain.

        Needs of the many outweigh the needs of the few kind of thing.
