Received an e-mail from PayPal yesterday saying that I may have to upgrade my browser or not be able to log in to PayPal after the 30th of June.
They are dropping support of TLS < 1.2 as of the 30th.
My browser passed the test.
As TLS 1.3 inches towards publication in the Internet Engineering Task Force's RFC series, it's a surprise to realise that there are still lingering instances of TLS 1.0 and TLS 1.1. The now-ancient versions of Transport Layer Security (dating from 1999 and 2006 respectively) are nearly gone, but stubborn enough that Dell …
"Chrome and Firefox support TLS 1.3."
True.
"Remind me, which versions of IE support it?"
None yet, and I didn't even allude to TLS 1.3 support. The article was about dropping support for TLS <1.2.
TLS 1.2 is not going to be deprecated soon. No-one is going to drop TLS 1.2 soon. Not because of IE, but because most mobile devices don't have TLS 1.3 implemented. Plenty of web servers and cryptographic libraries still don't have TLS 1.3 support.
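A quick way to see what a local TLS stack actually offers is to ask the library itself. This is a minimal sketch using Python's ssl module; the HAS_* flags report what the linked OpenSSL build was compiled with, so the output varies by machine:

```python
import ssl

# Report which TLS protocol versions the locally linked OpenSSL supports.
# These flags are compile-time properties of the underlying library.
print("OpenSSL:", ssl.OPENSSL_VERSION)
for flag in ("HAS_TLSv1", "HAS_TLSv1_1", "HAS_TLSv1_2", "HAS_TLSv1_3"):
    print(flag, "=", getattr(ssl, flag))
```

On a current desktop all four will usually be True; on older embedded or mobile stacks TLS 1.3 is the one that tends to be missing.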
Here's a small tidbit about TLS 1.2:
TLS 1.2 was ratified back in August 2008. IE had it implemented in July 2009, when Win7 was released. Firefox/Chrome implemented TLS 1.2 in the summer of 2013.
Care to comment on that?
"Nice trolling."
You didn't even have the balls to post with your screen name, silly person.
>I wonder if Microsoft is going to update Internet Explorer . . .
Probably not if you are still running IE6....
Interestingly, it does seem that it is the security updates (protocols, certificates and suchlike that support secure communications) that are forcing XP systems off the Internet(*), and I expect they will do the same to Win7 post-2020.
(*) Firefox on XP/Vista goes EoL this month.
Anyone still running Windows XP probably knows why, and the reason is *NOT* MSIE 8 (or 7 or 6).
Redmond was never good at browsers and MSIE / Edge are at best "graphical download tools for a real web browser".
Windows XP SP3 is still the version of Microsoft Windows with the fewest serious security holes (no kidding!), it is still under active maintenance (it just needs a registry key set to keep downloading patches), and earlier this year Microsoft actually did ship a TLSv1.2 implementation update for it: KB4019276
https://sockettools.com/kb/support-for-tls-1-2-on-windows-xp/
https://support.microsoft.com/kb/4019276
Not only do they only support TLS1.0, but including RC4-SHA in their cipher list? Really?
Supported Server Cipher(s):
Preferred TLSv1.0 128 bits AES128-SHA
Accepted TLSv1.0 256 bits AES256-SHA
Accepted TLSv1.0 128 bits RC4-SHA
Accepted TLSv1.0 112 bits DES-CBC3-SHA
Accepted TLSv1.0 128 bits ECDHE-RSA-AES128-SHA Curve P-256 DHE 256
Accepted TLSv1.0 256 bits ECDHE-RSA-AES256-SHA Curve P-256 DHE 256
Accepted TLSv1.0 128 bits RC4-MD5
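For contrast, here's a hedged sketch of how a server operator would address a scan like the one above, using Python's ssl module: raise the protocol floor to TLS 1.2 and strike the RC4/3DES/MD5 suites from the cipher list (the cipher string is standard OpenSSL syntax):

```python
import ssl

# A server context hardened against the findings above: TLS 1.2 as the floor
# (so TLS 1.0-only clients are refused) and RC4 / 3DES / MD5 suites excluded.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("HIGH:!RC4:!3DES:!aNULL:!MD5")

names = [c["name"] for c in ctx.get_ciphers()]
print(len(names), "suites remain; RC4 present:", any("RC4" in n for n in names))
```

Modern OpenSSL builds have already removed RC4 from the default list, but being explicit in the exclusion string documents the intent and survives library upgrades.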
No joke, some of our customers are still using SSL 3.0 and want our software to support it!
You know that business critical legacy system in the back of the server room, that will go puff one day and will be really hard to resuscitate because spares are no longer available, expertise is expensive, the original developers have moved on or retired, and the source code is stored on a defective proprietary disk that nobody has tested in decades .... and nobody dares a migration because, well, it is such a clusterfuck that it will end up being the last thing you do at the company ... it still works, so why bother ...
I've run into this issue a few times with ancient B2B devices. Luckily, there are SSL proxy devices on the market that can sit in front of a problem client or server and step up from, or down to, deprecated crypto versions (or no encryption at all).
If I can take a Commodore 64 running a web server and protect it with TLSv1.2 and PFS, you should be able to do the same with your servers.
If you have Avast, TLS 1.3 is pointless, because Avast thought it was a great idea to MitM your secure connections. They think their software is immune to bugs, and that their MitM tactic is perfect and cannot be 0wned ... maybe they implemented the famous backdoor that governments wanted, which might be why Kaspersky got some flak; they probably refused to implement the MitM.
Yeah, tough.
In the UK all cars go through an annual safety inspection (MOT, Ministry Of Transport test). If they don’t meet basic safety standards they’re not legally allowed on the road. You can still drive them (off the public road), you make your own risk assessment over how that may impact your life versus how much it will cost to fix.
No idea about Japan but if they have classic cars on the road it can't be the case.
The UK MOT test has changed over the years and got tighter (e.g. now a warning light for ABS or an engine management fault is an automatic fail even if the car passes brake efficiency/emissions), but the underlying test criteria, like seatbelts (must be sound if fitted, but not obligatory on old cars) and exhaust emissions in terms of CO/particulates, are those in force at the time the car was first sold.
Hmm, doesn't this accurately describe how emissions tests place new standards on old cars in the UK?
https://www.classiccarsforsale.co.uk/blog/market-trends/historic-cars-win-exemption-in-ultra-low-emission-zone
Yeah, classic cars get an exemption. But classic cars aren't used for everyday driving. They're used sparingly by collectors and museums.
"This is a bit like saying ALL cars must pass current standards and so most over a few years old are then automatically off to the scrappers."
Exactly, which is what we should be copying.
This is the case in Canada too. And California. And probably the rest of the USA.
Classic cars get an exemption -- but then classic cars are driven sparingly by their owners, and not driven commercially.
In the UK a car has to be pre-1980 to get the exemption. In Canada before 1988.
I've lived in California and still live in the USA, and I've never had a car that was required to pass the current year's emission standards rather than the ones in effect the year the car was made. People don't pay tens of thousands of dollars for a car that will have to be scrapped in a couple of years because the emission laws changed once again. That's just nuts.
ummm - not sure what part of Canada you are referring to, but here in Ontario the emissions test is based on the original requirements - not current standards...
and in fact, any car over a certain age is automatically exempt from the tests... furthermore, unless you are trying to sell an old car, the condition or driveability is not checked for passenger vehicles - if it has insurance you can renew plates without doing ANY safety validation
No, fitting seat belts is just common sense.
Some really old cars don't have any points you can sensibly attach belt mechanisms to (or are so valuable as "original" you don't want to and don't drive much either), but probably most cars post 1950s are OK. In fact many had them as extra cost options until the law changed to mandate them, first for front seats and then also for rear.
The primary reason to abandon TLS 1.0 and TLS 1.1 is SHA-1: both the signatures made by the server and the handshake transcript integrity depend on SHA-1.
The SHA-1 HMAC in the TLS 1.0-era ciphers is still secure, so they can be used with TLS 1.2, where they'll use SHA-256 for handshake transcript integrity (and a negotiated hash for server signatures).
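The distinction matters: SHA-1 is broken for collisions (which rules it out for signatures and transcript hashes), but HMAC-SHA1 does not rely on collision resistance. A toy sketch in Python, with an obviously made-up key:

```python
import hashlib
import hmac

# HMAC-SHA1, as used for record integrity in the TLS 1.0-era CBC suites.
# HMAC's security rests on SHA-1 behaving as a PRF, not on collision
# resistance, which is why these suites remain acceptable under TLS 1.2.
mac_key = b"session-derived-mac-key"   # placeholder, not a real TLS key
record = b"application data"
tag = hmac.new(mac_key, record, hashlib.sha1).digest()
print(len(tag), "byte tag:", tag.hex())
```

The tag is 20 bytes; forging it would require second-guessing the keyed function, not merely finding two colliding inputs, which is why HMAC-MD5 is likewise unbroken in practice even though MD5 collisions are trivial.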
Modern OS aren't the problem, embedded kit is. There's a variety of embedded kit that supports either HTTP, or TLS 1.0, and it isn't getting updated beyond that point.
It's all very well to say 'update to TLS 1.4', but when the response is 'where's 300 grand for new hardware and installation', even the more security conscious firms aren't likely to bite if the data involved aren't particularly sensitive. Then, beyond the 300 grand it turns out the new secure hardware isn't compatible with the old, so it needs work on both the client and server end, so add another ten grand plus by the time development and testing are complete.
What TLS endpoint vendors should really be doing is selective endpoint validation. So the majority of TLS clients go to the normal site and stay nice and secure. The few expensive holdouts only browse to www.mysite.com/URLUsedOnlyByExpensiveEmbeddedKit and are secured there.
Alternatively there's running the endpoint in HTTP and having a load balancer/TLS offloader that does selective permitting of TLS 1.0 as mentioned.
I'm sure someone will say "but almost all devices are able to update themselves these days" and while that may be true (ignoring the concerns over devices you never directly interface with updating themselves silently through a black box process) the problem will be that the vendors won't deliver updates.
If you had purchased a device in 1999 and it guaranteed updates for five years (better than any Android phones you can buy today, so probably pretty unlikely to see a guarantee like that for IoT) it would be stuck with TLS 1.0 when the updates stopped in 2004. While that might not be a worry for a throwaway device like a light bulb, something that you typically would keep using a lot longer like a "smart lock" or thermostat or fire alarm panel is likely to be woefully insecure during most of the time you own it.
Who's going to know - and if they do will they care - that most of their "smart home" tech is wide open to attack, even if they bought a brand name willing to give a 'really great' five year support guarantee?
Some devices that I'm thinking of do have remote firmware update capability, but it definitely isn't automatic as this isn't sensible in a corporate environment. They're still 'supported' but are a legacy product and later firmware isn't going to be produced.
It's also possible the hardware isn't capable of running TLS 1.4. In one instance I know of it does 'support' TLS 1.0, but badly. If TLS 1.0 is switched on fully (proper end to end certificate chain validation, etc) rather than its default setting of 'ignore the validation and assume everything is ok' (not ideal, but it does at least stop casual users snooping traffic), the commands it sends are delayed, which causes issues.
Sometimes hardware has plenty of resource to spare, the system tools are comprehensive, and a lack of updated firmware is entirely down to vendor laziness/stinginess. At other times the hardware is difficult to code for, with limited resource and space. Not everyone is NASA with millions of pounds and bright minds to throw at problems.
The other solution is to proxy the insecure device at the client end, but that solution has to be developed and installed, requires two power and network ports, and then you have two devices to secure...
The problem is that some browsers can't be updated (the TLS wiki has a compat matrix).
For instance, old Kindle Fires are stuck with KitKat, and the version of Chrome on them is stalled at a version that supports 1.1 but not 1.2.
Disabling a protocol blocks access for those devices, and that's a hard sell to customers who sell to impoverished nations, etc.
Made it a pain for devs
.NET 3.5 and below: the default was 1.0 (and there's no really viable nasty-hack workaround without major grief)
.NET 4, default 1.1 (1.2 not even supported unless you do a nasty hack)
Needed 4.6 for 1.2 as default
So, a lot of legacy .NET apps will have issues with TLS 1.1 / 1.2
MS "it's all about the developers" ... really????
The dates:
https://en.wikipedia.org/wiki/.NET_Framework_version_history
Things that were adequate in 2010 are out-of-date and inadequate now.
Is it really that surprising, given the rate at which hackers and academics find obscure bugs?
If we had to wait for ordinary profit-oriented criminal programmers to find the bugs, the products just might still be secure against criminals for another year or two. But that would require living in an alternate reality.
(Of course nothing is secure against major state signals intelligence agencies. They can always find ways in. Even TLS 1.4 connections won't be secure, because if outfits like the NSA can't find ways through it, they have many ways around it.)
How did Microsoft make it a pain for devs? Surely your code base follows some sort of obsolescence policy and you are keeping up to date with the .NET frameworks they release; even if you have an n-1 policy you should happily be on 4.6. As for the nasty hack, it's a reg key or a couple of lines of code in your project to specify 1.2!
I wrote a PowerShell script the other day to download a bunch of stuff ... the script worked fine on Windows 10, not so well on Windows 7, because I was doing this:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
A Windows 7 box I had no control over did not like this. I was told it had the latest patches; I cannot tell if it is a limitation in Windows 7 and do not really care if it is ... I was just trying to help a few coworkers stuck on SlurpOS.
Sadly, the website where I was downloading stuff, FLOSS stuff, was correctly configured TO MANDATE TLS 1.2.
You have secure protocols, when you wish to communicate with third parties in a secure manner, well, guess what ? It makes no sense NOT to use the safest ... if you don't agree, well, I guess you are whatever you would call me.
Corporate fallacy, legacy business critical (<--- THAT is nonsense; if it were business critical, you would get your act together), whatever your excuse. As I wrote back when IBM attempted to disable TLS 1.0 (yeah, TLS 1.0!!!) in its cloud and got flak from a bunch of n00bs: size does not matter in this one, you can be a multi-billion $ company with hundreds of thousands of employees, it does not matter:
If you do not take security seriously, insecurity will take care of your business, just not the way you would expect and you will end up making headlines ...
Passenger aircraft. Railway tanker cars. Trucks. Cars.
Procor is junking tens of thousands of DOT-111 tanker rail cars when the new tanker car standard comes into force in Canada and the USA. These things aren't cheap.
Old buildings must meet current fire codes. And old buildings that are extensively renovated must meet current building codes (building codes being more complete than fire codes).
Generally old buildings don't need to meet ALL current fire codes. They pick and choose (it is up to the AHJ, the authority having jurisdiction).
Retrofitting old buildings with sprinklers is very expensive, and generally isn't enforced, at least not for all types of occupancy (i.e. might be for hotels, but not for offices)
I remember when I started seeing that some servers would turn you away if they could tell that you allowed SSL v3. That's right: some servers will mistrust clients. But some clients only allow SSL v3 after renegotiation, so the server would only really know whether SSL v3 is allowed if it forced renegotiation, which wouldn't really be a workable solution.
On the other hand, with all the servers that I see with SHA-1 as the strongest signing hash and/or unsafe legacy renegotiation (pre-RFC 5746), and there are lots, it can be a real challenge to manage client applications that are strong and yet, ahem, reasonably compatible. Fortunately, 3DES is becoming rare, though I gag when I come across a server that allows SSL v2.
What I've ended up doing is consigning these laggards to a kind of kiddie pool. It's still a compromise; but... well, a number of commenters here have already spoken to how business forces just won't spend the money to keep up.
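That "kiddie pool" amounts to picking a per-destination TLS policy. A minimal Python sketch, with a hypothetical host list standing in for whatever inventory of laggards you actually maintain:

```python
import ssl

# Hypothetical list of laggard endpoints that still need old protocols.
LEGACY_HOSTS = {"old-partner.example.com"}

def context_for(host: str) -> ssl.SSLContext:
    """Strict context for everyone; a looser floor only for known laggards."""
    ctx = ssl.create_default_context()
    if host in LEGACY_HOSTS:
        ctx.minimum_version = ssl.TLSVersion.TLSv1    # the kiddie pool
    else:
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # everyone else
    return ctx

print(context_for("modern.example.net").minimum_version)
```

The point of keeping the legacy list explicit is that it can only shrink: removing a host from it silently upgrades that connection to the strict policy.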
Of the three TLS protocol versions
TLSv1.0
TLSv1.1
TLSv1.2
TLSv1.0 and TLSv1.1 are essentially equivalent in security, and TLSv1.2 is the weakest of the three.
Everyone with a crypto-clue should know that. TLSv1.2 contains stuff that is significantly weaker than in TLSv1.1 and TLSv1.0, such as "digitally signed", which newly added a ridiculously weak (rsa,md5) digital signature into TLS in 2008, at a time when (rsa,sha1) was supposed to be put out of use by end-2010.
But instead of making (rsa,sha256) mandatory and minimum, the de-facto standard is (rsa,sha1), which is still significantly weaker than (rsa,sha1+md5) used by earlier version of TLS and SSLv3.
And it is just mindboggling that implementers had to be persuaded to drop (rsa,md5) support from their implementations, because it didn't occur to a number of them just how obviously (a) stupid and (b) unnecessary it was to carry this mind-fart security problem from the TLSv1.2 spec into actual implementations.
And while there is a silly hype about AES-GCM (TLSv1.2 AEAD) cipher suites, those are in no way more secure than AES_CBC cipher suites. And TLSv1.2 contains a serious design flaw, which made a few implementations of AES-GCM fail catastrophically without those goofs resulting in interop failures. AEAD is cryptoglycerin / fragile.
https://www.cryptologie.net/article/361/breaking-https-aes-gcm-or-a-part-of-it/
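The failure mode behind that link (AES-GCM with a repeated nonce) can be sketched without any AES at all: in any stream construction, reusing a keystream leaks the XOR of the plaintexts. A toy illustration; the "keystream" below is a stand-in built from SHA-256, not a real cipher:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream for illustration only -- NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 12
p1, p2 = b"attack at dawn!", b"retreat at dusk"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))   # same nonce: fatal

# An eavesdropper who knows p1 recovers p2 without ever seeing the key,
# because c1 XOR c2 == p1 XOR p2.
print(xor(xor(c1, c2), p1))   # -> b'retreat at dusk'
```

With GCM the damage is worse still: nonce reuse also leaks the authentication subkey, letting an attacker forge valid tags, which is what the linked write-up demonstrates.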
Most of the so-called "attacks" that have been shown over the past decade weren't weaknesses of the TLS protocol (based on the properties that TLS is supposed to provide according to RFC 5246 Appendix F), but instead security design flaws in web browsers, or abuse of the protocol to multiplex arbitrary attacker-supplied data with data an attacker is not supposed to know _through_ the same TLS channel (such as "VPN tunnels"), rather than using separate and distinct tunnels for each data flow. Helping an attacker to perform inside attacks, which is what browsers are doing, means actively subverting TLS and going beyond its officially documented design limits. That is making TLS a scapegoat for security design failures in web browsers.
"[...] it's a surprise to realise that there are still lingering instances of TLS 1.0 and TLS 1.1 [...]"
Right. Like www.theregister.co.uk...
Protocols
TLS 1.3 Yes
TLS 1.2 Yes
TLS 1.1 Yes
TLS 1.0 Yes
SSL 3 No
SSL 2 No
https://www.ssllabs.com/ssltest/analyze.html?d=www.theregister.co.uk&s=104.20.251.41&latest