Re: The most shocking part of this story
Absolutely correct. We have made amends and are keen to move on. Please find enclosed 2 tickets for The Register to attend WWDC.
Firstly, jumping to 64-bit doubles your pointer sizes. Every array of pointers now takes up double the memory of its 32-bit cousin, every instruction that carries a memory address needs extra bytes to describe it, and so on.
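As a quick sanity check of the pointer-size point, here's a sketch in Python (assuming a CPython interpreter; `struct.calcsize('P')` reports the platform's native pointer width):

```python
import struct

# 'P' is the struct format code for a native pointer (void *).
# On a 32-bit build this is 4 bytes; on a 64-bit build it is 8.
pointer_bytes = struct.calcsize("P")
print(f"Pointer size on this interpreter: {pointer_bytes} bytes")

# An array of a million pointers therefore costs twice as much
# raw memory on a 64-bit build as on a 32-bit one.
array_cost_mb = 1_000_000 * pointer_bytes / 1_000_000
print(f"1M-pointer array: ~{array_cost_mb:.0f} MB")
```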
Secondly, time is a finite resource in a development team. Optimisation takes time: you have to profile, figure out whether you are CPU-, disk-, or memory-bound, and try alternatives. That is time that cannot be spent on other shiny shiny features or on digging out other bugs. So the feature of having it work faster, or having it work on older hardware, gets weighed up against everything else. This is true in both the open and closed source worlds.
Thirdly, optimisation changes as hardware evolves. 25 years ago you were probably optimising for some maths coprocessor. Today, you are probably trying to parallelise loads and get GPUs or cloud load balancers to improve throughput.
Fourthly, developers fix what they see and experience. That's the reason software can suck on low-resolution laptops: the team writing it has a dual 4K monitor setup on their i7s with at least 8 cores, somewhere north of 16GB RAM, and an SSD. They simply haven't had to tolerate it on a 5-year-old netbook, so the half a day spent making that feature quicker never gets prioritised.
A simple example of this is inlining. For example:
foreach (var thing in myThings)
{
this.ValidateThing(thing);
this.ProcessThing(thing);
}
Without inlining, every iteration of the loop needs to jump to and back from each of those methods. If the validation is pretty simple (say, checking something != null) then the time the CPU spends jumping in and out of those methods is going to be relatively significant. Inlining copies the method implementation into the call site so no jumps are required within that loop. You could do that manually, but your code would be unmaintainable. The cost of inlining by the compiler or JIT is that your application will be bigger. And that is just one example.
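You can see the call overhead for yourself with a rough sketch (Python rather than C#, and interpreted call overhead is much larger than a compiled language's, but the shape is the same; the function names are made up for the illustration):

```python
import timeit

things = list(range(10_000))

def validate(thing):
    # trivial check: the call overhead dwarfs the actual work
    return thing is not None

def with_calls():
    for thing in things:
        validate(thing)

def inlined():
    for thing in things:
        thing is not None  # same check, no function call to jump into

calls = timeit.timeit(with_calls, number=100)
inline = timeit.timeit(inlined, number=100)
print(f"with calls: {calls:.3f}s, inlined: {inline:.3f}s")
```

On most runs the inlined loop is noticeably quicker, purely because the jumps are gone.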
We could consider the trade-off between binary size and boxing/unboxing operations. For example
List<Animal> pets = new List<Animal>();
pets.Add(new Dog {Name="Fido"});
Console.WriteLine(pets[0].Name);
If I didn't use generics then I would have just a non-generic List and the last line would be the much less performant
Console.WriteLine(((Animal)pets[0]).Name);
Plus all the other fun bugs that come from accidentally casting something to something it is not. But again, this costs file size, because I need a separate definition for List<Animal> vs List<Commentards> vs ...
Disclaimer: I have no idea how these regulations are written.
I don't see a technical reason why the first phase cannot be broadcast only. If other nearby AI or semi AI vehicles know some basic information about my speed, acceleration, direction then they can take that into account in their own emergency manoeuvre planning. My bigger concern would be digital tracking by some Slurpy Inc (although they can do that with a camera and number plate recognition right now)
> That means if eg. an Audi driver turns in front of - it signals your car to brake to let them in ?
You must be mistaken. I've never seen an Audi with signals. Maybe you have confused it with their ad-hoc parking space indicator lamps, which designate a piece of road not otherwise required by the Audi driver and blink to indicate that it has been designated as a parking space.
> As far as pet hate subjects, Delphi IMHO is clearly the absolute leader.
That's interesting. I was kind of curious about why it was getting hate. To me, it was more a sadness about what might have been if better decisions had been made by Borland, sorry I mean Inprise, sorry I mean Borland (again), sorry I mean CodeGear, sorry I mean Embarcadero, sorry I mean Idera. The same sort of feeling one might have towards an up-and-coming sporting talent who, through a series of bad life choices, finds their career over before they have reached their full potential. The stuff that annoyed me was rarely the language syntax. I actually much prefer the constructor chaining syntax in Delphi vs C# (we won't talk about anonymous methods or nullable value types or the clunky way that interfaces work). Maybe I just have a soft spot for the earlier versions, which were light years ahead of their contemporaries.
For the record, my pet peeve is JavaScript. If only it wasn't the lingua franca of the web. The fact that we need TypeScript to make it tolerable speaks volumes.
> Do you know an insurer who would insure a human driver for speeding fines ?
I think this is why I'm having such trouble following the line of reasoning. Insurers have never covered you for breaking the law. If you are driving an unregistered vehicle and have an accident, your insurer won't pay out. Same if you are driving at an unsafe speed for the conditions or under the influence of a substance (prescribed or otherwise). They are not about to start now.
They will insure you against fire, theft, damage caused by another party, etc. At most, they may agree to pay out and charge it back to Ford/Toyota/BMW/whoever. The manufacturers themselves may carry public liability insurance specifically to handle Takata-scale recalls, but carrying the can for this isn't something that retail insurance would want a bar of.
> The new Australian subs are rumoured to be even quieter than the swedes
Do you mean the ones that are still pretty much on the drawing board, where doubts have been publicly raised/leaked to the press about the feasibility of the refit (the design it's based on is a nuke, and in spite of Australia being incredibly well endowed with uranium, we seem incapable of countenancing anything more radioactive than a banana)? The ones getting nailed together in South Australia because there's a bunch of seats that would swing to NXT if anyone dared buy something off the shelf (not saying they should be built elsewhere, but it shouldn't be to prop up a local candidate because your party is on the nose)? Oh, and apparently we can't keep our military secrets secret either.
> I don't see how that's possible, given that a low-level format erases the disk geometry and then writes 0 on every single magnetic bit on the platter.
We should first discuss what storage medium this is, because an HDD is very different to an SSD. Let's first consider the HDD. It encodes bits of information on a (bunch of) circular platters divided up into tracks and sectors. Due to the dark science of geometry, we know that these cannot be adjacent perfect circles. It isn't exactly 1 atom per bit; atoms are so small [citation needed] that it is closer to 100,000 atoms for each bit stored on a typical drive. That sounds like a lot, but remember that the disk is spinning somewhere between 75 and 200 times per second and the head is "floating" a handful of nanometres off the surface. That is two orders of magnitude smaller than visible light wavelengths. The fact that these devices work at all is a testament to some pretty incredible engineering. My point is that the magnetic field gets converted to a current by the read/write head. If there's more than X amps (milli or micro, can't remember which) then it is considered a 1; if less than Y, it is considered a 0. At the magnetic field level you can tell the difference between overwriting a 0 with another 0 vs overwriting a 1 with a 0, and this can be (ab)used by data recovery specialists to get a good idea of what was there before the most recent write.
That is why secure erasure schemes make multiple passes with a mixture of specially selected patterns and random writes. It is also why I suggested that an encrypted drive may be even better. Of course the devil is in the detail, but assuming the encryption implementation is good, we can be comforted that the data is effectively random without the key. It doesn't matter if they can reconstruct the encrypted blocks, because without the key they cannot convert them back to your data. By securely erasing the volume derivation blocks that convert your password to the volume key (which are maybe only a few KB in size), there is simply no way back to the data, even knowing your password (which is why these tools have a volume header backup facility).
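A toy sketch of the multi-pass idea in Python (file-level only; a real secure-erase tool works at the block-device level and must contend with filesystem journaling, remapped sectors, and drive caches, so treat this purely as an illustration of the pattern-then-random approach):

```python
import os
import secrets

def multipass_overwrite(path, passes=(b"\x00", b"\xff", None)):
    """Overwrite a file in place: fixed patterns first, then random bytes (None)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            # None means "fill this pass with random data"
            data = secrets.token_bytes(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push past the OS cache towards the device

# usage sketch:
# multipass_overwrite("/tmp/secret.bin")
```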
With SSDs, there is an interesting new dimension in wear levelling. I'm not 100% sure whether you can infer a prior value from an SSD cell, but there is the problem that an overwrite doesn't necessarily overwrite: the wear-levelling algorithm may decide to redirect the write somewhere different, which can mean the original data is still there if you know where to look.
> The write is unbuffered. This eliminates the possibility that some bits might retain a value of 1, and might yield some recoverable fragments of information.
That is not the purpose of the write buffer. The write buffer is just a piece of memory (sometimes battery-backed on servers) where writes can queue up if the disk is busy at that moment. Most of the slowness of an HDD is in seek time: moving the head to the right track and then waiting for the platter to spin to the right sector. If you can write things as you pass over that track and sector, or write things in nearby places in sequence, your performance will improve. But either way, unless you rip out the power cord, the OS will ask the HDD controller to flush its cache frequently, and definitely before dismounting.
> 1000MPH is a ridiculous arbitrary number. If this were ancient Egypt, we’d claim an arbitrary number of cubits, elsewhere leagues, in civilization kilometers, etc... 1000MPH is of no particular scientific or engineering significance.
You must be fun at parties.
I'll grant you that many cultures throughout history would have had no concept of how fast 1000MPH is, but it is easy enough to convert it to the globally and timelessly understood 0.0149% of the maximum velocity of a sheep in a vacuum.
> You paid for the license to play that video. Any license has terms and conditions since, legally, the producers and/or publishers still hold final call (the copyright) over your material.
Ok cool. So what you're saying is that if my kids break a DVD by leaving it lying about, I can get replacement media for my license at a nominal rate to cover the physical media and postage? Same for moving formats between VHS/DVD/Blu-ray (not remastering, just a transfer at the same quality)? Where do I sign?
/Rant over. Apologies, but I can't stand them arguing both sides of the street here. They claim it is a product when it suits them (repurchase when you break it or want it on Blu-ray) but a license when it suits them (transcode restrictions/geoblocking).
> Do you have a technical answer to social engineering tactics and techniques? If you have, you can become very rich, selling how to protect from any fraud.
As a matter of fact, I do. I have compiled the 10 easy steps to avoiding social engineering into the attached document below:
[^10StepsToBetterSecurity.pdf.exe]
@rh587, fixing password reuse between services does not require 2FA to solve. It just requires users to think through the consequences of one of their services being hacked* and thereby leaking the credentials they reuse on an otherwise unexposed service.
What I see as really problematic is the number of apps that think it acceptable to hold "can read SMS" permissions, presumably on the premise of their internal logic hooking up a specific app install to the phone number of the claimed account. All the 2FA messages from my bank originate from the same number, so a less-than-ethical app can monitor my messages for a token code, then trigger a fake sign-in prompt on the device to get the credentials, giving them a 5-10 minute window to strike**. That is why I never do banking from a mobile device: it is my second factor, and the same thing cannot be both without at least partially*** compromising your security. Also remember that we are only two years past the Android lock screen bug that let someone bypass the lock screen on Lollipop. Imagine what that does for pretty much any banking app's security...
*Or more probably leaving a backup file on a public facing webserver or using mongodbs "terrific" no security by default config.
**Hint, it isn't rocket science to inform the user that the service is down, try again later.
***In spite of these flaws, it is a marked improvement over SFA. (Pun intended)
> There is only a 3.125% chance of tossing "heads" five times with a fair coin so the balance of probability is that the coin is loaded or some other trickery is at work.
No. With more than 32 people having commented on this article so far, I would expect at least one of them to have tossed 5 consecutive heads on a fair coin.
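The arithmetic, sketched in Python:

```python
# P(5 consecutive heads in 5 tosses of a fair coin)
p_one = 0.5 ** 5          # 0.03125, i.e. the 3.125% in the quote
# P(at least one of 32 independent commentards managed it)
p_any = 1 - (1 - p_one) ** 32
print(f"{p_any:.1%}")     # roughly 64%, so better than even odds
```

So far from suggesting a loaded coin, one run of 5 heads in a crowd this size is the expected outcome.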
> Pretty much anything can be decrypted given enough time and resources.
I've got a million bucks for you if you can prove that...
Unless you are accepting solutions that require more energy than we have at our theoretical disposal and in timeframes that exceed the life of our species by a couple of billion years.
And in the case of a one time pad generated from a truly random source (i.e. a QRNG or measurements of radioactive decay, not a classical PRNG), time will not help you. It can't; there simply isn't enough information in the ciphertext to learn anything about the key.
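The one-time-pad property is easy to see in a sketch (Python; `secrets` here is a stand-in for the true-random source described above, which a deterministic computer cannot actually provide):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "the pad must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # stand-in; a real OTP needs true randomness
ct = otp_encrypt(msg, key)

# XOR is its own inverse, so the same function decrypts
assert otp_encrypt(ct, key) == msg

# Any key of the right length "decrypts" the ciphertext to *some* message of
# that length, so the ciphertext alone tells you nothing about which was sent.
other_key = secrets.token_bytes(len(msg))
print(otp_encrypt(ct, other_key))  # plausible-looking garbage
```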
With respect, some of those arguments don't really hold water. For a start, it is not comparable to relying upon a WAF to avoid worrying about input sanitisation. CSPs are effective to the extent that:
1. The website has implemented it allowing only what is needed.
2. The browser reacts correctly to the directive
3. The site is designed in such a way that 1 can restrict enough of the things that miscreants might exploit.
It is only after 2 occurs that you can possibly receive an error report. Or, looking at it from another angle, if the CSP didn't "save us", then neither could the owner "be informed" via the CSP rule. It is possible that my safety is improved because another user submitted a report from their browser where mine didn't react correctly, which is a point I made from the opposite angle: it is not their responsibility to protect me from my browser choice.
Do I have a specific exploit in mind? No, miscreants are a lot more creative than me, but let's don my evil Adam1 hat and give it a go. A user may have the crazy notion that executing unverified code from a site they have no prior knowledge about is a bad idea, so they may have scripts disabled, either in the browser settings or via NoScript etc. The site owner could still track them by generating a fake rollover image at GUID.NewGUID().com and reconciling through the backend what I scrolled to, etc. I imagine something similar could be done to regenerate deleted cookies based on a fake URI generated from a browser fingerprint.
That said, I don't have a fundamental problem with uBO giving users the option to whitelist specific report URIs, or even to whitelist all same-origin report URIs. It is problematic to assert in general that your service is fine because of your claimed privacy policy. That may be true (and FWIW I believe it to be true), but it is a point-in-time guarantee. There are plenty of examples of websites that were at one point highly trusted but over the years were sold to companies who sold to others, and so on, and today have quite ethically dubious practices. Look at plugins like Adblock Plus or WOT, which either changed how they operated or were less than upfront about it.
I would actually prefer that a CSP violation be treated like a broken cert rather than as silent telemetry: the browser does not render the page but instead shows the message "Warning: this website attempted to download a resource in violation of its content security policy", with buttons like "Get me out of here" and "Add exception", and a "report error" checkbox. Maybe we'll get there in a few years once the CSP story improves across the board. You may argue warning fatigue here. That is certainly something to consider, but to my mind, if your site is running a script or downloading a resource that you, the website author, didn't expect, there are larger problems.
You are mixing up CSP with the optional report.
CSP itself ensures that resources can only be delivered from the places the website author intended. It is a directive to your browser. When your browser encounters a request in violation of this policy, your browser will block it*. Where the report URI is defined, your browser will inform the website about which policies were violated.
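For the curious, the report the browser POSTs to that URI looks roughly like this (the URIs are made up; the exact set of fields varies a little between browsers):

```json
{
  "csp-report": {
    "document-uri": "https://example.com/page",
    "violated-directive": "script-src 'self'",
    "blocked-uri": "https://evil.example/track.js",
    "original-policy": "script-src 'self'; report-uri /csp-report"
  }
}
```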
This may alert the website if they are being attacked via their ad network or if their CSP is blocking needed resources. The CSP is almost certainly protecting you. The report might help other users.
*Just like all web things, YMMV with different browsers. IE and Safari have notably lower support for the standard than Firefox or Chrome.
... and don't type that very often. They are some of the brightest minds in info sec.
CSP, for the uninitiated amongst you, allows you to specify the domains that are permitted to serve what types of content to your pages. So I can say that the only domains that may deliver inline scripts are xyz, and the only ones that can deliver media are cloudflare etc.
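To make that concrete, such a policy might look like the following (one HTTP header, wrapped here for readability; the domains are illustrative, not a recommendation):

```
Content-Security-Policy: default-src 'self';
    script-src 'self' https://scripts.xyz.example;
    media-src https://cdnjs.cloudflare.com;
    report-uri /csp-violations
```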
When the browser renders the page and is asked to fetch resources that violate the policy, less sucky browsers will refuse to load them. Basically, if your browser is submitting a report, it has already protected you. It could be that the site owner has misconfigured the CSP, or that some MitM is modifying the HTTP pages, or that some advertising network is trying to fingerprint the visitor, but either way the browser has correctly blocked it. The reporting allows the browser to submit details of the violation, and the complaint is that this report is blocked. The only people to benefit from the report are the site owner (if misconfigured) or old IE/Safari users whose CSP isn't processed, or isn't processed correctly. Why should my privacy be decreased because they chose a browser with less support for a security feature?
> What makes you think HTTPS is end-to-end?
What, you're telling me that with a Cloudflare certificate the traffic might be encrypted to the caching/DDoS mitigation layer but flow as clear text between the origin and Cloudflare, permitting you to activate HTTPS on your website without actually changing your server configuration, just by checking a few checkboxes? Shirley not. It isn't like commentards were giving them a hard time every time a new security story came up...