"...nobody's explained why..."
Maybe some people were using these subtle vulnerabilities, and they needed time to tidy up after themselves.
/tinfoil_hat ON
Oh, just a second. There's a pounding on the front door. BRB...
Microsoft has implemented Intel's advice to reverse the chipmaker's Spectre variant 2 microcode patches. Redmond issued a rare weekend out-of-cycle advisory on Saturday here, to make the unwind possible. Intel's first patch was so bad, it made many computers less stable, sending Linux kernel supremo Linus Torvalds into a …
It was inevitable that the disclosure of these h/w bugs would lead to legal action and a drop in share prices and profits, but placing an embargo on the disclosure has meant an extra six months of share prices and profits at a higher level than would have been achieved if disclosure had occurred earlier.
What seems most surprising to me is that, given they had six months in which to mitigate against these foreseeable consequences, they've still managed to make such a bad job of it.
I know why...
Intel CEO Brian Krzanich sold off a large chunk of his stake in the company after the chipmaker was made aware of serious security flaws, according to multiple reports
An SEC filing last November showed Krzanich sold off about 644,000 shares by exercising his options and another roughly 245,700 shares he already owned
That reduced Krzanich's total number of shares to 250,000, which is the bare minimum that an Intel CEO should own, according to The Motley Fool
Courtesy of searching for "intel director share sale". That last point tells you all you need to know - bare minimum holding for a CEO says "fuck this company"
"Never chalk up to malice that which can be adequately explained by Intel's silicon production, test and support"
Seriously. Worked there. They are shit.
AFAICT Intel's product development methodology is throwing lots of products into the marketplace and seeing which ones work.
Sort of got away with it when it was individuals buying computers, and various MS bugs covered the shit up.
Now, Intel has a small number of very large customers - Google, Amazon. They are going to get reamed.
" ...has compromised all PC security and the problem gets worse by the day. Both should be prosecuted for gross negligence and defective products"
Angry as so many of us are, I'm not convinced that prosecution is the appropriate action—but I'll probably change my mind if lawsuits, presumably major class actions, do not succeed, because Intel and others must be suitably punished with damages. In a case where the product is unarguably and seriously defective, and in a way which incurs major risk to customers down the line, the damages should hurt. Reputation aside, a massive financial hit is the only language big corporations actually understand.
Where the money goes is another question. It would be excruciatingly difficult to determine payouts to individuals unless they could show consequential harm, but perhaps most customers would be open to damages being paid into a fund, charity or body which has computing security as its mission? I am making a wild guess here that a few hundred million dollars could buy some serious thinking, analysis, planning, testing and standards-setting that would be good for the industry and its users. Who knows, it could fund some coding courses which return to quaint, old-fashioned skills like writing lean, efficient code that doesn't have to load 700Mb of libraries and require 16Gb of RAM plus multiple cores just to offer a welcome screen ...?
"Who knows, it could fund some coding courses which return to quaint, old-fashioned skills like writing lean, efficient code that doesn't have to load 700Mb of libraries and require 16Gb of RAM plus multiple cores just to offer a welcome screen ...?"
Those days are gone, it's all about rapid deployment, being agile, continuous deployment, devops style. While that is the aim, lean efficient code isn't really possible as it would require a lower level language to be used, with the time and understanding needed to write it.
"While that is the aim, lean efficient code isn't really possible as it would require a lower level language to be used, with the time and understanding needed to write it."
Translation: Most fresh out of college code monkeys - or the 3rd world outsourced alternatives - just aren't up to it and can only cope with hand-holding scripting languages where someone has done most of the hard work for them.
"Translation: Most fresh out of college code monkeys - or the 3rd world outsourced alternatives - just aren't up to it and can only cope with hand-holding scripting languages where someone has done most of the hard work for them."
If you insist on doing everything from scratch and not using existing libraries and frameworks and not using a widely used (and easily understood) language then you're probably wasting your company's money.
There is a time and a place for lean efficient code.. and a time and a place for high level bolted together solutions. If you don't know the difference you are part of the problem.
"If you insist on doing everything from scratch and not using existing libraries and frameworks and not using a widely used (and easily understood) language then you're probably wasting your company's money."
There's a difference between using libraries for tasks that would take ages to write by yourself - eg machine learning - and using libraries to do basic tasks such as, for example, substring counting, which any competent coder should be able to write in less time than it takes to find some library to do it.
Also knowing some computer science helps even if you don't use it much. But when it is required it's often pretty crucial. Eg when would you use a quicksort, shell sort or cycle sort? Just calling your library's "sort()" function isn't always the best option.
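To illustrate the substring-counting point (a hypothetical sketch, not code from anyone in this thread): a non-overlapping substring counter is only a few lines by hand, so hunting for a third-party package takes longer than just writing it.

```python
# Hypothetical sketch: counting non-overlapping occurrences of a
# substring is short enough that finding a library for it would take
# longer than writing it.
def count_substring(text: str, needle: str) -> int:
    """Count non-overlapping occurrences of needle in text."""
    if not needle:
        return 0
    count = 0
    i = text.find(needle)
    while i != -1:
        count += 1
        # Resume searching after the end of this match (non-overlapping).
        i = text.find(needle, i + len(needle))
    return count

print(count_substring("banana", "an"))  # → 2
```

(Python's built-in `str.count` does exactly this, which rather proves the point either way.)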
"There is a time and a place for lean efficient code.. and a time and a place for high level bolted together solutions. If you don't know the difference you are part of the problem."
There is also a time to know when it's better to spend more time on initial development to save time and money in the future, and be prepared to tell the boss why. Someone who only knows how to write lego-brick style code won't be able to do that. Nor will they know what to do if the library fails in some unexpected way.
I disagree, even if you remove that from npm (I saw you there..)
Using libraries is most times the best solution. Reinventing the wheel is not only problematic.. it needs plenty of work, both to build and to maintain said wheel. Also, it might be a bad quality one.
Way better to use a good quality library.
"Premature optimization is the root of all evil." -- Sir Tony Hoare
Calling the library's sort() function may not *always* be the best option, but it's usually the place to start. Among other things it's more likely to have odd corner cases covered than something written based on vague memories from a freshman C++ class. ;)
One of the things I've learned, in this era of optimizing compilers, is that trying to be clever about things often makes the code slower instead of faster. This is true even in high-level languages like JavaScript. For example, trying to find a clever way to concatenate a bunch of strings often ends up slower than just looping with the concatenation operator, because that case is optimized in most JavaScript interpreters.
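The same effect shows up in Python (a rough analogue of the JavaScript point above, not a claim about any particular interpreter): the boring, idiomatic `str.join` is the case the runtime optimizes, while "clever" manual string building does more work per step.

```python
# Rough Python analogue of the point above: compare a manual
# concatenation loop against the idiomatic "".join(), which is the
# optimized common case in CPython.
import timeit

parts = [str(n) for n in range(1000)]

def with_loop() -> str:
    s = ""
    for p in parts:
        s += p             # repeated concatenation, one piece at a time
    return s

def with_join() -> str:
    return "".join(parts)  # the idiomatic fast path

assert with_loop() == with_join()
print("loop:", timeit.timeit(with_loop, number=1000))
print("join:", timeit.timeit(with_join, number=1000))
```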
@boltar
While I agree with you, the point that I was making is that for the VAST majority of what people are writing these days (At least business applications) speed of execution is not the primary concern.
The main concerns are (In no particular order, call .sort() on this list if you want ;) )
1. Can we implement feature x to give us an advantage over our competitor? Can we do it quickly?
2. Will the code be maintainable without having to spend a fortune (ie can we ship it to India)?
3. When team member y who wrote the system leaves can we hire someone quickly and cheaply who will be able to maintain the system without spending months getting up to speed.
The point that I was making is that for a lot of what we need now there is no need to over-engineer things. A web portal knocked up in ASP.NET MVC or PHP for the sales department might not be as fast or as lean as it COULD be, but that's fine as long as it's FAST *ENOUGH* and it works.
Like I said, there is a time and a place for lean efficient code.. and a time and a place for high level bolted together solutions. Knowing the time and the place for each is key; get it wrong and you end up with either a badly performing system or a ridiculously over-engineered system.
I think he's referring to people who can't even manage to left-pad their own strings.
These are the people dragging the internet to its knees, when a single page has to call code and other resources from 30 different domains just to display the page. The sort of code which should be in the forefront of the programmer's mind and take less time to write than the link to the external source.
"I think he's referring to people who can't even manage to left-pad their own strings."
Yes, that is ridiculous.
I suspect much of it is laziness. I mean, if you have enough understanding to work out what a function called left pad would do, you can write it yourself.. then again, if someone has already done it once, why do it again? (I completely agree with how stupid it is pulling that many resources from other domains.)
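For the record (a minimal sketch, nothing to do with the npm package itself), left-padding really is a one-liner, and Python even ships it as `str.rjust`:

```python
# Minimal sketch of a hand-rolled left pad; Python's str.rjust does the
# same job, so neither a package nor much thought is required.
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad s on the left with fill characters until it is width long."""
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s

print(left_pad("7", 3, "0"))  # → 007
print("7".rjust(3, "0"))      # → 007 (standard library equivalent)
```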
"load 700Mb of libraries and require 16Gb of RAM plus multiple cores just to offer a welcome screen"
You have that around the wrong way, the welcome screens are to distract you from the lengthy amount of time taken to load the 700MB of libraries, initialise them, and otherwise fill that 16 GB of RAM with useless bloat.
"They should properly test their patches before they deploy them en masse"
Bit difficult when you've got the self-appointed owners of the internet threatening to air your dirty laundry based on their own timescales...
"Both should be prosecuted for gross negligence ..."
It is demonstrably not gross negligence and I would expect such claims to be tossed out of court on day one.
The technique of speculative execution has been widespread throughout the industry for twenty years. There were a few people in academia asking whether cache lines could be used as a side-channel. I think there was at least one of those 2 or 3 years ago, but since it came to nothing then I think we can conclude that it wasn't *obvious* that the answer was "yes".
For negligence, you need to have a situation where a knowledgeable person would, if aware of the action, think that it was careless or unwise. We had an entire industry full of such people for 20 years, well aware of what was being done, and the most damning criticism that any of them came up with was "This looks like a possible weakness but despite my best efforts I can't actually exploit it.".
Then, finally, six months ago, someone managed, and Intel's response was to start working on a solution whilst trying to keep the problem away from Black Hats to protect customers.
Yeah, *so* negligent ... not.
"It is demonstrably not gross negligence and I would expect such claims to be tossed out of court on day one."
*Ahem*. What about continuing to sell new models with the issue still in place once you know about it? They need to be sued into oblivion because they simply don't give a shit.
"For negligence, you need to have a situation where a knowledgeable person would, if aware of the action, think that it was careless or unwise."
See the above point about continuing to issue new models with the design fault in place. Any argument about "it takes time to redesign, test, and fabricate chips without the issue" should be met with "tough shit, that's a 'your problem' not a 'my problem'". There are very few industries where you can continue to knowingly sell defective goods. "Not fit for purpose" seems to spring instantly to mind in terms of consumer protection.
Come, come. This must be the third report where the editor has blatantly gone "IT WAS US! IT WAS US!" like an excited child who has fried his first ant with a magnifying glass.
Ok, we get the idea. El Reg got a biggie. Cheers, well done and all that. Now, time to move on.
The other way to spin it is that The Register risked everyone's security by not practicing responsible disclosure and waiting for the vendors to get their patches in order - which Google Project Zero, not known for giving vendors extra time, were doing. Cue massive scramble and release of patches with problematic side-effects.
I think the editorial staff here need to take a good look at themselves.
"The other way to spin it is that The Register risked everyone's security by not practicing responsible disclosure and waiting for the vendors to get their patches in order"
Ok. How does that square with Intel saying that they were going to tell the world on January the 9th?
The Intel dude says "We use speculative because the customer demands speed at the cost of security."
The software dudes say "We use C because the customer demands speed at the cost of security."
I'm seeing a pattern here.
The Intel guy was so clearly constrained that he offered nothing, and his arguments against open hardware were weak - hardly a surprise. Bring on open hardware.
I'm not aware of evidence that the customers demand speed at the expense of security, but I suppose it may be true.
If the marketplace starts to offer chips that are Spectre-proof and chips that aren't, we'll see. As far as I'm aware, the latter aren't yet available. (And yes, ARM fans, I *am* going to restrict my argument to x64 chips because I'm not aware of an easy way to run all my closed-source x64 Windows apps on your ARM chips and I'm not inclined to take a few years off using a computer whilst the entire software industry pulls its finger out and re-writes everything, for free.)
"The software dudes say "We use C because the customer demands speed at the cost of security.""
You might want to check out what all common OS kernels and tools are written in, and most scripting language interpreters. If you're using a computer, tablet or smartphone you're using something written in C, and it's not just down to speed - it's also because it's compiled to actual machine code and hence binaries can be standalone, the ability to integrate assembler, the almost direct mapping of a lot of C keywords to CPU opcodes, and the small memory footprint. Much as the kool kids might wish it, C and C++ aren't going anywhere anytime soon.
I could be wrong and perhaps I'm getting old but the problems here seem to be
* A staggering number of x86 devices. Not just a few each for mobile, desktop and enterprise. If there were fewer they could spend more time on quality
* A rush to patch things without testing properly. In Intel's case, the number of microcode variants required was not helped by the above. In others' cases, sometimes stupidity
* No clear documentation for patches for the end user. I'd like to click on JUST ONE LINK to be told what exactly is in an update/patch
"No clear documentation for patches for the end user. I'd like to click on JUST ONE LINK to be told what exactly is in an update/patch"
Oddly enough, Linux seems to be ahead of the game here. Windows PCs appear to have the constraint that they can't update microcode without permission from the BIOS, which requires the involvement of the BIOS vendor, who is reached through the OEM, and ... FFS! (Ordinary punter loses will to live and never does any of these things. Film at eleven.)
Whereas ... it appears to be the case that Linux systems will pick up a microcode update through their normal automatic updates mechanism and feed it to the CPU at the next reboot.
We had the same ridiculous dependencies for the IME bugs. Perhaps one good thing that might come out of this is that heads will be knocked together so that the OS vendor can do it by themselves. Otherwise, this is getting a bit like Android, with "BIOS vendors" playing the obstructive role of "phone vendors" or "carriers".
I'll be blunt here, since I rather agree with you Ken;
6 goddamn months. Intel should be dragged through broken glass, smouldering coals, salted icewater and back a dozen times for *NOT* having fully tested, validated and working fixes in hand when this broke. They could have had patches for compilers, firmware, and BIOSes ready in that time.
The original Microsoft update from 3rd January did not contain any microcode; it was dependent on microcode also being updated. The server OS additionally had the update disabled by default; sysadmins had to enable it via a registry key... the same registry key contained in Saturday's out-of-band update that disables the protection. If you hadn't applied the Intel/OEM HW patch, the Spectre patch was not active anyway. Nothing much to see here, especially for server admins.