Yup
No nation state would ever take advantage of a flaw that took days. Ultra slow speed to lose the tiny packets in the mass of other traffic. Makes no sense at all.
Computer security researchers have devised a way to exploit the speculative-execution design flaws in modern processor chips over a network connection – a possibility that sounds rather more serious but may be something less than that. Until now, Spectre attacks have required malicious code to be running on a vulnerable …
"No nation state would ever take advantage of a flaw that took days. Ultra slow speed to lose the tiny packets in the mass of other traffic. Makes no sense at all."
The same thing could have been said about Stuxnet - before awareness of the damage at the Natanz nuclear facility (Iran, 2010; Stuxnet had been in development since at least 2005).
https://en.wikipedia.org/wiki/Zero_Days
e.g. the NSA, with a little help from some neural-network AI, could work around that "slow speed/tiny packets" limitation.
from Ars Technica: "These data rates are far too slow to extract any significant amount of data; even the fastest side channel (AVX2 over the local network) would take about 15 years to read 1MB of data."
Indeed, but: recovering just one simple but crucial password is sufficient for penetration, at least in theory - e.g. 1KB instead of 1MB, or 1:1000. A well-trained AI could reconstruct it from bits of slurped garbage much faster and more successfully than one might imagine.
"The AVX2 side channel is much faster—one byte every eight minutes—but still very slow."
hmm... 1B@8' == 1000B@8000' = 1KB@5d-13h-20min ==> worth trying for the right thing, imho.
Are you really unable to pick up on the sarcasm in their post ?
Poe's Law. Sarcasm needs to be a hell of a lot clearer than in the original post - if the original was even meant sarcastically, it's not at all clear to me that it was.
Write for your audience if you want to be understood.
I am quite sure I don't understand all of this, but perhaps someone could fill me in. A Spectre gadget is not particularly well defined in the article - or at least I was a bit thrown off. It isn't one of the gadgets in the "billions of computers, gadgets, and gizmos at some degree of risk". Does it amount to any code in any remote API that can be abused to exfiltrate data using this method? If so, I would think that identifying such gadgets might be accomplished by defining the normal, expected calls on each API and monitoring for any that fall outside that set - essentially what most whitelisting apps do during tuning. Easier said than done, I am sure, but perhaps a way to catch things that code review might miss.
Does [a Spectre Gadget] amount to any code in any remote API that can be abused to exfiltrate data using this method?
Yes, that is my understanding.
If so, I would think that identifying them might be accomplished by defining normal, expected calls on each API and monitoring for any that fall outside that set, ...
Unfortunately, that monitoring may itself be a Spectre Gadget.
Technically, a gadget is a vulnerable code pattern. "Gadget" is a common term of art in malware research; offhand, my impression is that it was popularized by discussions of ROP, but I may be misremembering.
The paper discusses some of the difficulties in identifying known gadgets, and much of the Spectre research, and other research into microarchitecture side channels, has focused on identifying new types of gadgets. It's tricky to detect unknown patterns.
That said, it's possible application behavior monitoring and analysis could be a useful mitigation for some microarchitecture side-channel attacks. It wouldn't catch all of them, but it could contribute to defense in depth. Essentially it's what most contemporary anti-malware products do already, just for a different class of suspect patterns.
... an article on the subject on Ars Technica.
After all, until the exploit has a flashy logo and associated website it doesn't really exist - coming up with something 'cooler' sounding than NetSpectre may or may not happen.
That certainly seems to be standard operating procedure in the current shitegeist.
It's a conspiracy, folks! Planned obsolescence!
We already know CPUs are reaching the end of Moore's Law. This will ultimately lead to a decline in sales when all you get is incremental performance increases - it has likely already begun, if you believe some of the YoY sales figures. Intel (and others) know these issues exist and let them trickle out to guarantee sufficient press coverage to scare the $hit out of everyone. Future Intel comes to the rescue to save humanity by announcing a new line of hardened processors; future OS distributions require new hardened processors. Profits soar! Everyone wins! Well, except for the consumers and businesses that are forced to upgrade all their computers, servers, networking gear, and anything else with a processor.
Given that this is a sidechannel attack on network response via a SPECTRE gadget, the logical defense is to make all network application responses constant-time. So pick the longest possible response time, and force all the network responses to wait that long.
Or just add some random jitter (perhaps on the order of half the difference between a speculative and a non-speculative lookup) to the response time in the network packet driver - you increase the average network response time by some small figure, but you destroy the network side channel.
Any resolution to SPECTRE class sidechannels means impacting performance - the only question is the cost to do so, and whether that cost is acceptable.