Woo-yay, Meltdown CPU fixes are here. Now, Spectre flaws will haunt tech industry for years

Intel has borne the brunt of the damage from the revelation of two novel attack techniques, dubbed Meltdown and Spectre, that affect the majority of modern CPUs in various ways. The chipmaker's stock price is down, and it's being eyed for possible securities litigation, following reports CEO Brian Krzanich sold the bulk of his …

  1. Anonymous South African Coward Bronze badge

    heap big whizzkids talking heap big words

    and intel still denying everything.

    1. DropBear
      Devil

      I suppose it's a bit like "look, there are no problems with our fence! It's tall, sturdy and everything! It has no holes!" / "True, but I can dig a bit and pass right through under it, isn't it?" / "Well... yes, but it is a perfectly good fence...!"

      1. Adam 1

        > It's tall, sturdy and everything! It has no holes!" / "True, but I can dig a bit and pass right through under it, isn't it?"

        Look, it's not a bug. Our fences are acting exactly as designed. And in any case, we believe that these exploits do not have the potential to corrupt, modify or delete data.

        Based on the analysis to date, many types of fences are susceptible to these exploits.

        We are committed to product and customer security and are working closely with many other fence manufacturers to develop an industry-wide approach to resolve this issue promptly and constructively. We are making this statement today because of the current inaccurate media reports.

        We believe that our fences are the most secure in the world.

        1. Anonymous Coward
          Anonymous Coward

          If we're using the fence analogy ...

          ... then it sounds like Intel built an adequate fence to stop you coming in my garden, but you invented a way to use a long tool and take a bit of my garden over to your side of the fence where you can do what you want with it.

          Times change and things are so complicated these days that it's impossible to account for every eventuality. I'd guess that there are more hackers with evil intentions dotted around the globe than there are hands-on engineers working at all the chip designers combined, so I'd say they have done a pretty good job to date.

          There are always going to be times when vulns like this are discovered. It's not so much 'OMG how could they let this happen?' but more about 'How will they stop it once it's out there and handle the cleanup?'

          People are saying things like 'They won't survive this' etc. Of course they will. I agree that most large companies are run by execs and lawyers and I'd normally be on the anti-lawyer side of the fence, but this time I think it's fortunate that they have so much behind them. Plus it affects all the major chip manufacturers so they will all rise and fall together, and everything will bounce back just fine.

          In a few days it will be patched, lessons learned will go into future CPU design, this will fade into the past and we all move on.

          The only things that will annoy me are if any company takes too long to patch it, or uses it as a way to get people to upgrade their hardware.

      2. Captain DaFt
    2. Baldrickk
    3. BillG
      Mushroom

      Was Intel Aware?

      Intel is in denial. It insisted the vulnerabilities identified do not reflect flaws in its chips.

      Having worked with Intel two years ago on one of their new processors, I can tell you that the company is no longer run by engineers, it's run entirely by lawyers. Engineering, public relations, and marketing are all frustrated by the handcuffs Intel's legal department has placed on them.

      Thinking back on my experiences, I'm going to guess - and it's a GUESS - that a select few at Intel knew about Meltdown and Spectre as far back as 2015, and have been bracing for this inevitable exposure.

      I'm a fan - a HUGE fan - of Intel. Despite that, I'm doubtful Intel is going to survive this. My guess is it's going to weaken them so badly that someone is going to buy them out.

      1. Joerg

        Re: Was Intel Aware?

        Yeah sure.. the big evil Intel and the big evil Apple and the big evil Nvidia are all doomed .. because you worked for them all uh? And you know they can't make good products anymore, uh?

        While AMD is the savior of mankind uh?

      2. Anonymous Coward
        Anonymous Coward

        Re: Was Intel Aware?

        "I'm a fan - a HUGE fan - of Intel."

        Are you quiet? And how fast do you spin?

      3. Fluffy Cactus

        Re: Was Intel Aware?

        I don't know how address spaces inside Intel CPUs are accessed, and I don't know how they are protected from any access. With the benefit of wonderful, uninformed ignorance, I am here to help!

        (Cue Andy Kaufman's "Here I come to save the daaaay!!" )

        So anyway, I remember when, in the 1980s and 1990s, certain tricky viruses and worms would use "specific spaces" on a given hard drive, and would take advantage of the then-existing technology of "hard-drive management", which marked certain spots (clusters, specific hard drive memory areas, etc.) as bad and unusable if they could not be read after a certain number of unsuccessful read attempts. So, these viruses would install themselves, and then mark the locations where they were hiding out as "bad and unusable" to the system, while they themselves could still access their nasty programs.

        Given this flashback down memory lane, I am now wondering whether the various memory units (clusters, registers, whatever you call them) inside Intel CPUs have a similar method of denying memory access to the system? That is, an area marked unusable that actually is still good. Effectively invisible, but still accessible to those parts of the system that are informed about the "good bad spots".

        That's how I imagine one could fix this problem, together with another system of pseudo-randomly storing the vulnerable data in different spots.

        Anyone think this is possible, or able to tell why it's impossible?

        1. Michael Wojcik Silver badge

          Re: Was Intel Aware?

          That is, an area marked unusable that actually is still good. Effectively invisible, but still accessible to those parts of the system that are informed about the "good bad spots".

          That's how I imagine one could fix this problem, together with another system of pseudo-randomly storing the vulnerable data in different spots.

          No, that wouldn't help.

          This is really quite complicated, but: The issue with Spectre is that while speculative execution discards incorrect results, it has side effects on system state. When spec-ex loads data from memory into cache, that changes the contents of that cache line and the address it's associated with. An unprivileged process can't read those contents - that's essential to virtual memory in a multiprocessing system with process isolation, like every modern general-purpose OS. But an unprivileged process may be able to figure out something about the address the cache line is associated with.

          How? With a cache timing attack, for one. Cache timing is one of many side-channel attacks against CPU microarchitectures. Basically, you try to read a particular address and see how long it takes for the load to complete. If it's fast, then you know that address was already cached.

          Cache timing leaks information - it lets the attacker find out something about what addresses have recently been cached. That may not seem relevant, but if you're a security engineer, you know that any information leak has the potential to serve as a side channel that reveals some secret information.

          For one Spectre variant, you find a piece of code that does an indirect load based on an address you (the attacker) supply. That is:

          1. You run the code, supplying address X.

          2. The code loads value V1 from X.

          3. The code uses V1 to retrieve some value V2 from another location. There's a set of possible V2 addresses, and they depend on V1.

          For example, consider a bytecode interpreter which does something like:

          result = Functions[*X](...);

          that is, it looks at the byte at X, and uses that to index into an array of function pointers.

          Now: The attacker wants to know what byte is at some address A, but doesn't have read permission for that virtual address. So he passes A to that block of code. The CPU speculatively executes up through the point of retrieving the function address from the array slot. Then the attacker uses cache timing to figure out which function address was loaded. That tells the attacker the value of the byte at A. (The attacker has previously done some setup work.)

          That's greatly oversimplified, but it's the basic idea for that form of Spectre.

          So having unreadable memory (which we already have), or moving sensitive data around (which we already do), doesn't help. It's the speculative load and its effect on the cache that matter.
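          The gadget described above can be sketched as a toy simulation in Python (all names and numbers here are hypothetical illustrations; a real attack times actual loads against real cache lines rather than consulting a simulated cache):

```python
# Toy model of the cache side channel described above. The "cache" is a
# set of line numbers; the speculative indirect load leaves a footprint
# in it that the attacker can probe afterwards.
LINE = 64      # bytes per cache line
STRIDE = 512   # spreads each possible byte value onto its own line

secret_memory = {0x1000: 42}   # byte at address A the attacker can't read

def victim_gadget(cache, addr):
    """Victim code: speculatively loads *addr, then indexes a table with
    the value. The architectural result is discarded, but the cache
    line pulled in by the table access is not."""
    v1 = secret_memory[addr]
    cache.add((v1 * STRIDE) // LINE)

def probe(cache):
    """Attacker: 'time' a load for every possible byte value; the one
    that is fast (already cached) reveals the secret."""
    for guess in range(256):
        if (guess * STRIDE) // LINE in cache:
            return guess
    return None

cache = set()
victim_gadget(cache, 0x1000)
recovered = probe(cache)
print(recovered)  # → 42: the byte leaks without ever being read directly
```

          The simulation collapses the timing measurement into a set-membership test; the point is only that the footprint of the discarded speculative work survives, which is exactly why unreadable or relocated memory doesn't help.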

      4. Anonymous Coward
        Anonymous Coward

        Re: Was Intel Aware?

        That is very interesting, and I would be shocked if at least a few Intel engineers didn't know about this for years. Same for AMD and all other processor designers/manufacturers.

  2. Anonymous Coward
    Anonymous Coward

    My understanding on Android

    From the Google Project Zero blog post that triggered all this, my understanding is that Spectre is mostly mitigated on Android devices due to restricted access to high-precision timers.

    So whilst Pixel devices are fully patched in the Jan 2018 update, and other in-support devices will get these eventually, all other ARM-based Android devices are still pretty well protected by other means.

    Intel-based Android devices, however, may still be subject to Meltdown. Anyone know??

    1. Charles 9

      Re: My understanding on Android

      Most Intel Android devices use Atoms. Atoms are stripped down processors, and some are strictly in-order and immune to both attacks. It depends.

      1. Anonymous Coward
        Anonymous Coward

        Re: My understanding on Android

        Only pre-2013 Atoms are safe

        1. This post has been deleted by its author

    2. Adam 1

      Re: My understanding on Android

      Estimates vary, but our atoms all appear to be from 6-7 billion BC, so I believe we are all good.

  3. NiteDragon

    I love Intel's 'everyone else is a bit rubbish too' defence. I'll try that if I ever fluff up my code security to the extent that my product appears on tech news.

    1. Voland's right hand Silver badge

      That defence does not stand to scrutiny

      AMD is exempt from the mandatory address space separation in the Linux tree as of now. Linus merged the patches.

      Based on the way the code stands (when code speaks, marketing and PR hacks should shut the f*** up), AMD is not vulnerable.

      Ryzen looked good prior to that anyway. It now does not just look good, it looks stunning. My guess is that AMD is about to get a serious inventory issue with not being able to print enough of them.

      1. Doctor Syntax Silver badge

        Re: That defence does not stand to scrutiny

        "My guess is that AMD is about to get a serious inventory issue with not being able to print enough of them."

        Subcontract production to Intel?

        1. Claptrap314 Silver badge

          Re: That defence does not stand to scrutiny

          Not only no, but hell no. I was at AMD for several years. The bad blood between those two is really, really bad. But the reason is not hate; it is intellectual property theft. There is no way for AMD to trust Intel not to steal from its designs. (Again, if the first-hand report I got was correct.)

      2. AdamWill

        Re: That defence does not stand to scrutiny

        On the other hand, http://seclists.org/fulldisclosure/2018/Jan/12 .

  4. MacroRodent
    Headmaster

    Error?

    "This time difference is very small, so by keeping the resolution of the timers that are exposed to JavaScript high enough, we mitigate the ability of the attacker to perform this step."

    Isn't it the other way around? To mitigate the attack, the timer resolution must be LOW enough.

    1. BinkyTheMagicPaperclip Silver badge

      Re: Error?

      No. A high resolution enables you to 'see' more. High resolution=high precision.

      1. DaLo

        Re: Error?

        "No. A high resolution enables you to 'see' more. High resolution=high precision."

        Isn't that the point MacroRodent was making? A high precision timer will not mitigate it as JavaScript will have access to high precision/resolution timing. The mitigation would be to only allow it access to low precision timing.

        1. Julz

          Re: Error?

          Hum, lower resolution timers will just mean that the attack code will have to make more attempts to determine the difference between the two branches; it won't plug the vulnerability.

          1. Michael Wojcik Silver badge

            Re: Error?

            Hum, lower resolution timers will just mean that the attack code will have to make more attempts to determine the difference between the two branches; it won't plug the vulnerability.

            Not necessarily; if the resolution is low enough (the grain large enough) then error can dominate to the point where it's infeasible to ever gather sufficient information. It's pretty hard to determine cache timing with 1-second resolution, for example; you'd have to take an enormous number of measurements.

            More importantly, the mitigations also introduce jitter, so there's additional error.

            Even more importantly, though, timing is not the only side channel in CPU microarchitectures. It's just a matter of time until someone has a PoC using some other channel. It may not be pure software (it might use RFI or something, for example), but we've seen those side channels exploited in practice - for example using an innocuous-looking device placed near the target system.
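            As a toy illustration of the grain argument (the numbers below are made up for the sketch, not measured):

```python
# Two loads ~100 ns apart are trivially distinguishable with a fine
# timer, but collapse to the same reading once timestamps are
# quantised to a coarse grain.
def quantise(t_ns, grain_ns):
    """Round a timestamp down to the timer's resolution."""
    return (t_ns // grain_ns) * grain_ns

cached_load_ns = 1_000_020     # fast: line already in the cache
uncached_load_ns = 1_000_120   # slow: fetched from main memory

# Nanosecond grain: the attacker sees the 100 ns difference.
fine = quantise(uncached_load_ns, 1) - quantise(cached_load_ns, 1)

# 1 ms grain: both readings land on the same tick.
coarse = quantise(uncached_load_ns, 1_000_000) - quantise(cached_load_ns, 1_000_000)

print(fine, coarse)  # → 100 0
```

            With a coarse grain the attacker is reduced to statistics over many repeated runs, and deliberately added jitter inflates the error further.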

        2. bombastic bob Silver badge
          Devil

          Re: Error?

          "The mitigation would be to only allow it access to low precision timing."

          They should all truncate it to millisecond resolution then. Why does javascript need microsecond-level performance timers?

          /me points out that I've profiled code effectively with millisecond-level resolution, MANY times. I'd explain why it works, but would probably get a dozen or so off-topic replies, half of which would contain pejoratives and whining about me using CAPITALIZATION for emphasis. I tried to explain it once on a Microshaft forum when I was profiling early insider versions of Win-10-nic that way, and I don't think they liked what I found, so I got "the flack about my methods" instead of a REAL discussion.

          1. Doctor Syntax Silver badge

            Re: Error?

            "using CAPITALIZATION for emphasis"

            You have alternatives such as bold and italics which are socially acceptable.

            1. davidp231

              Re: Error?

              Agreed - it's rude to shout, which in the context of forums, emails et al, is WHAT THIS IS.

          2. gnasher729 Silver badge

            Re: Error?

            "/me points out that I've profiled code effectively with millisecond-level resolution, MANY times."

            Apple's profiler built into Xcode just checks 1000 times per second or so where the program counter is, and that seems good enough to find bottlenecks in your code. You don't need any resolution at all as long as you can manage to sample at 1000 random points in time every second.

            1. Adam 1

              Re: Error?

              @gnasher, 9 times out of 10 I reckon a 1ms sampling profiler points you at the guilty target, but it does depend on the problem domain. One good gotcha if you're doing this on Windows is that by default your clock has a resolution of around 10-20ms unless you use the high-resolution counters. This caught me out more recently than I care to admit. Each call to my method was in the order of 0.3ms, which almost always appeared as 0ms. Once I had profiled with the high-performance counters, it became easy to recognise where the time was being spent, so I could prioritise both speeding up that method and seeing if there were opportunities to call it less frequently.
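              That gotcha quantifies nicely as a sketch (tick and call durations below are illustrative assumptions, not Windows measurements):

```python
# Timing a ~0.3 ms call with a clock that only advances every ~15.6 ms
# reads 0 almost every time, because start and end usually fall within
# the same tick.
TICK_US = 15_600   # assumed coarse timer tick, in microseconds
CALL_US = 300      # assumed true cost of the method under test

def read_clock(now_us, tick_us):
    """A coarse clock only advances in whole ticks."""
    return (now_us // tick_us) * tick_us

# Measure the call starting at many different offsets within one tick:
measured = [
    read_clock(start + CALL_US, TICK_US) - read_clock(start, TICK_US)
    for start in range(0, TICK_US, 100)
]

zeros = measured.count(0)
print(zeros, len(measured))  # the vast majority of measurements read 0
```

              Only the few calls that happen to straddle a tick boundary register at all, and those overstate the cost by a full tick.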

          3. PaulFrederick

            Re: Error?

            Microseconds? That is three orders of magnitude slower than processor clock speeds, which are measured in nanoseconds. Millionths, billionths, you know? All of this sounds a lot like spitballing to me. Sure, in theory there is a flaw, but practically exploiting it is another matter entirely. How does one know if they've just gotten a password, or random garbage? Without context and structure, data is pretty worthless. Meltdown and Spectre strike me more as slow-news-cycle hype than anything else right now.

            1. Michael Wojcik Silver badge

              Re: Error?

              How does one know if they've just gotten a password, or random garbage?

              Perhaps you should look at the published proofs of concept, which do indeed show successful exploit of both Meltdown and Spectre.

              I know, I know. Reading is hard, while posting rubbish is easy.

    2. Adam 1

      Re: Error?

      @macrorodent, correct and probably (almost certainly) what was meant, but it is a quote so they can't change it without chatting to Wagner again.

      @julz, a lower resolution timer on its own won't mitigate, true, but in the case of Edge they are also adding jitter

  5. Anonymous Coward
    Anonymous Coward

    'Intel is in denial'

    Intel knew all about this at the very top (the CEO's share sale confirms it). The ones who don't know much are the Intel PR asses writing the blurbs. This all ties in with the other big scandal below. I bet the NSA had a hand in keeping this hushed up for so long. But then lots of independent civilians started to notice:

    ===================

    'Trusted Computing' Model 2.0...

    "....."The design choice of putting a secretive, unmodifiable management chip in every computer was terrible, and leaving their customers exposed to these risks without an opt-out is an act of extreme irresponsibility," (EFF)..."

    http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

    ===================

    https://www.theregister.co.uk/2017/11/09/chipzilla_come_closer_closer_listen_dump_ime/

    1. Detective Emil
      Black Helicopters

      Re: 'Intel is in denial'

      Here's the icon your post needs, but which, understandably posting as A.C., you were unable to use.

    2. phuzz Silver badge

      Re: 'Intel is in denial'

      It's not conspiracy theorising to say that Intel knew about this. It's very widely reported that they (and AMD and ARM) were first told about the problems in June of 2017.

      1. Eddy Ito
        Big Brother

        Re: 'Intel is in denial'

        Right, the conspiracy theory would be that they have been working with the NSA since June to create a new backdoor for the next generation of chips now that this one has been found.

  6. Anonymous Coward
    Anonymous Coward

    Insider trading

    This looks really bad for him. Since he obviously knew about this issue several months before he filed to sell the shares, there are only two possibilities:

    1) he deliberately sold because he knew the stock would fall, though I'm not sure how he thought he'd get away with it

    2) he had some use for that money planned (building a big yacht or mansion or planning a large donation) but if that was the case he should have filed for the sale but held off selling until after the announcement just to remove all suspicion

    I'd say odds are better than even he won't be Intel's CEO by this time next year.

    1. DaLo

      Re: Insider trading

      What's great about the Motley Fool's write-up is that the suspicion was raised at the time of the sale (i.e. not in hindsight, based upon what we know now). So the author already suspected that his selling off 100% of the shares he was allowed to sell was highly suspicious and indicative that he might know something about the stock price.

      It's rare to see an analysis of wrongdoing before the alleged wrongdoing is actually known.

    2. Anonymous Coward
      Anonymous Coward

      Re: Insider trading

      That is a vile slur upon my integrity as Intel CEO.

      The share sale was nothing to do with upcoming exposure of serious flaws in our CPUs. I desperately needed the money because I am being blackmailed over my addictions to gambling, cocaine and whores.

      1. Anonymous Coward
        Anonymous Coward

        Re: Insider trading

        "I desperately needed the money because I am being blackmailed over my addictions to gambling, cocaine and whores."

        May need it now for security guards as some important people may be very, very seriously peed off.

        1. Anonymous Coward
          Anonymous Coward

          Re: Insider trading

          "May need it now for security guards ..." and a few Pantsir systems.

    3. Anonymous Coward
      Anonymous Coward

      Re: Insider trading

      He won't care. He will get a nice 'parachute' payoff to retire on. Oh you believe rich folks are punished for their illicit activities? Oh you poor, naive fool.

      1. Anonymous Coward
        Anonymous Coward

        Re: Insider trading

        Hey Martha Stewart did time, and she's richer than Krzanich could ever dream of being and her insider trading was a lot smaller and much less overt. So at least there's hope he gets more than a slap on the wrist.

        1. Ian Michael Gumby

          @Doug S Re: Insider trading

          There's a couple of things here...

          1) Martha Stewart did time because of her lying to the feds and not being smart about what she did.

          If you think she was the only one who does this... hardly. It's hard to get caught.

          2) There may be nothing illegal in the share sale. It depends on a couple of factors.

          If he was already in a programmed sale of shares as a way of portfolio diversification - meaning he gets a huge stock option grant, then sets up a pattern of sales to cash out and diversify his portfolio.

          There could be an undisclosed life changing event.

          The point is that before you get a lynch mob, learn the facts.

          (Then heat up the tar.... )

        2. Mike 16

          Re: Insider trading

          Sure, Martha did time, but if she had had the insight to transition to "Marty" she might not have been prosecuted at all.

          1. stephanh

            just a routine diversification of his portfolio

            http://dilbert.com/strip/2000-02-15

    4. Updraft102

      Re: Insider trading

      He (Intel CEO) could simply claim he was divesting as a result of the AMT vulnerability, which had already been revealed. That's one advantage of always having some crap in the news about how bad your products are - plausible deniability for insider trading.

  7. bazza Silver badge

    "Underlying vulnerability is caused by CPU architecture design choices. Fully removing the vulnerability requires replacing vulnerable CPU hardware."

    SPARC it is then. That seems to be about the only server-grade, semi-competitive CPU out there that's completely resistant to both Meltdown and Spectre.

    The guys from Sun / Oracle must be feeling smug. I wonder if they had theorised about the possibility of this kind of thing and had designed for it, or were they simply lucky?

    1. bazza Silver badge

      Though I may be wrong. SPARC might be susceptible to Spectre...

      1. John Riddoch

        I think some of the older T-class chips didn't have out-of-order execution, so they'll probably be safe. They're crap for single-threaded workloads, though. I seem to recall POWER6 didn't have it either, which is how they clocked it so fast (up to 5GHz) without melting.

        As for other SPARC/POWER chips? Given that ARM is vulnerable and all of these are based on RISC design concepts, it's entirely plausible they're vulnerable as well. I don't know enough about chips to be able to answer that.

    2. A Non e-mouse Silver badge

      Has anyone actually tested Sparc processors for these vulnerabilities? And what about PowerPC or MIPS? Or a myriad of now obsolete CPUs that have speculative execution?

      Personally, I'm going to stick to my old faithful Z80

      1. Anonymous Coward
        Anonymous Coward

        > And what about PowerPC or MIPS?

        If only Apple had stayed with PowerPC chips, providing a bit more diversity. And if only China's MIPS (Loongson, was it?) processors had made it to the west.

        While we continue to measure the utility of technology in terms of "winners" and "losers", we will continue to build insufficiently diverse environments, and be forced to allow the likes of Intel to get away with statements like "it's all working fine - reality is at fault."

        1. A Non e-mouse Silver badge

          @ Tinslave_the_Barelegged

          You're missing the point. Unless someone's tested Loongson or PowerPC you don't know if they're vulnerable or not. So far, I've only seen reports on Intel, AMD & ARM processors. That does NOT mean the others are not vulnerable.

          1. Anonymous Coward
            Anonymous Coward

            Re: @ Tinslave_the_Barelegged

            > "So far, I've only seen reports on Intel, AMD & ARM processors. That does NOT mean the others are not vulnerable."

            Taken from: https://access.redhat.com/security/vulnerabilities/speculativeexecution

            "Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian)."

          2. Anonymous Coward
            Anonymous Coward

            Re: @ Tinslave_the_Barelegged

            > You're missing the point.

            Fair enough, but it was a general comment about culture using this issue as an example.

        2. Anonymous Coward
          Anonymous Coward

          Funny story about Apple & PowerPC. The purpose of that project was to drive Motorola out of the market. IBM spent about $2 billion doing it. IBM had to pay penalties (it was losing money) every quarter once the G5 got 100% of Apple's business. After a year, Jobs was furious that IBM wasn't meeting its commitments. He made the threat concrete by showing a box running on x86. IBM did nothing to keep his business. When Apple quit, the internal IBM notices said NOTHING about winning the business back.

          Jobs might have been a visionary, but he didn't know his politics... (Not that I'm any better, mind you.)

      2. Dave Pickles

        DEC Alpha didn't do speculative execution, it had separate 'branch usually' and 'branch occasionally' instructions which the compiler selected depending on code analysis.

        1. Paul Crawford Silver badge
          Facepalm

          Re: DEC Alpha

          Yes, in its day a great CPU. Wiped the floor with x86 (especially on single-precision floating point maths).

          And once DEC was bought by Compaq (itself later swallowed by HP), Alpha was dumped in favour of the Itanium, because it was clear that Intel's new design was going to be a great hit, eh?

          1. Peter Gathercole Silver badge

            Re: DEC Alpha @Paul Crawford

            You must also remember that HP was heavily invested in Itanium, as they had contributed their PA-RISC and EPIC technology to Intel, supposedly to make it easier for Intel to build an architecture that HP could move HP-UX to easily.

            As it turned out, Intel used a lot of the IP to make its x86 processor line run faster, and was late delivering the server-grade Itanium chips that HP wanted (and which were not as easy to port to as HP expected).

            IIRC, it was so bad that HP developed two further generations of PA-RISC, which were, in fact, some of the best processor designs HP ever made, just to allow them to have competitive systems to sell while Intel faffed around with Itanium.

            So Intel took HP for a ride, and then ditched them once they had gained the IP they were after.

          2. Colin Bull 1
            Stop

            Re: DEC Alpha

            "And once DEC was bought by Compaq (itself later swallowed by HP), Alpha was dumped in favour of the Itanium, because it was clear that Intel's new design was going to be a great hit, eh?"

            More likely they dumped it because Carly got a backhander from Intel. It was obvious to everyone and his dog that the Itanic was going nowhere fast.

        2. gnasher729 Silver badge

          "DEC Alpha didn't do speculative execution, it had separate 'branch usually' and 'branch occasionally' instructions which the compiler selected depending on code analysis."

          That doesn't sound like they didn't have speculative execution - only that they had static branch prediction instead of dynamic branch prediction.

      3. Doctor Syntax Silver badge

        "Personally, I'm going to stick to my old faithful Z80"

        I liked the Z80's trick of having two sets of registers and an instruction to flip between them. Very quick context change, no need to save registers or the like. Combine that with flipping between caches and mix in some notion of security rings and it could stage a come-back.

        1. Tom 7

          RE:"Personally, I'm going to stick to my old faithful Z80"

          It's only fast switching between two contexts - in a multitasking system you still have to load the alternative registers with something useful.

          Having said that SymbOS is a bit of an eye opener.

      4. AndrueC Silver badge
        Boffin

        Personally, I'm going to stick to my old faithful Z80

        Speaking of Z80s and retpoline..

        Way back in the mists of time I used to reverse engineer games to get myself infinite lives and occasionally a mention in a magazine tips section. I remember one time being thwarted by this gem:

        PUSH AF

        RET

        That was one of those 'Put the debugger down and slowly walk away' moments :)
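        For anyone who hasn't met the trick: RET pops the top of the stack into the program counter, so PUSH AF followed by RET jumps to whatever 16-bit value the preceding arithmetic left in AF - incidentally the same push-then-return shape a retpoline uses. A toy Python sketch of the control flow (values hypothetical):

```python
# Simulating the Z80 "PUSH AF / RET" computed jump: RET pops the top of
# the stack into the program counter, so pushing a calculated value
# first turns RET into an indirect jump a static disassembly can't follow.
def push_af_ret(af_value, stack):
    stack.append(af_value)   # PUSH AF: computed 16-bit value onto the stack
    return stack.pop()       # RET: ...and straight back off into PC

# Whatever the earlier calculations left in AF becomes the jump target:
pc = push_af_ret(0x8000, [])
print(hex(pc))  # → 0x8000
```

        A disassembler sees only a RET; the real target exists nowhere in the listing, which is exactly why the calculations feeding AF had to be traced by hand.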

        1. Charles 9

          "PUSH AF

          RET

          That was one of those 'Put the debugger down and slowly walk away' moments :)"

          Perhaps you can enlighten us why it would've been too tricky to figure out the flags enough to realize where this "jump by return" was going.

          1. AndrueC Silver badge
            Boffin

            Perhaps you can enlighten us why it would've been too tricky to figure out the flags enough to realize where this "jump by return" was going.

            The problem, as I remember, was that the instruction appeared to be part of a state engine. I doubt I'd have called it that back then, but that fits my adult memory of it. I think I found two, maybe three, branches that kept coming back to this block of code, which performed several calculations against the accumulator and then the PUSH/RET. So the target address was the result of a series of calculations and the flags thereof, where the values being used depended to some extent on where the CPU had come from.

            I should also point out that debugging machine code on a microcomputer was not an easy task. There were no protection mechanisms because the CPU just didn't provide them. This meant it was quite easy for clever code to crash the debugger. Indeed, some code seemed designed to do exactly that - although mainly that was around the custom loading code as an attempt to thwart pirates. I remember, for instance, code that used LDIR to overwrite the stack. I remember code that used the interrupts to jump to somewhere that the debugger was using. There weren't many debuggers available for the Sinclair Spectrum, so the game developers knew the choices available and their weaknesses. Without virtual memory they and the game had to share the same address space, and although mine could be told to relocate itself on loading it couldn't do it on the fly. And most games were a tight fit in memory anyway, so even getting the debugger to run the game code was difficult.

            So most likely I was just looking at the disassembly listing. Figuring out the various possible flags from an assembly listing isn't easy. Not when you know the code is being called from several places.
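The trick AndrueC describes can be sketched in a few lines of Python - a toy model, not real Z80 semantics: RET blindly pops whatever PUSH AF left on the stack, so the "return" target is really the accumulator and flags.

```python
# Toy model of the Z80 "PUSH AF / RET" computed jump described above.
# Illustration only, not real Z80 semantics: AF is the 16-bit pair of the
# 8-bit accumulator A (high byte) and the flag register F (low byte).

def push_af_ret(a, flags):
    """PUSH AF then RET: the address RET pops is whatever AF holds."""
    stack = []
    stack.append(((a & 0xFF) << 8) | (flags & 0xFF))  # PUSH AF
    return stack.pop()                                # RET -> jump target

# The same code block reached with different flag states jumps to
# different addresses, which is why a static disassembly can't follow it.
print(hex(push_af_ret(0x40, 0x01)))  # carry set   -> 0x4001
print(hex(push_af_ret(0x40, 0x00)))  # carry clear -> 0x4000
```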

            1. Charles 9

              "The problem as I remember was that the instruction appeared to be part of a state engine."

              I get it now. I'd never personally seen assembler code that intricate, but as you describe it, I can see it happening. Those two instructions were simply parts of, and hints at, a larger scheme.

      5. AdamWill

        yes.

        "Has anyone actually tested Sparc processors for these vulnerabilities? And what about PowerPC or MIPS?"

        Well, not sure about Sparc or MIPS, but PowerPC, yes:

        "Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian)." - https://access.redhat.com/security/vulnerabilities/speculativeexecution

        This is very definitely a real thing; we have some extremely good engineers internally at Red Hat who've spent the last several months working on this. If they say they have a PoC on Power, they do.

      6. Michael Wojcik Silver badge

        Spectre has been demonstrated for POWER and IBM z.

        I don't see how SPARC and MIPS would not be vulnerable to Spectre. Basically you need speculative execution, an L1 cache, and a high-precision timer. Those are all found in essentially all modern general-purpose CPUs.

        Specific Spectre variants might not be possible on a given architecture, but it's a broad family. The initial Spectre paper only describes two variants, but as Paul Kocher can tell you, there are side channels everywhere.

        Spectre is basically this year's Rowhammer.
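Those three ingredients can be sketched with a toy cache model - illustrative constants, no real timer or CPU involved, and `recover_secret` is a made-up name: the attacker learns which address a victim touched purely from whether a probe is fast or slow.

```python
# Toy model of the cache timing side channel: the only signal the attacker
# gets is access latency, and that alone recovers which address a victim
# touched. Constants are illustrative, not measurements of any real CPU.

HIT_NS, MISS_NS = 10, 100

def probe_time(addr, cache):
    """A cached probe is fast; an uncached one is slow."""
    return HIT_NS if addr in cache else MISS_NS

def recover_secret(cache, candidates):
    """Flush+reload style: the candidate that probes fast was touched."""
    return min(candidates, key=lambda a: probe_time(a, cache))

cache = set()
cache.add(0x2A)    # victim's access leaves a cache footprint
print(recover_secret(cache, range(256)))  # -> 42
```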

    3. Dan 55 Silver badge

      It seems AMD is only vulnerable to Spectre Variant 1 if you're running Linux with a non-standard kernel setting so if it has to be x86 based then that's a fairly safe bet too. Link

      Also US-CERT has suddenly changed their advice and they don't want you to change your CPU now...

      1. Ken Hagan Gold badge

        "Also US-CERT has suddenly changed their advice and they don't want you to change your CPU now..."

        Perhaps someone pointed out that it is pointless to suggest everyone buys a new CPU if the new ones are vulnerable in the same way.

        Has anyone suggested a timescale for how long it will take to design, test and roll out production on a new CPU design that is immune? They ought to have started last June, so to a first approximation it is "the usual tick-tock period minus six months". I think that works out as a couple of years, making "a new CPU" pretty pointless until 2020.

        1. Dan 55 Silver badge

          Perhaps someone pointed out that it is pointless to suggest everyone buys a new CPU if the new ones are vulnerable in the same way.

          Well, they're not vulnerable in the same way. There's a degree of security by not running a CPU from a manufacturer which has 95% of the desktop and server market.

          I think that works out as a couple of years, making "a new CPU" pretty pointless until 2020.

          But human nature being what it is, by 2020 everyone will have forgotten and Intel will be good enough once again.

        2. DropBear
          Facepalm

          @Ken Hagan I was wondering about the same thing - having pretty much everything be vulnerable to Spectre seems bad enough, but I'd like to know how many people in sales at Intel/AMD/etc are going right now "what happens when people realize there's no point in buying anything from any of us anywhere in the near future...?"

          1. Doctor Syntax Silver badge

            "what happens when people realize there's no point in buying anything from any of us anywhere in the near future...?"

            That realisation will be followed by another: we have work to do Right Now and everything's running slower; quick, order more kit. There'll be celebrations in the sales depts right now, especially in AMD. Intel? Looking for the leftovers if AMD can't keep up with orders. It's a good time to be selling memory, motherboards and everything else as well.

            1. Aitor 1

              Intel will benefit.

              AMD can't produce much more than what they contracted, so no huge sales, as the rest of the ecosystem would also have to ramp up production.

              So it seems that what will happen is that people will use more of the dominant CPUs... that is, of course, INTEL.

        3. Anonymous Coward
          Anonymous Coward

          2020 vision

          "Has anyone suggested a timescale for how long it will take to design, test and roll out production on a new CPU design that is immune?"

          For Intel? You may apparently have missed out the step to "fund and recruit a team of managers and technical people with a clue", which may not happen quickly.

          For Intel's competitors? Several of them have the people and products right now, maybe all they need is chip production capacity (which many of them contract out anyway), so a few weeks to get started and a few months for reliable volume production, if all goes well.

          Obviously it's actually a little more complicated than that, but 2020 seems a bit of a generous estimate.

          Meanwhile, is Dell still basically an Intel-only shop? Where does all this leave Dell (and Dell's customers) for the next few months?

          1. Graham Cobb Silver badge

            Re: 2020 vision

            For Intel? You may apparently have missed out the step to "fund and recruit a team of managers and technical people with a clue", which may not happen quickly.

            I am no lover of Intel (I have used AMD exclusively for many years because I believe it is important to support diversity) but I am absolutely certain that Intel have some of the very best managers and technical people in the CPU design business, and they also have strong links with excellent academics.

            Of course, with hindsight, both Meltdown and Spectre expose "obvious" design faults but it has taken, what? thirty years? for these to come to light since speculative execution became a common design feature.

            I am sure Intel have some of the world's best working on the various issues exposed: how to close the cache exfiltration side-channel specifically; careful review of all other (previously unnoticed) system state changes caused during speculative execution to find other side-channels before the world does; redesign of cache, branch-prediction, translation-buffer and other features to reduce the opportunities both for influence from one process/ring/core on another and to reduce their use as side channels for attacks; and a whole lot more which we (as not CPU designers) can't even think of.

            My fear is not that Intel doesn't have clever-enough people to do this, but that they will do it in private and not share their results. As the industry leader, I hope they are willing to share their learnings with the industry.

            1. Anonymous Coward
              Anonymous Coward

              Re: 2020 vision

              While they're at it, could they, please, remove the ME and write a decent BIOS, no UEFI?

      2. Alistair

        @Dan 55:

        "Also US-CERT has suddenly changed their advice and they don't want you to change your CPU now.."

        Anyone want to bet on the odds that CERT went down a body sometime yesterday afternoon, dismissed with prejudice?

    4. Steve Channell
      Unhappy

      Itanic, S/Z

      The two server grade processors that are definitely not effected are by this issue are Itanic and IBM's System/z... Itanic by design (VLIW), and S/z because there are simply better things for a mainframe to do than speculatively execute code + z/os Nucleus is too modular for peripheral stuff like passwords and certificates.

      When the dust settles, we should have a long cold look at OS architecture and what modern software design (async + parallel) patterns suggest for alternatives - my view is that monolithic kernels have had their day.

      1. Doctor Syntax Silver badge

        Re: Itanic, S/Z

        "S/z because there are simply better things for a mainframe to do than speculatively execute code"

        There's a post above, a couple of hours older then yours, pointing to a Red Hat note saying System Z is vulnerable.

      2. Alistair

        Re: Itanic, S/Z

        @Steve Channel:

        Ummm... System Z is on the list of vulnerable entities. I've a list that is "preliminary" - but System Z is *not* utterly immune.

      3. Martin Gregorie

        Re: Itanic, S/Z

        Yes, you're probably right about monolithic kernels having had their day, but microkernels have their problems too. Going back a bit, the DEC Alphaserver, whose UNIX was based on a Mach-type microkernel, was a case in point.

        While the Alphaserver punched far above its weight (a single CPU, 0.5GB Alphaserver was easily able to support a 9 man dev team working on a RedBrick data warehouse-based system), we got our arses bitten by Mach kernel performance in one or two specific cases. Although the Mach central message switch could execute several system calls simultaneously, this was only true if the system calls were all different: no system call code module could execute multiple calls in parallel. As a result a burst of them would be serialised by queueing them and executing one at a time until the queue was empty.

        RedBrick segmented its tables and indexes, so a large database (ours was one) had each big table and associated indexes spread over many UNIX filesystem files. As a result some global DB operations, which applied the same system call to all segments holding a table and its indexes and involved at least one physical disk access per filesystem file holding these segments, caused major performance hits. This was a direct result of the need to serialize the associated system calls. We could watch it happen by looking at a tracing tool, so didn't need to guess at the cause or that the calls were serialized.

        Bottom line: microkernels are only good if their structure lets them execute multiple parallel calls to any system call code module. At the time I'm talking about (1999-2001) Mach-based microkernels didn't do that.
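The serialization Martin describes can be captured in a minimal, deterministic model - an illustration of the idea, not of Mach's actual implementation, and `makespan` is an invented name:

```python
# Different system-call modules can run concurrently, but each module
# handles one call at a time, so a burst of identical calls queues up.

from collections import defaultdict

def makespan(calls, cost=1):
    """Time units until a burst of simultaneous calls all complete."""
    per_module = defaultdict(int)
    for name in calls:
        per_module[name] += cost       # same-module calls serialize
    return max(per_module.values())    # different modules overlap

# Four different calls complete together in one time unit...
print(makespan(["read", "write", "open", "stat"]))  # -> 1
# ...but the same call against eight table segments takes eight.
print(makespan(["fsync"] * 8))                      # -> 8
```

This is exactly the RedBrick pattern above: one global DB operation issuing the same system call once per segment file hits the worst case.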

      4. This post has been deleted by its author

      5. Michael Wojcik Silver badge

        Re: Itanic, S/Z

        The two server grade processors that are definitely not effected are by this issue are Itanic and IBM's System/z.

        "affected", not "effected".

        You're wrong about z. Redhat and SUSE have both announced Spectre PoC for z.

        I suspect you're wrong about Itanium too, since it does speculative loads.

    5. Julz

      Well, having got rid of all of their competent SPARC engineers, I wouldn't hold my breath waiting for Oracle to have anything authoritative to say on this matter.

      1. Doctor Syntax Silver badge

        "waiting for Oracle to have anything authoritative to say on this matter."

        We know what they'll say. "You're using more cores to run the same workload. Pay up."

        1. Loud Speaker

          Nonsense. You obviously have no experience of Oracle: Sparc is not susceptible - so pay an extra 30% for no reason at all!

  8. Blotto Silver badge
    Big Brother

    Stunned, but not surprised, when’s the next revelation due?

    These things are highly complex with contributions from tens of thousands of people over decades. Whether deliberate or accidental, there will be more functions in these chips that will be found to be vulnerable under researchers' scrutiny.

  9. Blotto Silver badge

    Optional mitigation

    It would be good if performance-degrading mitigation could be made optional - obviously turned on by default and difficult to turn off - but for those of us who run stuff, want performance, and whose systems are isolated from everything else, the choice would be nice.

    Even if the system is vulnerable, mitigation by isolation is a valid solution (obviously taken in context of the tasks and data being processed)

    1. Anonymous Coward
      Anonymous Coward

      Re: Optional mitigation

      Don't install the update?

    2. Baldrickk

      Re: Optional mitigation

      On Linux, at least, there is a boot option to disable it.

    3. Dan 55 Silver badge

      Re: Optional mitigation

      performance degrading mitigation

      You're looking at it the wrong way round, you mean "security degrading mitigation".

  10. Anonymous Coward
    Anonymous Coward

    Intel execs - a warning from history

    Mildly off topic, but maybe not. Many years ago, I attended some tech conference at the corporate suites of a London premiership football ground. Among the great and the good presenters was due to be an Intel exec, one whose engineering background made me especially look forward to his talk. His time slot came and went, so other talks took his place until he finally graced us with his presence. He then ranted about how it was ridiculous that his helicopter was not allowed to fly directly to his destination and that, can you believe it, his pilot said he wasn't allowed to land on the football pitch. He never really got into his presentation.

    I spoke to him afterwards, and he was still fuming, and the only thing I can recall as a takeaway about Intel was a mistrust owing to the disconnect and disdain this exec showed that day; his audience was not one receptive to his petulance. He may just have been masking his embarrassment about being late, but it came across as not seeing why air traffic rules should not be bent for him, made worse by a football club wanting to protect its ground.

    I know this is an anecdote about my own prejudice based on one incident, but it has coloured my view of the processor world, and processor choice, since.

  11. Anonymous Coward
    Anonymous Coward

    Call me a cynic but is all this erm..... a ploy to get you to buy our new soon to be released super secure (cough) CPUs ?

    When will the tech industry stop coming up with shit ideas such as keyless cars that are just security disasters waiting to happen that are built on fundamental designs and software from the 1960s/70s. The only way to solve a lot of these problems is a brand new CPU architecture and accompanying software that is not backward compatible which is built with security first and foremost. It's going to be painful, expensive and require a lot of effort but it needs to be done as current tools are not fit for purpose.

    Over to you really smart chip & software engineers, it's way above my pay grade.

    1. Anonymous Coward
      Anonymous Coward

      "which is built with security first and foremost"

      It is a nice mantra but it will not stop products being insecure. Every software product (and probably hardware product) has bugs and/or security issues.

      They will one day be exploited and then the redesign will begin again. Once stable and affordable high capacity quantum computers arrive then security will once again be thrown in the air.

      1. A Non e-mouse Silver badge

        "which is built with security first and foremost"

        It is a nice mantra but it will not stop products being insecure.

        You can't fully protect against all security issues as you don't know all the possible ways your product could be insecure.

        1. Warm Braw

          You don't know all the possible ways...

          Indeed.

          The issue here is that if you take a traditional view of processor "correctness", there is no real bug here: the software runs as it should and returns the right results.

          We are very much in a new world where we have to assume that having malicious software running on any system is a likely event and hence any observable side effects of "correct" operation that leak information are likely to be observed. I'd be surprised if there weren't a whole range of other attacks waiting to be discovered.

          1. misterinformed

            Re: You don't know all the possible ways...

            "The issue here is that if you take a traditional view of processor "correctness", there is no real bug here: the software runs as it should and returns the right results."

            I disagree with this, at least as far as Meltdown is concerned. The CPU is supposed to enforce a sandbox and there is a hole in it big enough to read privileged data. This is a bug, not a side-effect of correct operation.

        2. Anonymous Coward
          Anonymous Coward

          >You can't fully protect against all security issues as you don't know all the possible ways your product could be insecure.

          Original AC here. No, it can't, but what we can do is develop brand new CPU and subsystem architecture that isn't saddled with baggage of backward compatibility and all the inherent flaws it keeps. We can't just keep bolting on to 8080 or ARM RISC as it's a house of cards, these things were not designed for modern interconnectedness.

          The same issues are facing many industries, we can't keep on producing diesel cars by just throwing DPF filters etc at them as there are fundamental problems that cannot be resolved without a complete rethink.

          It hasn't happened yet because there is too much vested interest in the incumbent tech but it will need to happen one day.

          Who knows what's around the corner, that's what makes science exciting.

          1. dew3
            Facepalm

            RE: need a whole new architecture

            "what we can do is develop brand new CPU and subsystem architecture that isn't saddled with baggage of backward compatibility and all the inherent flaws it keeps. We can't just keep bolting on to 8080 or ARM RISC as it's a house of cards, these things were not designed for modern interconnectedness."

            A whole new architecture was already tried. Billions of $US were spent on it. It is called "Itanium". Rumour is it does not suffer from these security issues. I will leave it as an exercise for the reader whether it will now dominate the CPU market.

            A close relative has been on a couple of those US government blue ribbon technology panels. He enjoys recommendations like these and keeps them in a special pile marked "To fix this issue, first we boil away all the water in the oceans..."

            1. Roo
              Windows

              Re: RE: need a whole new architecture

              "A whole new architecture was already tried."

              Indeed, many many many times over and I suspect it'll continue for a while yet as the wheel of reincarnation makes another revolution... With respect to your close relative, they should be paying attention to the folks in the ocean boiling business: the #1 HPC system uses a fairly unique CPU architecture - and it has been delivering better FLOPS/W (YMMV) than its competitors running state of the art Intel + GPU combos for some years now...

              Sometimes folks using different tools get better results...

    2. A Non e-mouse Silver badge

      Speculative execution is a good idea: It's a way to keep the CPU busy with (hopefully) useful work whilst it waits for RAM to catch up. The issue is the way it's implemented.

      Switching off speculative execution (if it's possible) will kill CPU performance. Some of the reports say that current CPUs can execute several hundred instructions whilst waiting for a single memory access request to main memory. That's a lot of lost CPU performance if you switch off speculative execution.
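The trade-off can be put in back-of-envelope numbers - illustrative figures only, not measurements of any real part:

```python
# Back-of-envelope model of latency hiding via speculation. A memory miss
# costs ~hundreds of cycles (assumed ~300 here), during which a speculating
# core can keep retiring independent work instead of idling.

MISS_LATENCY = 300  # cycles for one main-memory access (illustrative)

def total_cycles(independent_work, speculative):
    """Cycles to finish `independent_work` instructions plus one miss."""
    if speculative:
        # the miss overlaps with whatever independent work is available
        return max(independent_work, MISS_LATENCY)
    # a non-speculating core just stalls for the whole miss
    return independent_work + MISS_LATENCY

print(total_cycles(200, speculative=False))  # -> 500
print(total_cycles(200, speculative=True))   # -> 300
```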

      1. Baldrickk

        The fix isn't to disable it entirely, though it's true that when it is not done, the chips are not vulnerable.

        The problem is from when boundaries between processes need to be crossed. The speculative execution was crossing that boundary to continue its work, was then invalidated by a conditional that evaluated a different way to that which was predicted, and is then not cleaned up properly.

        The flaw is the last step.

        Because a footprint of the data remains in the CPU's cache, it can be inferred by the code that runs after the misprediction.

        This works because kernel memory is mapped into the application's address space. The fix (kernel page-table isolation) stops this, so that there is complete separation between the application and the kernel. Going from one to the other (a system call) now involves a full page-table switch.

        1. Anonymous Coward
          Anonymous Coward

          "Clean up" shouldn't be necessary

          If a speculative execution design is sensible, there is no need for "clean up".

          Suppose a branch could go two ways. Silicon is cheap so you duplicate the registers and stuff in a particular way which hides them from the outside world till it's clear which path was taken, and the hardware then "executes" both paths in parallel. But you don't let the real world see any results (or any side effects) till it's known which way the branch actually went (till the branch direction is "resolved").

          Once it's resolved, the wanted results can be made visible to the outside world, the unwanted results are discarded and the relevant resources freed up. It's fairly well understood technology, though not easy to find readable explanations.

          This works fine when the unresolved state changes (register values etc) are hidden from the rest of the world. It does require a certain amount of care for some global resources which can't realistically be duplicated. Condition codes (flags) and mode bits (e.g. interrupt enable) are an often quoted example.

          Someone (at Intel?) seems to have forgotten that cache memory (and its contents) is a non-duplicatable resource too.

          TLDR; speculative execution must not change any globally visible state till the instruction involved is clearly known to be on the "path taken" rather than the "path not taken". Otherwise, Bad Things will eventually happen. Cache memory contents can qualify as globally visible state.

          [sorry for any errors, typed in a hurry]
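The distinction drawn above can be made concrete with a toy model - pure illustration, not any real microarchitecture, with made-up names and addresses: speculative register writes go to shadow state that is discarded, but the cache fill is not rolled back.

```python
# Shadow (duplicated) registers are thrown away when a branch resolves the
# other way, but the cache line fetched during speculation stays resident -
# globally visible state has changed even though nothing "happened".

class ToyCPU:
    def __init__(self):
        self.regs = {"r0": 0}   # architectural register file
        self.cache = set()      # addresses currently resident in cache

    def speculate(self, addr):
        shadow = dict(self.regs)    # speculative results go to shadow state
        shadow["r0"] = addr         # speculative register write
        self.cache.add(addr)        # cache fill is NOT shadowed
        # branch resolves as not-taken: shadow state is simply discarded
        del shadow

cpu = ToyCPU()
cpu.speculate(0xDEAD)
print(cpu.regs["r0"])        # architecturally nothing happened -> 0
print(0xDEAD in cpu.cache)   # but the cache footprint remains -> True
```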

      2. Doctor Syntax Silver badge

        "Switching off speculative execution (if it's possible) will kill CPU performance. Some of the reports say that current CPUs can execute several hundred instructions whilst waiting for a single memory access request to main memory. That's a lot of lost CPU performance if you switch off speculative execution."

        That doesn't stop better architectures restricting speculative execution to what they're allowed to see. Nor does it stop software architectures from being designed to better security standards.

      3. gnasher729 Silver badge

        Speculative execution that has no observable side effects is fine. Speculative execution that has observable side effects depending on the state of your own data is fine. Speculative execution that has observable side effects depending on the state of the data of another process is a problem. (Actually, anything that has observable side effects depending on the state of the data in another process is a problem).

        If my code accesses data in another process I'll get an exception, so that's fine. If my code speculatively accesses data in another process (that means it logically doesn't access it at all, but the processor goes ahead and tries anyway), then this may take different execution times. The solution would be that the processor must make sure in this situation that it will always take exactly the same time, which would have to be the maximum time possible.

        1. Anonymous Coward
          Anonymous Coward

          More on speculation and addressing and accessibility

          "Speculative execution that has no observable side effects is fine."

          OK.

          [Background: Typically in a modern superscalar, superpipelined RISC setup the avoidance of side effects and visible after effects is done by magically duplicating the relevant silicon resources, e.g. by providing "shadow" registers that don't become visible to the real world unless and until the (speculative) instruction whose operands and results they hold is known to have executed for real. Other duplicated resources (e.g. logic units, register sets, etc) whose particular shadow contents turn out to have been not needed are then freed up for use elsewhere in the processor core]

          "Speculative execution that has observable side effects depending on the state of your own data is fine."

          Probably OK, subject to the correctness of the definition of "own data" when used in a speculative setup. See below.

          "Speculative execution that has observable side effects depending on the state of the data of another process is a problem."

          Makes sense as written. Now, what about the detail, e.g. who can provide a clear, simple, unambiguous and (preferably) correct and secure definition of what "the data of another process" means in a speculative environment with one real unshadowed MMU/TLB, etc vs multiple uncommitted speculative instructions in flight ?

          Such a definition also will need to distinguish carefully between virtual addresses (as seen by software and as used in indexing *some* kinds of cache memory systems) and physical addresses (as seen by the main memory subsystem and as used by *some* kinds of cache system). It needs also to consider whether the contents of the MMU/TLB/etc may change between the time a speculative reference is first attempted, and the time the corresponding instruction would have been completed.

          Also consider that in a multi-tasking environment with a typical MMU(etc) setup, each separate running program (also including multiple instances of the same program) will usually see its own copies of stack space, its own local variables, its own heap. Even if each running program sees them at the same address from the program's point of view, the MMU/TLB/etc means that they are generally at separate places in physical memory and one instance's data will not generally be accessible by another instance's code, otherwise Bad Things[TM] may happen.

          Ready when you are. Clarity is good. So is brevity. How to fix that conundrum :)
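One small piece of the above is easy to show concretely - a toy illustration with made-up frame numbers, nothing like a real MMU: two processes can use the same virtual address while translation sends them to different physical memory.

```python
# Same virtual address, different physical frames: why "own data" cannot
# be defined in terms of virtual addresses alone.

PAGE = 0x1000  # illustrative 4KB page size

page_tables = {
    "proc_a": {0x1000: 0x70000},   # virtual page -> physical frame base
    "proc_b": {0x1000: 0x84000},   # (frame numbers entirely made up)
}

def translate(proc, vaddr):
    """MMU-style lookup: split into page and offset, map the page."""
    page, offset = vaddr & ~(PAGE - 1), vaddr & (PAGE - 1)
    return page_tables[proc][page] | offset

print(hex(translate("proc_a", 0x1010)))  # -> 0x70010
print(hex(translate("proc_b", 0x1010)))  # -> 0x84010
```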

          1. Anonymous Coward
            Anonymous Coward

            Re: More on speculation and addressing and accessibility

            Eben Upton from Raspberry Pi foundation has a nice writeup re superscalar, speculation, OoO, branch prediction etc at

            https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/

            but more detail is still welcome.

    3. Kernel_Ninja

      The thing is, generally security and networking are, for lack of a better analogy, at opposite ends of the scale, and they aren't overly compatible. Kind of like two formerly male trans genders trying to have a child without the use of a surrogate or womb transplant.

      Reality is we have sold our soul for convenience and ease of use. Security requires some level of ability in understanding computers and their operations. Sadly the masses that consume tech are more concerned with how Khloe Kardashian consumes a carrot than with personal security.

      1. Wombat Copter

        "Kind of like two formerly male trans genders trying to have a child without the use of a surrogate or womb transplant."

        It is nothing like that, so you don't have to try to stretch it into an excuse to take random, rude potshots at transpeople.

    4. Ian Joyner Bronze badge

      "When will the tech industry stop coming up with shit ideas such as keyless cars that are just security disasters waiting to happen that are built on fundamental designs and software from the 1960s/70s"

      Absolutely agree. However, secure and correct computing and architectures have been around since the early 1960s. Just look at B5000 (now Unisys MCP machines). But scientific computing where every cycle matters rather than security won out.

      It is time to separate these two needs. Faster performance comes from faster electronics, not by ignoring fundamental issues because they cost processor cycles.

      1. Charles 9

        Except modern society forced them back together. It's not good enough to get it right OR get it fast. Now, you MUST get it RIGHT AND FAST at the same time. Just as you can't just pick any two of "Good, Fast, Cheap." No, now it's all or nothing.

  12. Paratrooping Parrot
    Mushroom

    "That leaves Spectre Variant 1 attacks, in which rogue software can spy on applications, unpatched. It's a good thing this variant is difficult to exploit in practice."

    You do know that the NSA, the Israelis, the Russians and the Chinese are now in a race to implement this?

    1. Voland's right hand Silver badge

      now in a race to implement this?

      For general purpose systems - do not think so.

      It is extremely hard on a real system which does an ungodly amount of stuff at the same time.

      There are plenty of easier ways left for them to try.

      It will remain an attack of last resort for GENERAL purpose systems. Now, on an embedded system during an attack on a high value target - that is a different matter. If the machine is doing just one well determined function, it may be worth trying it. So we may see it in let's say StuxNet v 8. Or higher. In something else - not likely.

    2. Anonymous Coward
      Anonymous Coward

      Race

      Some of them might be in a race, but my 2 cents go on at least some of them having had a working hack a long time ago.

  13. Aoyagi Aichou
    Mushroom

    Practicality

    So if I understand it correctly, neither Spectre variant is known to have been used for any *practical* attacks? Like that password snooping demo of Meltdown floating around?

  14. Dominion

    Opportunity to sell more CPUs?

    So let me get this straight. In order to mitigate against this I have to take a performance hit. As I'm close to the limit of capacity already I'm going to have to buy new CPUs, which are known to be vulnerable. And then, once there are new CPUs available I have to buy those as well. I'm not sure why the Intel share price has dropped, at the moment they seem to have a prime opportunity to sell more of their faulty tat and then sell us supposedly fixed tat.

    1. Doctor Syntax Silver badge

      Re: Opportunity to sell more CPUs?

      "I'm not sure why the Intel share price has dropped, at the moment they seem to have a prime opportunity to sell more of their faulty tat and then sell us supposedly fixed tat."

      It's because they have competition, AMD, and this is making that competition look good. Why else would the Intel PR response be trying to make it look as if all CPUs are equally affected?

  15. jasonbrown1965
    Mushroom

    Good guy Edge

    Comic note - Edge joining other browsers in warning against Microsoft update site as "not secure"

    See: https://imgur.com/gallery/Dutvu

    #goodguyedge

    . . .

  16. Alistair
    Windows

    For the record, network tcp checksum offload is something one will want to consider turning off on 10G+ interfaces......

    (ouch)

    1. Alistair
      Windows

      Urrm. I'm going to retract that statement (about tcp checksum offload) for the moment, but the stats on networking (10G/eth) running on average at 60% to 80% saturation are (at this moment) not making us happy with the patches.....

      The four of us are testing against a hadoop install - there is a hint that there might be a nic driver update in the pipes.....

  17. Temmokan

    Of course they will deny

    Regardless of what the findings are, Intel will deny these are fundamental design flaws. I doubt Intel will be punished with more than a formal spanking, but even admitting the design had flaws means Intel's chip designers either completely ignored possible security considerations, or did not bother to look for possible security implications at all. If the same designers fix the vulnerabilities, as Intel promises, guess what can happen.

    Congratulations, Intel. Not since the notorious FDIV bug has there been a more impressive example of epic fail than this "spectral speculative meltdown".

    I suppose no IT expert still harbours the illusion that Intel cares a bit about security.

  18. Kernel_Ninja
    Holmes

    Not Surprised

    I am not surprised in the slightest. This type of bug, if you want to call it that, has long been in the making. Between the previous article about encryption weaknesses, KRACK, and the various key technologies impeding privacy, and the security implications of all that, none of the disclosures surprise me. What surprises me is that this issue was not found earlier, as most of the major issues that have occurred were expected by people concerned with privacy and security.

  19. Doctor Huh?

    Itanic didn't hit this iceberg!

    Really, this is all just an Intel plot to breathe life back into the Itanium architecture!

  20. Anonymous South African Coward Bronze badge
  21. Kernel_Ninja

    https://www.youtube.com/watch?v=owI7DOeO_yg This is intel and the NSA.

  22. Borderliner

    Octogenarian Canadian rockers drafted in to fix latest Intel Snafu

    Headline from the BBC - Rush to fix 'serious' computer chip flaws

    Love the music, but honestly Geddy, are you guys really up to this?

  23. Aladdin Sane

    I have the cure for SPECTRE

    It involves an alcoholic, womanising misogynist with a gun.

    1. Anonymous South African Coward Bronze badge

      Re: I have the cure for SPECTRE

      And latest gadgets from Q utilizing Intel chippery? :)

  24. a_mu

    CPU comparison web sites

    So all those sites that have speed comparisons of CPUs...

    Are they going to have a long weekend re-testing everything?

    1. Anonymous South African Coward Bronze badge

      Re: CPU comparison web sites

      Will they also test the 8086, 80186, 286, 386, 486, and early Pentiums as well? :p

      1. Charles 9

        Re: CPU comparison web sites

        Out of Order Execution wasn't introduced to the Intel processor line until the Pentium Pro. No need to test anything earlier. If you're REALLY paranoid, you'd be testing all the early chips for OTHER exploits or magic knocks.

  25. Anonymous Coward
    Anonymous Coward

    Stock price

    "But Spectre will be harder to mitigate than Meltdown because the most effective fix is redesigned computing hardware."

    Gee, where will people buy that hardware? (really? your average joe is going to switch to AMD? really?)

    1. Richard 12 Silver badge

      Re: Stock price

      Sure they are

      AMD are cheaper, so Joe Average is already leaning that way.

    2. John Savard

      Re: Stock price

      Switching to AMD only fixes Meltdown, not Spectre. One needs to buy next year's CPU which will take these problems into account.

  26. stephanh
    Happy

    here's a vendor which is not vulnerable to either attack

    https://www.raspberrypi.org/

    The Raspberry Pi 3 uses an ARM Cortex-A53, which is not vulnerable.

    Be sure to beat the rush.

    1. TonyHoyle

      Re: here's a vendor which is not vulnerable to either attack

      It does that by not supporting speculative execution at all.

      So it's merely too crap to run spectre..

  27. John Savard

    What?

    I had thought that Spectre was sufficiently similar to Meltdown that while it couldn't be fixed properly without redesigning processors, it could still be fixed - with a serious performance penalty - by operating system changes, because putting the kernel in a separate address space would fix both of them.

    Clearly I will have to carefully re-read the news stories about it.

  28. CommanderGalaxian
    FAIL

    So people pay good money...

    ...and then discover their systems will only run at about 70% of the advertised speed or efficiency once the patches have been applied.

    Why would anybody not be wanting - at the very least - a partial refund?

    Case in point - Volkswagen.

  29. John Savard

    Not Available in Stores

    It's a good thing that CERT changed its advice.

    Even if it is true that the only way to protect against Spectre is to get a new CPU... the replacement CPUs which are not vulnerable to it haven't been designed yet. So one can hardly go out and buy one.

    So it isn't buy a new CPU, it's turn your computer off and wait a year or two. Unless you have a time machine.

  30. Stjalodbaer

    road not taken

    Could things have been different if the industry had paid more attention to IBM and Frank Soltis’s Fortress Rochester?

    E.g., see http://jakob.engbloms.se/archives/2111

    1. gnasher729 Silver badge

      Re: road not taken

      I followed your link, and while interesting, I cannot find anything that would have prevented the current problem. The whole article is about illegal access to data. The current problem is about subtle timing differences, not something that anyone would have thought about in 2000.

      1. Stjalodbaer

        Re: road not taken

        My thought was that the exploits relied on there being normally privileged data in an unprotected cache and the timing was used to access this. Perhaps then the IBM with its apparently thoroughgoing approach to security might maintain the privilege of the data even in cache.

        But apparently not. IBM announces patches for their “i” system which seems to be the current descendant of Fortress R.

        https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/

  31. Claptrap314 Silver badge

    Read the label

    NOTICE: This processor NOT authorized for use with classified material.

    Every processor ever sold to the public by IBM, Intel or AMD for at least the last 30 years. Think about it.

  32. stephanh

    lovely story from 1976

    http://www.multicians.org/timing-chn.html

    It's about a covert timing channel based on the memory hierarchy.

    A different level of hierarchy, to be sure (main memory/disk cache), and between two co-operating processes, but otherwise eerily similar to Meltdown.

    "When I thought about this I realized that any dynamically shared resource is a channel."

    Proposed solution in the paper is to only ever run programs certified by desk-checking.
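    The mechanism from that 1976 paper can be sketched as a toy model in C. This is only a simulation of the idea: a boolean array stands in for real residency/timing measurements, and all names here are made up for illustration. The sender never hands the receiver any data directly; it only modulates *which* blocks are resident, and the receiver reads the message back by "timing" its own accesses.

    ```c
    #include <assert.h>
    #include <stdbool.h>

    #define BLOCKS 8

    /* Shared cache state: true = block resident, i.e. a fast access.
     * In the real channel this is the disk/main-memory cache. */
    static bool cache[BLOCKS];

    /* Sender: encode one bit per block by caching it (1) or flushing it (0). */
    void send_bits(const int *bits, int n) {
        for (int i = 0; i < n && i < BLOCKS; i++)
            cache[i] = (bits[i] != 0);
    }

    /* Receiver: probe each block. In a real attack a resident block is
     * recognised by a fast access, an absent one by a slow miss; here we
     * just read the residency flag directly. */
    void recv_bits(int *bits, int n) {
        for (int i = 0; i < n && i < BLOCKS; i++)
            bits[i] = cache[i] ? 1 : 0;
    }
    ```

    Which is exactly the point of the quote above: the shared resource itself is the channel, no matter what the access-control layer forbids.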

  33. Spippo

    Let's not forget the Rowhammer attack on almost all CPUs: https://www.bleepingcomputer.com/news/security/new-rowhammer-attack-bypasses-previously-proposed-countermeasures/

    1. Anonymous Coward
      Anonymous Coward

      Sure but no need for Bleeping Computer link - we covered Rowhammer at the time.

  34. Ian Joyner Bronze badge

    Complete Rethink

    It is time to realise that we have mainly based CPU architectures on scientific computing needs – that is every cycle is precious. But to get that speed we have ignored essential aspects like correctness and security. For general-purpose computing – especially anything connected to the Internet – security must be built in at the lowest levels.

    Instead of trying to produce architectures that satisfy both the specifics of scientific computing and general computing, the two should be divided. It seems they are irreconcilably different. I have reached this unhappy conclusion (perhaps temporary) after many rounds of debate with C-style programmers who cannot see that security and correctness are of prime importance, and that the problem with C is that it is tuned to a particular way of thinking about CPU architectures.

    A complete reevaluation and rethink is needed. It is ambitious and will take a while. But the weak architectures of the past are no longer applicable. Security must be built in at the lowest levels and that includes the hated bounds checks which really are a fundamental of software correctness.

    At the lowest levels these can be optimised and built into the electronics as in ASICs or Network Processors (which are more programmable).
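    For the curious, the kind of always-on bounds check being argued for can be sketched in C. The `checked_buf` type and `checked_get` function here are hypothetical, invented for this sketch; real designs such as tagged or capability hardware fold the equivalent check into the load instruction itself rather than emitting a software branch.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical fat-pointer type: the buffer carries its own length,
     * so no access can outrun it. */
    typedef struct {
        uint8_t *data;
        size_t len;
    } checked_buf;

    /* Every read goes through the check; an out-of-range index traps
     * instead of silently reading whatever happens to sit adjacent. */
    uint8_t checked_get(const checked_buf *b, size_t i) {
        if (i >= b->len)
            abort();   /* trap, as bounds-checking hardware would */
        return b->data[i];
    }
    ```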

    1. James Hughes 1

      Re: Complete Rethink

      That's a ten-year plan..... New CPU architectures don't grow on trees.

      1. Ian Joyner Bronze badge

        Re: Complete Rethink

        "That's a ten year plan.....New CPU architecture don't grow on trees."

        Only ten years? Maybe longer, so the sooner we start the better.

        CPU architecture and system-programming languages are well overdue for an overhaul to address the needs of modern computing, which is about providing devices useful to people with little or no computer knowledge.

        Scientific computing is different, and it should stop dictating the needs of the many.

      2. Ian Joyner Bronze badge

        Re: Complete Rethink

        "That's a ten year plan.....New CPU architecture don't grow on trees."

        What I am also saying is that we need to address the fundamentals. Otherwise we will continue to play catch-up by writing software at higher levels to try to detect what might be wrong, but doing it in a way that is mostly guesswork, missing things, and annoying us with false positives.

        The next generation of CPU architectures should be designed not only by software people, but those who are expert in security and secure architectures.

  35. 10111101101

    IBM Power Processors Also

    Also includes IBM's Power processors

    https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/

  36. Jim Birch

    Adding sandboxing to speculative execution is going to be easier than coming up with a complete new processor paradigm.

    1. Anonymous Coward
      Anonymous Coward

      Re: "Adding sandboxing to speculative execution"

      *Properly implemented* speculative execution etc already has the equivalent of sandboxing, if "sandboxing" means that the effects (and *side effects*) of stuff that shouldn't be executed aren't allowed to be visible.

      See e.g. register renaming and such.

      Speculative execution etc not done right (which is what appears to have happened with Intel here) allows the effects (including side effects, such as a cache fill) of stuff that shouldn't be executed to remain visible, and under those circumstances, clever outsiders can make Bad Things (tm) happen in ways that clever insiders may have foreseen but chosen to ignore.
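      For reference, the published Spectre variant-1 proof-of-concept revolves around a gadget of roughly this shape (names loosely follow the paper's example; on its own this snippet cannot demonstrate a leak, it only shows where the troublesome side effect arises):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      #define ARRAY1_SIZE 16

      static uint8_t array1[ARRAY1_SIZE];
      static uint8_t probe[256 * 64];   /* one cache line per possible byte value */

      /* If the branch predictor guesses "in bounds" for a malicious x, the
       * out-of-bounds read of array1[x] and the dependent load into probe[]
       * both execute speculatively. The architectural result is discarded,
       * but the cache line of probe[] selected by the secret byte stays
       * warm, and an attacker can find it afterwards by timing accesses. */
      uint8_t victim(size_t x) {
          uint8_t v = 0;
          if (x < ARRAY1_SIZE)
              v = probe[array1[x] * 64];   /* side effect: cache fill */
          return v;
      }
      ```

      Speculative execution done right, in the sense described above, would squash the cache fill along with the register result when the guess turns out wrong.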

  37. rfink13

    This just proves that the "Intel inside" sticker on your computer is really a warning label.

  38. Calin Brabandt

    "These new exploits leverage data about the proper operation of processing techniques common to modern computing platforms, potentially compromising security even though a system is operating exactly as it is designed to,"

    This is like arguing that a software bug isn't really a bug because the code is executing exactly as it was written!
