Three more data-leaking security holes found in Intel chips as designers swap security for speed

Intel will today disclose three more vulnerabilities in its processors that can be exploited by malware and malicious virtual machines to potentially steal secret information from computer memory. These secrets can include passwords, personal and financial records, and encryption keys. They can be potentially lifted from other …

  1. This post has been deleted by its author

  2. Cynic_999

    Looking at the wrong holes

    In my opinion nobody should be expecting to rely on the hardware taking care of software security, and in many ways it's a great pity that it was ever attempted. If I understand correctly the original idea was only to protect against buggy software *accidentally* causing mayhem, not to protect against a deliberate attack.

    Once any malicious software has managed to get itself running on a computer, it will always be able to do damage by one means or another. Effective security means preventing anything malicious from being executed by the CPU in the first place, just as you protect machinery from sabotage by physically shielding it (or the entire building it's housed in) in some way: you do not design it with gears that can withstand a spanner being poked between them, you prevent the spanner from getting to the gears in the first place.

    I develop embedded systems, and in critical systems, in order to protect against a malicious firmware update, all new code is signed with a private key that exists only on a secure machine within the company. All firmware contains the same public key used to check the signature before the code is flashed to memory or executed.

    Of course a malicious actor with physical access could remove the Flash memory and substitute firmware that contains no checks, but that's not the threat that's being guarded against. A person with a hammer could also crash the unit!
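
    The sign-then-verify flow described above can be sketched end to end. Here is a toy textbook-RSA signature in Python (tiny primes, raw digest signing, no padding, all values invented for illustration; a real signing machine would use Ed25519 or RSA-PSS with proper key sizes):

    ```python
    import hashlib

    # Toy keypair -- textbook RSA with small, well-known primes. Hopelessly
    # insecure; real firmware signing uses Ed25519 or RSA-PSS with big keys.
    P, Q = 999983, 1000003
    N = P * Q                          # public modulus (ships in the firmware)
    E = 65537                          # public exponent (ships in the firmware)
    D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent (offline machine only)

    def digest_int(image: bytes) -> int:
        # Hash the firmware image and reduce into the modulus (toy shortcut;
        # real schemes pad the digest rather than truncating it).
        return int.from_bytes(hashlib.sha256(image).digest(), "big") % N

    def sign_firmware(image: bytes) -> int:
        """Runs on the secure, offline signing machine."""
        return pow(digest_int(image), D, N)

    def verify_firmware(image: bytes, sig: int) -> bool:
        """Runs on the device, using only the embedded public key (N, E)."""
        return pow(sig, E, N) == digest_int(image)

    image = b"FIRMWARE v2.1 blob"
    sig = sign_firmware(image)
    print(verify_firmware(image, sig))            # True: genuine image
    print(verify_firmware(image + b"\x00", sig))  # False: tampered image
    ```

    As the comment itself notes, this only defends against malicious updates arriving through the normal channel; it does nothing against someone desoldering the flash.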

    1. Anonymous Coward
      Anonymous Coward

      Re: Looking at the wrong holes

      No, no and no. The CPU design must absolutely guarantee full isolation of virtual machines from each other and from the host OS, and must also absolutely guarantee that within a VM or the host OS, no unauthorised escalation of privilege, either direct or indirect, is possible. Otherwise what exactly is the point?

      Embedded systems? I know a thing or two about those having worked on them for 40 years, but the things you do in embedded systems often don't apply outside that limited domain.

      1. Cynic_999

        Re: Looking at the wrong holes

        "

        Otherwise what exactly is the point?

        "

        Which was exactly *my* point - there *is* no point in trying to gain such software security because it's like trying to nail jelly to the ceiling and you're not going to achieve it 100%. Anything significant that you *do* manage to achieve in the CPU itself will be at the expense of performance. If you cannot set up a VM to be as safe as the main OS, run it on a physically different machine.

        1. Anonymous Coward
          Anonymous Coward

          Re: Looking at the wrong holes

          It's horses for courses. Embedded systems are vulnerable to various forms of attack, so they are secured by means appropriate to the environment. But for general purpose computing, security as I defined in my first post MUST, and indeed can, be absolute. Otherwise the cloud as we know it is dead. Not that this would necessarily be a bad thing, but that's another discussion entirely.

          1. bazza Silver badge

            Re: Looking at the wrong holes

            I'm with Cynic_999 to a large extent with this one. Running random third party code is asking for trouble, and absolutely requires one's machine implementation to be exactly as per the manuals in order to be safe. The manuals are turning out to be mere pipe dreams...

            Trouble is that literally everything that happens today that has anything to do with the Web (i.e. almost everything) totally relies on running unsigned, random JavaScript. It doesn't matter what else we do; whilst we have this dirty habit we're going to be inviting potentially malevolent code to run on our machines. There have been some surprising successes in proof-of-concept demos written in JavaScript to exploit CPU flaws.

            I don't hear Google campaigning to do away with JavaScript. Rather the opposite in fact...

            It is possible that we'll look back on this episode and wonder what all the fuss was about communications encryption when we weren't bothering to check what code we were running at the end points.

            1. BinkyTheMagicPaperclip Silver badge

              Re: Looking at the wrong holes

              'Random third party code' ?

              You do realise this is almost Windows' entire software ecosystem, and a large proportion of non Windows apps?

              Unless you really do want all software to be signed, vetted, subjected to (possibly arbitrary) standards, plus a 30% margin, then dumped in the Windows Store?

              Obviously some sense is needed, you might not take a code snippet posted by 1337Haxx0r37 on a forum literally, but there's an awful lot of trust elsewhere that software is as it claims. Rarely is this a problem.

              1. Mark 110

                Re: Looking at the wrong holes

                @BinkyTheMagicPaperclip

                "You do realise this is almost Windows' entire software ecosystem, and a large proportion of non Windows apps?

                Unless you really do want all software to be signed, vetted, subjected to (possibly arbitrary) standards, plus a 30% margin, then dumped in the Windows Store?"

                Why are you picking on Windows? Not that I'm a fanboi (user yeah) but when I install something on Ubuntu I have as much clue what I am installing as I do on Windows, or Android or Mac. What are you trying to say? x86 CPU flaws are only going to affect Windows users?

                1. BinkyTheMagicPaperclip Silver badge

                  Re: Looking at the wrong holes

                  @Mark110

                  I'm picking on Windows because, mostly, it doesn't use an app store for its major applications (For games Steam has a lot of the market, but you should all be using GOG.com instead where possible..).

                  Plenty of Ubuntu software can be downloaded from third party sites, but most Unix these days uses a repository for installing software - at least the free stuff.

                  I'm also picking on Windows because in a hypothetical app store only model the user has less choice. With Unix you can always choose to use a third party FTP site. In a locked down Windows world software will cost more and be less diverse.

    2. Anonymous Coward
      Stop

      Re: Looking at the wrong holes

      > [ ... ] in critical systems, in order to protect against a malicious firmware update, all new code is signed with a private key [ ... ]

      I don't think you understand what these newly-disclosed vulnerabilities are all about.

      Executive Summary: Meet the new Intel CVE, same as the old Intel CVE.

      This has nothing to do with malicious firmware or removing sanity checks. These are CPU design flaws. The common denominator of all these CVEs, starting with Spectre, is careless cleanup - or even complete lack of cleanup - after speculative execution.
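
      The "careless cleanup after speculative execution" pattern can be modelled in a few lines of Python (a deliberately crude simulation with invented names and numbers, not real CPU behaviour: the cache is a set, and a "fast access" is just set membership). The architectural result of the bad read is squashed, but its cache footprint survives, and that footprint is the side channel:

      ```python
      # Crude model of a Spectre/Meltdown-class leak. No real speculative
      # execution happens here; this only illustrates the leak pattern.
      CACHE = set()   # which "cache lines" are currently warm
      SECRET = 42     # a byte the attacker must never read directly

      def victim_speculative_access():
          # The CPU speculates past a failed bounds/permission check. The
          # read's result is discarded, but the dependent load has already
          # warmed the cache line indexed by the secret -- the missing cleanup.
          CACHE.add(SECRET)

      def attacker_probe() -> int:
          # Flush+Reload-style probe: "time" an access to each possible line;
          # the warm one is fast. Here, warm simply means present in the set.
          for line in range(256):
              if line in CACHE:
                  return line
          return -1

      CACHE.clear()
      victim_speculative_access()
      print(attacker_probe())  # 42 -- recovered without ever reading it
      ```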

      > [ ... ] a private key that exists only on a secure machine within the company [ ... ].

      How secure is your secure machine if the CPU of this secure machine leaks your super-secret private key?

    3. Warm Braw

      Re: Looking at the wrong holes

      nobody should be expecting to rely on the hardware

      My first thought on reading that was to cry "rubbish", but I think you have a valid point.

      I don't think it's impossible to have a verifiably secure shared execution environment, but its price/performance is unlikely to be attractive. We haven't really thought the whole cloud idea through thoroughly enough: the economics of sharing look superficially attractive, but the security issues have largely been taken on trust. And, indeed, VMs have been used in the cloud because the technology was there, not because they were necessarily the best solution to the sharing problem.

      For large users, the problem is mostly immaterial - you'll be wanting multiple dedicated machines (or at least dedicated for the time they are provisioned). For small users, or for some types of scalability, throwing lambda functions at random compute engines may well be a better model than fractional virtual machines.

      1. Peter Gathercole Silver badge

        Re: Looking at the wrong holes @Warm Braw

        I think you're not following current deployments. "multiple dedicated machines" do not exist in large organizations any more. They're all doing virtual machine deployments because the hardware vendors are selling these expensive super-sized systems with the express intent of them being carved up into VMs.

        And here is the rub. If you cannot trust your process/vm hardware separation, you're in real trouble.

        Of course, we could go back to an operating model where we have hundreds of discrete systems rather than a couple of very large systems with dozens of VMs, but space, power, cabling etc would take us back more than a decade, and the loss of flexible sizing would result in wasted resource due to having to have different sized discrete systems for different workloads.

        Multi-user, multi-tasking systems have relied on access separation ever since they were invented more than 40 years ago. Pulling this out from under current operating systems would mean going back to the drawing board for OS design, even if it were possible.

    4. ZJ

      Re: Looking at the wrong holes

      The issue with these bugs is that even if your cloud VM is 100% secure, a malicious or careless user of another VM running on the same hardware can access your data. Security becomes out of your control.

      Some people previously questioned the security of sharing hardware with lower security instances or 3rd parties but were constantly told there was no issue....

    5. Anonymous Coward
      Anonymous Coward

      Re: Looking at the wrong holes

      It would be interesting to think how this could work for, say, a machine running a web browser. You'd need (say) all the JS that you ever ran from anywhere to be signed, or you'd want formal proofs of non-maliciousness of the JS. The second is not possible, the first is merely impractical.

      1. Cynic_999

        Re: Looking at the wrong holes

        "

        It would be interesting to think how this could work for, say, a machine running a web browser. You'd need (say) all the JS that you ever ran from anywhere to be signed, or you'd want formal proofs of non-maliciousness of the JS.

        "

        No, you just have to ensure that the JS interpreter that the browser runs when it downloads JS ensures that no JS program can ever do anything naughty. This is similar to running it in a sandbox. It should not be possible for any script or plugin etc. downloaded from a web site to access anything on the PC except a harmless portion of the system. It's the *browser software* providing the security, not the hardware, so only the browser needs to be signed and trusted.

        1. Tom Paine

          Re: Looking at the wrong holes

          ...you just have to ensure that the JS interpreter that the browser runs when it downloads js ensures that no js program can ever do anything naughty.

          There was this chap called Alan Turing who made an observation about this option.

          1. amusedscientist
            Holmes

            Re: Looking at the wrong holes

            After reading Turing's paper, look up Ken Thompson's 1984 Turing Award Lecture, "Reflections on Trusting Trust". Then meditate for a few moments on hardware that is designed and built using software that runs on hardware ...

        2. Destroy All Monsters Silver badge
          Linux

          Re: Looking at the wrong holes

          It's the *browser software* providing the security, not the hardware, so only the browser needs to be signed and trusted.

          Very NOPE. Once the browser forks off a separate process to run the JavaScript mystery meat (it DOES that, right ... right?), it's the hardware what takes over (with the kernel doing the management for the CPU or the CPU doing the management for the kernel, it depends on the point of view, very Necker-Cube like). Context switches, page tables, the whole shebang. Sure, the software is in there ALSO, but it is mostly complexifying the problem with I/O, pipes, shared memory for IPC, locks, etc., opening potential holes in what should be, at base, a simple, assuredly secure set of minimal principles for process isolation.

        3. Anonymous Coward
          Anonymous Coward

          Re: Looking at the wrong holes

          Unfortunately the JS interpreter, no matter how carefully it's written, up to and including being formally proved correct (which is not anything like plausible, but let's assume it is), relies on the hardware on which it is running to not leak information. And the hardware does, in fact, leak information.

          So either you fix the hardware, or you check and sign every bit of JS to warrant that the JS you run does not exploit the leaks in the HW. That's why I said that.

          And we're back where we started.
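
          One thing that did happen in practice, short of fixing the hardware: browsers coarsened the clocks that JS can see (performance.now() was clamped after Spectre), because the cache probe needs to distinguish a fast hit from a slow miss. A sketch of the idea in Python, with invented numbers:

          ```python
          def coarsen(t_ns: int, grain_ns: int = 100_000) -> int:
              # Clamp a timestamp down to a coarse grid, as browsers did with
              # performance.now() after Spectre (grain value invented here).
              return t_ns - (t_ns % grain_ns)

          # With a precise clock, a cached load (say ~100 ns) and an uncached
          # load (say ~300 ns) are trivially distinguishable. Through the
          # clamped clock, both measurements collapse to the same value:
          start = coarsen(0)
          cached = coarsen(100) - start    # 0
          uncached = coarsen(300) - start  # 0
          print(cached == uncached)        # True: the probe learns nothing
          ```

          This raises the attacker's cost rather than eliminating the leak (timers can be rebuilt from shared-memory counters), which is why it is a mitigation, not a fix.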

    6. amusedscientist
      Holmes

      Re: Looking at the wrong holes

      Has no one here read Ken Thompson's paper from 1984? Extend the thinking to include the chip design software, the operating system it runs on, the hardware that runs on, and ...

      https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf

    7. Locky

      Re: Looking at the wrong holes

      The important thing here is to quickly register a cool name and image for the media to jump on.

      They already have L1TF, so I propose https://ibb.co/ebfPi9

      1. onefang

        Re: Looking at the wrong holes

        While that's a cool image, I don't think anyone would want to jump in.

  3. Anonymous Coward
    Anonymous Coward

    Phrack did a good writeup of SMM a while back

    http://www.phrack.com/issues.html?issue=66&id=11#article

  4. Claptrap314 Silver badge

    Middle ground

    Mr. "I spent a decade doing microprocessor validation at AMD and IBM starting in the mid-nineties" here.

    "Horses for courses" is exactly correct, but the wrong inferences are being drawn. Yes, the embedded world, especially in industry, is very, very different from the general compute environment. As such, the rules are expected to be, and indeed are, very different.

    But complexity defies perfection. As an example, the fact that, in the example given, all code is signed is both inappropriate in the world of general computing (because global certificate management is a joke), and insufficient to guarantee 100% security at its own level (because signing can and has been spoofed).

    Suggesting that hardware can be 100% side-channel free borders on silly. As I have often mentioned, making execution times data-independent is going to require a major overhaul of the architecture, and is going to dramatically harm the performance/cost numbers.

    The solution in the datacenter is going to be mostly that people end up renting entire machines by default. The increased cost is going to be dwarfed by the performance cost of properly securing machines against this class of attack.
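
    For a feel of what "data-independent execution time" means even at the software level, here is the standard constant-time comparison trick as a Python sketch (the stdlib's hmac.compare_digest is the call to use in real code): instead of returning at the first mismatching byte, which leaks how long the matching prefix is, you always scan the whole input:

    ```python
    import hmac

    def ct_equal(a: bytes, b: bytes) -> bool:
        # Accumulate differences with XOR instead of branching early, so the
        # loop's duration does not depend on where the first mismatch sits.
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0

    print(ct_equal(b"secret-mac", b"secret-mac"))             # True
    print(ct_equal(b"secret-mac", b"secret-mad"))             # False
    print(hmac.compare_digest(b"secret-mac", b"secret-mac"))  # True (stdlib)
    ```

    Doing this for one comparison loop is cheap; doing the equivalent for every instruction a CPU executes is the architecture-wide overhaul the comment is talking about.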

    1. Anonymous Coward
      Anonymous Coward

      Re: Middle ground

      Yep, before wishing for side-channel-free, reflect that avoiding leaking timing info generally means every instruction running for the worst-case time.

      1. Kobblestown

        Re: Middle ground

        "avoiding leaking timing info generally means every instruction running for the worst-case time."

        Not necessarily, IMO. You could have a flag in the cache indicating that a certain line was loaded speculatively, with the flag cleared when it's loaded for real. While it's in the cache with the speculative flag set, it's as if it's not there at all, until the speculation is resolved. Obviously that would require hardware support, but I think it's doable. I'm not a CPU designer though.
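
        That flag scheme can be sketched as a toy model (names invented; proposals along these lines exist in the research literature, e.g. making speculative fills invisible): speculatively filled lines count as misses to any observer until the speculation retires, and are discarded outright on a mis-speculation:

        ```python
        class SpecCache:
            """Toy cache where speculatively loaded lines stay invisible
            until the speculation that fetched them is confirmed."""

            def __init__(self):
                self.lines = {}  # line -> True while still speculative

            def load(self, line, speculative=False):
                # Never downgrade an already-committed line to speculative.
                if self.lines.get(line) is False and speculative:
                    return
                self.lines[line] = speculative

            def is_hit(self, line):
                # A speculative line behaves as a miss to any timing probe.
                return self.lines.get(line) is False

            def resolve(self, line, correct_path):
                # Speculation resolved: commit the line, or discard it.
                if line in self.lines:
                    if correct_path:
                        self.lines[line] = False
                    else:
                        del self.lines[line]

        c = SpecCache()
        c.load(42, speculative=True)
        print(c.is_hit(42))   # False: invisible while speculative
        c.resolve(42, correct_path=False)
        print(c.is_hit(42))   # False: squashed with no cache footprint
        c.load(7)
        print(c.is_hit(7))    # True: a committed line is observable
        ```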

      2. Anonymous Coward
        Anonymous Coward

        Re: Middle ground

        "Yep, before wishing for side-channel-free, reflect that avoiding leaking timing info generally means every instruction running for the worst-case time."

        No it does not. It just means that the timing of an instruction run in one VM or process (depending on what you are looking at) does not depend on anything outside that VM or process. Depending on how you do it, this will affect performance, but nothing like as much as making everything run at worst-case timing. The simplest possible approach, which has the biggest performance impact, is to flush all cached data on a context switch; even this does not run everything at worst-case timing. You could imagine having separate caches per process/VM, or other solutions which do better than this.

        The exploits which access memory within the same process, memory which is supposedly protected by SW mechanisms, are different. It is just wishful thinking that SW can protect areas of memory within a process: if you run someone else's code within a process, you should assume it can access anything within that memory space.

    2. Anonymous Coward
      Anonymous Coward

      Re: Middle ground

      My experience with embedded dates back to 1980 and, as you point out, it is not relevant in this context. Where length/breadth of experience is relevant is in safety-critical software and hardware engineering. Given that the potential body count started in the hundreds, with no real upper limit on fiscal damage, you have to take a very restrictive approach to every assumption and requirement of the system(s). The best assurance I could ever provide was that things would fail safe. I see that salient point missing around Intel, at least in general computing.

      Something else missing in the discussion is that a "Cloud Provider" must make absolutely 100% certain that the client is using only a completely up-to-date operating system for the virtual instances the client is paying for and I can't see that happening at all. That requirement completely abrogates the whole idea behind IaaS, dropping all clients to the next higher abstraction of PaaS, with the cloudy provider assuring that only certified operating systems are in use as provided by the provider themselves. And that has to be equally true of every VM in operation on a virtual server. Implementing that, I'd rather not have to see done, but it's the only thing left to do if "The Cloud" is going forward as a concept.

      As someone else put it, this class of bugs is the gift that will keep on giving. It certainly ensures job security for those who are hunting the bugs and those who are defensively engineering solutions to those identified.

      1. BinkyTheMagicPaperclip Silver badge

        Re: Middle ground

        A cloud provider does not need to ensure that the virtual instances are up to date. They should strongly recommend it, for the individual customer's sake, but the priority is ensuring the host OS/hypervisor is fully patched so that there is no information leakage or DoS between VMs.

        If an individual VM is not patched, and some code involving speculative execution enables a third party to access data in that VM they are not authorised to see, that's unfortunate, but really no different from any other OS or software vulnerability.

      2. Anonymous Coward
        Anonymous Coward

        Re: Middle ground

        @Jack of Shadows

        OK, so I'm sure that none of this was done by any of the hardware designers to purposely reduce security; but, perhaps because I'm cynical, what runs through my mind immediately is that the whole "cloud" thing was pushed to get big corporations to spend on hardware, with the expectation that the "next big thing" would then be a push back to running all your own hardware. I do think that the push to cloud may have hit this security roadblock sooner than the chip sellers would have liked.

    3. ZJ

      Re: Middle ground

      Or guarantees that only VMs for the same customer of a given security level are running on the same machine. This allows Amazon/Google/etc to continue managing the nuts and bolts of balancing load between your VMs.

      1. Ken Hagan Gold badge

        Re: Middle ground

        "Or guarantees that only VMs for the same customer of a given security level are running on the same machine. "

        That eliminates a fair percentage of the economic benefits of moving stuff to a cloud you don't actually own yourself.

        With Spectre and Meltdown violating security in one direction and this SGX bug violating it in the other direction, the case for migrating your shit back to home turf is probably made. (In effect, yes it will cost a little more, but you'll be able to run all your processors at full speed rather than hobbled by mitigations, and so the equivalent hardware will cost you a lot less than it would cost (say) Amazon.)

        1. BinkyTheMagicPaperclip Silver badge

          Re: Middle ground

          Nice thought Ken, but you're effectively suggesting that all the VM hosted software is bug free. Without protections, find an exploit in the software in one VM, access all the others.. There's no way that will pass security compliance.

  5. Claptrap314 Silver badge

    Going a bit overboard, El Reg

    Certainly, Intel's attitude toward their customers and end users has been, to put it lightly, problematic. But the tone, and even some of the factual claims, are over the top.

    If consumers cared about security, we would be able to buy smart phones from RIM. The fact that the Blackberry went from almost being first in market to exiting the retail market completely means that the customer has spoken. And for decades, the customer said, "We don't care if others can snoop, we want the new shiny, and we want it NOW." Even today, if you went out on the street and did a poll, I would be shocked if 1% of consumers had any idea about these bugs.

    So, a year ago, some academics managed to exploit a weakness that every consumer chip for the previous twenty years has had, and that the entire industry was aware of. Certainly, this is a big deal in the sense that a lot of work needs to be done. And today, some more of that work has been exposed. But in terms of evil corporate behavior, Intel's abuse of monopoly wrt especially AMD is far, far worse.

    If Blackberry had 10% of the consumer market, this sort of tone might (might) be appropriate. But--THERE. HAS. BEEN. NO. DEMAND. Intel is a company. The customer is king. Always.

    1. Tom Chiverton 1

      Re: Going a bit overboard, El Reg

      Maybe if RIM hadn't handed over the BBM keys to any government that asked, they'd still have a business

    2. herman

      Re: Going a bit overboard, El Reg

      No, RIM folded when they subverted their own security and gave keys to their servers to the UAE and India. When their users realized that there is no security advantage anymore, they left RIM and bought iPhones and Androids. So RIM got a few payments from a few governments and lost their whole business.

      1. werdsmith Silver badge

        Re: Going a bit overboard, El Reg

        I'll bet that most BB users didn't and still don't know anything about those keys being shared with governments.

    3. Version 1.0 Silver badge

      Re: Going a bit overboard, El Reg

      It's simple, speed sells, security doesn't. Sure, we all pretend that we care about it but when given a choice between shiny, fast, cheap or secure - nobody asks about security.

    4. fruitoftheloon

      claptrap314: Re: Going a bit overboard, El Reg

      Ct,

      I disagree with some of your post.

      Jay.

      Typed on an excellent Blackberry KeyOne - which I bought from a high street phone shop in Exeter.

      Cheers,

      Jay.

    5. Anonymous Coward
      Anonymous Coward

      Re: Going a bit overboard, El Reg

      "If consumers cared about security, we would be able to buy smart phones from RIM. "

      Not sure that's the way it works, given the cartel which the telcos and hardware and software providers operate, especially in the mobile phone market. Customer choice (for the individual phone user) is largely irrelevant to the big players in this kind of picture.

      "Intel's abuse of monopoly wrt especially AMD is far, far worse."

      That I do agree with, absolutely. Customer choice (for the system purchaser) is largely irrelevant to the big players in this kind of picture, as evidenced by Intel paying Dell to discourage Dell from building AMD-based systems:

      https://www.sec.gov/news/press/2010/2010-131.htm

      https://www.theregister.co.uk/2010/07/26/after_the_dell_settlement/

      "The customer is king. Always."

      Maybe so, but Intel's important customers are the volume PC/server builders. E.g. Dell. See above.

    6. ZJ

      Re: Going a bit overboard, El Reg

      Maybe customers don't care or aren't aware of the dangers if people can snoop, but I doubt banks, governments and other big organisations holding sensitive info are the same.

  6. onefang

    At this rate, by the end of the year Intel CPUs will be exposed as being less secure than a wet paper bag with "free cash" printed on it in large letters.

    1. big_D Silver badge

      The problem is, the other chip manufacturers aren't much better.

      AMD, ARM and most other RISC platforms have been bitten by at least one variant of Spectre, although Meltdown seems to be a solely Intel affair.

    2. Chz

      It's a problem for their lucrative datacentre business, for sure. But from what I've heard from people who should know, virtualisation and multi-threading have both been regarded as security risks since their inception. (OoO not so much, admittedly) Most business customers are simply happy with "good enough" security, and I'd say that if you roll your own systems then Intel's processors still are just that. The problem, as emphasised in the article, is what will become of cloud computing.

      I'm not sure any of it is relevant in the consumer space, though. This sort of attack is wildly improbable in the wild. There are much easier ways to hack things than that.

      1. big_D Silver badge

        At the moment, it is looking like a single tenant on each physical cloud server, which pretty much defeats the point of going cloud.

      2. Destroy All Monsters Silver badge
        Paris Hilton

        virtualisation and multi-threading have both been regarded as security risks since their inception.

        Regarding virtualization, I can't remember anyone running around telling people "don't use this!". These kinds of ideas need time to ripen: the CPUs need to become faster, the systems more complex and optimized, until a problem pops up. Hell, virtualization started with IBM's VM series in the 70s. It's been a long time. When did the first warning surface?

        Now for multi-threading, I have found a "Hyper-Threading Considered Harmful" piece from 2004, so this point is well taken.

        1. Claptrap314 Silver badge

          ---- virtualisation and multi-threading have both been regarded as security risks since their inception.

          -- Regarding virtualization, I can't remember anyone running around telling people "don't use this!"

          You missed the point. "Risk" is not the same as "vulnerability". Lots & lots of work has gone into driving the risks of virtualisation into the dirt. And there were discoveries & recoveries from various side channels involving virtualization in the '90s. I don't know if any of them were discovered post-GA.

  7. _LC_
    Mushroom

    Refund

    Keep your crapware.

  8. BinkyTheMagicPaperclip Silver badge

    Intel only told the favoured few, again

    Windows, Linux, OS X had patches ready in advance. OpenBSD was left out in the cold, again (although due to the work they've already done, most of those vulnerabilities have no effect)

    1. Herring`

      Re: Intel only told the favoured few, again

      Netcraft confirms it ...

  9. Stuart21551

    Inventors

    do not trust intel

    1. Dal90

      Investors

      Buggy chips? 30% Performance hit to patch vulnerability...on systems meant to be run at near full utilization?

      Near monopoly on data center chips?

      Brilliant! Next few quarters looks good for everyone who makes stuff from chips through chassis!

      1. Mark 110

        It doesn't need to be patched. Security is like an onion (to quote Shrek). Not sure you need your processors patched if no one can get to them.

        Cloud presents different problems I agree.

  10. CPU

    AMD have been quiet through all this ¬_¬

  11. Anonymous Coward
    Anonymous Coward

    When will the hits stop?

    We know that the entire set of security issues in Intel CPUs is due to the fact that Intel violated proper security-level design in their CPUs for minute speed increases. We also know that there is no means possible of eliminating these silicon-level defects. What we don't know is how many criminals have used these defects to install malware on countless PCs worldwide.

  12. Missing Semicolon Silver badge
    Mushroom

    Refund due

    Since THIS vulnerability is caused by the processor NOT conforming to the operating sequence as documented in the spec sheet, this fault is absolutely an Intel design fault.

    All Intel processors exhibiting this fault are FAULTY, NOT FIT FOR PURPOSE, and should be replaced by Intel Free-of-charge.

    If the device has a non-replaceable CPU, Intel owe you a new device.

    This is absolutely as serious as the mandatory recalls that happen with cars when a safety-critical fault is found. The manufacturer bears all of the losses incurred. If Intel gets away with this one (which they will, bastards, look at the share price), nobody should buy an Intel device ever again!
