Here come the lawyers! Intel slapped with three Meltdown bug lawsuits

Just days after The Register revealed a serious security hole in its CPU designs, Intel is the target of three different class-action lawsuits in America. Complaints filed in US district courts in San Francisco, CA [PDF], Eugene, OR [PDF], and Indianapolis, IN [PDF] accuse the chip kingpin of, among other things, deceptive …

  1. Mark 85

    My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

    I would hope that the chip engineers kept their copy of the spec and hardcopies of any emails with "directions".

    1. Nick Kew
      Pirate

      My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

      Well, of course.

      No matter how evil a bigco, the lawyers are worse. Even when it's Dilbert's crowd deliberately releasing a product so harmful it hospitalised its test subjects.

      1. Unicornpiss

        Light speed

        "My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days."

        And as we know, as you approach light speed, time slows down and mass becomes infinite.

    2. G2

      re: lightspeed lawyers

      those are not lawyers, those are ambulance chasers.

      a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre, they all are, even ARM or Qualcomm. It's an industry-wide bug.

      Such a CPU has not been seen since speculative execution acceleration was introduced roughly 20 years ago. If they want a CPU without speculative/pipelined execution they should go back to 80286, or better yet 8086, processors to be "safe".

      Either that or they should wait for the industry to design and release new silicon that's safe, and since silicon development, testing and release cycles take about two years, we should have the new CPUs by 2020, or 2019 if we're lucky.

      1. G2

        P.S.: in the above post, by CPUs I mean manufacturers that offer x86/x64-compatible CPUs, not special industrial / RISC CPUs... those are another kettle of fish.

      2. Tom Paine

        a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre, they all are, even ARM or Qualcomm. It's an industry-wide bug.

        Oh, well THAT'S alright, then!

        A real lawyer... would be entirely happy to sue ARM, AMD and any other processor designer turning out substandard products as well as Intel.

      3. Roo
        Windows

        "a real lawyer with IT knowledge would have known that there is practically NO SUCH thing as a CPU on the market these days that is not affected by Meltdown and/or Spectre"

        A real commentard with CPU architecture expertise would know that there are CPUs on the market that are not affected by those bugs... :)

      4. MacroRodent

        Pentium I

        No need to go all the way back to the 286. The original Pentium and Pentium MMX did not speculate. They just executed two adjacent instructions at the same time, if the pair satisfied certain conditions. Fun for compiler writers.

      5. MarkSitkowski

        re: lightspeed lawyers

        At last! I knew that if I waited long enough, my Z80 and 8080 assembler skills would be in demand...

    3. Anonymous Coward
      Anonymous Coward

      OK, I'll bite

      How many of the claimed performance hits are estimates, and how many are based on real data? Most CPU cores these days are dynamically clocked, and not usually running at top (or turbo) speed. One would think that proper patches, as opposed to hastily hacked patches, could mitigate a lot of the performance hit by explicitly kicking the core clock into turbo when swapping between kernel & user modes.
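
      If you want real data rather than estimates, the syscall round-trip cost is easy enough to measure yourself, since that's what the Meltdown fix makes more expensive. A rough, untested sketch for Linux (getpid is picked purely as a cheap syscall; run it with and without pti=off on the kernel command line and compare):

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      /* Time a few million trivial syscalls. The page-table isolation patches
       * add a CR3 switch (and possibly a TLB flush) to every kernel entry and
       * exit, so the per-call figure goes up once they are enabled. */
      int main(void)
      {
          const long iterations = 5 * 1000 * 1000;
          struct timespec start, end;

          clock_gettime(CLOCK_MONOTONIC, &start);
          for (long i = 0; i < iterations; i++)
              syscall(SYS_getpid);          /* bypass glibc's cached getpid() */
          clock_gettime(CLOCK_MONOTONIC, &end);

          double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
          printf("%.1f ns per syscall\n", ns / iterations);
          return 0;
      }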

      1. a_yank_lurker

        Re: OK, I'll bite

        The only fixes currently available are in the OSes, so it is reasonable to expect some slowdown. The slowdown is not likely to be a problem for a home computer, phone, or office drone box, but on servers it could be noticeable and potentially very severe for websites, database access, cloud-based applications, etc. Those are all areas that affect business profitability, and that is a big issue for Chipzilla et al. Businesses will probably start suing once they have better metrics on actual costs and losses, and those numbers will probably be eye-popping.

        OS suppliers are telling users to expect a performance hit and explaining why. It is partly defensive (avoiding lawsuits) and partly giving a best estimate of what to expect, even if it is a bit vague for now.

        1. CommanderGalaxian
          FAIL

          Re: OK, I'll bite

          "The slowdown is not likely to be a problem for home computer...."

          More specifically it has been stated that you won't experience slowdowns unless you are doing a lot of disk access or network access - so if you happen to be a freelance software developer working from home then expect your compile times to increase - or if you happen to be an online games player then expect to experience degraded performance - perhaps quite significantly so.

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            it has been stated that you won't experience slowdowns unless you are doing a lot of disk access or network access

            That may have been stated (by whom?). That doesn't mean it's correct.

            The Meltdown remediations cause a performance hit for all kernel-to-user context switches. I/O is a major cause of such switches, but it certainly isn't the only one.

            Software-based Spectre remediations,[1] once they start appearing in software, will cause a performance hit every time they're encountered. As hardware assist for them is introduced in new CPUs (such as the new conditional branch instructions ARM describes in their Spectre whitepaper) the cost will drop, but for older CPUs the techniques that have been identified so far, such as memory fences and retpolines, are significantly expensive.

            [1] Aside from the initial stopgaps put into browsers, which were simply to defeat the two cache-timing mechanisms identified in the Spectre paper.
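
            To make the cost concrete, here is roughly what the fence-based class of mitigation looks like at the source level - my own sketch, not anyone's shipped code. The forced stall on every such call is exactly where the slowdown comes from:

            #include <emmintrin.h>   /* _mm_lfence() */
            #include <stddef.h>
            #include <stdint.h>

            /* Hardened table lookup: the lfence stops the CPU speculatively
             * executing the load before the bounds check has actually resolved,
             * which is what a Spectre v1 (bounds-check bypass) gadget relies on. */
            uint8_t guarded_read(const uint8_t *table, size_t size, size_t index)
            {
                if (index >= size)
                    return 0;
                _mm_lfence();        /* speculation barrier: nothing below runs early */
                return table[index];
            }

            Retpolines play the same role for indirect branches, replacing them with a return-based construct that the indirect branch predictor can't be mistrained against.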

        2. Claptrap314 Silver badge

          Re: OK, I'll bite

          I spent 10 years in microprocessor validation--basically from the start of the speculative execution era. I've got some ideas about what might be done to mitigate this sort of thing in hardware. The obvious solution for Spectre would be to add some bits of the pointer to the head of the page table into the branch history table indices. Doing this, however, would require committing to an architectural feature which really, really is not something that you want to commit to.

          The next thing to consider would be to add cache state to the speculative state that gets rolled back on a branch mispredict. You create an orphan pool for the caches, and pull those back. This would be quite expensive, depending on how completely you want to block such an attack. It is FAR from clear to me how such an orphan pool should be treated to avoid a variant of such an attack that takes the orphan pool into account.

          If the papers are accurate, and modern CPUs really do have close to 200 instructions in flight, you would need at least 600 cache lines in your orphan buffers per level of cache--probably a lot more.

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            The obvious solution for Spectre would be to add some bits of the pointer to the head of the page table into the branch history table indices

            That only helps with one of the two Spectre variants demonstrated in the original paper, and in that paper the authors point to other side channels which are probably also usable for Spectre attacks (e.g. ALU contention). Blinding the BTB would be a bandaid.[1]

            As anyone with a crypto background knows, it's really hard to reduce all your side channels to the point where they leak information too slowly to feasibly exploit. Paul Kocher showed us that more than 20 years ago, a mere 7 years after the DEC branch-prediction patent was granted.

            [1] Sticking plaster, for CPUs in the Commonwealth.

        3. CheesyTheClown

          Re: OK, I'll bite

          Not only are the fixes coming through software; hardware fixes wouldn’t work anyway.

          So, here are the choices:

          1) Get security at the cost of performance by properly flushing the pipelines between task switches.

          2) Disable predictive branch execution, slowing stuff down MUCH more... as in, make the cores as slow as the ARM cores in the Raspberry Pi (which is awesome, but SLOW).

          3) Implement something similar to an IPS in software to keep malicious code from running on the device. This is more than antivirus or anti-malware; it would need to be an integral component of web browsers, operating systems, etc. Compiled code is a struggle because finding patterns that exploit the pipeline would require something like recompiling the code to fully analyze it before it runs. Windows SmartScreen does something similar by blocking unknown or unverified code from running without explicit permission. JIT developers for web browsers can protect against these attacks by refusing to generate code which makes these types of attacks possible - see the sketch below.
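
          The index-masking idea the JIT people are reported to be adopting looks roughly like this - my own sketch of the general technique, not any particular engine's code:

          #include <stddef.h>
          #include <stdint.h>

          #define TABLE_SIZE 1024u   /* must be a power of two for the mask trick */

          /* Even if the branch predictor wrongly runs the body for an
           * out-of-bounds index, the mask clamps the speculative load to within
           * the table, so nothing outside it can be pulled into the cache. */
          uint8_t masked_read(const uint8_t table[TABLE_SIZE], uint32_t index)
          {
              if (index < TABLE_SIZE)
                  return table[index & (TABLE_SIZE - 1u)];
              return 0;
          }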

          The second option is a stupid idea and should be ignored. AMD’s solution, which is to encrypt memory between processes, is useless in a modern environment where threads are replacing processes in multi-tenancy. Hardware patches are not a reasonable option. Intel has actually not done anything wrong here.

          The first solution is necessary. But it will take time before OS developers do their jobs properly and maybe even finally implement ring 1 or ring 2 to properly support multi-level memory and process protection, as they should have 25 years ago. On the other hand, the system call interface is long overdue for modernization. Real-time operating systems (and microkernels generally) have always been slower than Windows or Linux... but they have all optimized the task switch for these purposes far better than other systems. It’s a hit in performance we should have taken in the late ’90s, before expectations became unrealistic.

          The third option is the best solution. All OS and browser vendors have gods of counting clock cycles on staff. I know a few of them and even named my son after one as I spent so much time with him and grew to like his name. These guys will alter their JITs to handle this properly. It will almost certainly actually improve their code as well.

          I’m pretty sure Microsoft and Apple will also do an admirable job updating their prescreening systems. As for Linux... their lack of decent anti-malware will be an issue. And VMware is doomed as their kernel will not support proper fixes for these problems... they’ll simply have to flush the pipeline. Of course, if they ever implement paravirtualization like a company with a clue would do, they could probably mitigate the problems and also save their customers billions on RAM and CPU.

          1. bombastic bob Silver badge
            Devil

            Re: OK, I'll bite

            "Get security at the cost of performance by properly flushing the pipelines between task switches."

            I would think this should be done within the silicon whenever you switch 'rings'. If not, the OS should most definitely do this. Does the instruction pipeline (within the silicon) stop executing properly when you switch rings, like when servicing an ISR? If not, it may be part of the Meltdown problem as well: that is, the CPU generating an interrupt which is serviced AFTER part of the pipeline executes. So reading memory generates a trigger for an ISR, but other instructions execute 'out of order' before the ISR is actually serviced...

            I guess these are the kinds of architecture questions that need to be asked by Intel (and others): what the safest way is to do a state change within the silicon, and how to preserve (or restart) that state without impacting anything more than re-executing a few instructions...

            So I'm guessing that this would need to happen:

            a) pipeline has 'tentative' register values being stored/used by out-of-order instructions, branch predictions, etc.

            b) interrupt happens, including software interrupts (executing software interrupts should happen 'in order' in my opinion, but I don't know what the silicon actually does)

            c) ring switch from ISR flushes all of the 'tentative' register values, as if those instructions never executed

            If that's already happening, and the Spectre vulnerabilities can STILL leverage reading memory across process and kernel boundaries, then I'm confused as to how it could be mitigated at ALL...

            The whole idea of instruction pipelining and branch prediction was to make it such that the software "shouldn't care" whether it exists or not. THAT also removes blame from the OS, really. But that also doesn't mean that the OS devs should sit by and let it happen [so a re-architecture is in order].

            But I wouldn't blame the OS makers at all. What we were told, early on, is that this would speed up the processors WITHOUT having to re-write software. THAT was "the promise" that was broken.

      2. CheesyTheClown

        Re: OK, I'll bite

        I agree.

        The patches which have been released thus far are temporary solutions and, in reality, the need for them is because the OS developers decided to begin with that it was worth the risk to gain extra performance by not flushing the pipeline. Of course, I haven’t read the specific design documents from Intel describing the task switch mechanism for the effected CPUs, but after reading the reports, it was insanely obvious in hindsight that this would be a problem.

        I also see some excellent opportunities to exploit AMD processors using similar techniques in real-world applications. AMD claims that their processors are not effected because, within a process, the memory is shielded, but this doesn’t consider multiple threads within a multi-tenant application running within the same process... which would definitely be effected. I can easily see the opportunity to hijack, for example, WordPress sites using this exploit on AMD systems.

        This is a problem in OS design in general. It is clear mechanisms exist in the CPU to harden against this exploit. And it is clear that operating systems will have to be redesigned, possibly on a somewhat fundamental level to properly operate on predictive out of order architectures. This is called evolution. Sometimes we have to take a step back to make a bigger step forward.

        I think Intel is handling this quite well. I believe Linux will see some much needed architectural changes that will make it a little more similar to a microkernel (long overdue) and so will other OSes.

        I’ll be digging this week in hopes of exploiting the VMXNET3 driver on Linux to gain root access to the Linux kernel. VMware has done such an impressively bad job designing that driver that I managed to identify over a dozen possible attack vectors within a few hours of research. I believe very strongly that over 90% of that specific driver should be moved to user mode which will have devastating performance impact on all Linux systems running on VMware. The goal is hopefully to demonstrate at a security conference how to hijack a Linux based firewall running in transparent mode so that logging will be impossible. I don’t expect it to be a challenge.

        1. bombastic bob Silver badge
          Devil

          Re: OK, I'll bite

          "OS developers decided to begin with that it was worth the risk to gain extra performance by not flushing the pipeline."

          read: they used CPU features as-documented to avoid unnecessary bottlenecks

          The problem is NOT the OS. It's the CPU not functioning as documented, i.e. NOT accessing memory in which the page table says "do not access it", even if it does so only briefly. The fact that a side-channel method of detecting this successful access exists does not excuse the somewhat lazy way in which Intel's silicon checks the access flags when out-of-order execution is happening. Security checks should never have been done after the fact, and yet they were.

          (my point focuses mostly on Meltdown; branch prediction is another animal entirely)
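
          To illustrate what I mean, the core of the published Meltdown technique is only a couple of lines. A deliberately stripped-down sketch - not a working PoC, since it needs a fault handler (or TSX) plus the cache-timing pass, and kernel_addr is just a stand-in for some privileged address:

          #include <stdint.h>

          static uint8_t probe[256 * 4096];   /* one page per possible byte value */

          void transient_leak(const volatile uint8_t *kernel_addr)
          {
              /* Architecturally this load never completes: it raises a page
               * fault. On affected parts the value is forwarded to the dependent
               * instruction below before the permission check takes effect... */
              uint8_t secret = *kernel_addr;

              /* ...so the dependent load still runs transiently and drags one
               * page of probe[] into the cache, indexed by the secret byte. A
               * flush+reload timing pass over probe[] then recovers the value. */
              (void)probe[secret * 4096];
          }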

          In short, Intel's benchmarks could have been *slightly* faster (compared to AMD, which apparently doesn't have THAT bug) because they delayed the effect of security checking just a *little* bit too long...

          Fixing that in microcode may not even be possible without the CPU itself slowing down. If AMD's solution was to have more silicon involved with caching page tables, so that the out-of-order pipeline's memory access would throw an exception at the proper time, then Intel may have to do some major re-design.

          So you could argue that NOT doing these security checks "at the proper time" within the out-of-order execution pipeline may have given Intel a competitive advantage by making their CPUs just 'faster' enough to allow the benchmarks to show them as "faster than AMD".

          And it's NOT the fault of OS makers, not even a little. They were proceeding on the basis that the documentation represented what the silicon was really doing. And I bet that only a FEW people at Intel knew that the security checks on memory access were being 'delayed' a bit (to speed things up?).

          It's sort of like how only a FEW people at VW knew that their 'clean diesel' tech relied on fudging the smog checks by detecting that the car was hooked up to a machine running a smog check, and altering the engine performance accordingly so it would pass. THAT gave VW a competitive advantage over other car makers. Same basic idea, as I see it.

          1. Michael Wojcik Silver badge

            Re: OK, I'll bite

            The problem is NOT the OS. It's the CPU not functioning as documented, i.e. NOT accessing memory in which the page table says "do not access it", even if it does so only briefly.

            While Meltdown does involve speculative access across privilege levels, Spectre does not. And if you believe either of the attacks violates something in the CPU specification, I'd like to see a citation. CPU specifications tend to be quite vague and leave a great deal of room for the implementation.

            In particular, memory-protection features are described in terms of their direct effects on registers and memory, not on microarchitectural features such as the caches. There's no magical guarantee that memory protection prevents ever loading anything from an unpermitted page into a CPU storage area that's not directly accessible by the executing program.

            What you wish CPUs would do, and what they're documented as doing, are two different things.

        2. smot

          Re: OK, I'll bite

          I'm quite affected by this post.

      3. Sonny Jim

        Re: OK, I'll bite

        > How many of the claimed performance hits are estimates, and how many are based on real data?

        Epic games posted a graph showing the CPU increase on a 'real' server:

        https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update

    4. handleoclast
      Coat

      My... not that this was unexpected but the lawyers seem to be approaching lightspeed these days.

      It's not so much that the lawyers are clocked any faster, but that they employ pipelining, branch prediction and speculative execution.

      I wonder when the equivalent of meltdown/spectre hacks will appear for lawyers.

    5. CommanderGalaxian

      Ambulance-chasing lawyers have their place in the scheme of things - especially when you've purchased premium kit and discover some way down the line that the only way it can run safely is by turning it into crippleware, performance-wise.

  2. Lorribot

    We have only ourselves to blame

    If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up.

    1. tim292stro

      Re: We have only ourselves to blame

      "...If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel f**k up..."

      The market had built itself around x86 and Itanium would have broken compatibility rather suddenly, leaving a CPU without any software. AMD's x64 extension to x86 was easier for software people to get on board with while they evaluated their life choices on code management. When PowerPC shortly thereafter stopped getting produced and ARM came along a lot of software companies had a bit of a come-to-Jesus moment about how fragile the CPU sector could be and realized that a bit of code-base agility was the way forwards.

      Of course this whole time, Intel learned the exact opposite lesson - rather than paving the way forwards with new, clean and well thought out ISAs, they reacted like a dog that got tasered and really dug into the trench of "Hey look! x86 is still compatible with all of your code!!! Don't think about any other ISA!!! EVER!!!" See Knights Corner/Knights Ferry, etc... They even dabbled in ARM for a bit with the XScale stuff, but never really wanted to impact their server/desktop market with that. Now Marvell has taken that business unit and run with it.

      1. Nano nano

        Re: We have only ourselves to blame

        Itanium had an x86-32 compatibility mode which allowed x86-32 code to run, albeit more slowly than might be expected at that clock speed. I had an Itanium desktop system in 2000 on which I ran IA-64 and IA-32 code and benchmarks...

        1. Anonymous Coward
          Anonymous Coward

          Re: We have only ourselves to blame

          Only the first-generation Itanium included x86-32 support in hardware. Later versions used a JIT to run x86 code in software.

    2. Updraft102

      Re: We have only ourselves to blame

      "If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up."

      Right. Intel good, AMD bad... even though somehow AMD managed not to be vulnerable to Meltdown using the same AMD64 instruction set, it's all AMD's fault, not Intel's, that Intel managed to mess it up so badly.

      1. Tom Paine

        Re: We have only ourselves to blame

        It's not AMD's /fault/ they came up with a good tactic for attacking Intel after the launch of Itanic. It does rather imply we're stuck with x64 forever, now, though, and that it's no-one's fault. How do you make that puzzled / thoughtful face emoticon?

    3. Flocke Kroes Silver badge

      Re: We have only ourselves to blame

      Itanium's first success came before it was even a product: R&D on existing 64-bit designs stopped on the assumption that they would not be able to compete with Intel. Anyone know if any of the old 64-bit designs could later have become susceptible to Meltdown? Itanium took ages to get to market, either because it was a difficult design or because, with the competition gone, there was no reason to rush.

      Itanium was not built for speed. The primary design goal was to use so many transistors that no-one would be able to manufacture a compatible product. This goal was achieved by such a large margin that the first version used too much power to become a product. Even when Itanium became a real product its performance per watt stank. Software was either non-existent or priced higher than the SLS, so sales were crap, leading to poor performance/$. Itanium was never a competitor to x86 and was a zombie incapable of eating brains before AMD64 was available.

      The 68020 had separate tables for user and supervisor address translations. It was Meltdown-proof, and the same went for the 88110. I do not know if Itanium had a sane MMU design, but it was never an option for anyone without an unlimited budget, and it did kill a bunch of architectures, some of which were Meltdown-proof.

    4. Brian Miller

      Re: We have only ourselves to blame

      No, we didn't cause Intel to "fuck up." Intel did not take any lessons learned from its experiences with other chip architectures and apply them to the x86_64. Intel has a lot of experience with RISC and non-86 architectures. Choosing to ignore design deficiencies is their action.

      1. Anonymous Coward
        Facepalm

        Re: We have only ourselves to blame

        >>Intel did not take any lessons learned from its experiences with other chip architectures

        Which architecture, for example? This has nothing to do with processor ISAs. An architecture and the implementation of an architecture are two different things. AMD's implementation of the x86_64 architecture is better when evaluated on this specific criterion. It might be worse on others. The same goes for ARMv8. Better and worse.

        >>Intel did not take any lessons learned from its experiences with other chip architectures

        On which of their other implementations did they make the same mistake, learn from it and correct it there, but then choose to ignore "the lesson" for their out-of-order CPUs?

        This is becoming an Intel witch hunt - lots of comments but little fact.

        "Intel knew all along I tell you.. all along! The CEO must be burned alive! Burn Intel.. off with it's head.."

        "I'm afraid to enter my password.. Am I alive? Is my money safe? Can I fly? My phone! Will the internet stop? Bitcoin wallets are insecure causing market crash.. It's all intel's fault I tell you!"

        "Everyone should have used <favourite vendor of the week != Intel>. That's what I've been telling to you all.. Told ya..."

        I mean, at what point does one state that The Register is spewing "fake news"? If it becomes obvious in a few years that the commentary didn't hold, or that other CPUs have even more fatal flaws, does The Register become a fake news outlet for what it printed in good faith today?

        Prove Intel's bad faith...

        1. Patrician

          Re: We have only ourselves to blame

          "....Prove Intel's bad faith...". ...

          Intel knew about both issues back in June last year but carried on selling CPUs with those flaws since then; there is Intel's "bad faith".....

    5. Voland's right hand Silver badge

      Re: We have only ourselves to blame

      If we had all done 64 bit properly with Itanium

      Next time, use "joke alert" flags. Most of the El Reg readership speculatively executes a "turn off humour gland" branch the moment they see the Itanic name.

      1. Anonymous Coward
        Anonymous Coward

        Sorry, Itanium sucks

        It avoids these issues only because it is in-order, not because it is better designed. HP's engineers thought a smart compiler could make up the difference for an in-order processor, and sold Intel on the idea, so they collaborated on what would have been PA-RISC 3.0 and it became the Itanium. Those engineers were wrong, which is why Itanium has never lived up to its performance promises.

        Not defending the turd that is x86-64, but its biggest problem is a refusal to drop backwards compatibility with old shit that goes back 40 years. Drop support for anything but 64 bit mode in hardware, handle 32 bit apps via JIT, and it would be a lot better. If you want a clean 64 bit ISA you should be looking at ARM64. It is not perfect but better by far than either x86-64 or Itanium!

        1. Joe Montana

          Re: Sorry, Itanium sucks

          ARM is not exactly a clean 64-bit architecture either; like amd64, it's an extension to a 32-bit architecture that was never intended to be extended. The only difference is that the 32-bit architecture was cleaner in the first place.

          There are much cleaner 64-bit implementations in the form of Alpha, POWER, MIPS and SPARC. Alpha was even a pure 64-bit design with no 32-bit mode at all.

          1. Anonymous Coward
            Anonymous Coward

            Re: Sorry, Itanium sucks

            Actually ARM64 is a totally independent ISA, unlike x86-64, so you can drop 32 bit mode entirely if you want. Which Apple did in the A11.

            Like I said, it's not perfect, but it is so much better than x86-64 or IA-64 that it's like comparing brownies made with chocolate with those made with dirt substituted for the chocolate. They're both edible, but one of them you will only eat if you're really hungry.

    6. Anonymous Coward
      Coat

      Re: We have only ourselves to blame

      Isn't that an unnecessarily long-winded way to spell "race to the bottom"?

    7. Steve Channell
      Flame

      Re: We have only intel to blame

      When AMD introduced the AMD64 architecture they remapped the segment registers as general-purpose registers, because nobody was using them anymore... until Google came up with NaCl (which uses segment registers to provide a hardware sandbox). Intel had a chance (with x86-64) to keep one segment register for hardware security support, but they didn't.

      The fact that the market-leading chip designer chose not to support a kernel segment (in future we'll call that hardware support for operating systems) is down to politics... NSA politics.

    8. gnasher729 Silver badge

      Re: We have only ourselves to blame

      You must be joking.

      The Itanic processor was a monstrosity. The most complex beast ever created. If it had been successful, and if every laptop and desktop today were Itanic based, we would have hit more than one iceberg already.

      1. tiggity Silver badge

        Re: We have only ourselves to blame

        With the low performance per watt, the heat generated if we had all been using Itanics would mean no icebergs were left

    9. Destroy All Monsters Silver badge
      Trollface

      Re: We have only ourselves to blame

      If we had all done 64 bit properly with Itanium like Intel told us to we would not be in this situation so really it is our own fault for following the cheap and simple AMD64 route. We made Intel fuck up.

      Why are you applying Bái Zuǒ (white left) logic to this particular case of less-than-perfect engineering?

      Anyway, can we bring freedom to downtrodden Intel? Maybe an "IA-64 affirmative action" program and subsidies to increase the dominance of IA-64. I'm totally in favour of a "Free Itanium March" through Washington D.C.!

    10. Brewster's Angle Grinder Silver badge

      Expert trolling!

      @Lorribot Your comment made me laugh, anyway. And the more I read it, the funnier it gets.

    11. This post has been deleted by its author

    12. bombastic bob Silver badge
      Thumb Down

      Re: We have only ourselves to blame

      blame the victims. nice. job.

  3. KH

    It could turn out alright for Intel. Maybe they'll sell a whole bunch of new chips because of it. Equifax certainly picked up a lot of new credit-watching-service clients when it leaked millions of users' data. Financially speaking, it's probably the best thing they ever did for their bottom line. Some punishment. People are dumb, and keep going back to the idiots that burned them, despite their lousy performance records. (Think TSA, for example.)

    Data breeches and design flaws -- the sex tapes of the business world! ;)

    1. Anonymous Coward
      Anonymous Coward

      Data breeches

      I need to get myself a pair of those for swanning about the data centre.

      1. Anonymous Coward
        Anonymous Coward

        Re: Data breeches

        They would go nice with a pair of flip-floppies.

        1. Anonymous Coward
          Anonymous Coward

          Re: Data breeches

          They'd make a nice replacement for my unsigned shorts.

          1. bombastic bob Silver badge
            Devil

            Re: Data breeches

            (voice of Samuel L. Jackson) "Honey? WHERE is my CYBER SUIT?"

      2. Loud Speaker

        Re: Data breeches

        Are they the ones with pockets big enough for full height 5 1/4" hard drives?

        1. Paul Crawford Silver badge

          Re: Data breeches

          No, those are hard drives. He is just pleased to see you.

  4. tim292stro

    Interesting about RISC-V. I'm on the mailing list for that, and I'm pretty sure, from my vague cursory eavesdropping, that RISC-V would at least be susceptible to Spectre - though they are actively brainstorming how to eliminate that possibility at the metal layer. The thread is in the ISA-DEV list, with the subject "Discussion: Safeguards on speculative execution?", for those who want to play along at home.

    Even the RISC-V ISA guys are still contemplating the entirety of the vulnerabilities, with many proposed "solutions" and simpler workarounds only leading to new attack vectors - so it makes me wonder how Intel's PR people can stand out there and say Intel CPUs are effectively immune to Meltdown and Spectre after this simple patch (which many companies and most end users will never install - and which so far doesn't include any microcode changes, and no one seems to offer a good reason why that would even work). The answer is probably simple: "because they are paid to".

    Lesson: know who is paid to lie to your face for profit, then look elsewhere for answers. ;-) Major kudos to El Reg for the un-spun distillation of the Intel press releases article.

    1. LaeMing

      I'm wondering if, in this day and age of multiprocessing chips, you shouldn't just have an entire core dedicated to running the OS with its own exclusive on-chip memory for OS code and data, while the user-space companion cores completely lack even the transistors for privileged execution. Save on all the privilege-level-managing logic and all??

    2. Michael Wojcik Silver badge

      RISC-V would at least be susceptible to Spectre

      Yup. What the RISC-V folks are saying is that "no current RISC-V designs are vulnerable", not that it's impossible to design a RISC-V CPU which is.

      It's hard to prevent Spectre in hardware. Either you give up speculative execution, or you identify all the feasible side channels. Good luck with the latter, and I'm not yet convinced that there aren't Spectre-like attacks which don't need spec-ex.

      Where you see a side channel, start by assuming it can be used to leak information to an adversary, until proven otherwise. (Where "proven otherwise" usually means demonstrating that the maximum bandwidth of the channel, after accounting for errors, is too low to be useful.)

  5. teknopaul

    timing attacks

    Can't you just reduce timer accuracy for untrusted code and get all your performance back? Not good for cloud use, but is that not OK for the rest of us?

    Or would that penalise the cloud, so everyone making megabucks from it is trying to avoid mentioning this fact?

    1. tim292stro

      Re: timing attacks

      Reducing timer accuracy will mean you can only fire events on coarser intervals, which would necessitate a slowdown (missing an event at a near time slot would mean you now wait much longer for the next) - and let's be frank, while the "impact" may not be felt on the local client machine's CPU, most of us homebodies are touching something on a VM or a database in a datacenter over a network, so IMHO we will actually feel it at the screen, even if only subtly. I believe, yes, the datacenter people are going to spend most of their time talking up how there is no security impact, and stating that the performance slowdown is "moderate but your mileage may vary" (about as non-binding as they can get).

      1. Flocke Kroes Silver badge

        Re: timing attacks

        Who needs to fire off events at precise _times_? The usual events are "required data is in memory" or "disk has confirmed that the data will be read back as required even if the power fails right now". Delete the high resolution timer, and the vast majority of software would not even notice.

        Back when I was a PFY, the scheduler interrupt was 50Hz - if you hogged the (only!) CPU for 40ms the OS would give something else a turn. Even back then, if the current process stalled, the scheduler would pick a different unstalled process immediately. Later, Intel CPUs got caches huge enough to hold multiple copies of the enormous state required by the x86 architecture, so the tick could be moved to 1000Hz without continuously thrashing the cache. (Linux got tickless for battery life.)

        Databases need to put requests into an order, and I always assumed they used a sequence number for that rather than the time. Make has difficulty with FAT's 2-second (!) resolution last-modified timestamps. I am sure uuid and NTP actually need nanosecond accuracy, but apart from a few oddities the only contexts I have actually seen using nanosecond accuracy are performance monitoring for optimisation and malware cache-timing attacks.

        Most software does not touch the high resolution timers at all, so I too am interested in why restricting access to them is not a solution.

        1. Voland's right hand Silver badge

          Re: timing attacks

          Who needs to fire off events at precise _times_?

          A lot of people funnily enough. You would be surprised how much timer activity is involved in inter-process communications and messaging.

          Most software does not touch the high resolution timers at all,

          As someone who has written a significant portion of the code for the high-res timers in one of the architectures in the kernel, I can tell you that you are talking out of your arse.

          The moment you go into the land of multi-threading and AIO, libc will start using them even if your app is not. On average, a nearly empty embedded Linux system which has nothing besides a busybox instance with a shell will have 4-8 high-res timers active at all times. The moment you load an average desktop environment and open a web browser you are looking at hundreds. For example, my desktop, which is doing bugger all at the moment, is showing 228 active ones.

          1. Flocke Kroes Silver badge

            Re: Voland's right hand

            Thank you.

          2. Claptrap314 Silver badge

            Re: timing attacks

            High-resolution clocks to user space have been a known source of side channel attacks for a long time. (decade???) Moreover, nanoseconds are synthetic unless your distances are measured in single-digit inches. If your code needs this stuff, it is either very specialized or wrong. If your desktop is running that many, you probably are running some garbage code. AFAIK, user space has been limited to 1ms because of this.

            Amazingly, perhaps, it turns out that even a 1ms timer is probably sensitive enough to dig this stuff out--you just do the thing many times & watch the averages.

            Getting this right will be HARD.
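
            To see why even a coarse timer is enough, batch and average - a throwaway sketch, names made up:

            #include <stdio.h>
            #include <time.h>

            static void do_something(void)      /* stand-in for the operation being timed */
            {
                volatile int x = 0;
                for (int i = 0; i < 1000; i++)
                    x += i;
            }

            int main(void)
            {
                const long reps = 100000;
                struct timespec a, b;

                /* Even if this clock ticked only once per millisecond, batching
                 * the operation and dividing gives a per-operation figure far
                 * below the tick size; repeat and average to beat the noise. */
                clock_gettime(CLOCK_MONOTONIC, &a);
                for (long i = 0; i < reps; i++)
                    do_something();
                clock_gettime(CLOCK_MONOTONIC, &b);

                double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
                printf("%.2f ns per operation\n", ns / reps);
                return 0;
            }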

          3. teknopaul

            Re: timing attacks

            I'm impressed as usual by Reg commentards' knowledge. One follow-up from the comments below: how many of these hi-res timers are in _untrusted_ code (outside the cloud and VM context), e.g. JavaScript in your browser, or perhaps some other source that gives timing info without directly calling timer APIs? Presumably if privileged, trusted code uses hi-res timers this is not a problem outside the cloud? Or is it? So the question becomes: are hi-res timers needed in _untrusted_ code on the desktop?

            Let's assume for the sake of argument that downloaded userland exes _are_ trusted, but JS and sandboxed code is not.

            And...

            fight.

    2. Anonymous Coward
      Anonymous Coward

      Re: timing attacks

      > Can't you just reduce timer accuracy for untrusted code and get all your performance back?

      There is a good discussion on this at LWN this week - https://lwn.net/Articles/742702/

      As comments there are by People Who Know, I can only understand every fifth sentence or so, but it boils down to the fact that nothing is simple any more.

      1. handleoclast
        Thumb Up

        Quote of the year

        Nothing is simple any more.

        Sums it up nicely. Sums everything up nicely.

        1. Stoneshop

          Re: Quote of the year

          Nothing is simple any more.

          Sums it up nicely. Sums everything up nicely.

          And outside of the domains where things aren't simple because of the subject matter, people are hard at work (and often failing, luckily, but still) to make simple things not simple any more. E.g. Juicero, the Otto lock and many other such ventures.

      2. stephanh

        Re: timing attacks

        > Can't you just reduce timer accuracy for untrusted code and get all your performance back?

        Note that this is currently being implemented as a software mitigation for Javascript in browsers.

        https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/

        Interestingly, they also needed to disable SharedArrayBuffer (shared memory between two threads), because a second thread which is simply incrementing a counter in shared memory can be used to synthesize a high-resolution timer.

        For native code this would effectively require forbidding (shared-memory) multi-threading.
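
        The counter-thread trick is trivial to reproduce natively too, which is why coarsening the official timer APIs alone doesn't close the hole. A rough sketch with POSIX threads, purely illustrative:

        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>

        /* A second thread spinning on an increment becomes a makeshift clock:
         * reading ticks before and after an operation gives a timestamp whose
         * resolution is roughly one loop iteration, far finer than any
         * deliberately coarsened official timer. SharedArrayBuffer handed
         * JavaScript the same capability. */
        static volatile uint64_t ticks;

        static void *counter(void *arg)
        {
            (void)arg;
            for (;;)
                ticks++;
        }

        int main(void)
        {
            pthread_t t;
            pthread_create(&t, NULL, counter, NULL);

            uint64_t before = ticks;
            /* ... operation whose duration we want to measure ... */
            uint64_t after = ticks;

            printf("elapsed: %llu ticks\n", (unsigned long long)(after - before));
            return 0;
        }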

      3. CrazyOldCatMan Silver badge

        Re: timing attacks

        but it boils down to the fact that nothing is simple any more

        The only people that still think that things are simple are children and politicians. The former because they don't yet know any better, and the latter because it hides the fact that they don't know anything about what they are talking about.

    3. This post has been deleted by its author

      1. Nick Ryan Silver badge

        Re: timing attacks

        High-resolution timers in JS? Given that JS is interpreted and does not have direct memory access, just how is it going to be used to trigger Spectre, let alone Meltdown, flaws? The asm code for these is relatively trivial; however, unless one can trick an interpreter, or even a JIT compiler, into generating specific asm code, how is it to be executed?

        On the other hand there may be an issue with exploits allowing the execution of arbitrary (asm) code on a system - however these won't need to rely on JS for their timers... But executing arbitrary code on a system is a problem anyway.

        /confused

        1. Michael Wojcik Silver badge

          Re: timing attacks

          Given that JS is interpreted and does not have direct memory access, just how is it going to be used to trigger Spectre, let alone Meltdown, flaws?

          If only there were a freely available paper that explained this...

          First, note that modern browsers all JIT Javascript into machine code. It's not interpreted by any of the major browsers, at least in their default mode.

          The Javascript PoC is for Spectre, not Meltdown. And it's quite straightforward. All you need is the pieces required for a cache-timing attack (easily achieved in Javascript with a high-res timer and a byte array), and a suitable gadget, which you can write in the script itself. The gadget just does a conditional read from memory; by (mis-)training the branch predictor, you can get the CPU to speculatively execute the load with an out-of-bounds address.

          The Spectre paper authors disassembled the JITed Javascript so they could tweak the source to produce the instruction sequence they wanted, but that's just a shortcut; a script could easily include different variations.
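
          For reference, the gadget really is that small. Paraphrasing the victim function from the Spectre paper (variable names follow the paper; this is a sketch, not the authors' exact code):

          #include <stddef.h>
          #include <stdint.h>

          static uint8_t  array1[16];
          static uint8_t  array2[256 * 4096];   /* probe array for the cache-timing side */
          static size_t   array1_size = 16;
          static volatile uint8_t temp;         /* stops the load being optimised away */

          /* Train the predictor with lots of in-bounds x, then pass an
           * out-of-bounds x: the branch is mispredicted as taken, array1[x]
           * speculatively reads a byte it shouldn't, and the second load leaves
           * a per-value cache footprint in array2 to be timed afterwards. */
          void victim_function(size_t x)
          {
              if (x < array1_size)
                  temp &= array2[array1[x] * 4096];
          }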

  6. Anonymous Coward
    Anonymous Coward

    Optimistic me...

    ...hopes that maybe this will make intel learn their lesson. Maybe we can even get the IME spy-computer thrown out together with this iteration of bollocks.

    1. CrazyOldCatMan Silver badge

      Re: Optimistic me...

      hopes that maybe this will make intel learn their lesson

      Nah. They'll just learn to hide it better in future.

      </cynical mode>

  7. Cl9

    Should Intel (and other chip makers) be held responsible for hardware flaws?

    It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug. Modern CPUs are just so incredibly complex, containing billions of transistors, that I don't think it's feasibly possible to create a 'flawless' CPU, there's always (and always will be) bugs and flaws, discovered or not.

    I'm also not sure if you could pin the potential performance loss on Intel either, as it's technically the operating system vendor who's implementing the slowdown.

    Don't get me wrong, I've got an Intel CPU myself, and I can't say that I'm too happy about this. But I can't really blame Intel for it either. And yes, Intel's PR release was absolute BS.

    1. Simon 15

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Answer: Yes

      Perhaps if Intel doesn't have sufficient expertise in designing and fabricating processors, they should outsource the job to another company such as AMD or ARM, leaving them to focus on their core business instead, which is of course.... doh!

      1. Cl9

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        But both AMD and ARM are also vulnerable to related flaws (both to do with speculative execution), such as Spectre. I'm not sure what your point is.

    2. Boris the Cockroach Silver badge
      Meh

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Depends when they knew about it

      If it can be shown that intel manglement knew about the bug and yet kept on baking/selling chips regardless then I'd suspect they wont have a leg to stand on

      But the lights are on late at Intel HQ, and the nearest office supplies shop to them has just sold out of paper shredders..... allegedly

      1. Roo
        Windows

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        "If it can be shown that intel manglement knew about the bug and yet kept on baking/selling chips regardless then I'd suspect they wont have a leg to stand on"

        There are plenty of published show-stopper errata that show Intel doing exactly that over several decades. Customers typically decide that the expense of the lawsuit combined with the publicity that shows their products/services are impacted by it would do more damage than the errata...

    3. coconuthead

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      Intel's x86-64 CPUs are more complex than CPUs need to be in order to support backward compatibility to the architecture. Their whole marketing strategy is "the complexity doesn't matter, we can do that and make it work". Well, it turns out they couldn't and didn't, but in the meantime other potential competitors either have reduced market share or were never developed.

      1. Michael Wojcik Silver badge

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        Intel's x86-64 CPUs are more complex than CPUs need to be in order to support backward compatibility to the architecture.

        That has nothing to do with Meltdown or Spectre.

        Meltdown exists because of an architecture choice: allow speculative loads across privilege boundaries (i.e., don't make a privilege check before allowing a speculative load). That's why AMD x86 CPUs don't suffer from it - it's just one of the choices you can make when implementing the x86-64 ISA.

        Spectre exists because the CPUs provide speculative execution and caches. So do pretty much all general-purpose CPUs. x86-64 would never have survived in the market, at least not for server systems, if it hadn't adopted those. Lack of speculation is one of the reasons Itanium had so much trouble bringing its performance up, and with the very deep pipelines of x86 (necessary for adequate performance with backward compatibility), a non-speculative x86 would not have been competitive.

        x86 has had spec-ex since 1995, with the Pentium Pro. In fact it had it back in 1994 with the NexGen Nx586, but that was never widely used and NexGen was purchased by AMD in '95. (As far as I know, neither the earlier NexGen design nor competing chips from AMD and Cyrix did spec-ex, though the techniques date back at least to the late 1980s.)

    4. SkippyBing

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      'Modern CPUs are just so incredibly complex, containing billions of transistors, that I don't think it's feasibly possible to create a 'flawless' CPU, there's always (and always will be) bugs and flaws, discovered or not.'

      Replace CPU with aircraft* and ask if you wouldn't blame Airbus for them not being flawless?

      *I was going to say 'and transistors with parts' but I think both apply these days.

      1. Anonymous Coward
        Anonymous Coward

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        It's widely reported that Intel was informed about this flaw in June 2017.

        I kinda missed, at the time, the Intel announcement that they were stopping sales of all CPUs effective immediately and recommending you get an AMD instead.

        IANAL, but it seems they have been knowingly selling defective products since June.

        AC because whatever you can say about Intel's chip designers, their lawyers are top notch.

      2. Chris Miller

        @SkippyBing

        Airbus software is not flawless, nor is Boeing's or any other large, complex, safety-critical software. Humans can't write millions of lines of perfect code, and I suspect that doing so will always be infeasible.

        But (of course) safety-critical systems are (or, at least, are capable of being) developed to higher standards than 'normal' software. It would be possible for Intel or any chip manufacturer to adopt similar development processes, but the effects would be to significantly slow development, while simultaneously increasing costs. It may be that there are loads of customers out there looking to pay a lot more for a chip that's two generations behind - but I somehow doubt it.

        1. Doctor Syntax Silver badge

          Re: @SkippyBing

          "But (of course) safety-critical systems are (or, at least, are capable of being) developed to higher standards than 'normal' software."

          And what's the point if the H/W it runs on isn't?

          1. Anonymous Coward
            Anonymous Coward

            Re: What about auto-updates?

            I don't get the comparison to aircraft: they are specifically sold with safety assurances and hence, as commented above, use a different development process.

            The CPU is a part, and it is the procuring entity/system manufacturer that is responsible for assessing suitability and fitness for purpose.

            If Intel claimed suitability this is a different matter. No one has pointed to any evidence of this.

            You can ask why SW like Linux and Windows stores critical data in such a fashion to gain performance. Intel will say this is not a secure implementation and that the OS vendors misrepresented performance by compromising security.

            The corollary here would be that insecure CPUs are illegal to sell. Who said so? Which law forbids this?

            Bad PR for Intel, yes, but this is not remotely the same as being illegal.

        2. Roo
          Windows

          Re: @SkippyBing

          "but the effects would be to significantly slow development"

          I suspect Intel's "Tick/Tock" development model with releases being pegged to a particular date in time years before they are even developed contributes to the problem. Intel has been pushing stuff out of the door before it's fully baked to meet marketing deadlines for a while now.

          1. Chris Miller

            Re: @SkippyBing

            I suspect Intel's "Tick/Tock" development model with releases being pegged to a particular date in time years before they are even developed contributes to the problem.

            You may well be right. But that's just another aspect of the need to get your latest fastest model out into the market asap, otherwise customers will start switching to your competitors. We see the same problem with software being released before it's quite ready. Customers don't really want security (though they will scream about it, but only after the event): they can't see it, they can't measure it; it slows things down - and they're certainly not prepared to pay extra for it.

      3. close

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        The biggest issue here is that there is no proper fix. You can't replace a few transistors, and you can't download a new CPU block; all you can do is disable some of the built-in functions or make them work in ways other than intended.

        In a plane you swap the offending part or update the offending software, and the end result is as expected, not something gimped that only sort of does the job. With billions of CPUs out there it will take years to hear the end of this.

    5. Doctor Syntax Silver badge

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      "It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug."

      So if you catch a nasty dose of food poisoning the restaurant with the poor hygiene shouldn't be held responsible because it wasn't an intentional bug?

      1. Cl9

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        Food hygiene processes are relatively simple and there's a set list of guidelines and requirements that need to be met.

        This is not the case with CPU design so you can't really compare the two. If a restaurant is breaching existing food safety standards, then of course they should be held liable. There are no such requirements or standards for CPU design.

    6. Ken Hagan Gold badge

      Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

      "It's an interesting one, but I don't personally think that Intel should be held liable for this, as it's not an intentional bug."

      I agree it is interesting, and I might even agree that Intel shouldn't be held liable, but if I did then I would have a different reason for doing so. The issue is not intent, but negligence. I don't think anyone close to the action is suggesting that Intel knew about this prior to mid-2017. It would be nice to think that our spooks knew about it before then, and distressing to imagine that the other side's spooks knew about it, but in neither case would we expect Intel to be informed. So the question is: is the flaw sufficiently obvious that we can call it negligence. Well ... given that it took just about everyone 20 years to work it out, I don't think we can call it obvious.

      Oh, and I also agree that Intel's PR release was BS. I'd be happy to see them prosecuted for *that*. I'm also pretty unhappy about the timescale surrounding their CEO's share dealings.

      1. Mark 85

        Re: Should Intel (and other chip makers) be held responsible for hardware flaws?

        I'm also pretty unhappy about the timescale surrounding their CEO's share dealings.

        I read in one news article that the SEC will be looking into this.

  8. GrapeBunch

    In one article I read that the problem affected all Intel processors manufactured since 1995. Or is it Intel 64-bit processors since 1995? Or some other subset?

    1. Steve Davies 3 Silver badge

      Re: Which Intel CPU's

      AFAIK,

      Atoms are immune because they don't use branch prediction or out-of-order execution.

      Everything else is vulnerable.

      1. Pete 47

        Re: Which Intel CPU's

        'Fraid not: pre-2013 Atoms are, but later versions like the Z36/Z3700 series (such as the one in my Dell tablet) support OoO execution.

        Hopefully given the usage profile of such devices the performance hit from the updates shouldn't be overly noticeable.

      2. MarcC
        Facepalm

        Re: Which Intel CPU's

        Not quite. I've researched all back issues of The Register and came up with this:

        "Deep inside Intel's new ARM killer: Silvermont"

        "the new Atom microarchitecture has changed from the in-order execution used in the Bonnell/Saltwell core to an out-of-order execution (OoO), as is used in its more powerful siblings, Core and Xeon, and in most modern microprocessors."

        http://www.theregister.co.uk/2013/05/08/intel_silvermont_microarchitecture/?page=2

        1. Claptrap314 Silver badge

          Re: Which Intel CPU's

          Out-of-order is not the same as speculative. You can do OoO for everything but branches and computed loads, and no instructions will be speculative. Better dig deeper into the specs.
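
          A minimal C sketch of that distinction, assuming a Spectre-v1-style bounds-check pattern (the array names and sizes are purely illustrative, and the cache side channel needed to actually observe anything is not shown): the branch is where speculation enters, because the CPU may predict it taken and issue the dependent loads before array1_size has arrived from memory.

          #include <stddef.h>
          #include <stdint.h>

          uint8_t array1[16];
          size_t  array1_size = 16;
          uint8_t array2[256 * 4096];
          volatile uint8_t sink;

          /* OoO alone only reorders independent work; speculation is what lets
           * the two loads below run before the bounds check has resolved. */
          void victim_function(size_t x)
          {
              if (x < array1_size) {                /* predicted, then verified  */
                  sink = array2[array1[x] * 4096];  /* may execute speculatively */
              }
          }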

  9. Sceptic Tank Silver badge
    Terminator

    Stephen Hawking

    This morning it occurred to me: Stephen Hawking has that Intel Inside logo on the screen attached to his wheelchair. So I'm assuming his speech synthesizer runs on some variant of Intel silicon. If he starts talking 30% slower he's going to sound like he is brain damaged.

  10. Anonymous Coward
    Anonymous Coward

    Is Intel guilty of negligence?

    According to the reported CPU details, Intel could be deemed guilty of gross negligence for failing to enforce proper privilege-level checks during execution in an effort to gain a minor performance increase. Neither the AMD nor the ARM CPU architectures suffer from this lapse of good judgment. As a consequence, Intel is the only brand of CPU to actually suffer a ~30% performance hit, because Intel CPUs can only mitigate the security issue via software. Lawyers and consumers are bound to believe that Intel should be held accountable for their willful negligence in knowingly selling an insecurely designed CPU.
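
    Whether the hit gets anywhere near ~30% depends on how often a workload crosses the user/kernel boundary, since the KPTI software mitigation adds a page-table switch on every kernel entry and exit. A rough, Linux-only C sketch (illustrative timing, not a formal benchmark) that times raw syscall round trips, which is where that extra cost shows up:

        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        /* Time N raw getpid syscalls; using syscall() avoids any libc caching,
         * so each iteration really enters and leaves the kernel. Run it before
         * and after applying the Meltdown patches to see the difference. */
        int main(void)
        {
            const long N = 1000000;
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                (void)syscall(SYS_getpid);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                      + (double)(t1.tv_nsec - t0.tv_nsec);
            printf("%.1f ns per syscall\n", ns / N);
            return 0;
        }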

    1. Gordon 10

      Re: Is Intel guilty of negligence?

      Except we know that's not true. Both Arm and AMD are vulnerable too, albeit to a lesser extent, which means there are at least some genuinely novel and unforeseen aspects to these vulnerabilities.

      Unless Intel did nothing for six months, I'm not sure they deserve the ambulance chasing. I rather suspect the back channels between the chip designers have been running hot for the last six months.

      These flaws are so severe that a reasonable case for secrecy can be made as long as those who needed to know (OS designers mostly) were kept informed.

      1. HieronymusBloggs

        Re: Is Intel guilty of negligence?

        "as long as those who needed to know (OS designers mostly) were kept informed"

        Looking at recent posts on the OpenBSD and other *BSD mailing lists, this did not include *BSD developers. The whole thing smells bad.

        1. Ellipsis

          Re: Is Intel guilty of negligence?

          > did not include *BSD developers

          Didn’t they burn their bridges by going public early with a patch for KRACK last year?

  11. Anonymous Coward
    Anonymous Coward

    Class action

    Ugh, class action. The only real winners are the lawyers. With a secondary victory for the corporation being sued. The losers are everyone who actually suffered a loss from the corporation's wrongdoing.

    Intel will be made to pay out millions of dollars, which may affect their stock price a little for a while, but ultimately, they can afford it. And having settled the class action, it's case closed for them, no-one can come after them for it any more.

    Meanwhile the affected millions of consumers each get their share of the millions of dollars (minus lawyers' fees), probably amounting to about 99 cents each.

    1. a_yank_lurker

      Re: Class action

      Not sure about these suits; they seem to be SOHO-based. I suspect the real damage will come when someone like Slurp, Failbook, or the Chocolate Factory sues, as they will have some eye-popping metrics to show the financial damage Chipzilla caused them. The first round of class-action suits will look like chump change compared to some of the later ones.

      Also, later suits will benefit from the pretrial discovery that will occur in the class-action suits, as it will expose more of Chipzilla's legal weaknesses.

  12. D Moss Esq

    What didn't I know and when didn't I know it?

    ... the great and much-missed Ronald Reagan's questions to his officials allegedly before testifying on Contragate ...

    Similarly here, Meltdownwise and Spectrewise, what didn't Intel, AMD, ARM et al – "the chipsters" – know and when?

    They're going to have a hard time proving to the courts that they had no idea that there was a problem unless they can prove that their chip designs are unchanged since before 1971, 47 years ago:

    "Security

    "Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another. For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user was using the system, this would result in possibly wrong output - but with multiple users, this might mean that other users got to see information they were not meant to see.

    "To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process.

    "The first international conference on computer security in London in 1971 was primarily driven by the time-sharing industry and its customers."

    1. Anonymous Coward
      Anonymous Coward

      Re: What didn't I know and when didn't I know it?

      Erm, this is a SW guideline, so your evidence is actually exactly what Intel, AMD and ARM will use: the SW didn't use the HW right in spite of guidance dating back to 1971.

      If you have ever worked with bare metal and custom ASICs, you'd know this is rather common. One side gets blamed, and the fence sits between HW and SW.

      When a workaround exists, SW is at fault, because the HW people will just say the workaround was always the right method... ("you can't use the HW the way you were for performance unless you are willing to take the security risk; we'll improve the documentation next time...")

      1. D Moss Esq

        Re: What didn't I know and when didn't I know it?

        The prosecutors can use the article I cite to show that there was a problem there, back in 1971 at the latest, well known to all qualified computer scientists, of whom the chipsters must employ hundreds if not thousands.

        The chipsters' lawyers may well scoff and say "read the article, it says the problem can only be solved by software". The existence of the problem has nevertheless still been established.

        The prosecutors can then quote any number of erudite sources headed by ElReg to the effect that the solutions to Spectre and Meltdown are now known to be partially or entirely hardware, the chipsters have a case to answer and the prosecutors reserve the right to bring charges against the purveyors of operating systems as well in the future.

        1. Anonymous Coward
          Anonymous Coward

          Re: What didn't I know and when didn't I know it?

          "can only be solved by software"

          The "only" is a garnish there, that is not stated. Any more than it can "only" be solved by hardware, for a lawsuit.

          "all qualified computer scientists, of which the chipsters must employ hundreds if not thousands."

          So a lawsuit needs to show that Intel does not hire qualified people, or that those scientists raised the issue and were ignored, and that this was a promised product feature.

          "the solutions to Spectre and Meltdown are now known to be partially or entirely hardware"

          Not quite. The HW-"accelerated" fixes are indeed in HW. If there were no SW solution, yes, it could get into faulty-HW territory, depending on what was promised. That is not the case here.

          Was there wilful negligence? i.e. was it common knowledge for OoO CPU design, or was Intel informed and did it ignore the issue? That is what the source cited above implied - that this was known for *HW OoO CPU design* since 1971.

          If it were common knowledge, Intel should have been the only vendor affected, but that is not so.

          If Intel was informed and still neglected it in subsequent designs, OK, but there is no evidence of that yet. So innocent for now.

          If Intel specifically claimed guaranteed security while knowing of the weakness - again, no evidence.

          If Intel can show in their design flow that they have taken reasonable steps to review designs etc., that their design process is typical for the industry (or better), and that the problem was not recognisable, then it isn't a legal problem. It is a PR problem.

          It just becomes a lesson learnt in CPU design and engineering. As with all human-created endeavours, CPUs are imperfect too.

          The lawsuit seeks to assign guilt, but the verdict seems to be already here if you read the comments and articles. "Intel guilty" - I find this irrational.

          It is a fact that every product has bugs, so this line of thinking, which equates a bug with guilt, would mean every manufacturer is guilty of selling faulty products. That is not a tenable stance.

          There needs to be more evidence than just finding a bug.

          1. D Moss Esq

            Re: What didn't I know and when didn't I know it?

            Understand, LOL123, I'm just trying to speed things up by looking ahead to what the courts will make of these cases.

            The courts will acknowledge that all of human endeavour is imperfect. They will know that many people once believed the earth to be at the centre of the universe, and that they were all wrong.

            The courts may nevertheless be unimpressed by a chipster claim in this case that "these things happen". Professionals are meant to be masters of the body of knowledge in their profession. Lawyers, for example. And chip designers.

            Chip designers might be expected to know about the need for hiving off one process from another in a multi-tasking multi-user environment, a need identified no later than 1971 as noted. They might be expected to know that that's why the architecture is what it is. There's a kernel and there are outer rings with gradually lower and lower permissions. Since when? 1964, apparently.

            "We know that boundaries between processes must be enforced for security reasons but when we provided a way to ignore those boundaries it never occurred to us that there could be security implications" is not a powerful defence.

            The prosecutors will argue that there is no powerful defence. Any competent professional in the profession should definitively have known about the problem and understood it. In 1964. Or 1971. Or, at a pinch, 2003.

            4 April 2003, to be precise, when Keir Fraser of University of Cambridge Computer Laboratory (CCL) and Fay Chang of Google Inc. published Operating System I/O Speculation: How two invocations are faster than one. Under the heading Safety, they say: "It is easy to ensure that speculative execution is safe because operating systems already severely restrict the ways in which different processes can affect one another. As a result, a system needs to restrict speculative processes in only three simple ways to ensure safety".

            The chipsters' designers are academics: they know about university and industry research, they may even read it, and that CCL/Google paper won't have been missed. And yet they failed to do the easy job. The defence is tottering by this stage. That's my pre-fetched take on the matter.

            Talking of which, have you seen Exploiting the DRAM rowhammer bug to gain kernel privileges on Google's Project Zero site: "When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory"?

            "... all of physical memory". Ouch.

            That was published in March 2015. God but the prosecutors are going to have fun with the claim that the problem came as a total surprise to the chipsters yesterday.
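
            For reference, the core of what that write-up describes is just a tight loop that keeps two rows in the same DRAM bank being opened and closed. A minimal x86 C sketch of the hammer loop (illustrative only; choosing two addresses that actually land in different rows of the same bank is the hard part the paper covers, and is not shown here):

            #include <stdint.h>
            #include <emmintrin.h>   /* _mm_clflush, _mm_mfence (SSE2) */

            /* Read two addresses over and over, flushing them from the cache each
             * time so every access goes all the way to DRAM. With the right pair
             * of addresses and enough iterations, bits in neighbouring rows can
             * flip - which is how the PTE corruption was induced. */
            static void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b,
                               long iterations)
            {
                for (long i = 0; i < iterations; i++) {
                    (void)*row_a;
                    (void)*row_b;
                    _mm_clflush((const void *)row_a);
                    _mm_clflush((const void *)row_b);
                    _mm_mfence();
                }
            }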

            1. Anonymous Coward
              Anonymous Coward

              Re: What didn't I know and when didn't I know it?

              Those are strong words - should, must, etc.

              A paper stating "it must be secure" has little value. Any more than one saying "steps must be taken to ensure world peace."

              It has to be a paper that is practical, specific to the issue, and peer-cited, so that it isn't an obscure one.

              i.e. ensure that during OoO execution all HW access-protection rules are honoured and considered. Even saying "HW access-protection rules must be honoured" on its own is no good. There has to be something specific that constrains the attack surface so that the development cost is sensible. That would be a valuable paper, because that is the challenge in getting secure designs: when you have 100 million lines of code, how do you make it secure?

              "Professionals are meant to be masters of the body of knowledge in their profession. Lawyers, for example. And chip designers."

              Fact: all major vendors are affected - Intel, AMD, ARM, Apple, Qualcomm, etc. - i.e. this issue was not part of the typical knowledge of the profession. All face the issue in different manifestations.

              From Intel's T&Cs of sale, their caps, emphasis mine:

              "SELLER SPECIFICALLY DISCLAIMS THE IMPLIED WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY AND FITNESS FOR A PARTICULAR PURPOSE, WHETHER OR NOT SELLER HAD REASON TO KNOW OF ANY SUCH PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHT OF A THIRD PARTY. NO ORAL OR WRITTEN INFORMATION OR ADVICE GIVEN BY SELLER OR AN AUTHORIZED REPRESENTATIVE SHALL CREATE A WARRANTY OR IN ANY WAY INCREASE THE SCOPE OF THIS WARRANTY. PURCHASER ACCEPTS THE RISKS OF USE AND SUCH RISKS FALL SOLELY ON PURCHASER. "

              That's the Intel lawyers being masters of their profession. By the same logic, if Intel lawyers are masters of their profession, there is no court case. But that is not proven yet either.

              Both the chip designers and the lawyers have taken steps typical of their professions; that is what the present evidence says.

              1. D Moss Esq

                Re: What didn't I know and when didn't I know it?

                I'm trying to establish only one point – that the chipsters will have the devil of a job making anyone believe that the existence of the Spectre and Meltdown problems came as a surprise to them.

                What didn't they know? Anything. When didn't they know it? Ever.

                The rest of the world, the members of which do not spend all day every day ruminating about computer architecture, has not spent 50 years banging on about the issue because we thought/assumed that the chipsters had it gripped. How wrong we was. Bang OoO.

                NB

                Lenin had 100,000 people executed by the Cheka within a year of his taking office. The Tsar managed a paltry 17 in the year leading up to the Revolution – one heck of a performance improvement delivered by speculative execution.

  13. MarcC

    MINIX anyone ?

    The flaw lies arguably in the OS design rather than in the Intel hardware.

    A modern Operating System should use a microkernel architecture and have different address spaces for the kernel and the userland processes.

    Windows NT, Linux, Android do not use the hardware in a secure fashion, and this is the root cause of the problem. Arguably.
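
    As it happens, separate kernel/user address spaces are exactly what the KPTI patches bolt on after the fact, and recent Linux kernels report what they have done about each flaw. A small C sketch (assuming the sysfs vulnerability files are present, which they only are on patched kernels) that prints that status - Meltdown shows "Mitigation: PTI" once the page tables are split:

        #include <stdio.h>

        /* Print the kernel's reported status for each CPU vulnerability, as
         * exposed under /sys/devices/system/cpu/vulnerabilities on kernels
         * that carry the Meltdown/Spectre patches. */
        int main(void)
        {
            const char *names[] = { "meltdown", "spectre_v1", "spectre_v2" };
            char path[128], line[256];

            for (int i = 0; i < 3; i++) {
                snprintf(path, sizeof path,
                         "/sys/devices/system/cpu/vulnerabilities/%s", names[i]);
                FILE *f = fopen(path, "r");
                if (f && fgets(line, sizeof line, f))
                    printf("%-10s  %s", names[i], line);
                if (f)
                    fclose(f);
            }
            return 0;
        }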

    1. GrapeBunch

      Re: MINIX anyone ?

      Is it time to dust off our 5.25" floppy disk copies of Concurrent CP/M and/or OS/2? Did they handle the problem better? We live in a particularly irony-prone universe ... did some OSes fail commercially because they were properly designed relative to this problem?

    2. Ken Hagan Gold badge

      Re: MINIX anyone ?

      Is that why Intel used MINIX for their other 2017 security-related disaster?

      1. Mike 125

        Re: MINIX anyone ?

        >> Is that why Intel used MINIX for their other 2017-security-related-disaster ?

        Intel security team meeting held back in the day, (all records erased):

        "Guys, the lawyers say we're clean on any old x86 garbage. But the NSA access path - that's gotta be rock solid."

        1. Anonymous Coward
          Anonymous Coward

          Re: MINIX anyone ?

          If the NSA is involved, a FISA court will dismiss this in a heartbeat.

  14. Anonymous Coward
    Anonymous Coward

    Back in the day....

    .....I never had all this trouble with my CP/M computer. No hidden "management engines" in 8080 or Z80 CPUs.....anyone with access to the motherboard would find <gasp!> NOTHING THERE....because there was no hard drive. (Hint: the floppies were stored somewhere else.) There was no LAN, and the internet didn't exist, so "remote access" was also completely absent.

    *

    All that said, Wordstar and Supercalc and dBASE-II and BDS C were all very productive...a huge improvement on the manual methods used previously.....this was a huge stride in increased productivity.

    *

    Have we really made similar huge strides since then? It's pretty clear that the downsides today are MUCH more threatening. What am I missing?

    1. GrapeBunch

      Re: Back in the day....

      What you would have there, is a failure to communicate. Insert obligatory movie reference here.

    2. Ticowboy

      Re: Back in the day....

      Last year my son threw away my Osborne 1, the CP/M manual and the floppy disks. Broke my heart. I still remember using PIP (the Peripheral Interchange Program), WordStar, SuperCalc etc.

      Boy have we come a long way since then. I had two Osbornes, one for compiling and one for editing and testing COBOL programs. They each cost me £2600 ($4200 then). The screen on my iPhone X is bigger than that on the Osborne, it only cost me $1000 and does everything I need (today).

      In all my years of (mainly Windows) OS updates there has always been an outcry that things are slowed down so that we can be forced onto new hardware or software. In my humble opinion this is BS.

      The judges need to throw out these lawsuits but they are lawyers too so I am not optimistic.

  15. Florida1920
    Joke

    Wish I could be there

    When Vulture Central takes the stand as a witness in this case.

    COURT CLERK: Do you swear to tell the truth, the whole truth, and nothing but the truth?

    EL REG: AWK!

    DEFENSE COUNSEL: Your honor, this case is clearly for the birds!

  16. David Roberts
    Trollface

    Hard drive and CPU intensive?

    That is Windows Update screwed, then.

  17. Anonymous Coward
    Anonymous Coward

    World Of Pain

    Looking at some of the articles on the Brent Ozar site, it looks as though the patches to address the Meltdown and Spectre vulnerabilities are going to cause a world of pain for SQL Server DBAs:

    https://www.brentozar.com/archive/2018/01/sql-server-patches-meltdown-spectre-attacks/

    In a work of pure genius, Microsoft have advised turning off features of SQL Server that are required for another of their products, SCCM, to work. Another example of the different teams at Redmond apparently not talking to each other:

    https://mobile.twitter.com/djammmer/status/949122372384141312

    No explanation from MS as to why the RDBMS needs patching along with Windows itself, but one suspects that there are 'performance issues'.

    Maybe we should all be taking out Class Action suits against the chip vendors.

  18. Timbo

    So, what happens next?

    The lawyers do their bit, Intel is found guilty and offers compensation / replacement CPUs to all affected parties.

    Given that this is an "old problem" going back a number of years, maybe Intel will re-design all the CPUs affected by these bugs and offer some cash and free replacement parts?

    If so, then I'll look through my collection and get my affected CPUs packed up and returned to them (assuming they want proof of my ownership of the CPUs I'll be claiming for...) :-)

  19. Claptrap314 Silver badge

    Suability

    ALWAYS check the packaging, folks. I'm pretty certain that EVERY Intel, AMD, and IBM chip ever sold to consumers included in the packaging the following notice: "This chip is not certified for use with classified information". Usually in bold. The same appears very close to the front of the architecture manuals. I would dig mine out for reference, but they are in boxes in a garage somewhere...

    This is the importance of Intel's "works as designed" element in their press release. Hate on it all you want, this is not a "bug".

    BTW, I've turned off the L1 cache and/or the L2 cache for some reason or the other in the past. If you think 5-30% performance degradation is bad...

  20. Vanir

    Performance degradation on server farms

    equals a bigger electricity bill - for a few years.

    Intel's clever management and clever engineers seem to have taken decisions which may have an opportunity cost greater than they guessed.

  21. mark l 2 Silver badge

    If Intel have continued to sell silicon that is susceptible to Meltdown after they were informed, then they should be issuing a product recall for all those CPUs and replacing them or offering a refund.

    This is what car makers and white-goods manufacturers have to do if their products are found to be dangerous, and while this vulnerability might not set your PC on fire, it could result in a compromised system and major security concerns.

  22. ITnoob

    Looks like El Reg is off the Christmas card list then.

  23. DeeCee

    result will suck in either case

    if intel loses parasites get paid

    if intel wins - intel wins

  24. Nano nano

    Capability Needham

    Looks like Roger Needham's (+ Bjarne Stroustrup) Capability architecture should get another bite at the cherry?!

    https://en.wikipedia.org/wiki/CAP_computer

  25. MarkSitkowski

    Here Come The Hackers, too

    Now that those clever researchers have told the world about a vulnerability that lay dormant and unknown for a couple of decades, every respectable hacker will be hard at work writing exploits - probably using the sample code issued with the release.

    Thanks, guys.
