Core-blimey! Intel's Core i9 18-core monster – the numbers

Intel's offered some more detail about the Core i9 range of desktop CPUs it announced in May. Here's what Chipzilla has planned for us all when these chips start to go on sale. The 12-core products will appear as of August 28th; 14-to-18-core kit will go on sale as of September 25th. …

  1. Anonymous Coward
    Anonymous Coward

    Gamers?

    Is there even one game that benefits from having more than 4 cores? And 4K video editing, really? I thought professional-grade video editing suites made use of GPUs for rendering.

    1. Trevor_Pott Gold badge

      Re: Gamers?

      I am told by some of the hardcore gamers in my sphere that if you want to do VR at 240Hz then having 8+ cores @ 2.8GHz or better is usually required. As I'm poor, and still working on a video card from 3 years ago and a Sandy Bridge-era CPU, I cannot confirm this.

      Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?

      1. Number6

        Re: Gamers?

        Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?

        This is probably one of those questions that you're best not trying to answer unless you've got plenty of money. If you try it and realise why you need it, you'll resent the expense if it's out of your reach.

    2. Robert Heffernan

      Re: Gamers?

      Well, setting aside the fact that most games don't need more than 4 cores, gamers tend to bolt on the following background tasks...

      * Stream encoding to upload to Twitch, etc.

      * Watching streams, YouTube

      * Downloading torrents, etc

      Just because the game only uses a subset of cores doesn't mean the rest of the system isn't churning away on other processes.

      1. Anonymous Coward
        Anonymous Coward

        Re: Gamers?

        Well, setting aside the fact that most games don't need more than 4 cores, gamers tend to bolt on the following background tasks...

        Except it's cheaper and easier to do most of that with another machine!

        I've got an old clunker with a dual-core AMD something-or-other in it, and a lump of RAM, which deals with the day-to-day of torrents, media streaming etc. It's pretty much always on, uses very little power, and produces even less noise.

        If I wanna play a game, then I'll spin up one of the big boys with a good GPU, play the games, then turn it off again, leaving the old clunker still streaming and torrenting.

      2. Aitor 1

        Re: Gamers?

        Sorry, but your "facts" are obsolete.

        These days you need at least six cores for the game and one (better still, two) for the OS, so eight-core and six-core systems are the best in general (unless you're playing WoT).

    3. werdsmith Silver badge

      Re: Gamers?

      Is there even one game that benefits from having more than 4 cores? And 4K video editing, really? I thought professional-grade video editing suites made use of GPUs for rendering.

      Nobody needs more than 640K of RAM.

      1. Anonymous Coward
        Anonymous Coward

        Re: Gamers?

        Which of course no one, including Bill, ever remembers him saying.

        1. Lotaresco

          Re: Gamers?

          I can't remember the exact words, unfortunately, but he did say something about it during a TV interview with Sue Lawley back in the late 80s. It was in a discussion where he explained Microsoft philosophies such as "New releases are not there to fix bugs, they are there to add features", and he mentioned that either 640K was not a barrier to software development or that 640K was adequate for the intended use of a PC. If I recall correctly this was about the time of the release of the first extended memory boards and of a version of Lotus 123 that demanded the extra memory. It was also the time that the business unit I was in flipped to using Macs with 4 or 8MB of memory and Excel, because 123 was creaking at the seams.

        2. keith_w

          Re: Gamers?

          It was an IBM engineer, not Bill. The 640K was a limitation of the design of the 8-bit IBM personal computer: the maximum memory size was 1024K, of which IBM reserved the top 384K of addresses for I/O functions. So no, Bill had no reason to say it.

      2. Lysenko

        Re: Nobody needs more than 640K of RAM.

        "I think there is a world market for maybe five computers."

        ... Thomas Watson, IBM, 1943 and it isn't apocryphal.

        1. Mage Silver badge

          Re: Nobody needs more than 640K of RAM.

          "I think there is a world market for maybe five computers."

          ... Thomas Watson, IBM, 1943 and it isn't apocryphal.

          "I think there is a world market for maybe five Clouds"

          Which is worrying. 19th C. potato famine comes to mind.

          By the late 1970s it was evident that there would be a clock speed limit and that more performance would need a network of CPUs. Except the bottleneck is RAM and I/O, and there isn't enough L1 cache per core. The transputer was also inherently a better architecture, with local RAM per CPU.

          A serial interconnect needs to run 32 or 64 times faster than a parallel one, but at high speed the interconnect is far easier to design (parallel traces need the same delay) and uses less chip area and fewer pins. So I/O to slots, RAM, peripherals and additional CPU slots should all be serial, except on chip. Even then, if there are many cores with shared I/O, using serial might be the same speed per word and use less chip area.

          Ivor Catt did some good articles in the 1980s on this.

          Pity that Thatcher sold off Inmos.

          1. Lotaresco

            Re: Nobody needs more than 640K of RAM.

            "Pity that Thatcher sold off Inmos."

            One of my friends is the guy who wrote Occam. Even he admits that Inmos was going nowhere.

            1. Mage Silver badge

              Re: guy who wrote Occam

              Did Tony Hoare write Occam, or design it or just write papers about it?

              If Inmos was going nowhere it was due to fixation on Intel and lack of investment in Tech in UK, where companies relied on Military or BT spending and increasingly owned / controlled by asset strippers or bean counters with no vision.

              1. Lotaresco

                Re: guy who wrote Occam

                "Did Tony Hoare write Occam, or design it or just write papers about it?"

                None of the above. Tony Hoare (now Professor Sir C. A. R. Hoare) originated the theory of Communicating Sequential Processes, which was the foundation of the transputer concept. He is listed as "the inspiration for the occam programming language". David May created the architecture of the transputer and the development of Occam is not credited other than to "Inmos". However my friend was the person who wrote the Occam compiler.

                "If Inmos was going nowhere it was due to fixation on Intel and lack of investment in Tech in UK, where companies relied on Military or BT spending and increasingly owned / controlled by asset strippers or bean counters with no vision."

                I'm not convinced by the above explanation. Thorn EMI had underestimated the scale of investment needed and didn't realise until too late that booming transputer sales had been achieved by shipping as much product as possible but not investing in development. It was a slightly cynical exercise in making the company look a bargain for investors. My friend blamed the point-to-point link technology as a bottleneck in the technology.

                If you are interested in a potted history, including the financial, political and management cock-ups see the Inmos Legacy page by Dick Selwood on the Inmos web site.

          2. Anonymous Coward
            Anonymous Coward

            Re: Nobody needs more than 640K of RAM.

            "Ivor Catt did some good articles in the 1980s on this."

            Ivor Catt wrote some interesting articles but I wouldn't describe them as good.

            His ideas got as far as the Ferranti F-100/L microprocessor which had an internal serial architecture. The trouble was, compared to the TI 9989, another military microprocessor of the era, it was treacle to liquid helium. I know because I was on a project which used both of them.

            One place where Catt went very wrong was his assumption that power doesn't scale with clock speed. Another was that timing jitter wasn't fixable. With TTL and ECL there was truth in this: if you could clock an ECL circuit at 500MHz it was hard to run wide parallel buses due to timing problems, and it didn't use 10 times the power of the same circuit at 50MHz, because most of the ECL power consumption was in its analogue circuitry, even at DC.

            The coming of VLSI and CMOS destroyed both of Catt's assumptions; it became possible to parallel 64 data lines with clock speeds in the GHz range, which he never foresaw. As CMOS power scales very roughly with clock frequency for given design rules, a good parallel one will always beat a good serial one.

            It isn't a pity that Thatcher sold off INMOS, but it was a disaster that she didn't save ICL. Politicians used to chant the mantra that they couldn't pick technology winners, but for some reason that never applied to companies that made things that went bang, only to things that were slightly beyond the grasp of civil servants with degrees in Classics and a poor maths O level.

            1. Mage Silver badge

              Re: power doesn't scale with clock speed.

              Yes. Power consumption is non-linear with clock, roughly a square law once you account for the voltage increases needed to hit higher speeds. Higher speeds have been achieved by lowering the operating voltage and also shrinking the (related) gate area to reduce capacitance. That's partly why 14nm isn't 14nm in the sense that 90nm is 90nm: not all aspects have been scaled down.

              That's why, over the last 15 years, the biggest changes have been in core count and architecture rather than actual clock speed.
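
              For reference, the standard first-order relation behind this (a sketch only, not specific to any particular Intel or AMD part; the cubic figure assumes supply voltage has to rise roughly in step with frequency):

              ```latex
              % CMOS dynamic (switching) power:
              %   alpha = activity factor, C = switched capacitance,
              %   V = supply voltage, f = clock frequency
              P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f

              % If reaching a higher clock also needs a roughly proportional
              % voltage increase (V \propto f), the effective scaling becomes:
              P_{\mathrm{dyn}} \propto f^{3}
              ```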

            2. Mage Silver badge

              Re: Parallel data lines

              Did you re-read Catt lately, or try to design a motherboard?

              The issue isn't on chip (Catt wasn't espousing the F100L, which was rubbish) but BETWEEN chips. PCB design of CPU to RAM is a horror story at high clocks and wide buses.

              ICL was moribund long before Inmos. The UK was first with commercial computing, but by the 1960s was destroying it, along with its consumer electronics industry. Read "The Setmakers".

        2. Sgt_Oddball

          Re: Nobody needs more than 640K of RAM.

          If only he'd said 'each'..... (I own 7 at the moment...)

        3. 404

          Re: Nobody needs more than 640K of RAM.

          'The government never should have let the public own computers..'

          My Dad, to his son with a career in infosec, on why he didn't need to know anything about protecting himself/identity or get on the internet*.

          *yet oddly he's not so dedicated to his beliefs that he won't call that son on his copper-line push-button corded phone to look something up for him on that same evil should-be-banned internet...

        4. Richard Plinston

          Re: Nobody needs more than 640K of RAM.

          > "I think there is a world market for maybe five computers."

          Yes, he did say that. Given the cost of building those computers in 1943 and the number of companies and governments that could afford it at that time he was correct.

        5. Tronald Dump

          Re: Nobody needs more than 640K of RAM.

          No one needs a computer with more than 16 megaliths.

          Builders of Stonehenge.

          (disclaimer : I don't really know if they said that)

    4. Gene Cash Silver badge

      Re: Gamers?

      No, from running xosview while playing a ton of Steam games, I can say games do not use more than 2 cores. For example, KSP uses one core, then a tiny bit of another for mods like MechJeb. The new Oddworld uses 2 cores pretty heavily, but nothing else does. (Note: I don't play FPSes, so I have no data on things like Modern Warfare or Call of Duty. I mostly do "sandbox" games.)

      The rest of my 8 threads sit there idle.

      I was interested in this because I wanted to see where the bucks I spent on my machine and my graphics card were being used.

      However, video editing tools like avidemux2 just munch on all the cores when transcoding, as ffmpeg is written to use multiple cores well.

    5. TheVogon

      Re: Gamers?

      "Is there even one game that benefits from having more than 4 cores?"

      Anything running Direct-X 12 that is CPU bound for a start.

    6. NoneSuch Silver badge
      Coffee/keyboard

      Re: Gamers?

      44 PCIe lanes. Two x16 PCIe GPUs in SLI consume 32 of those lanes, leaving 12 lanes: only enough for three x4 PCIe SSDs (NVMe, etc.).

      Keep your 36 cores. This is a kneejerk response to Ryzen Threadripper and its 64 PCIe lanes. No serious gamer would touch this limiting hardware.

      1. Charles 9

        Re: Gamers?

        Doesn't the support chipset provide additional lanes for lower-priority stuff?

      2. tim292stro

        Re: Gamers?

        Many of those lanes will also be consumed by NIC and other on-board devices. It's probably the same two x16 and one x8 setup Intel has rinsed and repeated for a decade - still wondering when they'll get the memo that people want general purpose slots to plug items into their general purpose computer...

    7. Robert Jenkins
      WTF?

      Re: Gamers?

      Many games now support eight cores (at least).

      The first one I got dates from 2011.

      The "no more than four cores for games" thing is a total myth.

    8. Halfmad

      Re: Gamers?

      Even with Ryzen you'll see better performance in games, but it particularly shines when streaming or recording too. Having more cores just generally keeps things a lot smoother.

      The problem I increasingly have with Intel isn't cores, it's locking down functionality on boards artificially behind paywalls purely to market them as different models. That's why my next CPU will be AMD. Right now I've got an i7-6700K, which is no slouch for video processing, but there's little reason to head back to Intel and pay the premium.

  2. ilmari

    Software video encoding typically produces superior quality for a set bitrate, whereas GPU video encoding is quick and dirty.

    1. TheVogon

      "Software video encoding typically produces superior quality for a set bitrate, whereas GPU video encoding is quick and dirty."

      Uhm, no. Hardware encoding is usually better quality as it does the exact same thing but is much faster and therefore can use more iterations...

      1. Tom 38
        Thumb Down

        Uhm, no. Hardware encoding is usually better quality as it does the exact same thing but is much faster and therefore can use more iterations...

        Uhm, double no. Video encoding is almost always a three way trade-off between speed of encoding, visual quality of the outcome and bitrate of the outcome.

        Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software. Especially so in consumer hardware encoders, which are small independent dedicated pieces of silicon in the CPU/GPU.

        Now, this is dead easy to see because of CRF (Constant Rate Factor) in x264. You can tell an encoder that you want the visual quality of the outcome to the level indicated. It is trivial to produce one encoding using x264 and one encoding using a hardware encoder, both with the same CRF setting. The outputs will be visually comparable in quality terms, but the hardware encoded video will be larger in size.

        So hardware encoders: faster output, same visual quality, higher bitrate. These are lower "quality" videos than a software encoder would produce, for a given meaning of "quality". For "scene" releases, no-one is using hardware encoders, because they produce lower quality videos.
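
        A minimal sketch of the comparison described above, assuming an ffmpeg build with libx264 and NVENC support (the file names are hypothetical, and NVENC has no CRF as such, so its constant-quality -cq knob stands in as the rough equivalent):

        ```python
        import subprocess
        from pathlib import Path

        SRC = "input.mp4"  # hypothetical source clip

        # Software encode: x264 at a constant quality target (CRF), slow preset.
        subprocess.run([
            "ffmpeg", "-y", "-i", SRC,
            "-c:v", "libx264", "-crf", "20", "-preset", "veryslow",
            "-an", "out_sw.mp4",
        ], check=True)

        # Hardware encode: NVENC at a comparable constant-quality setting.
        subprocess.run([
            "ffmpeg", "-y", "-i", SRC,
            "-c:v", "h264_nvenc", "-cq", "20", "-preset", "slow",
            "-an", "out_hw.mp4",
        ], check=True)

        # Compare file sizes: similar perceived quality, but the hardware
        # output typically comes out larger, i.e. needs a higher bitrate.
        for name in ("out_sw.mp4", "out_hw.mp4"):
            print(name, Path(name).stat().st_size, "bytes")
        ```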

        1. Anonymous Coward
          Anonymous Coward

          "Uhm, double no."

          Uhm, Triple no. What do broadcast quality X264 and X265 codecs use? Hardware of course....

          (For instance DVEO)

          1. Tom 38

            What broadcasters use is not relevant to how consumer video encoding offload chips function.

            You think broadcasters use one of nvenc (Nvidia), Quick Sync Video (Intel) or Video Coding Engine (AMD)? Evidently not, as you know they use high-end hardware encoders like DVEO that bake the algorithm into silicon.

            I clearly stated that I was talking about consumer hardware video encoders, and I'll repeat it again: for a given bitrate, software encoders produce higher quality output than consumer hardware encoders. The only thing that consumer hardware encoders do better than software encoders is speed.

            If you are arguing otherwise, and don't want to appear foolish, an hour spent reading doom9 might help.

            1. Aitor 1

              Yes and no

              We set up a company using GPUs and CPUs on general-purpose servers to encode/transcode movies/series for IP-TV.

              It worked like a charm, for a mere fraction of the cost.

              1. Charles 9

                Re: Yes and no

                Your clients probably aren't so interested in overall quality, so they're willing to sacrifice quality for speed (and thus turnover). OTOH, if you were say a BluRay mastering firm with a more generous time budget, you'd probably take a different approach.

                Also, historically, GPUs are less suited for a job like video encoding because the balance of quality and speed produces workloads that are less conducive to parallelization (think divergent decision making that can hammer memory or spike the workload).

        2. Anonymous Coward
          Anonymous Coward

          "Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software"

          Flexibility != quality. Given a requirement you can design a hardware codec to do whatever codec / settings you want to - it will be much faster in hardware.

          "So hardware encoders; faster output, same visual quality, higher bitrate."

          But therefore, for the same given encoding time, a hardware encoder will give a higher quality output / and / or at a lower bitrate.

          1. Tom 38

            But therefore, for the same given encoding time, a hardware encoder will give a higher quality output / and / or at a lower bitrate.

            No, not really. The hardware encoder cannot trade extra encoding time for better compression in the way a software encoder can.

            Encoders have "presets", ways of controlling how the encode works, and "levels", what features are available to use in the targeted decoder. E.g., streaming to an STB you might have level 5.1 content, but streaming to a mobile you might have level 3 content.

            Software encoders tend to have many presets to determine how much prediction/lookahead to use in encoding a frame. The more lookahead you use, the more efficient the encoding can be, and the smaller each frame can be whilst still encoding the same visual quality. Therefore, in software encoders you can optimise your encode to give the lowest bitrate for the chosen quality. Most videos that are made for distribution are encoded using a very slow preset such as "veryslow", because this reduces the file sizes significantly at the expense of a lot of speed.

            Consumer hardware encoders don't do this. They have short lookaheads, which keeps the speed high. They use fixed-length GOPs (I-P-B-B-P...), whereas x264 will use irregular ones (better quality, better compression). You can't really make it go slower with higher quality per bit (although you can make it go faster with lower quality per bit).
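
            To make the preset/lookahead trade-off above concrete, a small sketch along the same lines (again assuming ffmpeg with libx264; the source file name is hypothetical). The same CRF is encoded with a fast and a slow preset; the slower run should produce a noticeably smaller file at roughly the same visual quality:

            ```python
            import subprocess
            from pathlib import Path

            SRC = "input.mp4"  # hypothetical source clip

            # Same quality target (CRF 20), two very different amounts of
            # encoder effort (prediction/lookahead is bundled into the preset).
            for preset in ("ultrafast", "veryslow"):
                out = f"out_{preset}.mp4"
                subprocess.run([
                    "ffmpeg", "-y", "-i", SRC,
                    "-c:v", "libx264", "-crf", "20", "-preset", preset,
                    "-an", out,
                ], check=True)
                print(preset, Path(out).stat().st_size, "bytes")
            ```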

  3. Anonymous Coward
    Anonymous Coward

    The Intel i(2N+1) will cost you N times as much as its predecessor and will be (1 + 1/N) times faster.

  4. Anonymous Coward
    Anonymous Coward

    Intel's Core i9 revealed to reach 36 cores. Not.

    Execution threads != cores, especially the way Intel implements them. For most workloads I care about, the benefit of these threads is at best modest, and at worst negative.

    The more important issue is that these 18 cores are supposed to see the main memory through just four memory channels shared between them. Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage. Cramming more and more processing elements at the end of a thin straw connecting you to the memory system is not a solution; there must be a more sensible way of using these transistors.
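
    As a rough back-of-the-envelope illustration of that point (the numbers are assumptions for the sake of the arithmetic: quad-channel DDR4-2666 and roughly 20 GB/s of demand from one bandwidth-hungry core):

    ```python
    # Illustrative figures only: peak quad-channel DDR4-2666 bandwidth
    # versus what a handful of streaming cores can ask for.
    CHANNELS = 4
    TRANSFERS_MT_S = 2666        # mega-transfers per second per channel
    BYTES_PER_TRANSFER = 8       # 64-bit channel

    peak_gb_s = CHANNELS * TRANSFERS_MT_S * BYTES_PER_TRANSFER / 1000
    print(f"Peak memory bandwidth: ~{peak_gb_s:.0f} GB/s")          # ~85 GB/s

    PER_CORE_DEMAND_GB_S = 20    # assumed demand of one streaming core
    CORES = 18
    print(f"Cores needed to saturate it: ~{peak_gb_s / PER_CORE_DEMAND_GB_S:.1f}")
    print(f"Share per core if all {CORES} stream: ~{peak_gb_s / CORES:.1f} GB/s")
    ```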

    I am sure these CPUs will perform fantastically well on a few carefully selected benchmarks and will look amazing in demos. For real-world usage, you'd be better off with a quarter of the CPU cores and a few extra bucks in your pocket.

    1. Anonymous Coward
      Anonymous Coward

      Re: Intel's Core i9 revealed to reach 36 cores. Not.

      I agree wholeheartedly. AMD had a range of chips that shared the memory system between pairs of cores; it was called FX, and it was a pile of shit compared to its predecessor the Phenom II, ESPECIALLY for gamers.

      1. anonymous boring coward Silver badge

        Re: Intel's Core i9 revealed to reach 36 cores. Not.

        FX outperformed Phenom II.

        Although the improvement wasn't as high as the gaming community would have wanted.

        Tantrum ensued.

    2. Infernoz Bronze badge
      Meh

      Re: Intel's Core i9 revealed to reach 36 cores. Not.

      Lastly, i9s are ridiculously expensive, so more GPU capacity may be better value, in part because GPUs may be better for parallel signal processing.

      Yes, most of those i9 cores will probably choke unless reserved for only 2 hyper-threads mostly working with code and data in the core's L1 cache; the more context switches and L1 cache misses, the slower the code will run!

      1. chuckufarley Silver badge

        Re: Not an asterisk on the price tag.

        Memory channels and cache considerations aside, just the price tag would have me running to Xeon CPUs that could fit in dual socket boards supporting buffered ECC RAM. If my data is worth that much to crunch it's worth more to do it right.

    3. TheVogon

      Re: Intel's Core i9 revealed to reach 36 cores. Not.

      "Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage"

      That's what the large chunk of on-CPU cache memory is for.

  5. Munkeh

    Average use case

    From my point of view there's the 'bang for your buck' factor to consider as well, at least for the 'average' home user.

    I have a desktop I game on, stream my rubbish gaming occasionally and do all the things gamers do while playing. Granted, I don't have much time to game these days, but more cores/threads make that experience smoother, and I found the recent AMD Ryzen 5 1600 (OC'd to 3.85GHz) to be a good match, especially considering the relatively low price.

    Would I like an 18 Core/36 Thread i9? Probably. Could I justify the significant extra cost for my, probably quite common, use case? No. Same goes for the Ryzen 7 1700/1800 though - the extra cores don't add up to a useful performance boost for the price, in my use case.

    As always your mileage may vary - but the real winner of all this is the consumer. Actual competition between Intel and AMD is a GOOD thing. Whichever camp you prefer.

    1. Wade Burchette

      Re: Average use case

      I love competition. Do you really think Intel would release these if not for the AMD Ryzen Threadripper? I can't wait for actual benchmarks from independent testers on both the i9 and Threadripper. These are obviously niche products, but it puts pressure on the prices for mainstream products, which means our wallets win.

      We need to remember how good of a design Ryzen is. Rumors are the yields of the Ryzen are great. But the beauty of the design is that AMD can link cores together in a mesh. So when Intel needs a 16 core CPU, they have to make a large one. And the larger the die, the lower the yields. When AMD needs to make a 16 core CPU, they just make two 8 core ones and mesh them together. I can buy a 16 core Threadripper for $999, or a 10 core i9 for $999. The choice is easy. But the best thing is I actually have a choice. Intel must copy AMD's mesh design. But even if Intel started today, it would still take over a year to get to market.

      The next thing I hope is that the Vega video card is a winner. We need to put pressure on NVidia's prices now. I love competition: lower prices and better products. What is not to like?

  6. Your alien overlord - fear me

    $276 for top of the line i9 versus $1700 for the i7? You sure?

    1. Anonymous Coward
      Anonymous Coward

      The $276 figure is the premium you pay above the price of the i7.

  7. Anonymous Coward
    Anonymous Coward

    36 cores at 4.2 GHz?

    No, it's 18 cores at 2.6GHz. You only get to the rarefied heights of 4+GHz when only a few of the cores are active.

    1. phuzz Silver badge
      Flame

      Re: 36 cores at 4.2 GHz?

      According to this table (from PCGamer) it can manage 3.4GHz with 18 cores, 4.2GHz with two cores.

  8. redpawn

    No need for a home heating system

    Just lots of money for the computer and the power to feed it.

  9. Ken Hagan Gold badge

    Nice L3 cache you've got there

    You could run WinNT quite nicely on that, although you'd need to tweak a BIOS setting to disable hyperthreading or else the number of cores on the top-end part would be too large.

    I don't know if you could run Win95 on it. Do these things still have a mode where they can run 16-bit instructions?

    1. joeldillon

      Re: Nice L3 cache you've got there

      Yes, they do, as every x86 chip does. They all still start out in 16 bit mode like it's 1985 until the OS switches them into long mode.

      1. Charles 9

        Re: Nice L3 cache you've got there

        Even with EFI-based systems?

        1. Lennart Sorensen

          Re: Nice L3 cache you've got there

          No. On UEFI systems they start in 16-bit mode, then the firmware rather quickly switches to 64-bit mode, and that's the mode it starts the OS in, unless you enable legacy boot mode, in which case it switches back to 16-bit for booting.

  10. Adam 52 Silver badge

    I haven't really cared about desktop CPU performance in years. The limiting factor on performance these days seems to be whether the bloated apps have consumed all the memory and started thrashing, whether the badly written JavaScript has got stuck in an infinite loop, or whether the anti-virus has taken it upon itself to scan every DLL load.

    More cores is nice, but only because it gives you a working core to use to kill the aforementioned JavaScript process.

  11. David Roberts
    WTF?

    Just me?

    Or is the 10 core budget version the highest performer?

    Highest base clock, highest boosted clock, lower rated power usage.

    For everything but the most obscure workloads, individual core performance is likely to trump the number of cores once you get above, say, 8 (possibly 4 or fewer), especially with two threads per core.

    1. Beech Horn

      Re: Just me?

      The 14-core doesn't look bad either when you see the Turbo Boost speeds across cores. It requires a bit more in-depth analysis than an article which gets the core count wrong, though...

    2. ArrZarr Silver badge

      Re: Just me?

      The only answer that anybody could give you and be right is: It depends.

      Heavy Multithreaded CPU workloads that aren't being palmed off to the GPU will definitely benefit from the extra cores at a lower frequency. Also there will probably be similar potential for overclocking across the chips so you'd probably be able to get any of these chips screaming along at 5GHz+ with watercooling.

      If you're gaming, most games will bottleneck on the graphics first, even down at the mid i5 range, which reaches similar frequencies anyway. Bringing streaming into the mix, more cores are handy as it means that any encoding and CPU-managed network activity isn't using the same core(s) you're gaming on.

      Video editing is dependent upon your setup but there will probably be some part of the workflow which is CPU intensive.

      VR gaming could probably use more cores due to the number crunching required to prevent motion sickness but is also highly GPU dependent.

      so yeah. It depends.

  12. This post has been deleted by its author

    1. ArrZarr Silver badge

      Re: Xenon

      Xeons are slightly different beasts. This price range of ~$2k gives you 14 cores with a base speed of 2GHz, turboing up to 2.8GHz, and Xeons don't overclock in the same way as the Core processors. You'll also get all sorts of datacentre gubbins and probably improved warranties etc.

      For a Streamer, the i9 processors are a better deal.

    2. Roj Blake Silver badge

      Re: Xenon

      The problem with Xenon CPUs is that it's hard to contain all that gas.

  13. This post has been deleted by its author

    1. Michael Duke

      Re: Cost of AMD CPU In General

      Shadmeister.

      Look at the AMD Ryzen 3 1200 mate, much better option compared to a current gen APU.

      If you want an APU wait for the Zen based ones towards the end of the year.

    2. Lotaresco

      Re: Cost of AMD CPU In General

      "Is there something about AMD i am missing - and why don't vendors use AMD more ?"

      They seem to use AMD quite often. The thing you need to check is TDP in the specs. Some AMD CPUs gobble electrons, although they have been getting better recently.

    3. phuzz Silver badge

      Re: Cost of AMD CPU In General

      The short version (and I'm trying to get this comment in before the Intel and AMD fanbois start fighting) is that AMD hasn't been making competitive CPUs for a few years, until now.

      They've been making cheap CPUs, but Intel are still making the fastest. With their new Ryzen (silly name) architecture, it seems like AMD are finally at least in the same race as Intel, so I suspect you'll start to see them being used by more OEMs. AMD are cheap because otherwise nobody would buy them, they've been losing money to stay in the game.

      Unless you're looking at the high end, the AMD chip will probably be better value for you.

    4. anonymous boring coward Silver badge

      Re: Cost of AMD CPU In General

      AMD has better value CPUs if you don't need the absolutely fastest available. It has had this for a long time now. With Ryzen they may actually now compete, or beat, Intel in the top performance level too.

      Cheap motherboards for AMD are easier to find, and AMD traditionally has had good upgrade paths for faster CPUs on older motherboards (i.e sockets). Meaning often RAM and Mobo investments can be kept for longer.

      Sadly Ryzen isn't available for AM3+ sockets, so there is a definite break with the previous generation AMD CPUs. (AM3+ has had a good run though).

      I have run AMD in all my PCs for the last 18 years, so someone may want to add Intel info and correct me on the value aspect..

      P.S: There was a debacle about Intel's compilers fixing the binaries to run much faster on Intel CPUs, in effect making benchmark software (as well as actual applications) favour Intel. IRL AMDs are quite fast.

      P.P.S: "Is there something about AMD i am missing - and why don't vendors use AMD more ?"

      There is a lot of business decision making going on, with lock-ins, Intel leveraging its size, sales trickery, and so on. Comparable to MS vs the rest.

      P.P.P.S: The value of having at least one other player competing with Intel is immense. That's one reason I never abandoned AMD.

      1. This post has been deleted by its author

    5. kain preacher

      Re: Cost of AMD CPU In General

      Because Intel gave out rebates to those that used Intel only. At one point Intel was giving Dell close to a billion dollars a year in rebates. When AMD offered to give HP 1 million free CPUs, HP turned it down because the amount of money they would lose from Intel was too great. Things have started to change in the last 5 years though.

      1. Anonymous Coward
        Anonymous Coward

        Re: Cost of AMD CPU In General

        If you want to know why they stopped these deals, the €1.43 BILLION fine helped.

        https://www.theverge.com/2014/6/12/5803442/intel-nearly-1-and-a-half-billion-fine-upheld-anticompetitive-practices

        1. kain preacher

          Re. Hp is still using low end AMD APU: Cost of AMD CPU In General

          But by then the damage had been done. Until Ryzen came out, Dell only used AMD's cheapest chips on the lowest-spec crap. HP is still using low-end APUs from AMD. Lenovo seems to have a decent-spec AMD line-up.

    6. BinkyTheMagicPaperclip Silver badge

      Re: Cost of AMD CPU In General

      Until recently AMD haven't been comparable except at the low end - their APU offerings are ok because the standard of bundled GPU is better (for desktops) than the Intel alternative. They also do some interesting embedded options.

      With Ryzen, at the high end they're not quite as good as Intel, but a lot cheaper. If you don't need the absolute fastest single threaded performance they're a decent deal.

      They haven't kept up with virtualisation enhancements like Intel, though, aside from the new encrypted memory options, which is a pity at the server end.

      For a low end box, I'd have no issue using AMD. For a reasonably high end desktop that's mostly concerned with running lots of processes, but also needs to be quite fast, I'd also consider AMD. For an all out gaming box I'd go Intel, and for virtualisation I'd look at a Xeon.

      For an embedded firewall I'm looking at an Alix APU2. AMD Jaguar core, fanless, decent encryption support on chip.

      1. kain preacher

        Re: Cost of AMD CPU In General

        Athlon 64s were. In fact, for a little while they were the fastest-clocked x86 chips. Athlons were also the first to hit 1GHz.

  14. P0l0nium

    Bragging rights ...

    The "high end" of this HEDT thing is all about bragging rights: Intel scrambled to release an 18-core part because they weren't about to have AMD deliver "moar coars".

    So now they have problems getting the heat out of this thing and the AMD part has an advantage there because its "heat generating area" is larger and distributed (because it has 2 or 4 widely spaced die under a bigger slug of copper).

    So I guess we're about to find out if Intel's "process advantage" is real ... right ??

    1. This post has been deleted by its author

    2. phuzz Silver badge

      Re: Bragging rights ...

      And AMD are not cheaping out and using crappy thermal paste between the CPU die and the heatspreader.

      (seriously Intel, how much are you really saving on a £2000 part?)

  15. msknight

    Ryzen...

    ...is out there and aimed at the same graphics processing audience... and like this i9, doesn't appear to be punching much at gamers.

    Reasonable, sort-of Ryzen roundup...

    https://www.youtube.com/watch?v=6ZifJ3DvumA&t=336s

  16. Anonymous Coward
    Anonymous Coward

    Are 36 cores really that useful for anything? TBH I'd rather have a few really fast general cores, and a wodge of easily programmable FPGA silicon.

  17. Robert E A Harvey

    Scared again?

    Is this just to shit on AMD?

    They could presumably have done this at any time in the last 3 years, so are they doing it now because AMD are climbing out of the well at last?

    1. Nimby
      Devil

      Re: Scared again?

      "Is this just to shit on AMD?"

      Heh heh. Wouldn't you? Nothing to do with fear. Everything to do with needing a good laugh. A king needs a good court jester to kick every now and then.

  18. anonymous boring coward Silver badge

    "If they insist on the top-of-the-line i9-7980 their desire will come at a cost of about $276"

    Sounds pretty reasonable to me!

    Still sticking with AMD though.

  19. Nimby
    Facepalm

    These aren't the cores you're looking for...

    Intel's i9 is a solution in search of a problem. As a gamer who builds his own boxes, unfortunately, I don't even remotely see how i9 helps gamers.

    There are basically two big bottlenecks to gamers today: PCIe lanes (aka more graphics cards and m.2 SSDs please!) and memory bandwidth.

    Throwing more cores at the problem is, at best, just making things worse. It lowers the top-end GHz. (To date that still matters a lot. It's why we OC. Duh!) Gamers need the opposite of the i9: fewer cores on a larger die with a better thermal interface and lower voltages, so that they can push the need for speed with a chip that OCs well.

    If Intel really wants to help gamers, they need to ditch the more cores = better concept and get back to basics: Faster is better, bandwidth is your bottleneck, and cooling is king. It's a recipe as easy as π.

  20. Pascal Monett Silver badge

    I want one !

    18 cores. Yum.

    I've had a 4-core i7-6700 since 2015. I also slapped in 32GB of DDR4-3200.

    Can I justify upgrading ? Not really. Doesn't matter. I want one of these babies. I'll get one in 2019 probably. With 64GB of DDR5 (by then).

    I'll be able to push 7 Days to the full 3840 x 2160 of my widescreen. Finally.

    Of course, by then another game will come out that will bring my rig to its knees. As usual.

  21. deconstructionist

    To be honest the bottleneck in most PCs is the GPU, not the CPU. I've been running a Haswell-E 5820K with 6 cores clocked to 4.5GHz per core for a while now, and I have tried 3 different GPUs (GTX 980, Titan X, GTX 1080 Ti) on an ASUS IPS G-Sync 144Hz 27" monitor.

    Each card is happy to run everything at 1080p at 144Hz. At QHD 2560 x 1440 the 980 won't hit 144Hz in modern FPSs with all the goodies, but the other two are happy, and at 4K UHD the Titan X drops below 144Hz every now and then, but the 1080 seems flawless unless you start plugging in second monitors.

    For 4K playback/encoding the 980 is utter shit, the Titan X is OK, but the 1080 wins hands down (and the CPU or how many cores makes no difference).

    Spend your cash on a good monitor and GPU, and spend what is left on a mediocre CPU... works a treat.

    Simple rules for gamers

    1. always spend more on your monitor and GPU than anything else.

    2. you only need SLI if you have more than one monitor, or you are trying to get 2 crappy cards working, which is usually pointless.

    3. you don't need 60 cores at 100GHz to run Dota2/LOL/WOW/CSGO.

    4. "VR ready" does require Big Blue or quantum computing for a 10-min on-rails zombie train shooter.

  22. John Savard

    Price Premium

    Paying a higher price for a CPU to get the highest possible performance - particularly when the cost of the rest of the system reduces the percentage extra one is paying for higher performance - is not irrational. Which is part of why Intel can get away with its current pricing.

  23. Howard Long

    Multiple cores makes development a breeze

    My use case is cross platform and embedded development, my daily driver for this is a dual core Xeon E5-2697v2 (24C/48T) from the Ivy Bridge era. If you have thousands of source files to compile for multiple targets, or for the edit-compile-debug loop, it makes it a relative breeze. Going to the more mainstream i7-7700K or even Ryzen 7 1800x is really quite a disappointment for productivity (relatively speaking, of course!)

    1. GrumpenKraut
      Boffin

      Re: Multiple cores makes development a breeze

      I strongly suggest you look at ccache. If it is an option (it should be), you may cut compile time by a factor of 100.

      In case you cannot use ccache and have some money to splash, Naples has 32 cores (64 threads) per socket. Dual-socket systems should be available; that's 128 threads for you.

    2. Roj Blake Silver badge
      Headmaster

      Re: Multiple cores makes development a breeze

      The E5-2697 v2 has 12 cores, not 24.

      http://ark.intel.com/products/75283/Intel-Xeon-Processor-E5-2697-v2-30M-Cache-2_70-GHz

  24. nickx89

    Well, those are threads not physical cores.

    To put it simply, a thread is a single line of commands that is getting processed; each application has at least one thread, and most have multiple. A core is the physical hardware that works on the thread. In general a processor can only work on one thread per core; CPUs with hyper threading can work on up to two threads per core. For processors with hyper threading, there are extra registers and execution units in the core so it can store the state of two threads and work on them both.
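
    A quick way to see the thread/core distinction on a given box; a minimal sketch assuming the third-party psutil package is installed:

    ```python
    import psutil  # third-party: pip install psutil

    logical = psutil.cpu_count(logical=True)    # hardware threads the OS schedules on
    physical = psutil.cpu_count(logical=False)  # actual cores

    print(f"Logical CPUs (threads): {logical}")
    print(f"Physical cores:         {physical}")
    # An 18-core part with hyper threading enabled would report 36 and 18.
    ```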

    1. Nimby
      Devil

      Re: Well, those are threads not physical cores.

      "For processors with hyper threading, there are extra registers and execution units in the core so it can store the state of two threads and work on them both."

      Are there though? Pretty sure the horrible inefficiency of logical cores I see on most systems (especially craptops) comes down to that NOT being the case. It's just trying to execute two threads in the same compute resource and, frankly, there just ain't enough to go around. It's why in my software I tend to limit execution by physical cores. Logical cores are only good for background processes and services. ;)

      (Which, in today's OSes at least, is a useful thing to have. But only helps in that it frees up the junk processes to run in their own hell of ineptitude so that everything else has real compute power to run on.)
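
      For what it's worth, a sketch of that "physical cores only" policy (assuming psutil again; the work function is just a stand-in):

      ```python
      import multiprocessing as mp
      import psutil  # third-party: pip install psutil

      def crunch(chunk):
          """Stand-in for a CPU-bound task."""
          return sum(i * i for i in chunk)

      if __name__ == "__main__":
          # Size the worker pool by physical cores, not logical (SMT) threads.
          workers = psutil.cpu_count(logical=False) or 1
          data = [range(n, n + 100_000) for n in range(0, 1_000_000, 100_000)]
          with mp.Pool(processes=workers) as pool:
              results = pool.map(crunch, data)
          print(f"{workers} workers processed {len(results)} chunks")
      ```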

      1. Anonymous Coward
        Anonymous Coward

        Re: Well, those are threads not physical cores.

        Yes, the OP is incorrect; execution resources are shared and not particularly increased for the second thread. The register files are doubled up, but it is about utilising a higher fraction of a fixed superscalar resource.

        However, most cores on most code are idle most of the time, waiting on memory. The sharing of execution resources isn't really why most people see "hype threads" as not working so well.

        The more fundamental problem is the sharing of the L1 cache and the bus to L2. L1 thrashing in particular can be as painful to watch as two Keystone Cops vying for the same door and neither making it through.

        However

  25. Samsara

    I use Macs and Final Cut Pro X for a living, and the loads appear to be distributed between GPU and CPU (couldn't tell you *exactly* what is doing what), from seeing the performance of a variety of machines that I use... so yes, a big fast new chip (I assume this is what's going in the forthcoming iMac 'Pro') would be very much welcome... also, on the audio processing side of things it's 100% CPU load.

  26. analyzer

    Curious

    Most people seem to recognise that more cores/threads is not important above a certain number. The issue, as always, is getting enough data into and out of the CPU.

    I would have thought that the deficit that Intel have regarding PCIe lanes is far more important to the high end people than actual core count. Ryzen will have 128 PCIe lanes and Intel are still stuck with 44, that amounts to a potentially huge data throughput deficit for Intel i9 processors and there is no indication from Intel that this will change.

    The caveat is wait for real systems to turn up and crunch the numbers on, but on a system wide basis, it's looking far better for AMD than it has for years. Of course the system builders and MoBo manufacturers have to take advantage of those extra 84 PCIe lanes for Ryzen to really shine.

  27. jason 7

    I'll stick...

    ...with my 5820K for now. Plus in a couple of years I'll have some affordable Xeons to play around with from Ebay.

  28. jeffdyer

    36 threads, not 36 cores.

  29. Anonymous Coward
    Anonymous Coward

    "If you are willing to pay that amount of money ... is it not better to go for a Xenon CPU ?"

    If the year is before 2022, yes. Technically, these are the old/current-gen Xeons rebadged for desktop; I see nothing different from the exact 14-core parts I have, besides that you aren't guaranteed anything with these i9s (can they even run in parallel?). I've been running 2 OK-ish 14-core ES chips for nearly 2 years; they cost $300 USD total for both, plus $720 for 128GB of Hynix and $450 for a Supermicro mobo. Still less than one of these CPUs.

    I think if I cared about gaming and streaming and all that, I clearly would buy AMD (actually I might ditch these CPUs and buy AMD anyway).

  30. TheElder

    Parallel CPU limitations

    There are fundamental limits on what may be achieved with multi-CPU architectures.

    Amdahl's law
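
    For reference, the law in question, where p is the parallelisable fraction of the work and n the number of cores:

    ```latex
    % Amdahl's law: speedup from running the parallel fraction p on n cores.
    S(n) = \frac{1}{(1 - p) + \dfrac{p}{n}}

    % The serial part caps the gain no matter how many cores are added:
    \lim_{n \to \infty} S(n) = \frac{1}{1 - p}
    % e.g. p = 0.95 limits the speedup to 20x.
    ```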

    It may not mean anything soon.

    Graphene transistor could mean computers that are 1,000 times faster

    Less than a month since the above link we have this:

    First graphene transistor?

    And this:

    SAMSUNG Electronics Presents a New Graphene Device Structure
