Core-blimey! Intel's Core i9 18-core monster – the numbers

Anonymous Coward

Gamers?

Is there even one game that benefits from having more than 4 cores? And 4K video editing, really? I thought professional-grade video editing suites made use of GPUs for rendering.

11
3
Gold badge

Re: Gamers?

I am told by some of the hard-core gamers in my sphere that if you want to do VR at 240Hz then having 8+ cores @ 2.8GHz or better is usually required. As I'm poor, and still working on a video card from 3 years ago and a Sandy Bridge-era CPU, I cannot confirm this.
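As a rough sketch of why high refresh rates pressure the CPU (the 240Hz figure is the commenter's; the arithmetic below is purely illustrative):

```python
# Frame-time budget: at a given refresh rate, everything per frame
# (simulation, tracking, draw-call submission) must fit in one slot.
def frame_budget_ms(refresh_hz: float) -> float:
    """Milliseconds available to produce one frame."""
    return 1000.0 / refresh_hz

print(round(frame_budget_ms(240), 2))  # ~4.17 ms per frame at 240Hz
print(round(frame_budget_ms(60), 2))   # ~16.67 ms at a common 60Hz panel
```

At 240Hz the budget is a quarter of the 60Hz one, which is one reason offloading physics, audio and tracking work to spare cores appeals to VR rigs.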

Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?

23
1
Silver badge

Re: Gamers?

Apparently VR is a thing that some people do. I don't understand. Why do you need VR to play Scorched Earth?

This is probably one of those questions that you're best not trying to answer unless you've got plenty of money. If you try it and realise why you need it, you'll resent the expense if it's out of your reach.

4
0

Re: Gamers?

Setting aside the fact that most games don't need more than 4 cores, gamers tend to bolt on the following background tasks...

* Stream encoding to upload to Twitch, etc.

* Watching streams, YouTube

* Downloading torrents, etc

Just because the game only uses a subset of cores doesn't mean the rest of the system isn't churning away on other processes.

10
2
Silver badge

Re: Gamers?

Is there even one game that benefits from having more than 4 cores? And 4K video editing, really? I thought professional-grade video editing suites made use of GPUs for rendering.

Nobody needs more than 640K of RAM.

44
10
Silver badge

Re: Gamers?

Which of course no one, including Bill, ever remembers him saying.

17
1
Silver badge

Re: Nobody needs more than 640K of RAM.

"I think there is a world market for maybe five computers."

... Thomas Watson, IBM, 1943 and it isn't apocryphal.

21
1
Silver badge

Re: Nobody needs more than 640K of RAM.

"I think there is a world market for maybe five computers."

... Thomas Watson, IBM, 1943 and it isn't apocryphal.

"I think there is a world market for maybe five Clouds"

Which is worrying. 19th C. potato famine comes to mind.

By the late 1970s it was evident that there would be a clock speed limit and more performance would need a network of CPUs. Except the bottleneck is RAM and I/O. Not enough L1 cache per core. Also the transputer was inherently a better architecture, with local RAM per CPU.

Serial interconnect needs to go 32 or 64 times faster than parallel, but at high speed the interconnect is far easier to design (parallel traces must be matched to the same delay) and uses less chip area and fewer pins than parallel. So I/O to slots, RAM, peripherals and additional CPU slots should all be serial, except on chip. Even then, if there are many cores with shared I/O, using serial might be the same speed per word and use less chip area.
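The 32/64x figure can be sanity-checked with trivial arithmetic (a hedged sketch; real links lose some raw throughput to line encoding such as 8b/10b, which is ignored here):

```python
# Raw throughput of a bus: width (bits) x clock. A single serial lane must
# be clocked as many times faster as the parallel bus is wide to match it.
def bus_throughput_mbps(width_bits: int, clock_mhz: int) -> int:
    return width_bits * clock_mhz

parallel_bus = bus_throughput_mbps(64, 100)  # 64-bit parallel bus at 100 MHz
serial_lane = bus_throughput_mbps(1, 6400)   # one serial lane at 6.4 GHz
print(parallel_bus, serial_lane)             # both move 6400 Mbit/s
```

The serial lane pays with clock speed; the parallel bus pays with pins, chip area and delay-matched traces, which is the trade-off the comment describes.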

Ivor Catt did some good articles in the 1980s on this.

Pity that Thatcher sold off Inmos.

9
0
Silver badge

Re: Nobody needs more than 640K of RAM.

If only he'd said 'each'.....(I own 7 at the moment...)

0
0
Silver badge

Re: Gamers?

No, from running xosview while playing a ton of Steam games, I can say games do not use more than 2 cores. For example, KSP uses one core, then a tiny bit of another for mods like MechJeb. The new Oddworld uses 2 cores pretty heavily, but nothing else does. (Note: I don't play FPSes, so I have no data on things like Modern Warfare or Call of Duty. I mostly do "sandbox" games.)

The rest of my 8 threads sit there idle.

I was interested in this because I wanted to see where the bucks I spent on my machine and my graphics card were being used.

However, video editing tools like avidemux2 just munch on all the cores when transcoding, as ffmpeg is written to use multiple cores well.

3
1
Silver badge

Re: Gamers?

I can't remember the exact words, unfortunately, but he did say something about it during a TV interview with Sue Lawley back in the late 80s. It was in a discussion where he explained Microsoft philosophies such as "new releases are not there to fix bugs, they are there to add features", and he mentioned either that 640K was not a barrier to software development or that 640K was adequate for the intended use of a PC. If I recall correctly, this was about the time of the release of the first extended memory boards and of a version of Lotus 1-2-3 that demanded the extra memory. It was also the time that the business unit I was in flipped to using Macs with 4 or 8 MB of memory and Excel, because 1-2-3 was creaking at the seams.

2
1
Silver badge

Re: Nobody needs more than 640K of RAM.

"Pity that Thatcher sold off Inmos."

one of my friends is the guy who wrote Occam. Even he admits that Inmos was going nowhere.

5
1
Silver badge

Re: Gamers?

"Is there even one game that benefits from having more than 4 cores?"

Anything running Direct-X 12 that is CPU bound for a start.

3
2

Re: Gamers?

It was an IBM engineer, not Bill. The 640K was a limitation of the design of the IBM personal computer. The maximum memory size was 1024K, of which IBM reserved the top 384K of addresses for I/O functions, so no, Bill had no reason to say it.

4
0
Silver badge

Re: Nobody needs more than 640K of RAM.

"Ivor Catt did some good articles in the 1980s on this."

Ivor Catt wrote some interesting articles but I wouldn't describe them as good.

His ideas got as far as the Ferranti F-100/L microprocessor which had an internal serial architecture. The trouble was, compared to the TI 9989, another military microprocessor of the era, it was treacle to liquid helium. I know because I was on a project which used both of them.

One place where Catt went very wrong was his assumption that power doesn't scale with clock speed. Another was that timing jitter wasn't fixable. With TTL and ECL there was truth in this; if you could clock an ECL circuit at 500MHz it would be hard to parallel due to timing problems, and it didn't use 10 times the power of the same circuit at 50MHz - because most of the ECL power consumption was in its analog circuitry, even at DC.

The coming of VLSI and CMOS destroyed both of Catt's assumptions; it became possible to parallel 64 data lines with clock speeds in the GHz range, which he never foresaw. As CMOS power scales very roughly with clock frequency for given design rules, a good parallel one will always beat a good serial one.

It isn't a pity that Thatcher sold off INMOS, but it was a disaster that she didn't save ICL. Politicians used to chant the mantra that they couldn't pick technology winners, but for some reason that never applied to companies that made things that went bang, only to things that were slightly beyond the grasp of civil servants with degrees in Classics and a poor maths O level.

4
0
Coffee/keyboard

Re: Gamers?

44 PCIe lanes. Two x16 PCIe GPUs in SLI consume 32 of those lanes, leaving 12 lanes for only three x4 PCIe SSDs (NVMe, etc.).

Keep your 36 cores. This is a kneejerk response to Ryzen Threadripper and its 64 PCIe lanes. No serious gamer would touch this limiting hardware.
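The lane arithmetic checks out (a toy calculation over the commenter's 44-lane figure; real boards also route lanes to NICs and other onboard devices):

```python
# Allocating the CPU's PCIe lanes on the 44-lane part being discussed.
TOTAL_LANES = 44
GPU_LINKS = [16, 16]   # two x16 GPUs in SLI
NVME_LINK_WIDTH = 4    # each NVMe SSD wants an x4 link

remaining = TOTAL_LANES - sum(GPU_LINKS)
print(remaining)                     # 12 lanes left after the GPUs
print(remaining // NVME_LINK_WIDTH)  # room for 3 x4 NVMe drives
```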

10
0
Silver badge

Re: Gamers?

Doesn't the support chipset provide additional lanes for lower-priority stuff?

0
2
Anonymous Coward

Re: Gamers?

Well given the fact that most games don't need more than 4 cores aside, they tend to bolt on the following background tasks...

Except it's cheaper and easier to do most of that with another machine!

I've got an old clunker with a dual core AMD something or other in it, and a lump of ram which deals with the day to day of torrents, media streaming etc etc. It's pretty much always on, uses very little power, and produces even less noise.

If I wanna play a game, then I'll spin up one of the big boys with a good GPU, play the games, then turn it off again, leaving the old clunker still streaming and torrenting.

4
1
WTF?

Re: Gamers?

Many games now support eight cores (at least).

The first one I got dates from 2011.

The "no more than four cores for games" thing is a total myth.

1
1
Silver badge

Re: Gamers?

Sorry, but your "facts" are obsolete.

These days you need at least 6 for the game and one (better 2) for the OS... so eight-core and six-core systems are the best, in general (unless playing WoT).

1
0
404
Silver badge

Re: Nobody needs more than 640K of RAM.

'The government never should have let the public own computers..'

My Dad, to his son with a career in infosec, on why he didn't need to know anything about protecting himself/identity or get on the internet*.

*yet oddly not so dedicated to his beliefs as to stop calling that son on his copper-line push-button corded phone to look something up on that same evil should-be-banned internet for him...

4
0

Re: Gamers?

Many of those lanes will also be consumed by NIC and other on-board devices. It's probably the same two x16 and one x8 setup Intel has rinsed and repeated for a decade - still wondering when they'll get the memo that people want general purpose slots to plug items into their general purpose computer...

2
0
Silver badge

Re: guy who wrote Occam

Did Tony Hoare write Occam, or design it or just write papers about it?

If Inmos was going nowhere it was due to fixation on Intel and lack of investment in tech in the UK, where companies relied on military or BT spending and were increasingly owned/controlled by asset strippers or bean counters with no vision.

0
0
Silver badge

Re: power doesn't scale with clock speed.

Yes, power consumption is non-linear with clock: dynamic power scales with frequency and with the square of voltage, and higher clocks tend to demand higher voltage. Higher speeds have been achieved by lowering the operating voltage and also shrinking the (related) gate area to reduce capacitance. That's partly why 14nm isn't 14nm in the sense that 90nm is 90nm. Not all aspects have been scaled down.

That's why, in the last 15 years, the number of cores and the architecture, rather than the actual clock, have been the biggest change.
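The frequency/voltage relationship above can be sketched with the standard dynamic-power estimate P ≈ a·C·V²·f (the capacitance and voltage figures below are invented for illustration):

```python
def dynamic_power_watts(c_farads: float, v_volts: float, f_hz: float,
                        activity: float = 1.0) -> float:
    """Classic CMOS dynamic power estimate: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts ** 2 * f_hz

# Doubling the clock alone doubles power (linear in f), but if the higher
# clock also demands 1.2x the voltage, the V^2 term compounds the cost.
base = dynamic_power_watts(1e-9, 1.0, 2e9)    # 2 GHz at 1.0 V
pushed = dynamic_power_watts(1e-9, 1.2, 4e9)  # 4 GHz at 1.2 V
print(round(pushed / base, 2))  # 2.88x the power for 2x the clock
```

That compounding is why adding cores at a modest clock beats chasing raw frequency.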

0
0
Silver badge

Re: Parallel data lines

Did you re-read Catt lately, or try to design a motherboard?

The issue isn't on chip (Catt wasn't espousing the F100L, which was rubbish) but BETWEEN chips. PCB design of CPU to RAM is a horror story at high clocks and wide buses.

ICL was moribund long before Inmos. The UK was first with commercial computing, but by the 1960s was destroying that industry, just as it destroyed its consumer electronics. Read "The Setmakers".

0
0
Silver badge

Re: Nobody needs more than 640K of RAM.

> "I think there is a world market for maybe five computers."

Yes, he did say that. Given the cost of building those computers in 1943 and the number of companies and governments that could afford it at that time he was correct.

2
0

Re: Nobody needs more than 640K of RAM.

No one needs a computer with more than 16 megaliths.

Builders of Stonehenge.

(disclaimer : I don't really know if they said that)

0
0
Silver badge

Re: Gamers?

Even with Ryzen you'll see better performance in games, but it particularly shines when streaming or recording too. Having more cores just generally keeps things a lot smoother.

The problem I increasingly have with Intel isn't cores, it's artificially locking board functionality behind paywalls purely to market them as different models. That's why my next CPU will be AMD. Right now I've got an i7-6700K, which is no slouch for video processing, but there's little reason to head back to Intel and pay the premium.

0
0
Silver badge

Re: guy who wrote Occam

"Did Tony Hoare write Occam, or design it or just write papers about it?"

None of the above. Tony Hoare (now Professor Sir C. A. R. Hoare) originated the theory of Communicating Sequential Processes, which was the foundation of the transputer concept. He is listed as "the inspiration for the occam programming language". David May created the architecture of the transputer and the development of Occam is not credited other than to "Inmos". However my friend was the person who wrote the Occam compiler.

"If Inmos was going nowhere it was due to fixation on Intel and lack of investment in Tech in UK, where companies relied on Military or BT spending and increasingly owned / controlled by asset strippers or bean counters with no vision."

I'm not convinced by the above explanation. Thorn EMI had underestimated the scale of investment needed and didn't realise until too late that booming transputer sales had been achieved by shipping as much product as possible while not investing in development. It was a slightly cynical exercise in making the company look a bargain for investors. My friend blamed the point-to-point link technology as the bottleneck.

If you are interested in a potted history, including the financial, political and management cock-ups see the Inmos Legacy page by Dick Selwood on the Inmos web site.

0
0

Software video encoding typically produces superior quality for a set bitrate, whereas GPU video encoding is quick and dirty.

4
17
Silver badge

"Software video encoding typically produces superior quality for a set bitrate, whereas GPU video encoding is quick and dirty."

Uhm, no. Hardware encoding is usually better quality as it does the exact same thing but is much faster and therefore can use more iterations...

6
1
Silver badge
Thumb Down

Uhm, no. Hardware encoding is usually better quality as it does the exact same thing but is much faster and therefore can use more iterations...

Uhm, double no. Video encoding is almost always a three way trade-off between speed of encoding, visual quality of the outcome and bitrate of the outcome.

Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software. Especially so in consumer hardware encoders, which are small independent dedicated pieces of silicon in the CPU/GPU.

Now, this is dead easy to see because of CRF (Constant Rate Factor) in x264. You can tell an encoder that you want the visual quality of the outcome to the level indicated. It is trivial to produce one encoding using x264 and one encoding using a hardware encoder, both with the same CRF setting. The outputs will be visually comparable in quality terms, but the hardware encoded video will be larger in size.

So hardware encoders: faster output, same visual quality, higher bitrate. These are lower "quality" videos than a software encoder would produce, for a given meaning of "quality". For "scene" releases, no-one is using hardware encoders, because they produce lower quality videos.
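The size difference at equal CRF is easy to quantify once you convert file size to average bitrate (the sizes below are invented for illustration, not measurements):

```python
# Average bitrate of a finished encode, in kbit/s.
def avg_bitrate_kbps(size_bytes: int, duration_s: float) -> float:
    return size_bytes * 8 / duration_s / 1000

DURATION = 600                  # a 10-minute clip
software_encode = 450_000_000   # e.g. x264 at some CRF (hypothetical size)
hardware_encode = 700_000_000   # hardware encoder, same CRF (hypothetical size)

print(round(avg_bitrate_kbps(software_encode, DURATION)))  # 6000 kbit/s
print(round(avg_bitrate_kbps(hardware_encode, DURATION)))  # 9333 kbit/s
```

Same perceived quality, roughly 56% more bits: that extra bitrate is the "quality" cost the comment describes.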

4
3
Anonymous Coward

"Uhm, double no."

Uhm, Triple no. What do broadcast quality X264 and X265 codecs use? Hardware of course....

(For instance DVEO)

0
1
Anonymous Coward

"Hardware encoding is more limited in terms of codec features and options, because putting the algorithm in hardware reduces the amount of options compared to the flexibility of software"

Flexibility != quality. Given a requirement you can design a hardware codec to do whatever codec / settings you want to - it will be much faster in hardware.

"So hardware encoders; faster output, same visual quality, higher bitrate."

But therefore, for the same given encoding time, a hardware encoder will give a higher quality output / and / or at a lower bitrate.

1
0
Silver badge

What broadcasters use is not relevant to how consumer video encoding offload chips function.

You think broadcasters use one of nvenc (Nvidia), Quick Sync Video (Intel) or Video Coding Engine (AMD)? Evidently not, as you know they use high end hardware encoders like DVEO that bake the algorithm in to silicon.

I clearly stated that I was talking about consumer hardware video encoders, and I'll repeat it again: for a given bitrate, software encoders produce higher quality output than consumer hardware encoders. The only thing that consumer hardware encoders do better than software encoders is speed.

If you are arguing otherwise, and don't want to appear foolish, an hour spent reading doom9 might help.

3
0
Silver badge

But therefore, for the same given encoding time, a hardware encoder will give a higher quality output / and / or at a lower bitrate.

No, not really. The hardware encoder cannot trade encoding time for quality the way a software encoder can.

Encoders have "presets", ways of controlling how the encode works, and "levels", what features are available to use in the targeted decoder. Eg, streaming to a STB you might have level 5.1 content, but streaming to a mobile you might have level 3 content.

Software encoders tend to have many presets to determine how much prediction/lookahead to use in encoding a frame. The more lookahead you use, the more efficient the encoding can be, and the smaller each frame can be whilst still encoding the same visual quality. Therefore, in software encoders you can optimise your encode to give the lowest bitrate for the chosen quality. Most videos that are made for distribution are encoded using the preset "veryslow", because this reduces the file sizes significantly at the expense of a lot of speed.

Consumer hardware encoders don't do this. They have short lookaheads, which keeps the speed high. They use fixed-length GOPs (I-P-B-B-P...), whereas x264 will use irregular ones (better quality, better compression). You can't really make it go slower with higher quality per bit (although you can make it go faster with lower quality per bit).

2
0
Silver badge

Yes and no

We set up a company using GPUs and CPUs on general-purpose servers to encode/transcode movies/series for IP-TV.

It worked like a charm, for a mere fraction of the cost.

1
0
Silver badge

Re: Yes and no

Your clients probably aren't so interested in overall quality, so they're willing to sacrifice quality for speed (and thus turnover). OTOH, if you were say a BluRay mastering firm with a more generous time budget, you'd probably take a different approach.

Also, historically, GPUs are less suited for a job like video encoding because the balance of quality and speed produces workloads that are less conducive to parallelization (think divergent decision making that can hammer memory or spike the workload).

1
0
Anonymous Coward

The Intel i(2N+1) will cost you N times as much as its predecessor and will be (1 + 1/N) times faster.

25
0
Anonymous Coward

Intel's Core i9 revealed to reach 36 cores. Not.

Execution threads != cores, especially the way Intel implements them. For most workloads I care about, the benefit of these threads is at best modest, and at worst negative.

The more important issue is that these 18 cores are supposed to see the main memory through just four memory channels shared between them. Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage. Cramming more and more processing elements at the end of a thin straw connecting you to the memory system is not a solution; there must be a more sensible way of using these transistors.

I am sure these CPUs will perform fantastically well on a few carefully-selected benchmarks and will look amazing in demos. For real-world usage, you'd be better off with a quarter of the CPU cores and a few extra bucks in your pocket.
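The "thin straw" claim can be put in rough numbers (DDR4-2666 and four channels are assumptions for this class of part, not figures from the article):

```python
# Peak theoretical bandwidth of one DDR channel: megatransfers/s x 8-byte bus.
def channel_bandwidth_gbs(megatransfers: int, bus_bytes: int = 8) -> float:
    return megatransfers * bus_bytes / 1000

CHANNELS = 4
CORES = 18

total_gbs = CHANNELS * channel_bandwidth_gbs(2666)
print(round(total_gbs, 1))          # ~85.3 GB/s for the whole package
print(round(total_gbs / CORES, 1))  # ~4.7 GB/s per core if all pull at once
```

A handful of streaming cores can consume most of that 85 GB/s, which is the saturation point the comment is making; the rest of the cores only help if their working sets stay in cache.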

43
2
Silver badge

Re: Intel's Core i9 revealed to reach 36 cores. Not.

I agree wholeheartedly. AMD had a range of chips that shared the memory system between pairs of cores; it was called FX, and it was a pile of shit compared to its predecessor the Phenom II, ESPECIALLY for gamers.

3
4
Bronze badge
Meh

Re: Intel's Core i9 revealed to reach 36 cores. Not.

Lastly, i9s are ridiculously expensive, so more GPU capacity may be better value, in part because GPUs may be better for parallel signal processing.

Yes, most of those i9 cores will probably choke unless reserved for only 2 hyper-threads mostly working with code and data in the core's L1 cache; the more context switches and L1 cache misses, the slower the code will run!

3
0

Re: Not an asterisk on the price tag.

Memory channels and cache considerations aside, just the price tag would have me running to Xeon CPUs that could fit in dual socket boards supporting buffered ECC RAM. If my data is worth that much to crunch it's worth more to do it right.

8
1
Silver badge

Re: Intel's Core i9 revealed to reach 36 cores. Not.

FX outperformed Phenom II.

Although the improvement wasn't as high as the gaming community would have wanted.

Tantrum ensued.

1
0
Silver badge

Re: Intel's Core i9 revealed to reach 36 cores. Not.

"Given that you can saturate this memory subsystem with just two cores, and will almost certainly saturate it with half a dozen, the benefits of having another twelve cores sitting around are questionable for any real-world usage"

That's what the large chunk of on-CPU cache memory is for.

0
2

Average use case

From my point of view there's the 'bang for your buck' factor to consider as well, at least for the 'average' home user.

I have a desktop I game on, stream my rubbish gaming occasionally and do all the things gamers do while playing. Granted, I don't have much time to game these days, but more cores/threads make that experience smoother, and I found the recent AMD Ryzen 5 1600 (OC'd to 3.85GHz) to be a good match, especially considering the relatively low price.

Would I like an 18 Core/36 Thread i9? Probably. Could I justify the significant extra cost for my, probably quite common, use case? No. Same goes for the Ryzen 7 1700/1800 though - the extra cores don't add up to a useful performance boost for the price, in my use case.

As always your mileage may vary - but the real winner of all this is the consumer. Actual competition between Intel and AMD is a GOOD thing. Whichever camp you prefer.

25
0
Silver badge

Re: Average use case

I love competition. Do you really think Intel would release these if not for the AMD Ryzen Threadripper? I can't wait for actual benchmarks from independent testers on both the i9 and Threadripper. These are obviously niche products, but it puts pressure on the prices for mainstream products, which means our wallets win.

We need to remember how good of a design Ryzen is. Rumors are the yields of the Ryzen are great. But the beauty of the design is that AMD can link cores together in a mesh. So when Intel needs a 16 core CPU, they have to make a large one. And the larger the die, the lower the yields. When AMD needs to make a 16 core CPU, they just make two 8 core ones and mesh them together. I can buy a 16 core Threadripper for $999, or a 10 core i9 for $999. The choice is easy. But the best thing is I actually have a choice. Intel must copy AMD's mesh design. But even if Intel started today, it would still take over a year to get to market.
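The yield argument can be illustrated with a toy Poisson defect model (the defect density and die areas below are invented; real fabs use more elaborate models):

```python
import math

def die_yield(defects_per_cm2: float, area_cm2: float) -> float:
    """Toy Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

def wafer_area_per_part(die_area: float, dies_needed: int, d: float) -> float:
    """Expected wafer area burned per finished part built from good dies."""
    return dies_needed * die_area / die_yield(d, die_area)

D = 0.5  # defects per cm^2 (made up)
monolithic = wafer_area_per_part(4.0, 1, D)  # one big 16-core die
chiplets = wafer_area_per_part(2.0, 2, D)    # two 8-core dies linked together

print(round(monolithic, 1), round(chiplets, 1))  # 29.6 vs 10.9 cm^2 of wafer
```

Because any two good small dies off the wafer can be paired, the big die pays the exponential yield penalty alone, which is the cost advantage the comment attributes to AMD's approach.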

The next thing I hope is that the Vega video card is a winner. We need to put pressure on NVidia's prices now. I love competition: lower prices and better products. What is not to like?

9
0
Silver badge

$276 for top of the line i9 versus $1700 for the i7? You sure?

2
8
Anonymous Coward

The $276 figure is the premium you pay above the price of the i7.

12
0

36 cores at 4.2 GHz?

No, it's 18 cores at 2.6GHz. You only get to the rarefied heights of 4+GHz when only a few of the cores are active.

32
0


The Register - Independent news and views for the tech community. Part of Situation Publishing