Intel confirms it'll release GPUs in 2020

Intel has confirmed it will start to sell discrete GPUs in 2020. News of Chipzilla's plans appeared in a post by analyst Ryan Shrout, who said Intel CEO Brian Krzanich outlined the move at an analyst event last week. Intel confirmed Shrout's piece, telling us "We're pleased to confirm our first …

  1. Tom 64

    Beat them on packaging?

    Not sure about this. nVidia perhaps yes, but AMD has been doing very well in this regard recently. For example, the Radeon Fury, Vega GPUs with HBM, and EPYC/Threadripper on the CPU side. Correct me if I'm wrong, but Intel are only just getting into this game and they had to use an AMD GPU die to do it.

    1. Dave 126 Silver badge

      Re: Beat them on packaging?

      Yet it was an Intel packaging technology, EMIB, that allowed them to combine an AMD GPU with an Intel processor:

      https://www.anandtech.com/show/12003/intel-to-create-new-8th-generation-cpus-with-amd-radeon-graphics-with-hbm2-using-emib

    2. Anonymous Coward

      Re: Beat them on packaging?

      Intel's biggest problem will be delivering drivers that don't suck, something they've had a lot of problems doing in the graphics world even at lower performance levels. They can make the best GPU hardware there is but if the drivers don't work people won't buy them.

      1. Ken Hagan Gold badge

        Re: Beat them on packaging?

        "Intel's biggest problem will be delivering drivers that don't suck, "

        They only have to suck less than the other two. I've seen plenty of sucky drivers from both over the years.

        "... if the drivers don't work people won't buy them."

        I think the phrase here is "citation needed". There are a handful of people who have almost religious fervour for grovelling over the latest high-end hardware. They care, but they are only 0.0001% of the market.

  2. Anonymous Coward

    Always good to have competition to rein in that nVidia/AMD duopoly

    I remember 3dfx and Matrox.

    1. kuiash

      Re: Always good to have competition to rein in that nVidia/AMD duopoly

      And the rest: Rendition, S3, 3DLabs, Cirrus, Tseng, and then there were all the optimisers like VideoLogic (still around in a different guise), Guillemot and thousands more.

      The great shake-out did for them. A combination of the move to 3D accelerators (well, more than DMA engines with an ALU in the middle) and the eventual (and inevitable) death of the board/optimiser companies.

      Building GPUs is now a job for 1,000 people. Only the 5 richest kings in Europe can afford it.

      There are many other GPU creators out there. Qualcomm, ARM, Imagination. They just don't play in the PC space.

      What Intel got their hands on a few years back was the engineering talent from the once-mighty 3DLabs (ZiiLabs -> Intel). Maybe they've done something with it, but as a company Intel are becoming the new IBM of the '80s. Slow-moving, single-minded and inflexible.

      The last 5 years have seen Intel best everybody on process. The industry as a whole is having problems scaling.

      1. Anonymous Coward

        Re: Always good to have competition to rein in that nVidia/AMD duopoly

        ...Intel are becoming the new IBM of the '80s. Slow-moving, single-minded and inflexible.

        What do you mean "becoming"? Intel have always been all of those things.

      2. Dave 126 Silver badge

        Re: Always good to have competition to rein in that nVidia/AMD duopoly

        > There are many other GPU creators out there. Qualcomm, ARM, Imagination

        And Apple and Google too, the former having ended its relationship with Imagination Technologies. Google and Apple are likely looking at mobile GPUs that do more than shade polygons and can be put more efficiently to other tasks such as DSP and object recognition.

        1. nerdbert

          Re: Always good to have competition to rein in that nVidia/AMD duopoly

          Google isn't doing graphics. Look at the papers on the Tensor chips they've been doing and you can see that while the architectures are similar (SIMD machines with massive high bandwidth memory access), there are distinct differences between a Tensor machine and a graphics card. But from just the papers Google has published you can get an estimate of what their Tensor chips run, and a reasonable estimate is that those chips alone, not counting the HBM, assembly, and all else, run much more than a maxed out 1080 Ti card. Google may be large as companies go, but they're still not large enough to get the massive discounts you get from volume Si production.

          1. MonkeyCee

            Re: Always good to have competition to rein in that nVidia/AMD duopoly

            "run much more than a maxed out 1080 Ti card."

            I wouldn't compare a 1080 Ti with a TPU. No-one who is planning on using one would substitute the other. Even Titans and Vegas are not really comparable; their lack of precision and their cost aren't really on the same scale.

            Comparing a TPU with a Tesla is more viable, since they would be used for equivalent workloads.

            You could possibly use half a dozen Titans to do Tesla like stuff, but why would you bother?

            In general people either have the budget, so want the best in the smallest form factor (TPU or Tesla), or don't and then want the best bang for buck (retail GPU).

            It will be interesting to see what Intel come out with, whether they are aiming at the retail or industrial end of the spectrum.

            1. Cederic Silver badge

              Re: Always good to have competition to rein in that nVidia/AMD duopoly

              I had to read it three times to be sure, but he's actually talking about cost.

              It would be lovely to know how Tensor stacks up on the processing capability front.

      3. CheesyTheClown

        Re: Always good to have competition to rein in that nVidia/AMD duopoly

        The big difference between desktop and mobile GPUs is that a mobile GPU is still a GPU. Desktop GPUs are about large-scale cores, and most of the companies you mentioned in the mobile space lack the in-house skills to handle ASIC cores. When you license their tech, you're usually getting a whole lot of VHDL (or similar) bits that can be added to another set of cores. ARM, I believe, does a lot of work on their ASIC synthesis, and of course Qualcomm does as well, but their cores are not meant to be discrete parts.

        Remember most IP core companies struggle with high-speed serial buses, which is why USB 3, SATA and PCIe running at 10Gb/sec or more are hard to come by from those vendors.

        AMD, Intel and NVidia have massive ASIC simulators, costing hundreds of millions of dollars from companies like Mentor Graphics, to verify their designs on. Samsung could probably do it, and probably Qualcomm, but even ARM may have difficulties developing these technologies.

        ASIC development is also closed loop. Very few universities in the world offer actual ASIC development programs in-house. The graduates of those programs are quickly sucked up by massive companies and are offered very good packages for their skills.

        These days, companies like Google, Microsoft and Apple are doing a lot of ASIC design in-house. Most other newcomers don't even know how to manage an ASIC project. It's often surprising that none of the big boys like Qualcomm have sucked up TI, who have strong expertise in DSP ASIC synthesis. Though even TI has struggled A LOT with high-speed serial in recent years. Maxwell's theory is murder for most companies.

        So most GPU vendors are limited to what they can design and test on FPGAs, which is extremely constraining.

        Oh... let's not even talk about what problems would arise for most companies attempting to handle either OpenCL or TensorFlow in their hardware and drivers. Or what about Vulkan? All of these would devastate most companies. Consider that AMD, Intel and NVidia release a new GPU driver almost every month. Most small companies couldn't afford that scale of development, or even distribution.
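
        To give a flavour of the surface area involved, here's a minimal sketch in C of an OpenCL host program that merely enumerates GPU devices. Every call below is dispatched through the vendor's OpenCL driver (via the ICD loader), and that driver is exactly the kind of thing a newcomer has to build, ship and keep conformant. Standard OpenCL 1.2 host API only; the build step (linking against -lOpenCL) is an assumption about the toolchain.

          /* Minimal sketch: list OpenCL platforms and count their GPU devices.
             Each call below lands in a vendor-supplied OpenCL driver. */
          #include <stdio.h>
          #include <CL/cl.h>

          int main(void)
          {
              cl_platform_id platforms[8];
              cl_uint num_platforms = 0;

              if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
                  fprintf(stderr, "No OpenCL platforms found\n");
                  return 1;
              }

              for (cl_uint p = 0; p < num_platforms; p++) {
                  char name[256] = "unknown";
                  clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof name, name, NULL);

                  cl_device_id devices[8];
                  cl_uint num_devices = 0;
                  /* Returns CL_DEVICE_NOT_FOUND if the platform has no GPUs. */
                  clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

                  printf("%s: %u GPU device(s)\n", name, num_devices);
              }
              return 0;
          }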

  3. Chloe Cresswell Silver badge

    If this is their first discrete GPU, what was the i740? (Apart from crap..)

    1. Voland's right hand Silver badge

      It was crap. Without the "apart".

      Intel loves to pretend something is its first foray when, in fact, it has tried it before and f***ed it up.

    2. wabbit02

      i740 for sale

      As someone who saved up from their after-school job for an i740 I can honestly say it rates as one of the worst purchases I have ever made. Up until recently I'm sure I still had it in the loft.

      Cold day in hell before I'll ever forget that.

      1. Chloe Cresswell Silver badge

        Re: i740 for sale

        We used to use them at my first IT job; we had 4. We never ordered any more. One of them was thrown out still in its antistatic shipping box...

    3. commonsense

      I think the term GPU didn't catch on until nVidia started using it when they launched the GeForce 256. The i740 came before then, and I don't remember a successor to it. Marketing and all that.

      1. Lennart Sorensen

        The term GPU is much older than Nvidia making it popular (in 1999). The i740 came out in 1998. So GPU was a thing, and the i740 probably qualified (as a bad one), even if people tended to call them graphics accelerators at the time instead.

  4. Lee D Silver badge

    So... just as AMD are talking about putting a real GPU into the chip directly, Intel want to make a GPU that goes onto a plug-in board?

    I think they missed this boat too, which is ironic because they did make integrated graphics chipsets for over a decade before this announcement. They were just always pants.

    Honestly, guys, just make a decent GPU. 2020 will be too late. You'll probably hold the processor market for a while but you've left everything else far too late.

    To be honest, we're following the "floating point co-processor" model. At the moment GPUs are basically add-in cards on expansion slots that you have to talk to with non-standard protocols, and every one has a different instruction set. Next they'll move to a separate standardised socket on the motherboard, next to the CPU and sharing its cooling, with things like the Vulkan API determining a base instruction set. Before you know it, every chip will really be a GPU first, with a legacy CPU inside.

    AMD, for once, are following the right path, while Intel just flounder like they usually do.

    1. Anonymous Coward

      I'm not convinced the AMD route is the right way to go, to be honest. I feel they're clutching at straws a bit in an attempt to differentiate themselves from Intel/nVidia.

      While a single die makes sense for the mobile sector (including laptops), external cards for the desktop/server markets make much more sense. People (read: gamers) are much more likely to upgrade their GPUs than their CPUs (which will usually involve new motherboard and RAM too). Having add-in cards also makes it far easier to run multiple GPUs in parallel, and, again, update those cards as technology improves.

      1. Lee D Silver badge

        " People (read: gamers) are much more likely to upgrade their GPUs than their CPUs"

        Which is why I see the eventual evolution towards a standardised "GPU socket" rather than a PCIe slot.

        Every PCIe card at the moment has different height, thickness, cooling and power requirements. By putting it straight on the motherboard, you can standardise it, bring it into the standard cooling and power arrangements, keep it close to the CPU and RAM, and separate it out from PCIe peripherals. You can then upgrade it individually.

        But over time we'll hit a limit (e.g. X number of PCIe x16 lanes) and then they'll just get folded into the CPU directly, and upgrading them individually will hardly matter. All that will happen then is you'll get "dual-GPU" boards and expansions that include TWO such sockets. Four such sockets. Etc. And you end up with the same "I can have 12 10,000-core GPUs", but you also get 12 controlling CPUs for free by doing so.

        If anything, I see the CPU disappearing into the GPU, not the other way around. People will still buy a dozen GPUs for their mining rig; they just won't care that there are also a dozen bog-standard CPUs issuing the commands to them inside the same chip.

        1. ilmari

          Oh, like MXM?

      2. Korev Silver badge

        "People (read: gamers) are much more likely to upgrade their GPUs than their CPUs (which will usually involve new motherboard and RAM too)"

        Also, "home" CPUs haven't really got much faster in the past half decade or so; GPU speed has increased massively.

      3. MJB7

        Single die vs plugin card

        Sure *gamers* will upgrade their GPU - but the really *big* market for GPUs is not processing graphics!

        We have pretty much run out of steam improving single-threaded performance. Multiple cores are the only way to improve performance. Once you start doing that at scale, you can drastically reduce the cost of each core by not trying to squeeze every last drop of performance out of it (you also design out Spectre et al). Once standard desktop software needs a GPU to perform well, everybody is going to want one - and they won't want it on a separate card.

        The commentard who compared GPUs to floating point hardware had it exactly right.

        1. MonkeyCee

          Re: Single die vs plugin card

          "Sure *gamers* will upgrade their GPU - but the really *big* market for GPUs is not processing graphics!"

          Well, nVidia disagree with you there.

          2019 Q1 results (Jan 2018 - Mar 2018)

          Growth is change from the same period last year.

          Gaming revenue: $1.7B, 68% growth

          Datacenter: $700M, 71% growth

          Professional visualisation: $250M, 22% growth

          Automotive: $145M, 4% growth

          Crypto miners: $290M.

          So roughly two thirds of nVidia revenue (not just GPUs) is *only* processing graphics. Automotive is also processing graphics, but doing other stuff too, same as data centre. So counting dual use as not doing graphics, the majority use case is still crunching numbers for graphics.

          "Once standard desktop software needs a GPU to perform well"

          Did I miss something? Isn't that ALREADY the case, which is why CPUs have had a GPU on them for a decade or more?

          The majority of GPUs I own are not used for graphics. But I'm a pretty odd case, and most of my usage of them doesn't need a lot of grunt from the rest of the system, as they are being run on hardware several generations behind (2GHz Xeons, DDR3, PCIe 2.0 x4 slots); they get the same performance as on shiny new kit, since the bottleneck is still on the card itself.

    2. Dave 126 Silver badge

      > AMD, for once, are following the right path, while Intel just flounder like they usually do.

      Intel are playing the on-package GPU game too. See EMIB and Intel CPUs with AMD GPUs.

    3. Anonymous Coward

      Sounds good in theory, till you have to upgrade your motherboard to get a new GPU simply because it can't meet the new GPU's power needs, or Intel/AMD/Nvidia have decided to change the GPU socket's pin count.

    4. Peter Gathercole Silver badge

      @Lee

      I would go one stage further. I can see the GPU becoming not just a co-processor on the same die, but execution units in a superscalar processor. Once this happens, writing code for the GPU will be much easier, as compilers will be able to compile code for it directly, rather than via the rather haphazard methods being used now.
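
      As a rough illustration of where things stand today, below is a minimal sketch in C of the directive-based halfway house: OpenMP target offload, where the compiler already generates the device code from an ordinary loop. The pragma is standard OpenMP 4.5+; the build flags needed to actually target a GPU (e.g. enabling an offload backend) vary by toolchain, so treat that part as an assumption. Without offload support, the loop simply runs on the host.

        /* Minimal sketch: y = a*x + y offloaded via a standard OpenMP target
           directive. The compiler emits the host code and the device kernel
           from the same C source. */
        #include <stdio.h>

        static void saxpy(int n, float a, const float *x, float *y)
        {
            #pragma omp target teams distribute parallel for map(to: x[0:n]) map(tofrom: y[0:n])
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        int main(void)
        {
            enum { N = 1024 };
            static float x[N], y[N];

            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
            saxpy(N, 3.0f, x, y);

            printf("y[0] = %.1f\n", y[0]);   /* expect 5.0 */
            return 0;
        }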

  5. BinkyTheMagicPaperclip Silver badge

    Hopefully not just the high end..

    I'm hoping these cards will also cover the lower end, and be open source. Yes, Intel graphics are somewhat underpowered compared to Nvidia and AMD, but they have decent documentation and open source drivers enabling a large amount of open source support, especially on the BSDs.

    For OSes that reject binary blobs (hello, OpenBSD), Nvidia support died not far off a decade ago - Xorg is still using the nv driver. AMD support is quite up to date, and Intel support is doing quite well. Given a lot of developers use laptops, it would be useful to have a discrete GPU version of those chipsets for use with a CPU that doesn't have a GPU built in.

    1. Anonymous Coward

      Re: Hopefully not just the high end..

      Enthusiasts will keep splashing enthusiast cash on high end cards.

      All Intel needs to do is to conquer the budget or mid-range cards, holding an absolute price-performance advantage over similarly priced cards from both Red and Green.

      Focus on minimizing power intake and heat/noise dissipation, instead of benchmark scores.

    2. imanidiot Silver badge

      Re: Hopefully not just the high end..

      Not very likely to happen. And even if it does, Intel has the attention span of a 5-year-old with ADHD on a diet of triple-shot espressos laced with meth. Its products won't be supported long enough to get any sort of following.

      *ooohhh, shiny*

      1. Yet Another Anonymous coward Silver badge

        Re: Hopefully not just the high end..

        For real work you need CUDA, or at least OpenCL support that is as good as CUDA

        For games you need Intel to stay in a market for more than 5mins so games support the card, so gamers buy the card, so games support the card

        For home/office use the built-in Intel GPU is good enough

        Good luck Intel

  6. Anonymous Coward

    I have a Skull Canyon

    And the integrated graphics is actually quite good. So I can see they've been pumping money into it.

    1. Dave 126 Silver badge

      Re: I have a Skull Canyon

      https://www.anandtech.com/show/10343/the-intel-skull-canyon-nuc6i7kyk-minipc-review/4

      In which Intel's Iris Pro graphics are discussed. tl;dr: it should be fine for light gaming.

  7. Anonymous Coward

    I think we are going to

    see an advancement of their work on clusterability (is that a word?) of processor units that they started with Larrabee and Xeon Phi, but with GPUs rather than x86-based cores.

    The first batch will probably be very expensive, very boring, blue-shrouded stuff with big TDPs: a bunch of multi-core GPUs set up on a symmetrical bus with a 16GB bank of Optane stuck on it, aimed at enterprise.

  8. HPCJohn

    Larrabee - what goes around comes around

    Referring to Xeon Phi, remember that this descended from Larrabee, which was Intel's multi-core graphics card: https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)

    So it is kind of ironic that Intel is coming full circle here.

    One thing that is interesting: for large visualizations Intel is pushing CPU-based rendering, using OSPRay: https://www.ospray.org/

    I believe that will remain very relevant at the high end.

    Looking at Nvidia, their success in the market is not an overnight thing, or an accident. Years ago they made conscious efforts to move into the compute market, and sustained that with Nvidia centres of excellence, continued development of CUDA, NVLink etc. etc.

    So are we, in the future, going to see the rather bizarre situation of Nvidia's chips being bought mainly for compute, and Intel producing the most popular graphics chips?

    Time will tell!

  9. adam payne

    As we’ve previously stated, our intent is to expand our leading position in integrated graphics for the PC

    Your leading position is based on cheap crap that motherboard manufacturers put on cheap motherboards. That's not really a leading position you should be proud of.

  10. BinkyTheMagicPaperclip Silver badge

    Also, oh joy! Server chipsets with Intel graphics.

    I don't know if it will happen, but I can dream. Integrated BMC chipsets with an Intel graphics chipset.

    I realise they're servers, and servers don't require graphics. Even so, it would be nice to have more than a G200e with 8MB RAM running over a PCIe x1 link (slower than AGP...). Something with better acceleration and PCIe compression. Haven't checked how the more modern AST chipsets are, but if you briefly need to run even a vaguely modern desktop, the G200e is just glacially slow.

    Suspect it won't happen with specialists like AST sewing up the market.

  11. mark l 2 Silver badge

    Intel's efforts in anything other than x86 processors have not been particularly successful, and often result in them selling the business off or just abandoning it. So I don't hold out much hope that a GPU they'll take at least 18 months to release will amount to much.

  12. 89724102372714531892324I9755670349743096734346773478647852349863592355648544996313855148583659264921

    Word processing, browsing and the odd bit of Excel: the majority of people already have enough CPU for that.

    GPU grunt for cheap would revive the PC market... in a very small way.

  13. TheSkunkyMonk

    Year 2030

    AMD announces its new a$$sucker with built-in RAM

  14. Anonymous Coward

    Hah!

    "Second, Intel recently discontinued its Xeon Phi co-processor line. GPUs would be a more-than-handy replacement for the Phi."

    And here I was, actually believing Intel's statements that Phi was the best approach for doing seriously heavy and flexible SIMD in a well-known, optimised, de-facto-industry-standard instruction set.

  15. rsl

    Intel740 was their first discrete GPU

    Back in 1998, the Intel740 was their first discrete GPU.

    As with its current embedded graphics solutions, the i740 was plagued with driver bugs.

    There is no point hiring better hardware guys to build its hardware if Intel is going to keep its tradition of supporting its graphics solutions poorly, and for short periods, on the driver/software front.

    Anyway, I bet Intel is more interested in GPGPU than videogames here.

  16. Anonymous Coward

    Intel in Graphics? I think I have heard this before

    Intel has tried this a few times before. (ok more than a few times)

    Winners never quit and quitters never win. But those who don't win and don't quit are IDIOTS

    Cue the Peanuts Cartoon: "I'm really going to kick the ball this time...."
