Intel introduces Sandy Bridge chippery

Intel took the wraps off its new Sandy Bridge microarchitecture Monday morning — now officially branded as the 2nd Generation Intel Core Processor — revealing a number of notable improvements over its current Nehalem-based processor line, including what the company claims are greatly improved on-chip integrated graphics. …

COMMENTS

This topic is closed for new posts.
  1. Mark76

    Sorry

    But I really can't help thinking of the old Absolutely sketches about that Scottish town council every time I see "Sandy Bridge"

    1. Simon Harris

      Sorry, sorry

      but that was Stoneybridge...

      Anyway, I can't help thinking that a Sandy Bridge isn't a very good idea - wouldn't it get eroded rather quickly and collapse?

    2. Anonymous Coward
      Anonymous Coward

      "Users want tae compress their vido in seconds."

      "It's VID-AY-O!"

      Yes, all keynotes should be presented in Stoneybridgese.

  2. Anonymous Coward
    Boffin

    Crap-quality Badaboom/Avivo video converter, etc.?

    "users want video encoding to take place in seconds, not minutes, and how he claims that Sandy Bridge's video-transcoding capabilities will deliver that level of performance."

    Actually, what users want is x264-type high-quality, High Profile, 16:9 HD 1920×1080, Level 4.1 encoding...

    None of this crap GPU Baseline/Main Profile, iPad 3:2, SD-and-lower output as your main/best option in 2010/11. Will this internal video-transcoding ASIC give us x264 visual quality and flexibility, or more of the same GPU-type crap video quality at the same bit-rate we get from Nvidia's CUDA/Badaboom and AMD's Avivo video converter? Even the PS3's Cell-assisted encoder from Fixstars Corporation was crap and couldn't provide x264 High Profile visual quality or AVC/H.264 settings tweakability.
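
    For reference, the settings being asked for map straightforwardly onto libx264's own API. A minimal sketch (the preset and CRF choices are illustrative assumptions, and nothing here is something Intel has promised for its transcode hardware):

    ```c
    /* Sketch: configuring libx264 for High Profile, Level 4.1, 1080p.
     * Preset and CRF values are illustrative assumptions. */
    #include <stdint.h>
    #include <x264.h>

    x264_t *open_1080p_high_encoder(void)
    {
        x264_param_t param;

        x264_param_default_preset(&param, "slow", NULL); /* quality-oriented preset */
        param.i_width          = 1920;
        param.i_height         = 1080;
        param.i_csp            = X264_CSP_I420;
        param.i_level_idc      = 41;             /* H.264 Level 4.1 */
        param.rc.i_rc_method   = X264_RC_CRF;    /* constant-quality rate control */
        param.rc.f_rf_constant = 18;             /* illustrative CRF value */
        x264_param_apply_profile(&param, "high");

        return x264_encoder_open(&param);        /* NULL on failure */
    }
    ```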

  3. Adam Salisbury
    Unhappy

    I'll be reserving judgement

    Going on Intel's track record of graphics adapter performance (including the ones they made Vista Basic for), I'll be sticking with my AMD/nVidia for the foreseeable future.

  4. John 62

    chip baker?

    I like that term!

    Actually, AMD was implicitly described as a chip baker, but it doesn't bake silicon anymore; GlobalFoundries does that.

  5. Giles Jones Gold badge

    Revolutionise?

    I keep hearing how bits of hardware will revolutionise the computer world, but it's largely untrue.

    These are just minor iterations and speed increases. In effect it's just comparable to the day they added an FPU into a processor.

    The addition of an FPU made certain operations faster, but unless you were doing ray tracing you probably didn't notice. Processors were so slow back then (25MHz) that a dedicated floating-point unit made sense. Processors are much faster now.

    So will a GPU inside a CPU really make that much difference? Maybe it will be cheaper: a single fan for both the CPU and GPU. But what about multiple displays, or dual cards linked for performance?

    I think the only revolutions are in computer form factor and software. The hardware used is largely secondary.

    1. Nigel 11
      Boffin

      Bandwidth matters

      Obviously it depends on how good the on-chip GPU is.

      However, an on-chip GPU is connected to the rest of the chip with "wires" millimeters at most in length. Speed-of-light latency: a few picoseconds. A GPU that's separated off on PCI Express is at a remove of several centimeters. Latency: at least twenty times worse. Bandwidth: much harder to maintain. Latency is the speed of light at work; it can't be finessed by any sort of engineering.
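
      A quick back-of-envelope check (the path lengths are assumptions for illustration, not figures from the article): light covers roughly 0.3 mm per picosecond, so a few millimeters on-die costs tens of picoseconds while a round trip over a PCI Express slot costs hundreds.

      ```c
      /* Back-of-envelope speed-of-light latency comparison.
       * The 5 mm and 150 mm path lengths are illustrative assumptions. */
      #include <stdio.h>

      int main(void)
      {
          const double c_mm_per_ps = 0.3;   /* light: ~0.3 mm per picosecond    */
          const double on_die_mm   = 5.0;   /* assumed on-chip wire length      */
          const double pcie_mm     = 150.0; /* assumed trace to a discrete card */

          double on_die_ps = on_die_mm / c_mm_per_ps; /* ~17 ps  */
          double pcie_ps   = pcie_mm   / c_mm_per_ps; /* ~500 ps */

          printf("on-die: %.0f ps, PCIe: %.0f ps, ratio: %.0fx\n",
                 on_die_ps, pcie_ps, pcie_ps / on_die_ps);
          return 0;
      }
      ```

      Signals in real interconnects propagate slower than c, so these are lower bounds; the twenty-times-or-worse ratio holds either way.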

      There's a biological analogue. Our eyes are as close to our brains as nature can arrange. Nerves are quite slow and bulky: there's a penalty for eyes on stalks, or for putting the brain in a safer location deep inside the torso. Which is why brains are perilously exposed on the end of necks: better visual bandwidth.

  6. Mark 65

    Socket to 'em

    Yep, that really makes me want to go out and buy a PC knowing full well the socket architecture will be redundant in less than 12 months. Surely they could give the things more life? I know, I know, the price of progress. Gets on your tits after a while though.

  7. Michael 82
    Paris Hilton

    All this and?

    The graphics will still be siht!

    Paris, she always looks good in hi-res but not on this shady-bridge...

  8. Urs Keller
    WTF?

    "Swiss Army processor" ?!?!

    What is so Swiss Army-ish about this processor? Could it be more "Swiss Army Knife"-ish?

  9. Henry Wertz 1 Gold badge

    encoders and GPGPU

    @Giles Jones: having an FPU still makes sense, or math operations would be pretty glacial -- it's just that the CPU has it built in now. I do agree with the main point, though: having a GPU built in is really not revolutionary compared to having a discrete one. It does the same thing, and the integrated ones tend to be poor compared to discrete ones too (good enough for a lot of users, but nothing to get excited about).

    @AC re: "none of this crap GPU baseline/main profile" etc.: the GPU is pretty general-purpose. If the current encoders are crap, that's down to the programmers; it's not some inherent limitation of the GPU. GPGPU ("General Purpose GPU") computing is particularly promising for getting arbitrary encoders running GPU-accelerated, because it should allow almost unmodified code to be compiled for the GPU, meaning almost any encoder could be converted with little effort. (The "almost": at the moment the compiler doesn't auto-parallelize, so loops that can be split up to run efficiently on the GPU have to be flagged -- but those flags are ignored, like comments, if you compile for the CPU instead, so the modified code still compiles and runs on the CPU as well.)
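
    A minimal sketch of what that flagging looks like, assuming an OpenACC-style directive compiler (neither the article nor the comment names a toolchain; OpenACC is just one example of the approach): the pragma marks the loop for GPU offload, and a compiler that doesn't understand it ignores the line and emits an ordinary CPU loop.

    ```c
    /* Sketch of directive-based GPU offload (OpenACC used as an example).
     * A non-OpenACC compiler ignores the pragma and builds a plain CPU loop. */
    #include <stddef.h>

    /* toy stand-in for an encoder inner loop: scale and clamp a block of pixels */
    void scale_pixels(unsigned char *dst, const unsigned char *src,
                      size_t n, int num, int den)
    {
        #pragma acc parallel loop copyin(src[0:n]) copyout(dst[0:n])
        for (size_t i = 0; i < n; i++) {
            int v = ((int)src[i] * num) / den;
            dst[i] = (unsigned char)(v > 255 ? 255 : v);
        }
    }
    ```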

  10. Anonymous Coward
    Anonymous Coward

    So what happens when I want to do some OpenCL shit?

    Do I Turbo Boost the CPU, or do I Turbo Boost the GPU? Or both? Or neither?
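
    For what it's worth, OpenCL at least lets you say explicitly which device a kernel runs on. A minimal enumeration sketch using the standard OpenCL 1.x API (nothing Sandy Bridge-specific; whether Turbo Boost kicks in is the hardware's business, not the code's):

    ```c
    /* List OpenCL CPU and GPU devices so a kernel can target one explicitly.
     * Plain OpenCL 1.x API calls; no vendor-specific extensions assumed. */
    #include <stdio.h>
    #include <CL/cl.h>

    static void list_devices(cl_platform_id platform, cl_device_type type, const char *label)
    {
        cl_device_id devices[8];
        cl_uint count = 0;
        if (clGetDeviceIDs(platform, type, 8, devices, &count) != CL_SUCCESS)
            return;
        for (cl_uint i = 0; i < count; i++) {
            char name[256] = "";
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("%s device %u: %s\n", label, i, name);
        }
    }

    int main(void)
    {
        cl_platform_id platforms[4];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(4, platforms, &num_platforms);
        for (cl_uint p = 0; p < num_platforms; p++) {
            list_devices(platforms[p], CL_DEVICE_TYPE_CPU, "CPU");
            list_devices(platforms[p], CL_DEVICE_TYPE_GPU, "GPU");
        }
        return 0;
    }
    ```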
