Intel Halts Xeon Phi Accelerator Knights Hill Development

Discussion in 'Frontpage news' started by Hilbert Hagedoorn, Nov 15, 2017.

  1. Hilbert Hagedoorn

    Hilbert Hagedoorn Don Vito Corleone Staff Member

    Messages:
    48,544
    Likes Received:
    18,856
    GPU:
    AMD | NVIDIA
  2. Clawedge

    Clawedge Guest

    Messages:
    2,599
    Likes Received:
    928
    GPU:
    Radeon 570
    Raja has spoken!!!
     
    -Tj- likes this.
  3. That's pretty interesting. It was a pretty big project for a good while. I have always wondered why Intel has never gotten serious about graphics.
     
    rl66 likes this.
  4. rl66

    rl66 Ancient Guru

    Messages:
    3,931
    Likes Received:
    840
    GPU:
    Sapphire RX 6700 XT
    Maybe because the Xeon Phi is an even more "niche" product than NVIDIA's (1st place) and AMD's (2nd place) offerings... it is very good for computation, but the products from green and red are really much easier to use and much more versatile... ARM/x86/x64/custom configs... the experience they have is a real advantage over Intel, which is still pretty new at this game.

    About Intel in graphics... their IGP gets better with each release, and you can no longer say "their IGPs are so weak you can't do anything with them"... at some point I guess they will do some great stuff.
     

  5. -Tj-

    -Tj- Ancient Guru

    Messages:
    18,103
    Likes Received:
    2,606
    GPU:
    3080TI iChill Black
    Was about to say it's something to do with Raja for sure; they don't need that anymore...

    I think Raja will work some wonders in the Intel GPU world. Can't wait!
     
  6. Texter

    Texter Guest

    Messages:
    3,275
    Likes Received:
    332
    GPU:
    Club3d GF6800GT 256MB AGP
    Oh they were serious all right nearly a decade ago... what did they spend on LRB? $5 billion? Apparently hooking up a bunch of Pentiums on a single die wasn't competitive enough for graphics, even though they were so boastful of their accomplishments at first... Xeon Phi was the Larrabee salvage job.
     
  7. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    I can't say I'm surprised by this. Considering Intel's approach against Epyc (or just Epyc in and of itself), Knights Hill just doesn't make sense. And yes, I'm aware these are for servers - even in the server market, such products are too niche.

    To my understanding, this wasn't meant to be a GPU, but rather a large cluster of x86 Atom-class CPU cores on an add-in board (AIB).

    But yes, it does raise the question of why Intel didn't just take their existing GPU architecture, improve upon it, and put it on a discrete card. Despite what people think, it is actually decent; we just happen to get really underwhelming and crippled variants of it. I wouldn't run out and buy an Intel GPU for gaming, but I'd be open to one for workstations, OpenCL, and transcoding.
     
  8. rl66

    rl66 Ancient Guru

    Messages:
    3,931
    Likes Received:
    840
    GPU:
    Sapphire RX 6700 XT
    It is... but made for computing.
     
  9. schmidtbag

    schmidtbag Ancient Guru

    Messages:
    8,020
    Likes Received:
    4,398
    GPU:
    Asrock 7700XT
    That's what Nvidia's Tesla or AMD's FirePro S series are; they're still GPUs, but intended purely for parallel compute. Those still have all the capabilities necessary to do 3D rendering if you really wanted them to, without having to install any 3rd-party drivers. Xeon Phi is not a GPU; it's a cluster of CPUs. Xeon Phis are designed to run x86 code, and to my understanding, their drivers are not capable of graphics rendering (without 3rd-party software renderers). With actual GPUs, you can use something like OpenCL and it works out of the box on any hardware that supports it. Though Xeon Phi supports OpenCL, there are caveats, and Intel seems to prefer that customers use its own proprietary compilers.
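
    To make that "works out of the box" point concrete, here is a minimal OpenCL host-side sketch in C that simply enumerates whatever devices the installed runtimes expose. This is an illustrative sketch, not anyone's shipping code: the 8-entry caps are arbitrary, the Linux-style CL/cl.h header and -lOpenCL link flag are assumptions, and a Xeon Phi (with Intel's OpenCL runtime installed) would typically be reported as an accelerator-type device rather than a GPU:

        /* enum_devices.c -- list every OpenCL platform/device pair.
           Build (assumed Linux toolchain): gcc enum_devices.c -lOpenCL */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void) {
            cl_platform_id platforms[8];   /* arbitrary cap for the sketch */
            cl_uint nplat = 0;
            if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS)
                return 1;

            for (cl_uint p = 0; p < nplat; p++) {
                char pname[256];
                clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                                  sizeof(pname), pname, NULL);

                cl_device_id devs[8];
                cl_uint ndev = 0;
                if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                                   8, devs, &ndev) != CL_SUCCESS)
                    continue;      /* platform with no usable devices */

                for (cl_uint d = 0; d < ndev; d++) {
                    char dname[256];
                    cl_device_type type;
                    clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                    sizeof(dname), dname, NULL);
                    clGetDeviceInfo(devs[d], CL_DEVICE_TYPE,
                                    sizeof(type), &type, NULL);
                    /* the type is a bitfield: GPUs report GPU, and a
                       Xeon Phi would typically report ACCELERATOR */
                    printf("%s: %s (%s)\n", pname, dname,
                           (type & CL_DEVICE_TYPE_GPU)         ? "GPU" :
                           (type & CL_DEVICE_TYPE_ACCELERATOR) ? "accelerator" :
                           (type & CL_DEVICE_TYPE_CPU)         ? "CPU" : "other");
                }
            }
            return 0;
        }

    The same binary runs unchanged whichever vendor's runtime is installed; that is the portability being described above, caveats and all.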

    Keep in mind that the main difference between a many-core CPU and a GPU is how they're designed to handle calculations. GPUs are designed to process massively-parallel tasks, and they're awful at multitasking. CPUs, meanwhile, are best at multitasking rather than parallelization. This is why CPUs use things like Hyper-Threading while GPUs don't/shouldn't. This page has good visual representations of why many-core CPUs are fundamentally different from GPUs:
    https://code.msdn.microsoft.com/windowsdesktop/NVIDIA-GPU-Architecture-45c11e6d
    Scroll down to the part that says "2. Hardware Architecture". You can see in the diagrams how each CPU core gets its own dedicated L1+L2 cache and registers, while GPUs share those resources across clusters of cores instead.
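
    To put the execution-model difference in code form, here is a small hedged C sketch (my own illustration, not taken from the linked page): the CPU-style function is one heavyweight core looping over the whole array, while the GPU-style function emulates the one-lightweight-thread-per-element model; in a real OpenCL kernel the index would come from get_global_id(0) instead of a host-side loop:

        /* model_sketch.c -- CPU-style vs GPU-style vector add.
           Plain C, runs anywhere; the "GPU" side is only an emulation
           of the one-work-item-per-element model. */
        #include <stdio.h>
        #define N 1024

        /* CPU style: a few heavyweight cores (or hyper-threads), each
           iterating over a large chunk of the data. */
        static void vec_add_cpu(const float *a, const float *b,
                                float *c, int n) {
            for (int i = 0; i < n; i++)
                c[i] = a[i] + b[i];
        }

        /* GPU style: thousands of lightweight threads, each touching
           exactly one element; `i` stands in for get_global_id(0). */
        static void vec_add_gpu_style(const float *a, const float *b,
                                      float *c, int i) {
            c[i] = a[i] + b[i];
        }

        int main(void) {
            static float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

            vec_add_cpu(a, b, c, N);        /* one loop, one core */
            for (int i = 0; i < N; i++)     /* "launch" N work-items */
                vec_add_gpu_style(a, b, c, i);

            printf("c[10] = %.1f\n", c[10]); /* prints 30.0 */
            return 0;
        }

    Neither function is faster here, of course; the point is only the shape of the work: per-core chunks with private caches on the CPU side, versus a sea of tiny identical threads sharing resources on the GPU side.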
     
  10. I know that Phi was not a GPU per se, but one cannot help but think that MPP/GPGPU wouldn't be where it is without GPU tech being where it is. I guess I brought it up to muse on Intel's lack of interest in the graphics/GPU side of chipmaking.

      You old farts (and I include myself in this category) will remember the Intel i740/750 and all the fanfare surrounding it as they partnered with Real3D and C&T, and what a disappointment it was. How about the Intel "Extreme" graphics on the 800-series stuff, or the 900 series with Radeon Xpress... all crappy, always. I go back that far to illustrate the point that they have always half-assed it in the graphics department, and it seems to have bitten them slightly, but they will never admit that.

      Intel has been around longer than Nvidia or ATI/AMD and could have easily caught up to them in the GPU field. Hell, they could have easily purchased both over time. Intel's first graphics controller came out in 1982 (the Intel 82720), and neither of the other two really got going until 1995, when they still had plenty of competition in the graphics market (think Hercules/ALi/3dfx/Matrox/S3/etc.). After Nvidia came along with on-chip T&L around 2000, graphics chip design didn't really hit another major paradigm shift until Shader Model 4 stuff started rolling out (programmable unified shader cores and all this MPP/GPGPU stuff), and that was around 2006... and now Intel is badly behind in that area (and yes, they have gotten much better, but still not great; the GT1 and GT2 stuff is still awful).

      Is it an unwillingness to travel too far outside x86? They ignored ARM until recently too. I just can't help but feel like they really missed the boat on getting over on Nvidia and AMD in the graphics chip market. Larrabee was years too late, and Phi was too. Intel's Nervana (yes, they spelled it like that :rolleyes:) is running on Titan Xs until they can get their own chip made, so they are now lagging behind in the TPU/NNP market too... and that is shaping up to be the next big thing.
     
