Witcher 3 Announcement From CD Projekt Red

Discussion in 'Videocards - AMD Radeon Drivers Section' started by LtMatt81, May 15, 2015.

  1. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
I wrote about The Witcher series because The Witcher 2 was the last PC-first game and TW3 is a port to PC, and this affects the performance and optimization of the game on PC (Nvidia or AMD).

    I didn't say a word about Nvidia or AMD. :)

Speaking about AMD on consoles and on PC:

If AMD hardware in consoles were an important factor for us AMD GPU PC gamers, things would be very different in PC ports, but the facts show us that the REALLY important factor is the software.

In the software department Nvidia GPUs have the lead, and by a LARGE margin.

Better hardware (an AMD GPU) with inadequate software (the AMD driver) can't surpass the performance of worse hardware (Nvidia) with optimized software (Nvidia drivers).

Gaming experience and market share clearly reflect these facts. :(
     
    Last edited: May 22, 2015
  2. Burningcoals

    Burningcoals Guest

    Messages:
    22
    Likes Received:
    0
    GPU:
    EVGA 980 Ti SC+ @1450Mhz
I don't mind console ports done well. However, everyone was thinking that when the consoles went AMD, AMD PC users would have an advantage. I remember Nvidia forums being full of hate that Nvidia didn't get any of the new consoles. As you said, this has proven wrong, and it's really AMD's fault for not taking advantage of it. On top of that, some PC gamers love to use Xbox controllers and such; if there is one positive from a game being on consoles, it's better controller support on PC.

Nvidia is definitely winning the GPU wars these days, and I think Witcher 3 is a good example of the amount of dysfunction at AMD; this game could have had TressFX as well as HairWorks, but AMD dropped the ball.
     
  3. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
To be honest, I think AMD is focusing on DX12, Windows 10 and the 390 driver.
I've got a feeling AMD is going to own with DX12 and the soon-to-be-released 390.
     
    Last edited: May 22, 2015
  4. theoneofgod

    theoneofgod Ancient Guru

    Messages:
    4,677
    Likes Received:
    287
    GPU:
    RX 580 8GB
If turning the camera around hits my HDD, then the game is unoptimized :) That data should be cached lol...
     

  5. Deathchild

    Deathchild Ancient Guru

    Messages:
    3,969
    Likes Received:
    2
    GPU:
    -
    Can't wait for the 390X.
     
  6. DiceAir

    DiceAir Maha Guru

    Messages:
    1,369
    Likes Received:
    15
    GPU:
    Galax 980 ti HOF
Same here, dude. With HBM and the ability to utilize RAM much more efficiently in DX12, it should help a lot. I'm so excited for HBM v2 with 8GB of VRAM. If stacked memory becomes a big part of gaming, then 2x R9 390X will be king with 8GB of RAM and HBM, so you get the best of both worlds.
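To be fair, the "utilize RAM much more efficiently" part is on the game, not the driver: in DX12 the application itself is told its VRAM budget and decides what stays resident, instead of the driver juggling everything behind its back. A minimal sketch of that query, assuming a Windows 10 box with the DXGI 1.4 headers (adapter index 0 is just a placeholder; error handling trimmed):

[code]
// Sketch: ask the OS for the app's actual VRAM budget via DXGI 1.4.
// Link with dxgi.lib. Requires Windows 10.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter1;
    if (factory->EnumAdapters1(0, &adapter1) == DXGI_ERROR_NOT_FOUND) return 1;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter1.As(&adapter3))) return 1; // IDXGIAdapter3 is Win10-only

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    // LOCAL = memory physically on the card, i.e. the pool HBM provides.
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

    printf("VRAM budget: %llu MB, in use: %llu MB\n",
           (unsigned long long)(info.Budget >> 20),
           (unsigned long long)(info.CurrentUsage >> 20));
    return 0;
}
[/code]

One caveat on the 2x R9 390X dream, though: classic CrossFire mirrors data into both cards' memory, so 2x 4GB still behaves like 4GB; only DX12's explicit multi-adapter would let a game address the two pools separately.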
     
  7. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
I hope they blow us away. If not, I'll have lost my faith in them.
     
  8. DarthElvis

    DarthElvis Guest

    Messages:
    215
    Likes Received:
    0
    GPU:
    Gigabyte gtx980 g1
  9. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix
Every time Micron/Hynix (+AMD) made a new memory type worth having, Nvidia was first to implement it and sell it. Nvidia is not touching HBM atm. That tells us a lot.
     
  10. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
    That only happened with GDDR3. GDDR5 was first implemented by AMD.
    NVIDIA has been able to use both GDDR3 and GDDR5 because AMD has been submitting them as JEDEC standards.
Another quote from this very interesting PCWorld article, which includes an interview with the AMD Chief Technology Officer:
HBM has been co-developed by AMD and Hynix since 2010, and it has been a JEDEC standard since 2013. NVIDIA is talking so much about Pascal because the fastest thing they have right now is "fat" Maxwell, e.g. the Titan X, and they don't have a partner that actually makes memory to cooperate with and come up with a design faster. The moment the new AMD cards arrive (and I have a feeling they'll time them on purpose with Windows 10 and a new batch of drivers), they will probably destroy Maxwell dollar for dollar, and NVIDIA will have to drop their prices significantly.
If you think about it, the whole GameWorks thing has been going on intensely since a bit before the announcement of Mantle/DX12/Vulkan. AMD's drivers are worse in DX11, but not in DX12 and Vulkan, which are practically designed around GCN (see the sketch below).
GameWorks is a way for NVIDIA to skew performance in games they know they would probably lose, until they switch architecture.
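To make the DX11 vs. DX12 driver-overhead point concrete, here is a bare-bones sketch (mine, not from the article): in DX12 the app records command lists on its own worker threads and hands the driver a single batched submission. That is exactly the threading work a DX11 driver had to do internally, and where NVIDIA's DX11 driver earns its lead. Stock D3D12 API only; error handling trimmed; link with d3d12.lib:

[code]
// Sketch of the DX12 submission model: parallel recording, one cheap submit.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) return 1;

    D3D12_COMMAND_QUEUE_DESC qd = {};
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qd, IID_PPV_ARGS(&queue));

    const int kThreads = 4;
    ComPtr<ID3D12CommandAllocator> alloc[kThreads];
    ComPtr<ID3D12GraphicsCommandList> list[kThreads];
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&alloc[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  alloc[i].Get(), nullptr,
                                  IID_PPV_ARGS(&list[i]));
    }

    // Each worker records its own list; a real game would record draws here.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back([&, i] { list[i]->Close(); });
    for (auto& w : workers) w.join();

    // One batched submission; the driver no longer serializes per draw call.
    ID3D12CommandList* lists[kThreads];
    for (int i = 0; i < kThreads; ++i) lists[i] = list[i].Get();
    queue->ExecuteCommandLists(kThreads, lists);
    return 0;
}
[/code]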
     
    Last edited: May 23, 2015

  11. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix

We must be reading different sources, then. My Wikipedia source says that, yes, AMD + some insignificant company that develops, manufactures and sells memory were first to come up with it, but Nvidia was first to MASS BUY and then MASS PRODUCE while AMD was sitting on their asses.
Your links show just that. Do you really think that Micron or Hynix would give exclusivity to AMD, which makes up less than 20% of the market? And it clearly reads there: Nvidia was the first to use it in their mass-produced products, not AMD.

so your point was?

Anyway, Nvidia was VERY quick to jump on GDDR3/5, very very quick :) but not on HBM. In fact, they said "we will wait for the second iteration". What does that tell you?
It tells me HBM is crap, that's what it tells me. You can bet that if HBM were that good, nV would have jumped on it and the Titan X would have had it (it's a 2011 project, so there was ample time).

edit: also, don't forget: nV ~70% market share, AMD ~25%. Hynix looks at Nvidia and sees the majority of production going that way. AMD = small fish. AMD helped? I'd really like to read that tech doc :D
     
    Last edited: May 23, 2015
  12. Blackfyre

    Blackfyre Maha Guru

    Messages:
    1,388
    Likes Received:
    391
    GPU:
    RTX 3090
gx-x... with all due respect, go read up more on HBM. The fact that nVidia came late to the party this time around doesn't mean it's going to be "crap"; otherwise they wouldn't have announced they're joining the HBM party at all.

Also, your stats are ridiculous. You say that AMD holds 25% of the market share? That's only PC stats... you know that, right?

    To say that RAM and vRAM manufacturers favour nVidia is ridiculous.

    Read up on how many "next-gen" consoles have been sold so far, and remember that for every single one of those, AMD has sold a video card. For every single PS4 also, add on top of that another 8GB of vRAM.

    THIS perception that AMD has only 25% of the market share is ridiculously stupid because it doesn't factor in the massive market share of the console world.

The frustrating thing is they're beginning to treat PC as secondary because of how much money they're making with the consoles. I've already expressed how that will be detrimental not only to their image, but to their future too. They cannot lose the PC market. If they do, they lose their future, in my opinion.
     
  13. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix
OK, look, Nvidia can't come late to the party. Nvidia IS the party. Now, read back what I've said - Nvidia jumped on all the good memory stuff, jumped right away! It didn't jump on HBM.
Wanna know why? There is another company doing a similar thing, more efficiently. Nvidia said that they will wait that one out. You know what that means? It means what AMD is doing is not good enough for Nvidia to take a s$it over; they will wait it out. Again, they did jump on GDDR3 and 5 and were first to release products with those.
Again, your point is? What? AMD will be the first to fail with HBM, because a 4% power save and a 2% performance gain is... an AMD kinda thing?
     
  14. sammarbella

    sammarbella Guest

    Messages:
    3,929
    Likes Received:
    178
    GPU:
    290X Lightning CFX (H2O)
VBS on the DX11 overhead thread pointed out the weakest (?) point of HBM: max 4 GB.

http://forums.guru3d.com/showpost.php?p=5078315&postcount=642

My thoughts are that the next "gen" of GPUs will need more than this: 8 GB, at least at the high-end level, and in a year or two at the medium level.

I'm not going to buy a GPU bottlenecked at the VRAM level with HBM "Phase 1" (390X/395X).
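The 4 GB cap isn't arbitrary, btw; it falls straight out of the first-gen HBM spec: 1 GB per stack, four stacks on the interposer. Quick back-of-the-envelope with the published HBM1 numbers (mine, not a benchmark):

[code]
// HBM "Phase 1" back-of-the-envelope: capacity = stacks x per-stack capacity.
#include <cstdio>

int main() {
    const int stacks = 4;            // four HBM stacks around the GPU die
    const double gb_per_stack = 1.0; // HBM1: 4 dies x 2 Gbit = 1 GB per stack
    const int bus_width = 1024;      // bits per stack: very wide, low clock
    const double gbit_per_pin = 1.0; // 500 MHz DDR = 1 Gbit/s per pin

    printf("capacity: %.0f GB\n", stacks * gb_per_stack);                    // 4 GB
    printf("bandwidth: %.0f GB/s\n", stacks * bus_width * gbit_per_pin / 8); // 512 GB/s
    return 0;
}
[/code]

So until a second generation raises per-stack capacity, 4 GB is a hard ceiling regardless of the SKU.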
     
  15. Blackfyre

    Blackfyre Maha Guru

    Messages:
    1,388
    Likes Received:
    391
    GPU:
    RTX 3090
    gx-x... Stop pulling random statistics and stupid assumptions out of god knows where.

    If you're pissed off with AMD (because of personal experiences or issues you have), that's understandable, but to say that HBM is "crap" because nVidia didn't do it first, or didn't jump on it straight away, is just ridiculous.
     

  16. xacid0

    xacid0 Guest

    Messages:
    443
    Likes Received:
    3
    GPU:
    Zotac GTX980Ti AMP! Omega
topkek, Nvidia invested in HMC, and that thing is too expensive, so Nvidia jumped to HBM, which Hynix and AMD co-developed.
     
  17. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
You are confusing things. First of all, AMD is a bigger company than NVIDIA. In fact, it almost bought NVIDIA instead of ATI back then, but because the (infamous for being an *******) CEO of NVIDIA wanted to be the CEO of the merged company too, the deal got cancelled.
Hynix is not an insignificant company. It is one of the so-called DRAMurai. Its research partnerships with AMD, IBM and HP have enabled it to be one of the pioneers in memory research, including memristors. NVIDIA doesn't manufacture any hardware; they place orders with manufacturers like TSMC. TSMC and GlobalFoundries are still on 28nm for GPUs, and that's the reason we haven't had significant performance jumps since the days of Tahiti/Kepler.

NVIDIA was the first to mass-produce GDDR3 products, not GDDR5. The only thing I said about GDDR3 was that it was in fact developed by ATI, who made it an open standard. NVIDIA is behind in the memory game because no company wants anything to do with them after what happened to Microsoft and Sony (who still have a bitter taste in their mouths regarding RSX performance). Let's not even mention the patent wars against Qualcomm and Samsung.
AMD, on the other hand, was always a company that tried to form research alliances. They even have a huge patent agreement with Intel. Their partnership with IBM got them to 32nm, their partnership with Samsung will take them to 14nm, and their partnership with Hynix will get them HBM first. Sometimes it pays off to play the relatively "good guy", I guess.
Nobody will refuse to sell HBM to NVIDIA; nobody is that crazy. That doesn't mean that NVIDIA is going to have the same know-how as the people who were involved in the actual research of the thing, hence the delay in adoption. Having the chips in your hands, and being able to implement them and interface them with your hardware, are two very, very different things. NVIDIA has been promising HBM since 2013, but they keep postponing it indefinitely. They were quoting some ridiculous 1TB/sec numbers, which was the usual PR spin, I guess.

    My point is that you said this,
    and it couldn't be further from the truth. You make it sound like HBM is damaged goods or something, while you clearly have no idea what you're talking about.

GDDR3/5 are iterations on the same technology that everybody has had experience with since the days of the GeForce DDR. HBM is a paradigm shift, and NVIDIA had nothing to do with its research. What it tells me is that people smarter than you and me at NVIDIA feel uneasy about interfacing their newer chips with a type of memory they have no experience with.

Titan X is just "fat" Maxwell. It is a 33% enlarged GTX 980 at double the price. It is nothing special like the GTX 750 Ti was (the first iteration of Maxwell), just a honey trap for people who have forgotten that the "Titan" used to be a way to get an NVIDIA card that is not completely gimped in its compute. The Titan X is not even that; it's just a name for people to dump $1,000 on.

    According to the Steam Hardware Survey, NVIDIA holds a 52% share of the Steam userbase, which is around 50 million computers. AMD is at 27%, and the rest is Intel. But that's just Steam, and there are 3 billion PCs on the planet.
Look at the chips sold by the small fish AMD (just graphics chips; PS4, Xbox One and Wii U sales are not included):
[image: graphics chip shipment share chart]
Who would have thought that AMD is selling the same number of GPUs as NVIDIA, and that Intel dwarfs everybody else :D
Oh wait.

The other company that makes anything similar is Intel, who has long-term legal disputes with NVIDIA. And even Intel is switching to HBM, because they have a cross-license patent agreement.
Your performance numbers are completely wrong, and I guess pulled out of your ass like the rest of your post. The performance improvement in the first implementations will be close to 65% in memory operations, and the power savings due to lower operating voltages will be closer to 40%. Rough arithmetic below.
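Here's my own sanity check on that bandwidth figure, using the 290X's public GDDR5 specs as the baseline:

[code]
// Sanity check: announced 4-stack HBM1 vs. an R9 290X-class GDDR5 setup.
#include <cstdio>

int main() {
    double gddr5 = 512 * 5.0 / 8;      // 512-bit bus at 5 Gbit/s per pin = 320 GB/s
    double hbm   = 4 * 1024 * 1.0 / 8; // 4 stacks x 1024-bit at 1 Gbit/s = 512 GB/s
    printf("gain: %.0f%%\n", (hbm / gddr5 - 1) * 100); // ~60%
    return 0;
}
[/code]

That lands at roughly +60% raw bandwidth, right in the neighborhood of the figure above, and HBM does it at 1.3V instead of GDDR5's 1.5V.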
The reason I made this post so big and analytical, btw, is so that someone has a quick link to use in case this idiocy is heard somewhere else again.

I'm gonna wait and see first, to be honest. Since most modern games (with Shadow of Mordor really being the only exception) will be made with a console asset budget in mind, I can't see them needing more than 4GB for 1080p/1440p. The consoles only have 5.5GB available for the whole budget (including "normal" RAM). It would surely be nice to see 8GB versions, though, although I would honestly not have a problem with the 4GB ones, since I play at 1080p/1440p.
     
  18. gx-x

    gx-x Ancient Guru

    Messages:
    1,530
    Likes Received:
    158
    GPU:
    1070Ti Phoenix
Nvidia will jump on the first useful, noticeable improvement. AMD needs that power and heat reduction ASAP; nV does not. If you look at GDDR3 and GDDR5 history, you'd see the pattern. First-gen HBM - crap - because nVidia didn't take it. Love me, hate me, I don't care.
     
  19. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    8,129
    Likes Received:
    971
    GPU:
    Inno3D RTX 3090
At least you know how to make fact-based, convincing arguments. It is wonderful how many people still believe in magic. NVIDIA doesn't have magic hands, dude; they made a bet on a more fixed-function architecture versus a compute one. It paid off in the DX11 era; it probably won't in the DX12 era without software lockouts.
And no, they have no partners for memory research, and that's why they are going to be late on the bandwagon that everybody else is going to be on. I guess GDDR5 was bad because it took NVIDIA a year to adopt it, while the 4870 was destroying what they had at the time. :infinity:
     
  20. mR Yellow

    mR Yellow Ancient Guru

    Messages:
    1,935
    Likes Received:
    0
    GPU:
    Sapphire R9 Fury
Amen, brother. GCN was way ahead of its time and will pay off in DX12.
     
