New Upcoming ATI/AMD GPUs Thread: Leaks, Hopes & Aftermarket GPUs

Discussion in 'Videocards - AMD Radeon' started by OnnA, Jul 9, 2016.

  1. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
  2. Chastity

    Chastity Master Guru

    Messages:
    826
    Likes Received:
    59
    GPU:
    Sapph Nitro 390 BP
    It's ok, no one will ever own a Vega AIB card cuz miners will buy them all on pre-purchase
     
  3. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    Fine Wine Edition :D

    "

    Named after the Adrenalin rose, Radeon Software Adrenalin Edition continues AMD’s commitment to releasing major driver updates annually. The fully redesigned and supercharged Radeon Software Crimson Edition in 2015 received the highest user satisfaction rating of any AMD software ever, while Radeon Software Crimson ReLive Edition maintained a 90 percent user satisfaction rating for 12 straight months.

    As a reminder, over the past three years Radeon Software has delivered 70 software releases to users, launch-day support for more than 75 games and over 50 new or enhanced features, with more than 250 million downloads across the globe. Radeon Software continues to lead the way in elevating the high-performance gaming and VR experience for gamers, professionals and game developers. "


    -> https://radeon.com/radeonsoftware/feedback

     
    Last edited: Nov 30, 2017
  4. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    Some interesting tidbits were shared during an OverclockersUK live stream with a special guest from AMD: James Prior (Senior Product Manager).

    AMD Vega 11 is integrated into Raven Ridge APU
    The mysterious Vega 11 is not a GPU by itself. It’s a designation for AMD Raven Ridge APUs with 11 Compute Units enabled. James Prior confirmed that Ryzen APUs offer up to 11 Compute Units. So far AMD has only released two mobile APU variants, which feature either 8 or 10 CUs (Vega 8/10 Graphics). That means the chip with 11 Vega Compute Units would be the top-tier Raven Ridge APU. No details about desktop APUs have been shared.
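    For a rough sense of what those CU counts translate to, here is a minimal sketch assuming GCN/Vega's 64 stream processors per CU and a purely hypothetical 1.3 GHz boost clock (actual Raven Ridge clocks weren't given):

        #include <cstdio>

        // Rough FP32 throughput for a GCN/Vega-based iGPU:
        // stream processors = CUs * 64, each doing 2 FLOPs per clock (FMA).
        double fp32_tflops(int compute_units, double clock_ghz) {
            const int sp_per_cu = 64;   // GCN/Vega shader count per Compute Unit
            return compute_units * sp_per_cu * 2.0 * clock_ghz / 1000.0;
        }

        int main() {
            const double clock_ghz = 1.3;        // hypothetical boost clock, illustration only
            const int cu_counts[] = {8, 10, 11}; // Vega 8 / Vega 10 / rumoured Vega 11 graphics
            for (int cus : cu_counts)
                printf("Vega %2d graphics: %3d SPs, ~%.2f TFLOPS at %.1f GHz\n",
                       cus, cus * 64, fp32_tflops(cus, clock_ghz), clock_ghz);
        }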

    AMD RX Vega 56 and 64 to receive an increased supply
    It has been confirmed that RX Vega stocks will be increased shortly. This will allow retailers, such as OverclockersUK, to adjust prices accordingly. Our sources have confirmed that AMD is finally supplying partners with Vega chips, which will allow them to introduce custom SKUs in satisfactory numbers, while reference designs will no longer be produced.

    AMD (Ry)Zen 2 will use AM4 socket
    James Prior reassured viewers that the AM4 socket is here to stay (until 2020). Work on Zen 2 began once the fundamental parts of Zen 1 were already known. The important thing here is to distinguish Zen 2 from a Zen 1 tick-tock-style refresh. The upcoming Ryzen 2000 series is likely to use the refined Zen+ architecture, with a die shrink and architecture optimizations to be expected. So "Ryzen 2", or more precisely Zen 2, might actually arrive with the Ryzen 3000 series, while Ryzen 2000 (or Ryzen 1x50) will use the refined Zen1/Zen+ 12nm process instead.

    If everything goes according to plan, forward compatibility with Zen+ and Zen 2 will be available via a simple BIOS flash on existing AM4 motherboards.

    -> https://videocardz.com/74260/amds-james-prior-talks-ryzen-2-and-vega-11
     
    Last edited: Dec 2, 2017

  5. HK-1

    HK-1 Member

    Messages:
    41
    Likes Received:
    7
    GPU:
    XFX RADEON RX460
    yep, simply waiting for the Adrenalin edition :)
     
    OnnA likes this.
  6. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    AMD Working On GDDR6 Memory Technology For Upcoming Graphics Cards!

    A rumor recently surfaced involving the LinkedIn page of an AMD technical engineer that listed the company as working on a GDDR6 memory controller. I reached out to some people and can confirm that AMD is indeed working on the GDDR6 standard and will be actively using it in future graphics cards. As for the question I am sure everyone will ask next: no, AMD will still be sticking with HBM2 for its high-end next-generation graphics cards in 2018 (aka Navi).


    HBM2 will remain the memory of choice for high-end AMD Radeon GPUs in 2018; GDDR6 graphics cards are in the works

    The leak originally stemmed from a picture of a LinkedIn profile showing an AMD engineer listing GDDR6 memory controller technology in his portfolio. This is usually a fairly obvious way of confirming a leak, but the profile in question was nowhere to be found. This is why I decided to reach out to sources familiar with the matter myself, and I can confirm that AMD is indeed working on GDDR6 memory technology and will be adopting it.
    The next obvious question becomes when and where we will see it used; to that, the only reply I got was that AMD will still be sticking with HBM technology in high-end graphics cards in 2018. Samsung, Micron and SK Hynix all have roadmaps that show their GDDR6 SKUs rolling out by the end of 2017 or early 2018. In either case it looks like video card manufacturers will have access to the incredibly fast memory standard from Q1 2018. Pricing will almost certainly be high in the beginning, and it remains to be seen just how it fares in comparison with HBM.

    AMD has previously teased that Navi will use next-gen memory, so it remains to be seen whether they are talking about HBM2 or the elusive HBM3 standard (which should be faster still than the GDDR6 standard). Edit: they are talking about HBM2. In any case, we do know for sure the company will be sticking with HBM technology for all its high-end graphics cards, indicating that it either will not be rolling out any GDDR6-based cards in 2018, or will keep them limited to the mid-range or the professional side of things.

    Samsung, Micron and SK Hynix have all officially stated that they will be producing the fastest and lowest-power DRAM for next-generation products. Samsung currently lists a 16Gb GDDR6 DRAM in its portfolio, but that can be expanded upon in the future when production hits full swing. With a transfer rate of 16Gbps, the DRAM will be able to pump out 64 GB/s of bandwidth (per chip). The memory operates at just 1.35V.

    Samsung 16Gb GDDR6 Memory – The fastest and lowest-power DRAM for next generation, graphics-intensive applications. It processes images and video at 16Gbps with 64GB/s data I/O bandwidth, which is equivalent to transferring approximately 12 full-HD DVDs (5GB equivalent) per second. The new DRAM can operate at 1.35 volts, offering further advantages over today’s graphics memory that uses 1.5V at only 8Gbps. via Samsung

    Compared to current-generation GDDR5 DRAM, we are looking at both increased bandwidth and transfer speeds (8 Gbps vs 16 Gbps) and lower power consumption (1.5V vs 1.35V). The specifications can easily be compared against current DRAM standards.
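    As a quick sanity check on Samsung's 64 GB/s figure: peak bandwidth is simply the per-pin rate times the interface width. A minimal sketch, assuming the standard 32-bit per-chip interface:

        #include <cstdio>

        // Peak bandwidth per memory chip (GB/s) = per-pin rate (Gbps) * interface width (bits) / 8
        double chip_bandwidth_gbs(double gbps_per_pin, int width_bits) {
            return gbps_per_pin * width_bits / 8.0;
        }

        int main() {
            printf("GDDR5 chip,  8 Gbps x 32-bit: %2.0f GB/s\n", chip_bandwidth_gbs(8.0, 32));   // 32 GB/s
            printf("GDDR6 chip, 16 Gbps x 32-bit: %2.0f GB/s\n", chip_bandwidth_gbs(16.0, 32));  // 64 GB/s, Samsung's figure
        }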

    GDDR5X memory operates at much faster speeds and has practically hit 16 Gbps, as confirmed by Micron themselves. While GDDR5X can hit the same speeds as GDDR6, the latter comes with better optimizations and higher densities, and Samsung is claiming it to be an upgrade over G5X. We are looking at speeds of 12-14 Gbps becoming standard in the graphics industry, while 16 Gbps will ship in the high-performance sector. There's also support for densities of up to 32 Gb, while GDDR5/X max out at 16 Gb.


     
    Maddness likes this.
  7. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    AMD Assures Radeon RX Vega 64 and RX Vega 56 Receiving Increased GPU Supply Shortly

    AMD has confirmed some interesting bits regarding their current and upcoming products in an interview with OverclockersUK.
    These products are based on AMD’s CPU and GPU technologies that have received major updates in 2017 and will continue to get better as the company moves forward.

    James has also assured us that they are going to bring an increased supply of Radeon RX Vega 64 and Radeon RX Vega 56 GPUs to the market. This will have two direct impacts on reference and custom boards. First, the increased supply means we won't be looking at further stock issues, which are a major contributor to the price gouging that affects graphics cards in low supply. Second, the increased supply will put prices back in check, and that will lead manufacturers to bring more custom boards to market that offer better cooling and graphics performance.


    I think custom boards will start popping up in good quantities sooner, and manufacturers that have delayed their launches due to supply issues will now release their cards to market. No time period was mentioned by AMD, but I believe we will be seeing it soon, since the mining craze has diminished a lot during the current quarter and more GPU supply can end up in desktops rather than crypto-mining rigs.
     
  8. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    AMD Develops GDDR6 Controller for Next-generation Graphics Cards, Accelerators

    This may not really come as news; it's more a statement of logical, albeit unconfirmed, facts than something unexpected. AMD is (naturally) working on a GDDR6 memory controller, which it's looking to leverage in its next generations of graphics cards. This is an expected move: AMD is expected to continue using more exotic HBM memory implementations on its top-tier products, but that leaves a lot of space in its product stack that needs to be fed by high-speed memory solutions.
    With GDDR6 nearing widespread production and availability, it's only natural that AMD is looking to upgrade its controllers for the less expensive, easier-to-implement memory solution on its future products.

    The confirmation is still worth mentioning, though, as it comes straight from a principal engineer on AMD's technical team, Daehyun Jun. A LinkedIn entry (since removed) stated that he has been working on a DRAM controller for GDDR6 memory since September 2016. GDDR6 memory brings the advantages of higher operating frequencies and lower power consumption compared with GDDR5 memory, and should deliver higher potential top frequencies than GDDR5X, which is already employed in top-tier NVIDIA cards.
    GDDR6, when released, will start by delivering today's GDDR5X top speeds of roughly 14 Gbps, with a current maximum of 16 Gbps being achievable on the technology.
    This means more bandwidth (up to double over current 8 Gbps GDDR5) and higher-clocked memory. GDDR6 will be rated at 1.35 V, the same as GDDR5X.
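    Scaled up to a whole card, the "up to double" figure is the same arithmetic at a fixed bus width. A minimal sketch, assuming a 256-bit bus (the width of an RX 480/580-class card):

        #include <cstdio>

        int main() {
            const int bus_bits = 256;                        // assumed card-level bus width
            const double rates[] = {8.0, 11.0, 14.0, 16.0};  // GDDR5, GDDR5X today, GDDR5X/GDDR6, GDDR6 top bin
            for (double gbps : rates)
                printf("%4.1f Gbps on a %d-bit bus: %3.0f GB/s\n",
                       gbps, bus_bits, gbps * bus_bits / 8.0); // 256, 352, 448, 512 GB/s
        }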

    SK Hynix, Samsung, and Micron have all announced their GDDR6 processes, so availability should be enough to fill NVIDIA's lineup as well as AMD's budget and mainstream graphics cards, should the company choose to go that route.
    Simpler packaging and PCB integration should also help keep yields up compared with more complex memory subsystems.


     
  9. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    HBM3 Memory Will Double Transfer Rates To 4 GT/s For At Least Twice The Memory Bandwidth – DDR5 Design Specs Aiming To Offer Up To 2x Performance

    In an interesting announcement, Rambus has revealed (via ComputerBase) the specifications of the DDR5 and HBM3 memory standards.
    These are early specs and could change, but for HBM3 at least one spec is locked in, while for DDR5 a basic overview is now available.
    HBM3 will be the successor to HBM2 memory, while DDR5 will be the general successor to DDR4 on the primary PC platform.

    RAMBUS reveals early HBM3 and DDR5 memory specifications


    Before we go any further, let me just say that these specs are at a very early stage, and we don’t expect to see anything before 2019 at the earliest. Keep in mind that AMD will be using HBM2 memory for its high-end GPUs throughout 2018, and even NVIDIA isn’t expected to utilize the new standard anytime soon. The same goes for DDR5 memory: DDR4 still has a couple of years of life left in it, not to mention that its specifications aren’t set in stone yet.

    The numbers revealed by Rambus at least confirm one thing: both memory standards will offer substantial upgrades. HBM3 will offer twice the performance as a bare minimum, while the DDR5 standard will offer anywhere from 1.5x to 2x the performance of DDR4.

    For HBM3, I say twice the performance at minimum because what they have revealed is the new transfer rate, which will be 4 GT/s. Depending on the access width (which has historically been 1024-bit), the final bandwidth could be anywhere from 512 GB/s to 1 TB/s per package. To put this into perspective, HBM2 can reach 256 GB/s with a 1024-bit-wide access and a 2 GT/s transfer rate. This is really welcome because, unlike CPUs, GPUs scale in performance extremely fast between generations, and a bandwidth bottleneck could cripple performance. HBM3 will be the bread and butter of high-end GPUs when it comes out.

    As far as DDR5 goes, they have mentioned that the transfer speeds you are looking at will be between 4.8 GT/s and 6.4 GT/s. Historically, DDR4 memory has averaged around 3.2 GT/s. This represents a performance increase as well, but not as huge as the one between HBM2 and HBM3, since we are looking at an incremental improvement as opposed to a big leap.
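    The headline numbers fall out of the same width-times-rate arithmetic. A minimal sketch, assuming the historical 1024-bit HBM stack interface and a 64-bit DDR channel:

        #include <cstdio>

        // Peak bandwidth (GB/s) = transfer rate (GT/s) * interface width (bits) / 8
        double peak_gbs(double gt_per_s, int width_bits) {
            return gt_per_s * width_bits / 8.0;
        }

        int main() {
            // Per 1024-bit HBM stack
            printf("HBM2 stack @ 2 GT/s: %6.1f GB/s\n", peak_gbs(2.0, 1024));  // 256 GB/s
            printf("HBM3 stack @ 4 GT/s: %6.1f GB/s\n", peak_gbs(4.0, 1024));  // 512 GB/s per package, more with wider access
            // Per 64-bit DDR channel
            printf("DDR4 @ 3.2 GT/s:     %6.1f GB/s\n", peak_gbs(3.2, 64));    // 25.6 GB/s
            printf("DDR5 @ 4.8-6.4 GT/s: %6.1f - %.1f GB/s\n",
                   peak_gbs(4.8, 64), peak_gbs(6.4, 64));                      // 38.4 - 51.2 GB/s
        }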

    Both standards are slated for manufacture on the 7nm process, which means they would automatically benefit from the increased economies and power efficiencies that come with such a shrink. Since we are only now getting into 10nm territory on the PC side of things, and a node usually lasts for two years, I do not expect either of these standards to hit the shelves before late 2019 or 2020. Competitiveness from AMD could catalyze a GPU arms race that sees HBM3 come around a little earlier, but even that eventuality is unlikely to happen before 2019.

    Rambus ended the note with a brief comment on its existing partners, which continue to pay licensing fees to the company to use its PHY solutions. The DDR5 standard shown is consistent with the direction JEDEC is taking. The company already has functional DDR5 silicon undergoing tests, but since the target platform is 7nm, it is unlikely to see the light of day anytime soon.


    Side note: My Fury Nitro-X (Vega can go up to ~700GB/s ;) )

     
  10. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570

  11. JonasBeckman

    JonasBeckman Ancient Guru

    Messages:
    13,586
    Likes Received:
    171
    GPU:
    Sapphire R9 Fury OC
    Theoretically AMD's Fury GPU can do 512 GB/s with its stock HBM configuration.

    In testing however it is held back, delivering roughly 380 GB/s under optimal conditions.

    https://www.reddit.com/r/Amd/comments/7b7gfb/disappointed_in_the_lack_of_fiji_optimizations/

    And this post in particular and the benchmarks it links to.
    https://www.reddit.com/r/Amd/comments/7b7gfb/disappointed_in_the_lack_of_fiji_optimizations/dpgtbjj/

    https://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4

    Overclocking the GPU can help, but the core clocks can't really be pushed much further than some 1100 MHz without a lot of voltage, and HBM overclocking was locked down almost completely in the 17.x drivers.

    For Vega, while it doesn't have the same 4096-bit bus from four HBM stacks (1024-bit per stack, times four), it still has a pretty good bus width (2048-bit, was it?) and the higher clock speeds allow for a bandwidth of around 480 GB/s, which AMD also lists in the specs for the GPU.
    (Even for the fully stacked Vega Frontier Edition and the WX 9100, or whatever that workstation GPU is called, with 16 GB of HBM2 instead of 8 GB.)
    (EDIT: 484 GB/s after re-checking. https://pro.radeon.com/en/product/wx-series/radeon-pro-wx-9100/ — under details, the memory tab has info on clock speed, bus width and bandwidth.)

    From searching around a bit, the 1080 Ti has around 484 GB/s (though it's using a 352-bit bus and GDDR5X memory) and the 1080 on a 256-bit bus offers 320 GB/s, so it'll be interesting to see what GDDR6 can do for AMD.

    A 256-bit bus might still allow for some pretty fast bandwidth; I think AMD mostly sticks to 256-bit to 384-bit for their higher-end GPUs, with the 290X being the oddity with its very complex 512-bit memory controller, but it didn't manage to compete with Nvidia's offerings due to other limitations.
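    All of the figures above follow from bus width times per-pin rate. A minimal sketch, where the per-pin rates are assumptions taken from public spec sheets for each card's stock memory clock:

        #include <cstdio>

        // Theoretical peak bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8
        double peak_gbs(double gbps_per_pin, int bus_width_bits) {
            return gbps_per_pin * bus_width_bits / 8.0;
        }

        int main() {
            struct Card { const char* name; double gbps; int bus; };
            const Card cards[] = {
                {"R9 Fury X   (HBM1,    1.0 Gbps, 4096-bit)",  1.0, 4096},  // 512 GB/s theoretical
                {"RX Vega 64  (HBM2,   1.89 Gbps, 2048-bit)", 1.89, 2048},  // ~484 GB/s (also WX 9100)
                {"GTX 1080 Ti (GDDR5X,   11 Gbps,  352-bit)", 11.0,  352},  // 484 GB/s
                {"GTX 1080    (GDDR5X,   10 Gbps,  256-bit)", 10.0,  256},  // 320 GB/s
                {"GDDR6 card  (hypoth.,  16 Gbps,  256-bit)", 16.0,  256},  // 512 GB/s
            };
            for (const Card& c : cards)
                printf("%-46s %5.0f GB/s\n", c.name, peak_gbs(c.gbps, c.bus));
        }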


    And I'm still trying to find some info on the CSAA modes and settings. Plenty of benchmarks and articles talk about how these work, but there are fewer findings with more in-depth info, though that's much as I expected.
    Perhaps the open-source drivers on the Linux platform could offer some info; it's unfortunate AMD's drivers don't have something like NV Inspector to really poke around with more complex settings for specific profiles and such. :)
     
    Last edited: Dec 7, 2017 at 12:05 PM
  12. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    GDDR6 in Radeons is for Mainstream ;) (up to 199 USD, was 299 USD :D)
    We will still have Mighty HBM2/3 for our Big GPUs (up to 499 USD, was 550-650 USD).
     
  13. Maddness

    Maddness Master Guru

    Messages:
    446
    Likes Received:
    9
    GPU:
    Asus RX480 Strix
    I'm glad AMD is going to use GDDR6. Hell, I'd be more than happy if that was for their high-end cards. As long as we don't need to wait for it like all the HBM delays.
     
  14. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    Delay is the cost of Progress :D
     
  15. Maddness

    Maddness Master Guru

    Messages:
    446
    Likes Received:
    9
    GPU:
    Asus RX480 Strix
    Those delays are costing AMD money though. That's not good.
     

  16. PrMinisterGR

    PrMinisterGR Ancient Guru

    Messages:
    6,864
    Likes Received:
    5
    GPU:
    Sapphire 7970 Quadrobake
    Vega is fine, if sold at MSRP. The only Nvidia card that really makes sense at this point is the 1080 Ti.

    The purchasing order that makes sense is:

    RX570

    Vega 56

    1080Ti

    Everything else has bad price/performance, bad feature sets, or it will plainly be slower in the next six months.
     
  17. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    Making DX12 games in 2017/18: is it easy or hard?


    Here's what user rainslacker from N4G had to say:


    Q: Is DirectX 12 losing steam?
    ->
    "I think this statement is a bit off. I work with DX12, making tools for game engines and developers, and overall I haven't found it to be that difficult to do stuff in it. It has a few nuances which are baffling at times, but that's what keeps me employed. :)

    Anyhow, it's not really any harder to get up and running on DX12; however, since a lot of games are started with DX11, it makes porting those to DX12 a bit less manageable. DX12 actually will run the DX11 code, but it runs it in a kind of compatibility mode which isn't as optimized as DX12 can be.

    There is more work required with native DX12 code, but games that start off in DX12 are just as easy to get up and running as their DX11 counterparts. Optimization, on the other hand, is a different issue. Routine functions are about the same as in DX11, but if the developer uses the low-level APIs, they can be burdensome to actually get working properly. The reason for this is more a hardware issue, as not all hardware makers provide a full set of low-level interfaces for their hardware, so what may be available on one card isn't available on another. This means that for the most part, the low-level stuff is mostly restricted to the minimum requirement for getting DX12 compatibility approval. It also means developers have to make two versions of the same code: one that runs at the higher levels we normally see with DX12 functions, and one that actually takes advantage of specific hardware that may be available. If the dev doesn't do the former, they greatly reduce the number of people who can run the game properly, and that almost never happens, as the lowest common denominator is still a thing.

    -->
    All engine makers have built it into their engines. That doesn't mean that all the hardware out there is using it. High-profile games generally don't always use the engine's default implementation of API code, as that would not be as optimized as the fine-tuning that can be done through developmental experimentation.

    Also, if one is building their own engine, DX12 is much better than DX11, as the interface level is much more fluid. DX11 is much higher-level than DX12, so you kind of have to do workarounds to get certain functions to work the way you may want to, whereas in DX12 the hardware level is much more exposed."
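    To illustrate the "what may be available on one card isn't available on another" point above, here is a minimal sketch of how a renderer can query optional DX12 hardware tiers at runtime and pick a code path accordingly (assumes the Windows 10 SDK and linking against d3d12.lib; error handling trimmed):

        #include <cstdio>
        #include <windows.h>
        #include <d3d12.h>
        #include <wrl/client.h>
        #pragma comment(lib, "d3d12.lib")

        using Microsoft::WRL::ComPtr;

        int main() {
            // Create a device on the default adapter at the DX12 baseline feature level.
            ComPtr<ID3D12Device> device;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                         IID_PPV_ARGS(&device)))) {
                printf("No DX12-capable device found.\n");
                return 1;
            }

            // Ask the driver which optional hardware tiers it exposes; an engine would
            // choose its fast path (or a fallback) based on these values.
            D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
            device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                        &options, sizeof(options));

            printf("Resource binding tier:           %d\n", options.ResourceBindingTier);
            printf("Tiled resources tier:            %d\n", options.TiledResourcesTier);
            printf("Conservative rasterization tier: %d\n", options.ConservativeRasterizationTier);
            printf("Rasterizer-ordered views:        %s\n", options.ROVsSupported ? "yes" : "no");
            return 0;
        }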

    Q: Is DX12 overhyped?

    ->
    "Actually it wasn’t over hyped... DX12 is better than 11 in every conceivable way...

    Microsoft released DX12 while most of the games that just recently released were still in development...

    What publisher/development team is gonna scrap what they're doing just to use a new API??
    They would be wasting millions..."

    ->
    "
    The API itself is fine. The hype came from people expecting to see immediate improvements to games.

    This was never going to happen. It has never happened in the history of graphics APIs... particularly on PC. There are just too many people who don't have the required hardware, or operating system in the case of PC, to warrant developers moving into the new APIs as fast as they might want to.

    It doesn't help that MS itself hinders adoption by restricting these APIs to newer operating systems for no reason other than to push the new operating system. We have seen several DX generations go completely unused because of this, because people don't see the need to update their OS.

    DX12 itself is a very capable API, whose adoption is hindered by OS adoption and, more importantly, hardware implementation. Unlike many prior versions of DX, you actually do need a DX12-capable GPU in order to see the most benefit from what DX12 has to offer. The creation of these GPUs has been slow, because MS itself hasn't seemed to finalize what exactly is low level and what isn't, and it keeps adding new things or changing the requirements for being able to use the low-level aspects of it.
    On top of that, a full-on DX12 GPU requires a departure from DX11, and DX11 still has to be supported in GPUs; making a GPU which supports two completely different and distinct rendering pipelines is expensive, so for the time being every GPU runs DX11 more than DX12, while DX12 runs in a kind of compatibility mode which is more brute force than streamlined design."

    "That, I think is DX12's real downfall. It's restricted to Windows 10."

    "There have been several versions of DX that have gone unused, or barely used, because MS locked it out of older versions of windows.

    What's sad is that there was absolutely no reason to do so, because Win10 runs on what is basically the Windows Vista kernel. Even if it didn't, DX was never so fully integrated into the OS that it was necessary to lock it out. There were changes over time where certain features needed support in the OS and couldn't be implemented outside it, but that was usually when there were major kernel revisions... like from the Win95-based kernels to the Windows Vista-based kernels. So... once in history.

    MS cripples DX adoption more than developers do, to be honest. DX is additionally being crippled because MS can't nail down the low-level implementation which makes it so much better to begin with. It's changed several times already, and those first DX12 GPUs aren't going to have everything available that came just months after their release."
     
  18. Maddness

    Maddness Master Guru

    Messages:
    446
    Likes Received:
    9
    GPU:
    Asus RX480 Strix
    It was always going to take at least a few years for games built from the ground up on DX12 to come out. That is when the true benefits should shine through.
     
    OnnA likes this.
  19. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
  20. OnnA

    OnnA Ancient Guru

    Messages:
    3,692
    Likes Received:
    136
    GPU:
    Nitro Fiji-X HBM 1150/570
    VESA Announces the DisplayHDR v1.0 Specification


    The Video Electronics Standards Association (VESA) today announced it has defined the display industry's first fully open standard specifying high dynamic range (HDR) quality, including luminance, color gamut, bit depth and rise time, through the release of a test specification. The new VESA High-Performance Monitor and Display Compliance Test Specification (DisplayHDR) initially addresses the needs of laptop displays and PC desktop monitors that use liquid crystal display (LCD) panels.

    The first release of the specification, DisplayHDR version 1.0, establishes three distinct levels of HDR system performance to facilitate adoption of HDR throughout the PC market. HDR provides better contrast and color accuracy as well as more vibrant colors compared to Standard Dynamic Range (SDR) displays, and is gaining interest for a wide range of applications, including movie viewing, gaming, and creation of photo and video content. VESA developed the DisplayHDR specification with the input of more than two dozen active member companies. These members include major OEMs that make displays, graphics cards, CPUs, panels, display drivers and other components, as well as color calibration providers. A list of participating companies is available here.

    DisplayHDR v1.0 focuses on LCDs, which represent more than 99 percent of displays in the PC market. VESA anticipates future releases to address organic light emitting diode (OLED) and other display technologies as they become more common, as well as the addition of higher levels of HDR performance. While development of DisplayHDR was driven by the needs of the PC market, it can serve to drive new levels of HDR performance in other markets as well.

    Brand Confusion Necessitates Clearly Defined HDR Standard
    HDR logos and brands abound, but until now, there has been no open standard with a fully transparent testing methodology. Since HDR performance details are typically not provided, consumers are unable to obtain meaningful performance information. With DisplayHDR, VESA aims to alleviate this problem by:
    • Creating a specification, initially for the PC industry, that will be shared publicly and transparently;
    • Developing an automated testing tool that end users can download to perform their own testing if desired; and
    • Delivering a robust set of test metrics for HDR that clearly articulate the performance level of the device being purchased.
    What DisplayHDR Includes
    The specification establishes three HDR performance levels for PC displays: baseline (DisplayHDR 400), mid-range (DisplayHDR 600) and high-end (DisplayHDR 1000). These levels are established and certified using eight specific parameter requirements and associated tests, which include:
    • Three peak luminance tests involving different scenarios - small spot/high luminance, brief period full-screen flash luminance, and optimized use in bright environments (e.g., outside daylight or bright office lighting);
    • Two contrast measurement tests - one for native panel contrast and one for local dimming;
    • Color testing of both the BT.709 and DCI-P3 color gamuts;
    • Bit-depth requirement tests - these stipulate a minimum bit depth and include a simple visual test for end users to confirm results;
    • HDR response performance test - sets performance criteria for backlight responsiveness ideal for gaming and rapid action in movies by analyzing the speed at which the backlight can respond to changes in luminance levels.
    "We selected 400 nits as the DisplayHDR specification's entry point for three key reasons," said Roland Wooster, chairman of the VESA task group responsible for DisplayHDR, and the association's representative from Intel Corp. for HDR display technology. "First, 400 nits is 50 percent brighter than typical SDR laptop displays. Second, the bit depth requirement is true 8-bit, whereas the vast majority of SDR panels are only 6-bit with dithering to simulate 8-bit video. Finally, the DisplayHDR 400 spec requires HDR-10 support and global dimming at a minimum. With this tiered specification, ranging from baseline to high-end HDR performance levels, PC makers will finally have consistent, measurable HDR performance parameters. Also, when buying a new PC, consumers will be able to view an HDR rating number that is meaningful and will reflect actual performance."

    "Developing this specification is a natural expansion of our range of video standards," said Bill Lempesis, VESA executive director. "Moreover, we are the first standards body to develop a publicly available test tool for HDR qualification, utilizing a methodology for the above-mentioned tests that end users can apply without having to invest in costly lab hardware. Most of the tests require only a colorimeter, which many users already own. Ease of testing was a must-have requirement in order to make DisplayHDR a truly viable, consumer-friendly spec."

    New products complying with the DisplayHDR specification will be demonstrated at the Consumer Electronics Show (CES), January 9-12, 2018 at the Las Vegas Convention Center South Hall, DisplayPort booth #21066.
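
    To put the three tiers in concrete terms, here is a minimal, purely illustrative sketch that maps a measured peak luminance to the DisplayHDR level names; real certification of course checks all eight parameter groups (luminance, contrast, gamut, bit depth, response), not just this one number:

        #include <cstdio>

        // Illustrative only: peak luminance (cd/m2, i.e. nits) vs the three DisplayHDR v1.0
        // performance levels. Certification also requires the contrast, color gamut,
        // bit depth and backlight response tests described above.
        const char* displayhdr_tier(double peak_nits) {
            if (peak_nits >= 1000.0) return "DisplayHDR 1000 (high-end)";
            if (peak_nits >= 600.0)  return "DisplayHDR 600 (mid-range)";
            if (peak_nits >= 400.0)  return "DisplayHDR 400 (baseline)";
            return "below DisplayHDR baseline (typical SDR panel)";
        }

        int main() {
            const double samples[] = {270.0, 450.0, 650.0, 1100.0};  // hypothetical panel measurements
            for (double nits : samples)
                printf("%6.0f nits -> %s\n", nits, displayhdr_tier(nits));
        }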
     
