Discussion in 'Videocards - AMD Radeon' started by WhiteLightning, Sep 28, 2018.
Eve's Spectrum line of gaming displays offers three models: 1440p at 165 Hz, 1440p at 240 Hz, or 4K at 144 Hz.
The basic model has a 1440p resolution, 450 nits of maximum brightness, and up to a 165 Hz refresh rate.
The fastest model pairs a 1440p resolution with 750 nits of maximum brightness and a 240 Hz refresh rate, while the advanced model features a 3840 x 2160 resolution, a maximum brightness of 750 nits, and a 144 Hz refresh rate.
Each of these monitors uses an 8-bit + AFRC IPS panel from LG with proprietary backlighting, plus a special polarizer that enables the LCD to display 98% of the DCI-P3 color gamut.
All of these monitors feature VESA's Adaptive-Sync variable refresh rate technology and are AMD FreeSync Premium Pro as well as NVIDIA G-Sync Compatible certified.
These displays also support HDR10 and are either VESA DisplayHDR 400 or 600 certified, depending on the model.
Connectivity is one of the best features of Eve's Spectrum monitors: all models feature one DisplayPort input and output, one HDMI input, and two USB-C inputs, one of which supports up to 100 watts of power delivery.
In addition to those ports, these monitors feature three USB 3.1 Gen 2 Type-A ports.
Since these displays are aimed at gamers, many of whom use their own mounting solutions, the monitors don't come with a stand; the stand will cost an additional $99.
These monitors' minimalistic design doesn't give away their gaming nature.
The basic model is currently priced at $349, the 240 Hz model at $489, and the 4K model at $589. These are pre-order prices and require committing to buying the hardware before it ships.
Buyers who want to see some reviews first will have to wait until after Q3 2020 to purchase.
Eve Technology has stated that the prices of these monitors will increase when these monitors hit the market.
I like these specs
FreeSync Range 48Hz - 240Hz (IMO 30-240 is possible).
New AMD certification hints at soon-to-be-released Radeon graphics card
We know that AMD plans to release new RDNA 2 graphics cards in 2020, and the release of these GPUs may arrive sooner than expected, at least according to a new Korean RRA product certification.
South Korea's Radio Research Agency database contains references to many Radeon products.
However, the site lists these under the "ATI Technologies ULC" banner, a holdover from the pre-AMD ATI days.
A recent tweet from @raa_bot, retweeted by @Komachi and later reported on by PCGamesN, has revealed a new ATI product listing dated today, suggesting that a new AMD Radeon product will be released within the next few weeks or months.
AMD has already confirmed that it will reveal more about its product roadmap at its 2020 Financial Analyst Day on March 5th.
Combined with this Radeon listing, that timing suggests AMD's Financial Analyst Day will see the company reveal new Radeon products.
Why RDNA 2 matters
AMD's next-generation RDNA 2 graphics architecture is due to offer gamers new features and further IPC and/or power efficiency improvements over today's RDNA/Navi architecture.
These changes will help Radeon become more competitive with Nvidia and bring PC gaming in line with the feature set of the next-generation consoles from Sony and Microsoft.
RDNA 2 is due to bring support for hardware-accelerated raytracing and variable rate shading (VRS) to AMD's Radeon graphics lineup, cementing real-time raytracing's place in the PC gaming landscape.
If AMD releases a "Big Navi" graphics card, Nvidia will be forced to respond. That said, it is unknown how close Nvidia's next-generation RTX graphics are to market,
or if AMD's next-generation Radeon graphics cards are strong enough to worry team GeForce.
Soon we will have the first 3DMark leaks -> then we will know for sure whether to hype moar
If AMD weren't in bed with Nvidia (just like Dems and Republicans) then I'd hold high hopes, but you can bet your bottom dollar they are always working together to get more out of our collective wallets than ever before. It's been proven in the past, and will repeat itself again...
I am holding out for Big Navi anyway. I want a new GPU, but to spend $800 and then see another, better GPU come out soon after would make me ill.
If and only if users stop buying 800+ € / $ GPUs, prices will go down. As long as they can continue milking the 1%, they will and all the line up will have higher prices.
I see what you're saying, Goiur. These people keep posting like prices are gonna fall from the sky because AMD "might" have a better GPU than Nvidia; they're only fooling themselves. Why would AMD want to sell it for less when Ngeerdia is selling it for more...
I was looking at the 2080 Super; now I'm considering a 2070 Super. It's an itch I just need to scratch.
Samsung announces 3rd Gen Flashbolt HBM2E memory modules with insane bandwidth levels
Samsung's HBM2 memory is continuing to evolve, with the company's latest Flashbolt memory stacks offering users faster data rates and the promise of further performance scaling.
Samsung's latest 16-gigabyte HBM2E modules offer data rates of 3.2 Gbps per pin, which translates to 410 GB/s per memory stack. For context, AMD's Radeon RX Vega 56 delivers 410 GB/s of memory bandwidth over two HBM2 chips; in other words, Samsung's Flashbolt memory offers a 2x per-stack speed boost over the memory used in the RX Vega 56. If these modules were used in a Radeon RX Vega 56-like graphics card with two HBM2 stacks, Samsung's Flashbolt memory would offer users 820 GB/s of bandwidth and 32 GB of VRAM.
Furthermore, Samsung has stated that it has reached speeds of 4.2Gbps to deliver 538GB/s of memory bandwidth per stack of Flashbolt memory.
These speeds are listed as being for "certain future applications", which means that these speeds are unlikely to be seen on shipping products anytime soon. That said, Flashbolt's standard 410GB/s data rates are already impressive.
Samsung plans to start volume production of these HBM2E memory modules in the first half of this year. These memory chips were created using 'through silicon via' (TSV) interconnects, with each HBM2E package containing over 40,000 microbumps. Another factor worth noting is that while Samsung calls its memory HBM2E, JEDEC formally recognises this memory as an HBM2 product, making HBM2E a "marketing name" that denotes the memory's enhanced bandwidth capabilities.
At this time it is unknown when Samsung's Flashbolt HBM2 memory will become readily available within a consumer-grade or enterprise-grade product, but at a minimum, these high-speed HBM2E chips will help inject new life into the HBM memory standard.
If this memory was used to build AMD's Radeon VII GPU, the graphics card would offer 1,640GB/s of memory bandwidth, which would be a staggering increase over the 1,024GB/s that the card shipped with.
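All of the bandwidth figures quoted above follow from the same per-stack formula. Here is a minimal sketch, assuming the standard 1024-bit JEDEC HBM2 interface per stack (the function name is illustrative, not from any vendor API):

```python
# HBM2 aggregate bandwidth: pin speed (Gbps) * bus width (bits) / 8 bits-per-byte,
# multiplied across stacks. The 1024-bit per-stack width is the JEDEC HBM2 standard.

def hbm_bandwidth_gbs(pin_speed_gbps, stacks=1, bus_width_bits=1024):
    """Aggregate bandwidth in GB/s across `stacks` HBM2 stacks."""
    return pin_speed_gbps * bus_width_bits / 8 * stacks

print(hbm_bandwidth_gbs(3.2))            # one Flashbolt stack: 409.6 GB/s (~410)
print(hbm_bandwidth_gbs(4.2))            # boosted stack: 537.6 GB/s (~538)
print(hbm_bandwidth_gbs(3.2, stacks=4))  # Radeon VII layout: 1638.4 GB/s (~1,640)
print(hbm_bandwidth_gbs(2.0, stacks=4))  # stock Radeon VII: 1024.0 GB/s
```

The same formula reproduces the Vega 56 comparison: two stacks at 1.6 Gbps per pin also come to 409.6 GB/s, which is why one Flashbolt stack matches that whole card.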
RVII is old tech; it wouldn't benefit from anything anymore. Vega is old and cannot mature much further, like the majority of the GCN family. RDNA is the future, at least for consumers.
The second half of this year is not what people wanna hear.
Samsung announces HBM2E with up to 538 GB/s
Samsung has announced HBM2E with up to 538 GB/s. The chips are now stacked up to eight layers high, which allows for 16 GB per package.
However, you will probably not see this memory in end-customer cards anytime soon; one can only speculate about candidates.
Even though HBM memory still plays no (major) role on consumer graphics cards and, despite a few attempts, has not caught on, development continues in the background for datacenter accelerators.
Samsung has now announced HBM2E and doubled its capacity: by stacking eight layers of 16-Gbit dies, Samsung achieves 16 GB per package. The memory is manufactured on a 10 nm-class process.
The new design also lets Samsung increase speeds: up to 4.2 Gbit/s per pin, or 538 GB/s per stack, is now possible.
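The capacity and speed figures can be sanity-checked in a couple of lines, assuming eight 16-Gbit dies per stack and the standard 1024-bit per-stack HBM2 interface (variable names are illustrative):

```python
# Capacity: eight stacked dies at 16 Gbit each, converted to gigabytes.
dies_per_stack = 8
gbit_per_die = 16
capacity_gb = dies_per_stack * gbit_per_die / 8      # 16.0 GB per package

# Speed: 4.2 Gbps per pin over a 1024-bit stack interface.
pin_gbps = 4.2
bus_bits = 1024
bandwidth_gbs = pin_gbps * bus_bits / 8              # 537.6, i.e. "up to 538 GB/s"
```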
It won't do much for the average gamer, because at the moment HBM2 barely appears in gaming cards. AMD had such cards in its lineup, but the actual benefit was not there in relation to the price.
For this reason, cards for end customers are currently produced with GDDR memory, where capacity seems to be a somewhat bigger issue than speed.
HBM, meanwhile, is likely to remain reserved for datacenter cards in the medium term; neither Ampere nor Navi appears set to bring it to end customers.
So four stacks will give 2.152 TB/s !!
I hope they use it again.
I use it in Resolve and VRAM goes to 12 GB instantly
Love my HBM on the Vega 64
Three Navi 2x GPUs have been uncovered in macOS' latest beta alongside VRS support
Apple's latest beta for macOS (version 10.15.4 beta 1) includes some useful hints at AMD's next-generation graphics products, revealing three Navi products in the form of Navi 23, Navi 22 and Navi 21.
These notations were uncovered by @_rogame on Twitter, who has also discovered that GPUs that are "Navi2xBased" will also "supportVRS".
This notation seemingly confirms that AMD's next-generation Radeon hardware will support Variable Rate Shading (VRS), a feature which Microsoft will utilise as part of its next-generation Xbox Series X console.
With these notations in mind, we can confirm that Apple plans to continue utilising AMD's graphics hardware in its upcoming Mac products, though this is unsurprising given the company's distaste for Nvidia.
Apple already uses AMD's existing Navi products in the MacBook Pro and Mac Pro, which makes it unsurprising that Apple plans to utilise AMD's future Radeon graphics cards.
Apple will be particularly interested in AMD's planned high-end Radeon Navi products, as this will enable Apple to deliver higher-end desktop systems to its customers.
AMD Radeon Instinct MI100 With Arcturus GPU Spotted – 32 GB HBM2 Memory, 200W TDP In Early Prototype
AMD's upcoming Radeon Instinct MI100 HPC accelerator which would feature the Arcturus GPU has been spotted by Komachi. The existence of the AMD Arcturus GPU was confirmed all the way back in 2018 and two years later, we are finally starting to get details regarding the specifications for AMD's next HPC/AI accelerator.
AMD Arcturus GPU:
The "Arcturus" codename comes from the red giant star that is the brightest in the constellation Boötes and among the brightest stars visible from Earth.
Like Vega and Navi, which are also among the brighter stars in the night sky, the naming scheme dates back to the creation of RTG, whose founding father,
Raja Koduri (ex-AMD RTG President), put a lot of emphasis on bright stars when the group first introduced Polaris.
Previously, we have seen support for the Arcturus GPU added to HWiNFO, in particular the XL variant. To our surprise, the newly leaked variant, 'D34303', is also based on the XL die and would go on to power the Radeon Instinct MI100.
The information for this part is based on a test board, so the final specifications will likely differ, but here are the key points:
Based on Arcturus XL GPU
Test Board has a TDP of 200W
Up To 32 GB HBM2 Memory
HBM2 Memory Clocks Reported Between 1000-1200 MHz
AMD MI100 HBM2 D34303 A1 XL 200W 32GB 1000M.
— 比屋定さんの戯れ言@Komachi (@KOMACHI_ENSAKA) February 7, 2020
The Radeon Instinct MI100 test board has a TDP of 200W and is based on the XL variant of AMD's Arcturus GPU. The card also features 32 GB of HBM2 memory with pin speeds of 1.0 - 1.2 GHz.
The MI60, in comparison, has 64 CUs with a TDP of 300W; clock speeds are reported at 1200 MHz (base clock), while the memory operates at 1.0 GHz over a 4096-bit bus interface, pumping out over 1 TB/s of bandwidth.
There's a big chance that the final design of the Arcturus GPU could feature Samsung's latest HBM2E 'Flashbolt' memory, which offers 3.2 Gbps speeds for up to 1.5 TB/s of bandwidth.
It is also mentioned that the Arcturus XL GPU could be a single huge monolithic die and not a chiplet based design like AMD's Zen 2 based Ryzen CPU lineup.
The naming of the Radeon Instinct MI100 itself hints at its absolute performance metrics, which would be around 100 TOPS of INT8. That's roughly a 66% increase in INT8 (AI/DNN) compute horsepower over the MI60.
Similarly, FP16 compute would be rated at around 50 TFLOPs, with 25 TFLOPs of FP32 and 12.5 TFLOPs of FP64.
The extra GPU horsepower could come through an updated graphics architecture, much higher clocks, or more CUs, with the last being the most likely assumption.
We have seen only a few details, speculative at best, such as the GPU cache info that is part of the Virtual CRAT (vCRAT) size. The GPU cache correlates with the CU count, and in the case of the AMD Arcturus GPU the cache size has been increased along with the CU count, from 64 to 128. That is twice as many CUs as Vega 10, which would give us 8192 stream processors if AMD is using 64 stream processors per CU, as in its current and modern-day GPU designs.
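As a rough sketch of the arithmetic above, assuming 64 stream processors per CU and 2 FMA operations per SP per clock; note that the clock speed here is back-calculated from the rumoured ~25 TFLOPs FP32 figure, not a leaked value:

```python
# CU count (128) and 64 SPs/CU are from the speculation quoted above;
# the clock is a hypothetical figure chosen to land on ~25 TFLOPs FP32.

cus = 128
sps = cus * 64                        # 8192 stream processors

def peak_tflops(sps, clock_ghz):
    # peak throughput = SPs * 2 ops/clock (fused multiply-add) * clock (GHz),
    # which yields GFLOPs; divide by 1000 for TFLOPs
    return sps * 2 * clock_ghz / 1000

clock_ghz = 1.526                     # hypothetical, back-calculated
fp32 = peak_tflops(sps, clock_ghz)    # ~25 TFLOPs
fp16 = fp32 * 2                       # ~50 TFLOPs (double-rate packed math)
int8 = fp32 * 4                       # ~100 TOPS
fp64 = fp32 / 2                       # ~12.5 TFLOPs (1:2 FP64 rate, as on MI60)
```

Each precision step simply doubles or halves throughput, which is why the 100/50/25/12.5 figures hang together.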
While Arcturus is a Vega derivative, it's also a custom design solely for the HPC segment. This way, AMD can focus on parallel developments for the gaming/consumer segment and the HPC market which consists of AI/DNN and datacenter customers.
Just a few days ago, some interesting speculation based on the new configuration for the Big Red 200 supercomputer was posted by Dylan522p who suggests that NVIDIA's next-generation Ampere GPU based HPC parts could potentially feature up to 18 TFLOPs of FP64 compute.
That would be almost a 50% lead over the Instinct MI100, but AMD has proved that it can offer more FLOPs at a competitive price, so maybe that is where Arcturus will be targeting.
There's no word on when Arcturus would land, but AMD has hinted at an Instinct product later this year.
THX to WCCFtech
Spoiler: More Info
MI100 = 100 TFLOPS (overall).
Arcturus' debut as a Radeon Instinct product follows the pattern of AMD debuting new big GPUs as low-volume/high-margin AI-ML accelerators first, followed by Radeon Pro and finally Radeon client graphics products. Arcturus is not "big Navi," rather it seems to be much closer to Vega than to Navi, which makes perfect sense given its target market.
AMD's Linux sources mention "It's because Arcturus has not 3D engine", which could hint at what AMD did with this chip: take Vega and remove all 3D raster graphics ability, which shaves a few billion transistors off the silicon, freeing up space for more CUs.
For gamers, AMD is planning a new line of Navi 20-series chips leveraging 7 nm EUV for launch throughout 2020. Various higher-ups at AMD, including its CEO, publicly hinted that a big client-segment GPU is in the works, and that the company is very much interested at taking another swing at premium 4K UHD gaming.
Clocks are very low. It won't even beat the 2080 Ti, let alone Ampere cards.
This chip has no 3D raster engines; it's headless and can't be used for gaming. It's also the same architecture as Vega with heavy modifications (speculation from BIOS info); I coined it GCN 6.0 (unofficial). Gaming cards will be Navi.
Khronos will discuss "standardised ray tracing" at GDC - AMD and Nvidia will be there
To date, only one retail game has delivered raytracing support through the Vulkan API: Wolfenstein: Youngblood.
If you want to include more games in this list, you'll need to include Vulkan-based projects like Quake II RTX and other non-retail releases.
As it stands, the Vulkan API lacks standardised support for raytracing. Yes, Nvidia has released RTX-specific extensions for Vulkan raytracing, but this support is hardware-exclusive and lacks the multi-platform/multi-vendor raytracing support that Microsoft's DXR (DirectX Raytracing) delivers.
At GDC 2020, the Khronos Group plans to discuss "Ray Tracing in Vulkan" with engineers from both AMD and Nvidia.
That's right: AMD and Nvidia. Standardised ray tracing support will require support for multiple hardware vendors, and AMD's Radeon Technologies Group plans to enter the ray tracing arena soon.
Sadly, we do not know what form Khronos' ray tracing implementation will take. Still, given Khronos' recent moves with Vulkan 1.2, we guess that Vulkan will closely align with Microsoft and its DXR implementation.
Vulkan 1.2 already supports HLSL (DirectX's Shading Language) with support for up to Shader Model 6.2.
Support for Shader Model 6.3 will bring with it support for DXR HLSL code, and this code should be usable with Vulkan's planned raytracing implementation.
Why align so closely with Microsoft? The simple answer is that multi-platform game releases will likely come to Microsoft's next-generation console, the Xbox Series X, and with that comes the need to utilise DirectX 12.
Aligning with Microsoft with regards to raytracing will make it easier for developers to utilise their existing code with Vulkan, or create new code which will function in both DirectX 12 and Vulkan.
At this time, it is unknown when official ray tracing support will come to Vulkan, though we should expect to hear more at GDC.
THX -> https://www.overclock3d.net/news/so...acing_at_gdc_-_amd_and_nvidia_will_be_there/1
Khronos already decided they are going with nvidia's vkraytracing - it's being adapted for AMD.
Good, they need to work together for us
There is an issue with this: hardware RT is not the way. It needs to be implemented in software and work with all the hardware available (CPU, GPU and RAM); only having a GPU that is RT capable is a side step.