One of the things at Computex that stood out a bit is the new Intel VROC technology, which can be used with certain X299 motherboards.... Computex 2017: VROC Technology Passing 10 GB/sec with M.2
I didn't completely get it - where is the RAID controller itself? In the CPU? Or somewhere on the motherboard?
No, it's on the CPU itself ("Virtual RAID On CPU"), which is why Kaby Lake-X doesn't support it on the same mobo. I hadn't realised that the default unlocked RAID 0 was software based; that's interesting. You won't necessarily need to buy one of these cards - some of the mobos have DIMM.2 slots next to the RAM slots with direct CPU access for two M.2 (Intel-only) SSDs. So to get hardware-accelerated RAID 0 you'd need 2x Intel 600p drives and the unlock key. Not cheap, but not crazy expensive either. There aren't really enough native lanes, IMO, at this point. You'd want at least 52 lanes so you don't need to choose (for example: SLI + Hyper M.2 = compromised; GPU + Red Rocket X + Hyper M.2 = compromised). CDJay
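Back-of-envelope, the lane-budget complaint above works out like this. The per-device lane counts here are my own assumptions for full-width operation of a typical build, not Intel specs:

```python
# Hypothetical lane budget for a 44-lane Skylake-X CPU.
# Device lane counts below are illustrative assumptions.
CPU_LANES = 44

def fits(devices):
    """Return (total lanes wanted, whether the CPU can feed them all at full width)."""
    total = sum(devices.values())
    return total, total <= CPU_LANES

build = {"GPU 1": 16, "GPU 2": 16, "Hyper M.2": 16}  # SLI + Hyper M.2
total, ok = fits(build)
print(f"{total} lanes wanted, {CPU_LANES} available ->",
      "fits" if ok else "compromised")  # 48 > 44, so something drops to x8
```

With 52+ native lanes, the same build would fit without forcing a GPU down to x8.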
Looking forward to the X399 AMD Threadripper benchmarks. Plenty of lanes there on all Threadripper CPU variants. Intel have dropped a clanger: only the i9-7900X ($999) and upwards will have 44 lanes. Competition is a wonderful thing.
Another overly expensive technology from Intel that will be used in 0.001% of sold computers. I know there's never enough bandwidth and speed, but to me it looks like a VERY expensive (compatible board, X-series CPU, M.2 drives, and key) technology with serious drawbacks (CPU load).
So VROC is software RAID 0 running on the CPU at up to 20% CPU utilization. I get that part. What I don't understand is what happens when you use the physical key for "hardware RAID" as stated in the article. Is there an actual hardware RAID controller on the CPU? Or is there a hardware RAID controller on the motherboard, and if so, is it using PCIe lanes to connect to the CPU? If yes, how many lanes, and are those lanes part of the 44 normal lanes for some CPUs?
Dongles are just as common now as they ever were - definitely not a 90's thing. Intel RAID keys are common on server boards, and they have been around for a long time. Since the XE chipsets are basically server parts, I'm surprised they didn't do this earlier. From the slides: "a new storage I/O technology, supported WITHIN the RSTe 5.x driver"; "allows you to use CPU PCIe lanes"; "A CPU PCIe x16 3.0 slot". The bit about the RSTe driver makes me think it's "hardware-assisted/hybrid RAID" and that the CPU is still doing work. It's using 16 of the CPU's PCIe lanes to connect directly. Traditionally, on server boards you were forced to buy the key, or buy a real controller, if you needed RAID. So VROC really is adding something, I guess - it adds the option of crippling your new high-end hardware. Devices like the HighPoint SSD7101 - an x16 card with its own RAID controller and four M.2 sockets - make more sense to me.
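For anyone wondering what the "CPU is still doing work" part looks like: a software RAID 0 layer has to split every I/O across member drives itself. A minimal sketch of that striping logic follows; the 128 KiB stripe size and the code are illustrative assumptions, not the actual RSTe implementation:

```python
# Minimal sketch of software RAID 0 striping (the kind of per-I/O work
# a driver like RSTe does on the CPU). Stripe size is an assumption.
STRIPE = 128 * 1024  # 128 KiB

def stripe_write(data: bytes, n_drives: int):
    """Split a buffer into per-drive chunk lists, round-robin by stripe."""
    drives = [[] for _ in range(n_drives)]
    for i in range(0, len(data), STRIPE):
        drives[(i // STRIPE) % n_drives].append(data[i:i + STRIPE])
    return drives

buf = bytes(4 * STRIPE)          # a 512 KiB write
parts = stripe_write(buf, 2)     # two-drive RAID 0
print([sum(len(c) for c in d) for d in parts])  # -> [262144, 262144]
```

Every buffer gets chopped and dispatched like this on the host CPU, which is where the quoted 20% utilization comes from; a discrete controller like the SSD7101's does this on its own silicon instead.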
So with the key, does it activate the "RAID" function on the CPU without the CPU overhead? Or, because it works directly in hardware, does it just use less CPU than software-based RAID? Well, if it works as well as a real RAID controller, then no complaints, I guess; it saves the money you'd spend on a separate RAID controller.
The Asus M.2 board is simply an x16 PCIe board with four x4 M.2 slots on it. To my knowledge, it has no RAID controller functionality on it. That means something else is doing the RAID. The article states that for RAID 0, the CPU does the RAID work at a penalty of 20% CPU usage. But the article then states that when you buy the "key", it's hardware RAID. Did he really mean to say hardware RAID? My question is: what is doing the hardware RAID? If it's true hardware RAID, then there has to be a RAID chip somewhere. Is it embedded on the CPU? Or is it a chip on the motherboard, and if so, how is it connected (number of lanes, etc.)?
Big no-no to a CPU eater. It probably puts strain on the CPU cache system too: running workloads on 8C/16T and then benchmarking this thing on the remaining 2C/4T of a 10C/20T chip will choke the caches needed by the main workload. Yes, if you run a database from it and fire off some crazy query, this will be vastly superior to the regular disk/SSD subsystem a DB can run on (with the exception of Oracle Coherence In-Memory Data Grid...).
Bazooka - the hardware RAID chip is already on the motherboard. It just can't be used unless you buy the $150 unlock key/code.
Yes, that's the way I read it as well. It is greed. I don't see it being that practical either if it chews 10 to 20 percent of the CPU's power. Keep in mind the CPUs used in conjunction with this aren't a typical i5-7600K; 20 percent CPU use is astronomical on the CPUs it's intended for.
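To put that 20 percent in perspective on the 10C/20T i9-7900X (assuming, purely for illustration, that utilization maps linearly onto whole cores):

```python
# The quoted 20% RAID overhead expressed in whole-core terms on a
# 10-core chip. Linear scaling is an assumption for illustration.
cores = 10
overhead = 0.20
cores_lost = cores * overhead
print(f"~{cores_lost:.0f} of {cores} cores' worth of CPU spent on RAID I/O")
```

That's roughly two cores of a $999 CPU going to storage housekeeping, which is why the figure stings more here than it would on a quad-core part.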
See also the HighPoint SSD7101A: x16, four NVMe M.2 SSDs, no dongle. A HighPoint contact says they are working on making it bootable.
Trying to get 3x 512GB Samsung 960 Pro NVMe drives working in RAID 0 on my BIL's Gigabyte Aorus Gaming 7 motherboard (i9-7900X CPU). No go. For starters, it won't even recognize one of the slots. The manual has a vague comment that the "M2M_32G connector must work with an Intel VROC Upgrade Key to support RAID configuration" - it looks like that slot won't work at all without the VROC key. I have been completely unable to get it to see any drive in that slot (moving sticks around). I managed to get two of the slots to configure as a RAID array, but the Windows 10 installer will not recognize the array. Oddly, there is no "NVMe" menu option in the BIOS, and the drives seem to be recognized in inconsistent ways... iRST seems to absolutely hate these non-Intel, non-Optane drives. I have an odd suspicion that if my BIL bought the VROC key, everything would suddenly work. My own system is a Gigabyte Z170X Gaming 7, and I run 2x Plextor NVMe drives in a RAID 0 array as my boot drive without issue. I also have 30 years of experience building systems. It seems like VROC is just a scam by Intel to pry more money out of enthusiasts' hands. Why should somebody spend ANOTHER $100 after buying a $400 motherboard and a $1000 CPU just to get the promised functionality to work? This is what we've come to.
"My own system is a Gigabyte Z170X Gaming 7 and I run 2x Plextor nvme drives in a RAID-0 array as my boot drive without issue. I also have 30 years of experience building systems." Well, yeah, but isn't that using two M.2 NVMe ports that both hang off the very bandwidth-limited chipset? I.e., both x4 SSDs share just four lanes through the chipset's link to the CPU. If you don't mind maxing out your chipset, it helps with two lesser SSDs (such as yours, versus Samsung's), but a single Samsung 960 Pro can almost max out four lanes on its own, so RAID 0 behind the chipset gains you little. I am not being anti, just asking mostly - clarity for the folks here.
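Rough numbers behind that point. DMI 3.0, the chipset's uplink, is roughly equivalent to a PCIe 3.0 x4 link at about 985 MB/s per lane; the ~3500 MB/s sequential read for the 960 Pro is Samsung's published spec. The rest is back-of-envelope:

```python
# Back-of-envelope chipset bottleneck math. PCIe 3.0 moves ~985 MB/s
# per lane; DMI 3.0 is roughly a x4 link shared by everything behind
# the chipset. 3500 MB/s is the 960 Pro's quoted sequential read.
PCIE3_LANE_MBPS = 985
dmi_ceiling = 4 * PCIE3_LANE_MBPS      # ~3940 MB/s total uplink
one_960_pro = 3500                     # MB/s, sequential read
raid0_pair = 2 * one_960_pro           # theoretical two-drive RAID 0
print(f"DMI ceiling ~{dmi_ceiling} MB/s; one 960 Pro {one_960_pro} MB/s; "
      f"2x 960 Pro RAID 0 wants {raid0_pair} MB/s")
```

So a single 960 Pro already sits just under the uplink's ceiling, and a two-drive array behind the chipset is capped at roughly half its theoretical speed, which is exactly why CPU-attached lanes (DIMM.2, Hyper M.2, VROC) matter for this kind of array.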