A Year With NVMe RAID 0 in a Real World Setup




RAID 0 setup 1

There are a lot of misconceptions, along with some accurate information, out there when it comes to combining solid state drives with RAID setups, and what better way to get to the root of them than to build a real-world test setup and find out? I have been running this setup for a year now, and it is time to take a closer look at how well the two drives have performed.

Performing a test like this would already be fun with ordinary SSDs, but it is even more fun with NVMe drives, which on their own are around four times faster than SATA3 drives. Since I have a motherboard that supports two NVMe M.2 drives and has Intel RAID onboard, why not go all out and aim for the best? At least, that was my thinking.

RAID 0 setup 2

The two drives that I used for this test are Samsung SM951 drives with 256GB capacity each. When I started this test, the SM951 was the only M.2 NVMe drive available to consumers, at least at a reasonable price. On top of that, I also scored a great deal on the drives, which makes this whole project even more fun. And I can assure you that it is a lot of fun to enjoy the performance, speed, and responsiveness of a RAID NVMe setup on a daily basis.



Having your drives set up in a RAID array does have a few disadvantages, at least when running Windows. For example, we don’t have direct access to the S.M.A.R.T. information of the individual drives without breaking the RAID. Since I need that information for this article, I’ll simply clone my setup onto another drive, which frees me to destroy the array and examine the drives individually.

Technically, TRIM shouldn’t be a problem for modern drives even in RAID setups, but people still worry about it and about whether it will have any effect on their drives. The same goes for wear-leveling and garbage collection. This long-term, real-world test lets us see the impact for ourselves, which should leave no more doubt about the effectiveness and costs of running an SSD RAID setup. It will also give us a good view of how much data is actually written in a year of use.

Basic Drive Specification

  • Client SSD: MZVPV256HDGL
  • M.2 2280 Form factor
  • PCI Express Gen3 x4 and NVMe 1.1
  • Sequential read/write performance: 2150/1260 MB/s
  • Random read/write performance: 300K/100K IOPS
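In an ideal two-drive RAID 0, sequential throughput scales linearly with the number of drives; real-world results fall short of this because of controller and chipset overhead. A quick sketch of the theoretical ceiling implied by the specs above:

```python
# Naive RAID 0 scaling of the SM951's rated sequential figures.
# Real arrays lose some of this to striping and chipset overhead.
DRIVES = 2
SEQ_READ_MB_S = 2150    # per-drive sequential read (spec above)
SEQ_WRITE_MB_S = 1260   # per-drive sequential write (spec above)

ideal_read = DRIVES * SEQ_READ_MB_S    # theoretical combined read
ideal_write = DRIVES * SEQ_WRITE_MB_S  # theoretical combined write
print(ideal_read, ideal_write)         # 4300 2520 (MB/s)
```

Whether the array can actually reach 4300 MB/s depends on the platform; as a commenter points out below the article, the chipset's DMI uplink can cap it first.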



  • GregAndo

    Another thing that should definitely be mentioned as a con is that the likelihood of complete data loss is significantly increased with RAID0…

    • roadkill612

Which, even if true, is irrelevant if the array is virtualised cache or scratch file space, as the alternative (DRAM) is far riskier, more power hungry, and more expensive.

At worst, apps can recover from any outages. Nothing valuable is lost.

  • SupremeLaw

The Z170 chipset has a DMI 3.0 link with x4 lanes @ 8 GT/s using the 128b/130b “jumbo frame” encoding:
x4 lanes @ 8 GT/s / 8.125 bits per byte = 3.94 GB/second. As such, this is the maximum upstream
bandwidth of M.2 devices connected downstream of that DMI 3.0 link, regardless of how many such
M.2 devices are assigned to a RAID-0 array. As long as Intel caps its DMI link at 32 Gbps, an
x16-lane NVMe RAID controller is needed to exceed the maximum imposed by that DMI link, e.g.
the Highpoint RocketRAID 3840A (x16 edge connector + 4 U.2 ports).

    • roadkill612

Exactly. A single 960 Pro could roughly saturate the chipset’s 4 lanes.

I wonder how a pair of 960 Pros would do on genuinely native motherboard NVMe ports, like Threadripper or Epyc?

  • Rafael Hartke

    Really cool article, Bohs.
    Just a small correction: your “Access Times” graph title on page 6 says “higher is better” when it should read “lower is better”.