A Year With NVMe RAID 0 in a Real World Setup




Introduction


RAID 0 setup 1

There are a lot of misconceptions, and some correct information, out there when it comes to combining solid state drives with RAID setups, and what better way to get to the root of them than to build a real-world test setup and find out? I have been running this setup for a year now, and it is time to take a closer look at how well the two drives have performed.

A test like this would already be fun with ordinary SSDs, but it is even more fun with NVMe drives that by themselves are around four times faster than SATA3 drives. Since I have a motherboard that supports two NVMe M.2 drives and has onboard Intel RAID, why not go all in and aim for the best? At least, that was my thinking.

RAID 0 setup 2

The two drives that I used for this test are Samsung SM951 drives with a capacity of 256GB each. When I started this test, the Samsung SM951 was the only M.2 NVMe drive available to consumers, at least at a reasonable price. On top of that, I scored a great deal on the drives, which makes the whole project even more fun. And I can assure you that it is a lot of fun to enjoy the performance, speed, and responsiveness of an NVMe RAID setup on a daily basis.

Photo: the drives, top side

Having your drives set up in a RAID does have a few disadvantages, at least when running Windows. For example, we don’t have direct access to the S.M.A.R.T. information of the individual drives without breaking the array. Since I will need to do exactly that for this article, I’ll simply clone my setup onto another drive, which will allow me to free up the RAID array, destroy it, and examine the drives individually.

Technically, TRIM shouldn’t be a problem with modern drives even in RAID setups, but people still worry about it and whether it will have any effect on their drives. The same goes for wear-leveling and garbage collection. This long-term, real-world test lets us see the impact for ourselves, which should leave no more doubt about the effectiveness and the costs of running an SSD RAID setup. It will also give us a good view of how much data is actually written within a year of use.
Photo: the drives, bottom side
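
As a concrete example of that last point: once the array is broken and the drives show up individually, the total host writes can be read straight from each drive’s NVMe S.M.A.R.T. log. Below is a minimal sketch, assuming smartmontools is installed and the drive appears as /dev/nvme0 (the device name and JSON field names are assumptions based on how recent smartctl versions report NVMe health data):

```python
import json
import subprocess

def total_written_tb(device="/dev/nvme0"):
    """Estimate total host writes for one NVMe drive from its S.M.A.R.T. log."""
    # smartctl -j prints the report as JSON; typically needs root privileges.
    result = subprocess.run(["smartctl", "-j", "-a", device],
                            capture_output=True, text=True)
    health = json.loads(result.stdout)["nvme_smart_health_information_log"]
    # NVMe counts "Data Units Written" in units of 1,000 x 512-byte blocks.
    return health["data_units_written"] * 512_000 / 1e12

if __name__ == "__main__":
    print(f"{total_written_tb():.2f} TB written")
```

On Windows, tools such as CrystalDiskInfo expose the same counters once the drives are out of the array.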

Basic Drive Specification

  • Client SSD: MZVPV256HDGL
  • M.2 2280 Form factor
  • PCI Express Gen3 x4 and NVMe 1.1
  • Sequential read/write performance: 2,150/1,260 MB/s (see the scaling estimate after this list)
  • Random read/write performance: 300K/100K IOPS
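
On paper, striping two of these drives should roughly double the sequential figures listed above. Here is a quick back-of-the-envelope estimate, a plain Python sketch using the rated numbers and a nominal ~550 MB/s ceiling for a SATA3 SSD as the comparison point (that ceiling is an assumption, not a measured figure from this test):

```python
# Rated sequential throughput of a single SM951 (MB/s) and a typical SATA3 ceiling.
sm951_read, sm951_write = 2150, 1260
sata3_limit = 550  # roughly what a good SATA3 SSD tops out at in practice

# Ideal RAID 0 scaling: two drives with reads and writes striped across both.
raid0_read, raid0_write = 2 * sm951_read, 2 * sm951_write

print(f"Single drive vs SATA3: {sm951_read / sata3_limit:.1f}x the sequential reads")
print(f"RAID 0 (theoretical):  {raid0_read}/{raid0_write} MB/s read/write")
```

In practice, two drives hanging off the chipset share its uplink to the CPU, which caps the combined figure; that limit comes up in the comments below.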



Comments

11 Responses to “A Year With NVMe RAID 0 in a Real World Setup”
  1. GregAndo says:

    Another thing that should definitely be mentioned as a con is that the likelihood of complete data loss is significantly increased with RAID0…

    • roadkill612 says:

      Which, even IF true, is irrelevant if the array is virtualised cache or scratch file space – as the alternative, DRAM, is way riskier (and more power-hungry & expensive).

      At worst, apps can recover from any outages. Nothing valuable is lost.

    • Steve says:

      Considering the minuscule failure rate of these drives, doubling your chances still leaves you with better odds than a hard drive.

  2. SupremeLaw says:

    The Z170 chipset has a DMI 3.0 link with x4 lanes @ 8 GT/s and 128b/130b encoding: x4 lanes @ 8 GT/s / 8.125 bits per byte = 3.94 GB/second. As such, this is the maximum upstream bandwidth for M.2 devices connected downstream of that DMI 3.0 link, regardless of the sheer number of such M.2 devices assigned to a RAID 0 array. As long as Intel caps its DMI link at 32 Gbps, an x16-lane NVMe RAID controller is needed to exceed the maximum imposed by that DMI link, e.g. the Highpoint RocketRAID 3840A (x16 edge connector + 4 U.2 ports).
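
    For reference, that ceiling works out as follows, a minimal Python sketch of the same arithmetic (8 GT/s per lane and the 128b/130b encoding are the PCIe Gen3 parameters the comment refers to):

    ```python
    # DMI 3.0: four PCIe Gen3 lanes at 8 GT/s with 128b/130b line encoding.
    lanes = 4
    raw_gtps = 8.0              # gigatransfers per second per lane (1 bit each)
    payload = 128 / 130         # usable fraction after 128b/130b encoding

    usable_gbit = lanes * raw_gtps * payload   # ~31.5 Gb/s
    usable_gbyte = usable_gbit / 8             # ~3.94 GB/s

    print(f"DMI 3.0 usable bandwidth: {usable_gbyte:.2f} GB/s")
    ```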

    • roadkill612 says:

      Exactly. A single 960 Pro could just about saturate the chipset’s 4 lanes.

      I wonder how a pair of 960 Pros would go on honest native motherboard NVMe ports like Threadripper or EPYC?

  3. Rafael Hartke says:

    Really cool article, Bohs.
    Just a small correction: your “Access Times” graph title on page 6 says “higher is better” when it should read “lower is better”.

  4. JSizzle says:

    Interesting article, but I am not convinced RAID is a good idea with SSD/flash memory. As you stated: “No TRIM… wear-leveling… garbage collection…” etc.
    Those drives work with a “controller”, which is basically a little mini-computer with firmware, and everything I have researched says that those controllers cannot wear the drive evenly in RAID 0… Not to mention that I believe you get zero real-world benefits with your memory controller both saturated and tasked with playing bit-traffic-controller.

    Also, you said your computer was heavily used, and this is a bit relative or subjective, but my math says your drives were LIGHTLY used. I used my own (in my opinion MODERATE) numbers as a baseline of comparison. I do not consider myself someone who puts a lot of data on my drives.

    I downloaded CrystalDiskInfo and checked my less than 4-year-old Samsung EVO SSDs. They showed 27765 hours… (true 24×7 operation) with over 4 times the data written to the drive, and all I do is play Skyrim, Witcher 3, and Planet Coaster… and watch Netflix. I also dealt with a couple of reimages and a bunch of wedding and honeymoon pics, but these were anomalies. The point is: your test is, respectfully, far too anecdotal, and is based on too limited a data set to draw conclusions. Your tools cannot account for all the NAND cells and determine how much life remains (whether wear was evenly spread), and with your capacities and the moderate amount of data you’ve written in (less than) 1 year of actual (moderate) use… it would be irresponsible, IMO, for anybody to draw conclusions about putting these expensive drives in RAID.

    If anybody reads my comments, and sees the comments of others, please DO NOT follow this example. RAID and NVMe/SSDs should NOT be mixed. It WILL inevitably reduce the lifecycle and/or performance of the drive relative to its life without RAID. You may not see it in a year or even two (these things are rated to LAST for 171 years of light use – 1.5 million hours mean time between failures, MTBF)… But if any of these really ever last a lifetime (doubtful), it will NEVER happen in RAID.

    Also ask yourselves: WHY (!!!) put this in RAID? Even with older HDDs, RAID had no real-world benefits in things like FPS, game-loading times, or system responsiveness at startup, and Windows boot times (due to the customary RAID screen during POST) would typically be slower. RAID was only good for reading huge files, which I admit I rarely ever did. Just use these things in AHCI, by themselves. A Samsung 960 EVO M.2 at 3,200 MB/s is over 6 times faster than most SSDs. It would make great bragging rights as a boot drive. In an all-SSD system, I would opt for a cheaper standard SSD for storage and keep the NVMe drive as my boot drive only, at which point 256GB/500GB max would do the trick.

  5. Spikbebis says:

    Having multiple drives is a headache – getting users to save in different places just doesn’t work…
    Reading big files happens here; I’ve got some folks working with point clouds, and there is a lot of data to shuffle there =)

  6. Shiv says:

    Why do you say TRIM is not important? There is no way a drive can tell if a data block has been erased without TRIM. Because you write in 4K pages but an SSD writes in larger multiples of that, if a block is marked as having data on it, the whole data block has to be read into memory, merged with the new data, and written back. If the data block has been flagged as empty via TRIM, then you do not need to do the READ at all.

    There is literally NOTHING ELSE that replaces TRIM functionality. Without TRIM, you WILL get degraded performance over time/use. Many people don’t understand that garbage collection is not the same thing.

    See articles like https://searchstorage.techtarget.com/definition/TRIM for more info.

    • Bohs Hansen says:

      Where does it say that TRIM isn’t important? It says it shouldn’t be a problem with modern drives; that is the only thing it says about TRIM. Why isn’t it a problem? Because the RAID engine in most modern controllers allows TRIM to work just fine in RAID setups too.


