
Almost all NVMe RAID products—including both that you've linked to—are software RAID schemes. So if you're on Linux and already have access to competent software RAID, you should only concern yourself with what's necessary to get the drives connected to your system. In the case of two drives, most recent desktop motherboards already have the slots you need, and multi-drive riser cards are unnecessary.
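For what it's worth, here is a minimal sketch (assuming a Linux box with sysfs) of the "get the drives connected" part: it just reports each NVMe controller's negotiated PCIe link so you can confirm every drive actually got its x4 lanes before layering software RAID (e.g. mdadm) on top.

    # Rough sanity check, assuming Linux sysfs; paths may differ on other setups.
    # Prints each NVMe controller's negotiated PCIe link width and speed.
    import glob
    import os

    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        pci_dev = os.path.join(ctrl, "device")  # symlink to the underlying PCI function
        try:
            with open(os.path.join(pci_dev, "current_link_width")) as f:
                width = f.read().strip()
            with open(os.path.join(pci_dev, "current_link_speed")) as f:
                speed = f.read().strip()
        except OSError:
            width, speed = "?", "unknown"  # e.g. fabric-attached or virtual controllers
        print(f"{os.path.basename(ctrl)}: x{width} @ {speed}")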


PERC H740P controllers in Dell servers, IIRC, are hardware RAID for the flex port U.2 and backplane PCIe NVMe drives.


Yes, that's one of the cards that use Broadcom/Avago/LSI "tri-mode" HBA chips (SAS3508 in this case). It comes with the somewhat awkward caveat of making your NVMe devices look to the host like they're SCSI drives, and constraining you to 8 lanes of PCIe uplink for however many drives you have behind the controller. Marvell has a more interesting NVMe RAID chip that is fairly transparent to the host, in that it makes your RAID 0/1/10 of NVMe SSDs appear to be a single NVMe SSD. One of the most popular use cases for that chip seems to be transparently mirroring server boot drives.
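To make the "looks like SCSI" caveat concrete, a small sketch (again assuming Linux sysfs) that lists whole block devices and how they are presented to the host: behind a tri-mode HBA an NVMe drive enumerates as an sdX SCSI disk, while a directly attached or transparently bridged drive shows up as a native nvmeXnY namespace.

    # Illustrative only, assuming Linux sysfs; classifies block devices by how
    # they are presented to the host, not by what the physical media is.
    import os

    for dev in sorted(os.listdir("/sys/block")):
        if dev.startswith("nvme"):
            kind = "native NVMe namespace"
        elif dev.startswith("sd"):
            kind = "SCSI presentation (SATA/SAS, or NVMe behind a tri-mode HBA)"
        else:
            continue  # skip md arrays, loop devices, etc.
        try:
            with open(f"/sys/block/{dev}/device/model") as f:
                model = f.read().strip()
        except OSError:
            model = "?"
        print(f"{dev:12s} {kind:55s} model={model}")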


So stay under 8 physical NVMe drives and it should be fine?


A typical NVMe SSD has a four-lane PCIe link, or 2+2 for some enterprise drives operating in dual-port mode. So it usually only takes 2 or 3 drives to saturate an 8-lane bottleneck. Putting 8 NVMe SSDs behind a PCIe x8 controller would be a severe bottleneck for sequential transfers and usually also for random reads.
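Rough numbers, assuming PCIe 3.0 (~0.985 GB/s of usable bandwidth per lane after 128b/130b encoding) and x4 drives behind an x8 controller uplink:

    # Back-of-the-envelope sketch; assumes PCIe 3.0 and ignores protocol overhead
    # beyond line encoding, so treat the numbers as approximations.
    GB_PER_LANE = 0.985      # ~usable GB/s per PCIe 3.0 lane
    UPLINK_LANES = 8         # controller-to-host link
    LANES_PER_DRIVE = 4      # typical NVMe SSD

    uplink_bw = UPLINK_LANES * GB_PER_LANE
    drive_bw = LANES_PER_DRIVE * GB_PER_LANE

    for drives in range(1, 9):
        aggregate = drives * drive_bw
        host_sees = min(aggregate, uplink_bw)
        note = "saturates the x8 uplink" if aggregate >= uplink_bw else "drive-bound"
        print(f"{drives} drive(s): ~{aggregate:5.1f} GB/s at the drives, "
              f"~{host_sees:4.1f} GB/s at the host ({note})")

Two drives already sit at the x8 ceiling; anything beyond that is bandwidth the host can never see.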


I need to think about this for a second.

You’re saying the performance gains stop at two drives with RAID striping. Would RAID 10 with two stripes of two mirrors still bottleneck at 8 total lanes?

I also need to look into the PERC being limited to 8 lanes - no offense, but do you have a source for that?

Edit: never mind on the source, I think you are exactly right [0]: "Host bus type 8-lane, PCI Express 3.1 compliant"

[0] https://i.dell.com/sites/doccontent/shared-content/data-shee...

To be fair, they have 8GB of NVRAM cache, so it’s not exactly clear cut how obvious a bottleneck would be.
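A rough way to see what the 8GB cache does and does not buy (hypothetical rates, not measurements): writes still have to cross the x8 host link to reach the cache, so it cannot raise that ceiling, and when the drives behind the controller are the slower side it only absorbs the mismatch for a few seconds.

    # Illustrative sketch with made-up rates; only the 8GB cache size comes from
    # the data sheet. Shows how long the cache can absorb a write burst.
    cache_gb = 8.0         # controller NV cache
    ingress_gbps = 7.9     # hypothetical: host writing at roughly PCIe 3.0 x8 line rate
    drain_gbps = 3.0       # hypothetical: sustained write rate of the drives behind it

    mismatch = ingress_gbps - drain_gbps
    if mismatch <= 0:
        print("Drives keep up; the cache never fills.")
    else:
        print(f"Cache absorbs the burst for ~{cache_gb / mismatch:.1f} s, "
              f"then writes throttle to ~{drain_gbps} GB/s.")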


I can’t edit my other post anymore, but it’s worse than I thought. I’m not sure the PERC 740 supports NVMe at all; the only examples I can find are S140/S150 software RAID.

No idea if a 7.68TB RAID 1 over two drives with software RAID is much worse than a theoretical RAID 10 over four 3.92TB drives... apparently all the RAID controllers have a tough time with this many IOPS.
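For a first-order comparison, classic RAID math only (mirrors can spread reads across all members, every write hits every copy; controller/CPU limits and cache effects are ignored, and the per-drive IOPS figures below are placeholders, not specs):

    # Theoretical ceilings only; per-drive IOPS are illustrative placeholders.
    per_drive_read_iops = 800_000    # hypothetical 4K random read IOPS per SSD
    per_drive_write_iops = 150_000   # hypothetical 4K random write IOPS per SSD

    layouts = {
        "RAID 1,  2 x 7.68TB": {"drives": 2, "stripes": 1},
        "RAID 10, 4 x 3.92TB": {"drives": 4, "stripes": 2},
    }

    for name, l in layouts.items():
        reads = l["drives"] * per_drive_read_iops     # reads balanced across all members
        writes = l["stripes"] * per_drive_write_iops  # each write lands on every mirror
        print(f"{name}: ~{reads:,} read IOPS, ~{writes:,} write IOPS")

On paper the four-drive RAID 10 doubles both read and write ceilings, but whether the controller (or the md RAID threads) can actually push that many IOPS is a separate question.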



