
So stay under 8 physical NVMe drives and it should be fine?


A typical NVMe SSD has a four-lane PCIe link (or 2+2 for some enterprise drives operating in dual-port mode), so it usually takes only 2 or 3 drives to saturate an 8-lane host link. Putting 8 NVMe SSDs behind a PCIe x8 controller would be a severe bottleneck for sequential transfers, and usually for random reads as well.
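
Back-of-the-envelope, with my own assumptions (PCIe 3.0 rates of roughly 985 MB/s per lane after 128b/130b encoding, and drives fast enough to fill their own links):

    # Rough PCIe link arithmetic, all approximate
    lane_gbs = 0.985                # GB/s per PCIe 3.0 lane after encoding overhead
    host_bw  = 8 * lane_gbs         # x8 controller uplink: ~7.9 GB/s
    drive_bw = 4 * lane_gbs         # typical x4 NVMe drive: ~3.9 GB/s best case

    print(host_bw / drive_bw)       # 2.0 -> two fast drives already fill the x8 uplink
    print(8 * drive_bw / host_bw)   # 4.0 -> eight drives could offer ~4x what the uplink carries

Real drives don't always sustain full link rate, which is why it works out to "2 or 3" in practice.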


I need to think about this for a second.

You’re saying the performance gains stop at two drives with RAID striping. Would RAID10 in a two-stripe, two-mirror layout still bottleneck at 8 total lanes?

I also need to check on the PERC being limited to 8 lanes - no offense - but do you have a source for that?

Edit: never mind on the source, I think you are exactly right [0]: "Host bus type: 8-lane, PCI Express 3.1 compliant"

[0] https://i.dell.com/sites/doccontent/shared-content/data-shee...

To be fair, they have 8GB of NV RAM, so it’s not exactly clear cut how noticeable the bottleneck would be.


I can’t edit my other post anymore, but it’s worse than I thought. I’m not sure the PERC 740 supports NVMe at all. The only examples I can find are S140/S150 software RAID.

No idea if 7.68TB RAID1 over two drives with software RAID is much worse than a theoretical RAID10 over four 3.92TB drives... apparently all the RAID controllers have a tough time with this many IOPS.
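
Very rough link-bandwidth comparison, assuming PCIe 3.0 x4 drives, software RAID with each drive on its own CPU lanes, and reads balanced across mirror copies (all assumptions on my part, not measurements):

    lane_gbs = 0.985                # GB/s per PCIe 3.0 lane, approximate
    drive_bw = 4 * lane_gbs         # ~3.9 GB/s per x4 drive, best case

    raid1_read   = 2 * drive_bw     # ~7.9 GB/s, reads served from both mirrors
    raid1_write  = 1 * drive_bw     # ~3.9 GB/s, every write goes to both drives
    raid10_read  = 4 * drive_bw     # ~15.8 GB/s, four drives readable in parallel
    raid10_write = 2 * drive_bw     # ~7.9 GB/s, two mirrored pairs

    # If the four drives sit behind an x8 controller, reads cap at the uplink:
    raid10_read_x8 = min(raid10_read, 8 * lane_gbs)   # ~7.9 GB/s

So on paper RAID10 only pulls ahead for sequential reads, and not at all through an x8 controller; random IOPS is a different question and seems to hinge on the controller or software stack more than the drive count.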



