Almost all NVMe RAID products—including both of the ones you've linked to—are software RAID schemes. So if you're on Linux and already have access to competent software RAID, the only thing you really need to worry about is getting the drives connected to your system. For two drives, most recent desktop motherboards already have the slots you need, and multi-drive riser cards are unnecessary.
Yes, that's one of the cards that use Broadcom/Avago/LSI "tri-mode" HBA chips (SAS3508 in this case). It comes with the somewhat awkward caveat of making your NVMe devices look to the host like they're SCSI drives, and constraining you to 8 lanes of PCIe uplink for however many drives you have behind the controller. Marvell has a more interesting NVMe RAID chip that is fairly transparent to the host, in that it makes your RAID 0/1/10 of NVMe SSDs appear to be a single NVMe SSD. One of the most popular use cases for that chip seems to be transparently mirroring server boot drives.
A typical NVMe SSD has a four-lane PCIe link, or 2+2 for some enterprise drives operating in dual-port mode. So it usually takes only two or three drives to saturate an 8-lane uplink. Putting 8 NVMe SSDs behind a PCIe x8 controller would be a severe bottleneck for sequential transfers, and usually also for random reads.
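To put rough numbers on that, here's a back-of-envelope sketch. It assumes PCIe 3.0 at roughly 0.985 GB/s of usable bandwidth per lane; Gen4 doubles the constants but doesn't change the shape of the problem:

```python
# Back-of-envelope check: how quickly NVMe drives saturate a PCIe x8 uplink.
# Assumes PCIe 3.0 at ~0.985 GB/s of usable bandwidth per lane (8 GT/s, 128b/130b).

GBPS_PER_LANE = 0.985   # approximate usable throughput per PCIe 3.0 lane
UPLINK_LANES = 8        # lanes between the controller and the host
LANES_PER_DRIVE = 4     # typical NVMe SSD link width

uplink_bw = UPLINK_LANES * GBPS_PER_LANE

for drives in range(1, 9):
    drive_bw = drives * LANES_PER_DRIVE * GBPS_PER_LANE
    note = "bottlenecked" if drive_bw > uplink_bw else "ok"
    print(f"{drives} drives: {drive_bw:5.1f} GB/s of drive links "
          f"behind a {uplink_bw:.1f} GB/s uplink -> {note}")
```

Two drives already match the uplink exactly; anything beyond that is oversubscribed.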
I can’t edit my other post anymore, but it’s worse than I thought: I’m not sure the PERC 740 supports NVMe at all. The only examples I can find are S140/S150 software RAID.
No idea whether a 7.68TB RAID1 over two drives with software RAID is much worse than a theoretical RAID10 over four 3.92TB drives... apparently all the RAID controllers have a tough time with this many IOPS.
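For what it's worth, the theoretical scaling works out like this; treat the per-drive IOPS figures as placeholder assumptions rather than benchmarks, since (as you say) the controller or software stack is usually the real limit:

```python
# Rough on-paper comparison of the two layouts; the per-drive numbers are
# hypothetical assumptions, not benchmarks.

DRIVE_READ_IOPS = 800_000   # assumed 4K random read IOPS per SSD (hypothetical)
DRIVE_WRITE_IOPS = 150_000  # assumed 4K random write IOPS per SSD (hypothetical)

def raid1(n_mirrors, capacity_tb):
    # Reads can be served from any copy; every write hits all copies.
    return {
        "usable_tb": capacity_tb,
        "read_iops": n_mirrors * DRIVE_READ_IOPS,
        "write_iops": DRIVE_WRITE_IOPS,
    }

def raid10(n_drives, capacity_tb):
    # Striped mirrors: reads scale with all drives, writes with half of them.
    return {
        "usable_tb": n_drives * capacity_tb / 2,
        "read_iops": n_drives * DRIVE_READ_IOPS,
        "write_iops": (n_drives // 2) * DRIVE_WRITE_IOPS,
    }

print("RAID1,  2 x 7.68TB:", raid1(2, 7.68))
print("RAID10, 4 x 3.92TB:", raid10(4, 3.92))
```

On paper the RAID10 comes out ahead on both reads and writes, but whether that survives contact with a real controller is exactly the question.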