Yes, that's one of the cards that use Broadcom/Avago/LSI "tri-mode" HBA chips (SAS3508 in this case). It comes with the somewhat awkward caveat of making your NVMe devices look to the host like they're SCSI drives, and constraining you to 8 lanes of PCIe uplink for however many drives you have behind the controller. Marvell has a more interesting NVMe RAID chip that is fairly transparent to the host, in that it makes your RAID 0/1/10 of NVMe SSDs appear to be a single NVMe SSD. One of the most popular use cases for that chip seems to be transparently mirroring server boot drives.
A typical NVMe SSD has a four-lane PCIe link, or 2+2 for some enterprise drives operating in dual-port mode. So it usually only takes 2 or 3 drives to saturate an 8-lane uplink. Putting 8 NVMe SSDs behind a PCIe x8 controller would be a severe bottleneck for sequential transfers, and usually for random reads as well.
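Rough back-of-envelope, as a sketch only (assuming PCIe Gen4 and roughly 2 GB/s of usable throughput per lane; Gen3 is about half that):

    # Assumed numbers: ~2 GB/s usable per PCIe Gen4 lane
    GB_PER_LANE = 2.0
    uplink = 8 * GB_PER_LANE       # x8 controller uplink: ~16 GB/s
    per_drive = 4 * GB_PER_LANE    # x4 NVMe SSD: ~8 GB/s at full tilt
    print(f"uplink ~{uplink:.0f} GB/s, per drive ~{per_drive:.0f} GB/s, "
          f"saturated by ~{uplink / per_drive:.0f} drives")

With eight drives behind that x8 link, you'd have roughly 64 GB/s of downstream capability funneling into ~16 GB/s of uplink.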
I can’t edit my other post anymore, but it’s worse than I thought. I’m not sure the PERC 740 supports NVMe at all. The only examples I can find are S140/S150 software RAID.
No idea if a 7.68TB RAID1 over two drives with software RAID is much worse than a theoretical RAID10 over four 3.92TB drives... apparently all the RAID controllers have a tough time with this many IOPS.
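For what it's worth, the theoretical scaling (ignoring controller overhead, which is exactly where these things fall down at NVMe speeds) looks something like this, with made-up round per-drive numbers purely for illustration:

    # Hypothetical per-drive figures, illustrative only
    per_drive_read, per_drive_write = 1_000_000, 200_000   # random 4K IOPS

    # RAID1: reads can come from either mirror, every write hits both drives
    raid1_read, raid1_write = 2 * per_drive_read, per_drive_write
    # RAID10: two mirrored pairs striped together
    raid10_read, raid10_write = 4 * per_drive_read, 2 * per_drive_write

    print(f"RAID1  (2 drives): ~{raid1_read:,} read / ~{raid1_write:,} write IOPS")
    print(f"RAID10 (4 drives): ~{raid10_read:,} read / ~{raid10_write:,} write IOPS")

So on paper the four-drive RAID10 buys about twice the IOPS ceiling of the two-drive RAID1, but whether the controller (or md) can actually deliver that is another question.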