There are specialized models, but even generic ones like Gemini 2.0 Flash are really good and cheap. You can use them for OCR and embed the resulting text inside the PDF so searches index back to the original content.
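For example, something like this untested sketch (using the google-generativeai and PyMuPDF packages; the model name, prompt, and single insertion point are illustrative--a real text layer would position each word over its source glyphs):

    import fitz  # PyMuPDF
    import google.generativeai as genai

    genai.configure(api_key="YOUR_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash")

    doc = fitz.open("scan.pdf")
    for page in doc:
        # Render the page to an image and ask the model to transcribe it.
        pix = page.get_pixmap(dpi=300)
        resp = model.generate_content([
            "Transcribe all text on this page exactly.",
            {"mime_type": "image/png", "data": pix.tobytes("png")},
        ])
        # Embed the transcript as an invisible text layer (render_mode=3)
        # so full-text search hits map back to the original page.
        page.insert_text((72, 72), resp.text, render_mode=3)
    doc.save("scan_with_ocr.pdf")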
This fundamentally misunderstands the problem. Effective OCR predates the popularity of ChatGPT, and e-Discovery folks were already using it--AI in the modern sense adds nothing there. Indexing the resulting text was also already possible--again, AI adds nothing. The problem is that the resulting text lacks structure: being able to sort/filter wiretap data by date/location, for example, isn't possible just because you've obtained or indexed the text. Off-the-shelf models simply aren't accurate enough to solve this without specialized training, even if you can get around the legal problems of feeding potentially sensitive information into a model. Models trained on a large enough domain-specific dataset might work, but there are a lot of subdomains--wiretap data, cell phone GPS data, credit card data, email metadata, etc.--and each would require its own training.
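To make the structure gap concrete: even with perfect OCR you only get flat text, and someone still has to lift it into typed records before sorting or filtering is possible. A minimal sketch, with a hypothetical wiretap log format and an invented WiretapRecord type:

    import re
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class WiretapRecord:
        timestamp: datetime
        location: str
        transcript: str

    # Flat text as it might come back from OCR (format is hypothetical).
    OCR_TEXT = """
    2021-03-14 09:22 TOWER-4451 Subject discussed meeting location
    2021-03-14 11:05 TOWER-0032 Call terminated after 40 seconds
    """

    LINE = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}) (\S+) (.+)")

    records = [
        WiretapRecord(datetime.strptime(m[1], "%Y-%m-%d %H:%M"), m[2], m[3])
        for m in LINE.finditer(OCR_TEXT)
    ]

    # Sorting/filtering only becomes possible after this lift into structure,
    # and any OCR or parsing error silently corrupts the result.
    records.sort(key=lambda r: r.timestamp)
    tower_hits = [r for r in records if r.location == "TOWER-4451"]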
Fundamentally, the solution to this problem is not to create it in the first place. There's no reason for a structured data -> PDF -> AI -> structured data pipeline when we can just require the people producing evidence to hand over the structured data.
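Concretely, "hand over the structured data" could be as simple as mandating a machine-readable submission format alongside (or instead of) the PDF. A hypothetical example--the field names are invented:

    import json

    # Hypothetical machine-readable evidence submission. If producers must
    # hand over records like this, no OCR/AI reconstruction step is needed.
    submission = {
        "evidence_type": "wiretap",
        "records": [
            {
                "timestamp": "2021-03-14T09:22:00Z",
                "location": "TOWER-4451",
                "transcript": "Subject discussed meeting location",
            },
        ],
    }
    print(json.dumps(submission, indent=2))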