
Hey! Yes, false positives are a problem we're hyper-aware of, and a big challenge with most security tools. We have less of a problem with this than other tools for a few reasons :)

1) We're a bit different from a standard fuzzing tool: instead of generating traffic to send to an API, we find vulnerabilities by analyzing real production/staging traffic. This gives our models a better understanding of how the API actually works. Although we might add a fuzzer at some point!

2) We split out very high-signal vulns (https://demo.metlo.com/vulnerabilities) from vulns/attacks that we detect with our ML models, which may have some false positives (https://demo.metlo.com/protection). Separating the two gives you a better way to triage any alerts.
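The split in (2) boils down to routing findings by how trustworthy their detection source is. A rough sketch of that pattern (names here are hypothetical for illustration, not Metlo's actual schema or API):

```python
from dataclasses import dataclass

# Hypothetical finding record -- "rule" marks deterministic, high-signal
# checks; "ml" marks model-based detections that may be false positives.
@dataclass
class Finding:
    name: str
    source: str  # "rule" or "ml"

def triage(findings):
    """Split findings into a high-confidence vulnerability bucket and a
    needs-review bucket of ML detections, so analysts can prioritize."""
    vulnerabilities = [f for f in findings if f.source == "rule"]
    review_queue = [f for f in findings if f.source == "ml"]
    return vulnerabilities, review_queue

findings = [
    Finding("SQL injection in /login", "rule"),
    Finding("Anomalous traffic pattern", "ml"),
]
vulns, review = triage(findings)
```

The point of the design is that a rule-based finding can page someone immediately, while the ML bucket can be batched for human review without eroding trust in the high-signal alerts.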

3) We're putting a lot of effort into making our models really good. We're not comfortable with our tool having high false positive rates, so when a model returns 90%+ false positives, we're aware of that and don't even add it :)

Thanks for the feedback and for playing devil's advocate!


