Which model are you using? There are a bunch of different background-removal models, many with configuration options, but most services only provide one, with no configuration. I need to remove backgrounds for my ecommerce business, and the results vary widely between models; configuring alpha matting can make a difference too. So I've been developing a tool that has all the models in one place, along with upscaling, enhancing, and inpainting models. It spins up Vultr GPU instances on demand, but that's kind of slow, so I'm also hitting APIs like Replicate, Hugging Face, and RunPod. I will integrate yours too.
For background removal, I get good results with isnet-general-use and u2net, available through rembg or Hugging Face. I've also been getting decent results with DIS-v1 on Replicate.
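For anyone curious, here's roughly how I drive model selection and alpha matting through rembg. A minimal sketch: the function name, model list, and threshold values are just my examples, not recommendations, so tune them per image set.

```python
# Sketch: comparing background-removal models via rembg (pip install rembg).
# The rembg import is deferred so the module loads even without the dependency.

def remove_background(src_path: str, dst_path: str,
                      model: str = "isnet-general-use",
                      matting: bool = True) -> None:
    """Strip the background from one image using the given rembg model."""
    from rembg import remove, new_session  # heavy optional dependency

    session = new_session(model)  # downloads model weights on first use
    with open(src_path, "rb") as f:
        data = f.read()
    out = remove(
        data,
        session=session,
        alpha_matting=matting,                   # refine soft/blurry edges
        alpha_matting_foreground_threshold=240,  # example values; tune these
        alpha_matting_background_threshold=10,
        alpha_matting_erode_size=10,
    )
    with open(dst_path, "wb") as f:
        f.write(out)

# Models worth comparing on the same product shot:
CANDIDATE_MODELS = ["isnet-general-use", "u2net", "u2netp"]
```

Running every candidate model over a sample of your catalog and eyeballing the cutouts side by side is the fastest way I've found to pick a default.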
The results vary so widely, especially with blurry or light areas, that it's necessary to have options. It can also be very helpful to run preprocessing enhancement (deblurring or upscaling) prior to the background removal. I'm sure you could even take the alpha mask from the enhanced image and apply it to the original image, to help in cases where the source image has issues.
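That mask-transfer trick can be sketched with Pillow. This is just my assumption of how it would work (the function name and the resize fallback for upscaled enhancements are mine): compute the cutout on the enhanced copy, then lift its alpha channel onto the untouched original.

```python
# Sketch: reuse the alpha mask computed on an enhanced copy of an image
# to cut out the original, preserving the original pixels.
from PIL import Image

def transfer_alpha(original: Image.Image, enhanced_cutout: Image.Image) -> Image.Image:
    """Apply the alpha channel from enhanced_cutout to original."""
    rgba = original.convert("RGBA")
    alpha = enhanced_cutout.convert("RGBA").getchannel("A")
    if alpha.size != rgba.size:
        alpha = alpha.resize(rgba.size)  # e.g. if the enhancement step upscaled
    rgba.putalpha(alpha)
    return rgba
```

This way the segmentation model sees a clean, sharp image, but the output keeps the original's colors and texture.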
There also needs to be a service for interactive background removal, via automatic and/or interactive segmenting. Sometimes the models need a little help, and I think it's ridiculous that I still have to trace paths when the models fail.
Anyway, I love the idea and pricing model and will definitely try it out, but I'd like to see more details on the models being used, and more options and configuration.
This looks like a fun startup; I've thought of doing something similar. There's a lot of room to grow with other AI image manipulation models, not just background removal. Shoot me an email if you would like to discuss.
We'll likely add complementary AI models (e.g. super-resolution, Stable Diffusion), with the broad bet being that businesses definitely don't want to run their own service, and would prefer off-the-shelf to custom models (for which there are plenty of hosting options).
For background removal specifically, all models will inevitably have some failure rate. https://clippingmagic.com is the only one with a serious editor that enables you to get exactly the result you want on any image (it's our legacy service with "old-school" SaaS pricing).
I signed up and used it via API. Decent results and FAST! I'll keep using it via API. Looking forward to seeing what else you guys come up with. Thanks!
I think that broad bet is a good one. Simple API endpoints with a good selection of curated models will certainly be a hit. There are lots of options for hosting, and quite a few API providers, but they're all some combination of overly complicated, slow, brittle, or functionally limited.