
Is it possible to make the inference a bit faster? #2

Open
CriusFission opened this issue Apr 21, 2023 · 8 comments
@CriusFission

Hi, @Curt-Park, thanks for sharing your work. I'm trying to run this on a bunch of images in a folder. It contains around 100 images. It takes a few hours on my RTX 3070. Is it possible to make it a bit faster?

@Curt-Park
Owner

The execution speed is heavily dependent on the mask generator algorithm in Segment Anything, which runs 1024 predictions per image.

We have two solutions:

  1. Use a smaller model.
  2. Enhance the performance of SAM's mask generator.
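To make the cost concrete: SAM's automatic mask generator prompts the model with a square grid of points per image, so the number of predictions grows quadratically with the grid side. The sketch below only models that count (the `num_predictions` helper is illustrative, not part of the library); `points_per_side` is a real constructor argument of `SamAutomaticMaskGenerator`, whose default of 32 gives the 1024 predictions mentioned above.

```python
def num_predictions(points_per_side: int) -> int:
    """Prompt points (and thus per-image mask predictions) for a square grid."""
    return points_per_side ** 2

# Default grid: 32 x 32 = 1024 predictions per image.
print(num_predictions(32))  # 1024

# Halving the grid density cuts the prediction count by 4x.
print(num_predictions(16))  # 256
```

With the real library this corresponds to e.g. `SamAutomaticMaskGenerator(sam, points_per_side=16)`, and the smaller-model route corresponds to loading a lighter checkpoint such as `sam_model_registry["vit_b"]` instead of `vit_h`.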

@CriusFission
Author

Thanks for the reply. Using a smaller model is helpful.
I apologize if the question is silly, but how do I go about enhancing the performance of SAM's mask generator?

@Curt-Park
Owner

I made a new algorithm that runs fast on CPU.
However, I cannot open-source it because I made it for work. I am sorry about that.

DEMO:
https://youtu.be/y9AAPsTCW3I

@CriusFission
Author

Looks cool!
Is it possible to do batched inference with this model? Since I have about 100 images, it would make sense to provide the input as a batch.


@Kirang96

Kirang96 commented May 5, 2023

I meant running multiple images at a time in different cores.

@Curt-Park
Owner

> I meant running multiple images at a time in different cores.

As for this repository, you can run multiple gradio apps at once.
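The "multiple images on different cores" idea can also be done directly with Python's multiprocessing, without spawning several gradio apps. This is a minimal sketch under stated assumptions: `process_image` is a hypothetical stand-in for the per-image SAM pipeline, and the file names are made up for illustration.

```python
from multiprocessing import Pool


def process_image(path: str) -> str:
    # Placeholder for the per-image segmentation call
    # (in the real pipeline this would run SAM's mask generator on the image).
    return f"masks for {path}"


if __name__ == "__main__":
    paths = [f"img_{i:03d}.png" for i in range(100)]
    # Spread the ~100 images across worker processes, one image per task.
    with Pool(processes=4) as pool:
        results = pool.map(process_image, paths)
    print(len(results))  # 100
```

Note this parallelizes across images rather than batching tensors through the GPU; on a single RTX 3070, per-process GPU memory would limit how many workers can actually run inference at once.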

@Curt-Park
Owner

> I made a new algorithm that runs fast on CPU. However, I cannot open-source it because I made it for work.

FYI, we opened demo pages on Hugging Face Spaces!

  • Fast Segment Everything: Re-implements the Everything algorithm in an iterative manner better suited to CPU-only environments. It produces results comparable to the original Everything with about 1/5 the number of inferences (e.g., 200 vs. 1024), and it takes under 10 seconds to search for masks on a CPU upgrade instance (8 vCPU, 32 GB RAM) of Hugging Face Spaces.
  • Fast Segment Everything with Text Prompt: This example, based on Fast Segment Everything, provides a text prompt that generates an attention map for the area you want to focus on.
  • Fast Segment Everything with Image Prompt: This example, based on Fast Segment Everything, provides an image prompt that generates an attention map for the area you want to focus on.
  • Fast Segment Everything with Drawing Prompt: This example, based on Fast Segment Everything, provides a drawing prompt that generates an attention map for the area you want to focus on.
