HuggingFaceM4/Idefics3-8B-Llama3 crash fix #3267
base: main
Conversation
Signed-off-by: Wang, Yi A <[email protected]>
The following is the crash on XPU:
2025-06-16T01:25:16.438091Z INFO text_generation_launcher: image_id 0 start_idx 0 end_idx 2314, length 2314
[W616 01:23:57.772114900 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
)
} else {
    (height, width)
}
Why keep 2 loops? Why not just change the value of 1456 to 4096? (Although I guess 1456 was done on purpose.)
The first loop is equivalent to https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L139 (# Find the output size when rescaling the longest edge to max_len and preserving the aspect ratio).
The second loop is equivalent to https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L141 (# Find the output size when scaling the image to be below the MAX_IMAGE_SIZE).
The second loop runs after the first one, for the case where the width/height produced by the first loop is still larger than 4096. A minimal sketch of the two-step logic is shown below.
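For reference, here is a small Rust sketch of that two-step resize, mirroring the transformers reference linked above. The function names are illustrative (not the exact router code), and the constants 1456 and 4096 are the values discussed in this thread:

```rust
// First step (cf. image_processing_idefics3.py#L139): scale so the longest
// edge equals `max_len`, preserving the aspect ratio.
fn rescale_to_max_len(height: usize, width: usize, max_len: usize) -> (usize, usize) {
    let aspect_ratio = width as f64 / height as f64;
    let (mut height, mut width) = (height, width);
    if width >= height {
        width = max_len;
        height = (width as f64 / aspect_ratio) as usize;
        if height % 2 != 0 {
            // The reference rounds the derived edge up to an even number.
            height += 1;
        }
    } else {
        height = max_len;
        width = (height as f64 * aspect_ratio) as usize;
        if width % 2 != 0 {
            width += 1;
        }
    }
    (height.max(1), width.max(1))
}

// Second step (cf. image_processing_idefics3.py#L141): shrink again if either
// edge still exceeds `upper_bound`.
fn scale_below_upper_bound(height: usize, width: usize, upper_bound: usize) -> (usize, usize) {
    let aspect_ratio = width as f64 / height as f64;
    if width >= height && width > upper_bound {
        let new_width = upper_bound;
        let new_height = (new_width as f64 / aspect_ratio) as usize;
        (new_height.max(1), new_width)
    } else if height > width && height > upper_bound {
        let new_height = upper_bound;
        let new_width = (new_height as f64 * aspect_ratio) as usize;
        (new_height, new_width.max(1))
    } else {
        (height, width)
    }
}

// Putting the two steps together with the values discussed in this thread
// (1456 for the first pass, 4096 as the hard upper bound).
fn get_resize_output_image_size(height: usize, width: usize) -> (usize, usize) {
    let (height, width) = rescale_to_max_len(height, width, 1456);
    scale_below_upper_bound(height, width, 4096)
}
```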
server:
text-generation-launcher --model-id=HuggingFaceM4/Idefics3-8B-Llama3 -p 8080

client:
curl -N 0.0.0.0:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is in the picture?\n\n","parameters":{"max_new_tokens":100, "seed": 42, "do_sample":true}}' \
    -H 'Content-Type: application/json'
The crash happens because the slot allocated on the Rust side is not large enough.
Gaudi and XPU have the same issue; CUDA presumably has the same issue as well.
The image processing logic is copied from https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L118.
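To see why the slot size depends on the resize result, here is a rough Rust sketch (not the actual router code) of how the resized dimensions translate into an image-token budget. The 364-pixel tile size and 169 tokens per tile are the Idefics3 processor defaults and are assumptions here, not values taken from this PR:

```rust
// Rough illustration of the relation between the resized image size and the
// number of image tokens the router has to reserve in a slot.
fn image_token_budget(resized_height: usize, resized_width: usize) -> usize {
    const PATCH: usize = 364;         // assumed Idefics3 tile size
    const SEQ_PER_PATCH: usize = 169; // assumed image_seq_len per tile

    // Number of tiles in each direction (ceiling division).
    let rows = (resized_height + PATCH - 1) / PATCH;
    let cols = (resized_width + PATCH - 1) / PATCH;

    // Grid tiles plus the global image; the surrounding special tokens are
    // ignored here, so the real budget is slightly larger.
    (rows * cols + 1) * SEQ_PER_PATCH
}
```

The point is that the token count grows with the resize output, so the Rust-side size computation has to match the Python-side processor exactly; otherwise the reserved slot can end up too small, which is what the crash above shows.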