
HuggingFaceM4/Idefics3-8B-Llama3 crash fix #3267


Open. Wants to merge 1 commit into base: main.

Conversation

sywangyi (Contributor) commented:

server:

text-generation-launcher --model-id=HuggingFaceM4/Idefics3-8B-Llama3 -p 8080

client:

curl -N 0.0.0.0:8080/generate_stream \
  -X POST \
  -d '{"inputs":"What is in the picture?\n\n","parameters":{"max_new_tokens":100, "seed": 42, "do_sample":true}}' \
  -H 'Content-Type: application/json'

The crash happens because the slot allocated on the Rust side is not large enough. Gaudi and XPU hit the same issue; presumably CUDA does too.
The image processing logic is copied from https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L118
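
For reference, here is a minimal Rust sketch of that two-pass size computation. It is an illustration of the mirrored transformers logic, not the exact diff: the function names are made up, and MAX_IMAGE_SIZE = 4096 follows the constant in the transformers file.

// Sketch of the output-size computation mirrored from transformers'
// image_processing_idefics3.py; names are illustrative, not the PR's code.

const MAX_IMAGE_SIZE: usize = 4096; // absolute cap, as in the transformers file

// Pass 1: rescale the longest edge to `max_len`, preserving the aspect ratio;
// the computed shorter edge is bumped to an even number, as upstream does.
fn rescale_to_max_len(height: usize, width: usize, max_len: usize) -> (usize, usize) {
    let aspect_ratio = width as f64 / height as f64;
    let (height, width) = if width >= height {
        let w = max_len;
        let h = (w as f64 / aspect_ratio) as usize;
        (h + h % 2, w) // +1 if odd
    } else {
        let h = max_len;
        let w = (h as f64 * aspect_ratio) as usize;
        (h, w + w % 2) // +1 if odd
    };
    (height.max(1), width.max(1))
}

// Pass 2: if pass 1 still produced an edge above `max_len`, scale back down
// below it while preserving the aspect ratio.
fn scale_below_upper_bound(height: usize, width: usize, max_len: usize) -> (usize, usize) {
    let aspect_ratio = width as f64 / height as f64;
    if width >= height && width > max_len {
        let w = max_len;
        (((w as f64 / aspect_ratio) as usize).max(1), w)
    } else if height > width && height > max_len {
        let h = max_len;
        (h, ((h as f64 * aspect_ratio) as usize).max(1))
    } else {
        (height, width)
    }
}

// The size the Python server will actually resize the image to. The router
// must reserve image slots based on this size, not on the raw input size.
fn target_image_size(height: usize, width: usize, longest_edge: usize) -> (usize, usize) {
    let (h, w) = rescale_to_max_len(height, width, longest_edge);
    scale_below_upper_bound(h, w, MAX_IMAGE_SIZE)
}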

sywangyi (Contributor, Author) commented:

@Narsil @regisss please help review, thanks

sywangyi (Contributor, Author) commented:

The following is the crash on XPU:

2025-06-16T01:25:16.438091Z INFO text_generation_launcher: image_id 0 start_idx 0 end_idx 2314, length 2314
2025-06-16T01:25:17.269690Z ERROR batch{batch_size=1}:prefill:prefill{id=0 size=1}:prefill{id=0 size=1}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: transport error
2025-06-16T01:25:17.269815Z ERROR batch{batch_size=1}:prefill:clear_cache{batch_id=Some(0)}:clear_cache{batch_id=Some(0)}: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: error trying to connect: Connection refused (os error 111)
2025-06-16T01:25:17.269831Z ERROR generate_stream{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, frequency_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: true, max_new_tokens: Some(100), return_full_text: None, stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: Some(42), top_n_tokens: None, grammar: None, adapter_id: None }}:async_stream:generate_stream:schedule:infer:send_error: text_generation_router_v3::backend: backends/v3/src/backend.rs:546: Request failed during generation: Server error: transport error
2025-06-16T01:25:17.333341Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:

[W616 01:23:57.772114900 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
operator: aten::geometric_(Tensor(a!) self, float p, *, Generator? generator=None) -> Tensor(a!)
registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
dispatch key: XPU
previous kernel: registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:37
new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/gpu/xpu/ATen/RegisterXPU_0.cpp:186 (function operator())
2025-06-16 01:23:59.160 | INFO | text_generation_server.utils.import_utils::76 - Detected system ipex
AssertHandler::printMessage
/pytorch/third_party/torch-xpu-ops/src/ATen/native/xpu/sycl/Indexing.h:620: operator(): global id: [2080,0,0], local id: [0,0,0] Assertion index >= -sizes_[i] && index < sizes_[i] && "index out of bounds" failed.
/pytorch/third_party/torch-xpu-ops/src/ATen/native/xpu/sycl/Indexing.h:620: operator(): global id: [2081,0,0], local id: [1,0,0] Assertion index >= -sizes_[i] && index < sizes_[i] && "index out of bounds" failed.
/pytorch/third_party/torch-xpu-ops/src/ATen/native/xpu/sycl/Indexing.h:620: operator(): global id: [2082,0,0], local id: [2,0,0] Assertion index >= -sizes_[i] && index < sizes_[i] && "index out of bounds" failed.

Review thread on the resize computation in the diff, quoting this context:

    )
} else {
    (height, width)
}
Collaborator commented:

Why keep two loops? Why not just change the value of 1456 to 4096? (Although I guess 1456 was chosen on purpose.)

sywangyi (Contributor, Author) commented:

The first loop is equivalent to https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L139: find the output size when rescaling the longest edge to max_len while preserving the aspect ratio.
The second loop is equivalent to https://github.com/huggingface/transformers/blob/main/src/transformers/models/idefics3/image_processing_idefics3.py#L141: find the output size when scaling the image to be below the MAX_IMAGE_SIZE.

sywangyi (Contributor, Author) commented on Jun 17, 2025:

The second loop runs after the first one for the case where the width/height produced by the first loop is still larger than 4096.
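
To make that concrete, here is a hedged usage example of the target_image_size sketch from the description above. The input numbers are hypothetical; 1456 is the default longest_edge of 4 * 364 in the transformers image processor.

fn main() {
    // Pass 1 can only emit an edge above 4096 when the configured
    // longest_edge itself exceeds MAX_IMAGE_SIZE; 5000 is a made-up value.
    let (h, w) = target_image_size(2000, 8000, 5000);
    // pass 1 rescales to (1250, 5000); pass 2 caps it to (1024, 4096)
    assert_eq!((h, w), (1024, 4096));

    // With the default longest_edge = 1456, pass 2 is a no-op:
    let (h, w) = target_image_size(2000, 8000, 1456);
    assert_eq!((h, w), (364, 1456));
}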
