
Llama 3.2 vision finetuning error (Unsupported: hasattr ConstDictVariable to) #1325

Open
adi7820 opened this issue Nov 22, 2024 · 3 comments
Labels
fixed - pending confirmation (Fixed, waiting for confirmation from poster), URGENT BUG (Urgent bug)

Comments

@adi7820

adi7820 commented Nov 22, 2024


Unsupported Traceback (most recent call last)
Cell In[10], line 22
20 from transformers import TextStreamer
21 text_streamer = TextStreamer(tokenizer, skip_prompt = True)
---> 22 _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
23 use_cache = True, temperature = 1.5, min_p = 0.1)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/unsloth/models/vision.py:63, in _wrap_fast_inference.<locals>._fast_generate(*args, **kwargs)
61 # Autocasted
62 with torch.autocast(device_type = device_type, dtype = dtype):
---> 63 output = generate(*args, **kwargs)
64 pass
65 return output

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/peft/peft_model.py:1704, in PeftModelForCausalLM.generate(self, *args, **kwargs)
1702 with self._enable_peft_forward_hooks(*args, **kwargs):
1703 kwargs = {k: v for k, v in kwargs.items() if k not in self.special_peft_forward_args}
-> 1704 outputs = self.base_model.generate(*args, **kwargs)
1705 else:
1706 outputs = self.base_model.generate(**kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/transformers/generation/utils.py:2215, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
2207 input_ids, model_kwargs = self._expand_inputs_for_generation(
2208 input_ids=input_ids,
2209 expand_size=generation_config.num_return_sequences,
2210 is_encoder_decoder=self.config.is_encoder_decoder,
2211 **model_kwargs,
2212 )
2214 # 12. run sample (it degenerates to greedy search when generation_config.do_sample=False)
-> 2215 result = self._sample(
2216 input_ids,
2217 logits_processor=prepared_logits_processor,
2218 stopping_criteria=prepared_stopping_criteria,
2219 generation_config=generation_config,
2220 synced_gpus=synced_gpus,
2221 streamer=streamer,
2222 **model_kwargs,
2223 )
2225 elif generation_mode in (GenerationMode.BEAM_SAMPLE, GenerationMode.BEAM_SEARCH):
2226 # 11. prepare beam search scorer
2227 beam_scorer = BeamSearchScorer(
2228 batch_size=batch_size,
2229 num_beams=generation_config.num_beams,
(...)
2234 max_length=generation_config.max_length,
2235 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/transformers/generation/utils.py:3206, in GenerationMixin._sample(self, input_ids, logits_processor, stopping_criteria, generation_config, synced_gpus, streamer, **model_kwargs)
3203 model_inputs.update({"output_hidden_states": output_hidden_states} if output_hidden_states else {})
3205 # forward pass to get next token
-> 3206 outputs = self(**model_inputs, return_dict=True)
3208 # synced_gpus: don't waste resources running the code we don't need; kwargs must be updated before skipping
3209 model_kwargs = self._update_model_kwargs_for_generation(
3210 outputs,
3211 model_kwargs,
3212 is_encoder_decoder=self.config.is_encoder_decoder,
3213 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/transformers/models/mllama/modeling_mllama.py:2098, in MllamaForConditionalGeneration.forward(self, input_ids, pixel_values, aspect_ratio_mask, aspect_ratio_ids, attention_mask, cross_attention_mask, cross_attention_states, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep)
2096 raise ValueError("aspect_ratio_ids must be provided if pixel_values is provided")
2097 # get vision tokens from vision model
-> 2098 vision_outputs = self.vision_model(
2099 pixel_values=pixel_values,
2100 aspect_ratio_ids=aspect_ratio_ids,
2101 aspect_ratio_mask=aspect_ratio_mask,
2102 output_hidden_states=output_hidden_states,
2103 output_attentions=output_attentions,
2104 return_dict=return_dict,
2105 )
2106 cross_attention_states = vision_outputs[0]
2107 cross_attention_states = self.multi_modal_projector(cross_attention_states).reshape(
2108 -1, cross_attention_states.shape[-2], self.hidden_size
2109 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/transformers/models/mllama/modeling_mllama.py:1422, in MllamaVisionModel.forward(self, pixel_values, aspect_ratio_ids, aspect_ratio_mask, output_attentions, output_hidden_states, return_dict)
1420 _, num_patches, dim = hidden_state.shape
1421 hidden_state = hidden_state.reshape(batch_size * num_concurrent_media, num_tiles, -1, dim)
-> 1422 hidden_state = self.pre_tile_positional_embedding(hidden_state, aspect_ratio_ids)
1424 # Add cls token
1425 hidden_state = hidden_state.reshape(batch_size * num_concurrent_media * num_tiles, num_patches, dim)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)

File ~/SageMaker/vlm-unsloth-finetune/unsloth_compiled_cache/unsloth_compiled_module_mllama.py:105, in MllamaPrecomputedAspectRatioEmbedding.forward(self, hidden_state, aspect_ratio_ids)
104 def forward(self, hidden_state: torch.Tensor, aspect_ratio_ids: torch.Tensor) -> torch.Tensor:
--> 105 return MllamaPrecomputedAspectRatioEmbedding_forward(self, hidden_state, aspect_ratio_ids)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:433, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
428 saved_dynamic_layer_stack_depth = (
429 torch._C._functorch.get_dynamic_layer_stack_depth()
430 )
432 try:
--> 433 return fn(*args, **kwargs)
434 finally:
435 # Restore the dynamic layer stack depth if necessary.
436 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
437 saved_dynamic_layer_stack_depth
438 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:1116, in CatchErrorsWrapper.__call__(self, frame, cache_entry, frame_state)
1110 return hijacked_callback(
1111 frame, cache_entry, self.hooks, frame_state
1112 )
1114 with compile_lock, _disable_current_modes():
1115 # skip=1: skip this frame
-> 1116 return self._torchdynamo_orig_callable(
1117 frame, cache_entry, self.hooks, frame_state, skip=1
1118 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:472, in ConvertFrameAssert.__call__(self, frame, cache_entry, hooks, frame_state, skip)
458 compile_id = CompileId(frame_id, frame_compile_id)
460 signpost_event(
461 "dynamo",
462 "_convert_frame_assert._compile",
(...)
469 },
470 )
--> 472 return _compile(
473 frame.f_code,
474 frame.f_globals,
475 frame.f_locals,
476 frame.f_builtins,
477 self._torchdynamo_orig_callable,
478 self._one_graph,
479 self._export,
480 self._export_constraints,
481 hooks,
482 cache_entry,
483 cache_size,
484 frame,
485 frame_state=frame_state,
486 compile_id=compile_id,
487 skip=skip + 1,
488 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_utils_internal.py:84, in compile_time_strobelight_meta.<locals>.compile_time_strobelight_meta_inner.<locals>.wrapper_function(*args, **kwargs)
82 if "skip" in kwargs:
83 kwargs["skip"] = kwargs["skip"] + 1
---> 84 return StrobelightCompileTimeProfiler.profile_compile_time(
85 function, phase_name, *args, **kwargs
86 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_strobelight/compile_time_profiler.py:129, in StrobelightCompileTimeProfiler.profile_compile_time(cls, func, phase_name, *args, **kwargs)
124 @classmethod
125 def profile_compile_time(
126 cls, func: Any, phase_name: str, *args: Any, **kwargs: Any
127 ) -> Any:
128 if not cls.enabled:
--> 129 return func(*args, **kwargs)
131 if cls.profiler is None:
132 logger.error("profiler is not set")

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/contextlib.py:79, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
76 @wraps(func)
77 def inner(*args, **kwds):
78 with self._recreate_cm():
---> 79 return func(*args, **kwds)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:817, in _compile(code, globals, locals, builtins, compiler_fn, one_graph, export, export_constraints, hooks, cache_entry, cache_size, frame, frame_state, compile_id, skip)
815 guarded_code = None
816 try:
--> 817 guarded_code = compile_inner(code, one_graph, hooks, transform)
818 return guarded_code
819 except (
820 Unsupported,
821 TorchRuntimeError,
(...)
828 BisectValidationException,
829 ) as e:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/utils.py:231, in dynamo_timed.<locals>.dynamo_timed_inner.<locals>.time_wrapper(*args, **kwargs)
229 with torch.profiler.record_function(f"{key} (dynamo_timed)"):
230 t0 = time.time()
--> 231 r = func(*args, **kwargs)
232 time_spent = time.time() - t0
233 compilation_time_metrics[key].append(time_spent)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:636, in _compile.<locals>.compile_inner(code, one_graph, hooks, transform)
634 CompileContext.get().attempt = attempt
635 try:
--> 636 out_code = transform_code_object(code, transform)
637 break
638 except exc.RestartAnalysis as e:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py:1185, in transform_code_object(code, transformations, safe)
1182 instructions = cleaned_instructions(code, safe)
1183 propagate_line_nums(instructions)
-> 1185 transformations(instructions, code_options)
1186 return clean_and_assemble_instructions(instructions, keys, code_options)[1]

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:178, in preserve_global_state.<locals>._fn(*args, **kwargs)
176 cleanup = setup_compile_debug()
177 try:
--> 178 return fn(*args, **kwargs)
179 finally:
180 cleanup.close()

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py:582, in _compile.<locals>.transform(instructions, code_options)
580 try:
581 with tracing(tracer.output.tracing_context), tracer.set_current_tx():
--> 582 tracer.run()
583 except exc.UnspecializeRestartAnalysis:
584 speculation_log.clear()

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2451, in InstructionTranslator.run(self)
2450 def run(self):
-> 2451 super().run()

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
497 return handle_graph_break(self, inst, speculation.reason)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
501 if self.generic_context_manager_depth > 0:
502 # We don't support graph break under GenericContextWrappingVariable,
503 # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1457 args = self.popn(inst.argval)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
741 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py:838, in UnspecializedNNModuleVariable.call_function(self, tx, args, kwargs)
832 ctx = (
833 record_nn_module_stack(str(id(mod)), self.source, tx, mod)
834 if self.source
835 else nullcontext()
836 )
837 with ctx:
--> 838 return variables.UserFunctionVariable(fn, source=source).call_function(
839 tx, [self] + list(args), kwargs
840 )

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
288 if self.is_constant:
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
745 def inline_user_function_return(self, fn, args, kwargs):
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2663 @classmethod
2664 def inline_call(cls, parent, func, args, kwargs):
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2780 try:
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
2784 msg = f"Observed exception DURING INLING {code} : {e}"

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
497 return handle_graph_break(self, inst, speculation.reason)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
501 if self.generic_context_manager_depth > 0:
502 # We don't support graph break under GenericContextWrappingVariable,
503 # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1500, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
1498 # Map to a dictionary of str -> VariableTracker
1499 kwargsvars = kwargsvars.keys_as_python_constant()
-> 1500 self.call_function(fn, argsvars.items, kwargsvars)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
741 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:796, in FunctoolsPartialVariable.call_function(self, tx, args, kwargs)
794 merged_args = self.args + args
795 merged_kwargs = {**self.keywords, **kwargs}
--> 796 return self.func.call_function(tx, merged_args, merged_kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
288 if self.is_constant:
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
745 def inline_user_function_return(self, fn, args, kwargs):
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2663 @classmethod
2664 def inline_call(cls, parent, func, args, kwargs):
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2780 try:
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
2784 msg = f"Observed exception DURING INLING {code} : {e}"

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
497 return handle_graph_break(self, inst, speculation.reason)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
501 if self.generic_context_manager_depth > 0:
502 # We don't support graph break under GenericContextWrappingVariable,
503 # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1500, in InstructionTranslatorBase.CALL_FUNCTION_EX(self, inst)
1498 # Map to a dictionary of str -> VariableTracker
1499 kwargsvars = kwargsvars.keys_as_python_constant()
-> 1500 self.call_function(fn, argsvars.items, kwargsvars)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
741 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:344, in UserMethodVariable.call_function(self, tx, args, kwargs)
342 fn = getattr(self.obj.value, self.fn.name)
343 return invoke_and_store_as_constant(tx, fn, self.get_name(), args, kwargs)
--> 344 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
288 if self.is_constant:
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
745 def inline_user_function_return(self, fn, args, kwargs):
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2663 @classmethod
2664 def inline_call(cls, parent, func, args, kwargs):
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2780 try:
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
2784 msg = f"Observed exception DURING INLING {code} : {e}"

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
497 return handle_graph_break(self, inst, speculation.reason)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
501 if self.generic_context_manager_depth > 0:
502 # We don't support graph break under GenericContextWrappingVariable,
503 # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1512, in InstructionTranslatorBase.CALL_FUNCTION_KW(self, inst)
1510 kwargs = dict(zip(argnames, kwargs_list))
1511 assert len(kwargs) == len(argnames)
-> 1512 self.call_function(fn, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
741 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:293, in UserFunctionVariable.call_function(self, tx, args, kwargs)
288 if self.is_constant:
289 return invoke_and_store_as_constant(
290 tx, self.fn, self.get_name(), args, kwargs
291 )
--> 293 return super().call_function(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:90, in BaseUserFunctionVariable.call_function(self, tx, args, kwargs)
87 def call_function(
88 self, tx, args: "List[VariableTracker]", kwargs: "Dict[str, VariableTracker]"
89 ) -> "VariableTracker":
---> 90 return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:749, in InstructionTranslatorBase.inline_user_function_return(self, fn, args, kwargs)
745 def inline_user_function_return(self, fn, args, kwargs):
746 """
747 A call to some user defined function by inlining it.
748 """
--> 749 return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2666, in InliningInstructionTranslator.inline_call(cls, parent, func, args, kwargs)
2663 @classmethod
2664 def inline_call(cls, parent, func, args, kwargs):
2665 with patch.dict(counters, {"unimplemented": counters["inline_call"]}):
-> 2666 return cls.inline_call_(parent, func, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:2782, in InliningInstructionTranslator.inline_call_(parent, func, args, kwargs)
2780 try:
2781 with strict_ctx:
-> 2782 tracer.run()
2783 except exc.ObservedException as e:
2784 msg = f"Observed exception DURING INLING {code} : {e}"

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:893, in InstructionTranslatorBase.run(self)
891 try:
892 self.output.push_tx(self)
--> 893 while self.step():
894 pass
895 except BackendCompilerFailed:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:805, in InstructionTranslatorBase.step(self)
802 self.update_block_stack(inst)
804 try:
--> 805 self.dispatch_table[inst.opcode](self, inst)
806 return not self.output.should_exit
807 except exc.ObservedException:

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:499, in break_graph_if_unsupported.<locals>.decorator.<locals>.wrapper(self, inst)
497 return handle_graph_break(self, inst, speculation.reason)
498 try:
--> 499 return inner_fn(self, inst)
500 except Unsupported as excp:
501 if self.generic_context_manager_depth > 0:
502 # We don't support graph break under GenericContextWrappingVariable,
503 # If there is, we roll back to the checkpoint and fall back.

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1459, in InstructionTranslatorBase.CALL_FUNCTION(self, inst)
1457 args = self.popn(inst.argval)
1458 fn = self.pop()
-> 1459 self.call_function(fn, args, {})

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:743, in InstructionTranslatorBase.call_function(self, fn, args, kwargs)
741 if inner_fn and callable(inner_fn) and is_forbidden(inner_fn):
742 raise AssertionError(f"Attempt to trace forbidden callable {inner_fn}")
--> 743 self.push(fn.call_function(self, args, kwargs))

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:962, in BuiltinVariable.call_function(self, tx, args, kwargs)
958 if not handler:
959 self.call_function_handler_cache[key] = handler = self._make_handler(
960 self.fn, [type(x) for x in args], bool(kwargs)
961 )
--> 962 return handler(tx, args, kwargs)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:837, in BuiltinVariable._make_handler.<locals>.builtin_dipatch(tx, args, kwargs)
836 def builtin_dipatch(tx, args, kwargs):
--> 837 rv = handler(tx, args, kwargs)
838 if rv:
839 return rv

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:764, in BuiltinVariable._make_handler.<locals>.call_self_handler(tx, args, kwargs)
762 def call_self_handler(tx, args, kwargs):
763 try:
--> 764 result = self_handler(tx, *args, **kwargs)
765 if result is not None:
766 return result

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:1460, in BuiltinVariable.call_hasattr(self, tx, obj, attr)
1458 if isinstance(obj, variables.BuiltinVariable):
1459 return variables.ConstantVariable(hasattr(obj.fn, name))
-> 1460 return obj.call_hasattr(tx, name)

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/variables/base.py:296, in VariableTracker.call_hasattr(self, tx, name)
295 def call_hasattr(self, tx, name: str) -> "VariableTracker":
--> 296 unimplemented(f"hasattr {self.__class__.__name__} {name}")

File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/_dynamo/exc.py:221, in unimplemented(msg, from_exc)
219 if from_exc is not _NOTHING:
220 raise Unsupported(msg) from from_exc
--> 221 raise Unsupported(msg)

Unsupported: hasattr ConstDictVariable to

from user code:
File "/home/ec2-user/SageMaker/vlm-unsloth-finetune/unsloth_compiled_cache/unsloth_compiled_module_mllama.py", line 83, in MllamaPrecomputedAspectRatioEmbedding_forward
embeddings = self.embedding(aspect_ratio_ids)
File "/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
File "/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/hooks.py", line 364, in pre_forward
return send_to_device(args, self.execution_device), send_to_device(
File "/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/accelerate/utils/operations.py", line 149, in send_to_device
if is_torch_tensor(tensor) or hasattr(tensor, "to"):

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
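
For context, the user-code frames at the bottom show where the trace breaks: accelerate's send_to_device calls hasattr(tensor, "to") on the forward kwargs while TorchDynamo is tracing the Unsloth-compiled mllama module, and this torch build's Dynamo cannot handle hasattr on a ConstDictVariable. As a stop-gap only (an assumption on my side, not a confirmed fix), Dynamo can be told to fall back to eager execution when it hits an unsupported construct:

import torch._dynamo
# Assumption: skipping the frames Dynamo cannot trace avoids the crash, at the cost of
# running those frames uncompiled; the real fix is the package update further below.
torch._dynamo.config.suppress_errors = True

This only masks the graph break; upgrading as discussed in the comments below is the proper route.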

@Nazzaroth2

Not sure if it is exactly the same issue and solution, as your error output is a lot longer than mine, but the error code at least is the same, so you can try this:
If you are not on PyTorch 2.5.1, uninstall your current torch version and install the latest one. You may also have to upgrade your xformers version. If you are on Windows, you can use the install instructions below to get the newest build compiled against 2.5.1 (see the version check after the link):
https://github.com/facebookresearch/xformers#installing-xformers
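
For reference, a quick way to check which versions are currently installed (a minimal sketch; the 2.5.1 requirement is per the suggestion above, not something verified here):

import torch
try:
    import xformers
    xformers_version = xformers.__version__
except ImportError:
    xformers_version = "not installed"
# The suggestion above expects torch 2.5.1 with a matching xformers build.
print("torch:", torch.__version__, "| xformers:", xformers_version)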

@danielhanchen added the currently fixing (Am fixing now!) and URGENT BUG (Urgent bug) labels on Nov 25, 2024
@danielhanchen
Contributor

I need to confirm whether Torch 2.4 works - I've only tested it on Colab with 2.5. I might add a flag to turn it on / off based on the torch version.
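
A rough sketch of what such a version gate could look like (the flag name here is hypothetical, not an actual Unsloth setting):

import torch
from packaging.version import Version

# Hypothetical flag: only compile the mllama vision modules on torch >= 2.5.0,
# otherwise leave them as plain eager forwards.
UNSLOTH_COMPILE_VISION = Version(torch.__version__.split("+")[0]) >= Version("2.5.0")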

@danielhanchen added the fixed - pending confirmation (Fixed, waiting for confirmation from poster) label and removed the currently fixing (Am fixing now!) label on Nov 26, 2024
@danielhanchen
Contributor

@adi7820 @Nazzaroth2 Apologies, just fixed it! Sorry for the delay! Please update Unsloth via the following (or rerun Colab / Kaggle):

pip uninstall unsloth unsloth-zoo -y
pip install --upgrade --no-cache-dir --no-deps unsloth unsloth-zoo
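
After reinstalling, the installed versions can be confirmed with, for example:

from importlib.metadata import version
# Both distributions should resolve to the freshly reinstalled releases.
print(version("unsloth"), version("unsloth-zoo"))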
