Hi All,
Thank you to the contributors for making Chronos available to the community. I am trying to fine-tune amazon/chronos-t5-tiny using PEFT (LoRA). After running Trainer.train() I want to call ChronosPipeline.predict, but that method does not take a model instance directly; it is a method of ChronosPipeline, which in turn must be loaded from either a local path or a Hugging Face model id. If I save the fine-tuned model with model.save_pretrained("./my_model2"), the directory contains only adapter_config.json and adapter_model.safetensors.

I created a custom ChronosPipeline subclass so that from_pretrained() reads adapter_config.json (otherwise it looks for config.json and fails to load the PEFT model from the locally saved directory). But this then fails at the line assert hasattr(config, "chronos_config"), "Not a Chronos config file", because adapter_config.json has no "chronos_config" entry. Please find my custom implementation below:
class CustomChronosPipeline(ChronosPipeline):
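    # (Illustrative sketch, not my exact code: read adapter_config.json manually,
    #  because the PEFT save directory has no config.json for the parent class to find.)
    @classmethod
    def from_pretrained(cls, model_path, *args, **kwargs):
        import json
        import os
        from types import SimpleNamespace

        # load the PEFT adapter config in place of the usual config.json
        with open(os.path.join(model_path, "adapter_config.json")) as f:
            config = SimpleNamespace(**json.load(f))

        # this is the check that fails: adapter_config.json has no "chronos_config" key
        assert hasattr(config, "chronos_config"), "Not a Chronos config file"
        ...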
To work around this, I tried to write my own predict method and use it inside the generate_sample_forecasts() method of your evaluate.py. Since predict() mainly needs the tokenizer and the model config (which it normally gets from ChronosPipeline), I pass my model and tokenizer directly into the custom predict method, like below:
def predict(
    context: Union[torch.Tensor, List[torch.Tensor]],
    model,
    tokenizer,
    prediction_length: Optional[int] = None,
    num_samples: Optional[int] = None,
    temperature: Optional[float] = None,
    top_k: Optional[int] = None,
    top_p: Optional[float] = None,
    limit_prediction_length: bool = True,
) -> torch.Tensor:
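    # (Condensed sketch of the body, eliding the chunking loop — it mirrors
    #  ChronosPipeline.predict, but uses the model and tokenizer passed in above
    #  instead of pipeline attributes. left_pad_and_stack_1D is the helper from
    #  the chronos package.)
    if isinstance(context, list):
        context = left_pad_and_stack_1D(context)
    if context.ndim == 1:
        context = context.unsqueeze(0)

    # tokenize the raw context into input ids, attention mask, and scale state
    token_ids, attention_mask, scale = tokenizer.context_input_transform(context)

    remaining = prediction_length
    # the call that raises the error in the traceback below
    samples = model(
        token_ids.to(model.device),
        attention_mask.to(model.device),
        min(remaining, model.config.chronos_config["prediction_length"]),
        num_samples,
        temperature,
        top_k,
        top_p,
    )
    return tokenizer.output_transform(samples.to(scale.device), scale)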
However, when I try to predict with the LoRA-trained PEFT model, I get the traceback below:
Traceback (most recent call last):
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/air/execution/_internal/event_manager.py", line 110, in resolve_future
    result = ray.get(future)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/_private/worker.py", line 2659, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/_private/worker.py", line 871, in get_objects
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::ImplicitFunc.train() (pid=79850, ip=127.0.0.1, actor_id=980037407f0e837f6f5ead7b01000000, repr=peft_trainer_chronos)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/tune/trainable/trainable.py", line 331, in train
    raise skipped from exception_cause(skipped)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/air/_internal/util.py", line 98, in run
    self._ret = self._target(*self._args, **self._kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 45, in <lambda>
    training_func=lambda: self._trainable_func(self.config),
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 250, in _trainable_func
    output = fn()
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/ray/tune/trainable/util.py", line 130, in inner
    return trainable(config, **fn_kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/autotune/trainers/peft_chronos.py", line 1075, in peft_trainer_chronos
    sample_forecasts = generate_sample_forecasts(
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/autotune/trainers/peft_chronos.py", line 418, in generate_sample_forecasts
    predict(context,
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/autotune/trainers/peft_chronos.py", line 376, in predict
    samples = model(
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/peft/peft_model.py", line 1785, in forward
    return self.base_model(
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/peft/tuners/tuners_utils.py", line 188, in forward
    return self.model.forward(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1702, in forward
    encoder_outputs = self.encoder(
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/shivanitomar/Documents/Implementations/FM_TUNE/fm-tune/fm_tune_env/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 997, in forward
    raise ValueError(
ValueError: You cannot specify both input_ids and inputs_embeds at the same time
I am not sure how inputs_embeds is being set by this call:
samples = model(
    token_ids.to(model.device),
    attention_mask.to(model.device),
    min(remaining, model.config.chronos_config["prediction_length"]),
    num_samples,
    temperature,
    top_k,
    top_p,
)
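For what it is worth, my best guess (only a guess — parameter names below are taken from peft's PeftModelForSeq2SeqLM.forward as I understand it, so please correct me if I have this wrong) is that, because model here is the PEFT-wrapped T5 rather than Chronos's own model wrapper, the positional call above is equivalent to:

# equivalent keyword form under my assumed PeftModelForSeq2SeqLM.forward signature
samples = model(
    input_ids=token_ids.to(model.device),
    attention_mask=attention_mask.to(model.device),
    inputs_embeds=min(remaining, model.config.chronos_config["prediction_length"]),  # <- clash
    decoder_input_ids=num_samples,
    decoder_attention_mask=temperature,
    decoder_inputs_embeds=top_k,
    labels=top_p,
)

If that is what is happening, it would explain why T5's encoder complains about receiving both input_ids and inputs_embeds.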
I am really struggling to make this work. Do you have any suggestions, or could you point me to any directions/resources where Chronos has been fine-tuned with LoRA? I would really appreciate your help here.
Replies: 2 comments

- Having the same issue here

- Has it been resolved?