
Add model format flexibility of AutoModel.from_pretrained() #28

Open
DavidFarago opened this issue May 29, 2024 · 1 comment


DavidFarago commented May 29, 2024

Since I cannot load models from the Hugging Face Hub (see #27), I am downloading models to a local directory. However, they come in either the format

config.json, generation_config.json, pytorch_model.bin.index.json,
pytorch_model-00001-of-00003.bin, pytorch_model-00002-of-00003.bin,
pytorch_model-00003-of-00003.bin, adapter, added_tokens.json,
special_tokens_map.json, tokenizer.model, tokenizer_config.json

or in the format

config.json, model.safetensors.index.json,
model-00001-of-00006.safetensors, model-00002-of-00006.safetensors,
model-00003-of-00006.safetensors, model-00004-of-00006.safetensors,
model-00005-of-00006.safetensors, model-00006-of-00006.safetensors

Could you either add the flexibility of AutoModel.from_pretrained() to wrapped_model.py, or explain how I can store my Hugging Face models locally in a format that wrapped_model.py can digest?
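
For reference, here is a minimal sketch (using only the standard `transformers` API, not this repository's code) of how a locally downloaded checkpoint can be normalized into a single on-disk format; the paths are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "path/to/downloaded/checkpoint"   # directory in either format above
dst = "path/to/normalized/checkpoint"   # single, predictable output layout

# AutoModel*.from_pretrained() transparently handles both sharded
# pytorch_model-*.bin and sharded model-*.safetensors checkpoints,
# which is exactly the flexibility requested for wrapped_model.py.
model = AutoModelForCausalLM.from_pretrained(src)
tokenizer = AutoTokenizer.from_pretrained(src)

# Re-save everything as safetensors so the layout is always the same,
# regardless of which format the Hub originally served.
model.save_pretrained(dst, safe_serialization=True)
tokenizer.save_pretrained(dst)
```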

@thegallier

Same with save_pretrained(). I did not see that functionality in the current code base and hence cannot leverage open-source packages.
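
For context, the round trip that other open-source tooling builds on looks like the sketch below; whether wrapped_model.py can produce or consume this layout is what both comments are asking for (the model name is a placeholder):

```python
from transformers import AutoModelForCausalLM

# Standard transformers round trip: save_pretrained() writes config.json
# plus the weight shards, and from_pretrained() reads them back.
model = AutoModelForCausalLM.from_pretrained("my-org/my-model")
model.save_pretrained("local-copy")
reloaded = AutoModelForCausalLM.from_pretrained("local-copy")
```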
