Issues running with gpt4-x-alpaca-native #5

Open
MarkSchmidty opened this issue Apr 1, 2023 · 2 comments

@MarkSchmidty
Contributor

This is a 13B full finetune, not a peft, trained using a large GPT-4 dataset on top of a previous full finetune of Alpaca (cleaned) 13B.

It can be found in GPTQ 4bit .pt format here: https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/tree/main

I ran into a lot of issues trying to get it to work. But I figure @manyoso can probably swap the above in for the Alpaca peft for a quick test and post the results. I'm curious to see how this one performs.

(The original gpt4-x-alpaca in 16-bit can be found on the creator's Hugging Face: https://huggingface.co/chavinlo/gpt4-x-alpaca)

@manyoso
Owner

manyoso commented Apr 1, 2023

What issues did you find? I can try to run it in the next week or so.

@MarkSchmidty
Contributor Author

Nothing major. I tried renaming it to llama7b and launching it as llama7b, but the model needs to be prompted like Alpaca, and the Alpaca path currently expects a peft. I'm not quite sure how to do that.
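
For reference, a minimal sketch of the standard Alpaca instruction template that Alpaca-style finetunes (including gpt4-x-alpaca) generally expect. This is not this repo's API; the wrapper the project applies may differ, and the `build_prompt` helper here is hypothetical:

```python
# Sketch of the standard Alpaca (no-input) instruction prompt.
# ALPACA_TEMPLATE mirrors the prompt format used to train Stanford Alpaca;
# build_prompt() is a hypothetical helper, not part of this repo.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

if __name__ == "__main__":
    # Example: the model would then generate text after "### Response:".
    print(build_prompt("Explain what a GPTQ 4-bit checkpoint is."))
```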
