
load_in_8bit_fp32_cpu_offload=True #39

Open
thibaudart opened this issue Apr 18, 2023 · 4 comments

Comments

@thibaudart

Any idea how to solve this:

Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom
device_map to from_pretrained. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.

I have 48 GB of VRAM; the GPU RAM must be enough!

@TsuTikgiau
Collaborator

48 GB of GPU RAM should be enough to run the demo without 8-bit. Can you set low_resource to False in eval_configs/minigpt4_eval.yaml and check whether you still have this issue?
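
For reference, this is roughly what that change looks like, assuming low_resource sits under the model section of eval_configs/minigpt4_eval.yaml as in the default MiniGPT-4 config (adjust to your local file):

model:
  # ... other settings unchanged ...
  low_resource: False  # load in fp16 on the GPU instead of the 8-bit low-memory mode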

@vrunm

vrunm commented May 2, 2023

I have followed the code given in the Hugging Face docs:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b", device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained("AlekseyKorshuk/vicuna-7b")

Getting this error:

TypeError: __init__() got an unexpected keyword argument 'load_in_8bit_fp32_cpu_offload'

@ryzn0518

Try passing the device_map you defined instead of 'auto':

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b",device_map=device_map, quantization_config=quantization_config)
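
For anyone hitting the same TypeError, here is a minimal end-to-end sketch under those assumptions (a recent transformers release with bitsandbytes installed). Note that the module names in the custom device_map are taken from the Hugging Face docs example quoted above (a BLOOM-style model) and would need to match the layer names of the model you actually load:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 8-bit quantization, keeping any CPU/disk-offloaded modules in fp32
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Custom map: most modules on GPU 0, lm_head kept in fp32 on the CPU
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

model = AutoModelForCausalLM.from_pretrained(
    "AlekseyKorshuk/vicuna-7b",
    device_map=device_map,  # pass the custom map, not "auto"
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("AlekseyKorshuk/vicuna-7b")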

@mirajdeepbhandari

mirajdeepbhandari commented Mar 10, 2024

I solved that error like this; you can do the same for your model.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load model and tokenizer
quantization_config = BitsAndBytesConfig(load_in_8bit_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", quantization_config=quantization_config)
model = PeftModel.from_pretrained(model, "mirajbhandari/mistral-7b-chat-finetune", device_map="auto")

tokenizer = AutoTokenizer.from_pretrained("mirajbhandari/mistral-7b-chat-finetune")
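
In case it is useful, a quick smoke test with the model and tokenizer loaded above (the prompt and generation settings are arbitrary; assumes a CUDA GPU is available):

# Encode a prompt, generate a short reply, and decode it
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))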
