I tried to use the transformer directly, but I ran out of memory. So I attempted to load the transformer in 4-bit precision, but generation then failed with an argument-mismatch TypeError. Here are the code and the error:
code -

!pip install --upgrade transformers bitsandbytes accelerate

from transformers import AutoTokenizer, AutoModel, BitsAndBytesConfig
import torch
import os

# Set the allocator option before any CUDA allocation happens
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Offload folder for weights that do not fit in GPU memory
offload_folder = "/content/sample_data/offload_weights"

# Define the 4-bit quantization configuration
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Model checkpoint path
ckpt_path = "Mar2Ding/songcomposer_sft"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)

# Load model with 4-bit quantization; device_map="auto" already places the
# weights, so the quantized model must not be moved again with .to()
model = AutoModel.from_pretrained(
    ckpt_path,
    device_map="auto",
    offload_folder=offload_folder,
    trust_remote_code=True,
    quantization_config=quantization_config,
)

# Prepare inputs on the same device as the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
prompt = 'Create a song on bravery and sacrifice with a rapid pace.'
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generate tokens
with torch.no_grad():
    generated_tokens = model.inference(
        inputs["input_ids"],
        pad_token_id=tokenizer.eos_token_id,  # prevent pad/eos token conflicts
    )

# Decode and print the output
output = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(output)
error -

TypeError                                 Traceback (most recent call last)
in <cell line: 29>()
     27 prompt = ' Total 7 lines. The first line:可,,<137>,<79>|惜,<D#4>,<137>,<79>|这,,<137>,<88>|是,,<121>,<79>|属,,<121>,<79>|于,<D#4>,<214>,<88>|你,<D#4>,<141>,<79>|的,,<130>,<79>|风,,<151>,<79>|景,<A#3> ,<181><137>,<79>\n'
     28 inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
---> 29 outputs = model.generate(**inputs, max_new_tokens=50)
     30 result = tokenizer.decode(outputs[0], skip_special_tokens=True)
     31

16 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
   1745                 or _global_backward_pre_hooks or _global_backward_hooks
   1746                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747             return forward_call(*args, **kwargs)
   1748
   1749         result = None

TypeError: Linear4bit.forward() takes 2 positional arguments but 3 were given
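The signature in the last frame is the clue: Linear4bit.forward() accepts only the input tensor, so this error typically means the checkpoint's custom (trust_remote_code) layers pass a second positional argument to a linear layer that bitsandbytes has replaced with Linear4bit. If 4-bit quantization keeps conflicting with the custom code, a minimal fallback sketch is to load in float16 with accelerate-managed CPU offload instead; this reuses the ckpt_path and offload_folder from the code above and avoids bitsandbytes entirely, at the cost of slower generation for offloaded layers.

from transformers import AutoTokenizer, AutoModel
import torch

ckpt_path = "Mar2Ding/songcomposer_sft"
offload_folder = "/content/sample_data/offload_weights"

tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)

# Half precision roughly halves the memory footprint versus float32, and
# device_map="auto" lets accelerate spill layers that do not fit onto the CPU.
model = AutoModel.from_pretrained(
    ckpt_path,
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder=offload_folder,
    trust_remote_code=True,
)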
I appreciate your help; it worked. But can this generate a song in audio format? I am asking because when I run the code above, it only generates text. If you have a program that produces an actual audio song, please share it.
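The model itself only emits symbolic music: tuples of lyric, pitch name, duration, and rest, like the prompt above. To get audio you have to render those tuples yourself. Below is a minimal sketch that parses one generated line into a MIDI file with pretty_midi; the tuple layout and the treatment of the <N> values as durations in milliseconds are assumptions read off the prompt format, so adjust the time scaling as needed. Once you have the MIDI file, a soundfont plus FluidSynth (for example via PrettyMIDI.fluidsynth or the midi2audio package) turns it into a WAV.

import re
import pretty_midi

def song_text_to_midi(line, out_path="song.mid", ms_per_unit=1.0):
    # Assumed tuple layout per "|"-separated entry: lyric, pitch, duration, rest.
    pm = pretty_midi.PrettyMIDI()
    inst = pretty_midi.Instrument(program=0)  # acoustic grand piano
    t = 0.0
    for entry in line.strip().split("|"):
        parts = [p.strip() for p in entry.split(",")]
        if len(parts) < 4:
            continue
        _lyric, pitch, dur, rest = parts[:4]
        # <181><137>-style runs are summed; empty fields contribute nothing.
        dur_ms = sum(int(n) for n in re.findall(r"<(\d+)>", dur))
        rest_ms = sum(int(n) for n in re.findall(r"<(\d+)>", rest))
        names = re.findall(r"<([A-G]#?\d)>", pitch)
        if names and dur_ms:
            end = t + dur_ms * ms_per_unit / 1000.0
            inst.notes.append(pretty_midi.Note(
                velocity=100,
                pitch=pretty_midi.note_name_to_number(names[0]),
                start=t,
                end=end,
            ))
            t = end
        t += rest_ms * ms_per_unit / 1000.0
    pm.instruments.append(inst)
    pm.write(out_path)

song_text_to_midi('可,,<137>,<79>|惜,<D#4>,<137>,<79>|这,,<137>,<88>')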