Update README.md (#83)
a32543254 authored Jan 22, 2024
1 parent 12a17ee · commit abcc0f4
Showing 1 changed file with 1 addition and 1 deletion.
README.md:
````diff
@@ -41,7 +41,7 @@ streamer = TextStreamer(tokenizer)
 model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
 outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
 ```
->**Note**: For llama2/ mistral/ neural_chat/ codellama/ magicoder models, we can only support the local path to model for now.
+>**Note**: For llama2/ mistral/ neural_chat/ codellama/ magicoder/ chatglmv1/v2/ baichuan models, we can only support the local path to model for now.
 GGUF format HF model
 ```python
 from transformers import AutoTokenizer, TextStreamer
````
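For context, the hunk shows only a fragment of the README's 4-bit loading example; the imports and prompt setup sit above line 41 of the file and are not part of this diff. A minimal self-contained sketch of the surrounding code, assuming stock `transformers` (where `load_in_4bit=True` requires the `bitsandbytes` package; this project may instead ship its own `AutoModelForCausalLM` wrapper, which the hunk does not show), with the model path and prompt as illustrative placeholders:

```python
# Hedged reconstruction of the truncated README example.
# The import source, model path, and prompt are assumptions, not shown in the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name = "/path/to/llama2-model"  # hypothetical local path, per the note above
prompt = "Once upon a time"

tokenizer = AutoTokenizer.from_pretrained(model_name)
streamer = TextStreamer(tokenizer)  # prints decoded tokens to stdout as they are generated
inputs = tokenizer(prompt, return_tensors="pt").input_ids

# load_in_4bit=True requests 4-bit weight quantization at load time
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```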

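The hunk is also cut off immediately after the import line of the "GGUF format HF model" example, so its body is not visible here. As a hedged sketch of the same idea, not necessarily this project's API: stock `transformers` (v4.41+) can load a GGUF checkpoint directly through the `gguf_file` argument, with the repo and file names below as illustrative assumptions:

```python
# Sketch of GGUF loading with stock transformers (>=4.41); not taken from this README.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "TheBloke/Llama-2-7B-Chat-GGUF"  # illustrative GGUF repo
gguf_file = "llama-2-7b-chat.Q4_K_M.gguf"   # illustrative quantized file

tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=gguf_file)
streamer = TextStreamer(tokenizer)
inputs = tokenizer("Once upon a time", return_tensors="pt").input_ids

# transformers dequantizes the GGUF weights into a regular torch model on load
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=gguf_file)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```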