- text-generation-webui from https://github.com/oobabooga/text-generation-webui (found under `/opt/text-generation-webui`)
- includes CUDA-optimized model loaders for:
  - llama.cpp
  - exllama2
  - AutoGPTQ
  - transformers
- see the tutorial at the Jetson Generative AI Lab
Warning
If you're using the llama.cpp loader, the model format has changed from GGML to GGUF. Existing GGML models can be converted using the `convert-llama-ggmlv3-to-gguf.py` script in llama.cpp (or you can often find GGUF conversions on HuggingFace Hub).
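For example, a minimal conversion sketch run from a llama.cpp source checkout (the input/output paths below are placeholders, and the exact flag names can vary between llama.cpp versions):
python3 convert-llama-ggmlv3-to-gguf.py \
  --input /data/models/text-generation-webui/llama-2-13b-chat.ggmlv3.q4_K_M.bin \
  --output /data/models/text-generation-webui/llama-2-13b-chat.Q4_K_M.gguf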
This container has a default run command that will automatically start the webserver like this:
cd /opt/text-generation-webui && python3 server.py \
--model-dir=/data/models/text-generation-webui \
--listen --verbose
To launch the container, run the command below, and then navigate your browser to http://HOSTNAME:7860
./run.sh $(./autotag text-generation-webui)
While the server and models are dynamically configurable from within the webui at runtime, see the text-generation-webui documentation for its optional command-line settings.
For example, after you've downloaded a model, you can load it directly at startup like so:
./run.sh $(./autotag text-generation-webui) /bin/bash -c \
"cd /opt/text-generation-webui && python3 server.py \
--model-dir=/data/models/text-generation-webui \
--model=llama-2-13b-chat.Q4_K_M.gguf \
--loader=llamacpp \
--n-gpu-layers=128 \
--listen --chat --verbose"
See the text-generation-webui documentation for instructions on downloading models - you can do this from within the webui, or by running its download-model.py script:
./run.sh --workdir=/opt/text-generation-webui $(./autotag text-generation-webui) /bin/bash -c \
'python3 download-model.py --output=/data/models/text-generation-webui TheBloke/Llama-2-7b-Chat-GPTQ'
This will download the specified model from HuggingFace Hub and place it under the `/data/models/text-generation-webui` mounted directory (which is where you should store models so they aren't lost when the container exits).
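Once downloaded, the model appears as a folder under that directory (download-model.py replaces the / in the repo name with _) and can be selected from the webui's Model tab, or loaded at startup. A sketch reusing the flags shown above (the folder name below assumes that naming convention):
./run.sh $(./autotag text-generation-webui) /bin/bash -c \
  "cd /opt/text-generation-webui && python3 server.py \
  --model-dir=/data/models/text-generation-webui \
  --model=TheBloke_Llama-2-7b-Chat-GPTQ \
  --listen --verbose"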
- The fastest model loader to use is currently llama.cpp with 4-bit quantized GGUF models
- Remember to set `n-gpu-layers` to 128 in the loader settings (see the example command after this list)
- If you're using Llama-2-70B, set `n_gqa` to 8 (otherwise an error will occur)
- Tested using the recommended `Q4_K_M` model quantizations
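For example, a startup sketch for a 70B chat model with those settings applied (the model filename is a placeholder, and the `--n_gqa` flag name is an assumption that may differ between webui versions):
./run.sh $(./autotag text-generation-webui) /bin/bash -c \
  "cd /opt/text-generation-webui && python3 server.py \
  --model-dir=/data/models/text-generation-webui \
  --model=llama-2-70b-chat.Q4_K_M.gguf \
  --loader=llamacpp \
  --n-gpu-layers=128 \
  --n_gqa=8 \
  --listen --verbose"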
- Unless you loaded a model fine-tuned for chat, use text completion mode in the `Default` or `Notebook` tab
- If you're using a Llama-2 chat model, use the `Instruct` chat mode and set the Instruction Template to `Llama-v2` (in the `Parameters` tab) - this will make sure the correct chat prompt format is being used for Llama-2
- You can alter your agent's behavior/personality/etc. by tweaking this prompt template (the `Answer the questions.` part) - see the template sketch after this list
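For reference, the `Llama-v2` instruction template produces a prompt roughly in the following layout, where the system message is the part you would tweak and {prompt} stands in for the user's message:
[INST] <<SYS>>
Answer the questions.
<</SYS>>

{prompt} [/INST]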
>> What games do you like to play?
I'm a large language model, so I can play text-based games and answer questions on a wide variety of topics. Here are some game ideas:
- 20 Questions: You or I can think of an object, and the other player tries to guess what it is by asking up to 20 yes-or-no questions. The goal is to guess the object in as few questions as possible.
- Hangman: I can think of a word or phrase and you try to guess it by suggesting letters. Each incorrect guess will bring the hangman one step closer to being complete!
- Word association: I'll give you a word and you have to come up with a word that is related to it. We can keep playing back and forth as long as you'd like!
- Trivia: I can provide questions on a wide range of topics, from history to science to pop culture. See how many questions you can answer correctly!
- Storytelling: I can start telling a story and then stop at a cliffhanger. You can then try to guess what happens next or even take over the storytelling and continue it in your own
- Jokes: I love to tell jokes and make people laugh with my "Dad humor"! Knock knock! *giggles*
CONTAINERS
text-generation-webui:main | |
---|---|
Aliases | text-generation-webui |
Builds | |
Requires | L4T >=34.1.0 |
Dependencies | build-essential cuda cudnn python tensorrt numpy cmake onnx pytorch torchvision huggingface_hub rust transformers auto_gptq gptq-for-llama exllama:v1 exllama:v2 llama_cpp:gguf bitsandbytes |
Dockerfile | Dockerfile |
Images | dustynv/text-generation-webui:main-r36.2.0 (2023-12-18, 8.1GB) |
text-generation-webui:1.7 | |
---|---|
Requires | L4T >=34.1.0 |
Dependencies | build-essential cuda cudnn python tensorrt numpy cmake onnx pytorch torchvision huggingface_hub rust transformers auto_gptq gptq-for-llama exllama:v1 exllama:v2 llama_cpp:gguf bitsandbytes |
Dockerfile | Dockerfile |
Images | dustynv/text-generation-webui:1.7-r35.4.1 (2023-12-05, 6.4GB) |
text-generation-webui:6a7cd01 | |
---|---|
Requires | L4T >=34.1.0 |
Dependencies | build-essential cuda cudnn python tensorrt numpy cmake onnx pytorch torchvision huggingface_hub rust transformers auto_gptq gptq-for-llama exllama:v1 exllama:v2 llama_cpp:gguf bitsandbytes |
Dockerfile | Dockerfile |
CONTAINER IMAGES
Repository/Tag | Date | Arch | Size |
---|---|---|---|
dustynv/text-generation-webui:1.7-r35.4.1 | 2023-12-05 | arm64 | 6.4GB |
dustynv/text-generation-webui:main-r36.2.0 | 2023-12-18 | arm64 | 8.1GB |
dustynv/text-generation-webui:r35.2.1 | 2023-12-18 | arm64 | 6.5GB |
dustynv/text-generation-webui:r35.3.1 | 2023-12-21 | arm64 | 6.5GB |
dustynv/text-generation-webui:r35.4.1 | 2023-12-21 | arm64 | 6.4GB |
dustynv/text-generation-webui:r36.2.0 | 2023-12-21 | arm64 | 8.1GB |
Container images are compatible with other minor versions of JetPack/L4T:
• L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
• L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
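If you're not sure which L4T release your device is running, one common way to check on a stock JetPack install is:
cat /etc/nv_tegra_release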
RUN CONTAINER
To start the container, you can use the `run.sh`/`autotag` helpers or manually put together a `docker run` command:
# automatically pull or build a compatible container image
./run.sh $(./autotag text-generation-webui)
# or explicitly specify one of the container images above
./run.sh dustynv/text-generation-webui:r36.2.0
# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/text-generation-webui:r36.2.0
`run.sh` forwards arguments to `docker run` with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices)
`autotag` finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
To mount your own directories into the container, use the `-v` or `--volume` flags:
./run.sh -v /path/on/host:/path/in/container $(./autotag text-generation-webui)
To launch the container running a command, as opposed to an interactive shell:
./run.sh $(./autotag text-generation-webui) my_app --abc xyz
You can pass any options to `run.sh` that you would to `docker run`, and it'll print out the full command that it constructs before executing it.
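For example, forwarding an environment variable through to `docker run` (the variable name here is just a placeholder):
./run.sh --env HUGGINGFACE_TOKEN=<your-token> $(./autotag text-generation-webui)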
BUILD CONTAINER
If you use `autotag` as shown above, it'll ask to build the container for you if needed. To manually build it, first do the system setup, then run:
./build.sh text-generation-webui
The dependencies from above will be built into the container, and it'll be tested during the build. See `./build.sh --help` for build options.