diff --git a/README.md b/README.md
index 93baf04..07e0a8b 100644
--- a/README.md
+++ b/README.md
@@ -3,8 +3,8 @@ PyTorch implementation of Stable Diffusion from scratch
 
 ## Download weights and tokenizer files:
 
-1. Download `vocab.json` and `merges.txt` from https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main/tokenizer and save them in the `data` folder
-2. Download `v1-5-pruned-emaonly.ckpt` from https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main and save it in the `data` folder
+1. Download `vocab.json` and `merges.txt` from https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main/tokenizer and save them in the `data` folder
+2. Download `inkpunk-diffusion-v1.ckpt` from https://huggingface.co/Envvi/Inkpunk-Diffusion/tree/main and save it in the `data` folder
 
 ## Tested fine-tuned models:
 
diff --git a/sd/demo.ipynb b/sd/demo.ipynb
index 5eb9d56..5b9a0d8 100644
--- a/sd/demo.ipynb
+++ b/sd/demo.ipynb
@@ -28,7 +28,7 @@
     "print(f\"Using device: {DEVICE}\")\n",
     "\n",
     "tokenizer = CLIPTokenizer(\"../data/vocab.json\", merges_file=\"../data/merges.txt\")\n",
-    "model_file = \"../data/v1-5-pruned-emaonly.ckpt\"\n",
+    "model_file = \"../data/inkpunk-diffusion-v1.ckpt\"\n",
    "models = model_loader.preload_models_from_standard_weights(model_file, DEVICE)\n",
     "\n",
     "## TEXT TO IMAGE\n",
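
For anyone trying the change outside the notebook, the touched cell boils down to the sketch below. Only the tokenizer construction, the checkpoint path, and the `preload_models_from_standard_weights` call appear in the diff; the imports (`CLIPTokenizer` from `transformers`, the repo's own `model_loader` module) and the CUDA device check are assumptions about the surrounding notebook code.

```python
# Minimal sketch of the modified notebook cell, extracted as a script.
# Assumptions (not shown in the diff): CLIPTokenizer comes from Hugging Face
# `transformers`, `model_loader` is this repo's module importable when running
# from the sd/ folder, and DEVICE is chosen via a simple CUDA check.
import torch
from transformers import CLIPTokenizer

import model_loader  # repo module, assumed importable from sd/

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {DEVICE}")

# Tokenizer files downloaded in step 1 of the README
tokenizer = CLIPTokenizer("../data/vocab.json", merges_file="../data/merges.txt")

# Fine-tuned Inkpunk Diffusion checkpoint downloaded in step 2 of the README
model_file = "../data/inkpunk-diffusion-v1.ckpt"
models = model_loader.preload_models_from_standard_weights(model_file, DEVICE)
```

The fine-tuned checkpoint is loaded exactly like the original SD v1 weights, which is why only the file path changes in the notebook.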