<!--Copyright 2024 The HuggingFace Team, The Black Forest Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# FluxControlInpaint

FluxControlInpaintPipeline implements inpainting for the Flux.1 Depth and Canny models. Given an input image, a mask, and a control image (a depth map or Canny edge map), it returns the inpainted image.

FLUX.1 Depth and Canny [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image. **This is not a ControlNet model**.

| Control type | Developer | Link |
| -------- | ---------- | ---- |
| Depth | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev) |
| Canny | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev) |

<Tip>

Flux can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details. Additionally, Flux can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to [this blog post](https://huggingface.co/blog/quanto-diffusers) to learn more. For an exhaustive list of resources, check out [this gist](https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c).

</Tip>
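
For instance, a minimal sketch of quantizing the transformer to 4-bit at load time with bitsandbytes (assuming a diffusers version that ships `BitsAndBytesConfig` and an environment with `bitsandbytes` installed) could look like this:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# quantize the transformer to 4-bit NF4 at load time to cut memory use;
# the exact settings here are illustrative, not tuned recommendations
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```

The example below takes a related route, loading pre-quantized NF4 checkpoints instead of quantizing on the fly.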

```python
import torch
import numpy as np
from PIL import Image
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor  # https://github.com/huggingface/image_gen_aux

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16,
)
# use the following lines if you have GPU memory constraints
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")  # skip this if you enabled model CPU offloading above

prompt = "a blue robot singing opera with human-like expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# build a rectangular mask over the robot's head (white = region to inpaint)
head_mask = np.zeros_like(image)
head_mask[65:580, 300:642] = 255
mask_image = Image.fromarray(head_mask)

# estimate a depth map from the input image to use as the control image
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=0.9,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save("output.png")
```
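
The Canny checkpoint works the same way; only the control-image preprocessor changes. A minimal sketch, assuming the `controlnet_aux` package for edge extraction (the thresholds are illustrative):

```python
import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlInpaintPipeline
from diffusers.utils import load_image

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# extract Canny edges to use as the control image
processor = CannyDetector()
control_image = processor(image, low_threshold=50, high_threshold=200)
```

From here, `image`, `mask_image`, and `control_image` are passed to the pipeline exactly as in the depth example above.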

## FluxControlInpaintPipeline

[[autodoc]] FluxControlInpaintPipeline
	- all
	- __call__

## FluxPipelineOutput

[[autodoc]] pipelines.flux.pipeline_output.FluxPipelineOutput
# DreamBooth training example for SANA

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3~5) images of a subject.

The `train_dreambooth_lora_sana.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [SANA](https://arxiv.org/abs/2410.10629). It also allows us to push the trained model parameters to the Hugging Face Hub.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies.

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run:

```bash
pip install -r requirements_sana.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or, for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or, if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can yield dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure you have `peft>=0.14.0` installed in your environment.

### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

Now, we can launch training using:

```bash
export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_diffusers"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-sana-lora"

accelerate launch train_dreambooth_lora_sana.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

To use `push_to_hub`, make sure you're logged in to your Hugging Face account:

```bash
huggingface-cli login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` allow the script to do a few validation inference runs, letting us qualitatively check whether training is progressing as expected.

## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string, e.g. `"to_k,to_q,to_v"` will result in LoRA training of the attention layers only.
* `--complex_human_instruction`: Instructions for complex human attention, as shown [here](https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55).
* `--max_sequence_length`: Maximum sequence length to use for text embeddings.

We also provide several options for reducing memory usage:

* `--offload`: When enabled, the text encoder and VAE are offloaded to the CPU when they are not in use.
* `--cache_latents`: When enabled, the latents for the input images are pre-computed with the VAE, and the VAE is removed from memory once done.
* `--use_8bit_adam`: When enabled, the 8-bit version of AdamW provided by the `bitsandbytes` library is used.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to learn more about the models available under the SANA family and their preferred dtypes during inference.
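
After training, a minimal inference sketch might look like the following; the dtype choice here is an assumption, so check the documentation linked above for each checkpoint's preferred dtype:

```python
import torch
from diffusers import SanaPipeline

# base model matches the training command above
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.float16
)
# load the LoRA from the local output directory, or from your Hub
# repo id if you trained with --push_to_hub
pipe.load_lora_weights("trained-sana-lora")
pipe.to("cuda")

image = pipe(prompt="A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```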
The `requirements_sana.txt` file lists the following dependencies:

```
accelerate>=1.0.0
torchvision
transformers>=4.47.0
ftfy
tensorboard
Jinja2
peft>=0.14.0
sentencepiece
```