
Flux lora train selects Flux.1-dev, but it will download files such as text_encoder in FLUX.1-schnell instead of the corresponding files in Flux.1-dev #204

Open
wen313 opened this issue Oct 26, 2024 · 1 comment


wen313 commented Oct 26, 2024

```python
if os.path.exists(transformer_path):
```

(screenshot of the surrounding code)

The handling here differs from the `is_v3` branch above, which causes the text_encoder folders to be downloaded from FLUX.1-schnell instead of the selected model. Could we also add:

```python
else:
    # is remote, use whatever path we were given
    base_model_path = model_path
```
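For context, a minimal sketch of the path resolution being proposed; the variable names `transformer_path` and `base_model_path` come from the snippet above, and the surrounding function is a hypothetical reconstruction, not the repo's actual code:

```python
import os

def resolve_base_model_path(model_path: str, transformer_path: str) -> str:
    """Decide where auxiliary components (text_encoder, scheduler, ...) load from.

    Hypothetical reconstruction of the logic discussed in this issue.
    """
    if os.path.exists(transformer_path):
        # Local checkpoint: load components from the local directory.
        base_model_path = os.path.dirname(transformer_path)
    else:
        # Remote repo: use whatever path we were given, so text_encoder and
        # friends come from the selected repo (e.g. FLUX.1-dev), not a
        # hard-coded fallback.
        base_model_path = model_path
    return base_model_path
```

With the `else` branch in place, selecting FLUX.1-dev would keep all component downloads pointed at the FLUX.1-dev repo.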

I also want to confirm whether the text_encoder files in FLUX.1-schnell and FLUX.1-dev are identical. Would using the FLUX.1-schnell files affect the training results?

(screenshot of the downloaded files)


emabellor commented Nov 14, 2024

It's interesting: a few weeks ago I ran some tests with the flux-dev trainer and noticed the program was downloading files from both FLUX.1-dev and FLUX.1-schnell. After updating the repo and testing again, the program now downloads only FLUX.1-dev files.

I checked the commits, and the issue seems to be fixed in commit 4aa19b5. However, in my latest tests my LoRAs come out with different quality, and I needed to revert to the previous commit.

If you compare the files between FLUX.1-dev and FLUX.1-schnell, excluding the transformer folder, the only folder that differs is the scheduler. So previously the trainer was using the scheduler from FLUX.1-schnell, and now it's using the scheduler from FLUX.1-dev.
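To verify that comparison yourself, a small helper that diffs two `scheduler_config.json` dicts key by key; the config values shown are illustrative placeholders, not the actual contents of either repo (download the real files from each repo to compare):

```python
import json

def diff_configs(a: dict, b: dict) -> dict:
    """Return {key: (value_in_a, value_in_b)} for every key whose values differ."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in sorted(keys) if a.get(k) != b.get(k)}

# Illustrative values only -- not copied from the actual repos.
dev_cfg = {"_class_name": "FlowMatchEulerDiscreteScheduler", "shift": 3.0}
schnell_cfg = {"_class_name": "FlowMatchEulerDiscreteScheduler", "shift": 1.0}

print(json.dumps(diff_configs(dev_cfg, schnell_cfg)))
```

Any key reported here (e.g. a different shift setting) would change how timesteps are scheduled during training, which could explain the quality difference between the two commits.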

Will the different scheduler configuration affect the training? If that's the case, does that mean the training was wrong previously and is now behaving as expected? People seemed to be getting good results with the FLUX.1-schnell scheduler on FLUX.1-dev, so I wonder whether the trainer should keep using the scheduler from FLUX.1-schnell.

Just curious.
