
What model was used for the anime results? #92

Closed
vedantroy opened this issue Oct 17, 2023 · 14 comments

@vedantroy

For this result:
[image]

What custom model was used?

@williamyang1991
Owner

@vedantroy
Author

vedantroy commented Oct 17, 2023

Thanks for the quick response -- how did you manage to load this LoRA into your method?
Did you use the method mentioned here: #39 (comment)?

More generally, do you mind posting the controlnet weight + prompts you used for this example?
[image]

I am trying to replicate this result as a starting point, but am having a difficult time doing so.
I understand you are probably busy, so no worries if this isn't possible at the moment.

@williamyang1991
Owner

> Did you use the method mentioned here: #39

Yes

> the controlnet weight + prompts you used for this example?

base model: Counterfeit
lora model: Ghibli
prompt: "ghibli style, a handsome man"
controlnet strength: 0.7
denoising strength: 0.25
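
A minimal sketch of sanity-checking the merged model in plain diffusers before running it through rerender.py (this is an illustration, not the repository's loading code; the model directory is a placeholder for the merged Counterfeit + Ghibli weights produced by the #39 conversion, and the controlnet/denoising strengths above only take effect inside Rerender_A_Video):

# Sanity check (illustrative sketch): load the merged Counterfeit + Ghibli
# model from a placeholder directory and confirm it generates in the
# expected Ghibli style before using it as sd_model in a Rerender config.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./models/counterfeit_ghibli",  # placeholder: merged base + LoRA directory
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")

image = pipe("ghibli style, a handsome man", num_inference_steps=20).images[0]
image.save("ghibli_check.png")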

@zoeouyang2543

Hello,
For these results:
1. a cartoon tiger
2. a traditional mountain in chinese ink wash painting
3. Hermione Granger
What custom models were used? Do you mind posting the controlnet weights + prompts you used for these examples?

@williamyang1991
Owner

williamyang1991 commented May 22, 2024

> Hello, For these results: 1. a cartoon tiger 2. a traditional mountain in chinese ink wash painting 3. Hermione Granger What custom models were used? Do you mind posting the controlnet weights + prompts you used for these examples?

Please refer to https://drive.google.com/file/d/1HkxG5eiLM_TQbbMZYOwjDbd5gWisOy4m/view?usp=sharing

[image]

@zoeouyang2543

Hello,
I converted the model and LoRA from Civitai into a format that diffusers can read:
python ./diffusers-0.20.2/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path ./Rerender_A_Video/models/counterfeitV30_v30.safetensors --dump_path ./Rerender_A_Video/models/Counterfeit --from_safetensors
python ./diffusers-0.20.2/scripts/convert_lora_safetensor_to_diffusers.py --base_model_path ./stable-diffusion-v1-5/ --checkpoint_path ./Rerender_A_Video/models/moxin.safetensors --dump_path ./Rerender_A_Video/models/moxin
Then I load them by running: python ./Rerender_A_Video/rerender.py --cfg ./Rerender_A_Video/config/real2sculpture.json
But the results are not the ones shown in the project. Can your results only be reproduced with the webui? Please advise.
The contents of real2sculpture.json are:
{
  "input": "./Rerender_A_Video/videos/pexels_antoni_shkraba_8048492_540x960_25fps.mp4",
  "output": "./Rerender_A_Video/videos/man/blend.mp4",
  "work_dir": "./Rerender_A_Video/videos/man",
  "key_subdir": "keys",
  "sd_model": "./Rerender_A_Video/models/countfeit_ghibli",
  "frame_count": 102,
  "interval": 10,
  "crop": [0, 180, 0, 0],
  "prompt": "ghibli style, a handsome man",
  "a_prompt": "RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
  "n_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation",
  "x0_strength": 0.95,
  "control_type": "canny",
  "canny_low": 50,
  "canny_high": 100,
  "control_strength": 0.7,
  "seed": 0,
  "warp_period": [0, 0.1],
  "ada_period": [0.8, 1],
  "freeu_args": [1.1, 1.2, 1.0, 0.2]
}

@williamyang1991
Owner

As I recall, I used a version lower than v30.

@zoeouyang2543

> As I recall, I used a version lower than v30.

OK, thanks, I'll try a version of counterfeitV30_v30.safetensors lower than v30.
One more question: is there anything in particular to watch for in the config file? Which config file did you use to run this?

@williamyang1991
Owner

williamyang1991 commented May 31, 2024

"a_prompt": "RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
"n_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation",
是用于realistic vision的
卡通风格用
"a_prompt": "best quality, extremely detailed",
"n_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",

甚至extremely detailed可以去掉

@zoeouyang2543

"a_prompt": "RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3", "n_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation", 是用于realistic vision的 卡通风格用 "a_prompt": "best quality, extremely detailed", "n_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",

甚至extremely detailed可以去掉

您好,按照你的指引,得到的结果还是很奇怪,我下载的是v2.5的版本
会不会是某些参数需要配置呢,还是说合并模型和lora后调用,跟直接放在webui上使用差异还是很大的?
{
  "input": "/home/notebook/data/group/oyp/Rerender_A_Video/videos/pexels_antoni_shkraba_8048492_540x960_25fps.mp4",
  "output": "/home/notebook/data/group/oyp/Rerender_A_Video/videos/man/blend.mp4",
  "work_dir": "/home/notebook/data/group/oyp/Rerender_A_Video/videos/man",
  "key_subdir": "keys",
  "frame_count": 102,
  "interval": 10,
  "sd_model": "/home/notebook/data/group/oyp/Rerender_A_Video/models/countfeit_25_ghibli",
  "prompt": "a handsome man in cartoon style",
  "a_prompt": "best quality, extremely detailed",
  "n_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
  "x0_strength": 0.75,
  "control_type": "canny",
  "canny_low": 50,
  "canny_high": 100,
  "control_strength": 0.7,
  "seed": 0,
  "warp_period": [0, 0.1],
  "ada_period": [1, 1]
}

@williamyang1991
Owner

Your prompt is also different from mine.
Mine is: ghibli style, a handsome man

I don't quite remember whether I used canny or HED at the time; you can try both.

@zoeouyang2543

> Your prompt is also different from mine. Mine is: ghibli style, a handsome man
>
> I don't quite remember whether I used canny or HED at the time; you can try both.

I've tried both, but neither gives the results in your examples, haha.
Could some parameter settings be different? Do you still have the json file from back then?

@williamyang1991
Owner

Since I left that job and returned to China, many files on the server were deleted.
Also, the code was only cleaned up after the paper was accepted; before the cleanup it didn't yet use json files to specify settings, and I mostly recorded experiment settings in txt files.
I dug through them, and all I recorded at the time was:
ghibli style, a handsome man, 0.7, 0.25, Counterfeit(Ghibli)
where 0.25 corresponds to 1 - "x0_strength" and 0.7 corresponds to "control_strength".
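
Mapped into the current config format, that record corresponds to roughly the following fields (a reconstruction for illustration, not the original file; the sd_model path is a placeholder for the merged Counterfeit + Ghibli model, and control_type is omitted because it may have been either canny or HED):

{
  "sd_model": "./Rerender_A_Video/models/countfeit_ghibli",
  "prompt": "ghibli style, a handsome man",
  "control_strength": 0.7,
  "x0_strength": 0.75
}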

@zoeouyang2543

> Since I left that job and returned to China, many files on the server were deleted. Also, the code was only cleaned up after the paper was accepted; before the cleanup it didn't yet use json files to specify settings, and I mostly recorded experiment settings in txt files. I dug through them, and all I recorded at the time was: ghibli style, a handsome man, 0.7, 0.25, Counterfeit(Ghibli), where 0.25 corresponds to 1 - "x0_strength" and 0.7 corresponds to "control_strength".

OK, thank you. Best wishes!
