diff --git a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/bug_report.yaml b/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/bug_report.yaml
deleted file mode 100644
index 90a5772..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/bug_report.yaml
+++ /dev/null
@@ -1,53 +0,0 @@
-name: Bug report
-description: Create a report
-title: "[Bug]: "
-labels:
- - bug
-
-body:
- - type: textarea
- attributes:
- label: Describe the bug
- description: A clear and concise description of what the bug is.
- placeholder: |
- Any language accepted
- 아무 언어 사용가능
- すべての言語に対応
- 接受所有语言
- Se aceptan todos los idiomas
- Alle Sprachen werden akzeptiert
- Toutes les langues sont acceptées
- Принимаются все языки
-
- - type: textarea
- attributes:
- label: Screenshots
- description: Screenshots related to the issue.
-
- - type: textarea
- attributes:
- label: Console logs, from start to end.
- description: |
- The full console log of your terminal.
- placeholder: |
- Python ...
- Version: ...
- Commit hash: ...
- Installing requirements
- ...
-
- Launching Web UI with arguments: ...
- [-] ADetailer initialized. version: ...
- ...
- ...
-
- Traceback (most recent call last):
- ...
- ...
- render: Shell
- validations:
- required: true
-
- - type: textarea
- attributes:
- label: List of installed extensions
diff --git a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/feature_request.yaml b/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/feature_request.yaml
deleted file mode 100644
index c496137..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/feature_request.yaml
+++ /dev/null
@@ -1,24 +0,0 @@
-name: Feature request
-description: Suggest an idea for this project
-title: "[Feature Request]: "
-
-body:
- - type: textarea
- attributes:
- label: Is your feature request related to a problem? Please describe.
- description: A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
- - type: textarea
- attributes:
- label: Describe the solution you'd like
- description: A clear and concise description of what you want to happen.
-
- - type: textarea
- attributes:
- label: Describe alternatives you've considered
- description: A clear and concise description of any alternative solutions or features you've considered.
-
- - type: textarea
- attributes:
- label: Additional context
- description: Add any other context or screenshots about the feature request here.
diff --git a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/question.yaml b/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/question.yaml
deleted file mode 100644
index 3c79454..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/.github/ISSUE_TEMPLATE/question.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-name: Question
-description: Write a question
-labels:
- - question
-
-body:
- - type: textarea
- attributes:
- label: Question
- description: Please do not write bug reports or feature requests here.
diff --git a/src/code/images/sd-resource/extensions/adetailer/.github/workflows/stale.yml b/src/code/images/sd-resource/extensions/adetailer/.github/workflows/stale.yml
deleted file mode 100644
index 79ab8fa..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/.github/workflows/stale.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-name: 'Close stale issues and PRs'
-on:
- schedule:
- - cron: '30 1 * * *'
-
-jobs:
- stale:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/stale@v8
- with:
- days-before-stale: 23
- days-before-close: 3
diff --git a/src/code/images/sd-resource/extensions/adetailer/.gitignore b/src/code/images/sd-resource/extensions/adetailer/.gitignore
deleted file mode 100644
index ce19e6c..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/.gitignore
+++ /dev/null
@@ -1,196 +0,0 @@
-# Created by https://www.toptal.com/developers/gitignore/api/python,visualstudiocode
-# Edit at https://www.toptal.com/developers/gitignore?templates=python,visualstudiocode
-
-### Python ###
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-*$py.class
-
-# C extensions
-*.so
-
-# Distribution / packaging
-.Python
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-share/python-wheels/
-*.egg-info/
-.installed.cfg
-*.egg
-MANIFEST
-
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.nox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*.cover
-*.py,cover
-.hypothesis/
-.pytest_cache/
-cover/
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-local_settings.py
-db.sqlite3
-db.sqlite3-journal
-
-# Flask stuff:
-instance/
-.webassets-cache
-
-# Scrapy stuff:
-.scrapy
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-.pybuilder/
-target/
-
-# Jupyter Notebook
-.ipynb_checkpoints
-
-# IPython
-profile_default/
-ipython_config.py
-
-# pyenv
-# For a library or package, you might want to ignore these files since the code is
-# intended to run in multiple environments; otherwise, check them in:
-# .python-version
-
-# pipenv
-# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
-# However, in case of collaboration, if having platform-specific dependencies or dependencies
-# having no cross-platform support, pipenv may install dependencies that don't work, or not
-# install all needed dependencies.
-#Pipfile.lock
-
-# poetry
-# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
-# This is especially recommended for binary packages to ensure reproducibility, and is more
-# commonly ignored for libraries.
-# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
-#poetry.lock
-
-# pdm
-# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
-#pdm.lock
-# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
-# in version control.
-# https://pdm.fming.dev/#use-with-ide
-.pdm.toml
-
-# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
-__pypackages__/
-
-# Celery stuff
-celerybeat-schedule
-celerybeat.pid
-
-# SageMath parsed files
-*.sage.py
-
-# Environments
-.env
-.venv
-env/
-venv/
-ENV/
-env.bak/
-venv.bak/
-
-# Spyder project settings
-.spyderproject
-.spyproject
-
-# Rope project settings
-.ropeproject
-
-# mkdocs documentation
-/site
-
-# mypy
-.mypy_cache/
-.dmypy.json
-dmypy.json
-
-# Pyre type checker
-.pyre/
-
-# pytype static type analyzer
-.pytype/
-
-# Cython debug symbols
-cython_debug/
-
-# PyCharm
-# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
-# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
-# and can be added to the global gitignore or merged into this file. For a more nuclear
-# option (not recommended) you can uncomment the following to ignore the entire idea folder.
-#.idea/
-
-### Python Patch ###
-# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
-poetry.toml
-
-# ruff
-.ruff_cache/
-
-# LSP config files
-pyrightconfig.json
-
-### VisualStudioCode ###
-.vscode/*
-!.vscode/settings.json
-!.vscode/tasks.json
-!.vscode/launch.json
-!.vscode/extensions.json
-!.vscode/*.code-snippets
-
-# Local History for Visual Studio Code
-.history/
-
-# Built Visual Studio Code Extensions
-*.vsix
-
-### VisualStudioCode Patch ###
-# Ignore all local history of files
-.history
-.ionide
-
-# End of https://www.toptal.com/developers/gitignore/api/python,visualstudiocode
-*.ipynb
diff --git a/src/code/images/sd-resource/extensions/adetailer/CHANGELOG.md b/src/code/images/sd-resource/extensions/adetailer/CHANGELOG.md
deleted file mode 100644
index f6e41a8..0000000
--- a/src/code/images/sd-resource/extensions/adetailer/CHANGELOG.md
+++ /dev/null
@@ -1,275 +0,0 @@
-# Changelog
-
-## 2023-07-31
-
-- v23.7.11
- Added a separate clip skip option
- Cleaned up install requirements (new ultralytics version, mediapipe~=3.20)
-
-## 2023-07-28
-
-- v23.7.10
- Cleaned up the ultralytics and mediapipe import statements
- Removed color from tracebacks (because of the api), and set them to also show library versions.
- Removed huggingface_hub and pydantic from install.py
- Deleted unused ControlNet-related code
-
-
-## 2023-07-23
-
-- v23.7.9
- Fixed the `ultralytics.utils` ModuleNotFoundError (https://github.com/ultralytics/ultralytics/issues/3856)
- Prevented `pydantic` versions 2.0 and above from being installed
- Fixed a `controlnet_dir` cmd args issue (PR #107)
-
-## 2023-07-20
-
-- v23.7.8
- Reverted the addition of `paste_field_names`
-
-## 2023-07-19
-
-- v23.7.7
- Added an option to select a separate sampler for the inpainting step (also added to the xyz grid)
- Fixed a batch index issue on webui versions 1.0.0-pre and below
- Added `paste_field_names` to the script. Unsure whether it is actually used
-
-## 2023-07-16
-
-- v23.7.6
- Pre-install `py-cpuinfo` for the cpuinfo feature added in `ultralytics 8.0.135`. (Without it, a restart is required when using cpu or mps)
- Convert init_image to RGB when it is not in RGB mode.
-
-## 2023-07-07
-
-- v23.7.4
- Fixed a prompt index issue when batch count > 1
-
-- v23.7.5
- Fixed i2i's `cached_uc` and `cached_c` so they are separate instances from p's `cached_uc` and `cached_c`
-
-## 2023-07-05
-
-- v23.7.3
- Bug fixes
- - `object()` not being JSON serializable
- - all_prompts being frozen when batch count is 2 or more, due to calling `process`
- - `ad-before` and `ad-preview` image filenames differing from the actual filenames
- - pydantic 2.0 compatibility
-
-## 2023-07-04
-
-- v23.7.2
- Added the `mediapipe_face_mesh_eyes_only` model: detects with `mediapipe_face_mesh`, then uses only the eyes.
- Call `scripts.postprocess` before, and `scripts.process` after, the start of every batch.
- - With ControlNet this adds a little processing time, but it helps resolve several issues.
- Added `lora_block_weight` to the script whitelist.
- - Anyone who has used ADetailer at least once must add it manually.
-
-## 2023-07-03
-
-- v23.7.1
- Call close on the `StableDiffusionProcessing` object after `process_images` completes
- Added an attribute to check whether it was invoked via an api call
- When a `NansException` occurs, continue with the remaining steps instead of stopping
-
-## 2023-07-02
-
-- v23.7.0
- When a `NansException` occurs, log it and return the original image
- Error tracing with `rich`
- - Added `rich` to install.py
- Fixed an issue where changing a component's value during generation also changed the args values (issue #180)
- The actual prompts applied to ad_prompt and ad_negative_prompt are shown in the terminal log (only when they differ from the input)
-
-## 2023-06-28
-
-- v23.6.4
- Maximum number of models: 5 -> 10
- Added a note that leaving ad_prompt and ad_negative_prompt blank uses the input prompts
- Log huggingface model download failures
- Fixed the remaining inputs being ignored when the 1st model was `None`
- Passing `adetailer` to `--use-cpu` runs the yolo models on cpu
-
-## 2023-06-20
-
-- v23.6.3
- Allow three modules to be used with the ControlNet inpaint model
- Added a Noise Multiplier option (PR #149)
- Set the minimum pydantic version to 1.10.8 (Issue #146)
-
-## 2023-06-05
-
-- v23.6.2
- Made ADetailer usable from xyz_grid.
- - Limited to eight options, applied to the 1st tab only.
-
-## 2023-06-01
-
-- v23.6.1
- Support for five ControlNet models: `inpaint, scribble, lineart, openpose, tile` (PR #107)
- Added controlnet guidance start and end arguments (PR #107)
- Changed to load the ControlNet extension and resolve its path via `modules.extensions`
- Split the ControlNet ui into a separate function
-
-## 2023-05-30
-
-- v23.6.0
- Renamed the script from `After Detailer` to `ADetailer`
- - API users need to update accordingly
- Changed several settings
- - `ad_conf` → `ad_confidence`. int between 0~100 → float between 0.0~1.0
- - `ad_inpaint_full_res` → `ad_inpaint_only_masked`
- - `ad_inpaint_full_res_padding` → `ad_inpaint_only_masked_padding`
- Added the mediapipe face mesh model
- - Minimum mediapipe version `0.10.0`
-
- Removed the rich traceback
- On huggingface download failure, no longer raise an error; drop that model instead
-
-## 2023-05-26
-
-- v23.5.19
-- 1번째 탭에도 `None` 옵션을 추가함
-- api로 ad controlnet model에 inpaint가 아닌 다른 컨트롤넷 모델을 사용하지 못하도록 막음
-- adetailer 진행중에 total tqdm 진행바 업데이트를 멈춤
-- state.inturrupted 상태에서 adetailer 과정을 중지함
-- 컨트롤넷 process를 각 batch가 끝난 순간에만 호출하도록 변경
-
-## 2023-05-25
-
- v23.5.18
- ControlNet-related fixes
- - Changed every unit's `input_mode` to `SIMPLE`
- - Added reverting of the ControlNet unet hooks and hijack functions only while adetailer runs
- - Re-run the ControlNet script's process after adetailer finishes. (Fixes an issue with batch count 2 or more)
- Removed ControlNet from the default active script list
-
-## 2023-05-22
-
- v23.5.17
- Enable the ControlNet script when the ControlNet extension is present. (Fixes ControlNet-related issues)
- Set elem_id on every component
- Show the version in the ui
-
-
-## 2023-05-19
-
- v23.5.16
- Added options
- - Mask min/max ratio
- - Mask merge mode
- - Restore faces after ADetailer
- Grouped the options into an Accordion
-
-## 2023-05-18
-
- v23.5.15
- Changed to import only what is needed (no more vae loading errors; faster loading)
-
-## 2023-05-17
-
- v23.5.14
- Added skipping part of the ad prompt with `[SKIP]`
- Added a bbox sort option
- Generated sd_webui type hints
- Fixed an api error related to the enable checker?
-
-## 2023-05-15
-
- v23.5.13
- Added splitting the ad prompt with `[SEP]` and applying the parts separately
- Changed the enable checker back to pydantic
- Moved ui-related functions into the adetailer.ui folder
- Disable all controlnet units when using controlnet
- Create the adetailer folder if it does not exist
-
-## 2023-05-13
-
- v23.5.12
- Changed all inputs except `ad_enable` to come in as dict type
- - Especially convenient when used via the web api
- - web api breaking change
- Fixed a bug where the `mask_preprocess` argument was not passed (PR #47)
- Added an option to skip downloading models from huggingface: `--ad-no-huggingface`
-
-## 2023-05-12
-
- v23.5.11
- Removed the `ultralytics` alert
- Removed more unneeded exif arguments
- Added a `use separate steps` option
- Adjusted the ui layout
-
-## 2023-05-09
-
- v23.5.10
- Added an option to apply only selected scripts to ADetailer, default `True`. Configurable in the settings tab.
- - Default: `dynamic_prompting,dynamic_thresholding,wildcards,wildcard_recursive`
- Added the `person_yolov8s-seg.pt` model
- Set the minimum `ultralytics` version to `8.0.97` (the version that fixes the C:\\ issue)
-
-## 2023-05-08
-
- v23.5.9
- Two or more models can be used. Default: 2, max: 5
- Made segment models usable; added `person_yolov8n-seg.pt`
-
-## 2023-05-07
-
- v23.5.8
- Arrow-key support in the prompt and negative prompt (PR #24)
- Added `mask_preprocess`. Seeds may differ from earlier versions!
- Save the before image only when image processing actually occurred
- Gave the settings labels more appropriate names than just ADetailer
-
-## 2023-05-06
-
- v23.5.7
- Added the `ad_use_cfg_scale` option, which decides whether to use a separate cfg scale.
- Changed the `ad_enable` default from `True` to `False`
- Changed the `ad_model` default from `None` to the first model
- Changed to work with as few as 2 inputs (ad_enable, ad_model).
-
- v23.5.7.post0
- Run `init_controlnet_ext` only when controlnet_exists == True
- Show an `ultralytics` warning to users who installed webui directly under the C drive
-
-## 2023-05-05 (Children's Day)
-
- v23.5.5
- Added a `Save images before ADetailer` option
- Show an error message when the number of input arguments differs from the length of ALL_ARGS
- Added installation instructions to README.md
-
- v23.5.6
- A detailed error message is shown when an IndexError occurs in get_args
- Built extra_params into AdetailerArgs
- Deep-copy scripts_args
- Split postprocess_image up a little
-
- v23.5.6.post0
- Detailed error messages from `init_controlnet_ext`
-
-## 2023-05-04
-
-- v23.5.4
- use pydantic for argument validation
-- revert: ad_model to `None` as default
-- revert: `__future__` imports
-- lazily import yolo and mediapipe
-
-## 2023-05-03
-
-- v23.5.3.post0
-- remove `__future__` imports
-- change to copy scripts and scripts args
-
-- v23.5.3.post1
-- change default ad_model from `None`
-
-## 2023-05-02
-
-- v23.5.3
-- Remove `None` from model list and add `Enable ADetailer` checkbox.
-- install.py `skip_install` fix.
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/bug_report.yml b/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/bug_report.yml
deleted file mode 100644
index ce58f67..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/bug_report.yml
+++ /dev/null
@@ -1,91 +0,0 @@
-name: Bug Report
-description: Create a report
-title: "[Bug]: "
-labels: ["bug-report"]
-
-body:
- - type: checkboxes
- attributes:
- label: Is there an existing issue for this?
- description: Please search to see if an issue already exists for the bug you encountered, and that it hasn't been fixed in a recent build/commit.
- options:
- - label: I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
- required: true
- - type: markdown
- attributes:
- value: |
- *Please fill this form with as much information as possible; don't forget to fill "What OS..." and "What browsers", and provide screenshots if possible.*
- - type: textarea
- id: what-did
- attributes:
- label: What happened?
- description: Tell us what happened in a very clear and simple way
- validations:
- required: true
- - type: textarea
- id: steps
- attributes:
- label: Steps to reproduce the problem
- description: Please provide us with precise step-by-step instructions on how to reproduce the bug
- value: |
- 1. Go to ....
- 2. Press ....
- 3. ...
- validations:
- required: true
- - type: textarea
- id: what-should
- attributes:
- label: What should have happened?
- description: Tell us what you think the normal behavior should be
- validations:
- required: true
- - type: textarea
- id: commits
- attributes:
- label: Commit where the problem happens
- description: Which commit of the extension are you running on? Please include the commit of both the extension and the webui (Do not write *Latest version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Commit** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)
- value: |
- webui:
- controlnet:
- validations:
- required: true
- - type: dropdown
- id: browsers
- attributes:
- label: What browsers do you use to access the UI?
- multiple: true
- options:
- - Mozilla Firefox
- - Google Chrome
- - Brave
- - Apple Safari
- - Microsoft Edge
- - type: textarea
- id: cmdargs
- attributes:
- label: Command Line Arguments
- description: Are you using any launch parameters/command line arguments (modified webui-user.bat/.sh)? If yes, please write them below. Write "No" otherwise.
- render: Shell
- validations:
- required: true
- - type: textarea
- id: extensions
- attributes:
- label: List of enabled extensions
- description: Please provide a full list of enabled extensions or screenshots of your "Extensions" tab.
- validations:
- required: true
- - type: textarea
- id: logs
- attributes:
- label: Console logs
- description: Please provide full cmd/terminal logs from the moment you started the UI to the end of it, after your bug happened. If it's very long, provide a link to pastebin or similar service.
- render: Shell
- validations:
- required: true
- - type: textarea
- id: misc
- attributes:
- label: Additional information
- description: Please provide us with any relevant additional info or context.
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/config.yml b/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/config.yml
deleted file mode 100644
index 0086358..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/ISSUE_TEMPLATE/config.yml
+++ /dev/null
@@ -1 +0,0 @@
-blank_issues_enabled: true
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/workflows/tests.yml b/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/workflows/tests.yml
deleted file mode 100644
index c190bdf..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/.github/workflows/tests.yml
+++ /dev/null
@@ -1,37 +0,0 @@
-name: Run basic features tests on CPU
-
-on:
- - push
- - pull_request
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - name: Checkout Code
- uses: actions/checkout@v3
- with:
- repository: 'AUTOMATIC1111/stable-diffusion-webui'
- path: 'stable-diffusion-webui'
- ref: '5ab7f213bec2f816f9c5644becb32eb72c8ffb89'
-
- - name: Checkout Code
- uses: actions/checkout@v3
- with:
- repository: 'Mikubill/sd-webui-controlnet'
- path: 'stable-diffusion-webui/extensions/sd-webui-controlnet'
-
- - name: Set up Python 3.10
- uses: actions/setup-python@v4
- with:
- python-version: 3.10.6
- cache: pip
- cache-dependency-path: |
- **/requirements*txt
- stable-diffusion-webui/requirements*txt
-
- - run: |
- pip install torch torchvision
- curl -Lo stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_canny-fp16.safetensors https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_canny-fp16.safetensors
- cd stable-diffusion-webui && python launch.py --no-half --disable-opt-split-attention --use-cpu all --skip-torch-cuda-test --api --tests ./extensions/sd-webui-controlnet/tests
- rm -fr stable-diffusion-webui/extensions/sd-webui-controlnet/models/control_canny-fp16.safetensors
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/chatgpt.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/chatgpt.py
deleted file mode 100644
index 0842f82..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/chatgpt.py
+++ /dev/null
@@ -1,676 +0,0 @@
-import os
-import re
-import uuid
-import cv2
-import torch
-import requests
-import io, base64
-import numpy as np
-import gradio as gr
-from PIL import Image
-from base64 import b64encode
-from omegaconf import OmegaConf
-from transformers import pipeline, BlipProcessor, BlipForConditionalGeneration, BlipForQuestionAnswering
-from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPSegProcessor, CLIPSegForImageSegmentation
-
-from langchain.agents.initialize import initialize_agent
-from langchain.agents.tools import Tool
-from langchain.chains.conversation.memory import ConversationBufferMemory
-from langchain.llms.openai import OpenAI
-
-VISUAL_CHATGPT_PREFIX = """Visual ChatGPT is designed to be able to assist with a wide range of text and visual related tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. Visual ChatGPT is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
-Visual ChatGPT is able to process and understand large amounts of text and images. As a language model, Visual ChatGPT cannot directly read images, but it has a list of tools to finish different visual tasks. Each image will have a file name formed as "image/xxx.png", and Visual ChatGPT can invoke different tools to indirectly understand pictures. When talking about images, Visual ChatGPT is very strict about the file name and will never fabricate nonexistent files. When using tools to generate new image files, Visual ChatGPT also knows that the image may not be the same as the user's demand, and will use other visual question answering tools or description tools to observe the real image. Visual ChatGPT is able to use tools in a sequence, and is loyal to the tool observation outputs rather than faking the image content and image file name. It will remember to provide the file name from the last tool observation, if a new image is generated.
-Human may provide new figures to Visual ChatGPT with a description. The description helps Visual ChatGPT to understand this image, but Visual ChatGPT should use tools to finish the following tasks, rather than directly imagine from the description.
-Overall, Visual ChatGPT is a powerful visual dialogue assistant tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
-TOOLS:
-------
-Visual ChatGPT has access to the following tools:"""
-
-VISUAL_CHATGPT_FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
-```
-Thought: Do I need to use a tool? Yes
-Action: the action to take, should be one of [{tool_names}]
-Action Input: the input to the action
-Observation: the result of the action
-```
-When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
-```
-Thought: Do I need to use a tool? No
-{ai_prefix}: [your response here]
-```
-"""
-
-VISUAL_CHATGPT_SUFFIX = """You are very strict about filename correctness and will never fake a file name that does not exist.
-You will remember to provide the image file name loyally if it's provided in the last tool observation.
-Begin!
-Previous conversation history:
-{chat_history}
-New input: {input}
-Since Visual ChatGPT is a text language model, Visual ChatGPT must use tools to observe images rather than rely on imagination.
-The thoughts and observations are only visible to Visual ChatGPT; Visual ChatGPT should remember to repeat important information in the final response for the Human.
-Thought: Do I need to use a tool? {agent_scratchpad}"""
-
-ENDPOINT = "http://localhost:7860"
-T2IAPI = ENDPOINT + "/controlnet/txt2img"
-DETECTAPI = ENDPOINT + "/controlnet/detect"
-MODELLIST = ENDPOINT + "/controlnet/model_list"
-
-device = "cpu"
-if torch.cuda.is_available():
- device = "cuda"
-
-def readImage(path):
- img = cv2.imread(path)
- retval, buffer = cv2.imencode('.jpg', img)
- b64img = b64encode(buffer).decode("utf-8")
- return b64img
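-# e.g. readImage("image/cat.png") returns the file re-encoded as JPEG, as a base64 str
-# (the path is illustrative).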
-
-def get_model(pattern='^control_canny.*'):
- r = requests.get(MODELLIST)
- result = r.json()["model_list"]
- for item in result:
- if re.match(pattern, item):
- return item
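-# e.g. get_model('^control_canny.*') returns the first installed model whose name
-# matches the pattern, such as 'control_canny-fp16 [e3fe7712]' (name illustrative),
-# or None when nothing matches.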
-
-def do_webui_request(url=T2IAPI, **kwargs):
- reqbody = {
- "prompt": "best quality, extremely detailed",
- "negative_prompt": "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality",
- "seed": -1,
- "subseed": -1,
- "subseed_strength": 0,
- "batch_size": 1,
- "n_iter": 1,
- "steps": 15,
- "cfg_scale": 7,
- "width": 512,
- "height": 768,
- "restore_faces": True,
- "eta": 0,
- "sampler_index": "Euler a",
- "controlnet_input_images": [],
- "controlnet_module": 'canny',
- "controlnet_model": 'control_canny-fp16 [e3fe7712]',
- "controlnet_guidance": 1.0,
- }
- reqbody.update(kwargs)
- r = requests.post(url, json=reqbody)
- return r.json()
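-# A minimal sketch of a call (path and prompt are illustrative): any keyword argument
-# overrides the defaults above, so a canny-guided txt2img request could look like:
-#
-# result = do_webui_request(
-#     url=T2IAPI,
-#     prompt="a cat on a park bench",
-#     controlnet_input_images=[readImage("image/example.png")],
-# )
-# png_b64 = result["images"][0]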
-
-
-def cut_dialogue_history(history_memory, keep_last_n_words=500):
- tokens = history_memory.split()
- n_tokens = len(tokens)
- print(f"hitory_memory:{history_memory}, n_tokens: {n_tokens}")
- if n_tokens < keep_last_n_words:
- return history_memory
- else:
- paragraphs = history_memory.split('\n')
- last_n_tokens = n_tokens
- while last_n_tokens >= keep_last_n_words:
- last_n_tokens = last_n_tokens - len(paragraphs[0].split(' '))
- paragraphs = paragraphs[1:]
- return '\n' + '\n'.join(paragraphs)
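-# e.g. a 600-word buffer with keep_last_n_words=500 drops whole leading paragraphs
-# until fewer than 500 words remain, so the most recent turns survive intact.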
-
-def get_new_image_name(org_img_name, func_name="update"):
- head_tail = os.path.split(org_img_name)
- head = head_tail[0]
- tail = head_tail[1]
- name_split = tail.split('.')[0].split('_')
- this_new_uuid = str(uuid.uuid4())[0:4]
- if len(name_split) == 1:
- most_org_file_name = name_split[0]
- recent_prev_file_name = name_split[0]
- new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
- else:
- assert len(name_split) == 4
- most_org_file_name = name_split[3]
- recent_prev_file_name = name_split[0]
- new_file_name = '{}_{}_{}_{}.png'.format(this_new_uuid, func_name, recent_prev_file_name, most_org_file_name)
- return os.path.join(head, new_file_name)
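-# e.g. (illustrative names) "image/abcd.png" becomes "1f2e_edge_abcd_abcd.png" on the
-# first tool call; a second call then yields "9c0d_canny2image_1f2e_abcd.png", keeping
-# the previous uuid in slot 3 and the original stem in slot 4.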
-
-class MaskFormer:
- def __init__(self, device):
- self.device = device
- self.processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
- self.model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined").to(device)
-
- def inference(self, image_path, text):
- threshold = 0.5
- min_area = 0.02
- padding = 20
- original_image = Image.open(image_path)
- image = original_image.resize((512, 512))
- inputs = self.processor(text=text, images=image, padding="max_length", return_tensors="pt",).to(self.device)
- with torch.no_grad():
- outputs = self.model(**inputs)
- mask = torch.sigmoid(outputs[0]).squeeze().cpu().numpy() > threshold
- area_ratio = len(np.argwhere(mask)) / (mask.shape[0] * mask.shape[1])
- if area_ratio < min_area:
- return None
- true_indices = np.argwhere(mask)
- mask_array = np.zeros_like(mask, dtype=bool)
- for idx in true_indices:
- padded_slice = tuple(slice(max(0, i - padding), i + padding + 1) for i in idx)
- mask_array[padded_slice] = True
- visual_mask = (mask_array * 255).astype(np.uint8)
- image_mask = Image.fromarray(visual_mask)
- return image_mask.resize(original_image.size) # scale the mask back to the original image size
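-# Hypothetical usage: MaskFormer(device).inference("image/dog.png", "dog") returns a
-# PIL mask grown by 20 px of padding around each hit, or None when the matched region
-# covers less than 2% of the image.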
-
-# class ImageEditing:
-# def __init__(self, device):
-# print("Initializing StableDiffusionInpaint to %s" % device)
-# self.device = device
-# self.mask_former = MaskFormer(device=self.device)
-# # self.inpainting = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting",).to(device)
-
-# def remove_part_of_image(self, input):
-# image_path, to_be_removed_txt = input.split(",")
-# print(f'remove_part_of_image: to_be_removed {to_be_removed_txt}')
-# return self.replace_part_of_image(f"{image_path},{to_be_removed_txt},background")
-
-# def replace_part_of_image(self, input):
-# image_path, to_be_replaced_txt, replace_with_txt = input.split(",")
-# print(f'replace_part_of_image: replace_with_txt {replace_with_txt}')
-# mask_image = self.mask_former.inference(image_path, to_be_replaced_txt)
-# buffered = io.BytesIO()
-# mask_image.save(buffered, format="JPEG")
-# resp = do_webui_request(
-# url=ENDPOINT + "/sdapi/v1/img2img",
-# init_images=[readImage(image_path)],
-# mask=b64encode(buffered.getvalue()).decode("utf-8"),
-# prompt=replace_with_txt,
-# )
-# image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
-# updated_image_path = get_new_image_name(image_path, func_name="replace-something")
-# updated_image.save(updated_image_path)
-# return updated_image_path
-
-# class Pix2Pix:
-# def __init__(self, device):
-# print("Initializing Pix2Pix to %s" % device)
-# self.device = device
-# self.pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16, safety_checker=None).to(device)
-# self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
-
-# def inference(self, inputs):
-# """Change style of image."""
-# print("===>Starting Pix2Pix Inference")
-# image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
-# original_image = Image.open(image_path)
-# image = self.pipe(instruct_text,image=original_image,num_inference_steps=40,image_guidance_scale=1.2,).images[0]
-# updated_image_path = get_new_image_name(image_path, func_name="pix2pix")
-# image.save(updated_image_path)
-# return updated_image_path
-
-
-class T2I:
- def __init__(self, device):
- print("Initializing T2I to %s" % device)
- self.device = device
- self.text_refine_tokenizer = AutoTokenizer.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
- self.text_refine_model = AutoModelForCausalLM.from_pretrained("Gustavosta/MagicPrompt-Stable-Diffusion")
- self.text_refine_gpt2_pipe = pipeline("text-generation", model=self.text_refine_model, tokenizer=self.text_refine_tokenizer, device=self.device)
-
- def inference(self, text):
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
- refined_text = self.text_refine_gpt2_pipe(text)[0]["generated_text"]
- print(f'{text} refined to {refined_text}')
- resp = do_webui_request(
- url=ENDPOINT + "/sdapi/v1/txt2img",
- prompt=refined_text,
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- image.save(image_filename)
- print(f"Processed T2I.run, text: {text}, image_filename: {image_filename}")
- return image_filename
-
-
-class ImageCaptioning:
- def __init__(self, device):
- print("Initializing ImageCaptioning to %s" % device)
- self.device = device
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
- self.model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to(self.device)
-
- def inference(self, image_path):
- inputs = self.processor(Image.open(image_path), return_tensors="pt").to(self.device)
- out = self.model.generate(**inputs)
- captions = self.processor.decode(out[0], skip_special_tokens=True)
- return captions
-
-
-class image2canny:
- def inference(self, inputs):
- print("===>Starting image2canny Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="segmentation",
- )
- updated_image_path = get_new_image_name(inputs, func_name="edge")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class canny2image:
- def inference(self, inputs):
- print("===>Starting canny2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_canny.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="canny2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2line:
- def inference(self, inputs):
- print("===>Starting image2hough Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="mlsd",
- )
- updated_image_path = get_new_image_name(inputs, func_name="line-of")
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- image.save(updated_image_path)
- return updated_image_path
-
-
-class line2image:
- def inference(self, inputs):
- print("===>Starting line2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_mlsd.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="line2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2hed:
- def inference(self, inputs):
- print("===>Starting image2hed Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="hed",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="hed-boundary")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class hed2image:
- def inference(self, inputs):
- print("===>Starting hed2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_hed.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="hed2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2scribble:
- def inference(self, inputs):
- print("===>Starting image2scribble Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="scribble",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="scribble")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class scribble2image:
- def inference(self, inputs):
- print("===>Starting seg2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_scribble.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="scribble2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2pose:
- def inference(self, inputs):
- print("===>Starting image2pose Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="openpose",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="human-pose")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class pose2image:
- def inference(self, inputs):
- print("===>Starting pose2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_openpose.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="pose2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2seg:
- def inference(self, inputs):
- print("===>Starting image2seg Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="segmentation",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="segmentation")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class seg2image:
- def inference(self, inputs):
- print("===>Starting seg2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="none",
- controlnet_model=get_model(pattern='^control_seg.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="segment2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2depth:
- def inference(self, inputs):
- print("===>Starting image2depth Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="depth",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="depth")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class depth2image:
- def inference(self, inputs):
- print("===>Starting depth2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="depth",
- controlnet_model=get_model(pattern='^control_depth.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="depth2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class image2normal:
- def inference(self, inputs):
- print("===>Starting image2 normal Inference")
- resp = do_webui_request(
- url=DETECTAPI,
- controlnet_input_images=[readImage(inputs)],
- controlnet_module="normal",
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(inputs, func_name="normal-map")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class normal2image:
- def inference(self, inputs):
- print("===>Starting normal2image Inference")
- image_path, instruct_text = inputs.split(",")[0], ','.join(inputs.split(',')[1:])
- resp = do_webui_request(
- prompt=instruct_text,
- controlnet_input_images=[readImage(image_path)],
- controlnet_module="normal",
- controlnet_model=get_model(pattern='^control_normal.*'),
- )
- image = Image.open(io.BytesIO(base64.b64decode(resp["images"][0])))
- updated_image_path = get_new_image_name(image_path, func_name="normal2image")
- image.save(updated_image_path)
- return updated_image_path
-
-
-class BLIPVQA:
- def __init__(self, device):
- print("Initializing BLIP VQA to %s" % device)
- self.device = device
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
- self.model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to(self.device)
-
- def get_answer_from_question_and_image(self, inputs):
- image_path, question = inputs.split(",")
- raw_image = Image.open(image_path).convert('RGB')
- print(f'BLIPVQA question: {question}')
- inputs = self.processor(raw_image, question, return_tensors="pt").to(self.device)
- out = self.model.generate(**inputs)
- answer = self.processor.decode(out[0], skip_special_tokens=True)
- return answer
-
-
-class ConversationBot:
- def __init__(self):
- print("Initializing VisualChatGPT")
- # self.edit = ImageEditing(device=device)
- self.i2t = ImageCaptioning(device=device)
- self.t2i = T2I(device=device)
- self.image2canny = image2canny()
- self.canny2image = canny2image()
- self.image2line = image2line()
- self.line2image = line2image()
- self.image2hed = image2hed()
- self.hed2image = hed2image()
- self.image2scribble = image2scribble()
- self.scribble2image = scribble2image()
- self.image2pose = image2pose()
- self.pose2image = pose2image()
- self.BLIPVQA = BLIPVQA(device=device)
- self.image2seg = image2seg()
- self.seg2image = seg2image()
- self.image2depth = image2depth()
- self.depth2image = depth2image()
- self.image2normal = image2normal()
- self.normal2image = normal2image()
- # self.pix2pix = Pix2Pix(device="cuda:3")
- self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
- self.tools = [
- Tool(name="Get Photo Description", func=self.i2t.inference,
- description="useful when you want to know what is inside the photo. receives image_path as input. "
- "The input to this tool should be a string, representing the image_path. "),
- Tool(name="Generate Image From User Input Text", func=self.t2i.inference,
- description="useful when you want to generate an image from a user input text and save it to a file. like: generate an image of an object or something, or generate an image that includes some objects. "
- "The input to this tool should be a string, representing the text used to generate image. "),
- # Tool(name="Remove Something From The Photo", func=self.edit.remove_part_of_image,
- # description="useful when you want to remove and object or something from the photo from its description or location. "
- # "The input to this tool should be a comma seperated string of two, representing the image_path and the object need to be removed. "),
- # Tool(name="Replace Something From The Photo", func=self.edit.replace_part_of_image,
- # description="useful when you want to replace an object from the object description or location with another object from its description. "
- # "The input to this tool should be a comma seperated string of three, representing the image_path, the object to be replaced, the object to be replaced with "),
-
- # Tool(name="Instruct Image Using Text", func=self.pix2pix.inference,
- # description="useful when you want to the style of the image to be like the text. like: make it look like a painting. or make it like a robot. "
- # "The input to this tool should be a comma seperated string of two, representing the image_path and the text. "),
- Tool(name="Answer Question About The Image", func=self.BLIPVQA.get_answer_from_question_and_image,
- description="useful when you need an answer for a question based on an image. like: what is the background color of the last image, how many cats in this figure, what is in this figure. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the question"),
- Tool(name="Edge Detection On Image", func=self.image2canny.inference,
- description="useful when you want to detect the edge of the image. like: detect the edges of this image, or canny detection on image, or peform edge detection on this image, or detect the canny image of this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Canny Image", func=self.canny2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and a canny image. like: generate a real image of a object or something from this canny image, or generate a new real image of a object or something from this edge image. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description. "),
- Tool(name="Line Detection On Image", func=self.image2line.inference,
- description="useful when you want to detect the straight line of the image. like: detect the straight lines of this image, or straight line detection on image, or peform straight line detection on this image, or detect the straight line image of this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Line Image", func=self.line2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and a straight line image. like: generate a real image of a object or something from this straight line image, or generate a new real image of a object or something from this straight lines. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description. "),
- Tool(name="Hed Detection On Image", func=self.image2hed.inference,
- description="useful when you want to detect the soft hed boundary of the image. like: detect the soft hed boundary of this image, or hed boundary detection on image, or peform hed boundary detection on this image, or detect soft hed boundary image of this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Soft Hed Boundary Image", func=self.hed2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and a soft hed boundary image. like: generate a real image of a object or something from this soft hed boundary image, or generate a new real image of a object or something from this hed boundary. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description"),
- Tool(name="Segmentation On Image", func=self.image2seg.inference,
- description="useful when you want to detect segmentations of the image. like: segment this image, or generate segmentations on this image, or peform segmentation on this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Segmentations", func=self.seg2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and segmentations. like: generate a real image of a object or something from this segmentation image, or generate a new real image of a object or something from these segmentations. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description"),
- Tool(name="Predict Depth On Image", func=self.image2depth.inference,
- description="useful when you want to detect depth of the image. like: generate the depth from this image, or detect the depth map on this image, or predict the depth for this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Depth", func=self.depth2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and depth image. like: generate a real image of a object or something from this depth image, or generate a new real image of a object or something from the depth map. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description"),
- Tool(name="Predict Normal Map On Image", func=self.image2normal.inference,
- description="useful when you want to detect norm map of the image. like: generate normal map from this image, or predict normal map of this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Normal Map", func=self.normal2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and normal map. like: generate a real image of a object or something from this normal map, or generate a new real image of a object or something from the normal map. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description"),
- Tool(name="Sketch Detection On Image", func=self.image2scribble.inference,
- description="useful when you want to generate a scribble of the image. like: generate a scribble of this image, or generate a sketch from this image, detect the sketch from this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Sketch Image", func=self.scribble2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and a scribble image or a sketch image. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description"),
- Tool(name="Pose Detection On Image", func=self.image2pose.inference,
- description="useful when you want to detect the human pose of the image. like: generate human poses of this image, or generate a pose image from this image. "
- "The input to this tool should be a string, representing the image_path"),
- Tool(name="Generate Image Condition On Pose Image", func=self.pose2image.inference,
- description="useful when you want to generate a new real image from both the user desciption and a human pose image. like: generate a real image of a human from this human pose image, or generate a new real image of a human from this pose. "
- "The input to this tool should be a comma seperated string of two, representing the image_path and the user description")]
-
- def init_langchain(self, openai_api_key):
- self.llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
- self.agent = initialize_agent(
- self.tools,
- self.llm,
- agent="conversational-react-description",
- verbose=True,
- memory=self.memory,
- return_intermediate_steps=True,
- agent_kwargs={'prefix': VISUAL_CHATGPT_PREFIX, 'format_instructions': VISUAL_CHATGPT_FORMAT_INSTRUCTIONS, 'suffix': VISUAL_CHATGPT_SUFFIX}
- )
-
- def run_text(self, openai_api_key, text, state):
- if not hasattr(self, "agent"):
- self.init_langchain(openai_api_key)
- print("===============Running run_text =============")
- print("Inputs:", text, state)
- print("======>Previous memory:\n %s" % self.agent.memory)
- self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500)
- res = self.agent({"input": text})
- print("======>Current memory:\n %s" % self.agent.memory)
- response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
- state = state + [(text, response)]
- print("Outputs:", state)
- return state, state
-
- def run_image(self, openai_api_key, image, state, txt):
- if not hasattr(self, "agent"):
- self.init_langchain(openai_api_key)
- print("===============Running run_image =============")
- print("Inputs:", image, state)
- print("======>Previous memory:\n %s" % self.agent.memory)
- image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
- print("======>Auto Resize Image...")
- img = Image.open(image.name)
- width, height = img.size
- ratio = min(512 / width, 512 / height)
- width_new, height_new = (round(width * ratio), round(height * ratio))
- img = img.resize((width_new, height_new))
- img = img.convert('RGB')
- img.save(image_filename, "PNG")
- print(f"Resize image form {width}x{height} to {width_new}x{height_new}")
- description = self.i2t.inference(image_filename)
- Human_prompt = "\nHuman: provide a figure named {}. The description is: {}. This information helps you to understand this image, but you should use tools to finish the following tasks, " \
- "rather than directly imagine from my description. If you understand, say \"Received\". \n".format(image_filename, description)
- AI_prompt = "Received. "
- self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
- print("======>Current memory:\n %s" % self.agent.memory)
- state = state + [(f"![](/file={image_filename})*{image_filename}*", AI_prompt)]
- print("Outputs:", state)
- return state, state, txt + ' ' + image_filename + ' '
-
-
-if __name__ == '__main__':
- os.makedirs("image/", exist_ok=True)
- bot = ConversationBot()
- with gr.Blocks(css="#chatbot .overflow-y-auto{height:500px}") as demo:
- openai_api_key = gr.Textbox(type="password", label="Enter your OpenAI API key here")
- chatbot = gr.Chatbot(elem_id="chatbot", label="Visual ChatGPT")
- state = gr.State([])
- with gr.Row():
- with gr.Column(scale=0.7):
- txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style(container=False)
- with gr.Column(scale=0.15, min_width=0):
- clear = gr.Button("Clear")
- with gr.Column(scale=0.15, min_width=0):
- btn = gr.UploadButton("Upload", file_types=["image"])
-
- txt.submit(bot.run_text, [openai_api_key, txt, state], [chatbot, state])
- txt.submit(lambda: "", None, txt)
- btn.upload(bot.run_image, [openai_api_key, btn, state, txt], [chatbot, state, txt])
- clear.click(bot.memory.clear)
- clear.click(lambda: [], None, chatbot)
- clear.click(lambda: [], None, state)
-
-
- demo.launch(server_name="0.0.0.0", server_port=7864)
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/api_txt2img.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/api_txt2img.py
deleted file mode 100644
index de2b2d6..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/api_txt2img.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import io
-import cv2
-import base64
-import requests
-from PIL import Image
-
-"""
- To use this example make sure you've done the following steps before executing:
- 1. Ensure automatic1111 is running in api mode with the controlnet extension.
- Use the following command in your terminal to activate:
- ./webui.sh --no-half --api
- 2. Validate that your Python environment meets the package dependencies.
- If running in a local repo you'll likely need to pip install opencv-python, requests and Pillow
-"""
-
-
-class ControlnetRequest:
- def __init__(self, prompt, path):
- self.url = "http://localhost:7860/sdapi/v1/txt2img"
- self.prompt = prompt
- self.img_path = path
- self.body = None
-
- def build_body(self):
- self.body = {
- "prompt": self.prompt,
- "negative_prompt": "",
- "batch_size": 1,
- "steps": 20,
- "cfg_scale": 7,
- "alwayson_scripts": {
- "controlnet": {
- "args": [
- {
- "enabled": True,
- "module": "none",
- "model": "canny",
- "weight": 1.0,
- "image": self.read_image(),
- "resize_mode": 1,
- "lowvram": False,
- "processor_res": 64,
- "threshold_a": 64,
- "threshold_b": 64,
- "guidance_start": 0.0,
- "guidance_end": 1.0,
- "control_mode": 0,
- "pixel_perfect": False
- }
- ]
- }
- }
- }
-
- def send_request(self):
- response = requests.post(url=self.url, json=self.body)
- return response.json()
-
- def read_image(self):
- img = cv2.imread(self.img_path)
- retval, buffer = cv2.imencode('.png', img)
- encoded_image = base64.b64encode(buffer).decode('utf-8')
- return encoded_image
-
-
-if __name__ == '__main__':
- path = 'stock_mountain.png'
- prompt = 'a large avalanche'
-
- control_net = ControlnetRequest(prompt, path)
- control_net.build_body()
- output = control_net.send_request()
-
- result = output['images'][0]
-
- image = Image.open(io.BytesIO(base64.b64decode(result.split(",", 1)[-1])))
- image.show()
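-# To keep the output instead of just previewing it, one could save it
-# (filename illustrative): image.save("txt2img_canny_result.png")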
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/stock_mountain.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/stock_mountain.png
deleted file mode 100644
index 4c036e8..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/txt2img_example/stock_mountain.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/visual_chatgpt.ipynb b/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/visual_chatgpt.ipynb
deleted file mode 100644
index cb71d72..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/example/visual_chatgpt.ipynb
+++ /dev/null
@@ -1,60 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Run WebUI in API mode\n",
- "nohup python launch.py --api --xformers &\n",
- "\n",
- "# Wait until webui fully startup\n",
- "tail -f nohup.out"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Install/Upgrade transformers\n",
- "pip install -U transformers\n",
- "\n",
- "# Install deps\n",
- "pip install langchain==0.0.101 openai \n",
- "\n",
- "# Run exmaple\n",
- "python example/chatgpt.py"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "pynb",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.9"
- },
- "orig_nbformat": 4,
- "vscode": {
- "interpreter": {
- "hash": "d73345514d8c18d9a1da7351d222dbd2834c7f4a09e728a0d1f4c4580fbec206"
- }
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
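Annotation: the notebook waits for startup by tailing nohup.out; polling the server until it answers is a script-friendly alternative (a sketch assuming the default port 7860 and the `requests` package):

import time
import requests

def wait_for_webui(base_url="http://localhost:7860", timeout=300):
    # return True once the server answers any HTTP request, False on timeout
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            requests.get(base_url, timeout=5)
            return True
        except requests.exceptions.RequestException:
            time.sleep(2)
    return False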
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-gen.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-gen.png
deleted file mode 100644
index 128292e..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-gen.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-pose.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-pose.png
deleted file mode 100644
index 83b92e3..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-pose.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-source.jpg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-source.jpg
deleted file mode 100644
index 01e2bdd..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/an-source.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-gen.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-gen.png
deleted file mode 100644
index a3ac242..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-gen.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-source.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-source.png
deleted file mode 100644
index 7f77950..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/bal-source.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm1.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm1.png
deleted file mode 100644
index ce348ea..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm1.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm2.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm2.png
deleted file mode 100644
index 01b4e68..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm2.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm3.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm3.png
deleted file mode 100644
index f9fc7fb..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm3.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm4.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm4.png
deleted file mode 100644
index 1925b38..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/cm4.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.jpg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.jpg
deleted file mode 100644
index 78a6d81..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.png
deleted file mode 100644
index a67da58..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/dog_rel.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_gen.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_gen.png
deleted file mode 100644
index 5d0dbf1..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_gen.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_hed.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_hed.png
deleted file mode 100644
index fa7feb7..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_hed.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_source.jpg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_source.jpg
deleted file mode 100644
index 0a21210..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/evt_source.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro-out.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro-out.png
deleted file mode 100644
index d1eb025..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro-out.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_canny.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_canny.png
deleted file mode 100644
index 318f4fa..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_canny.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_input.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_input.png
deleted file mode 100644
index 0ee95dd..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/mahiro_input.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/ref.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/ref.png
deleted file mode 100644
index e5461fd..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/ref.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-dep.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-dep.png
deleted file mode 100644
index 1896956..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-dep.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-out.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-out.png
deleted file mode 100644
index 6bf4670..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-out.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-src.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-src.png
deleted file mode 100644
index 1bece79..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/samples/sk-b-src.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/body_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/body_test.py
deleted file mode 100644
index 54cde1f..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/body_test.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import unittest
-import numpy as np
-
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from annotator.openpose.body import Body, Keypoint, BodyResult
-
-class TestFormatBodyResult(unittest.TestCase):
- def setUp(self):
- self.candidate = np.array([
- [10, 20, 0.9, 0],
- [30, 40, 0.8, 1],
- [50, 60, 0.7, 2],
- [70, 80, 0.6, 3]
- ])
-
- self.subset = np.array([
- [-1, 0, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1.7, 2],
- [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 0.6, 1]
- ])
-
- def test_format_body_result(self):
- expected_result = [
- BodyResult(
- keypoints=[
- None,
- Keypoint(x=10, y=20, score=0.9, id=0),
- Keypoint(x=30, y=40, score=0.8, id=1),
- None
- ] + [None] * 14,
- total_score=1.7,
- total_parts=2
- ),
- BodyResult(
- keypoints=[None] * 17 + [
- Keypoint(x=70, y=80, score=0.6, id=3)
- ],
- total_score=0.6,
- total_parts=1
- )
- ]
-
- result = Body.format_body_result(self.candidate, self.subset)
-
- self.assertEqual(result, expected_result)
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
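Annotation: the fixtures in the deleted test encode the classic OpenPose post-processing layout — each `candidate` row is `(x, y, score, id)`, and each `subset` row lists, for the 18 body parts, an index into `candidate` (or -1 for a missing part) followed by the person's total score and detected-part count. A minimal sketch of the mapping the test exercises (not the annotator's actual implementation):

def format_body_sketch(candidate, subset):
    people = []
    for row in subset:
        part_indices, total_score, total_parts = row[:18], row[18], row[19]
        keypoints = [
            None if idx == -1 else tuple(candidate[int(idx)])  # (x, y, score, id)
            for idx in part_indices
        ]
        people.append((keypoints, total_score, total_parts))
    return people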
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/detection_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/detection_test.py
deleted file mode 100644
index 5ed0761..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/detection_test.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import unittest
-import numpy as np
-
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from annotator.openpose.util import faceDetect, handDetect
-from annotator.openpose.body import Keypoint, BodyResult
-
-class TestFaceDetect(unittest.TestCase):
- def test_no_faces(self):
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
- body = BodyResult([None] * 18, total_score=3, total_parts=0)
- expected_result = None
- result = faceDetect(body, oriImg)
-
- self.assertEqual(result, expected_result)
-
- def test_single_face(self):
- body = BodyResult([
- Keypoint(50, 50),
- *([None] * 13),
- Keypoint(30, 40),
- Keypoint(70, 40),
- Keypoint(20, 50),
- Keypoint(80, 50),
- ], total_score=2, total_parts=5)
-
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
-
- expected_result = (0, 0, 120)
- result = faceDetect(body, oriImg)
-
- self.assertEqual(result, expected_result)
-
-class TestHandDetect(unittest.TestCase):
- def test_no_hands(self):
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
- body = BodyResult([None] * 18, total_score=3, total_parts=0)
- expected_result = []
- result = handDetect(body, oriImg)
-
- self.assertEqual(result, expected_result)
-
- def test_single_left_hand(self):
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
-
- body = BodyResult([
- None, None, None, None, None,
- Keypoint(20, 20),
- Keypoint(40, 30),
- Keypoint(60, 40),
- *([None] * 8),
- Keypoint(20, 60),
- Keypoint(40, 70),
- Keypoint(60, 80)
- ], total_score=3, total_parts=0.5)
-
- expected_result = [(49, 26, 33, True)]
- result = handDetect(body, oriImg)
-
- self.assertEqual(result, expected_result)
-
- def test_single_right_hand(self):
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
-
- body = BodyResult([
- None, None,
- Keypoint(20, 20),
- Keypoint(40, 30),
- Keypoint(60, 40),
- *([None] * 11),
- Keypoint(20, 60),
- Keypoint(40, 70),
- Keypoint(60, 80)
- ], total_score=3, total_parts=0.5)
-
- expected_result = [(49, 26, 33, False)]
- result = handDetect(body, oriImg)
-
- self.assertEqual(result, expected_result)
-
- def test_multiple_hands(self):
- body = BodyResult([
- Keypoint(20, 20),
- Keypoint(40, 30),
- Keypoint(60, 40),
- Keypoint(20, 60),
- Keypoint(40, 70),
- Keypoint(60, 80),
- Keypoint(10, 10),
- Keypoint(30, 20),
- Keypoint(50, 30),
- Keypoint(10, 50),
- Keypoint(30, 60),
- Keypoint(50, 70),
- *([None] * 6),
- ], total_score=3, total_parts=0.5)
-
- oriImg = np.zeros((100, 100, 3), dtype=np.uint8)
-
- expected_result = [(0, 0, 100, True), (16, 43, 56, False)]
- result = handDetect(body, oriImg)
- self.assertEqual(result, expected_result)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/json_encode_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/json_encode_test.py
deleted file mode 100644
index 3940751..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/json_encode_test.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import json
-import unittest
-import numpy as np
-
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from annotator.openpose import encode_poses_as_json, PoseResult, Keypoint
-from annotator.openpose.body import BodyResult
-
-class TestEncodePosesAsJson(unittest.TestCase):
- def test_empty_list(self):
- poses = []
- canvas_height = 1080
- canvas_width = 1920
- result = encode_poses_as_json(poses, canvas_height, canvas_width)
- expected = json.dumps({
- 'people': [],
- 'canvas_height': canvas_height,
- 'canvas_width': canvas_width,
- }, indent=4)
- self.assertEqual(result, expected)
-
- def test_single_pose_no_keypoints(self):
- poses = [PoseResult(BodyResult(None, 0, 0), None, None, None)]
- canvas_height = 1080
- canvas_width = 1920
- result = encode_poses_as_json(poses, canvas_height, canvas_width)
- expected = json.dumps({
- 'people': [
- {
- 'pose_keypoints_2d': None,
- 'face_keypoints_2d': None,
- 'hand_left_keypoints_2d': None,
- 'hand_right_keypoints_2d': None,
- },
- ],
- 'canvas_height': canvas_height,
- 'canvas_width': canvas_width,
- }, indent=4)
- self.assertEqual(result, expected)
-
- def test_single_pose_with_keypoints(self):
- keypoints = [Keypoint(np.float32(0.5), np.float32(0.5)), None, Keypoint(0.6, 0.6)]
- poses = [PoseResult(BodyResult(keypoints, 0, 0), keypoints, keypoints, keypoints)]
- canvas_height = 1080
- canvas_width = 1920
- result = encode_poses_as_json(poses, canvas_height, canvas_width)
- expected = json.dumps({
- 'people': [
- {
- 'pose_keypoints_2d': [
- 0.5, 0.5, 1.0,
- 0.0, 0.0, 0.0,
- 0.6, 0.6, 1.0,
- ],
- 'face_keypoints_2d': [
- 0.5, 0.5, 1.0,
- 0.0, 0.0, 0.0,
- 0.6, 0.6, 1.0,
- ],
- 'hand_left_keypoints_2d': [
- 0.5, 0.5, 1.0,
- 0.0, 0.0, 0.0,
- 0.6, 0.6, 1.0,
- ],
- 'hand_right_keypoints_2d': [
- 0.5, 0.5, 1.0,
- 0.0, 0.0, 0.0,
- 0.6, 0.6, 1.0,
- ],
- },
- ],
- 'canvas_height': canvas_height,
- 'canvas_width': canvas_width,
- }, indent=4)
- self.assertEqual(result, expected)
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/openpose_e2e_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/openpose_e2e_test.py
deleted file mode 100644
index b078a15..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/annotator_tests/openpose_tests/openpose_e2e_test.py
+++ /dev/null
@@ -1,108 +0,0 @@
-import unittest
-import cv2
-import numpy as np
-from typing import Dict
-
-
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from annotator.openpose import OpenposeDetector
-
-class TestOpenposeDetector(unittest.TestCase):
- image_path = './tests/images'
- def setUp(self) -> None:
- self.detector = OpenposeDetector()
- self.detector.load_model()
-
- def tearDown(self) -> None:
- self.detector.unload_model()
-
- def expect_same_image(self, img1, img2, diff_img_path: str):
- # Calculate the difference between the two images
- diff = cv2.absdiff(img1, img2)
-
- # Set a threshold to highlight the different pixels
- threshold = 30
- diff_highlighted = np.where(diff > threshold, 255, 0).astype(np.uint8)
-
- # Assert that the two images are similar within a tolerance
- similar = np.allclose(img1, img2, rtol=1e-05, atol=1e-08)
- if not similar:
- # Save the diff_highlighted image to inspect the differences
- cv2.imwrite(diff_img_path, cv2.cvtColor(diff_highlighted, cv2.COLOR_RGB2BGR))
-
- self.assertTrue(similar)
-
- # Save expectation images as PNG so that no compression artifacts creep in.
- def template(self, test_image: str, expected_image: str, detector_config: Dict, overwrite_expectation: bool = False):
- oriImg = cv2.cvtColor(cv2.imread(test_image), cv2.COLOR_BGR2RGB)
- canvas = self.detector(oriImg, **detector_config)
-
- # Create expectation file
- if overwrite_expectation:
- cv2.imwrite(expected_image, cv2.cvtColor(canvas, cv2.COLOR_RGB2BGR))
- else:
- expected_canvas = cv2.cvtColor(cv2.imread(expected_image), cv2.COLOR_BGR2RGB)
- self.expect_same_image(canvas, expected_canvas, diff_img_path=expected_image.replace('.png', '_diff.png'))
-
- def test_body(self):
- self.template(
- test_image = f'{TestOpenposeDetector.image_path}/ski.jpg',
- expected_image = f'{TestOpenposeDetector.image_path}/expected_ski_output.png',
- detector_config=dict(),
- overwrite_expectation=False
- )
-
- def test_hand(self):
- self.template(
- test_image = f'{TestOpenposeDetector.image_path}/woman.jpeg',
- expected_image = f'{TestOpenposeDetector.image_path}/expected_woman_hand_output.png',
- detector_config=dict(
- include_body=False,
- include_face=False,
- include_hand=True,
- ),
- overwrite_expectation=False
- )
-
- def test_face(self):
- self.template(
- test_image = f'{TestOpenposeDetector.image_path}/woman.jpeg',
- expected_image = f'{TestOpenposeDetector.image_path}/expected_woman_face_output.png',
- detector_config=dict(
- include_body=False,
- include_face=True,
- include_hand=False,
- ),
- overwrite_expectation=False
- )
-
- def test_all(self):
- self.template(
- test_image = f'{TestOpenposeDetector.image_path}/woman.jpeg',
- expected_image = f'{TestOpenposeDetector.image_path}/expected_woman_all_output.png',
- detector_config=dict(
- include_body=True,
- include_face=True,
- include_hand=True,
- ),
- overwrite_expectation=False
- )
-
- def test_dw(self):
- self.template(
- test_image = f'{TestOpenposeDetector.image_path}/woman.jpeg',
- expected_image = f'{TestOpenposeDetector.image_path}/expected_woman_dw_all_output.png',
- detector_config=dict(
- include_body=True,
- include_face=True,
- include_hand=True,
- use_dw_pose=True,
- ),
- overwrite_expectation=False,
- )
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/__init__.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/batch_hijack_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/batch_hijack_test.py
deleted file mode 100644
index db75898..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/batch_hijack_test.py
+++ /dev/null
@@ -1,335 +0,0 @@
-import unittest.mock
-import importlib
-from typing import Any
-
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from modules import processing, scripts, shared
-from scripts import controlnet, external_code, batch_hijack
-
-
-batch_hijack.instance.undo_hijack()
-original_process_images_inner = processing.process_images_inner
-
-
-class TestBatchHijack(unittest.TestCase):
- @unittest.mock.patch('modules.script_callbacks.on_script_unloaded')
- def setUp(self, on_script_unloaded_mock):
- self.on_script_unloaded_mock = on_script_unloaded_mock
-
- self.batch_hijack_object = batch_hijack.BatchHijack()
- self.batch_hijack_object.do_hijack()
-
- def tearDown(self):
- self.batch_hijack_object.undo_hijack()
-
- def test_do_hijack__registers_on_script_unloaded(self):
- self.on_script_unloaded_mock.assert_called_once_with(self.batch_hijack_object.undo_hijack)
-
- def test_do_hijack__call_once__hijacks_once(self):
- self.assertEqual(getattr(processing, '__controlnet_original_process_images_inner'), original_process_images_inner)
- self.assertEqual(processing.process_images_inner, self.batch_hijack_object.processing_process_images_hijack)
-
- @unittest.mock.patch('modules.processing.__controlnet_original_process_images_inner')
- def test_do_hijack__multiple_times__hijacks_once(self, process_images_inner_mock):
- self.batch_hijack_object.do_hijack()
- self.batch_hijack_object.do_hijack()
- self.batch_hijack_object.do_hijack()
- self.assertEqual(process_images_inner_mock, getattr(processing, '__controlnet_original_process_images_inner'))
-
-
-class TestGetControlNetBatchesWorks(unittest.TestCase):
- def setUp(self):
- self.p = unittest.mock.MagicMock()
- self.p.scripts = scripts.scripts_txt2img
- self.cn_script = controlnet.Script()
- self.p.scripts.alwayson_scripts = [self.cn_script]
- self.p.script_args = []
-
- def tearDown(self):
- batch_hijack.instance.dispatch_callbacks(batch_hijack.instance.postprocess_batch_callbacks, self.p)
-
- def assert_get_cn_batches_works(self, batch_images_list):
- self.cn_script.args_from = 0
- self.cn_script.args_to = self.cn_script.args_from + len(self.p.script_args)
-
- is_cn_batch, batches, output_dir, _ = batch_hijack.get_cn_batches(self.p)
- batch_hijack.instance.dispatch_callbacks(batch_hijack.instance.process_batch_callbacks, self.p, batches, output_dir)
-
- batch_units = [unit for unit in self.p.script_args if getattr(unit, 'input_mode', batch_hijack.InputMode.SIMPLE) == batch_hijack.InputMode.BATCH]
- if batch_units:
- self.assertEqual(min(len(unit.batch_images) for unit in batch_units), len(batches))
- else:
- self.assertEqual(1, len(batches))
-
- for i, unit in enumerate(self.cn_script.enabled_units):
- self.assertListEqual(batch_images_list[i], list(unit.batch_images))
-
- def test_get_cn_batches__empty(self):
- is_batch, batches, _, _ = batch_hijack.get_cn_batches(self.p)
- self.assertEqual(1, len(batches))
- self.assertEqual(is_batch, False)
-
- def test_get_cn_batches__1_simple(self):
- self.p.script_args.append(external_code.ControlNetUnit(image=get_dummy_image()))
- self.assert_get_cn_batches_works([
- [self.p.script_args[0].image],
- ])
-
- def test_get_cn_batches__2_simples(self):
- self.p.script_args.extend([
- external_code.ControlNetUnit(image=get_dummy_image(0)),
- external_code.ControlNetUnit(image=get_dummy_image(1)),
- ])
- self.assert_get_cn_batches_works([
- [get_dummy_image(0)],
- [get_dummy_image(1)],
- ])
-
- def test_get_cn_batches__1_batch(self):
- self.p.script_args.extend([
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- ),
- ])
- self.assert_get_cn_batches_works([
- [
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- ])
-
- def test_get_cn_batches__2_batches(self):
- self.p.script_args.extend([
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- ),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(2),
- get_dummy_image(3),
- ],
- ),
- ])
- self.assert_get_cn_batches_works([
- [
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- [
- get_dummy_image(2),
- get_dummy_image(3),
- ],
- ])
-
- def test_get_cn_batches__2_mixed(self):
- self.p.script_args.extend([
- external_code.ControlNetUnit(image=get_dummy_image(0)),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(1),
- get_dummy_image(2),
- ],
- ),
- ])
- self.assert_get_cn_batches_works([
- [
- get_dummy_image(0),
- get_dummy_image(0),
- ],
- [
- get_dummy_image(1),
- get_dummy_image(2),
- ],
- ])
-
- def test_get_cn_batches__3_mixed(self):
- self.p.script_args.extend([
- external_code.ControlNetUnit(image=get_dummy_image(0)),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(1),
- get_dummy_image(2),
- get_dummy_image(3),
- ],
- ),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(4),
- get_dummy_image(5),
- ],
- ),
- ])
- self.assert_get_cn_batches_works([
- [
- get_dummy_image(0),
- get_dummy_image(0),
- ],
- [
- get_dummy_image(1),
- get_dummy_image(2),
- ],
- [
- get_dummy_image(4),
- get_dummy_image(5),
- ],
- ])
-
-class TestProcessImagesPatchWorks(unittest.TestCase):
- @unittest.mock.patch('modules.script_callbacks.on_script_unloaded')
- def setUp(self, on_script_unloaded_mock):
- self.on_script_unloaded_mock = on_script_unloaded_mock
- self.p = unittest.mock.MagicMock()
- self.p.scripts = scripts.scripts_txt2img
- self.cn_script = controlnet.Script()
- self.p.scripts.alwayson_scripts = [self.cn_script]
- self.p.script_args = []
- self.p.all_seeds = [0]
- self.p.all_subseeds = [0]
- self.old_model, shared.sd_model = shared.sd_model, unittest.mock.MagicMock()
-
- self.batch_hijack_object = batch_hijack.BatchHijack()
- self.callbacks_mock = unittest.mock.MagicMock()
- self.batch_hijack_object.process_batch_callbacks.append(self.callbacks_mock.process)
- self.batch_hijack_object.process_batch_each_callbacks.append(self.callbacks_mock.process_each)
- self.batch_hijack_object.postprocess_batch_each_callbacks.insert(0, self.callbacks_mock.postprocess_each)
- self.batch_hijack_object.postprocess_batch_callbacks.insert(0, self.callbacks_mock.postprocess)
- self.batch_hijack_object.do_hijack()
- shared.state.begin()
-
- def tearDown(self):
- shared.state.end()
- self.batch_hijack_object.undo_hijack()
- shared.sd_model = self.old_model
-
- @unittest.mock.patch('modules.processing.__controlnet_original_process_images_inner')
- def assert_process_images_hijack_called(self, process_images_mock, batch_count):
- process_images_mock.return_value = processing.Processed(self.p, [get_dummy_image('output')])
- with unittest.mock.patch.dict(shared.opts.data, {
- 'controlnet_show_batch_images_in_ui': True,
- }):
- res = processing.process_images_inner(self.p)
-
- self.assertEqual(res, process_images_mock.return_value)
-
- if batch_count > 0:
- self.callbacks_mock.process.assert_called()
- self.callbacks_mock.postprocess.assert_called()
- else:
- self.callbacks_mock.process.assert_not_called()
- self.callbacks_mock.postprocess.assert_not_called()
-
- self.assertEqual(self.callbacks_mock.process_each.call_count, batch_count)
- self.assertEqual(self.callbacks_mock.postprocess_each.call_count, batch_count)
-
- def test_process_images_no_units_forwards(self):
- self.assert_process_images_hijack_called(batch_count=0)
-
- def test_process_images__only_simple_units__forwards(self):
- self.p.script_args = [
- external_code.ControlNetUnit(image=get_dummy_image()),
- external_code.ControlNetUnit(image=get_dummy_image()),
- ]
- self.assert_process_images_hijack_called(batch_count=0)
-
- def test_process_images__1_batch_1_unit__runs_1_batch(self):
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(),
- ],
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=1)
-
- def test_process_images__2_batches_1_unit__runs_2_batches(self):
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=2)
-
- def test_process_images__8_batches_1_unit__runs_8_batches(self):
- batch_count = 8
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[get_dummy_image(i) for i in range(batch_count)]
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=batch_count)
-
- def test_process_images__1_batch_2_units__runs_1_batch(self):
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[get_dummy_image(0)]
- ),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[get_dummy_image(1)]
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=1)
-
- def test_process_images__2_batches_2_units__runs_2_batches(self):
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(0),
- get_dummy_image(1),
- ],
- ),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(2),
- get_dummy_image(3),
- ],
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=2)
-
- def test_process_images__3_batches_2_mixed_units__runs_3_batches(self):
- self.p.script_args = [
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.BATCH,
- batch_images=[
- get_dummy_image(0),
- get_dummy_image(1),
- get_dummy_image(2),
- ],
- ),
- controlnet.UiControlNetUnit(
- input_mode=batch_hijack.InputMode.SIMPLE,
- image=get_dummy_image(3),
- ),
- ]
- self.assert_process_images_hijack_called(batch_count=3)
-
-
-def get_dummy_image(name: Any = 0):
- return f'base64#{name}...'
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
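Annotation: taken together, the deleted batching tests pin down a simple rule — the number of batches is the shortest batch unit's image count (1 if every unit is simple), and simple units repeat their single image across batches. A compact sketch of that rule, reconstructed from the expectations above (not the extension's actual get_cn_batches):

def per_unit_batches(units):
    # units: each entry is either a single image (simple) or a list (batch)
    batch_units = [u for u in units if isinstance(u, list)]
    n = min((len(u) for u in batch_units), default=1)
    return [u[:n] if isinstance(u, list) else [u] * n for u in units]

# mirrors test_get_cn_batches__3_mixed: min(3, 2) == 2 batches
assert per_unit_batches(['img0', ['img1', 'img2', 'img3'], ['img4', 'img5']]) == [
    ['img0', 'img0'], ['img1', 'img2'], ['img4', 'img5']]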
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/cn_script_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/cn_script_test.py
deleted file mode 100644
index da8669c..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/cn_script_test.py
+++ /dev/null
@@ -1,172 +0,0 @@
-from typing import Any, Dict, List
-import unittest
-from PIL import Image
-import numpy as np
-
-import importlib
-
-utils = importlib.import_module("extensions.sd-webui-controlnet.tests.utils", "utils")
-utils.setup_test_env()
-
-from scripts import external_code, processor
-from scripts.controlnet import prepare_mask, Script, set_numpy_seed
-from modules import processing
-
-
-class TestPrepareMask(unittest.TestCase):
- def test_prepare_mask(self):
- p = processing.StableDiffusionProcessing()
- p.inpainting_mask_invert = True
- p.mask_blur = 5
-
- mask = Image.new("RGB", (10, 10), color="white")
-
- processed_mask = prepare_mask(mask, p)
-
- # Check that mask is correctly converted to grayscale
- self.assertTrue(processed_mask.mode, "L")
-
- # Check that mask colors are correctly inverted
- self.assertEqual(
- processed_mask.getpixel((0, 0)), 0
- ) # inverted white should be black
-
- p.inpainting_mask_invert = False
- processed_mask = prepare_mask(mask, p)
-
- # Check that mask colors are not inverted when 'inpainting_mask_invert' is False
- self.assertEqual(
- processed_mask.getpixel((0, 0)), 255
- ) # white should remain white
-
- p.mask_blur = 0
- mask = Image.new("RGB", (10, 10), color="black")
- processed_mask = prepare_mask(mask, p)
-
- # Check that mask is not blurred when 'mask_blur' is 0
- self.assertEqual(
- processed_mask.getpixel((0, 0)), 0
- ) # black should remain black
-
-
-class TestSetNumpySeed(unittest.TestCase):
- def test_seed_subseed_minus_one(self):
- p = processing.StableDiffusionProcessing()
- p.seed = -1
- p.subseed = -1
- p.all_seeds = [123, 456]
- expected_seed = (123 + 123) & 0xFFFFFFFF
- self.assertEqual(set_numpy_seed(p), expected_seed)
-
- def test_valid_seed_subseed(self):
- p = processing.StableDiffusionProcessing()
- p.seed = 50
- p.subseed = 100
- p.all_seeds = [123, 456]
- expected_seed = (50 + 100) & 0xFFFFFFFF
- self.assertEqual(set_numpy_seed(p), expected_seed)
-
- def test_invalid_seed_subseed(self):
- p = processing.StableDiffusionProcessing()
- p.seed = "invalid"
- p.subseed = 2.5
- p.all_seeds = [123, 456]
- self.assertEqual(set_numpy_seed(p), None)
-
- def test_empty_all_seeds(self):
- p = processing.StableDiffusionProcessing()
- p.seed = -1
- p.subseed = 2
- p.all_seeds = []
- self.assertEqual(set_numpy_seed(p), None)
-
- def test_random_state_change(self):
- p = processing.StableDiffusionProcessing()
- p.seed = 50
- p.subseed = 100
- p.all_seeds = [123, 456]
- expected_seed = (50 + 100) & 0xFFFFFFFF
-
- np.random.seed(0) # set a known seed
- before_random = np.random.randint(0, 1000) # get a random integer
-
- seed = set_numpy_seed(p)
- self.assertEqual(seed, expected_seed)
-
- after_random = np.random.randint(0, 1000) # get another random integer
-
- self.assertNotEqual(before_random, after_random)
-
-
-class MockImg2ImgProcessing(processing.StableDiffusionProcessing):
- """Mock the Img2Img processing as the WebUI version have dependency on
- `sd_model`."""
-
- def __init__(self, init_images, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.init_images = init_images
-
-
-class TestScript(unittest.TestCase):
- sample_base64_image = (
- "data:image/png;base64,"
- "iVBORw0KGgoAAAANSUhEUgAAARMAAAC3CAIAAAC+MS2jAAAAqUlEQVR4nO3BAQ"
- "0AAADCoPdPbQ8HFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
- "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
- "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
- "AAAAAAAAAAAAAAAAAAAAAAAA/wZOlAAB5tU+nAAAAABJRU5ErkJggg=="
- )
-
- sample_np_image = np.array(
- [[100, 200, 50], [150, 75, 225], [30, 120, 180]], dtype=np.uint8
- )
-
- def test_bound_check_params(self):
- def param_required(module: str, param: str) -> bool:
- configs = processor.preprocessor_sliders_config[module]
- config_index = ("processor_res", "threshold_a", "threshold_b").index(param)
- return config_index < len(configs) and configs[config_index] is not None
-
- for module in processor.preprocessor_sliders_config.keys():
- for param in ("processor_res", "threshold_a", "threshold_b"):
- with self.subTest(param=param, module=module):
- unit = external_code.ControlNetUnit(
- module=module,
- **{param: -100},
- )
- Script.bound_check_params(unit)
- if param_required(module, param):
- self.assertGreaterEqual(getattr(unit, param), 0)
- else:
- self.assertEqual(getattr(unit, param), -100)
-
- def test_choose_input_image(self):
- with self.subTest(name="no image"):
- with self.assertRaises(ValueError):
- Script.choose_input_image(
- p=processing.StableDiffusionProcessing(),
- unit=external_code.ControlNetUnit(),
- idx=0,
- )
-
- with self.subTest(name="control net input"):
- _, from_a1111 = Script.choose_input_image(
- p=MockImg2ImgProcessing(init_images=[TestScript.sample_np_image]),
- unit=external_code.ControlNetUnit(
- image=TestScript.sample_base64_image, module="none"
- ),
- idx=0,
- )
- self.assertFalse(from_a1111)
-
- with self.subTest(name="A1111 input"):
- _, from_a1111 = Script.choose_input_image(
- p=MockImg2ImgProcessing(init_images=[TestScript.sample_np_image]),
- unit=external_code.ControlNetUnit(module="none"),
- idx=0,
- )
- self.assertTrue(from_a1111)
-
-
-if __name__ == "__main__":
- unittest.main()
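Annotation: the seed tests above fully determine the behavior being checked — a seed or subseed of -1 falls back to all_seeds[0], the sum is masked to 32 bits and used to seed NumPy, and any type or lookup error yields None. A sketch consistent with those expectations (not the extension's actual set_numpy_seed):

from typing import Optional
import numpy as np

def set_numpy_seed_sketch(seed, subseed, all_seeds) -> Optional[int]:
    try:
        s = all_seeds[0] if seed == -1 else seed
        ss = all_seeds[0] if subseed == -1 else subseed
        value = int(s + ss) & 0xFFFFFFFF
        np.random.seed(value)
        return value
    except (TypeError, IndexError):
        return None

assert set_numpy_seed_sketch(-1, -1, [123, 456]) == 246
assert set_numpy_seed_sketch("invalid", 2.5, [123, 456]) is None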
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/utils_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/utils_test.py
deleted file mode 100644
index c6280aa..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/cn_script/utils_test.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from scripts.utils import ndarray_lru_cache, get_unique_axis0
-
-import unittest
-import numpy as np
-
-class TestNumpyLruCache(unittest.TestCase):
-
- def setUp(self):
- self.arr1 = np.array([1, 2, 3, 4, 5])
- self.arr2 = np.array([1, 2, 3, 4, 5])
-
- @ndarray_lru_cache(max_size=128)
- def add_one(self, arr):
- return arr + 1
-
- def test_same_array(self):
- # Test that the decorator works with numpy arrays.
- result1 = self.add_one(self.arr1)
- result2 = self.add_one(self.arr1)
-
- # If caching is working correctly, these should be the same object.
- self.assertIs(result1, result2)
-
- def test_different_array_same_data(self):
- # Test that the decorator works with different numpy arrays with the same data.
- result1 = self.add_one(self.arr1)
- result2 = self.add_one(self.arr2)
-
- # If caching is working correctly, these should be the same object.
- self.assertIs(result1, result2)
-
- def test_cache_size(self):
- # Test that the cache size limit is respected.
- arrs = [np.array([i]) for i in range(150)]
-
- # Add all arrays to the cache.
-
- result1 = self.add_one(arrs[0])
- for arr in arrs[1:]:
- self.add_one(arr)
-
- # Check that the first array is no longer in the cache.
- result2 = self.add_one(arrs[0])
-
- # If the cache size limit is working correctly, these should not be the same object.
- self.assertIsNot(result1, result2)
-
- def test_large_array(self):
- # Create two large arrays with the same elements in the beginning and end, but one different element in the middle.
- arr1 = np.ones(10000)
- arr2 = np.ones(10000)
- arr2[len(arr2)//2] = 0
-
- result1 = self.add_one(arr1)
- result2 = self.add_one(arr2)
-
- # If hashing is working correctly, these should not be the same object because the input arrays are not equal.
- self.assertIsNot(result1, result2)
-
-class TestUniqueFunctions(unittest.TestCase):
- def test_get_unique_axis0(self):
- data = np.random.randint(0, 100, size=(100000, 3))
- data = np.concatenate((data, data))
- numpy_unique_res = np.unique(data, axis=0)
- get_unique_axis0_res = get_unique_axis0(data)
- self.assertEqual(np.array_equal(
- np.sort(numpy_unique_res, axis=0), np.sort(get_unique_axis0_res, axis=0),
- ), True)
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
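Annotation: the cache tests above rely on making NumPy arrays usable as lru_cache keys by value rather than by identity. A plausible reconstruction of the technique — hashing the array's bytes behind a wrapper — offered as a sketch, not the extension's scripts.utils implementation:

import functools
import hashlib
import numpy as np

class _HashableNdarray:
    # wraps an ndarray so functools.lru_cache can hash and compare it by value
    def __init__(self, arr):
        self.arr = arr
        self._hash = int.from_bytes(hashlib.sha256(arr.tobytes()).digest()[:8], "big")

    def __hash__(self):
        return self._hash

    def __eq__(self, other):
        return np.array_equal(self.arr, other.arr)

def ndarray_lru_cache(max_size=128):
    def decorator(fn):
        @functools.lru_cache(maxsize=max_size)
        def cached(*wrapped):
            return fn(*(w.arr if isinstance(w, _HashableNdarray) else w for w in wrapped))

        @functools.wraps(fn)
        def wrapper(*args):
            return cached(*(_HashableNdarray(a) if isinstance(a, np.ndarray) else a
                            for a in args))
        return wrapper
    return decorator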
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/__init__.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py
deleted file mode 100644
index 2fb9d15..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import unittest
-import importlib
-
-import numpy as np
-
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from copy import copy
-from scripts import external_code
-from scripts import controlnet
-from modules import scripts, ui, shared
-
-
-class TestExternalCodeWorking(unittest.TestCase):
- max_models = 6
- args_offset = 10
-
- def setUp(self):
- self.scripts = copy(scripts.scripts_txt2img)
- self.scripts.initialize_scripts(False)
- ui.create_ui()
- self.cn_script = controlnet.Script()
- self.cn_script.args_from = self.args_offset
- self.cn_script.args_to = self.args_offset + self.max_models
- self.scripts.alwayson_scripts = [self.cn_script]
- self.script_args = [None] * self.cn_script.args_from
-
- self.initial_max_models = shared.opts.data.get("control_net_max_models_num", 1)
- shared.opts.data.update(control_net_max_models_num=self.max_models)
-
- self.extra_models = 0
-
- def tearDown(self):
- shared.opts.data.update(control_net_max_models_num=self.initial_max_models)
-
- def get_expected_args_to(self):
- args_len = max(self.max_models, len(self.cn_units))
- return self.args_offset + args_len
-
- def assert_update_in_place_ok(self):
- external_code.update_cn_script_in_place(self.scripts, self.script_args, self.cn_units)
- self.assertEqual(self.cn_script.args_to, self.get_expected_args_to())
-
- def test_empty_resizes_min_args(self):
- self.cn_units = []
- self.assert_update_in_place_ok()
-
- def test_empty_resizes_extra_args(self):
- extra_models = 1
- self.cn_units = [external_code.ControlNetUnit()] * (self.max_models + extra_models)
- self.assert_update_in_place_ok()
-
-
-class TestControlNetUnitConversion(unittest.TestCase):
- def setUp(self):
- self.dummy_image = 'base64...'
- self.input = {}
- self.expected = external_code.ControlNetUnit()
-
- def assert_converts_to_expected(self):
- self.assertEqual(vars(external_code.to_processing_unit(self.input)), vars(self.expected))
-
- def test_empty_dict_works(self):
- self.assert_converts_to_expected()
-
- def test_image_works(self):
- self.input = {
- 'image': self.dummy_image
- }
- self.expected = external_code.ControlNetUnit(image=self.dummy_image)
- self.assert_converts_to_expected()
-
- def test_image_alias_works(self):
- self.input = {
- 'input_image': self.dummy_image
- }
- self.expected = external_code.ControlNetUnit(image=self.dummy_image)
- self.assert_converts_to_expected()
-
- def test_masked_image_works(self):
- self.input = {
- 'image': self.dummy_image,
- 'mask': self.dummy_image,
- }
- self.expected = external_code.ControlNetUnit(image={'image': self.dummy_image, 'mask': self.dummy_image})
- self.assert_converts_to_expected()
-
-
-class TestControlNetUnitImageToDict(unittest.TestCase):
- def setUp(self):
- self.dummy_image = utils.readImage("test/test_files/img2img_basic.png")
- self.input = external_code.ControlNetUnit()
- self.expected_image = external_code.to_base64_nparray(self.dummy_image)
- self.expected_mask = external_code.to_base64_nparray(self.dummy_image)
-
- def assert_dict_is_valid(self):
- actual_dict = controlnet.image_dict_from_any(self.input.image)
- self.assertEqual(actual_dict['image'].tolist(), self.expected_image.tolist())
- self.assertEqual(actual_dict['mask'].tolist(), self.expected_mask.tolist())
-
- def test_none(self):
- self.assertEqual(controlnet.image_dict_from_any(self.input.image), None)
-
- def test_image_without_mask(self):
- self.input.image = self.dummy_image
- self.expected_mask = np.zeros_like(self.expected_image, dtype=np.uint8)
- self.assert_dict_is_valid()
-
- def test_masked_image_tuple(self):
- self.input.image = (self.dummy_image, self.dummy_image,)
- self.assert_dict_is_valid()
-
- def test_masked_image_dict(self):
- self.input.image = {'image': self.dummy_image, 'mask': self.dummy_image}
- self.assert_dict_is_valid()
-
-
-class TestPixelPerfectResolution(unittest.TestCase):
- def test_outer_fit(self):
- image = np.zeros((100, 100, 3))
- target_H, target_W = 50, 100
- resize_mode = external_code.ResizeMode.OUTER_FIT
- result = external_code.pixel_perfect_resolution(image, target_H, target_W, resize_mode)
- expected = 50 # manually computed expected result
- self.assertEqual(result, expected)
-
- def test_inner_fit(self):
- image = np.zeros((100, 100, 3))
- target_H, target_W = 50, 100
- resize_mode = external_code.ResizeMode.INNER_FIT
- result = external_code.pixel_perfect_resolution(image, target_H, target_W, resize_mode)
- expected = 100 # manually computed expected result
- self.assertEqual(result, expected)
-
-
-class TestGetAllUnitsFrom(unittest.TestCase):
- def test_none(self):
- self.assertListEqual(external_code.get_all_units_from([None]), [])
-
- def test_bool(self):
- self.assertListEqual(external_code.get_all_units_from([True]), [])
-
- def test_inheritance(self):
- class Foo(external_code.ControlNetUnit):
- def __init__(self):
- super().__init__(self)
- self.bar = 'a'
-
- foo = Foo()
- self.assertListEqual(external_code.get_all_units_from([foo]), [foo])
-
- def test_dict(self):
- units = external_code.get_all_units_from([{}])
- self.assertGreater(len(units), 0)
- self.assertIsInstance(units[0], external_code.ControlNetUnit)
-
- def test_unitlike(self):
- class Foo(object):
- """ bar """
-
- foo = Foo()
- for key in vars(external_code.ControlNetUnit()).keys():
- setattr(foo, key, True)
- setattr(foo, 'bar', False)
- self.assertListEqual(external_code.get_all_units_from([foo]), [foo])
-
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
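Annotation: the two pixel-perfect tests pin down the resolution estimate — compute the per-axis scale factors target/source, take the min for OUTER_FIT (letterbox) or the max for INNER_FIT (crop), and scale the source's short side. A worked sketch matching the expected values (a reconstruction, not the extension's pixel_perfect_resolution):

def pixel_perfect_sketch(h, w, target_h, target_w, inner_fit):
    k = (max if inner_fit else min)(target_h / h, target_w / w)
    return int(round(k * min(h, w)))

assert pixel_perfect_sketch(100, 100, 50, 100, inner_fit=False) == 50   # OUTER_FIT
assert pixel_perfect_sketch(100, 100, 50, 100, inner_fit=True) == 100   # INNER_FIT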
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/importlib_reload_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/importlib_reload_test.py
deleted file mode 100644
index 3c81a99..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/importlib_reload_test.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import unittest
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from scripts import external_code
-
-
-class TestImportlibReload(unittest.TestCase):
- def setUp(self):
- self.ControlNetUnit = external_code.ControlNetUnit
-
- def test_reload_does_not_redefine(self):
- importlib.reload(external_code)
- NewControlNetUnit = external_code.ControlNetUnit
- self.assertEqual(self.ControlNetUnit, NewControlNetUnit)
-
- def test_force_import_does_not_redefine(self):
- external_code_copy = importlib.import_module('extensions.sd-webui-controlnet.scripts.external_code', 'external_code')
- self.assertEqual(self.ControlNetUnit, external_code_copy.ControlNetUnit)
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/script_args_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/script_args_test.py
deleted file mode 100644
index de2ab29..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/external_code_api/script_args_test.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import unittest
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from scripts import external_code
-
-
-class TestGetAllUnitsFrom(unittest.TestCase):
- def setUp(self):
- self.control_unit = {
- "module": "none",
- "model": utils.get_model(),
- "image": utils.readImage("test/test_files/img2img_basic.png"),
- "resize_mode": 1,
- "low_vram": False,
- "processor_res": 64,
- "control_mode": external_code.ControlMode.BALANCED.value,
- }
- self.object_unit = external_code.ControlNetUnit(**self.control_unit)
-
- def test_empty_converts(self):
- script_args = []
- units = external_code.get_all_units_from(script_args)
- self.assertListEqual(units, [])
-
- def test_object_forwards(self):
- script_args = [self.object_unit]
- units = external_code.get_all_units_from(script_args)
- self.assertListEqual(units, [self.object_unit])
-
-
-if __name__ == '__main__':
- unittest.main()
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_ski_output.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_ski_output.png
deleted file mode 100644
index 53803ae..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_ski_output.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_all_output.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_all_output.png
deleted file mode 100644
index 94f93cc..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_all_output.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_dw_all_output.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_dw_all_output.png
deleted file mode 100644
index 2bcefcc..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_dw_all_output.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_face_output.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_face_output.png
deleted file mode 100644
index af8f89e..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_face_output.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_hand_output.png b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_hand_output.png
deleted file mode 100644
index ec520a3..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/expected_woman_hand_output.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/ski.jpg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/ski.jpg
deleted file mode 100644
index 6624841..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/ski.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/woman.jpeg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/woman.jpeg
deleted file mode 100644
index 8ee7bcc..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/images/woman.jpeg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/utils.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/utils.py
deleted file mode 100644
index b0595a0..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/utils.py
+++ /dev/null
@@ -1,40 +0,0 @@
-import os
-import sys
-import cv2
-from base64 import b64encode
-
-import requests
-
-BASE_URL = "http://localhost:7860"
-
-
-def setup_test_env():
- ext_root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
- if ext_root not in sys.path:
- sys.path.append(ext_root)
-
-
-def readImage(path):
- img = cv2.imread(path)
- retval, buffer = cv2.imencode('.jpg', img)
- b64img = b64encode(buffer).decode("utf-8")
- return b64img
-
-
-def get_model():
- r = requests.get(BASE_URL+"/controlnet/model_list")
- result = r.json()
- if "model_list" in result:
- result = result["model_list"]
- for item in result:
- print("Using model: ", item)
- return item
- return "None"
-
-
-def get_modules():
- return requests.get(f"{BASE_URL}/controlnet/module_list").json()
-
-
-def detect(json):
- return requests.post(BASE_URL+"/controlnet/detect", json=json)
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/__init__.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/control_types_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/control_types_test.py
deleted file mode 100644
index 39a9357..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/control_types_test.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import unittest
-import importlib
-import requests
-
-utils = importlib.import_module(
- 'extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-from scripts.processor import preprocessor_filters
-
-
-class TestControlTypes(unittest.TestCase):
- def test_fetching_control_types(self):
- response = requests.get(utils.BASE_URL + "/controlnet/control_types")
- self.assertEqual(response.status_code, 200)
- result = response.json()
- self.assertIn('control_types', result)
-
- for control_type in preprocessor_filters:
- self.assertIn(control_type, result['control_types'])
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/detect_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/detect_test.py
deleted file mode 100644
index d5b02e3..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/detect_test.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import requests
-import unittest
-import importlib
-utils = importlib.import_module(
- 'extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-
-
-class TestDetectEndpointWorking(unittest.TestCase):
- def setUp(self):
- self.base_detect_args = {
- "controlnet_module": "canny",
- "controlnet_input_images": [utils.readImage("test/test_files/img2img_basic.png")],
- "controlnet_processor_res": 512,
- "controlnet_threshold_a": 0,
- "controlnet_threshold_b": 0,
- }
-
- def test_detect_with_invalid_module_performed(self):
- detect_args = self.base_detect_args.copy()
- detect_args.update({
- "controlnet_module": "INVALID",
- })
- self.assertEqual(utils.detect(detect_args).status_code, 422)
-
- def test_detect_with_no_input_images_performed(self):
- detect_args = self.base_detect_args.copy()
- detect_args.update({
- "controlnet_input_images": [],
- })
- self.assertEqual(utils.detect(detect_args).status_code, 422)
-
- def test_detect_with_valid_args_performed(self):
- detect_args = self.base_detect_args
- response = utils.detect(detect_args)
-
- self.assertEqual(response.status_code, 200)
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/img2img_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/img2img_test.py
deleted file mode 100644
index bb1df56..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/img2img_test.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import unittest
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-import requests
-
-
-
-class TestImg2ImgWorkingBase(unittest.TestCase):
- def setUp(self):
- controlnet_unit = {
- "module": "none",
- "model": utils.get_model(),
- "weight": 1.0,
- "input_image": utils.readImage("test/test_files/img2img_basic.png"),
- "mask": utils.readImage("test/test_files/img2img_basic.png"),
- "resize_mode": 1,
- "lowvram": False,
- "processor_res": 64,
- "threshold_a": 64,
- "threshold_b": 64,
- "guidance_start": 0.0,
- "guidance_end": 1.0,
- "control_mode": 0,
- }
- setup_args = {"alwayson_scripts":{"ControlNet":{"args": ([controlnet_unit] * getattr(self, 'units_count', 1))}}}
- self.setup_route(setup_args)
-
- def setup_route(self, setup_args):
- self.url_img2img = "http://localhost:7860/sdapi/v1/img2img"
- self.simple_img2img = {
- "init_images": [utils.readImage("test/test_files/img2img_basic.png")],
- "resize_mode": 0,
- "denoising_strength": 0.75,
- "image_cfg_scale": 0,
- "mask_blur": 4,
- "inpainting_fill": 0,
- "inpaint_full_res": True,
- "inpaint_full_res_padding": 0,
- "inpainting_mask_invert": 0,
- "initial_noise_multiplier": 0,
- "prompt": "example prompt",
- "styles": [],
- "seed": -1,
- "subseed": -1,
- "subseed_strength": 0,
- "seed_resize_from_h": -1,
- "seed_resize_from_w": -1,
- "sampler_name": "Euler a",
- "batch_size": 1,
- "n_iter": 1,
- "steps": 3,
- "cfg_scale": 7,
- "width": 64,
- "height": 64,
- "restore_faces": False,
- "tiling": False,
- "do_not_save_samples": False,
- "do_not_save_grid": False,
- "negative_prompt": "",
- "eta": 0,
- "s_churn": 0,
- "s_tmax": 0,
- "s_tmin": 0,
- "s_noise": 1,
- "override_settings": {},
- "override_settings_restore_afterwards": True,
- "sampler_index": "Euler a",
- "include_init_images": False,
- "send_images": True,
- "save_images": False,
- "alwayson_scripts": {}
- }
- self.simple_img2img.update(setup_args)
-
- def assert_status_ok(self):
- self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200)
- stderr = ""
- with open('test/stderr.txt') as f:
- stderr = f.read().lower()
- with open('test/stderr.txt', 'w') as f:
- # clear stderr file so we can easily parse the next test
- f.write("")
- self.assertFalse('error' in stderr, "Errors in stderr: \n" + stderr)
-
- def test_img2img_simple_performed(self):
- self.assert_status_ok()
-
- def test_img2img_alwayson_scripts_default_units(self):
- self.units_count = 0
- self.setUp()
- self.assert_status_ok()
-
- def test_img2img_default_params(self):
- self.simple_img2img["alwayson_scripts"]["ControlNet"]["args"] = [{
- "input_image": utils.readImage("test/test_files/img2img_basic.png"),
- "model": utils.get_model(),
- }]
- self.assert_status_ok()
-
-if __name__ == "__main__":
- unittest.main()
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/txt2img_test.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/txt2img_test.py
deleted file mode 100644
index 0d98507..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/tests/web_api/txt2img_test.py
+++ /dev/null
@@ -1,145 +0,0 @@
-import unittest
-import importlib
-utils = importlib.import_module('extensions.sd-webui-controlnet.tests.utils', 'utils')
-utils.setup_test_env()
-import requests
-
-
-class TestAlwaysonTxt2ImgWorking(unittest.TestCase):
- def setUp(self):
- controlnet_unit = {
- "enabled": True,
- "module": "none",
- "model": utils.get_model(),
- "weight": 1.0,
- "image": utils.readImage("test/test_files/img2img_basic.png"),
- "mask": utils.readImage("test/test_files/img2img_basic.png"),
- "resize_mode": 1,
- "lowvram": False,
- "processor_res": 64,
- "threshold_a": 64,
- "threshold_b": 64,
- "guidance_start": 0.0,
- "guidance_end": 1.0,
- "control_mode": 0,
- "pixel_perfect": False
- }
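-        # Tests that need a different number of ControlNet units set
-        # self.units_count and call setUp() again before posting the request.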
- setup_args = [controlnet_unit] * getattr(self, 'units_count', 1)
- self.setup_route(setup_args)
-
- def setup_route(self, setup_args):
- self.url_txt2img = "http://localhost:7860/sdapi/v1/txt2img"
- self.simple_txt2img = {
- "enable_hr": False,
- "denoising_strength": 0,
- "firstphase_width": 0,
- "firstphase_height": 0,
- "prompt": "example prompt",
- "styles": [],
- "seed": -1,
- "subseed": -1,
- "subseed_strength": 0,
- "seed_resize_from_h": -1,
- "seed_resize_from_w": -1,
- "batch_size": 1,
- "n_iter": 1,
- "steps": 3,
- "cfg_scale": 7,
- "width": 64,
- "height": 64,
- "restore_faces": False,
- "tiling": False,
- "negative_prompt": "",
- "eta": 0,
- "s_churn": 0,
- "s_tmax": 0,
- "s_tmin": 0,
- "s_noise": 1,
- "sampler_index": "Euler a",
- "alwayson_scripts": {}
- }
- self.setup_controlnet_params(setup_args)
-
- def setup_controlnet_params(self, setup_args):
- self.simple_txt2img["alwayson_scripts"]["ControlNet"] = {
- "args": setup_args
- }
-
- def assert_status_ok(self, msg=None):
- self.assertEqual(requests.post(self.url_txt2img, json=self.simple_txt2img).status_code, 200, msg)
- stderr = ""
- with open('test/stderr.txt') as f:
- stderr = f.read().lower()
- with open('test/stderr.txt', 'w') as f:
- # clear stderr file so that we can easily parse the next test
- f.write("")
-        self.assertNotIn('error', stderr, "Errors in stderr: \n" + stderr)
-
- def test_txt2img_simple_performed(self):
- self.assert_status_ok()
-
- def test_txt2img_alwayson_scripts_default_units(self):
- self.units_count = 0
- self.setUp()
- self.assert_status_ok()
-
- def test_txt2img_multiple_batches_performed(self):
- self.simple_txt2img["n_iter"] = 2
- self.assert_status_ok()
-
- def test_txt2img_batch_performed(self):
- self.simple_txt2img["batch_size"] = 2
- self.assert_status_ok()
-
- def test_txt2img_2_units(self):
- self.units_count = 2
- self.setUp()
- self.assert_status_ok()
-
- def test_txt2img_8_units(self):
- self.units_count = 8
- self.setUp()
- self.assert_status_ok()
-
- def test_txt2img_default_params(self):
- self.simple_txt2img["alwayson_scripts"]["ControlNet"]["args"] = [
- {
- "input_image": utils.readImage("test/test_files/img2img_basic.png"),
- "model": utils.get_model(),
- }
- ]
-
- self.assert_status_ok()
-
- def test_call_with_preprocessors(self):
- available_modules = utils.get_modules()
- available_modules_list = available_modules.get('module_list', [])
- available_modules_detail = available_modules.get('module_detail', {})
- for module in ['depth', 'openpose_full']:
- assert module in available_modules_list, f'Failed to find {module}.'
- assert module in available_modules_detail, f"Failed to find {module}'s detail."
- with self.subTest(module=module):
- self.simple_txt2img["alwayson_scripts"]["ControlNet"]["args"] = [
- {
- "input_image": utils.readImage("test/test_files/img2img_basic.png"),
- "model": utils.get_model(),
- "module": module
- }
- ]
- self.assert_status_ok(f'Running preprocessor module: {module}')
-
- def test_call_invalid_params(self):
- for param in ('processor_res', 'threshold_a', 'threshold_b'):
- with self.subTest(param=param):
- self.simple_txt2img["alwayson_scripts"]["ControlNet"]["args"] = [
- {
- "input_image": utils.readImage("test/test_files/img2img_basic.png"),
- "model": utils.get_model(),
- param: -1,
- }
- ]
- self.assert_status_ok(f'Run with {param} = -1.')
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/README.md b/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/README.md
deleted file mode 100644
index c1c6eac..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Web Tests
-Web tests are Selenium-based browser interaction tests that fully simulate
-actual user behaviour.
-
-# Preparation
-- Have Google Chrome (any version) installed.
-- Install the following Python packages with `pip`:
- - `selenium`
- - `webdriver-manager`
-
-# Run Tests
-- Have the WebUI running on `localhost:7860` with the ControlNet extension installed.
-- Run `python main.py --overwrite_expectation` on the first run to record a
-baseline.
-- Run `python main.py` later to verify the baseline still holds.
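-- Extra command-line arguments are forwarded to `unittest`, so (assuming the
-test class names in `main.py` are unchanged) a single suite can be run with,
-e.g., `python main.py SeleniumTxt2ImgTest`.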
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/images/ski.jpg b/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/images/ski.jpg
deleted file mode 100644
index 6624841..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/images/ski.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/main.py b/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/main.py
deleted file mode 100644
index ebc24c1..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-controlnet/web_tests/main.py
+++ /dev/null
@@ -1,393 +0,0 @@
-import argparse
-import unittest
-import os
-import sys
-import time
-import datetime
-from enum import Enum
-from typing import List, Tuple
-
-import cv2
-import requests
-import numpy as np
-from selenium import webdriver
-from selenium.webdriver.common.by import By
-from selenium.webdriver.support.ui import WebDriverWait
-from selenium.webdriver.common.action_chains import ActionChains
-from selenium.webdriver.support import expected_conditions as EC
-from selenium.webdriver.remote.webelement import WebElement
-from webdriver_manager.chrome import ChromeDriverManager
-
-
-TIMEOUT = 20 # seconds
-CWD = os.getcwd()
-SKI_IMAGE = os.path.join(CWD, "images/ski.jpg")
-
-timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
-test_result_dir = os.path.join("results", f"test_result_{timestamp}")
-test_expectation_dir = "expectations"
-os.makedirs(test_result_dir, exist_ok=True)
-os.makedirs(test_expectation_dir, exist_ok=True)
-driver_path = ChromeDriverManager().install()
-
-
-class GenType(Enum):
- txt2img = "txt2img"
- img2img = "img2img"
-
- def _find_by_xpath(self, driver: webdriver.Chrome, xpath: str) -> "WebElement":
- return driver.find_element(By.XPATH, xpath)
-
- def tab(self, driver: webdriver.Chrome) -> "WebElement":
- return self._find_by_xpath(
- driver,
- f"//*[@id='tabs']/*[contains(@class, 'tab-nav')]//button[text()='{self.value}']",
- )
-
- def controlnet_panel(self, driver: webdriver.Chrome) -> "WebElement":
- return self._find_by_xpath(
- driver, f"//*[@id='tab_{self.value}']//*[@id='controlnet']"
- )
-
- def generate_button(self, driver: webdriver.Chrome) -> "WebElement":
- return self._find_by_xpath(driver, f"//*[@id='{self.value}_generate_box']")
-
- def prompt_textarea(self, driver: webdriver.Chrome) -> "WebElement":
- return self._find_by_xpath(driver, f"//*[@id='{self.value}_prompt']//textarea")
-
-
-class SeleniumTestCase(unittest.TestCase):
- def __init__(self, methodName: str = "runTest") -> None:
- super().__init__(methodName)
- self.driver = None
- self.gen_type = None
-
- def setUp(self) -> None:
- super().setUp()
- self.driver = webdriver.Chrome(driver_path)
- self.driver.get(webui_url)
- wait = WebDriverWait(self.driver, TIMEOUT)
- wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "#controlnet")))
- self.gen_type = GenType.txt2img
-
- def tearDown(self) -> None:
- self.driver.quit()
- super().tearDown()
-
- def select_gen_type(self, gen_type: GenType):
- gen_type.tab(self.driver).click()
- self.gen_type = gen_type
-
- def set_prompt(self, prompt: str):
- textarea = self.gen_type.prompt_textarea(self.driver)
- textarea.clear()
- textarea.send_keys(prompt)
-
- def expand_controlnet_panel(self):
- controlnet_panel = self.gen_type.controlnet_panel(self.driver)
- input_image_group = controlnet_panel.find_element(
- By.CSS_SELECTOR, ".cnet-input-image-group"
- )
- if not input_image_group.is_displayed():
- controlnet_panel.click()
-
- def enable_controlnet_unit(self):
- controlnet_panel = self.gen_type.controlnet_panel(self.driver)
- enable_checkbox = controlnet_panel.find_element(
- By.CSS_SELECTOR, ".cnet-unit-enabled input[type='checkbox']"
- )
- if not enable_checkbox.is_selected():
- enable_checkbox.click()
-
- def iterate_preprocessor_types(self, ignore_none: bool = True):
- dropdown = self.gen_type.controlnet_panel(self.driver).find_element(
- By.CSS_SELECTOR,
- f"#{self.gen_type.value}_controlnet_ControlNet-0_controlnet_preprocessor_dropdown",
- )
-
- index = 0
- while True:
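-            # Re-open the dropdown and re-locate its options on every pass;
-            # selecting an option rebuilds the list, so cached elements would go stale.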
- dropdown.click()
- options = dropdown.find_elements(
- By.XPATH, "//ul[contains(@class, 'options')]/li"
- )
- input_element = dropdown.find_element(By.CSS_SELECTOR, "input")
-
- if index >= len(options):
- return
-
- option = options[index]
- index += 1
-
- if "none" in option.text and ignore_none:
- continue
- option_text = option.text
- option.click()
-
- yield option_text
-
- def select_control_type(self, control_type: str):
- controlnet_panel = self.gen_type.controlnet_panel(self.driver)
- control_type_radio = controlnet_panel.find_element(
- By.CSS_SELECTOR, f'.controlnet_control_type input[value="{control_type}"]'
- )
- control_type_radio.click()
- time.sleep(3) # Wait for gradio backend to update model/module
-
- def set_seed(self, seed: int):
- seed_input = self.driver.find_element(
- By.CSS_SELECTOR, f"#{self.gen_type.value}_seed input[type='number']"
- )
- seed_input.clear()
- seed_input.send_keys(seed)
-
- def set_subseed(self, seed: int):
- show_button = self.driver.find_element(
- By.CSS_SELECTOR,
- f"#{self.gen_type.value}_subseed_show input[type='checkbox']",
- )
- if not show_button.is_selected():
- show_button.click()
-
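-        # The subseed input only becomes visible once 'show' is checked, so
-        # wait for it before typing.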
- subseed_locator = (
- By.CSS_SELECTOR,
- f"#{self.gen_type.value}_subseed input[type='number']",
- )
- WebDriverWait(self.driver, TIMEOUT).until(
- EC.visibility_of_element_located(subseed_locator)
- )
- subseed_input = self.driver.find_element(*subseed_locator)
- subseed_input.clear()
- subseed_input.send_keys(seed)
-
- def upload_controlnet_input(self, img_path: str):
- controlnet_panel = self.gen_type.controlnet_panel(self.driver)
- image_input = controlnet_panel.find_element(
- By.CSS_SELECTOR, '.cnet-input-image-group .cnet-image input[type="file"]'
- )
- image_input.send_keys(img_path)
-
- def upload_img2img_input(self, img_path: str):
- image_input = self.driver.find_element(
- By.CSS_SELECTOR, '#img2img_image input[type="file"]'
- )
- image_input.send_keys(img_path)
-
- def generate_image(self, name: str):
- self.gen_type.generate_button(self.driver).click()
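-        # Generation is considered finished once the progress bar has appeared
-        # and then disappeared again.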
- progress_bar_locator_visible = EC.visibility_of_element_located(
- (By.CSS_SELECTOR, f"#{self.gen_type.value}_results .progress")
- )
- WebDriverWait(self.driver, TIMEOUT).until(progress_bar_locator_visible)
- WebDriverWait(self.driver, TIMEOUT * 10).until_not(progress_bar_locator_visible)
- generated_imgs = self.driver.find_elements(
- By.CSS_SELECTOR,
- f"#{self.gen_type.value}_results #{self.gen_type.value}_gallery img",
- )
- for i, generated_img in enumerate(generated_imgs):
- # Use requests to get the image content
- img_content = requests.get(generated_img.get_attribute("src")).content
-
- # Save the image content to a file
- global overwrite_expectation
- dest_dir = (
- test_expectation_dir if overwrite_expectation else test_result_dir
- )
- img_file_name = f"{self.__class__.__name__}_{name}_{i}.png"
- with open(
- os.path.join(dest_dir, img_file_name),
- "wb",
- ) as img_file:
- img_file.write(img_content)
-
- if not overwrite_expectation:
- try:
- img1 = cv2.imread(os.path.join(test_expectation_dir, img_file_name))
- img2 = cv2.imread(os.path.join(test_result_dir, img_file_name))
- except Exception as e:
- self.assertTrue(False, f"Get exception reading imgs: {e}")
- continue
-
- self.expect_same_image(
- img1,
- img2,
- diff_img_path=os.path.join(
- test_result_dir, img_file_name.replace(".png", "_diff.png")
- ),
- )
-
- def expect_same_image(self, img1, img2, diff_img_path: str):
- # Calculate the difference between the two images
- diff = cv2.absdiff(img1, img2)
-
- # Set a threshold to highlight the different pixels
- threshold = 30
- diff_highlighted = np.where(diff > threshold, 255, 0).astype(np.uint8)
-
- # Assert that the two images are similar within a tolerance
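-        # (the loose rtol/atol absorb small per-pixel nondeterminism)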
- similar = np.allclose(img1, img2, rtol=0.5, atol=1)
- if not similar:
- # Save the diff_highlighted image to inspect the differences
- cv2.imwrite(diff_img_path, diff_highlighted)
-
- self.assertTrue(similar)
-
-
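-# Only the dict keys (control type names) are used by the tests; the values
-# note the preprocessor module each control type maps to in the UI.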
-simple_control_types = {
- "Canny": "canny",
- "Depth": "depth_midas",
- "Normal": "normal_bae",
- "OpenPose": "openpose_full",
- "MLSD": "mlsd",
- "Lineart": "lineart_standard (from white bg & black line)",
- "SoftEdge": "softedge_pidinet",
- "Scribble": "scribble_pidinet",
- "Seg": "seg_ofade20k",
- "Tile": "tile_resample",
-    # Shuffle and Reference are not stable and are expected to fail.
-    # The majority of pixels are the same, but some outlier pixels can differ a lot.
- "Shuffle": "shuffle",
- "Reference": "reference_only",
-}.keys()
-
-
-class SeleniumTxt2ImgTest(SeleniumTestCase):
- def setUp(self) -> None:
- super().setUp()
- self.select_gen_type(GenType.txt2img)
- self.set_seed(100)
- self.set_subseed(1000)
-
- def test_simple_control_types(self):
- """Test simple control types that only requires input image."""
- for control_type in simple_control_types:
- with self.subTest(control_type=control_type):
- self.expand_controlnet_panel()
- self.select_control_type(control_type)
- self.upload_controlnet_input(SKI_IMAGE)
- self.generate_image(f"{control_type}_ski")
-
-
-class SeleniumImg2ImgTest(SeleniumTestCase):
- def setUp(self) -> None:
- super().setUp()
- self.select_gen_type(GenType.img2img)
- self.set_seed(100)
- self.set_subseed(1000)
-
- def test_simple_control_types(self):
- """Test simple control types that only requires input image."""
- for control_type in simple_control_types:
- with self.subTest(control_type=control_type):
- self.expand_controlnet_panel()
- self.select_control_type(control_type)
- self.upload_img2img_input(SKI_IMAGE)
- self.upload_controlnet_input(SKI_IMAGE)
- self.generate_image(f"img2img_{control_type}_ski")
-
-
-class SeleniumInpaintTest(SeleniumTestCase):
- def setUp(self) -> None:
- super().setUp()
-
- def draw_inpaint_mask(self, target_canvas):
- size = target_canvas.size
- width = size["width"]
- height = size["height"]
- brush_radius = 5
- repeat = int(width * 0.1 / brush_radius)
-
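-        # Relative mouse offsets: step right, stroke down, step right, stroke up,
-        # repeated so that a zig-zag mask band is painted onto the canvas.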
- trace: List[Tuple[int, int]] = [
- (brush_radius, 0),
- (0, height * 0.2),
- (brush_radius, 0),
- (0, -height * 0.2),
- ] * repeat
-
- actions = ActionChains(self.driver)
- actions.move_to_element(target_canvas) # move to the canvas
- actions.move_by_offset(*trace[0])
- actions.click_and_hold() # click and hold the left mouse button down
- for stop_point in trace[1:]:
- actions.move_by_offset(*stop_point)
- actions.release() # release the left mouse button
- actions.perform() # perform the action chain
-
- def draw_cn_mask(self):
- canvas = self.gen_type.controlnet_panel(self.driver).find_element(
- By.CSS_SELECTOR, ".cnet-input-image-group .cnet-image canvas"
- )
- self.draw_inpaint_mask(canvas)
-
- def draw_a1111_mask(self):
- canvas = self.driver.find_element(By.CSS_SELECTOR, "#img2maskimg canvas")
- self.draw_inpaint_mask(canvas)
-
- def test_txt2img_inpaint(self):
- self.select_gen_type(GenType.txt2img)
- self.expand_controlnet_panel()
- self.select_control_type("Inpaint")
- self.upload_controlnet_input(SKI_IMAGE)
- self.draw_cn_mask()
-
- self.set_seed(100)
- self.set_subseed(1000)
-
- for option in self.iterate_preprocessor_types():
- with self.subTest(option=option):
- self.generate_image(f"{option}_txt2img_ski")
-
- def test_img2img_inpaint(self):
- # Note: img2img inpaint can only use A1111 mask.
- # ControlNet input is disabled in img2img inpaint.
- self._test_img2img_inpaint(use_cn_mask=False, use_a1111_mask=True)
-
- def _test_img2img_inpaint(self, use_cn_mask: bool, use_a1111_mask: bool):
- self.select_gen_type(GenType.img2img)
- self.expand_controlnet_panel()
- self.select_control_type("Inpaint")
- self.upload_img2img_input(SKI_IMAGE)
- # Send to inpaint
- self.driver.find_element(
- By.XPATH, f"//*[@id='img2img_copy_to_img2img']//button[text()='inpaint']"
- ).click()
- time.sleep(3)
- # Select latent noise to make inpaint effect more visible.
- self.driver.find_element(
- By.XPATH,
- f"//input[@name='radio-img2img_inpainting_fill' and @value='latent noise']",
- ).click()
- self.set_prompt("(coca-cola:2.0)")
- self.enable_controlnet_unit()
- self.upload_controlnet_input(SKI_IMAGE)
-
- self.set_seed(100)
- self.set_subseed(1000)
-
- prefix = ""
- if use_cn_mask:
- self.draw_cn_mask()
- prefix += "controlnet"
-
- if use_a1111_mask:
- self.draw_a1111_mask()
- prefix += "A1111"
-
- for option in self.iterate_preprocessor_types():
- with self.subTest(option=option, mask_prefix=prefix):
- self.generate_image(f"{option}_{prefix}_img2img_ski")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Your script description.")
- parser.add_argument(
- "--overwrite_expectation", action="store_true", help="overwrite expectation"
- )
- parser.add_argument(
- "--target_url", type=str, default="http://localhost:7860", help="WebUI URL"
- )
- args, unknown_args = parser.parse_known_args()
- overwrite_expectation = args.overwrite_expectation
- webui_url = args.target_url
-
- sys.argv = sys.argv[:1] + unknown_args
- unittest.main()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/FUNDING.yml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/FUNDING.yml
deleted file mode 100644
index e707ddb..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/FUNDING.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-# These are supported funding model platforms
-
-github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
-patreon: deforum
-open_collective: # Replace with a single Open Collective username
-ko_fi: # Replace with a single Ko-fi username
-tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
-community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
-liberapay: # Replace with a single Liberapay username
-issuehunt: # Replace with a single IssueHunt username
-otechie: # Replace with a single Otechie username
-lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
-custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/bug_report.yml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/bug_report.yml
deleted file mode 100644
index 517bf43..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/bug_report.yml
+++ /dev/null
@@ -1,105 +0,0 @@
-name: Bug Report
-description: Create a bug report for the Deforum extension
-title: "[Bug]: "
-labels: ["bug"]
-
-body:
- - type: checkboxes
- attributes:
- label: Have you read the latest version of the FAQ?
- description: Please visit the page called FAQ & Troubleshooting on the Deforum wiki in this repository and see if your problem has already been described there.
- options:
- - label: I have visited the FAQ page right now and my issue is not present there
- required: true
- - type: checkboxes
- attributes:
- label: Is there an existing issue for this?
- description: Please search to see if an issue already exists for the bug you encountered (including the closed issues).
- options:
- - label: I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
- required: true
- - type: checkboxes
- attributes:
- label: Are you using the latest version of the Deforum extension?
-      description: Please check if your Deforum is based on the latest repo commit (git log) or update it through the 'Extensions' tab and check if the issue still persists. If it does, check this box.
- options:
-          - label: I have Deforum updated to the latest version and I still have the issue.
- required: true
- - type: markdown
- attributes:
- value: |
-        *Please fill this form with as much information as possible; don't forget to fill in "What OS..." and provide screenshots if possible*
- - type: markdown
- attributes:
- value: |
-        **Forewarning:** *if you don't provide the full crash log, your issue will be discarded*
- - type: textarea
- id: what-did
- attributes:
- label: What happened?
- description: Tell us what happened in a very clear and simple way
- validations:
- required: true
- - type: textarea
- id: steps
- attributes:
- label: Steps to reproduce the problem
-      description: Please provide us with precise step-by-step information on how to reproduce the bug
- value: |
- 1. Go to ....
- 2. Press ....
- 3. ...
- validations:
- required: true
- - type: textarea
- id: what-should
- attributes:
- label: What should have happened/how would you fix it?
-      description: Tell us what you think the normal behavior should be, or share any ideas on how to solve it
- - type: textarea
- id: what-torch
- attributes:
- label: Torch version
- description: Which Torch version your WebUI is working with. You can find it by looking at the bottom of the page.
- validations:
- required: true
- - type: dropdown
- id: where
- attributes:
- label: On which platform are you launching the webui with the extension?
- multiple: true
- options:
- - Local PC setup (Windows)
- - Local PC setup (Linux)
- - Local PC setup (Mac)
- - Google Colab (The Last Ben's)
- - Google Colab (Other)
- - Cloud server (Linux)
- - Other (please specify in "additional information")
- - type: textarea
- id: deforumsettings
- attributes:
- label: Deforum settings
-      description: Send a link to the settings file you used, or to the latest one generated in the 'outputs/img2img-images/Deforum/' folder (ideally, upload it to GitHub Gists).
- validations:
- required: true
- - type: textarea
- id: customsettings
- attributes:
- label: Webui core settings
-      description: Send a link to the ui-config.json file in the core 'stable-diffusion-webui' folder. Note that if you have 'With img2img, do exactly the amount of steps the slider specified' checked, your issue will be discarded.
- validations:
- required: true
- - type: textarea
- id: logs
- attributes:
- label: Console logs
-      description: This is the most important part, and the one most users get wrong on their first report! Please provide the **full** cmd/terminal logs from the moment you started the webui (i.e. clicked the launch file or started it from cmd) to the moment your bug happened.
- render: Shell
- validations:
- required: true
- - type: textarea
- id: misc
- attributes:
- label: Additional information
- description: Any relevant additional info or context.
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/config.yml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/config.yml
deleted file mode 100644
index 3fb606d..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/config.yml
+++ /dev/null
@@ -1,8 +0,0 @@
-blank_issues_enabled: false
-contact_links:
- - name: Deforum Github discussions
- url: https://github.com/deforum-art/deforum-for-automatic1111-webui/discussions
-    about: Please ask and answer questions here. If you want to complain about something, don't try to circumvent issue filing by starting a discussion here 🙃
- - name: Deforum Discord
- url: https://discord.gg/deforum
- about: Here is our main community where we chat, discuss development and share experiments and results
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/feature_request.yml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/feature_request.yml
deleted file mode 100644
index 3f4bd7c..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/ISSUE_TEMPLATE/feature_request.yml
+++ /dev/null
@@ -1,46 +0,0 @@
-name: Feature request
-description: Suggest an idea for the Deforum extension
-title: "[Feature Request]: "
-labels: ["enhancement"]
-
-body:
- - type: checkboxes
- attributes:
- label: Is there an existing issue for this?
- description: Please search to see if an issue already exists for the feature you want, and that it's not implemented in a recent build/commit.
- options:
- - label: I have searched the existing issues and checked the recent builds/commits
- required: true
- - type: markdown
- attributes:
- value: |
- *Please fill this form with as much information as possible, provide screenshots and/or illustrations of the feature if possible*
- - type: textarea
- id: feature
- attributes:
-      label: What would your feature do?
- description: Tell us about your feature in a very clear and simple way, and what problem it would solve
- validations:
- required: true
- - type: textarea
- id: workflow
- attributes:
- label: Proposed workflow
-      description: Please provide us with step-by-step information on how you'd like the feature to be accessed and used
- value: |
- 1. Go to ....
- 2. Press ....
- 3. ...
- validations:
- required: true
- - type: textarea
- id: misc
- attributes:
- label: Additional information
- description: Add any other context or screenshots about the feature request here.
- - type: textarea
- attributes:
- label: Are you going to help adding it?
-      description: Do you want to participate in Deforum development and bring the desired feature sooner? Let us know if you are willing to add it; ideally, leave your Discord handle here so we can contact you for a less formal conversation. Our community is welcoming and ready to provide you with any information on the project structure or how the code works. Keep in mind, however, that if you do not want to implement the feature yourself, you will have to wait until the team picks up your issue.
- validations:
- required: true
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/scripts/issue_checker.py b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/scripts/issue_checker.py
deleted file mode 100644
index 4939ac8..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/scripts/issue_checker.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (C) 2023 Deforum LLC
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, version 3 of the License.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-# Contact the authors: https://deforum.github.io/
-
-import os
-import re
-from github import Github
-
-# Get GitHub token from environment variables
-token = os.environ['GITHUB_TOKEN']
-g = Github(token)
-
-# Get the current repository
-print(f"Repo is {os.environ['GITHUB_REPOSITORY']}")
-repo = g.get_repo(os.environ['GITHUB_REPOSITORY'])
-
-# Get the issue number from the event payload
-#issue_number = int(os.environ['ISSUE_NUMBER'])
-
-for issue in repo.get_issues():
- print(f"Processing issue №{issue.number}")
- if issue.pull_request:
- continue
-
- # Get the issue object
- #issue = repo.get_issue(issue_number)
-
- # Define the keywords to search for in the issue
- keywords = ['Python', 'Commit hash', 'Launching Web UI with arguments', 'Model loaded', 'deforum']
-
- # Check if ALL of the keywords are present in the issue
- def check_keywords(issue_body, keywords):
- for keyword in keywords:
- if not re.search(r'\b' + re.escape(keyword) + r'\b', issue_body, re.IGNORECASE):
- return False
- return True
-
- # Check if the issue title has at least a specified number of words
- def check_title_word_count(issue_title, min_word_count):
- words = issue_title.replace("/", " ").replace("\\\\", " ").split()
- return len(words) >= min_word_count
-
- # Check if the issue title is concise
- def check_title_concise(issue_title, max_word_count):
- words = issue_title.replace("/", " ").replace("\\\\", " ").split()
- return len(words) <= max_word_count
-
- # Check if the commit ID is in the correct hash form
- def check_commit_id_format(issue_body):
- match = re.search(r'webui commit id - ([a-fA-F0-9]+|\[[a-fA-F0-9]+\])', issue_body)
- if not match:
- print('webui_commit_id not found')
- return False
- webui_commit_id = match.group(1)
- print(f'webui_commit_id {webui_commit_id}')
- webui_commit_id = webui_commit_id.replace("[", "").replace("]", "")
- if not (7 <= len(webui_commit_id) <= 40):
-            print('invalid length!')
- return False
- match = re.search(r'deforum exten commit id - ([a-fA-F0-9]+|\[[a-fA-F0-9]+\])', issue_body)
-        if not match:
- print('deforum commit id not found')
- return False
- t2v_commit_id = match.group(1)
- print(f'deforum_commit_id {t2v_commit_id}')
- t2v_commit_id = t2v_commit_id.replace("[", "").replace("]", "")
- if not (7 <= len(t2v_commit_id) <= 40):
-            print('invalid length!')
- return False
- return True
-
- # Only if a bug report
-    if '[Bug]' in issue.title and '[Feature Request]' not in issue.title:
- print('The issue is eligible')
- # Initialize an empty list to store error messages
- error_messages = []
-
- # Check for each condition and add the corresponding error message if the condition is not met
- if not check_keywords(issue.body, keywords):
- error_messages.append("Include **THE FULL LOG FROM THE START OF THE WEBUI** in the issue description.")
-
- if not check_title_word_count(issue.title, 3):
- error_messages.append("Make sure the issue title has at least 3 words.")
-
- if not check_title_concise(issue.title, 13):
- error_messages.append("The issue title should be concise and contain no more than 13 words.")
-
- # if not check_commit_id_format(issue.body):
- # error_messages.append("Provide a valid commit ID in the format 'commit id - [commit_hash]' **both** for the WebUI and the Extension.")
-
- # If there are any error messages, close the issue and send a comment with the error messages
- if error_messages:
- print('Invalid issue, closing')
- # Add the "not planned" label to the issue
- not_planned_label = repo.get_label("wrong format")
- issue.add_to_labels(not_planned_label)
-
- # Close the issue
- issue.edit(state='closed')
-
- # Generate the comment by concatenating the error messages
- comment = "This issue has been closed due to incorrect formatting. Please address the following mistakes and reopen the issue (click on the 'Reopen' button below):\n\n"
- comment += "\n".join(f"- {error_message}" for error_message in error_messages)
-
- # Add the comment to the issue
- issue.create_comment(comment)
- elif repo.get_label("wrong format") in issue.labels:
- print('Issue is fine')
- issue.edit(state='open')
- issue.delete_labels()
- bug_label = repo.get_label("bug")
- issue.add_to_labels(bug_label)
- comment = "Thanks for addressing your formatting mistakes. The issue has been reopened now."
- issue.create_comment(comment)
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/issue_checker.yaml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/issue_checker.yaml
deleted file mode 100644
index 8c39a05..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/issue_checker.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
-name: Issue Checker
-
-on:
- issues:
- types: [opened, reopened, edited]
-
-jobs:
- check_issue:
- runs-on: ubuntu-latest
- steps:
- - name: Checkout repository
- uses: actions/checkout@v3
- - name: Set up Python
- uses: actions/setup-python@v3
- with:
- python-version: '3.x'
- - name: Install dependencies
- run: pip install PyGithub
- - name: Check issue
- env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- ISSUE_NUMBER: ${{ github.event.number }}
- run: python .github/scripts/issue_checker.py
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/run_tests.yaml b/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/run_tests.yaml
deleted file mode 100644
index 8e635b3..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/.github/workflows/run_tests.yaml
+++ /dev/null
@@ -1,108 +0,0 @@
-name: Tests
-
-on:
- - push
- - pull_request
-
-jobs:
- test:
- name: tests on CPU with empty model
- runs-on: ubuntu-latest
- if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
- steps:
- - name: Checkout a1111
- uses: actions/checkout@v3
- with:
- repository: AUTOMATIC1111/stable-diffusion-webui
- ref: v1.5.1
- - name: Checkout Controlnet extension
- uses: actions/checkout@v3
- with:
- repository: Mikubill/sd-webui-controlnet
- path: extensions/sd-webui-controlnet
- - name: Checkout Deforum
- uses: actions/checkout@v3
- with:
- path: extensions/deforum
- - name: Set up Python 3.10
- uses: actions/setup-python@v4
- with:
- python-version: 3.10.6
- cache: pip
- cache-dependency-path: |
- **/requirements*txt
- launch.py
- - name: Install test dependencies
- run: pip install wait-for-it -r extensions/deforum/requirements-dev.txt
- env:
- PIP_DISABLE_PIP_VERSION_CHECK: "1"
- PIP_PROGRESS_BAR: "off"
- - name: Setup environment
- run: python launch.py --skip-torch-cuda-test --exit
- env:
- PIP_DISABLE_PIP_VERSION_CHECK: "1"
- PIP_PROGRESS_BAR: "off"
- TORCH_INDEX_URL: https://download.pytorch.org/whl/cpu
- WEBUI_LAUNCH_LIVE_OUTPUT: "1"
- PYTHONUNBUFFERED: "1"
- - name: Start test server
- run: >
- python -m coverage run
- --data-file=.coverage.server
- launch.py
- --skip-prepare-environment
- --skip-torch-cuda-test
- --test-server
- --do-not-download-clip
- --no-half
- --disable-opt-split-attention
- --use-cpu all
- --api-server-stop
- --deforum-api
- --api
- 2>&1 | tee serverlog.txt &
- - name: Run tests (with continue-on-error due to mysterious non-zero return code on success)
- continue-on-error: true
- id: runtests
- run: |
- wait-for-it --service 127.0.0.1:7860 -t 600
- cd extensions/deforum
- python -m coverage run --data-file=.coverage.client -m pytest -vv --junitxml=tests/results.xml tests
- - name: Check for test failures (necessary because of continue-on-error above)
- id: testresults
- uses: mavrosxristoforos/get-xml-info@1.1.0
- with:
- xml-file: 'extensions/deforum/tests/results.xml'
- xpath: '//testsuite/@failures'
- - name: Fail if there were test failures
- run: |
- echo "Test failures: ${{ steps.testresults.outputs.info }}"
- [ ${{ steps.testresults.outputs.info }} -eq 0 ]
- - name: Kill test server
- if: always()
- run: curl -vv -XPOST http://127.0.0.1:7860/sdapi/v1/server-stop && sleep 10
- - name: Show coverage
- run: |
- python -m coverage combine .coverage* extensions/deforum/.coverage*
- python -m coverage report -i
- python -m coverage html -i
- - name: Upload main app output
- uses: actions/upload-artifact@v3
- if: always()
- with:
- name: serverlog
- path: serverlog.txt
- - name: Upload coverage HTML
- uses: actions/upload-artifact@v3
- if: always()
- with:
- name: htmlcov
- path: htmlcov
- - name: Surface failing tests
- if: always()
- uses: pmeier/pytest-results-action@main
- with:
- path: extensions/deforum/tests/results.xml
- summary: true
- display-options: fEX
- fail-on-empty: true
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_postprocess_test.ambr b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_postprocess_test.ambr
deleted file mode 100644
index c4738d1..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_postprocess_test.ambr
+++ /dev/null
@@ -1,101 +0,0 @@
-# serializer version: 1
-# name: test_post_process_FILM
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_post_process_RIFE
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_post_process_UPSCALE
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_post_process_UPSCALE_FILM
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_test.ambr b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_test.ambr
deleted file mode 100644
index 1928bc3..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/__snapshots__/deforum_test.ambr
+++ /dev/null
@@ -1,101 +0,0 @@
-# serializer version: 1
-# name: test_3d_mode
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_simple_settings
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_with_hybrid_video
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 2; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 3; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: -1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 4; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 5; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 1; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 1; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera --neg
-
-
- '''
-# ---
-# name: test_with_parseq_inline
- '''
- 1
- 00:00:00,000 --> 00:00:00,050
- F#: 0; Cadence: false; Seed: 1; Angle: 0; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.002; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 55; SubSStrSch: 0; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 55; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: Parseq prompt! --neg neg parseq prompt!
-
- 2
- 00:00:00,050 --> 00:00:00,100
- F#: 1; Cadence: false; Seed: 56; Angle: 30.111; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 56; SubSStrSch: 0.100; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 55.100; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: Parseq prompt! --neg neg parseq prompt!
-
- 3
- 00:00:00,100 --> 00:00:00,150
- F#: 2; Cadence: false; Seed: 56; Angle: 14.643; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 56; SubSStrSch: 0.200; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 55.200; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: Parseq prompt! --neg neg parseq prompt!
-
- 4
- 00:00:00,150 --> 00:00:00,200
- F#: 3; Cadence: false; Seed: 56; Angle: -8.348; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 56; SubSStrSch: 0.300; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 55.300; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: Parseq prompt! --neg neg parseq prompt!
-
- 5
- 00:00:00,200 --> 00:00:00,250
- F#: 4; Cadence: false; Seed: 56; Angle: -27.050; Tr.C.X: 0.500; Tr.C.Y: 0.500; Zoom: 1.003; TrX: 0; TrY: 0; TrZ: 0; RotX: 0; RotY: 0; RotZ: 0; PerFlT: 0; PerFlP: 0; PerFlG: 0; PerFlFV: 53; Noise: 0.040; StrSch: 0.650; CtrstSch: 1; CFGSch: 7; P2PCfgSch: 1.500; SubSSch: 56; SubSStrSch: 0.400; CkptSch: model1.ckpt; StepsSch: 25; SeedSch: 55.400; SamplerSchedule: Euler a; ClipskipSchedule: 2; NoiseMultiplierSchedule: 1.050; MaskSchedule: {video_mask}; NoiseMaskSchedule: {video_mask}; AmountSchedule: 0.050; KernelSchedule: 5; SigmaSchedule: 1; ThresholdSchedule: 0; AspectRatioSchedule: 1; FieldOfViewSchedule: 70; NearSchedule: 200; CadenceFlowFactorSchedule: 1; RedoFlowFactorSchedule: 1; FarSchedule: 10000; HybridCompAlphaSchedule: 0.500; HybridCompMaskBlendAlphaSchedule: 0.500; HybridCompMaskContrastSchedule: 1; HybridCompMaskAutoContrastCutoffHighSchedule: 100; HybridCompMaskAutoContrastCutoffLowSchedule: 0; HybridFlowFactorSchedule: 1; Prompt: Parseq prompt! --neg neg parseq prompt!
-
-
- '''
-# ---
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/conftest.py b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/conftest.py
deleted file mode 100644
index d5cf84b..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/conftest.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (C) 2023 Deforum LLC
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, version 3 of the License.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-# Contact the authors: https://deforum.github.io/
-
-import pytest
-import subprocess
-import sys
-import os
-from subprocess import Popen, PIPE, STDOUT
-from pathlib import Path
-from tenacity import retry, stop_after_delay, wait_fixed
-import threading
-import requests
-
-def pytest_addoption(parser):
- parser.addoption("--start-server", action="store_true", help="start the server before the test run (if not specified, you must start the server manually)")
-
-@pytest.fixture
-def cmdopt(request):
- return request.config.getoption("--start-server")
-
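-# Poll the given URL until it responds with HTTP 200; tenacity retries
-# every 5 seconds and gives up after 60 seconds.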
-@retry(wait=wait_fixed(5), stop=stop_after_delay(60))
-def wait_for_service(url):
- response = requests.get(url, timeout=(5, 5))
- print(f"Waiting for server to respond 200 at {url} (response: {response.status_code})...")
- assert response.status_code == 200
-
-@pytest.fixture(scope="session", autouse=True)
-def start_server(request):
- if request.config.getoption("--start-server"):
-
- # Kick off server subprocess
- script_directory = os.path.dirname(__file__)
- a1111_directory = Path(script_directory).parent.parent.parent # sd-webui/extensions/deforum/tests/ -> sd-webui
- print(f"Starting server in {a1111_directory}...")
- proc = Popen(["python", "-m", "coverage", "run", "--data-file=.coverage.server", "launch.py",
- "--skip-prepare-environment", "--skip-torch-cuda-test", "--test-server", "--no-half",
- "--disable-opt-split-attention", "--use-cpu", "all", "--add-stop-route", "--api", "--deforum-api", "--listen"],
- cwd=a1111_directory,
- stdout=PIPE,
- stderr=STDOUT,
- universal_newlines=True)
-
- # ensure server is killed at the end of the test run
- request.addfinalizer(proc.kill)
-
- # Spin up separate thread to capture the server output to file and stdout
- def server_console_manager():
- with proc.stdout, open('serverlog.txt', 'ab') as logfile:
- for line in proc.stdout:
- sys.stdout.write(f"[SERVER LOG] {line}")
- sys.stdout.flush()
- logfile.write(line.encode('utf-8'))
- logfile.flush()
- proc.wait()
-
- threading.Thread(target=server_console_manager).start()
-
- # Wait for deforum API to respond
- wait_for_service('http://localhost:7860/deforum_api/jobs/')
-
- else:
- print("Checking server is already running / waiting for it to come up...")
- wait_for_service('http://localhost:7860/deforum_api/jobs/')
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_postprocess_test.py b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_postprocess_test.py
deleted file mode 100644
index 65e41cd..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_postprocess_test.py
+++ /dev/null
@@ -1,181 +0,0 @@
-# Copyright (C) 2023 Deforum LLC
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, version 3 of the License.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-# Contact the authors: https://deforum.github.io/
-
-import glob
-import json
-import os
-
-import pytest
-import requests
-from moviepy.editor import VideoFileClip
-from utils import API_BASE_URL, gpu_disabled, wait_for_job_to_complete
-
-from scripts.deforum_api_models import (DeforumJobPhase,
- DeforumJobStatusCategory)
-from scripts.deforum_helpers.subtitle_handler import get_user_values
-
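-# Each test below submits a batch with SRT subtitle output enabled, waits for the
-# job to complete, snapshot-tests the per-frame parameters recorded in the SRT
-# file, and then checks the format of the post-processed video.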
-@pytest.mark.skipif(gpu_disabled(), reason="requires GPU-enabled server")
-def test_post_process_FILM(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- deforum_settings["frame_interpolation_engine"] = "FILM"
- deforum_settings["frame_interpolation_x_amount"] = 3
- deforum_settings["frame_interpolation_slow_mo_enabled"] = False
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure interpolated video format is as expected
- video_filenames = glob.glob(f'{jobStatus.outdir}/*FILM*.mp4', recursive=True)
- assert len(video_filenames) == 1, "Expected one FILM video to be generated"
-
- interpolated_video_filename = video_filenames[0]
- clip = VideoFileClip(interpolated_video_filename)
- assert clip.fps == deforum_settings['fps'] * deforum_settings["frame_interpolation_x_amount"] , "Video FPS does not match input settings (fps * interpolation amount)"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] * deforum_settings["frame_interpolation_x_amount"], "Video frame count does not match input settings (including interpolation)"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
-@pytest.mark.skipif(gpu_disabled(), reason="requires GPU-enabled server")
-def test_post_process_RIFE(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- deforum_settings["frame_interpolation_engine"] = "RIFE v4.6"
- deforum_settings["frame_interpolation_x_amount"] = 3
- deforum_settings["frame_interpolation_slow_mo_enabled"] = False
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure interpolated video format is as expected
- video_filenames = glob.glob(f'{jobStatus.outdir}/*RIFE*.mp4', recursive=True)
- assert len(video_filenames) == 1, "Expected one RIFE video to be generated"
-
- interpolated_video_filename = video_filenames[0]
- clip = VideoFileClip(interpolated_video_filename)
- assert clip.fps == deforum_settings['fps'] * deforum_settings["frame_interpolation_x_amount"] , "Video FPS does not match input settings (fps * interpolation amount)"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] * deforum_settings["frame_interpolation_x_amount"], "Video frame count does not match input settings (including interpolation)"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
-@pytest.mark.skipif(gpu_disabled(), reason="requires GPU-enabled server")
-def test_post_process_UPSCALE(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- deforum_settings["r_upscale_video"] = True
- deforum_settings["r_upscale_factor"] = "x4"
- deforum_settings["r_upscale_model"] = "realesrgan-x4plus"
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure upscaled video format is as expected
- video_filenames = glob.glob(f'{jobStatus.outdir}/*Upscaled*.mp4', recursive=True)
- assert len(video_filenames) == 1, "Expected one upscaled video to be generated"
-
- upscaled_video_filename = video_filenames[0]
- clip = VideoFileClip(upscaled_video_filename)
- assert clip.fps == deforum_settings['fps'] , "Video FPS does not match input settings"
- assert clip.duration * clip.fps == deforum_settings['max_frames'], "Video frame count does not match input settings"
- assert clip.size == [deforum_settings['W']*4, deforum_settings['H']*4] , "Video dimensions are not as expected (including upscaling)"
-
-
-@pytest.mark.skipif(gpu_disabled(), reason="requires GPU-enabled server")
-def test_post_process_UPSCALE_FILM(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- deforum_settings["r_upscale_video"] = True
- deforum_settings["r_upscale_factor"] = "x4"
- deforum_settings["r_upscale_model"] = "realesrgan-x4plus"
- deforum_settings["frame_interpolation_engine"] = "FILM"
- deforum_settings["frame_interpolation_x_amount"] = 3
- deforum_settings["frame_interpolation_slow_mo_enabled"] = False
- deforum_settings["frame_interpolation_use_upscaled"] = True
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure interpolated video format is as expected
- video_filenames = glob.glob(f'{jobStatus.outdir}/*FILM*upscaled*.mp4', recursive=True)
- assert len(video_filenames) == 1, "Expected one upscaled video to be generated"
-
- interpolated_video_filename = video_filenames[0]
- clip = VideoFileClip(interpolated_video_filename)
- assert clip.fps == deforum_settings['fps'] * deforum_settings["frame_interpolation_x_amount"] , "Video FPS does not match input settings (fps * interpolation amount)"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] * deforum_settings["frame_interpolation_x_amount"], "Video frame count does not match input settings (including interpolation)"
- assert clip.size == [deforum_settings['W']*4, deforum_settings['H']*4] , "Video dimensions are not as expected (including upscaling)"
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_test.py b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_test.py
deleted file mode 100644
index 43e8c68..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/deforum_test.py
+++ /dev/null
@@ -1,182 +0,0 @@
-# Copyright (C) 2023 Deforum LLC
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, version 3 of the License.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-# Contact the authors: https://deforum.github.io/
-
-import os
-import json
-from scripts.deforum_api_models import DeforumJobStatus, DeforumJobStatusCategory, DeforumJobPhase
-import requests
-from moviepy.editor import VideoFileClip
-import glob
-from pathlib import Path
-from utils import wait_for_job_to_complete, wait_for_job_to_enter_phase, wait_for_job_to_enter_status, API_BASE_URL
-
-from scripts.deforum_helpers.subtitle_handler import get_user_values
-
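-# Functional tests: each submits settings to the Deforum API, waits for the job,
-# snapshot-tests the per-frame SRT parameters, and validates the output video.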
-def test_simple_settings(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure video format is as expected
- video_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.mp4")
- clip = VideoFileClip(video_filename)
- assert clip.fps == deforum_settings['fps'] , "Video FPS does not match input settings"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] , "Video frame count does not match input settings"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
-
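-# Submit a job, wait until it is actively generating, then cancel it via the
-# DELETE endpoint and confirm it finishes in the CANCELLED state.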
-def test_api_cancel_active_job():
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- data = json.load(settings_file)
- response = requests.post(API_BASE_URL+"/batches", json={"deforum_settings":[data]})
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- wait_for_job_to_enter_phase(job_id, DeforumJobPhase.GENERATING)
-
- cancel_url = API_BASE_URL+"/jobs/"+job_id
- response = requests.delete(cancel_url)
- response.raise_for_status()
- assert response.status_code == 200, f"DELETE request to {cancel_url} failed: {response.status_code}"
-
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.CANCELLED, f"Job {job_id} did not cancel: {jobStatus}"
-
-
-def test_3d_mode(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- deforum_settings['animation_mode'] = "3D"
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure video format is as expected
- video_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.mp4")
- clip = VideoFileClip(video_filename)
- assert clip.fps == deforum_settings['fps'] , "Video FPS does not match input settings"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] , "Video frame count does not match input settings"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
-
-def test_with_parseq_inline(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- with open('tests/testdata/parseq.json', 'r') as parseq_file:
- parseq_data = json.load(parseq_file)
-
- deforum_settings['parseq_manifest'] = json.dumps(parseq_data)
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure video format is as expected
- video_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.mp4")
- clip = VideoFileClip(video_filename)
- assert clip.fps == deforum_settings['fps'] , "Video FPS does not match input settings"
- assert clip.duration * clip.fps == deforum_settings['max_frames'] , "Video frame count does not match input settings"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
-
-# def test_with_parseq_url():
-
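-# Hybrid video: frames are extracted from a local init video (every 200th frame
-# of a 900-frame clip, i.e. 5 frames) and composited into the generation.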
-def test_with_hybrid_video(snapshot):
- with open('tests/testdata/simple.input_settings.txt', 'r') as settings_file:
- deforum_settings = json.load(settings_file)
-
- with open('tests/testdata/parseq.json', 'r') as parseq_file:
- parseq_data = json.load(parseq_file)
-
- init_video_local_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "testdata", "example_init_vid.mp4")
- deforum_settings['video_init_path'] = init_video_local_path
- deforum_settings['extract_nth_frame'] = 200 # input video is 900 frames, so we should keep 5 frames
- deforum_settings["hybrid_generate_inputframes"] = True
- deforum_settings["hybrid_composite"] = "Normal"
-
- response = requests.post(API_BASE_URL+"/batches", json={
- "deforum_settings":[deforum_settings],
- "options_overrides": {
- "deforum_save_gen_info_as_srt": True,
- "deforum_save_gen_info_as_srt_params": get_user_values(),
- }
- })
- response.raise_for_status()
- job_id = response.json()["job_ids"][0]
- jobStatus = wait_for_job_to_complete(job_id)
-
- assert jobStatus.status == DeforumJobStatusCategory.SUCCEEDED, f"Job {job_id} failed: {jobStatus}"
-
- # Ensure parameters used at each frame have not regressed
- srt_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.srt")
- with open(srt_filename, 'r') as srt_file:
- assert srt_file.read() == snapshot
-
- # Ensure video format is as expected
- video_filename = os.path.join(jobStatus.outdir, f"{jobStatus.timestring}.mp4")
- clip = VideoFileClip(video_filename)
- assert clip.fps == deforum_settings['fps'] , "Video FPS does not match input settings"
- assert clip.duration == 5 / deforum_settings['fps'], "Video duration does not match the 5 frames extracted from the input video"
- assert clip.size == [deforum_settings['W'], deforum_settings['H']] , "Video dimensions are not as expected"
-
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/example_init_vid.mp4 b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/example_init_vid.mp4
deleted file mode 100644
index b11552f..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/example_init_vid.mp4 and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/parseq.json b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/parseq.json
deleted file mode 100644
index 61d650a..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/parseq.json
+++ /dev/null
@@ -1,231 +0,0 @@
-{
- "meta": {
- "generated_by": "sd_parseq",
- "version": "0.1.94",
- "generated_at": "Tue, 01 Aug 2023 03:02:03 GMT",
- "doc_id": "doc-9d687a32-6bb4-41e3-a974-e8c5a6a18571",
- "version_id": "version-9762c158-b3e0-471e-ad6b-97259c8a5141"
- },
- "prompts": {
- "format": "v2",
- "enabled": true,
- "commonPrompt": {
- "name": "Common",
- "positive": "",
- "negative": "",
- "allFrames": true,
- "from": 0,
- "to": 119,
- "overlap": {
- "inFrames": 0,
- "outFrames": 0,
- "type": "none",
- "custom": "prompt_weight_1"
- }
- },
- "commonPromptPos": "append",
- "promptList": [
- {
- "name": "Prompt 1",
- "positive": "Parseq prompt!",
- "negative": "neg parseq prompt!",
- "allFrames": false,
- "from": 0,
- "to": 119,
- "overlap": {
- "inFrames": 0,
- "outFrames": 0,
- "type": "none",
- "custom": "prompt_weight_1"
- }
- }
- ]
- },
- "options": {
- "input_fps": 20,
- "bpm": 140,
- "output_fps": 20,
- "cc_window_width": 0,
- "cc_window_slide_rate": 1,
- "cc_use_input": false
- },
- "managedFields": [
- "seed",
- "angle"
- ],
- "displayedFields": [
- "seed",
- "angle"
- ],
- "keyframes": [
- {
- "frame": 0,
- "zoom": 1,
- "zoom_i": "C",
- "seed": 55,
- "noise": 0.04,
- "strength": 0.6,
- "prompt_weight_1": 1,
- "prompt_weight_1_i": "bez(0,0.6,1,0.4)",
- "prompt_weight_2": 0,
- "prompt_weight_2_i": "bez(0,0.6,1,0.4)",
- "angle": "",
- "angle_i": "sin(p=1b, a=45)"
- },
- {
- "frame": 10,
- "prompt_weight_1": 0,
- "prompt_weight_2": 1,
- "seed": 56
- }
- ],
- "timeSeries": [],
- "keyframeLock": "frames",
- "reverseRender": false,
- "rendered_frames": [
- {
- "frame": 0,
- "seed": 55,
- "angle": 0,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 55,
- "subseed_strength": 0,
- "seed_delta": 55,
- "seed_pc": 98.21428571428571,
- "angle_delta": 0,
- "angle_pc": 0
- },
- {
- "frame": 1,
- "seed": 55.1,
- "angle": 30.11087728614862,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.10000000000000142,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 98.39285714285715,
- "angle_delta": 30.11087728614862,
- "angle_pc": 67.28163648031882
- },
- {
- "frame": 2,
- "seed": 55.2,
- "angle": 44.7534852915723,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.20000000000000284,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 98.57142857142858,
- "angle_delta": 14.642608005423675,
- "angle_pc": 100
- },
- {
- "frame": 3,
- "seed": 55.3,
- "angle": 36.405764746872634,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.29999999999999716,
- "seed_delta": 0.09999999999999432,
- "seed_pc": 98.75,
- "angle_delta": -8.347720544699662,
- "angle_pc": 81.34732861516005
- },
- {
- "frame": 4,
- "seed": 55.4,
- "angle": 9.356026086799169,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.3999999999999986,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 98.92857142857142,
- "angle_delta": -27.049738660073466,
- "angle_pc": 20.905692653530693
- },
- {
- "frame": 5,
- "seed": 55.5,
- "angle": -22.500000000000004,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.5,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 99.10714285714286,
- "angle_delta": -31.856026086799172,
- "angle_pc": -50.275413978175834
- },
- {
- "frame": 6,
- "seed": 55.6,
- "angle": -42.79754323328191,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.6000000000000014,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 99.28571428571429,
- "angle_delta": -20.297543233281903,
- "angle_pc": -95.62952014676112
- },
- {
- "frame": 7,
- "seed": 55.7,
- "angle": -41.109545593917034,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.7000000000000028,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 99.46428571428572,
- "angle_delta": 1.6879976393648732,
- "angle_pc": -91.85775214172767
- },
- {
- "frame": 8,
- "seed": 55.8,
- "angle": -18.303148938411006,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.7999999999999972,
- "seed_delta": 0.09999999999999432,
- "seed_pc": 99.64285714285714,
- "angle_delta": 22.806396655506028,
- "angle_pc": -40.89770622145878
- },
- {
- "frame": 9,
- "seed": 55.9,
- "angle": 13.905764746872622,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0.8999999999999986,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 99.82142857142857,
- "angle_delta": 32.208913685283626,
- "angle_pc": 31.0719146369842
- },
- {
- "frame": 10,
- "seed": 56,
- "angle": 38.97114317029975,
- "deforum_prompt": "Parseq prompt! --neg neg parseq prompt!",
- "subseed": 56,
- "subseed_strength": 0,
- "seed_delta": 0.10000000000000142,
- "seed_pc": 100,
- "angle_delta": 25.065378423427127,
- "angle_pc": 87.07957138175908
- }
- ],
- "rendered_frames_meta": {
- "seed": {
- "max": 56,
- "min": 55,
- "isFlat": false
- },
- "angle": {
- "max": 44.7534852915723,
- "min": -42.79754323328191,
- "isFlat": false
- }
- }
-}
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/simple.input_settings.txt b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/simple.input_settings.txt
deleted file mode 100644
index 28a1004..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/testdata/simple.input_settings.txt
+++ /dev/null
@@ -1,255 +0,0 @@
-{
- "W": 512,
- "H": 512,
- "show_info_on_ui": true,
- "tiling": false,
- "restore_faces": false,
- "seed_resize_from_w": 0,
- "seed_resize_from_h": 0,
- "seed": 1,
- "sampler": "Euler a",
- "steps": 1,
- "batch_name": "Deforum_{timestring}",
- "seed_behavior": "iter",
- "seed_iter_N": 1,
- "use_init": false,
- "strength": 0.8,
- "strength_0_no_init": true,
- "init_image": "None",
- "use_mask": false,
- "use_alpha_as_mask": false,
- "mask_file": "",
- "invert_mask": false,
- "mask_contrast_adjust": 1.0,
- "mask_brightness_adjust": 1.0,
- "overlay_mask": true,
- "mask_overlay_blur": 4,
- "fill": 1,
- "full_res_mask": true,
- "full_res_mask_padding": 4,
- "reroll_blank_frames": "ignore",
- "reroll_patience": 10.0,
- "prompts": {
- "0": "tiny cute swamp bunny, highly detailed, intricate, ultra hd, sharp photo, crepuscular rays, in focus, by tomasz alen kopera",
- "30": "anthropomorphic clean cat, surrounded by fractals, epic angle and pose, symmetrical, 3d, depth of field, ruan jia and fenghua zhong",
- "60": "a beautiful coconut --neg photo, realistic",
- "90": "a beautiful durian, trending on Artstation"
- },
- "animation_prompts_positive": "",
- "animation_prompts_negative": "",
- "animation_mode": "2D",
- "max_frames": 5,
- "border": "replicate",
- "angle": "0:(0)",
- "zoom": "0:(1.0025+0.002*sin(1.25*3.14*t/30))",
- "translation_x": "0:(0)",
- "translation_y": "0:(0)",
- "translation_z": "0:(0)",
- "transform_center_x": "0:(0.5)",
- "transform_center_y": "0:(0.5)",
- "rotation_3d_x": "0:(0)",
- "rotation_3d_y": "0:(0)",
- "rotation_3d_z": "0:(0)",
- "enable_perspective_flip": false,
- "perspective_flip_theta": "0:(0)",
- "perspective_flip_phi": "0:(0)",
- "perspective_flip_gamma": "0:(0)",
- "perspective_flip_fv": "0:(53)",
- "noise_schedule": "0: (0.04)",
- "strength_schedule": "0: (0.65)",
- "contrast_schedule": "0: (1.0)",
- "cfg_scale_schedule": "0: (7)",
- "enable_steps_scheduling": false,
- "steps_schedule": "0: (25)",
- "fov_schedule": "0: (70)",
- "aspect_ratio_schedule": "0: (1)",
- "aspect_ratio_use_old_formula": false,
- "near_schedule": "0: (200)",
- "far_schedule": "0: (10000)",
- "seed_schedule": "0:(s), 1:(-1), \"max_f-2\":(-1), \"max_f-1\":(s)",
- "pix2pix_img_cfg_scale_schedule": "0:(1.5)",
- "enable_subseed_scheduling": false,
- "subseed_schedule": "0:(1)",
- "subseed_strength_schedule": "0:(0)",
- "enable_sampler_scheduling": false,
- "sampler_schedule": "0: (\"Euler a\")",
- "use_noise_mask": false,
- "mask_schedule": "0: (\"{video_mask}\")",
- "noise_mask_schedule": "0: (\"{video_mask}\")",
- "enable_checkpoint_scheduling": false,
- "checkpoint_schedule": "0: (\"model1.ckpt\"), 100: (\"model2.safetensors\")",
- "enable_clipskip_scheduling": false,
- "clipskip_schedule": "0: (2)",
- "enable_noise_multiplier_scheduling": false,
- "noise_multiplier_schedule": "0: (1.05)",
- "resume_from_timestring": false,
- "resume_timestring": "20230707221541",
- "enable_ddim_eta_scheduling": false,
- "ddim_eta_schedule": "0: (0)",
- "enable_ancestral_eta_scheduling": false,
- "ancestral_eta_schedule": "0: (1)",
- "amount_schedule": "0: (0.05)",
- "kernel_schedule": "0: (5)",
- "sigma_schedule": "0: (1.0)",
- "threshold_schedule": "0: (0.0)",
- "color_coherence": "LAB",
- "color_coherence_image_path": "",
- "color_coherence_video_every_N_frames": 1,
- "color_force_grayscale": false,
- "legacy_colormatch": false,
- "diffusion_cadence": 1,
- "optical_flow_cadence": "None",
- "cadence_flow_factor_schedule": "0: (1)",
- "optical_flow_redo_generation": "None",
- "redo_flow_factor_schedule": "0: (1)",
- "diffusion_redo": 0,
- "noise_type": "perlin",
- "perlin_octaves": 4,
- "perlin_persistence": 0.5,
- "use_depth_warping": true,
- "depth_algorithm": "Zoe",
- "midas_weight": 0.4,
- "padding_mode": "reflection",
- "sampling_mode": "bicubic",
- "save_depth_maps": false,
- "video_init_path": "",
- "extract_nth_frame": 1,
- "extract_from_frame": 0,
- "extract_to_frame": -1,
- "overwrite_extracted_frames": false,
- "use_mask_video": false,
- "video_mask_path": "",
- "hybrid_comp_alpha_schedule": "0:(0.5)",
- "hybrid_comp_mask_blend_alpha_schedule": "0:(0.5)",
- "hybrid_comp_mask_contrast_schedule": "0:(1)",
- "hybrid_comp_mask_auto_contrast_cutoff_high_schedule": "0:(100)",
- "hybrid_comp_mask_auto_contrast_cutoff_low_schedule": "0:(0)",
- "hybrid_flow_factor_schedule": "0:(1)",
- "hybrid_generate_inputframes": false,
- "hybrid_generate_human_masks": "None",
- "hybrid_use_first_frame_as_init_image": true,
- "hybrid_motion": "None",
- "hybrid_motion_use_prev_img": false,
- "hybrid_flow_consistency": false,
- "hybrid_consistency_blur": 2,
- "hybrid_flow_method": "RAFT",
- "hybrid_composite": "None",
- "hybrid_use_init_image": false,
- "hybrid_comp_mask_type": "None",
- "hybrid_comp_mask_inverse": false,
- "hybrid_comp_mask_equalize": "None",
- "hybrid_comp_mask_auto_contrast": true,
- "hybrid_comp_save_extra_frames": false,
- "parseq_manifest": "",
- "parseq_use_deltas": true,
- "use_looper": false,
- "init_images": "{\n \"0\": \"https://deforum.github.io/a1/Gi1.png\",\n \"max_f/4-5\": \"https://deforum.github.io/a1/Gi2.png\",\n \"max_f/2-10\": \"https://deforum.github.io/a1/Gi3.png\",\n \"3*max_f/4-15\": \"https://deforum.github.io/a1/Gi4.jpg\",\n \"max_f-20\": \"https://deforum.github.io/a1/Gi1.png\"\n}",
- "image_strength_schedule": "0:(0.75)",
- "blendFactorMax": "0:(0.35)",
- "blendFactorSlope": "0:(0.25)",
- "tweening_frames_schedule": "0:(20)",
- "color_correction_factor": "0:(0.075)",
- "cn_1_overwrite_frames": true,
- "cn_1_vid_path": "",
- "cn_1_mask_vid_path": "",
- "cn_1_enabled": false,
- "cn_1_low_vram": false,
- "cn_1_pixel_perfect": false,
- "cn_1_module": "none",
- "cn_1_model": "control_v11f1p_sd15_depth [cfd03158]",
- "cn_1_weight": "0:(1)",
- "cn_1_guidance_start": "0:(0.0)",
- "cn_1_guidance_end": "0:(1.0)",
- "cn_1_processor_res": 64,
- "cn_1_threshold_a": 64,
- "cn_1_threshold_b": 64,
- "cn_1_resize_mode": "Inner Fit (Scale to Fit)",
- "cn_1_control_mode": "Balanced",
- "cn_1_loopback_mode": false,
- "cn_2_overwrite_frames": true,
- "cn_2_vid_path": "",
- "cn_2_mask_vid_path": "",
- "cn_2_enabled": false,
- "cn_2_low_vram": false,
- "cn_2_pixel_perfect": false,
- "cn_2_module": "none",
- "cn_2_model": "control_v11p_sd15_seg [e1f51eb9]",
- "cn_2_weight": "0:(1)",
- "cn_2_guidance_start": "0:(0.0)",
- "cn_2_guidance_end": "0:(1.0)",
- "cn_2_processor_res": 64,
- "cn_2_threshold_a": 64,
- "cn_2_threshold_b": 64,
- "cn_2_resize_mode": "Inner Fit (Scale to Fit)",
- "cn_2_control_mode": "Balanced",
- "cn_2_loopback_mode": false,
- "cn_3_overwrite_frames": true,
- "cn_3_vid_path": "",
- "cn_3_mask_vid_path": "",
- "cn_3_enabled": false,
- "cn_3_low_vram": false,
- "cn_3_pixel_perfect": false,
- "cn_3_module": "none",
- "cn_3_model": "None",
- "cn_3_weight": "0:(1)",
- "cn_3_guidance_start": "0:(0.0)",
- "cn_3_guidance_end": "0:(1.0)",
- "cn_3_processor_res": 64,
- "cn_3_threshold_a": 64,
- "cn_3_threshold_b": 64,
- "cn_3_resize_mode": "Inner Fit (Scale to Fit)",
- "cn_3_control_mode": "Balanced",
- "cn_3_loopback_mode": false,
- "cn_4_overwrite_frames": true,
- "cn_4_vid_path": "",
- "cn_4_mask_vid_path": "",
- "cn_4_enabled": false,
- "cn_4_low_vram": false,
- "cn_4_pixel_perfect": false,
- "cn_4_module": "none",
- "cn_4_model": "None",
- "cn_4_weight": "0:(1)",
- "cn_4_guidance_start": "0:(0.0)",
- "cn_4_guidance_end": "0:(1.0)",
- "cn_4_processor_res": 64,
- "cn_4_threshold_a": 64,
- "cn_4_threshold_b": 64,
- "cn_4_resize_mode": "Inner Fit (Scale to Fit)",
- "cn_4_control_mode": "Balanced",
- "cn_4_loopback_mode": false,
- "cn_5_overwrite_frames": true,
- "cn_5_vid_path": "",
- "cn_5_mask_vid_path": "",
- "cn_5_enabled": false,
- "cn_5_low_vram": false,
- "cn_5_pixel_perfect": false,
- "cn_5_module": "none",
- "cn_5_model": "None",
- "cn_5_weight": "0:(1)",
- "cn_5_guidance_start": "0:(0.0)",
- "cn_5_guidance_end": "0:(1.0)",
- "cn_5_processor_res": 64,
- "cn_5_threshold_a": 64,
- "cn_5_threshold_b": 64,
- "cn_5_resize_mode": "Inner Fit (Scale to Fit)",
- "cn_5_control_mode": "Balanced",
- "cn_5_loopback_mode": false,
- "skip_video_creation": false,
- "fps": 20,
- "make_gif": false,
- "delete_imgs": false,
- "delete_input_frames": false,
- "add_soundtrack": "None",
- "soundtrack_path": "",
- "r_upscale_video": false,
- "r_upscale_factor": "x4",
- "r_upscale_model": "realesrgan-x4plus",
- "r_upscale_keep_imgs": true,
- "store_frames_in_ram": false,
- "frame_interpolation_engine": "None",
- "frame_interpolation_x_amount": 3,
- "frame_interpolation_slow_mo_enabled": false,
- "frame_interpolation_slow_mo_amount": 2,
- "frame_interpolation_keep_imgs": false,
- "frame_interpolation_use_upscaled": false
-}
\ No newline at end of file
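
The schedule strings in the settings above (for example `"strength_schedule": "0: (0.65)"`) use Deforum's keyframe syntax: comma-separated `frame: (value)` pairs that the extension interpolates between, with some entries (such as `seed_schedule`) embedding expressions like `s` and `max_f`. As a rough illustration of the numeric core of that format — not Deforum's actual parser, which also evaluates expressions — a minimal sketch:

```python
import re

def parse_schedule(schedule: str) -> dict[int, float]:
    """Parse numeric 'frame: (value)' keyframes, e.g. '0: (0.65), 50: (0.3)'.
    Expression-valued frames/values such as s or max_f are out of scope here."""
    return {int(m.group(1)): float(m.group(2))
            for m in re.finditer(r"(\d+)\s*:\s*\(([-+]?\d*\.?\d+)\)", schedule)}

def value_at(keyframes: dict[int, float], frame: int) -> float:
    """Linearly interpolate between the two surrounding keyframes."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    for lo, hi in zip(frames, frames[1:]):
        if frame <= hi:
            t = (frame - lo) / (hi - lo)
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])
    return keyframes[frames[-1]]

print(value_at(parse_schedule("0: (0.65), 50: (0.3)"), 25))  # 0.475
```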
diff --git a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/utils.py b/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/utils.py
deleted file mode 100644
index a11e1f8..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-deforum/tests/utils.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# Copyright (C) 2023 Deforum LLC
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as published by
-# the Free Software Foundation, version 3 of the License.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU Affero General Public License
-# along with this program. If not, see <https://www.gnu.org/licenses/>.
-
-# Contact the authors: https://deforum.github.io/
-
-from tenacity import retry, stop_after_delay, wait_fixed
-from pydantic_requests import PydanticSession
-import requests
-from scripts.deforum_api_models import DeforumJobStatus, DeforumJobStatusCategory, DeforumJobPhase
-
-SERVER_BASE_URL = "http://localhost:7860"
-API_ROOT = "/deforum_api"
-API_BASE_URL = SERVER_BASE_URL + API_ROOT
-
-@retry(wait=wait_fixed(2), stop=stop_after_delay(900))
-def wait_for_job_to_complete(id : str):
- with PydanticSession(
- {200: DeforumJobStatus}, headers={"accept": "application/json"}
- ) as session:
- response = session.get(API_BASE_URL+"/jobs/"+id)
- response.raise_for_status()
- jobStatus : DeforumJobStatus = response.model
- print(f"Waiting for job {id}: status={jobStatus.status}; phase={jobStatus.phase}; execution_time:{jobStatus.execution_time}s")
- assert jobStatus.status != DeforumJobStatusCategory.ACCEPTED
- return jobStatus
-
-@retry(wait=wait_fixed(1), stop=stop_after_delay(120))
-def wait_for_job_to_enter_phase(id : str, phase : DeforumJobPhase):
- with PydanticSession(
- {200: DeforumJobStatus}, headers={"accept": "application/json"}
- ) as session:
- response = session.get(API_BASE_URL+"/jobs/"+id)
- response.raise_for_status()
- jobStatus : DeforumJobStatus = response.model
- print(f"Waiting for job {id} to enter phase {phase}. Currently: status={jobStatus.status}; phase={jobStatus.phase}; execution_time:{jobStatus.execution_time}s")
- assert jobStatus.phase == phase  # retried by tenacity until the job reaches the requested phase
- return jobStatus
-
-@retry(wait=wait_fixed(1), stop=stop_after_delay(120))
-def wait_for_job_to_enter_status(id : str, status : DeforumJobStatusCategory):
- with PydanticSession(
- {200: DeforumJobStatus}, headers={"accept": "application/json"}
- ) as session:
- response = session.get(API_BASE_URL+"/jobs/"+id)
- response.raise_for_status()
- jobStatus : DeforumJobStatus = response.model
- print(f"Waiting for job {id} to enter status {status}. Currently: status={jobStatus.status}; phase={jobStatus.phase}; execution_time:{jobStatus.execution_time}s")
- assert jobStatus.status == status
- return jobStatus
-
-
-def gpu_disabled():
- response = requests.get(SERVER_BASE_URL+"/sdapi/v1/cmd-flags")
- response.raise_for_status()
- cmd_flags = response.json()
- return cmd_flags["use_cpu"] == ["all"]
-
-
-
-
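
The deleted `tests/utils.py` above wraps the Deforum job API in `tenacity`-retried pollers: each helper re-fetches `/jobs/{id}` until its closing assertion holds, then returns the final `DeforumJobStatus`. A hedged sketch of how a test might drive them follows; the `/batches` submission endpoint, its payload shape, and the `SUCCEEDED` enum member are assumptions for illustration, not confirmed by this diff:

```python
# Usage sketch under stated assumptions: the submission endpoint, payload
# shape, and SUCCEEDED member are illustrative, not taken from this diff.
import requests
from utils import API_BASE_URL, wait_for_job_to_complete
from scripts.deforum_api_models import DeforumJobStatusCategory

def test_batch_renders_successfully():
    # Submit a small batch and read back the issued job id (assumed endpoint).
    response = requests.post(API_BASE_URL + "/batches",
                             json={"deforum_settings": {"max_frames": 5}})
    response.raise_for_status()
    job_id = response.json()["job_ids"][0]

    # Poll until the job leaves ACCEPTED, then check the terminal status.
    job_status = wait_for_job_to_complete(job_id)
    assert job_status.status == DeforumJobStatusCategory.SUCCEEDED
```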
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.jpg
deleted file mode 100644
index 3e13d93..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.mp4 b/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.mp4
deleted file mode 100644
index 39fec8b..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover.mp4 and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover_yuv420p.mp4 b/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover_yuv420p.mp4
deleted file mode 100644
index 43f5f77..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/cover_yuv420p.mp4 and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/desc.png b/src/code/images/sd-resource/extensions/sd-webui-llul/images/desc.png
deleted file mode 100644
index c827387..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/desc.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul.mp4 b/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul.mp4
deleted file mode 100644
index b344c4a..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul.mp4 and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul_yuv420p.mp4 b/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul_yuv420p.mp4
deleted file mode 100644
index c8dd19a..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/llul_yuv420p.mp4 and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/mask_effect.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/mask_effect.jpg
deleted file mode 100644
index f761ef8..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/mask_effect.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample1.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample1.jpg
deleted file mode 100644
index bcfab5b..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample1.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample2.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample2.jpg
deleted file mode 100644
index 34fed78..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample2.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample3.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample3.jpg
deleted file mode 100644
index 3b7b173..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample3.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample4.jpg b/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample4.jpg
deleted file mode 100644
index 60657fc..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-llul/images/sample4.jpg and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report.yml b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report.yml
deleted file mode 100644
index 568a189..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report.yml
+++ /dev/null
@@ -1,75 +0,0 @@
-name: 'Bug report'
-description: "❗️❗️❗️If you don't use this template to provide bug feedback, I won't address your issue."
-title: '[Bug] '
-body:
- - type: checkboxes
- attributes:
- label: 'Issue Feedback'
- description: Please check the box below to indicate that you are aware of the relevant information.
- options:
- - label: I confirm that I have searched for a solution to this issue in the [FAQ](https://aiodoc.physton.com/FAQ.html) and couldn't find a solution.
- required: true
- - label: I confirm that I have searched for this issue in the [Issues](https://github.com/Physton/sd-webui-prompt-all-in-one/issues) list (including closed ones) and couldn't find a solution.
- required: true
- - label: I confirm that I have read the [Wiki](https://aiodoc.physton.com/) and couldn't find a solution.
- required: true
- - type: textarea
- attributes:
- label: 'Describe the Issue'
- description: Please describe the problem you encountered here.
- validations:
- required: true
- - type: textarea
- attributes:
- label: 'Steps to Reproduce'
- description: Please let me know the steps you took to trigger the issue.
- - type: textarea
- attributes:
- label: 'Screenshot or log'
- description: Please provide console screenshots or screenshots of the issue if possible.
- - type: dropdown
- attributes:
- label: 'OS'
- options:
- - Windows
- - macOS
- - Ubuntu
- - Other Linux
- - Other
- validations:
- required: true
- - type: dropdown
- attributes:
- label: 'Browser'
- options:
- - Chrome
- - Edge
- - Safari
- - Firefox
- - Other
- validations:
- required: true
- - type: input
- attributes:
- label: Stable Diffusion WebUI version
- placeholder: e.g. b6af0a3, 1.3.1
- validations:
- required: false
- - type: input
- attributes:
- label: Extension version
- placeholder: e.g. e0498a1
- validations:
- required: false
- - type: input
- attributes:
- label: Python version
- placeholder: e.g. 3.10.11
- validations:
- required: false
- - type: input
- attributes:
- label: Gradio version
- placeholder: e.g. 3.31.0
- validations:
- required: false
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report_cn.yml b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report_cn.yml
deleted file mode 100644
index 1df932f..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/bug_report_cn.yml
+++ /dev/null
@@ -1,75 +0,0 @@
-name: 'Bug report'
-description: "❗️❗️❗️If you don't use this template to provide bug feedback, I won't address your issue."
-title: '[Bug] '
-body:
- - type: checkboxes
- attributes:
- label: 'Issue Feedback'
- description: Please check the boxes below to indicate that you are aware of the relevant information.
- options:
- - label: I confirm that I have searched for a solution to this issue in the [FAQ](https://aiodoc.physton.com/zh-CN/FAQ.html) and couldn't find a solution.
- required: true
- - label: I confirm that I have searched for this issue in the [Issues](https://github.com/Physton/sd-webui-prompt-all-in-one/issues) list (including closed ones) and couldn't find a solution.
- required: true
- - label: I confirm that I have read the [documentation](https://aiodoc.physton.com/zh-CN/) and couldn't find a solution.
- required: true
- - type: textarea
- attributes:
- label: 'Describe the Issue'
- description: Please describe the problem you encountered here.
- validations:
- required: true
- - type: textarea
- attributes:
- label: 'Steps to Reproduce'
- description: Please let me know the steps you took to trigger the issue.
- - type: textarea
- attributes:
- label: 'Screenshot or log'
- description: Please provide console screenshots or screen captures here.
- - type: dropdown
- attributes:
- label: 'OS'
- options:
- - Windows
- - macOS
- - Ubuntu
- - Other Linux
- - Other
- validations:
- required: true
- - type: dropdown
- attributes:
- label: 'Browser'
- options:
- - Chrome
- - Edge
- - Safari
- - Firefox
- - Other
- validations:
- required: true
- - type: input
- attributes:
- label: Stable Diffusion WebUI version
- placeholder: e.g. b6af0a3, 1.3.1
- validations:
- required: false
- - type: input
- attributes:
- label: Extension version
- placeholder: e.g. e0498a1
- validations:
- required: false
- - type: input
- attributes:
- label: Python version
- placeholder: e.g. 3.10.11
- validations:
- required: false
- - type: input
- attributes:
- label: Gradio version
- placeholder: e.g. 3.31.0
- validations:
- required: false
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request.yml b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request.yml
deleted file mode 100644
index fbdaf4f..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request.yml
+++ /dev/null
@@ -1,22 +0,0 @@
-name: 'Feature request'
-description: 'Feature request'
-title: '[Feature] '
-body:
- - type: textarea
- attributes:
- label: 'Is this feature related to existing issues?'
- description: If it is, please list the links or describe the issues here.
- - type: textarea
- attributes:
- label: 'What feature do you want or what suggestions do you have?'
- description: Please let me know.
- validations:
- required: true
- - type: textarea
- attributes:
- label: 'Are there any similar competitors that can be referenced?'
- description: You can provide links or screenshots of reference products.
- - type: textarea
- attributes:
- label: 'Additional information'
- description: Feel free to share any other considerations you have.
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request_cn.yml b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request_cn.yml
deleted file mode 100644
index f7490f1..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/ISSUE_TEMPLATE/feature_request_cn.yml
+++ /dev/null
@@ -1,22 +0,0 @@
-name: 'Feature request'
-description: 'Feature request'
-title: '[Feature] '
-body:
- - type: textarea
- attributes:
- label: 'Is this feature related to existing issues?'
- description: If it is, please list the links or describe the issues here.
- - type: textarea
- attributes:
- label: 'What feature do you want or what suggestions do you have?'
- description: Please let me know.
- validations:
- required: true
- - type: textarea
- attributes:
- label: 'Are there any similar competitors that can be referenced?'
- description: You can provide links or screenshots of reference products.
- - type: textarea
- attributes:
- label: 'Additional information'
- description: Feel free to share any other considerations you have.
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/workflows/issue-translator.yml b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/workflows/issue-translator.yml
deleted file mode 100644
index 3601fd8..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/.github/workflows/issue-translator.yml
+++ /dev/null
@@ -1,18 +0,0 @@
-name: 'issue-translator'
-on:
- issue_comment:
- types: [created]
- issues:
- types: [opened]
-
-jobs:
- build:
- runs-on: ubuntu-latest
- steps:
- - uses: usthe/issues-translate-action@v2.7
- with:
- IS_MODIFY_TITLE: false
- # not required, default false. Decides whether to modify the issue title.
- # If true, the robot account @Issues-translate-bot must have modification permissions: invite @Issues-translate-bot to your project, or use your custom bot.
- CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically. 👯👭🏻🧑🤝🧑👫🧑🏿🤝🧑🏻👩🏾🤝👨🏿👬🏿
- # not required. Customizes the translation bot's prefix message.
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/.gitignore b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/.gitignore
deleted file mode 100644
index 456a557..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-tested.json
-.env
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_lang.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_lang.py
deleted file mode 100644
index 4527769..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_lang.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-
-from scripts.physton_prompt.get_lang import get_lang
-
-print(get_lang('is_required', {'0': '11'}))
-print(get_lang('is_required1'))
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_version.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_version.py
deleted file mode 100644
index e43e0ec..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/get_version.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-
-from scripts.physton_prompt.get_version import get_git_commit_version, get_git_remote_versions, get_latest_version
-
-print(get_git_remote_versions())
-print(get_git_commit_version())
-print(get_latest_version())
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/privacy_api_config.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/privacy_api_config.py
deleted file mode 100644
index 4a2e0e1..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/privacy_api_config.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-from scripts.physton_prompt.storage import Storage
-from scripts.physton_prompt.get_translate_apis import privacy_translate_api_config, unprotected_translate_api_config
-st = Storage()
-key = 'translate_api.volcengine'
-data = st.get(key)
-data = privacy_translate_api_config(key, data)
-print(data)
-data = unprotected_translate_api_config(key, data)
-print(data)
-
-data = {
- 'key': 'translate_api.volcengine',
- 'data': {
- 'access_key_id': 'AKLTYz*****************************************',
- 'access_key_secret': 'TWpVNV******************************************************',
- 'region': 'cn-north-1',
- }
-}
-data['data'] = unprotected_translate_api_config(data['key'], data['data'])
-print(data)
\ No newline at end of file
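
The test above round-trips a credential config through `privacy_translate_api_config` and `unprotected_translate_api_config`, and its sample data (`AKLTYz*****...`) suggests the protected form keeps a short prefix of each secret and masks the remainder. A minimal sketch of that masking idea, as an assumption about the behaviour rather than the extension's actual implementation:

```python
def mask_secret(value: str, keep: int = 6) -> str:
    """Keep the first `keep` characters of a credential and mask the rest,
    mirroring values like 'AKLTYz*****...' in the sample config above."""
    if len(value) <= keep:
        return value  # too short to meaningfully mask
    return value[:keep] + "*" * (len(value) - keep)

print(mask_secret("AKLTYzEXAMPLEKEYID"))  # 'AKLTYz' followed by 12 asterisks
```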
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translate.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translate.py
deleted file mode 100644
index 59e317d..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translate.py
+++ /dev/null
@@ -1,82 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-import time
-import json
-from scripts.physton_prompt.translate import translate
-from scripts.physton_prompt.get_i18n import get_i18n
-from scripts.physton_prompt.get_translate_apis import get_translate_apis
-from scripts.physton_prompt.storage import Storage
-
-i18n = get_i18n()
-st = Storage()
-text = 'Hello World, I am a boy'
-
-tested_file = os.path.join(os.path.dirname(__file__), 'tested.json')
-tested = []
-if os.path.exists(tested_file):
- with open(tested_file, 'r') as f:
- tested = json.load(f)
-
-def is_tested(api_key, from_lang, to_lang):
- for item in tested:
- if item['api'] == api_key and item['from'] == from_lang and item['to'] == to_lang:
- return item['translated_text']
- return False
-
-def add_tested(api_key, from_lang, to_lang, translated_text):
- tested.append({
- 'api': api_key,
- 'from': from_lang,
- 'to': to_lang,
- 'translated_text': translated_text
- })
- with open(tested_file, 'w') as f:
- json.dump(tested, f, indent=4, ensure_ascii=False)
-
-def test_api(api):
- print(f"开始测试 {api['name']}")
- config_name = 'translate_api.' + api['key']
- config = st.get(config_name)
- if not config:
- config = {}
- for lang_code in api['support']:
- if lang_code == 'en_US' or lang_code == 'en_GB':
- continue
- if not api['support'][lang_code]:
- continue
- if api['key'] == 'openai' or api['key'] == 'deepl':
- continue
-
- translated_text = is_tested(api['key'], 'en_US', lang_code)
- if not translated_text:
- print(f" 测试 en_US -> {lang_code}", end=' ')
- result = translate(text, from_lang='en_US', to_lang=lang_code, api=api['key'],api_config=config)
- if not result['success']:
- print(f"失败: {result['message']}")
- time.sleep(0.5)
- # raise Exception(f"测试 {api['name']} 失败:{result['message']}")
- continue
- add_tested(api['key'], 'en_US', lang_code, result['translated_text'])
- translated_text = result['translated_text']
- print(f" 结果: {translated_text}")
- time.sleep(0.5)
-
- if not is_tested(api['key'], lang_code, 'en_US'):
- print(f" 测试 {lang_code} -> en_US", end=' ')
- result = translate(translated_text, from_lang=lang_code, to_lang='en_US', api=api['key'],api_config=config)
- if not result['success']:
- print(f"失败: {result['message']}")
- time.sleep(0.5)
- # raise Exception(f"测试 {api['name']} 失败:{result['message']}")
- continue
- translated_text = result['translated_text']
- add_tested(api['key'], lang_code, 'en_US', translated_text)
- print(f" 结果: {translated_text}")
- time.sleep(0.5)
-
-apis = get_translate_apis()
-for group in apis['apis']:
- for api in group['children']:
- test_api(api)
-
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translator.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translator.py
deleted file mode 100644
index b26e92e..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translator.py
+++ /dev/null
@@ -1,187 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-
-from dotenv import load_dotenv
-load_dotenv(os.path.join(os.path.dirname(__file__), '.env'))
-
-from scripts.physton_prompt.translator.microsoft_translator import MicrosoftTranslator
-from scripts.physton_prompt.translator.google_tanslator import GoogleTranslator
-from scripts.physton_prompt.translator.openai_translator import OpenaiTranslator
-from scripts.physton_prompt.translator.amazon_translator import AmazonTranslator
-from scripts.physton_prompt.translator.deepl_translator import DeeplTranslator
-from scripts.physton_prompt.translator.baidu_translator import BaiduTranslator
-from scripts.physton_prompt.translator.youdao_translator import YoudaoTranslator
-from scripts.physton_prompt.translator.alibaba_translator import AlibabaTranslator
-from scripts.physton_prompt.translator.tencent_translator import TencentTranslator
-from scripts.physton_prompt.translator.translators_translator import TranslatorsTranslator
-from scripts.physton_prompt.translator.yandex_translator import YandexTranslator
-from scripts.physton_prompt.translator.mymemory_translator import MyMemoryTranslator
-from scripts.physton_prompt.translator.niutrans_translator import NiutransTranslator
-
-from scripts.physton_prompt.translate import translate
-from scripts.physton_prompt.get_i18n import get_i18n
-
-text = 'project'
-texts = [
- 'Hello World',
- '1 girl', '2 girl', '3 girl', '4 girl', '5 girl',
- '1 dog', '2 dog', '3 dog', '4 dog', '5 dog',
- '1 cat', '2 cat', '3 cat', '4 cat', '5 cat',
- '1 car', '2 car', '3 car', '4 car', '5 car',
- '1 apple', '2 apple', '3 apple', '4 apple', '5 apple',
- '1 banana', '2 banana', '3 banana', '4 banana', '5 banana',
- '1 orange', '2 orange', '3 orange', '4 orange', '5 orange',
- '1 watermelon', '2 watermelon', '3 watermelon', '4 watermelon', '5 watermelon',
- '1 pear', '2 pear', '3 pear', '4 pear', '5 pear',
- '1 peach', '2 peach', '3 peach', '4 peach', '5 peach',
- '1 grape', '2 grape', '3 grape', '4 grape', '5 grape',
- '1 pineapple', '2 pineapple', '3 pineapple', '4 pineapple', '5 pineapple',
-]
-
-def test_google():
- api_config = {
- 'api_key': os.getenv('GOOGLE_API_KEY')
- }
- print(translate(text, 'en_US', 'zh_CN', 'google', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'google', api_config))
-
-def test_microsoft():
- api_config = {
- 'api_key': os.getenv('MICROSOFT_API_KEY'),
- 'region': 'eastasia'
- }
- print(translate(text, 'en_US', 'zh_CN', 'microsoft', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'microsoft', api_config))
-
-def test_openai():
- api_config = {
- 'api_base': os.getenv('OPENAI_API_BASE'),
- 'api_key': os.getenv('OPENAI_API_KEY'),
- 'model': 'gpt-3.5-turbo'
- }
- print(translate(text, 'en_US', 'zh_CN', 'openai', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'openai', api_config))
-
-def test_amazon():
- api_config = {
- 'api_key_id': os.getenv('AMAZON_API_KEY_ID'),
- 'api_key_secret': os.getenv('AMAZON_API_KEY_SECRET'),
- 'region': 'us-east-1'
- }
- print(translate(text, 'en_US', 'zh_CN', 'amazon', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'amazon', api_config))
-
-def test_deepl():
- api_config = {
- 'api_key': os.getenv('DEEPL_API_KEY')
- }
- print(translate(text, 'en_US', 'zh_CN', 'deepl', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'deepl', api_config))
-
-def test_baidu():
- api_config = {
- 'app_id': os.getenv('BAIDU_APP_ID'),
- 'app_secret': os.getenv('BAIDU_APP_SECRET')
- }
- print(translate(text, 'en_US', 'zh_CN', 'baidu', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'baidu', api_config))
-
-def test_youdao():
- api_config = {
- 'app_id': os.getenv('YOUDAO_APP_ID'),
- 'app_secret': os.getenv('YOUDAO_APP_SECRET')
- }
- print(translate(text, 'en_US', 'zh_CN', 'youdao', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'youdao', api_config))
-
-def test_alibaba():
- api_config = {
- 'access_key_id': os.getenv('ALIBABA_ACCESS_KEY_ID'),
- 'access_key_secret': os.getenv('ALIBABA_ACCESS_KEY_SECRET'),
- }
- print(translate(text, 'en_US', 'zh_CN', 'alibaba', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'alibaba', api_config))
-
-def test_tencent():
- api_config = {
- 'secret_id': os.getenv('TENCENT_SECRET_ID'),
- 'secret_key': os.getenv('TENCENT_SECRET_KEY'),
- 'region': 'ap-shanghai'
- }
- print(translate(text, 'en_US', 'zh_CN', 'tencent', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'tencent', api_config))
-
-def test_translators():
- print(translate(text, 'en_US', 'zh_CN', 'alibaba_free', {'region': 'EN'}))
- print(translate(texts, 'en_US', 'zh_CN', 'alibaba_free', {'region': 'EN'}))
-
-def test_yandex():
- api_config = {
- 'api_key': os.getenv('YANDEX_API_KEY'),
- }
- print(translate(text, 'en_US', 'zh_CN', 'yandex', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'yandex', api_config))
-
-def test_mymemory():
- api_config = {
- 'api_key': os.getenv('MYMEMORY_API_KEY'),
- }
- print(translate(text, 'en_US', 'zh_TW', 'myMemory_free', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'myMemory_free', api_config))
-
-def test_niutrans():
- api_config = {
- 'api_key': os.getenv('NIUTRANS_API_KEY')
- }
- print(translate(text, 'en_US', 'zh_TW', 'niutrans', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'niutrans', api_config))
-
-def test_caiyun():
- api_config = {
- 'token': os.getenv('CAIYUN_TOKEN')
- }
- print(translate(text, 'en_US', 'zh_CN', 'caiyun', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'caiyun', api_config))
-
-
-def test_volcengine():
- api_config = {
- 'access_key_id': os.getenv('VOLCENGINE_ACCESS_KEY_ID'),
- 'access_key_secret': os.getenv('VOLCENGINE_ACCESS_KEY_SECRET'),
- }
- print(translate(text, 'en_US', 'zh_TW', 'volcengine', api_config))
- print(translate(texts, 'en_US', 'zh_TW', 'volcengine', api_config))
-
-def test_iflytekV1():
- api_config = {
- 'app_id': os.getenv('IFLYTEK_APP_ID'),
- 'api_secret': os.getenv('IFLYTEK_API_SECRET'),
- 'api_key': os.getenv('IFLYTEK_API_KEY'),
- }
- print(translate(text, 'en_US', 'zh_CN', 'iflytekV1', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'iflytekV1', api_config))
-
-def test_iflytekV2():
- api_config = {
- 'app_id': os.getenv('IFLYTEK_APP_ID'),
- 'api_secret': os.getenv('IFLYTEK_API_SECRET'),
- 'api_key': os.getenv('IFLYTEK_API_KEY'),
- }
- print(translate(text, 'en_US', 'zh_CN', 'iflytekV2', api_config))
- print(translate(texts, 'en_US', 'zh_CN', 'iflytekV2', api_config))
-
-def test_languages():
- i18n = get_i18n()
- languages = []
- for item in i18n['languages']:
- if item['code'] == 'en_US':
- continue
- languages.append(item['code'])
-
- for lang in languages:
- print(f'lang: {lang} => ')
- print(translate(text, 'en_US', lang, 'myMemory_free'))
- pass
-
-test_translators()
diff --git a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translators.py b/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translators.py
deleted file mode 100644
index cba8dec..0000000
--- a/src/code/images/sd-resource/extensions/sd-webui-prompt-all-in-one/tests/translators.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import os
-import sys
-sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
-
-region = 'CN'
-os.environ['translators_default_region'] = region
-from scripts.physton_prompt.translators.server import translate_text, tss
-tss.server_region = region
-tss._bing.server_region = region
-tss._google.server_region = region
-
-text = '''
-Hi, this extension is developed by Physton. Welcome to use it!
-If you have any suggestions or opinions, please feel free to raise an issue or PR on Github.
-If you find this extension helpful, please give me a star on Github!
-
-Developed by: Physton
-Github: Physton/sd-webui-prompt-all-in-one
-'''
-translator = 'alibaba'
-print("--------------------------------------")
-print(translate_text(text, translator, 'zh', 'en'))
-
-print("--------------------------------------")
-print(translate_text('你好', translator, 'zh', 'en'))
-
-print("--------------------------------------")
-print(translate_text('女孩', translator, 'zh', 'en'))
-
-print("--------------------------------------")
-print(translate_text('美女', translator, 'zh', 'en'))
\ No newline at end of file
diff --git a/src/code/images/sd-resource/extensions/sd-webui-roop/example/example.png b/src/code/images/sd-resource/extensions/sd-webui-roop/example/example.png
deleted file mode 100644
index 7a7f0f3..0000000
Binary files a/src/code/images/sd-resource/extensions/sd-webui-roop/example/example.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/.github/workflows/codeql.yml b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/.github/workflows/codeql.yml
deleted file mode 100644
index e424a60..0000000
--- a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/.github/workflows/codeql.yml
+++ /dev/null
@@ -1,74 +0,0 @@
-# For most projects, this workflow file will not need changing; you simply need
-# to commit it to your repository.
-#
-# You may wish to alter this file to override the set of languages analyzed,
-# or to provide custom queries or build logic.
-#
-# ******** NOTE ********
-# We have attempted to detect the languages in your repository. Please check
-# the `language` matrix defined below to confirm you have the correct set of
-# supported CodeQL languages.
-#
-name: "CodeQL"
-
-on:
- push:
- branches: [ "main" ]
- pull_request:
- # The branches below must be a subset of the branches above
- branches: [ "main" ]
- schedule:
- - cron: '34 9 * * 3'
-
-jobs:
- analyze:
- name: Analyze
- runs-on: ubuntu-latest
- permissions:
- actions: read
- contents: read
- security-events: write
-
- strategy:
- fail-fast: false
- matrix:
- language: [ 'javascript', 'python' ]
- # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
- # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
-
- steps:
- - name: Checkout repository
- uses: actions/checkout@v3
-
- # Initializes the CodeQL tools for scanning.
- - name: Initialize CodeQL
- uses: github/codeql-action/init@v2
- with:
- languages: ${{ matrix.language }}
- # If you wish to specify custom queries, you can do so here or in a config file.
- # By default, queries listed here will override any specified in a config file.
- # Prefix the list here with "+" to use these queries and those in the config file.
-
- # Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
- # queries: security-extended,security-and-quality
-
-
- # Autobuild attempts to build any compiled languages (C/C++, C#, Go, or Java).
- # If this step fails, then you should remove it and run the build manually (see below)
- - name: Autobuild
- uses: github/codeql-action/autobuild@v2
-
- # ℹ️ Command-line programs to run using the OS shell.
- # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
-
- # If the Autobuild fails above, remove it and uncomment the following three lines.
- # modify them (or add more) to build your code if your project uses a compiled language; see the example below for guidance.
-
- # - run: |
- # echo "Run, Build Application using script"
- # ./location_of_script_within_repo/buildscript.sh
-
- - name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@v2
- with:
- category: "/language:${{matrix.language}}"
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss02.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss02.png
deleted file mode 100644
index 7acd1a3..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss02.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss03.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss03.png
deleted file mode 100644
index 0a69c40..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss03.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss04.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss04.png
deleted file mode 100644
index f8f49af..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss04.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss05.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss05.png
deleted file mode 100644
index 52f8534..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss05.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss06.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss06.png
deleted file mode 100644
index 79f2f5e..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss06.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss07.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss07.png
deleted file mode 100644
index dad1e82..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss07.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss08.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss08.png
deleted file mode 100644
index c58b913..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss08.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss09.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss09.png
deleted file mode 100644
index 19500a1..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss09.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss10.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss10.png
deleted file mode 100644
index d369626..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-dataset-tag-editor/pic/ss10.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/.github/workflows/pylint.yml b/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/.github/workflows/pylint.yml
deleted file mode 100644
index 383e65c..0000000
--- a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/.github/workflows/pylint.yml
+++ /dev/null
@@ -1,23 +0,0 @@
-name: Pylint
-
-on: [push]
-
-jobs:
- build:
- runs-on: ubuntu-latest
- strategy:
- matrix:
- python-version: ["3.8", "3.9", "3.10"]
- steps:
- - uses: actions/checkout@v3
- - name: Set up Python ${{ matrix.python-version }}
- uses: actions/setup-python@v3
- with:
- python-version: ${{ matrix.python-version }}
- - name: Install dependencies
- run: |
- python -m pip install --upgrade pip
- pip install pylint
- - name: Analysing the code with pylint
- run: |
- pylint $(git ls-files '*.py')
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/model-comparison.md b/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/model-comparison.md
deleted file mode 100644
index 6eba4c5..0000000
--- a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/model-comparison.md
+++ /dev/null
@@ -1,49 +0,0 @@
-Model comparison
----
-
-* Used image: [hecattaart's artwork](https://vk.com/hecattaart?w=wall-89063929_3767)
-* Threshold: `0.5`
-
-### DeepDanbooru
-
-#### [`deepdanbooru-v3-20211112-sgd-e28`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v3-20211112-sgd-e28)
-```
-1girl, animal ears, cat ears, cat tail, clothes writing, full body, rating:safe, shiba inu, shirt, shoes, simple background, sneakers, socks, solo, standing, t-shirt, tail, white background, white shirt
-```
-
-#### [`deepdanbooru-v4-20200814-sgd-e30`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v4-20200814-sgd-e30)
-```
-1girl, animal, animal ears, bottomless, clothes writing, full body, rating:safe, shirt, shoes, short sleeves, sneakers, solo, standing, t-shirt, tail, white background, white shirt
-```
-
-#### `e621-v3-20221117-sgd-e32`
-```
-anthro, bottomwear, clothing, footwear, fur, hi res, mammal, shirt, shoes, shorts, simple background, sneakers, socks, solo, standing, text on clothing, text on topwear, topwear, white background
-```
-
-### Waifu Diffusion Tagger
-
-#### [`wd14-vit`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger)
-```
-1boy, animal ears, dog, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, tail, white background
-```
-
-#### [`wd14-convnext`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger)
-```
-full body, furry, shirt, shoes, simple background, socks, solo, tail, white background
-```
-
-#### [`wd14-vit-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2)
-```
-1boy, animal ears, cat, furry, male focus, shirt, shoes, simple background, socks, solo, tail, white background
-```
-
-#### [`wd14-convnext-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2)
-```
-animal focus, clothes writing, earrings, full body, meme, shirt, shoes, simple background, socks, solo, sweat, tail, white background, white shirt
-```
-
-#### [`wd14-swinv2-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2)
-```
-1boy, arm hair, black footwear, cat, dirty, full body, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, standing, tail, white background, white shirt
-```
\ No newline at end of file
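
Every tag list above is produced the same way: the image is scored by one model, and labels whose confidence clears the stated `0.5` threshold are kept and joined into a comma-separated string. A rough sketch of that post-processing step (the score dictionary is made up for illustration):

```python
def tags_above_threshold(scores: dict[str, float], threshold: float = 0.5) -> str:
    """Keep labels scoring at or above the threshold, formatted like the lists above."""
    kept = sorted(tag for tag, score in scores.items() if score >= threshold)
    return ", ".join(kept)

# Fabricated scores, purely to show the shape of the output.
scores = {"1girl": 0.92, "animal ears": 0.81, "white background": 0.77, "outdoors": 0.12}
print(tags_above_threshold(scores))  # 1girl, animal ears, white background
```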
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/screenshot.png b/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/screenshot.png
deleted file mode 100644
index 5e07148..0000000
Binary files a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/screenshot.png and /dev/null differ
diff --git a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/what-is-wd14-tagger.md b/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/what-is-wd14-tagger.md
deleted file mode 100644
index bff9a61..0000000
--- a/src/code/images/sd-resource/extensions/stable-diffusion-webui-wd14-tagger/docs/what-is-wd14-tagger.md
+++ /dev/null
@@ -1,30 +0,0 @@
-What is Waifu Diffusion 1.4 Tagger?
----
-
-An image-to-text model created and maintained by [MrSmilingWolf](https://huggingface.co/SmilingWolf), which was used to train Waifu Diffusion.
-
-Please ask the original author `MrSmilingWolf#5991` for questions related to the model or additional training.
-
-## SwinV2 vs Convnext vs ViT
-> It's got characters now, the HF space has been updated too. Model of choice for classification is SwinV2 now. ConvNext was used to extract features because SwinV2 is a bit of a pain cuz it is twice as slow and more memory intensive
-
-— [this message](https://discord.com/channels/930499730843250783/930499731451428926/1066830289382408285) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
-
-> To make it clear: the ViT model is the one used to tag images for WD 1.4. That's why the repo was originally called like that. This one has been trained on the same data and tags, but has got no other relation to WD 1.4, aside from stemming from the same coordination effort. They were trained in parallel, and the best one at the time was selected for WD 1.4
->
-> This particular model was trained later and might actually be slightly better than the ViT one. Difference is in the noise range tho
-
-— [this thread](https://discord.com/channels/930499730843250783/1052283314997837955) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
-
-## Performance
-> I stack them together and get a 1.1GB model with higher validation metrics than the three separated, so they each do their own thing and averaging the predictions sorta helps covering for each models failures. I suppose.
-> As for my impression for each model:
-> - SwinV2: a memory and GPU hog. Best metrics of the bunch, my model is compatible with timm weights (so it can be used on PyTorch if somebody ports it) but slooow. Good for a few predictions, would reconsider for massive tagging jobs if you're pressed for time
-> - ConvNext: nice perfs, good metrics. A sweet spot. The 1024 final embedding size provides ample space for training the Dense layer on other datasets, like E621.
-> - ViT: fastest of the bunch, at least on TPU, probably on GPU too? Slightly less then stellar metrics when compared with the other two. Onnxruntime and Tensorflow keep adding optimizations for Transformer models so that's good too.
-
-— [this message](https://discord.com/channels/930499730843250783/930499731451428926/1066833768112996384) from the [東方Project AI discord server](https://discord.com/invite/touhouai)
-
-## Links
-- [MrSmilingWolf's HuggingFace profile](https://huggingface.co/SmilingWolf)
-- [MrSmilingWolf's GitHub profile](https://github.com/SmilingWolf)
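
The quoted remarks in the Performance section describe stacking the three taggers and averaging their predictions so the models cover for each other's failures. A minimal sketch of that ensembling idea with NumPy; the label set, scores, and 0.5 cut-off are illustrative assumptions:

```python
import numpy as np

def ensemble_tags(per_model_probs: list[np.ndarray], labels: list[str],
                  threshold: float = 0.5) -> list[str]:
    """Average each model's per-label probabilities, then threshold the mean."""
    mean_probs = np.stack(per_model_probs).mean(axis=0)
    return [label for label, p in zip(labels, mean_probs) if p >= threshold]

# Fabricated per-model scores for three labels (SwinV2, ConvNext, ViT order).
labels = ["1boy", "animal ears", "simple background"]
swinv2 = np.array([0.80, 0.40, 0.90])
convnext = np.array([0.70, 0.35, 0.85])
vit = np.array([0.75, 0.60, 0.95])
print(ensemble_tags([swinv2, convnext, vit], labels))  # ['1boy', 'simple background']
```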