8 Commits
1.0.2 ... 1.1.0

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| AbdBarho | 59892da866 | Custom Models Auto (#75) | 2022-09-17 13:44:00 +02:00 |
| Abdullah Barhoum | fceb83c2b0 | Dev hlky | 2022-09-16 21:10:40 +02:00 |
| AbdBarho | 17b01a7627 | Parallel Downloads (#74) | 2022-09-16 20:07:50 +02:00 |
| Abdullah Barhoum | b96d7c30d0 | make executable | 2022-09-16 18:37:41 +02:00 |
| AbdBarho | aae83bb8f2 | Update lstein to dev branch (#73) | 2022-09-16 16:40:20 +02:00 |
| AbdBarho | 10763a8f61 | Update Git Post Buffer | 2022-09-16 06:51:14 +02:00 |
| AbdBarho | 64e8f093d2 | Create stale.yml | 2022-09-16 06:41:49 +02:00 |
| AbdBarho | 3e0a137c23 | Remove outdated (#69) | 2022-09-15 22:48:38 +02:00 |
17 changed files with 188 additions and 82 deletions

.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '30 1 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v5
        with:
          only-labels: awaiting-response
          stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
          close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
          days-before-issue-stale: 14
          days-before-pr-stale: 14
          days-before-issue-close: 7
          days-before-pr-close: 7

View File

@@ -41,13 +41,18 @@ Screenshots:
 ### lstein
-[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, but less so for the WebUI.
+[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
+
+| Text to image | Image to image | Extras |
+| --- | --- | --- |
+| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
+
 ## Setup & Usage
 Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
 ## Contributing
 Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
 ## Disclaimer

cache/.gitignore vendored
View File

@@ -2,3 +2,4 @@
 /transformers
 /weights
 /models
+/custom-models

View File

@@ -52,3 +52,6 @@ services:
     <<: *base_service
     profiles: ["lstein"]
     build: ./services/lstein/
+    environment:
+      - PRELOAD=false
+      - CLI_ARGS=
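
The new variables are plain Compose `environment` entries, so they are changed by editing the file rather than by exporting shell variables. A minimal usage sketch (assuming the compose file sits at the repository root, as in the main README):

```
# edit docker-compose.yml: set "- PRELOAD=true" to run preload_models.py at container start,
# and put any dream.py flags into "- CLI_ARGS=" (empty by default), then start the profile:
docker compose --profile lstein up --build
```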

View File

@@ -42,7 +42,7 @@ RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeForme
 # Note: don't update the sha of previous versions because the install will take forever
 # instead, update the repo state in a later step
-ARG SHA=2ddaeb318a9626502ef4bf949a312253d8021ff0
+ARG SHA=99585b3514e2d7e987651d5c6a0806f933af012b
 RUN <<EOF
 cd stable-diffusion-webui
 git pull --rebase
@@ -61,4 +61,5 @@ RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
 WORKDIR ${WORKDIR}
 EXPOSE 7860
 # run, -u to not buffer stdout / stderr
-CMD /docker/mount.sh && python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config ${CLI_ARGS}
+CMD /docker/mount.sh && \
+  python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir /cache/custom-models --ckpt /cache/models/model.ckpt ${CLI_ARGS}
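
With `--ckpt-dir /cache/custom-models`, extra checkpoints dropped into the cache are listed in the UI alongside the default `model.ckpt`. A rough host-side sketch (assuming the repository's `cache/` folder is what gets mounted into the container as `/cache`; the checkpoint file name is only an example):

```
# place an additional checkpoint where the webui's --ckpt-dir points;
# it is picked up on the next container start, no image rebuild needed
cp ~/Downloads/some-finetuned-model.ckpt cache/custom-models/
```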

View File

@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the cli parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of cil parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)

View File

@@ -44,5 +44,15 @@
"interrogate_clip_num_beams": 1, "interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24, "interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48, "interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0 "interrogate_clip_dict_limit": 1500.0,
"samples_filename_pattern": "",
"directories_filename_pattern": "",
"save_selected_only": false,
"filter_nsfw": false,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"enable_quantization": false,
"enable_batch_seeds": true,
"memmon_poll_rate": 8,
"sd_model_checkpoint": null
} }

View File

@@ -32,5 +32,4 @@ rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/C
 ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
 ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
-# mount config
-# ln -sf /docker/config.json ${WORKDIR}/config.json
+mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models

View File

@@ -1,6 +1,6 @@
 FROM bash:alpine3.15
-RUN apk add parallel
+RUN apk add parallel aria2
 COPY . /docker
 RUN chmod +x /docker/download.sh
 ENTRYPOINT ["/docker/download.sh"]

View File

@@ -2,32 +2,12 @@
 set -Eeuo pipefail
-# [[ "$(sha256sum -b $file | head -c 64)" == "$sha" ]]
-declare -A MODELS
-MODELS['model.ckpt']='https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
-MODELS['GFPGANv1.3.pth']='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth'
-MODELS['RealESRGAN_x4plus.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
-MODELS['RealESRGAN_x4plus_anime_6B.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'
-MODELS['LDSR.yaml']='https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'
-MODELS['LDSR.ckpt']='https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'
-echo "Downloading..."
-for file in "${!MODELS[@]}"; do
-  url=${MODELS[$file]}
-  full_path="/cache/models/$file"
-  if [[ -f "$full_path" ]]; then
-    echo "- $file exists"
-    continue
-  fi
-  mkdir -p $(dirname $full_path)
-  wget --tries=10 -c -O $full_path $url
-done
+mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models
+echo "Downloading, this might take a while..."
+aria2c --input-file /docker/links.txt --dir /cache/models --continue
 echo "Checking SHAs..."
-time parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
+parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
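
For context, every line of `/docker/checksums.sha256` is a standard `<sha256>  <path>` record, and `parallel` only fans those lines out to one `sha256sum -c` process each, reading the record from stdin. A sequential sketch of the same verification, run inside the downloader container where `/docker` and `/cache` exist:

```
# verify all downloads in a single process instead of via GNU parallel
sha256sum -c /docker/checksums.sha256
```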

View File

@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
  out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
  out=GFPGANv1.3.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
  out=RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
  out=RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
  out=LDSR.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
  out=LDSR.ckpt
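
The indented `out=` lines are aria2 per-download options: in an `--input-file` list, each URI line is followed by option lines that apply only to that download, which is how every URL is pinned to a fixed target filename. Roughly the same thing for the first entry as a one-off command:

```
# single-download equivalent of the first links.txt entry
aria2c --continue --dir /cache/models --out model.ckpt \
  'https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
```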

View File

@@ -13,6 +13,7 @@ RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clea
 RUN <<EOF
+git config --global http.postBuffer 1048576000
 git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
 cd stable-diffusion
 git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
@@ -28,7 +29,7 @@ RUN pip install -U --no-cache-dir pyperclip
 ARG BRANCH=master
 ARG SHA=833a91047df999302f699637768741cecee9c37b
 # ARG BRANCH=dev
-# ARG SHA=b4de6caf697d311c1238c15a4c863fa529a35522
+# ARG SHA=5f3d7facdea58fc4f89b8c584d22a4639615a2f8
 RUN <<EOF
 cd stable-diffusion
 git fetch
@@ -60,4 +61,4 @@ EXPOSE 7860
 # run, -u to not buffer stdout / stderr
 CMD /docker/mount.sh && \
   python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt --ldsr-dir /latent-diffusion --inbrowser ${CLI_ARGS}
 # STREAMLIT_SERVER_PORT=7860 python -m streamlit run scripts/webui_streamlit.py

View File

@@ -2,9 +2,12 @@
 general:
     gpu: 0
     outdir: /outputs
-    ckpt: "/cache/models/model.ckpt"
+    default_model: "Stable Diffusion v1.4"
+    default_model_config: "configs/stable-diffusion/v1-inference.yaml"
+    default_model_path: "/cache/models/model.ckpt"
     fp:
-        name: "embeddings/alex/embeddings_gs-11000.pt"
+        name:
     GFPGAN_dir: "./src/gfpgan"
     RealESRGAN_dir: "./src/realesrgan"
     RealESRGAN_model: "RealESRGAN_x4plus"
@@ -15,44 +18,90 @@ general:
     extra_models_cpu: False
     extra_models_gpu: False
     save_metadata: True
+    save_format: "png"
     skip_grid: False
     skip_save: False
     grid_format: "jpg:95"
-    save_format: "png"
     n_rows: -1
     no_verify_input: False
     no_half: False
+    use_float16: False
     precision: "autocast"
     optimized: False
-    optimized_turbo: False
+    optimized_turbo: True
+    optimized_config: "optimizedSD/v1-inference.yaml"
     update_preview: True
-    update_preview_frequency: 1
+    update_preview_frequency: 5
 txt2img:
     prompt:
     height: 512
     width: 512
-    cfg_scale: 5.0
+    cfg_scale: 7.5
     seed: ""
     batch_count: 1
     batch_size: 1
-    sampling_steps: 50
+    sampling_steps: 30
-    default_sampler: "k_lms"
+    default_sampler: "k_euler"
     separate_prompts: False
+    update_preview: True
+    update_preview_frequency: 5
     normalize_prompt_weights: True
     save_individual_images: True
     save_grid: True
     group_by_prompt: True
     save_as_jpg: False
-    use_GFPGAN: True
+    use_GFPGAN: False
-    use_RealESRGAN: True
+    use_RealESRGAN: False
     RealESRGAN_model: "RealESRGAN_x4plus"
     variant_amount: 0.0
     variant_seed: ""
+    write_info_files: True
+txt2vid:
+    default_model: "CompVis/stable-diffusion-v1-4"
+    custom_models_list:
+        [
+            "CompVis/stable-diffusion-v1-4",
+            "naclbit/trinart_stable_diffusion_v2",
+            "hakurei/waifu-diffusion",
+            "osanseviero/BigGAN-deep-128",
+        ]
+    prompt:
+    height: 512
+    width: 512
+    cfg_scale: 7.5
+    seed: ""
+    batch_count: 1
+    batch_size: 1
+    sampling_steps: 30
+    num_inference_steps: 200
+    default_sampler: "k_euler"
+    scheduler_name: "klms"
+    separate_prompts: False
+    update_preview: True
+    update_preview_frequency: 5
+    dynamic_preview_frequency: True
+    normalize_prompt_weights: True
+    save_individual_images: True
+    save_video: True
+    group_by_prompt: True
+    write_info_files: True
+    do_loop: False
+    save_as_jpg: False
+    use_GFPGAN: False
+    use_RealESRGAN: False
+    RealESRGAN_model: "RealESRGAN_x4plus"
+    variant_amount: 0.0
+    variant_seed: ""
+    beta_start: 0.00085
+    beta_end: 0.012
+    beta_scheduler_type: "linear"
+    max_frames: 1000
 img2img:
     prompt:
-    sampling_steps: 50
+    sampling_steps: 30
     # Adding an int to toggles enables the corresponding feature.
     # 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
     # 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
@@ -65,11 +114,12 @@ img2img:
     # 8: jpg samples
     # 9: Fix faces using GFPGAN
     # 10: Upscale images using Real-ESRGAN
-    sampler_name: k_lms
+    sampler_name: "k_euler"
     denoising_strength: 0.45
     # 0: Keep masked area
     # 1: Regenerate only masked area
     mask_mode: 0
+    mask_restore: False
     # 0: Just resize
     # 1: Crop and resize
     # 2: Resize and fill
@@ -77,7 +127,7 @@ img2img:
     # Leave blank for random seed:
     seed: ""
     ddim_eta: 0.0
-    cfg_scale: 5.0
+    cfg_scale: 7.5
     batch_count: 1
     batch_size: 1
     height: 512
@@ -87,16 +137,19 @@ img2img:
     loopback: True
     random_seed_loopback: True
     separate_prompts: False
+    update_preview: True
+    update_preview_frequency: 5
     normalize_prompt_weights: True
     save_individual_images: True
     save_grid: True
     group_by_prompt: True
     save_as_jpg: False
-    use_GFPGAN: True
+    use_GFPGAN: False
-    use_RealESRGAN: True
+    use_RealESRGAN: False
     RealESRGAN_model: "RealESRGAN_x4plus"
     variant_amount: 0.0
     variant_seed: ""
+    write_info_files: True
 gfpgan:
     strength: 100

View File

@@ -20,12 +20,25 @@ conda env update --file environment.yaml -n base
 conda clean -a -y
 EOF
-ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
+ARG BRANCH=development SHA=45af30f3a4c98b50c755717831c5fff75a3a8b43
+# ARG BRANCH=main SHA=89da371f4841f7e05da5a1672459d700c3920784
+RUN <<EOF
+cd stable-diffusion
+git fetch
+git checkout ${BRANCH}
+git reset --hard ${SHA}
+conda env update --file environment.yaml -n base
+conda clean -a -y
+EOF
+RUN pip uninstall opencv-python -y && pip install --prefer-binary --upgrade --force-reinstall --no-cache-dir opencv-python-headless
+COPY . /docker/
+RUN python3 /docker/info.py /stable-diffusion/static/dream_web/index.html && chmod +x /docker/mount.sh
+ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
 WORKDIR /stable-diffusion
 EXPOSE 7860
-CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
-  ln -sf /cache/models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
-  python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
+# run, -u to not buffer stdout / stderr
+CMD /docker/mount.sh && python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}

View File

@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```

services/lstein/info.py Normal file
View File

@@ -0,0 +1,10 @@
import sys
from pathlib import Path

file = Path(sys.argv[1])
file.write_text(
    file.read_text()\
    .replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
""", 1)
)

services/lstein/mount.sh Executable file
View File

@@ -0,0 +1,26 @@
#!/bin/bash
set -eu
ROOT=/stable-diffusion
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
if test -f /cache/models/GFPGANv1.3.pth; then
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.3.pth"
echo "Mounted GFPGANv1.3.pth"
fi
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.8/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py
fi
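
One note on the last block: as written, `if "${PRELOAD}" == "true"; then` runs the value of `PRELOAD` as a command with `==` and `true` as its arguments, which only behaves as intended because the values used here happen to be the shell builtins `true` and `false`. A conventional string comparison would look like the following (a suggested cleanup, not what this commit ships):

```
if [[ "${PRELOAD}" == "true" ]]; then
  python3 -u scripts/preload_models.py
fi
```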