9 Commits
1.2.0 ... 2.0.1

Author SHA1 Message Date
AbdBarho
710280c7ab Update versions (#120)
- auto:
2995107fa2
  - More samplers
  - Textual inversion training
- hlky:
1a9c053cb7
  - Build times are SLOW
- lstein:
4f247a3672
  - Prepare for 2.0 release
  - very cool new UI!
2022-10-07 09:46:07 +02:00
AbdBarho
e1e03229fd Update versions (#116)
- auto:
1eb588cbf1
- hlky:
1e7bdfe3f3
2022-10-04 19:56:38 +02:00
AbdBarho
79868d88e8 Fix chmod on non-existing dir (#113)
closes #112
2022-10-02 09:25:31 +02:00
AbdBarho
6f5eef42a7 Fix typo (#111)
Closes #110
2022-10-01 19:59:54 +02:00
AbdBarho
14c4b36aff v2 (#108)
### Update versions
- auto:
3f417566b0

### Breaking changes:
* renamed the `automatic-1111` service to `auto` (see the sketch after this list)
* the `cache` folder is now deprecated, replaced with `data` (see
migration guide below)
* `embeddings` folder has been moved to `data/embeddings`
* use GFPGAN 1.4
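
For reference, a minimal sketch of starting the renamed service, assuming the `auto` / `auto-cpu` profile names that appear in the `docker-compose.yml` diff below:

```bash
# The WebUI service is now named "auto"; it is selected via its profile:
docker compose --profile auto up --build
# CPU-only variant:
docker compose --profile auto-cpu up --build
```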

### Migration Guide

Note: in theory, running the command 
```
docker compose --profile download up --build
```
is all you need to use the new version; however, this means you will
also have to download everything again. A new script is available under
`scripts/migratev1tov2.sh` that copies the models into the new structure
and should get you most of the way. Run
```bash
./scripts/migratev1tov2.sh
```
Alternatively, you can inspect the script and copy the files manually.

After that, run
```
docker compose --profile download up --build
```
to validate everything.
2022-10-01 12:57:53 +02:00
AbdBarho
28f171e64d Update / Disable lstein Temporarily (#106)
- auto:
f80c3696f6
- model merger now works! the resulting model is saved in
`cache/custom-models`
- hlky:
aaa3be16e0
- lstein:
8c9f2ae705
- This UI has been temporarily disabled due to a limitation in the output
path:
8c9f2ae705/backend/modules/create_cmd_parser.py (L26)
2022-09-30 09:37:27 +02:00
AbdBarho
9af4a23ec4 Stalebot: don't ignore updates 2022-09-29 12:04:40 +02:00
AbdBarho
24ecd676ab Update versions (#104)
- auto:
15f333a266
  - Checkpoint merger NOT WORKING!!!
- hlky:
7bd785d28f
  - Streamlit UI still unstable and clunky
2022-09-28 10:18:07 +02:00
Sebastian Piechowiak
ef36c50cf9 Docker compose .gitignore update (#100)
Docker Compose allows overriding some settings in `docker-compose.yml` by
using an additional file: `docker-compose.override.yml`.
This makes it possible to keep your own settings in the override file,
which does not conflict with updates made by pulling a newer version
with `git pull`.

This feature requires three things:
1. Create a `docker-compose.override.yml-dist` file that is distributed
inside the repo. It can be copied to `docker-compose.override.yml` and
modified for your own needs (a workflow sketch follows this list).
2. Change the `.gitignore` file so that `docker-compose.override.yml` is
ignored and `git pull` / `git commit` will not complain about it.
3. Update the wiki entry about setup to mention this method.
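
A hypothetical workflow sketch (the contents of the `docker-compose.override.yml-dist` template are not shown in this change, so the commands below only illustrate the idea):

```bash
# Copy the distributed template and keep personal settings in the copy;
# after this change, git ignores docker-compose.override.yml.
cp docker-compose.override.yml-dist docker-compose.override.yml
# Edit the copy (e.g. ports or CLI_ARGS), then update and run as usual:
git pull                                   # no longer conflicts with local overrides
docker compose --profile auto up --build
```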

Closes #101
2022-09-28 08:36:53 +02:00
19 changed files with 168 additions and 126 deletions


@@ -18,4 +18,3 @@ jobs:
days-before-pr-stale: 14
days-before-issue-close: 7
days-before-pr-close: 7
ignore-updates: true

.gitignore vendored (3 changes)

@@ -1,3 +1,2 @@
/dev
/.devcontainer
embeddings/*
/docker-compose.override.yml


@@ -40,6 +40,7 @@ Screenshots:
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541298-f902b021-a1eb-4e4b-b2eb-b6a696a8ec80.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541295-7d7f2162-2189-4e0a-abbd-703f4779e1cd.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541294-aa7f7735-a973-4e17-ada0-1fe3acbb1772.jpg) |
<!--
### lstein
[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
@@ -47,6 +48,7 @@ Screenshots:
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
-->
## Setup & Usage

cache/.gitignore vendored (5 changes)

@@ -1,5 +0,0 @@
/torch
/transformers
/weights
/models
/custom-models

data/.gitignore vendored (new file, 14 changes)

@@ -0,0 +1,14 @@
# for all of the stuff downloaded by transformers, pytorch, and others
/.cache
# for all stable diffusion models (main, waifu diffusion, etc..)
/StableDiffusion
# others
/Codeformer
/GFPGAN
/ESRGAN
/BSRGAN
/RealESRGAN
/SwinIR
/ScuNET
/LDSR
/embeddings


@@ -4,7 +4,7 @@ x-base_service: &base_service
ports:
- "7860:7860"
volumes:
- &v1 ./cache:/cache
- &v1 ./data:/data
- &v2 ./output:/output
deploy:
resources:
@@ -30,7 +30,7 @@ services:
environment:
- CLI_ARGS=--optimized-turbo
automatic1111: &automatic
auto: &automatic
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
@@ -38,11 +38,10 @@ services:
- *v1
- *v2
- ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
- ./embeddings:/stable-diffusion-webui/embeddings
environment:
- CLI_ARGS=--allow-code --medvram
automatic1111-cpu:
auto-cpu:
<<: *automatic
profiles: ["auto-cpu"]
deploy: {}

scripts/migratev1tov2.sh (new executable file, 30 changes)

@@ -0,0 +1,30 @@
mkdir -p data/.cache data/StableDiffusion data/Codeformer data/GFPGAN data/ESRGAN data/BSRGAN data/RealESRGAN data/SwinIR data/LDSR data/embeddings
cp -vf cache/models/model.ckpt data/StableDiffusion/model.ckpt
cp -vf cache/models/LDSR.ckpt data/LDSR/model.ckpt
cp -vf cache/models/LDSR.yaml data/LDSR/project.yaml
cp -vf cache/models/RealESRGAN_x4plus.pth data/RealESRGAN/
cp -vf cache/models/RealESRGAN_x4plus_anime_6B.pth data/RealESRGAN/
cp -vrf cache/torch data/.cache/
mkdir -p data/.cache/huggingface/transformers/
cp -vrf cache/transformers/* data/.cache/huggingface/transformers/
cp -v cache/custom-models/* data/StableDiffusion/
mkdir -p data/.cache/clip/
cp -vf cache/weights/ViT-L-14.pt data/.cache/clip/
cp -vf cache/weights/codeformer.pth data/Codeformer/codeformer-v0.1.0.pth
cp -vf cache/weights/detection_Resnet50_Final.pth data/.cache/
cp -vf cache/weights/parsing_parsenet.pth data/.cache/
cp -v embeddings/* data/embeddings/
echo this script was created 10/2022
echo Dont forget to run: docker compose --profile download up --build
echo the cache and embeddings folders can be deleted, but its not necessary.


@@ -17,6 +17,8 @@ git reset --hard 24268930bf1dce879235a7fddd0b2355b84d7ea6
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
EOF
RUN git clone https://github.com/crowsonkb/k-diffusion.git repositories/k-diffusion && cd repositories/k-diffusion && git reset --hard f4e99857772fc3a126ba886aadf795a332774878
FROM continuumio/miniconda3:4.12.0
@@ -33,8 +35,8 @@ RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard 7e77938230d4fefb6edccdba0b80b61d8416673e
pip install --prefer-binary --no-cache-dir -r requirements.txt
git reset --hard 1eb588cbf19924333b88beaa1ac0041904966640
pip install --prefer-binary --no-cache-dir -r requirements_versions.txt
EOF
ENV ROOT=/stable-diffusion-webui \
@@ -47,29 +49,29 @@ RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeForme
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=ca3e5519e8b6dc020c5e7ae508738afb5dc6f3ec
ARG SHA=2995107fa24cfd72b0a991e18271dcde148c2807
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase
git reset --hard ${SHA}
pip install --prefer-binary --no-cache-dir -r requirements.txt
pip install --prefer-binary --no-cache-dir -r requirements_versions.txt
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless
RUN pip install --prefer-binary --no-cache-dir opencv-python-headless \
git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
COPY . /docker
RUN <<EOF
chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
# hackiest of hacks, change default cache dir of clip #88
# https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/clip/clip.py#L94
sed -i -- 's/download_root: str = None/download_root: str = "\/cache\/weights"/' /opt/conda/lib/python3.8/site-packages/clip/clip.py
EOF
ENV CLI_ARGS=""
WORKDIR ${WORKDIR}
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir /cache/custom-models --ckpt /cache/models/model.ckpt --gfpgan-model /cache/models/GFPGANv1.3.pth ${CLI_ARGS}
python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir ${ROOT}/models/Stable-diffusion ${CLI_ARGS}


@@ -1,36 +1,33 @@
#!/bin/bash
set -e
set -Eeuo pipefail
declare -A MODELS
declare -A MOUNTS
MODELS["${ROOT}/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS["${WORKDIR}/repositories/latent-diffusion/experiments/pretrained_models/model.chkpt"]=LDSR.ckpt
MODELS["${WORKDIR}/repositories/latent-diffusion/experiments/pretrained_models/project.yaml"]=LDSR.yaml
MOUNTS["/root/.cache"]="/data/.cache"
MODELS_DIR=/cache/models
# main
MOUNTS["${ROOT}/models/Stable-diffusion"]="/data/StableDiffusion"
MOUNTS["${ROOT}/models/Codeformer"]="/data/Codeformer"
MOUNTS["${ROOT}/models/GFPGAN"]="/data/GFPGAN"
MOUNTS["${ROOT}/models/ESRGAN"]="/data/ESRGAN"
MOUNTS["${ROOT}/models/BSRGAN"]="/data/BSRGAN"
MOUNTS["${ROOT}/models/RealESRGAN"]="/data/RealESRGAN"
MOUNTS["${ROOT}/models/SwinIR"]="/data/SwinIR"
MOUNTS["${ROOT}/models/ScuNET"]="/data/ScuNET"
MOUNTS["${ROOT}/models/LDSR"]="/data/LDSR"
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
MOUNTS["${ROOT}/embeddings"]="/data/embeddings"
# extra hacks
MOUNTS["${ROOT}/repositories/CodeFormer/weights/facelib"]="/data/.cache"
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -vp "$from_path"
mkdir -vp "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
# force realesrgan cache
rm -rf /opt/conda/lib/python3.8/site-packages/realesrgan/weights
ln -s -T "${MODELS_DIR}" /opt/conda/lib/python3.8/site-packages/realesrgan/weights
# force facexlib cache
mkdir -p /cache/weights/ ${WORKDIR}/gfpgan/
ln -sf /cache/weights/ ${WORKDIR}/gfpgan/
# code former cache
rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/CodeFormer/weights/facelib
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models


@@ -1,6 +1,6 @@
fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 /cache/models/model.ckpt
c953a88f2727c85c3d9ae72e2bd4846bbaf59fe6972ad94130e23e7017524a70 /cache/models/GFPGANv1.3.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /cache/models/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /cache/models/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /cache/models/LDSR.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /cache/models/LDSR.yaml
fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 /data/StableDiffusion/model.ckpt
e2cd4703ab14f4d01fd1383a8a8b266f9a5833dacee8e6a79d3bf21a1b6be5ad /data/GFPGAN/GFPGANv1.4.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /data/RealESRGAN/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /data/RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /data/LDSR/model.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /data/LDSR/project.yaml


@@ -2,7 +2,7 @@
set -Eeuo pipefail
mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models
mkdir -p /data/.cache /data/StableDiffusion /data/Codeformer /data/GFPGAN /data/ESRGAN /data/BSRGAN /data/RealESRGAN /data/SwinIR /data/LDSR /data/ScuNET /data/embeddings
cat <<EOF
By using this software, you agree to the following licenses:
@@ -13,8 +13,12 @@ EOF
echo "Downloading, this might take a while..."
aria2c --input-file /docker/links.txt --dir /cache/models --continue
aria2c --input-file /docker/links.txt --dir /data --continue
echo "Checking SHAs..."
parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
# fix potential permissions
# TODO: need something better than this:
# chmod -R 777 /data /output


@@ -1,12 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
out=GFPGANv1.3.pth
out=StableDiffusion/model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
out=GFPGAN/GFPGANv1.4.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
out=RealESRGAN_x4plus.pth
out=RealESRGAN/RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
out=LDSR.yaml
out=LDSR/project.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
out=LDSR.ckpt
out=LDSR/model.ckpt


@@ -16,7 +16,7 @@ RUN <<EOF
git config --global http.postBuffer 1048576000
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
git reset --hard 1a9c053cb7b6832695771db2555c0adc9b41e95f
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
@@ -24,8 +24,8 @@ EOF
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
# ARG BRANCH=master SHA=d0bb60a139d60e6c2b9be4e18e0e29a86aa5af59
ARG BRANCH=dev SHA=1fd28eed1ebc3aa04b9b00e2a899f3bf07f64bdc
ARG BRANCH=master SHA=1a9c053cb7b6832695771db2555c0adc9b41e95f
# ARG BRANCH=dev SHA=1e7bdfe3f38a6dd37fc230f440ea1b0db0937240
RUN <<EOF
cd stable-diffusion
git fetch
@@ -42,9 +42,9 @@ COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
WORKDIR /stable-diffusion
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt ${CLI_ARGS}
# STREAMLIT_SERVER_PORT=7860 python -m streamlit run scripts/webui_streamlit.py --theme.base dark
python3 -u scripts/webui.py --outdir /output --ckpt /data/StableDiffusion/model.ckpt ${CLI_ARGS}
# sed -i -- 's/8501/7860/g' .streamlit/config.toml && STREAMLIT_SERVER_HEADLESS=true python -u -m streamlit run scripts/webui_streamlit.py --theme.base dark


@@ -1,33 +1,31 @@
#!/bin/bash
set -e
set -Eeuo pipefail
declare -A MODELS
declare -A MOUNTS
ROOT=/stable-diffusion/src
MODELS["${ROOT}/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth"]=RealESRGAN_x4plus.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth"]=RealESRGAN_x4plus_anime_6B.pth
MODELS["${ROOT}/latent-diffusion/experiments/pretrained_models/model.ckpt"]=LDSR.ckpt
MODELS["${ROOT}/latent-diffusion/experiments/pretrained_models/project.yaml"]=LDSR.yaml
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${PWD}/models/realesrgan"]=/data/RealESRGAN
MOUNTS["${PWD}/models/ldsr"]=/data/LDSR
MOUNTS["${PWD}/models/custom"]=/data/StableDiffusion
MODELS_DIR=/cache/models
# hack
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.3.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
# force facexlib cache
mkdir -p /cache/weights/ /stable-diffusion/gfpgan/
ln -sf /cache/weights/ /stable-diffusion/gfpgan/
# streamlit config
ln -sf /docker/userconfig_streamlit.yaml /stable-diffusion/configs/webui/userconfig_streamlit.yaml


@@ -1,7 +1,7 @@
general:
outdir: /outputs
default_model: "Stable Diffusion v1.4"
default_model_path: /cache/models/model.ckpt
default_model_path: /data/StableDiffusion/model.ckpt
outdir_txt2img: /outputs/txt2img-samples
outdir_img2img: /outputs/img2img-samples
optimized: True


@@ -12,28 +12,27 @@ RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorc
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
ENV PIP_EXISTS_ACTION=w
RUN <<EOF
git clone https://github.com/invoke-ai/InvokeAI.git stable-diffusion
cd stable-diffusion
git reset --hard a1739a73b48bfe98b6abcb67f5a0197a9ad270e0
git reset --hard 8a8be92eac17e0ef699528157596b2336bdee532
sed -i -- 's/python=3.8.5/python=3.9/g' environment.yaml
git config --global http.postBuffer 1048576000
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
ARG BRANCH=development SHA=b40bfb5116b7fc618f78a0d152005ceb46153443
# this breaks on generation:
# there is a new UI anyway, but it is not by any means ready.
# ARG BRANCH=development SHA=bdbc76fcd4bd3362312dc91b087d9af66de423b1
ARG BRANCH=development SHA=4f247a3672474bd9c46060bab6087dbf9e2531f3
RUN <<EOF
cd stable-diffusion
git fetch
git reset --hard
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yaml -n base
conda env update --file environment.yml -n base
conda clean -a -y
EOF
@@ -41,16 +40,18 @@ RUN pip uninstall opencv-python -y && pip install --prefer-binary --force-reinst
COPY . /docker/
RUN <<EOF
python3 /docker/info.py /stable-diffusion/static/dream_web/index.html
python3 /docker/info.py /stable-diffusion/frontend/dist/index.html
chmod +x /docker/mount.sh
sed -i -- 's/outputs\//\/output/g' /stable-diffusion/backend/server.py
EOF
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
ENV PRELOAD=false CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
CMD /docker/mount.sh && \
# python3 -u backend/server.py --host 0.0.0.0 --port 7860 --cors http://localhost:7860
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
# python3 -u backend/server.py --host 0.0.0.0 --port 9090
# echo The lstein webUI is currently deactivated due to implementation limitations: \
# https://github.com/invoke-ai/InvokeAI/blob/8c9f2ae705cf723d4a8a73c416e8d8bf2d746977/backend/modules/create_cmd_parser.py#L26 \
# Once the path the output is fixed, the UI will be activated again


@@ -4,7 +4,10 @@ from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
.replace(' <div id="root"></div>', """
<div id="root"></div>
<div>
Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
</div>
""", 1)
)


@@ -1,28 +1,27 @@
#!/bin/bash
set -eu
set -Eeuo pipefail
ROOT=/stable-diffusion
declare -A MOUNTS
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${PWD}/models/ldm/stable-diffusion-v1/model.ckpt"]=/data/StableDiffusion/model.ckpt
MOUNTS["${PWD}/src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/ldm/dream/restoration/codeformer/weights"]=/data/CodeFormer
# hacks
MOUNTS["/opt/conda/lib/python3.9/site-packages/facexlib/weights"]=/data/.cache
MOUNTS["/opt/conda/lib/python3.9/site-packages/realesrgan/weights"]=/data/RealESRGAN
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
# TODO: "real" GFPGANv1.4.pth
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.4.pth"
echo "Mounted GFPGANv1.3.pth"
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.9/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
REALESRGAN_WEIGHTS=/opt/conda/lib/python3.9/site-packages/realesrgan/weights
rm -rf "${REALESRGAN_WEIGHTS}"
ln -sf -T /cache/weights "${REALESRGAN_WEIGHTS}"
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py