16 Commits
5.0.1 ... 5.0.3

Author SHA1 Message Date
AbdBarho
2efaeb41cd Add styles.csv support (#440)
Follow-up to #386 after
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9334 was
merged.

Closes #435
2023-05-04 07:48:19 +02:00
AbdBarho
56b942237e Update Auto (#379)
Last version before PyTorch 2.


5ab7f213be
2023-05-04 07:29:49 +02:00
AbdBarho
7b8bc3d74a LyCORIS - ModelScope (#439)
Follow-up to #401

Closes #401
Closes #437

---------

Co-authored-by: svupper <56261963+svupper@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-10-123-2-162.eu-west-3.compute.internal>
2023-05-04 06:55:01 +02:00
divens
445f3f8bac Add tty to comfy service (#429)
Closes issue #428

Co-authored-by: Dylan Ivens <12586504+divens@users.noreply.github.com>
2023-04-28 19:55:06 +02:00
LEv145
076b5747d3 Fix file permissions (#425)
https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/424

Co-authored-by: LEv145 <you@example.com>
2023-04-28 15:42:12 +02:00
PassiveLemon
2a0de025e2 Support for ComfyUI (#384)
As discussed in Discussion
[#367](https://github.com/AbdBarho/stable-diffusion-webui-docker/discussions/367),
this adds support for the newer ComfyUI. I forked an existing fork that
already added this, but its maintainer hadn't implemented the changes
needed to get the output function working properly, so I did that here.
I believe everything is functional, though I have not tested every
single node.

I changed the table format of the README and a few other minor things
for aesthetic reasons, but if you want me to revert those, I will.
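
For anyone who wants to try it: with the `comfy` and `comfy-cpu` profiles
added in the compose file (see the diff below), the new UI starts like the
other services. The commands below are a sketch of the usual workflow, not
part of this PR:

```bash
# GPU variant
docker compose --profile comfy up --build
# CPU-only variant added in the same PR
docker compose --profile comfy-cpu up --build
```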

---------

Co-authored-by: Jonathan Kovacs <jkovacs-dev@users.noreply.github.com>
Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-04-21 21:34:17 +02:00
AbdBarho
10c16e1971 Refactor invoke (#405)
Fixes a problem with the cross-attention class missing from diffusers.

Models are now taken from the Hugging Face cache.


50eb02f68b
2023-04-16 10:56:27 +02:00
AJ Walter
555c26b7ce Make Dockerfiles OCI compliant (#408)
## Justification

Closes issue #352

This update makes the Dockerfiles OCI compliant, making it easier to use
Buildah or other image-building tools that require OCI compliance.
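
As a rough sketch of what this enables (the image tag below is
illustrative), an OCI build of one of the services with Buildah would look
something like:

```bash
# build the AUTOMATIC1111 service image in OCI format
buildah bud --format oci -t sd-auto:local ./services/AUTOMATIC1111
```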

## Implementation

This changes a few things, listed below:

* auto: The download container is switched to Alpine. The `git` container
specified the `/git` directory as a volume, so all files under `/git`
were lost after each script invocation. Alpine is used later in the
build process anyway, so switching to it shouldn't add any extra cost.
* auto: A "new" clone.sh script is copied into the container; it is
basically just the clone script that was previously embedded in the
Dockerfile.
* all: `<<EOF` heredoc blocks have been switched to `&& \` chains (see
the sketch after this list).
* all: I added NVIDIA_DRIVER_CAPABILITIES and NVIDIA_VISIBLE_DEVICES to
expose my Nvidia card. This is most likely an SELinux/Podman problem, but
adding them shouldn't change anything with Docker.
* docker-compose: I added SELinux labeling. I tested this with real
Docker (not just Podman!) and it seems to work fine, though I suggest
you try it too.
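
A minimal sketch of the heredoc-to-`&& \` conversion mentioned in the
list above (the commands are illustrative):

```Dockerfile
# before: BuildKit heredoc form (needs the dockerfile:1 syntax directive)
RUN <<EOF
apt-get update
apt-get install -y git
apt-get clean
EOF

# after: plain shell chaining, which builders without heredoc support also understand
RUN apt-get update && \
    apt-get install -y git && \
    apt-get clean
```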

## Testing

Builds locally with Buildah.

Note: for caching to work properly on SELinux systems, you still need to
replace `/root/.cache/pip` with `/root/.cache/pip,Z`.
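
A hedged sketch of what that looks like in a Dockerfile (the requirements
file name is taken from the AUTOMATIC1111 service):

```Dockerfile
# the trailing ,Z asks the builder to relabel the cache dir for SELinux
RUN --mount=type=cache,target=/root/.cache/pip,Z \
    pip install -r requirements_versions.txt
```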

Note: I was having some trouble running invoke. I thought it was this
PR, but it's a known issue; see
https://github.com/invoke-ai/InvokeAI/issues/3182

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-04-16 10:32:03 +02:00
Simon Oelerich
5d379bf7bc Add mounts for openpose (#387)
After enabling the ControlNet addon from
https://github.com/AbdBarho/stable-diffusion-webui-docker/pull/385, one
might want to use the `openpose` preprocessors. These are downloaded by
the addon the first time they are used. Without proper mounts, those
networks would be re-downloaded after every container start.
This PR adds those mounts to reduce data traffic.
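
For reference, the persistent mapping this adds to the AUTOMATIC1111
entrypoint (visible in the diff further down) is:

```bash
# downloaded openpose networks now land in /data and survive container restarts
MOUNTS["${ROOT}/models/openpose"]="/data/openpose"
```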
2023-04-05 19:09:07 +02:00
Simon Oelerich
d2c1e551d7 Enable ControlNet mounts for AUTOMATIC1111 (#385)
The ControlNet addon
[sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
requires the `data/ControlNet` folder to be mounted into
`models/ControlNet`.
This PR enables said mount and adds the ControlNet folder to the
`.gitignore` file.
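
Usage-wise, this means ControlNet models dropped into the shared data
folder on the host show up inside the container under
`models/ControlNet`. A sketch (the model file name is just an example):

```bash
mkdir -p data/ControlNet
cp ~/Downloads/control_sd15_openpose.pth data/ControlNet/
```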

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-04-04 18:55:14 +02:00
AbdBarho
063665eae1 Update Auto (#365)
a9fed7c364
2023-03-14 18:30:08 +01:00
AbdBarho
bb54e89b34 Update Auto (#363)
27e319dc4f
2023-03-11 22:35:11 +01:00
AbdBarho
aa69f11230 Fix preload for Invoke (#346)
Refs #345
2023-02-27 19:59:36 +01:00
AbdBarho
c54e26348e Update Invoke (#343)
6e0c6d9cc9
2023-02-26 10:53:57 +01:00
AbdBarho
b36de9ef2b Add libgoogle-perftools-dev (#341)
- auto:
0cc0ee1bcb

Closes #326
2023-02-23 21:50:16 +01:00
AbdBarho
70d8d7f37f Update versions (#338)
- auto:
076d624a29
- invoke:
d3c1b747ee
2023-02-19 16:25:06 +01:00
15 changed files with 234 additions and 138 deletions

View File

@@ -11,3 +11,4 @@ Closes issue #
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- sygil: https://github.com/Sygil-Dev/sygil-webui/commit/
- invoke: https://github.com/invoke-ai/InvokeAI/commit/
- comfy: https://github.com/comfyanonymous/ComfyUI/commit/

View File

@@ -16,6 +16,7 @@ jobs:
- auto
- sygil
- invoke
- comfy
- download
runs-on: ubuntu-latest
name: ${{ matrix.profile }}

View File

@@ -34,6 +34,14 @@ This repository provides multiple UIs for you to play around with stable diffusi
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541298-f902b021-a1eb-4e4b-b2eb-b6a696a8ec80.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541295-7d7f2162-2189-4e0a-abbd-703f4779e1cd.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541294-aa7f7735-a973-4e17-ada0-1fe3acbb1772.jpg) |
### [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
[Full feature list here](https://github.com/comfyanonymous/ComfyUI#features), Screenshot:
| Workflow |
| -------------------------------------------------------------------------------- |
| ![](https://github.com/comfyanonymous/ComfyUI/raw/master/comfyui_screenshot.png) |
## Contributing
Contributions are welcome! **Create a discussion first of what the problem is and what you want to contribute (before you implement anything)**
@@ -51,5 +59,6 @@ Special thanks to everyone behind these awesome projects, without them, none of
- [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
- [InvokeAI](https://github.com/invoke-ai/InvokeAI)
- [Sygil-webui](https://github.com/Sygil-Dev/sygil-webui)
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
- [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- and many many more.

data/.gitignore vendored
View File

@@ -20,3 +20,7 @@
/VAE
/embeddings
/Lora
/ControlNet
/openpose
/ModelScope
/LyCORIS

View File

@@ -28,7 +28,7 @@ services:
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
image: sd-auto:45
image: sd-auto:54
environment:
- CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
@@ -43,11 +43,10 @@ services:
<<: *base_service
profiles: ["invoke"]
build: ./services/invoke/
image: sd-invoke:23
image: sd-invoke:27
environment:
- PRELOAD=true
- CLI_ARGS=
- CLI_ARGS=--no-nsfw_checker --no-safety_checker --xformers
sygil: &sygil
<<: *base_service
@@ -63,3 +62,20 @@ services:
profiles: ["sygil-sl"]
environment:
- USE_STREAMLIT=1
comfy: &comfy
<<: *base_service
profiles: ["comfy"]
build: ./services/comfy/
image: sd-comfy:1
tty: true
environment:
- CLI_ARGS=
comfy-cpu:
<<: *comfy
profiles: ["comfy-cpu"]
deploy: {}
environment:
- CLI_ARGS=--cpu

View File

@@ -1,14 +1,6 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
SHELL ["/bin/sh", "-ceuxo", "pipefail"]
RUN <<EOF
cat <<'EOE' > /clone.sh
mkdir -p repositories/"$1" && cd repositories/"$1" && git init && git remote add origin "$2" && git fetch origin "$3" --depth=1 && git reset --hard "$3" && rm -rf .git
EOE
EOF
COPY clone.sh /clone.sh
RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
&& rm -rf data assets **/*.ipynb
@@ -30,21 +22,20 @@ RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diff
FROM python:3.10.9-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
RUN PIP_NO_CACHE_DIR=1 pip install torch==1.13.1+cu117 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.1+cu117 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
# RUN --mount=type=cache,target=/root/.cache/pip pip install torch==2.0.0+cu118 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
RUN apt-get update && apt install fonts-dejavu-core rsync git jq moreutils -y && apt-get clean
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard d7aec59c4eb02f723b3d55c6f927a42e97acd679
pip install -r requirements_versions.txt
EOF
RUN --mount=type=cache,target=/root/.cache/pip \
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
cd stable-diffusion-webui && \
git reset --hard d7aec59c4eb02f723b3d55c6f927a42e97acd679 && \
pip install -r requirements_versions.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.15-cp310-cp310-linux_x86_64.whl \
@@ -53,7 +44,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
ENV ROOT=/stable-diffusion-webui
COPY --from=download /git/ ${ROOT}
COPY --from=download /repositories/ ${ROOT}/repositories/
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt
@@ -67,26 +58,31 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
cd stable-diffusion-webui
git fetch
git reset --hard ${SHA}
pip install -r requirements_versions.txt
EOF
# TODO: either remove if fixed in A1111 (unlikely) or move to the top with other apt stuff
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so
ARG SHA=5ab7f213bec2f816f9c5644becb32eb72c8ffb89
RUN --mount=type=cache,target=/root/.cache/pip \
cd stable-diffusion-webui && \
git fetch && \
git reset --hard ${SHA} && \
pip install -r requirements_versions.txt
RUN --mount=type=cache,target=/root/.cache/pip pip install -U opencv-python-headless
COPY . /docker
RUN <<EOF
python3 /docker/info.py ${ROOT}/modules/ui.py
mv ${ROOT}/style.css ${ROOT}/user.css
# one of the ugliest hacks I ever wrote
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py
EOF
RUN \
python3 /docker/info.py ${ROOT}/modules/ui.py && \
mv ${ROOT}/style.css ${ROOT}/user.css && \
# one of the ugliest hacks I ever wrote \
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py && \
git config --global --add safe.directory '*'
WORKDIR ${ROOT}
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]

View File

@@ -0,0 +1,11 @@
#!/bin/bash
set -Eeuox pipefail
mkdir -p /repositories/"$1"
cd /repositories/"$1"
git init
git remote add origin "$2"
git fetch origin "$3" --depth=1
git reset --hard "$3"
rm -rf .git

View File

@@ -15,6 +15,10 @@ if [ ! -f /data/config/auto/ui-config.json ]; then
echo '{}' >/data/config/auto/ui-config.json
fi
if [ ! -f /data/config/auto/styles.csv ]; then
touch /data/config/auto/styles.csv
fi
declare -A MOUNTS
MOUNTS["/root/.cache"]="/data/.cache"
@@ -35,10 +39,15 @@ MOUNTS["${ROOT}/models/torch_deepdanbooru"]="/data/Deepdanbooru"
MOUNTS["${ROOT}/models/BLIP"]="/data/BLIP"
MOUNTS["${ROOT}/models/midas"]="/data/MiDaS"
MOUNTS["${ROOT}/models/Lora"]="/data/Lora"
MOUNTS["${ROOT}/models/LyCORIS"]="/data/LyCORIS"
MOUNTS["${ROOT}/models/ControlNet"]="/data/ControlNet"
MOUNTS["${ROOT}/models/openpose"]="/data/openpose"
MOUNTS["${ROOT}/models/ModelScope"]="/data/ModelScope"
MOUNTS["${ROOT}/embeddings"]="/data/embeddings"
MOUNTS["${ROOT}/config.json"]="/data/config/auto/config.json"
MOUNTS["${ROOT}/ui-config.json"]="/data/config/auto/ui-config.json"
MOUNTS["${ROOT}/styles.csv"]="/data/config/auto/styles.csv"
MOUNTS["${ROOT}/extensions"]="/data/config/auto/extensions"
# extra hacks

services/comfy/Dockerfile Normal file
View File

@@ -0,0 +1,44 @@
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/5.0.0/xformers-0.0.17.dev449-cp310-cp310-manylinux2014_x86_64.whl'
FROM python:3.10.9-slim
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.1 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
RUN apt-get update && apt-get install -y git && apt-get clean
ENV ROOT=/stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip \
git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
cd ${ROOT} && \
git checkout master && \
git reset --hard 884ea653c8d6fe19b3724f45a04a0d74cd881f2f && \
pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.17-cp310-cp310-linux_x86_64.whl \
pip install triton /xformers-0.0.17-cp310-cp310-linux_x86_64.whl
WORKDIR ${ROOT}
ARG BRANCH=master SHA=884ea653c8d6fe19b3724f45a04a0d74cd881f2f
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -r requirements.txt
# add info
COPY . /docker/
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u main.py --listen --port 7860 ${CLI_ARGS}

services/comfy/entrypoint.sh Executable file
View File

@@ -0,0 +1,47 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
mkdir -vp /data/config/comfy/
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${ROOT}/models/checkpoints"]="/data/StableDiffusion"
MOUNTS["${ROOT}/models/controlnet"]="/data/ControlNet"
MOUNTS["${ROOT}/models/upscale_models/RealESRGAN"]="/data/RealESRGAN"
MOUNTS["${ROOT}/models/upscale_models/GFPGAN"]="/data/GFPGAN"
MOUNTS["${ROOT}/models/upscale_models/SwinIR"]="/data/SwinIR"
MOUNTS["${ROOT}/models/vae"]="/data/VAE"
# data
MOUNTS["${ROOT}/models/loras"]="/data/Lora"
MOUNTS["${ROOT}/models/embeddings"]="/data/embeddings"
# config
# TODO: I am not sure if this is final, maybe it should change in the future
MOUNTS["${ROOT}/models/clip"]="/data/.cache/comfy/clip"
MOUNTS["${ROOT}/models/clip_vision"]="/data/.cache/comfy/clip_vision"
MOUNTS["${ROOT}/models/custom_nodes"]="/data/config/comfy/custom_nodes"
MOUNTS["${ROOT}/models/style_models"]="/data/config/comfy/style_models"
MOUNTS["${ROOT}/models/t2i_adapter"]="/data/config/comfy/t2i_adapter"
# output
MOUNTS["${ROOT}/output"]="/output/comfy"
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
if [ ! -f "$from_path" ]; then
mkdir -vp "$from_path"
fi
mkdir -vp "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
exec "$@"

View File

@@ -3,7 +3,7 @@
set -Eeuo pipefail
# TODO: maybe just use the .gitignore file to create all of these
mkdir -vp /data/.cache /data/StableDiffusion /data/Codeformer /data/GFPGAN /data/ESRGAN /data/BSRGAN /data/RealESRGAN /data/SwinIR /data/LDSR /data/ScuNET /data/embeddings /data/VAE /data/Deepdanbooru /data/MiDaS /data/Lora
mkdir -vp /data/.cache /data/StableDiffusion /data/LyCORIS /data/Codeformer /data/ModelScope /data/GFPGAN /data/ESRGAN /data/BSRGAN /data/RealESRGAN /data/SwinIR /data/LDSR /data/ScuNET /data/embeddings /data/VAE /data/Deepdanbooru /data/MiDaS /data/Lora /data/ControlNet /data/openpose
echo "Downloading, this might take a while..."

View File

@@ -1,5 +1,3 @@
# syntax=docker/dockerfile:1
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/5.0.0/xformers-0.0.17.dev449-cp310-cp310-manylinux2014_x86_64.whl'
@@ -7,66 +5,54 @@ RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diff
FROM python:3.10-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1
RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.1+cu117 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
RUN apt-get update && apt-get install git -y && apt-get clean
RUN git clone https://github.com/invoke-ai/InvokeAI.git /stable-diffusion
WORKDIR /stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git reset --hard f232068ab89bd80e4f5f3133dcdb62ea78f1d0f7
git config --global http.postBuffer 1048576000
egrep -v '^-e .' environments-and-requirements/requirements-lin-cuda.txt > req.txt
pip install -r req.txt
rm req.txt
EOF
# patch match:
# https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md
RUN <<EOF
apt-get update
# apt-get install build-essential python3-opencv libopencv-dev -y
apt-get install make g++ libopencv-dev -y
apt-get clean
cd /usr/lib/x86_64-linux-gnu/pkgconfig/
ln -sf opencv4.pc opencv.pc
EOF
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && \
apt-get install make g++ git libopencv-dev -y && \
apt-get clean && \
cd /usr/lib/x86_64-linux-gnu/pkgconfig/ && \
ln -sf opencv4.pc opencv.pc
ARG BRANCH=main SHA=6551527fe249dc7a44e3fab9db9451c0dc3ad851
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git fetch
git reset --hard
git checkout ${BRANCH}
git reset --hard ${SHA}
pip install .
# egrep -v '^-e .' environments-and-requirements/requirements-lin-cuda.txt > req.txt
# pip install -r req.txt
# rm req.txt
EOF
ENV ROOT=/InvokeAI
RUN git clone https://github.com/invoke-ai/InvokeAI.git ${ROOT}
WORKDIR ${ROOT}
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.15-cp310-cp310-linux_x86_64.whl \
pip install -U opencv-python-headless huggingface_hub triton /xformers-0.0.15-cp310-cp310-linux_x86_64.whl && \
git reset --hard 4463124bddd221c333d4c70e73aa2949ad35453d && \
pip install .
ARG BRANCH=main SHA=50eb02f68be912276a9c106d5e8038a5671a0386
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git reset --hard && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -U .
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.17-cp310-cp310-linux_x86_64.whl \
pip install -U opencv-python-headless triton /xformers-0.0.17-cp310-cp310-linux_x86_64.whl && \
python3 -c "from patchmatch import patch_match"
RUN touch invokeai.init
COPY . /docker/
# mkdir configs && cp invokeai/configs/INITIAL_MODELS.yaml configs/models.yaml
ENV PYTHONUNBUFFERED=1 ROOT=/stable-diffusion PYTHONPATH="${PYTHONPATH}:${ROOT}" PRELOAD=false CLI_ARGS="" HF_HOME=/root/.cache/huggingface
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONUNBUFFERED=1 PRELOAD=false HF_HOME=/root/.cache/huggingface CONFIG_DIR=/data/config/invoke CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD invokeai --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir ${ROOT} --outdir /output/invoke ${CLI_ARGS}
CMD invokeai --web --host 0.0.0.0 --port 7860 --root_dir ${ROOT} --config ${CONFIG_DIR}/models.yaml --outdir /output/invoke ${CLI_ARGS}
# TODO: make sure the config is persisted between sessions

View File

@@ -4,25 +4,25 @@ set -Eeuo pipefail
declare -A MOUNTS
mkdir -p ${CONFIG_DIR}
# cache
MOUNTS["/root/.cache"]=/data/.cache/
# this is really just a hack to avoid migrations
rm -rf ${HF_HOME}/diffusers
# ui specific
MOUNTS["${ROOT}/models/codeformer"]=/data/Codeformer/
MOUNTS["${ROOT}/models/gfpgan/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${ROOT}/models/gfpgan/weights"]=/data/.cache/
MOUNTS["${ROOT}/models/gfpgan/weights"]=/data/GFPGAN/
MOUNTS["${ROOT}/models/realesrgan"]=/data/RealESRGAN/
MOUNTS["${ROOT}/models/bert-base-uncased"]=/data/.cache/huggingface/transformers/
MOUNTS["${ROOT}/models/openai/clip-vit-large-patch14"]=/data/.cache/huggingface/transformers/
MOUNTS["${ROOT}/models/CompVis/stable-diffusion-safety-checker"]=/data/.cache/huggingface/transformers/
MOUNTS["${ROOT}/models/ldm"]=/data/.cache/invoke/ldm/
MOUNTS["${ROOT}/embeddings"]=/data/embeddings/
# hacks
MOUNTS["${ROOT}/models/clipseg"]=/data/.cache/invoke/clipseg/
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
@@ -38,9 +38,10 @@ for to_path in "${!MOUNTS[@]}"; do
echo Mounted $(basename "${from_path}")
done
# if "${PRELOAD}" == "true"; then
# set -Eeuo pipefail
# python3 -u scripts/preload_models.py --skip-sd-weights --root ${ROOT} --config_file /docker/models.yaml
# fi
if "${PRELOAD}" == "true"; then
set -Eeuo pipefail
invokeai-configure --root ${ROOT} --yes
cp ${ROOT}/configs/models.yaml ${CONFIG_DIR}/models.yaml
fi
exec "$@"

View File

@@ -1,23 +0,0 @@
# This file describes the alternative machine learning models
# available to InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
description: Stable Diffusion version 1.5
weights: /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
vae: /data/VAE/vae-ft-mse-840000-ema-pruned.ckpt
config: /stable-diffusion/invokeai/configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
default: true
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting
weights: /data/StableDiffusion/sd-v1-5-inpainting.ckpt
vae: /data/VAE/vae-ft-mse-840000-ema-pruned.ckpt
config: /stable-diffusion/invokeai/configs/stable-diffusion/v1-inpainting-inference.yaml
width: 512
height: 512
default: false

View File

@@ -1,45 +1,39 @@
# syntax=docker/dockerfile:1
FROM python:3.8-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
RUN --mount=type=cache,target=/root/.cache/pip pip install torch==1.13.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
RUN apt-get update && apt install gcc libsndfile1 ffmpeg build-essential zip unzip git -y && apt-get clean
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git config --global http.postBuffer 1048576000
git clone https://github.com/Sygil-Dev/sygil-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 5291437085bddd16d752f811b6552419a2044d12
pip install -r requirements.txt
EOF
RUN --mount=type=cache,target=/root/.cache/pip \
git config --global http.postBuffer 1048576000 && \
git clone https://github.com/Sygil-Dev/sygil-webui.git stable-diffusion && \
cd stable-diffusion && \
git reset --hard 5291437085bddd16d752f811b6552419a2044d12 && \
pip install -r requirements.txt
ARG BRANCH=master SHA=571fb897edd58b714bb385dfaa1ad59aecef8bc7
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${SHA}
pip install -r requirements.txt
EOF
RUN --mount=type=cache,target=/root/.cache/pip \
cd stable-diffusion && \
git fetch && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip pip install -U 'transformers>=4.24'
# add info
COPY . /docker/
RUN <<EOF
python /docker/info.py /stable-diffusion/frontend/frontend.py
chmod +x /docker/mount.sh /docker/run.sh
# streamlit
sed -i -- 's/8501/7860/g' /stable-diffusion/.streamlit/config.toml
EOF
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && \
chmod +x /docker/mount.sh /docker/run.sh && \
# streamlit \
sed -i -- 's/8501/7860/g' /stable-diffusion/.streamlit/config.toml
WORKDIR /stable-diffusion
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" STREAMLIT_SERVER_HEADLESS=true USE_STREAMLIT=0 CLI_ARGS=""
EXPOSE 7860