33 Commits

Author SHA1 Message Date
lainedfles
7a6f5e1374 Merge 0f3187505b into 802d0bcd68
2024-10-29 07:39:23 +00:00
Self Denial
0f3187505b feat: Add support for Krita AI Diffusion
This commit adds support for integrating Krita AI Diffusion models and custom nodes into the Comfy service. It includes the following changes:

- Downloads recommended models for Krita AI Diffusion if enabled
- Symlinks the downloaded upscale models into the models directory
- Adds environment variables to enable/disable dependency node packs
- Clones and sets up the required Git repositories for those node packs if enabled
- Installs additional dependencies like aiohttp and tqdm if Krita AI Diffusion is used

These changes allow Comfy to act as the backend for the Krita AI Diffusion project's image generation and editing features. Users can now access advanced capabilities such as outpainting, inpainting, and upscaling from Krita, backed by the Comfy service.

The commit also improves the build process by using a cache for pip installs and specifying types for mounted volumes in Docker for better performance and reproducibility.
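A minimal usage sketch (the `comfy` profile/service name comes from this repo's compose file; `USE_KRITA` and `KRITA_DOWNLOAD_MODELS` are the switches introduced here, while the exact invocation is illustrative):

```bash
# Build the comfy image with the Krita AI Diffusion integration baked in.
docker compose --profile comfy build --build-arg USE_KRITA=true

# First start: let the entrypoint download the recommended models.
# Forwarding KRITA_DOWNLOAD_MODELS into the container is assumed to be done
# here via `docker compose run -e`; an override file would also work.
docker compose --profile comfy run --service-ports -e KRITA_DOWNLOAD_MODELS=true comfy
```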
2024-10-29 01:36:21 -06:00
Self Denial
25d8d0c008 Update, more ComfyUI Dockerfile features & bugfix
- Update base image to pytorch 2.3.1
- Fix missing comment line escape
- Support [krita-ai-diffusion](https://github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup) (`USE_KRITA=true`)
  This enables the following custom nodes (including comfyui_controlnet_aux):
  * **ComfyUI_IPAdapter_plus** (`USE_IPAPLUS=true`)
  * **comfyui-inpaint-nodes** (`USE_INPAINT=true`)
  * **comfyui-tooling-nodes** (`USE_TOOLING=true`)
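The dependency node packs can also be enabled individually; a hedged sketch (the build-arg names are the Dockerfile's, the compose call itself is illustrative):

```bash
# Enable selected node packs only; all of these build args default to false.
docker compose --profile comfy build \
  --build-arg USE_IPAPLUS=true \
  --build-arg USE_INPAINT=true \
  --build-arg USE_TOOLING=true
```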
2024-10-28 22:40:37 -06:00
Self Denial
6a0366cf45 New ComfyUI Dockerfile features
- Non-root default
- Use `git describe` to automatically use latest ComfyUI tag (or `USE_EDGE=true` variable for latest commit)
- Support custom_nodes:
  * **ComfyUI-GGUF support** (`USE_GGUF=true`)
  * **x-flux-comfyui support** (`USE_XFLUX=true`)
  * **comfyui_controlnet_aux support** (`USE_CNAUX=false`)
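A rough example of combining these switches (assuming this repo's usual compose workflow; the build args are the ones defined in the Dockerfile below):

```bash
# Track the latest ComfyUI commit instead of the latest tag, and bake in the
# GGUF and x-flux custom nodes at build time.
docker compose --profile comfy build \
  --build-arg USE_EDGE=true \
  --build-arg USE_GGUF=true \
  --build-arg USE_XFLUX=true
```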
2024-09-29 03:45:53 -06:00
AbdBarho
802d0bcd68 Remove invoke (#705)
The invoke team already maintains a docker setup for their service. This
copy here was maybe relevant two years ago when all of this started, but
I don't think it makes sense anymore.

Refer to invoke's docs to install using docker
https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
2024-06-23 11:16:21 +02:00
mohamednabiel717
b1a26b8041 Update Auto to 1.9.4 (#700)
feee37d75f

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-06-07 19:10:28 +02:00
AbdBarho
f1bf3b0943 Bump pytorch containers (#697)
Closes #696
Closes #694
2024-05-28 19:39:33 +02:00
AbdBarho
35a18b3d46 Update Comfy (#693)
276f8fce9f

Closes #676 
Closes #674

Refs #686
2024-05-20 14:44:41 +02:00
神楽坂·喵
887e49c495 Add missing assets to auto1111 (#684)
Closes #683

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-05-20 13:41:54 +02:00
Derek Palmer (Creative)
7051ce0a44 Updated docker-compose to remove obsolete version syntax (#692)
Removes the obsolete `version:` key from the `docker-compose` file. If
left in, it triggers a warning that the attribute is obsolete. Removing
it reduces unnecessary warnings and keeps the file up to current
standards.

See [Version top-level element
(obsolete)](https://docs.docker.com/compose/compose-file/04-version-and-name/#version-top-level-element-obsolete)
for reference.
2024-05-20 13:41:36 +02:00
SachiaLanlus
ac94eac2b5 Update Auto v1.9.3 (#673)
Closes issue #672

### Update versions

- auto:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.9.3
2024-05-20 13:35:07 +02:00
AbdBarho
015c2ec829 Pin xformers (for now) (#651)
Closes #648
Closes #649
2024-02-03 08:50:40 +01:00
AbdBarho
245d1d443f Update package index (#650)
Closes #622
2024-02-03 08:17:45 +01:00
Johannes Sjölund
60c4832185 Update open_clip to v2.20.0 in Auto (#617)
Fixes #615.

Updates `open-clip-torch` to the version specified in auto's
[requirements_versions.txt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/requirements_versions.txt#L18).

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:34:46 +01:00
Adam Florizone
f613639748 Update Auto v1.7.0 (#632)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.7.0

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:30:40 +01:00
simonmcnair
fbc5c359d0 Resolve memory usage situation in Auto (#620)
Fixes
https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/612

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:13:01 +01:00
sejoung kim
90affeb72a Bump Comfy (#603)
d1f3637a5a

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:04:02 +01:00
AbdBarho
3e67f559d4 Update Auto (#610)
Closes #609

4afaaf8a02
2023-11-13 21:12:07 +01:00
cococig
a2561f2659 Update automatic1111 webui base image (#601)
Update the minor version of Python in the base image for AUTOMATIC1111
web UI.

Closes issue #600
2023-11-13 19:35:24 +01:00
cloudaxes
6a34739135 Update Automatic1111 to v1.6.0 (#585)
Update Automatic1111 Stable Diffusion Webui to v1.6.0.

Closes #583 

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-09-09 16:10:05 +02:00
Sebastian Piechowiak
630980b1bf Skipping installation of requirements for disabled extensions (#582)
Closes #563
2023-09-09 15:34:06 +02:00
66li
84740598bc Update generative-models version (#581)
Upgrade a dependent library



https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.5.2/modules/launch_utils.py#L288C90-L288C130

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-08-31 20:04:32 +02:00
AbdBarho
59b9762ac7 Update Comfy (#580)
7e941f9f24
2023-08-30 20:00:48 +02:00
AbdBarho
70357bf01e Auto 1.5.2 (#579)
c9c8485bc1
2023-08-30 19:55:06 +02:00
Manuel Schmid
def76291f8 Update Automatic1111 to 1.5.1 to add compatibility for SDXL (#560)
Uses the latest release of
https://github.com/Stability-AI/generative-models
45c443b316737a4ab6e40413d7794a7f5657c19f

Tested with the official SDXL 1.0 model from
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors
and official refiner from
https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors.
VAE:
https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors
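For completeness, a hedged sketch of fetching those files into the shared data volume (the `./data/models/...` layout is assumed from this repo's defaults; adjust the paths if your setup differs):

```bash
# Download the SDXL base, refiner, and VAE into the host-side data folder.
cd data/models/Stable-diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
cd ../VAE
wget https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
```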

Closes #558
Closes #559

68f336bd99

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-07-30 15:42:32 +02:00
AbdBarho
09a0f11946 Add startup script for comfy (#552)
Closes #451

---------

Co-authored-by: PassiveLemon <lemonl3mn@protonmail.com>
2023-07-22 08:31:17 +02:00
cloudaxes
6de45b1984 Upgrade k-diffusion to Release 0.0.15 to get access to DPM++ (2M) SDE sampler. (#537)
Closes issue #536
2023-07-22 07:23:30 +02:00
AbdBarho
103e11493b Auto 1.4.0 (#507)
394ffa7b0a

Maybe bug:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11040
2023-07-02 08:15:51 +02:00
AbdBarho
95e96602f9 Bump auto
2023-06-26 21:57:45 +02:00
神楽坂·喵
37a82af4b7 Add build-essential package (#522)
Fixes the problem that some extensions need to be built from source.
Because the extension installation step was moved forward into
`entrypoint.sh` instead of `startup.sh`, we can no longer install the
required system packages before `install.py` is executed.
For example, installing the `sd-webui-roop` extension pulls in
`insightface==0.7.3`, and building that package's wheel fails because
`gcc` cannot be found.

ddc02ee1a9/requirements.txt (L1)

Therefore, since not all PyPI packages are distributed as wheels, the
packages that ship only as source need `build-essential` to build.
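In shell terms the fix boils down to something like the following sketch (the actual change is the `build-essential` addition to the apt line further down in this diff):

```bash
# Provide a C/C++ toolchain so source-only PyPI packages can compile.
apt-get update && apt-get install -y build-essential
# Example of a package that needs it: insightface builds a native extension.
pip install insightface==0.7.3
```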
2023-06-26 21:37:37 +02:00
AbdBarho
5e28222332 Allow setting port through env WEBUI_PORT (#521)
I am actually not happy with this solution; I would prefer it if it were
possible to customize the ports within `docker-compose.override.yml`.
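Usage sketch, relying on compose's environment interpolation:

```bash
# Expose the auto UI on host port 8080 instead of the default 7860.
WEBUI_PORT=8080 docker compose --profile auto up --build
```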
2023-06-25 20:33:57 +02:00
AbdBarho
6c45e0c2ef Create dirs if not exist (#520)
Closes #519
2023-06-25 20:21:41 +02:00
神楽坂·喵
6365811f35 Modify installation extension dependencies (#518)
Perform the full extension installation process instead of only
installing dependencies.
Some extensions do not include a `requirements.txt` but install their
dependencies from `install.py`, and all extensions include `install.py`,
so it is safe to use it for installing extension dependencies.
This is because extension development for AUTOMATIC1111's webui does not
require a `requirements.txt`; extensions use `install.py` to initialize
themselves.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions#installpy
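A condensed sketch of the approach (the real loop, including the later disabled-extension check, is in the `entrypoint.sh` hunk further down this diff):

```bash
# Run every extension's install.py instead of only pip-installing
# requirements.txt files.
shopt -s nullglob
for installscript in ./extensions/*/install.py; do
  PYTHONPATH=${ROOT} python "$installscript"
done
```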
2023-06-25 12:42:04 +02:00
14 changed files with 172 additions and 227 deletions

View File

@@ -19,7 +19,7 @@ assignees: ""
**Which UI**
auto or auto-cpu or invoke or sygil?
auto or auto-cpu or invoke or comfy?
**Hardware / Software**

View File

@@ -9,6 +9,5 @@ Closes issue #
### Update versions
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- sygil: https://github.com/Sygil-Dev/sygil-webui/commit/
- invoke: https://github.com/invoke-ai/InvokeAI/commit/
- comfy: https://github.com/comfyanonymous/ComfyUI/commit/

View File

@@ -14,7 +14,6 @@ jobs:
matrix:
profile:
- auto
- invoke
- comfy
- download
runs-on: ubuntu-latest

View File

@@ -18,14 +18,6 @@ This repository provides multiple UIs for you to play around with stable diffusi
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541954-46afd772-d0c8-4005-874c-e2eca40c02f2.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541956-5b528de7-1b5d-479f-a1db-d3f5a53afc59.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541957-cf78b352-a071-486d-8889-f26952779a61.jpg) |
### [InvokeAI](https://github.com/invoke-ai/InvokeAI)
[Full feature list here](https://github.com/invoke-ai/InvokeAI#features), Screenshots:
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/195158552-39f58cb6-cfcc-4141-9995-a626e3760752.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158553-152a0ab8-c0fd-4087-b121-4823bcd8d6b5.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158548-e118206e-c519-4915-85d6-4c248eb10fc0.jpg) |
### [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
[Full feature list here](https://github.com/comfyanonymous/ComfyUI#features), Screenshot:

View File

@@ -1,8 +1,6 @@
version: '3.9'
x-base_service: &base_service
ports:
- "7860:7860"
- "${WEBUI_PORT:-7860}:7860"
volumes:
- &v1 ./data:/data
- &v2 ./output:/output
@@ -29,7 +27,7 @@ services:
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
image: sd-auto:59
image: sd-auto:78
environment:
- CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
@@ -40,27 +38,11 @@ services:
environment:
- CLI_ARGS=--no-half --precision full --allow-code --enable-insecure-extension-access --api
invoke: &invoke
<<: *base_service
profiles: ["invoke"]
build: ./services/invoke/
image: sd-invoke:30
environment:
- PRELOAD=true
- CLI_ARGS=--xformers
# invoke-cpu:
# <<: *invoke
# profiles: ["invoke-cpu"]
# environment:
# - PRELOAD=true
# - CLI_ARGS=--always_use_cpu
comfy: &comfy
<<: *base_service
profiles: ["comfy"]
build: ./services/comfy/
image: sd-comfy:3
image: sd-comfy:7
environment:
- CLI_ARGS=

View File

@@ -2,26 +2,19 @@ FROM alpine/git:2.36.2 as download
COPY clone.sh /clone.sh
RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
&& rm -rf data assets **/*.ipynb
RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git 6f7db241d2f8ba7457bac5ca9753331f0c266917
RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git 47b6b607fdd31875c9279cd2f4f16b92e4ea958e \
RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf \
&& rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif
RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af \
&& rm -rf assets inputs
RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git 5b3af030dd83e0297272d861c19477735d0317ec
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9dbbea318df83e8
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git ab527a9a6d347f364e3d185ba6d714e22d80cb3c
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2cf03aaf6e704197fd0dae7c7f96aa59cf1b11c9
RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794a7f5657c19f
RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets 6f7db241d2f8ba7457bac5ca9753331f0c266917
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM python:3.10.9-slim
FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
@@ -30,61 +23,39 @@ RUN --mount=type=cache,target=/var/cache/apt \
# we need those
apt-get install -y fonts-dejavu-core rsync git jq moreutils aria2 \
# extensions needs those
ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev
RUN --mount=type=cache,target=/cache --mount=type=cache,target=/root/.cache/pip \
aria2c -x 5 --dir /cache --out torch-2.0.1-cp310-cp310-linux_x86_64.whl -c \
https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl && \
pip install /cache/torch-2.0.1-cp310-cp310-linux_x86_64.whl torchvision --index-url https://download.pytorch.org/whl/cu118
ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev build-essential
WORKDIR /
RUN --mount=type=cache,target=/root/.cache/pip \
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
cd stable-diffusion-webui && \
git reset --hard 20ae71faa8ef035c31aa3a410b707d792c8203a3 && \
git reset --hard v1.9.4 && \
pip install -r requirements_versions.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl \
pip install /xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl
ENV ROOT=/stable-diffusion-webui
COPY --from=download /repositories/ ${ROOT}/repositories/
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/clip_interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
pip install pyngrok \
pip install pyngrok xformers==0.0.26.post1 \
git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
git+https://github.com/mlfoundations/open_clip.git@v2.20.0
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
# TODO: either remove if fixed in A1111 (unlikely) or move to the top with other apt stuff
# there seems to be a memory leak (or maybe just memory not being freed fast enough) that is fixed by this version of malloc
# maybe move this up to the dependencies list.
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so
ARG SHA=20ae71faa8ef035c31aa3a410b707d792c8203a3
RUN --mount=type=cache,target=/root/.cache/pip \
cd stable-diffusion-webui && \
git fetch && \
git reset --hard ${SHA} && \
pip install -r requirements_versions.txt
COPY . /docker
RUN \
python3 /docker/info.py ${ROOT}/modules/ui.py && \
mv ${ROOT}/style.css ${ROOT}/user.css && \
# mv ${ROOT}/style.css ${ROOT}/user.css && \
# one of the ugliest hacks I ever wrote \
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py && \
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /opt/conda/lib/python3.10/site-packages/gradio/routes.py && \
git config --global --add safe.directory '*'
WORKDIR ${ROOT}

View File

@@ -5,6 +5,10 @@ set -Eeuo pipefail
# TODO: move all mkdir -p ?
mkdir -p /data/config/auto/scripts/
# mount scripts individually
echo $ROOT
ls -lha $ROOT
find "${ROOT}/scripts/" -maxdepth 1 -type l -delete
cp -vrfTs /data/config/auto/scripts/ "${ROOT}/scripts/"
@@ -20,6 +24,8 @@ if [ ! -f /data/config/auto/styles.csv ]; then
fi
# copy models from original models folder
mkdir -p /data/models/VAE-approx/ /data/models/karlo/
rsync -a --info=NAME ${ROOT}/models/VAE-approx/ /data/models/VAE-approx/
rsync -a --info=NAME ${ROOT}/models/karlo/ /data/models/karlo/
@@ -57,9 +63,16 @@ chown -R root ~/.cache/
chmod 766 ~/.cache/
shopt -s nullglob
list=(./extensions/*/requirements.txt)
for req in "${list[@]}"; do
pip install -r "$req"
# For install.py, please refer to https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions#installpy
list=(./extensions/*/install.py)
for installscript in "${list[@]}"; do
EXTNAME=$(echo $installscript | cut -d '/' -f 3)
# Skip installing dependencies if extension is disabled in config
if $(jq -e ".disabled_extensions|any(. == \"$EXTNAME\")" config.json); then
echo "Skipping disabled extension ($EXTNAME)"
continue
fi
PYTHONPATH=${ROOT} python "$installscript"
done
if [ -f "/data/config/auto/startup.sh" ]; then

View File

@@ -1,14 +0,0 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace(' return demo', """
with demo:
gr.Markdown(
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
)
return demo
""", 1)
)

View File

@@ -1,42 +1,95 @@
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
# Limited system user UID
ARG USE_UID=991
# Limited system user GID
ARG USE_GID=991
# Latest tag or bleeding edge commit
ARG USE_EDGE=false
# ComfyUI-GGUF
ARG USE_GGUF=false
# x-flux-comfyui
ARG USE_XFLUX=false
# comfyui_controlnet_aux
ARG USE_CNAUX=false
# krita-ai-diffusion
ARG USE_KRITA=false
# ComfyUI_IPAdapter_plus
ARG USE_IPAPLUS=false
# comfyui-inpaint-nodes
ARG USE_INPAINT=false
# comfyui-tooling-nodes
ARG USE_TOOLING=false
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1 USE_EDGE=$USE_EDGE
ENV USE_GGUF=$USE_GGUF USE_XFLUX=$USE_XFLUX ROOT=/stable-diffusion
ENV CACHE=/home/app/.cache USE_CNAUX=$USE_CNAUX USE_KRITA=$USE_KRITA
ENV USE_IPAPLUS=$USE_IPAPLUS USE_INPAINT=$USE_INPAINT USE_TOOLING=$USE_TOOLING
RUN apt-get update && apt-get install -y git && apt-get clean
# User/Group
RUN groupadd -r app -g ${USE_GID} && useradd --no-log-init -m -r -g app app -u ${USE_UID} && \
mkdir -p ${ROOT} && chown ${USE_UID}:${USE_GID} ${ROOT} && mkdir -p ${CACHE}/pip && chown -R ${USE_UID}:${USE_GID} ${CACHE}
RUN --mount=type=cache,uid=${USE_UID},gid=${USE_GID},target=${CACHE} chown -R ${USE_UID}:${USE_UID} ${CACHE}
ENV ROOT=/stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip \
RUN apt-get update && apt-get install -y git && ([ "${USE_XFLUX}" = "true" ] && apt-get install -y libgl1-mesa-glx python3-opencv) && apt-get clean
USER app:app
ENV PATH="${PATH}:/home/app/.local/bin"
RUN --mount=type=cache,uid=${USE_UID},gid=${USE_GID},target=${CACHE} pip --cache-dir=${CACHE}/pip install -U pip
RUN --mount=type=cache,uid=${USE_UID},gid=${USE_GID},target=${CACHE} \
git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
cd ${ROOT} && \
git checkout master && \
git reset --hard 884ea653c8d6fe19b3724f45a04a0d74cd881f2f && \
pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21-cp310-cp310-linux_x86_64.whl \
pip install /xformers-0.0.21-cp310-cp310-linux_x86_64.whl
bash -c 'VERSION=$(git describe --tags --abbrev=0) && \
if [ "${USE_EDGE}" = "true" ]; then VERSION=$(git describe --abbrev=7); fi && \
git reset --hard ${VERSION}' && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && \
if [ "${USE_KRITA}" = "true" ]; then \
pip --cache-dir=${CACHE}/pip install aiohttp tqdm && \
git clone https://github.com/Acly/krita-ai-diffusion.git && \
cd krita-ai-diffusion && git checkout main && \
git submodule update --init && cd ..; \
export USE_CNAUX="true" USE_IPAPLUS="true" \
USE_INPAINT="true" USE_TOOLING="true"; \
fi; \
if [ "${USE_GGUF}" = "true" ]; then \
git clone https://github.com/city96/ComfyUI-GGUF.git && \
cd ComfyUI-GGUF && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && cd ..; \
fi; \
if [ "${USE_XFLUX}" = "true" ]; then \
git clone https://github.com/XLabs-AI/x-flux-comfyui.git && \
cd x-flux-comfyui && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && cd ..; \
fi; \
if [ "${USE_CNAUX}" = "true" ]; then \
git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git && \
cd comfyui_controlnet_aux && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && \
# This extra step to separate onnxruntime installation is required to restore onnx cuda support \
pip --cache-dir=${CACHE}/pip install onnxruntime && pip --cache-dir=${CACHE}/pip install onnxruntime-gpu && cd ..; \
fi; \
if [ "${USE_IPAPLUS}" = "true" ]; then \
git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus.git && \
cd ComfyUI_IPAdapter_plus && git checkout main && cd ..; \
fi; \
if [ "${USE_INPAINT}" = "true" ]; then \
git clone https://github.com/Acly/comfyui-inpaint-nodes.git && \
cd comfyui-inpaint-nodes && git checkout main && \
pip --cache-dir=${CACHE}/pip install opencv-python && cd ..; \
fi; \
if [ "${USE_TOOLING}" = "true" ]; then \
git clone https://github.com/Acly/comfyui-tooling-nodes.git && \
cd comfyui-tooling-nodes && git checkout main && cd ..; \
fi
WORKDIR ${ROOT}
COPY --chown=${USE_UID}:${USE_GID} . /docker/
RUN chmod u+x /docker/entrypoint.sh && cp /docker/extra_model_paths.yaml ${ROOT}
ARG BRANCH=master SHA=8607c2d42d10b0108de02528e813cc703e58813f
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -r requirements.txt
# add info
COPY . /docker/
RUN cp /docker/extra_model_paths.yaml ${ROOT}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
ENV NVIDIA_VISIBLE_DEVICES=all PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u main.py --listen --port 7860 ${CLI_ARGS}

View File

@@ -2,11 +2,12 @@
set -Eeuo pipefail
mkdir -vp /data/config/comfy/custom_nodes
CUSTOM_NODES="/data/config/comfy/custom_nodes"
mkdir -vp "${CUSTOM_NODES}"
declare -A MOUNTS
MOUNTS["/root/.cache"]="/data/.cache"
MOUNTS["${CACHE}"]="/data/.cache"
MOUNTS["${ROOT}/input"]="/data/config/comfy/input"
MOUNTS["${ROOT}/output"]="/output/comfy"
@@ -22,4 +23,47 @@ for to_path in "${!MOUNTS[@]}"; do
echo Mounted $(basename "${from_path}")
done
if [ "${USE_KRITA}" = "true" ]; then
if [ "${KRITA_DOWNLOAD_MODELS:-false}" = "true" ]; then
cd "${ROOT}/krita-ai-diffusion/scripts" && python scripts/download_models.py --recommended /data && cd ..
cd "${ROOT}/models/" mv -v upscale_models upscale_models.stock && ln -sT /data/models/upscale_models upscale_models
fi
export USE_CNAUX="true" USE_IPAPLUS="true" USE_INPAINT="true" USE_TOOLING="true"
fi
if [ "${USE_GGUF}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/ComfyUI-GGUF" ] && mv "${ROOT}/ComfyUI-GGUF" "${CUSTOM_NODES}"/
fi
if [ "${USE_XFLUX}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/x-flux-comfyui" ] && mv "${ROOT}/x-flux-comfyui" "${CUSTOM_NODES}"/
[ ! -e "/data/models/clip_vision" ] && mkdir -p /data/models/clip_vision
[ ! -e "/data/models/clip_vision/model.safetensors" ] && cd /data/models/clip_vision && \
python -c 'import sys; from urllib.request import urlopen; from pathlib import Path; Path(sys.argv[2]).write_bytes(urlopen("".join([sys.argv[1],sys.argv[2]])).read())' \
"https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/" "model.safetensors"
[ ! -e "/data/models/xlabs" ] && mkdir -p /data/models/xlabs/{ipadapters,loras,controlnets}
[ ! -e "/data/models/xlabs/ipadapters/flux-ip-adapter.safetensors" ] && cd /data/models/xlabs/ipadapters && \
python -c 'import sys; from urllib.request import urlopen; from pathlib import Path; Path(sys.argv[2]).write_bytes(urlopen("".join([sys.argv[1],sys.argv[2]])).read())' \
"https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/" "flux-ip-adapter.safetensors"
[ -d "${ROOT}/models/xlabs" ] && rm -rf "${ROOT}/models/xlabs"
[ ! -e "${ROOT}/models/xlabs" ] && cd "${ROOT}/models" && ln -sT /data/models/xlabs xlabs && cd ..
fi
if [ "${USE_CNAUX}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui_controlnet_aux" ] && mv "${ROOT}/comfyui_controlnet_aux" "${CUSTOM_NODES}"/
fi
if [ "${USE_IPAPLUS}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/ComfyUI_IPAdapter_plus" ] && mv "${ROOT}/ComfyUI_IPAdapter_plus" "${CUSTOM_NODES}"/
fi
if [ "${USE_INPAINT}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui-inpaint-nodes" ] && mv "${ROOT}/comfyui-inpaint-nodes" "${CUSTOM_NODES}"/
fi
if [ "${USE_TOOLING}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui-tooling-nodes" ] && mv "${ROOT}/comfyui-tooling-nodes" "${CUSTOM_NODES}"/
fi
if [ -f "/data/config/comfy/startup.sh" ]; then
pushd ${ROOT}
. /data/config/comfy/startup.sh
popd
fi
exec "$@"

View File

@@ -15,11 +15,15 @@ a111:
gligen: models/GLIGEN
clip: models/CLIPEncoder
embeddings: embeddings
unet: models/unet
clip_vision: models/clip_vision
xlabs: models/xlabs
inpaint: models/inpaint
ipadapter: models/ipadapter
custom_nodes: config/comfy/custom_nodes
# TODO: I am unsure about these, need more testing
# style_models: config/comfy/style_models
# t2i_adapter: config/comfy/t2i_adapter
# clip_vision: config/comfy/clip_vision
# diffusers: config/comfy/diffusers

View File

@@ -1,6 +1,6 @@
FROM bash:alpine3.15
FROM bash:alpine3.19
RUN apk add parallel aria2
RUN apk update && apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]

View File

@@ -1,53 +0,0 @@
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1
# patch match:
# https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && \
apt-get install make g++ git libopencv-dev -y && \
apt-get clean && \
cd /usr/lib/x86_64-linux-gnu/pkgconfig/ && \
ln -sf opencv4.pc opencv.pc
ENV ROOT=/InvokeAI
RUN git clone https://github.com/invoke-ai/InvokeAI.git ${ROOT}
WORKDIR ${ROOT}
RUN --mount=type=cache,target=/root/.cache/pip \
git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 && \
pip install -e .
ARG BRANCH=main SHA=f3b2e02921927d9317255b1c3811f47bd40a2bf9
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git reset --hard && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -U -e .
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21-cp310-cp310-linux_x86_64.whl \
pip install -U opencv-python-headless triton /xformers-0.0.21-cp310-cp310-linux_x86_64.whl && \
python3 -c "from patchmatch import patch_match"
COPY . /docker/
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONUNBUFFERED=1 PRELOAD=false HF_HOME=/root/.cache/huggingface CONFIG_DIR=/data/config/invoke CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD invokeai --web --host 0.0.0.0 --port 7860 --root_dir ${ROOT} --config ${CONFIG_DIR}/models.yaml \
--outdir /output/invoke --embedding_directory /data/embeddings/ --lora_directory /data/models/Lora \
--no-nsfw_checker --no-safety_checker ${CLI_ARGS}

View File

@@ -1,45 +0,0 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
mkdir -p ${CONFIG_DIR} ${ROOT}/configs/stable-diffusion/
# cache
MOUNTS["/root/.cache"]=/data/.cache/
# this is really just a hack to avoid migrations
rm -rf ${HF_HOME}/diffusers
# ui specific
MOUNTS["${ROOT}/models/codeformer"]=/data/models/Codeformer/
MOUNTS["${ROOT}/models/gfpgan/GFPGANv1.4.pth"]=/data/models/GFPGAN/GFPGANv1.4.pth
MOUNTS["${ROOT}/models/gfpgan/weights"]=/data/models/GFPGAN/
MOUNTS["${ROOT}/models/realesrgan"]=/data/models/RealESRGAN/
MOUNTS["${ROOT}/models/ldm"]=/data/.cache/invoke/ldm/
# hacks
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
# ends with slash, make it!
if [[ "$from_path" == */ ]]; then
mkdir -vp "$from_path"
fi
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
if "${PRELOAD}" == "true"; then
set -Eeuo pipefail
invokeai-configure --root ${ROOT} --yes
cp ${ROOT}/configs/models.yaml ${CONFIG_DIR}/models.yaml
fi
exec "$@"