38 Commits

Author SHA1 Message Date
lainedfles
1fccee32b0 Merge a282e3cf5b into 802d0bcd68 2024-11-25 04:53:47 +00:00
Self Denial
a282e3cf5b fix: Correct directory change command in entrypoint.sh
This commit fixes an issue in the `entrypoint.sh` script where the directory was not being changed back to its original location after running the model download script for Krita. The `cd ..` command has been replaced with `cd -`, which correctly returns to the previous working directory, ensuring that subsequent commands in the script are executed from the expected path. This change prevents potential errors and ensures the script behaves as intended.
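For illustration, a minimal sketch of the difference (the command is the Krita model-download step from the script, with its flags abbreviated):

```bash
# `cd ..` only steps up one directory level, which is wrong after descending
# several levels into the scripts directory:
cd "${ROOT}/krita-ai-diffusion/scripts" && python3 download_models.py --recommended /data && cd ..

# `cd -` returns to whatever directory the script was in before the `cd`,
# regardless of how deep the target path is:
cd "${ROOT}/krita-ai-diffusion/scripts" && python3 download_models.py --recommended /data && cd -
```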
2024-11-24 21:53:35 -07:00
Self Denial
ed02c136f0 refactor: Improve error handling and consistency in entrypoint script
This commit refactors the `entrypoint.sh` script to enhance error handling and ensure consistent use of quotes around variables. Specifically:

1. **Error Handling**: Added `set +e` before `git pull` commands within the loop for updating custom nodes, allowing the script to continue processing other nodes even if one fails. This is followed by `set -e` to revert to strict error handling.

2. **Consistent Quoting**: Ensured that all variables are quoted consistently throughout the script to prevent issues with spaces or special characters in file paths.

3. **Command Placement**: Moved the model download command for Krita after checking and creating symbolic links, ensuring that models are downloaded into the correct directory structure.

4. **Use of `cp -a` Instead of `mv`**: Changed the use of `mv` to `cp -a` when copying custom nodes to preserve file attributes and avoid accidental deletion of source files.

These changes improve the robustness and reliability of the script, making it more resilient to errors and ensuring that all operations are performed correctly.
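A minimal sketch of the resulting update loop in `entrypoint.sh`, assuming the paths and variables described above:

```bash
if [ "${UPDATE_CUSTOM_NODES:-false}" = "true" ]; then
  find /data/config/comfy/custom_nodes/ -mindepth 1 -maxdepth 1 -type d | while read -r NODE; do
    echo "---- ${NODE##*/} ----"
    set +e                    # tolerate a failing pull for this node
    cd "$NODE" && git pull
    cd ..
    set -e                    # restore strict error handling
  done
fi
```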
2024-11-24 21:50:04 -07:00
Self Denial
ba5ecaa941 fix: Ensure correct directory after creating symbolic link in entrypoint.sh
This commit addresses an issue where the script did not change back to the original directory after creating a symbolic link. By adding `&& cd ..` at the end of the `ln -sfT /data/models/upscale_models upscale_models` command, we ensure that the working directory remains consistent and expected for subsequent operations in the script. This fix prevents potential errors or unexpected behavior due to an incorrect current working directory.
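The relevant section of the script after this fix looks roughly like this (sketch based on the diff below):

```bash
if [ ! -L "${ROOT}/models/upscale_models" ]; then
  cd "${ROOT}/models"
  # create/replace the symlink, then step back out of ${ROOT}/models
  ln -sfT /data/models/upscale_models upscale_models && cd ..
fi
```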
2024-11-24 19:05:54 -07:00
Self Denial
1ba0fa8546 feat: Enhance ComfyUI Dockerfile with additional features and configurations
This commit significantly enhances the `Dockerfile` for the ComfyUI service by introducing several new features and configurations. Here's a detailed breakdown of the changes:

1. **Base Image Update**:
   - Updated the base image from `pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime` to `pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime`. This ensures compatibility with newer versions of PyTorch and CUDA, providing better performance and support for recent hardware.

2. **Environment Variables**:
   - Introduced several new environment variables (`USE_UID`, `USE_GID`, `USE_USER`, `USE_GROUP`) to allow customization of the user and group IDs within the container.
   - Added flags (`USE_EDGE`, `USE_GGUF`, `USE_XFLUX`, `USE_CNAUX`, `USE_KRITA`, `USE_IPAPLUS`, `USE_INPAINT`, `USE_TOOLING`) to enable or disable optional features and integrations.

3. **User/Group Management**:
   - Added logic to create a user and group with specified IDs (`USE_UID` and `USE_GID`). This is useful for running the container in environments where specific user permissions are required.
   - Set default values for these variables, ensuring that the Dockerfile remains functional without explicit configuration.

4. **Optional Feature Integration**:
   - Included conditional logic to clone and install additional repositories based on the flags set (`USE_GGUF`, `USE_XFLUX`, `USE_CNAUX`, `USE_KRITA`, `USE_IPAPLUS`, `USE_INPAINT`, `USE_TOOLING`). This allows users to customize their ComfyUI installation by enabling only the features they need.
   - Ensured that dependencies for these optional features are installed correctly, including handling specific cases like separating ONNX Runtime installations to restore CUDA support.

5. **Python Version Update**:
   - Changed the Python command from `python` to `python3` in the CMD instruction to explicitly specify the use of Python 3, which is a best practice for clarity and compatibility.

6. **File Permissions and Ownership**:
   - Used `--chown=${USE_UID}:${USE_GID}` when copying files into the container to ensure that the correct user owns these files, preventing permission issues during execution.

7. **General Improvements**:
   - Improved readability and maintainability of the Dockerfile by organizing the steps logically and adding comments where necessary.
   - Ensured that all commands are idempotent, meaning they can be run multiple times without causing unintended side effects.

The new build arguments in detail:

- `USE_UID`: User ID of the container user. Defaults to `0`.
- `USE_GID`: Group ID of the container user. Defaults to `0`.
- `USE_USER`: Username of the container user. Defaults to `root`.
- `USE_GROUP`: Group name of the container user. Defaults to `root`. Set a non-zero `USE_UID`/`USE_GID` together with a user and group name to run as a non-root user.
- `USE_EDGE`: If set to `true`, clones and installs the latest development version of ComfyUI from the main branch.
- `USE_GGUF`: If set to `true`, clones and installs the ComfyUI-GGUF extension for loading GGUF-quantized models.
- `USE_XFLUX`: If set to `true`, clones and installs the x-flux-comfyui extension for additional functionalities.
- `USE_CNAUX`: If set to `true`, clones and installs the comfyui_controlnet_aux extension for control net auxiliary features.
- `USE_KRITA`: If set to `true`, clones and installs multiple extensions (comfyui_controlnet_aux, ComfyUI_IPAdapter_plus, comfyui-inpaint-nodes, comfyui-tooling-nodes) that are useful when integrating with Krita.
- `USE_IPAPLUS`: If set to `true`, clones and installs the ComfyUI_IPAdapter_plus extension for IP Adapter functionalities.
- `USE_INPAINT`: If set to `true`, clones and installs the comfyui-inpaint-nodes extension for inpainting capabilities.
- `USE_TOOLING`: If set to `true`, clones and installs the comfyui-tooling-nodes extension for additional tooling features.

These ARGs provide flexibility in configuring the Docker container, allowing users to tailor the ComfyUI installation to their specific needs.
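As a usage sketch (the values and image tag below are illustrative, not defaults), a build that enables a subset of these options could look like:

```bash
# Hypothetical build of the comfy image as an unprivileged user with the
# Krita and GGUF integrations baked in.
docker build ./services/comfy/ \
  --build-arg USE_UID=1000 \
  --build-arg USE_GID=1000 \
  --build-arg USE_USER=comfy \
  --build-arg USE_GROUP=comfy \
  --build-arg USE_KRITA=true \
  --build-arg USE_GGUF=true \
  -t sd-comfy:custom
```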
2024-11-24 18:29:16 -07:00
Self Denial
eceac3ce5f feat: Add custom node management and additional configuration options in entrypoint.sh
This commit introduces several enhancements to the `entrypoint.sh` script for the Comfy service:

1. **Custom Node Management**: Added logic to update custom nodes located in `/data/config/comfy/custom_nodes/`. If the `UPDATE_CUSTOM_NODES` environment variable is set to `true`, it will pull the latest changes from each node's Git repository and update submodules recursively for specific nodes like `krita-ai-diffusion`.

2. **Environment Variable Usage**: Introduced a new `CACHE` environment variable that is used in place of the hardcoded `/root/.cache` path, making the script more flexible and configurable.

3. **Krita Integration**: Added support for Krita integration with ComfyUI, including downloading models if `KRITA_DOWNLOAD_MODELS` is set to `true`, managing upscale models, and setting up symbolic links for model directories.

4. **GGUF, XFLUX, CNAUX, IPAPLUS, INPAINT, TOOLING Support**: Included logic to handle additional custom nodes (`ComfyUI-GGUF`, `x-flux-comfyui`, `comfyui_controlnet_aux`, `ComfyUI_IPAdapter_plus`, `comfyui-inpaint-nodes`, `comfyui-tooling-nodes`) based on their respective environment variables (`USE_GGUF`, `USE_XFLUX`, `USE_CNAUX`, `USE_IPAPLUS`, `USE_INPAINT`, `USE_TOOLING`). These nodes are moved to the custom nodes directory if they don't already exist.

5. **Model Management**: Added functionality to download and manage specific models required by certain integrations, such as CLIP Vision and IP Adapters for XFLUX.

These changes enhance the script's flexibility, making it easier to integrate additional features and manage dependencies dynamically based on environment configurations.
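As a usage sketch, the new switches can be supplied through the container environment, for example via the `.env` file added elsewhere in this change set (values are illustrative):

```bash
# Illustrative .env entries read by entrypoint.sh at container start.
# Pull the latest changes for every node under /data/config/comfy/custom_nodes/:
UPDATE_CUSTOM_NODES=true
# Let krita-ai-diffusion's download_models.py fetch the recommended models into /data:
KRITA_DOWNLOAD_MODELS=true
```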
2024-11-24 16:34:53 -07:00
Self Denial
2de99033e8 feat: Add new model paths to extra_model_paths.yaml
This commit introduces several new model paths to the `extra_model_paths.yaml` file under the `a111` section. The added paths include `unet`, `clip_vision`, `xlabs`, `inpaint`, and `ipadapter`. These additions enhance the configuration by providing more options for different models, which can be utilized in various functionalities within the service.
2024-11-24 16:32:41 -07:00
Self Denial
e68863099d chore: Add .env to .gitignore and create initial .env file
This commit introduces a new `.env` file for environment variable management and updates the `.gitignore` file to ensure that this sensitive configuration file is not tracked by version control. This change helps maintain security by preventing accidental exposure of environment-specific settings in the repository.
2024-11-24 16:18:20 -07:00
Self Denial
edcd949867 chore: Add .env file support to docker-compose
This commit introduces the use of an `.env` file for environment variable management across all defined services in the `docker-compose.yml`. By adding `env_file: .env`, each service will now load environment variables from a local `.env` file, enhancing configuration flexibility and security. This change ensures that sensitive information or common settings can be centralized and easily managed without hardcoding them into the Docker Compose file.
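For example, after populating `.env`, services are started as usual and pick up the shared variables (a sketch; the profile name comes from the compose file):

```bash
# Each service now declares `env_file: .env`, so variables defined in ./.env
# are injected into the container environment at startup.
docker compose --profile comfy up --build
```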
2024-11-24 16:12:33 -07:00
AbdBarho
802d0bcd68 Remove invoke (#705)
The invoke team already maintains a Docker setup for their service; this copy was maybe relevant two years ago when all of this started, but I don't think it makes sense anymore.

Refer to invoke's docs to install using docker
https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
2024-06-23 11:16:21 +02:00
mohamednabiel717
b1a26b8041 Update Auto to 1.9.4 (#700)
feee37d75f

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-06-07 19:10:28 +02:00
AbdBarho
f1bf3b0943 Bump pytorch containers (#697)
Closes #696
Closes #694
2024-05-28 19:39:33 +02:00
AbdBarho
35a18b3d46 Update Comfy (#693)
276f8fce9f

Closes #676 
Closes #674

Refs #686
2024-05-20 14:44:41 +02:00
神楽坂·喵
887e49c495 Add missing assets to auto1111 (#684)
Closes #683

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-05-20 13:41:54 +02:00
Derek Palmer (Creative)
7051ce0a44 Updated docker-compose to remove obsolete version syntax (#692)
Removes the `version:` key from the `docker-compose` file. If left in, it triggers an obsolete warning, so removing it avoids unnecessary warnings and keeps the file up to current standards.

See [Version top-level element
(obsolete)](https://docs.docker.com/compose/compose-file/04-version-and-name/#version-top-level-element-obsolete)
for reference.
2024-05-20 13:41:36 +02:00
SachiaLanlus
ac94eac2b5 Update Auto v1.9.3 (#673)
Closes issue  #672

### Update versions

- auto:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.9.3
2024-05-20 13:35:07 +02:00
AbdBarho
015c2ec829 Pin xformers (for now) (#651)
Closes #648
Closes #649
2024-02-03 08:50:40 +01:00
AbdBarho
245d1d443f Update package index (#650)
Closes #622
2024-02-03 08:17:45 +01:00
Johannes Sjölund
60c4832185 Update open_clip to v2.20.0 in Auto (#617)
Fixes #615.

Updates `open-clip-torch` to the one specified in auto's
[requirements_versions.txt](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/requirements_versions.txt#L18).

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:34:46 +01:00
Adam Florizone
f613639748 Update Auto v1.7.0 (#632)


https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.7.0

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:30:40 +01:00
simonmcnair
fbc5c359d0 Resolve memory usage situation in Auto (#620)
Fixes
https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/612

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:13:01 +01:00
sejoung kim
90affeb72a Bump Comfy (#603)
d1f3637a5a

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2024-01-01 11:04:02 +01:00
AbdBarho
3e67f559d4 Update Auto (#610)
Closes #609

4afaaf8a02
2023-11-13 21:12:07 +01:00
cococig
a2561f2659 Update automatic1111 webui base image (#601)
Update the minor version of Python in the base image for AUTOMATIC1111
web UI.

Closes issue #600
2023-11-13 19:35:24 +01:00
cloudaxes
6a34739135 Update Automatic1111 to v1.6.0 (#585)
Update Automatic1111 Stable Diffusion Webui to v1.6.0.

Closes #583 

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-09-09 16:10:05 +02:00
Sebastian Piechowiak
630980b1bf Skipping installation of requirements for disabled extensions (#582)
Closes #563
2023-09-09 15:34:06 +02:00
66li
84740598bc Update generative-models version (#581)
Upgrade a dependent library



https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.5.2/modules/launch_utils.py#L288C90-L288C130

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-08-31 20:04:32 +02:00
AbdBarho
59b9762ac7 Update Comfy (#580)
7e941f9f24
2023-08-30 20:00:48 +02:00
AbdBarho
70357bf01e Auto 1.5.2 (#579)
c9c8485bc1
2023-08-30 19:55:06 +02:00
Manuel Schmid
def76291f8 Update Automatic1111 to 1.5.1 to add compatibility for SDXL (#560)
Uses the latest release of
https://github.com/Stability-AI/generative-models
45c443b316737a4ab6e40413d7794a7f5657c19f

Tested with the official SDXL 1.0 model from
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors
and official refiner from
https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors.
VAE:
https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors

Closes #558
Closes #559

68f336bd99

---------

Co-authored-by: AbdBarho <ka70911@gmail.com>
2023-07-30 15:42:32 +02:00
AbdBarho
09a0f11946 Add startup script for comfy (#552)
Closes #451

---------

Co-authored-by: PassiveLemon <lemonl3mn@protonmail.com>
2023-07-22 08:31:17 +02:00
cloudaxes
6de45b1984 Upgrade k-diffusion to Release 0.0.15 to get access to DPM++ (2M) SDE sampler. (#537)
Closes issue #536
2023-07-22 07:23:30 +02:00
AbdBarho
103e11493b Auto 1.4.0 (#507)
394ffa7b0a

Maybe bug:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11040
2023-07-02 08:15:51 +02:00
AbdBarho
95e96602f9 Bump auto 2023-06-26 21:57:45 +02:00
神楽坂·喵
37a82af4b7 Add build-essential package (#522)
Fixes the problem that some extensions need to be built from source. Because the extension installation step was moved forward into `entrypoint.sh` instead of `startup.sh`, required system packages can no longer be installed before `install.py` runs. For example, the `sd-webui-roop` extension depends on `insightface==0.7.3`, and building that PyPI package's wheel fails because `gcc` cannot be found:

ddc02ee1a9/requirements.txt (L1)

Since not all PyPI packages are distributed as prebuilt wheels, packages distributed as source need `build-essential` to build.
2023-06-26 21:37:37 +02:00
AbdBarho
5e28222332 Allow setting port through env WEBUI_PORT (#521)
I am actually not happy with this solution; I would prefer if it were possible to customize the ports within `docker-compose.override.yml`.
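A usage sketch, assuming Compose interpolates `WEBUI_PORT` from the shell environment (or an `.env` file) with 7860 as the fallback:

```bash
# Expose the auto UI on host port 8080 instead of the default 7860.
WEBUI_PORT=8080 docker compose --profile auto up --build
```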
2023-06-25 20:33:57 +02:00
AbdBarho
6c45e0c2ef Create dirs if not exist (#520)
Closes #519
2023-06-25 20:21:41 +02:00
神楽坂·喵
6365811f35 Modify installation extension dependencies (#518)
Perform the full extension installation process instead of only installing dependencies. Some extensions do not ship a `requirements.txt` but install their dependencies in `install.py`, and all extensions include `install.py`, so it is safe to use it for installing extension dependencies. This is because extension development for AUTOMATIC1111's webui does not require a `requirements.txt`; extensions use `install.py` to initialize themselves.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions#installpy
2023-06-25 12:42:04 +02:00
16 changed files with 203 additions and 228 deletions

File: .env (new, empty file)

File: GitHub issue template

@@ -19,7 +19,7 @@ assignees: ""
**Which UI**
auto or auto-cpu or invoke or sygil?
auto or auto-cpu or invoke or comfy?
**Hardware / Software**

File: GitHub pull request template

@@ -9,6 +9,5 @@ Closes issue #
### Update versions
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- sygil: https://github.com/Sygil-Dev/sygil-webui/commit/
- invoke: https://github.com/invoke-ai/InvokeAI/commit/
- comfy: https://github.com/comfyanonymous/ComfyUI/commit/

File: CI build workflow

@@ -14,7 +14,6 @@ jobs:
matrix:
profile:
- auto
- invoke
- comfy
- download
runs-on: ubuntu-latest

File: .gitignore

@@ -1,5 +1,6 @@
/.devcontainer
/docker-compose.override.yml
/.env
# VSCode specific
*.code-workspace

File: README.md

@@ -18,14 +18,6 @@ This repository provides multiple UIs for you to play around with stable diffusi
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541954-46afd772-d0c8-4005-874c-e2eca40c02f2.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541956-5b528de7-1b5d-479f-a1db-d3f5a53afc59.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541957-cf78b352-a071-486d-8889-f26952779a61.jpg) |
### [InvokeAI](https://github.com/invoke-ai/InvokeAI)
[Full feature list here](https://github.com/invoke-ai/InvokeAI#features), Screenshots:
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/195158552-39f58cb6-cfcc-4141-9995-a626e3760752.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158553-152a0ab8-c0fd-4087-b121-4823bcd8d6b5.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158548-e118206e-c519-4915-85d6-4c248eb10fc0.jpg) |
### [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
[Full feature list here](https://github.com/comfyanonymous/ComfyUI#features), Screenshot:

File: docker-compose.yml

@@ -1,13 +1,12 @@
version: '3.9'
x-base_service: &base_service
ports:
- "7860:7860"
- "${WEBUI_PORT:-7860}:7860"
volumes:
- &v1 ./data:/data
- &v2 ./output:/output
stop_signal: SIGKILL
tty: true
env_file: .env
deploy:
resources:
reservations:
@@ -29,9 +28,10 @@ services:
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
image: sd-auto:59
image: sd-auto:78
environment:
- CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
env_file: .env
auto-cpu:
<<: *automatic
@@ -39,30 +39,16 @@ services:
deploy: {}
environment:
- CLI_ARGS=--no-half --precision full --allow-code --enable-insecure-extension-access --api
invoke: &invoke
<<: *base_service
profiles: ["invoke"]
build: ./services/invoke/
image: sd-invoke:30
environment:
- PRELOAD=true
- CLI_ARGS=--xformers
# invoke-cpu:
# <<: *invoke
# profiles: ["invoke-cpu"]
# environment:
# - PRELOAD=true
# - CLI_ARGS=--always_use_cpu
env_file: .env
comfy: &comfy
<<: *base_service
profiles: ["comfy"]
build: ./services/comfy/
image: sd-comfy:3
image: sd-comfy:7
environment:
- CLI_ARGS=
env_file: .env
comfy-cpu:
@@ -71,3 +57,4 @@ services:
deploy: {}
environment:
- CLI_ARGS=--cpu
env_file: .env

File: services/AUTOMATIC1111/Dockerfile

@@ -2,26 +2,19 @@ FROM alpine/git:2.36.2 as download
COPY clone.sh /clone.sh
RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
&& rm -rf data assets **/*.ipynb
RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git 6f7db241d2f8ba7457bac5ca9753331f0c266917
RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git 47b6b607fdd31875c9279cd2f4f16b92e4ea958e \
RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf \
&& rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif
RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af \
&& rm -rf assets inputs
RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git 5b3af030dd83e0297272d861c19477735d0317ec
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9dbbea318df83e8
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git ab527a9a6d347f364e3d185ba6d714e22d80cb3c
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2cf03aaf6e704197fd0dae7c7f96aa59cf1b11c9
RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794a7f5657c19f
RUN . /clone.sh stable-diffusion-webui-assets https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets 6f7db241d2f8ba7457bac5ca9753331f0c266917
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM python:3.10.9-slim
FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
@@ -30,61 +23,39 @@ RUN --mount=type=cache,target=/var/cache/apt \
# we need those
apt-get install -y fonts-dejavu-core rsync git jq moreutils aria2 \
# extensions needs those
ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev
RUN --mount=type=cache,target=/cache --mount=type=cache,target=/root/.cache/pip \
aria2c -x 5 --dir /cache --out torch-2.0.1-cp310-cp310-linux_x86_64.whl -c \
https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl && \
pip install /cache/torch-2.0.1-cp310-cp310-linux_x86_64.whl torchvision --index-url https://download.pytorch.org/whl/cu118
ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev build-essential
WORKDIR /
RUN --mount=type=cache,target=/root/.cache/pip \
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
cd stable-diffusion-webui && \
git reset --hard 20ae71faa8ef035c31aa3a410b707d792c8203a3 && \
git reset --hard v1.9.4 && \
pip install -r requirements_versions.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl \
pip install /xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl
ENV ROOT=/stable-diffusion-webui
COPY --from=download /repositories/ ${ROOT}/repositories/
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/clip_interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
pip install pyngrok \
pip install pyngrok xformers==0.0.26.post1 \
git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
git+https://github.com/mlfoundations/open_clip.git@v2.20.0
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
# TODO: either remove if fixed in A1111 (unlikely) or move to the top with other apt stuff
# there seems to be a memory leak (or maybe just memory not being freed fast enough) that is fixed by this version of malloc
# maybe move this up to the dependencies list.
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so
ARG SHA=20ae71faa8ef035c31aa3a410b707d792c8203a3
RUN --mount=type=cache,target=/root/.cache/pip \
cd stable-diffusion-webui && \
git fetch && \
git reset --hard ${SHA} && \
pip install -r requirements_versions.txt
COPY . /docker
RUN \
python3 /docker/info.py ${ROOT}/modules/ui.py && \
mv ${ROOT}/style.css ${ROOT}/user.css && \
# mv ${ROOT}/style.css ${ROOT}/user.css && \
# one of the ugliest hacks I ever wrote \
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py && \
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /opt/conda/lib/python3.10/site-packages/gradio/routes.py && \
git config --global --add safe.directory '*'
WORKDIR ${ROOT}

File: services/AUTOMATIC1111/entrypoint.sh

@@ -5,6 +5,10 @@ set -Eeuo pipefail
# TODO: move all mkdir -p ?
mkdir -p /data/config/auto/scripts/
# mount scripts individually
echo $ROOT
ls -lha $ROOT
find "${ROOT}/scripts/" -maxdepth 1 -type l -delete
cp -vrfTs /data/config/auto/scripts/ "${ROOT}/scripts/"
@@ -20,6 +24,8 @@ if [ ! -f /data/config/auto/styles.csv ]; then
fi
# copy models from original models folder
mkdir -p /data/models/VAE-approx/ /data/models/karlo/
rsync -a --info=NAME ${ROOT}/models/VAE-approx/ /data/models/VAE-approx/
rsync -a --info=NAME ${ROOT}/models/karlo/ /data/models/karlo/
@@ -57,9 +63,16 @@ chown -R root ~/.cache/
chmod 766 ~/.cache/
shopt -s nullglob
list=(./extensions/*/requirements.txt)
for req in "${list[@]}"; do
pip install -r "$req"
# For install.py, please refer to https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions#installpy
list=(./extensions/*/install.py)
for installscript in "${list[@]}"; do
EXTNAME=$(echo $installscript | cut -d '/' -f 3)
# Skip installing dependencies if extension is disabled in config
if $(jq -e ".disabled_extensions|any(. == \"$EXTNAME\")" config.json); then
echo "Skipping disabled extension ($EXTNAME)"
continue
fi
PYTHONPATH=${ROOT} python "$installscript"
done
if [ -f "/data/config/auto/startup.sh" ]; then

File: info.py (removed)

@@ -1,14 +0,0 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace(' return demo', """
with demo:
gr.Markdown(
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
)
return demo
""", 1)
)

File: services/comfy/Dockerfile

@@ -1,42 +1,108 @@
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
# Limited system user UID
ARG USE_UID=0
# Limited system user GID
ARG USE_GID=0
# System user name
ARG USE_USER=root
# System group name
ARG USE_GROUP=root
# Latest tag or bleeding edge commit
ARG USE_EDGE=false
# ComfyUI-GGUF
ARG USE_GGUF=false
# x-flux-comfyui
ARG USE_XFLUX=false
# comfyui_controlnet_aux
ARG USE_CNAUX=false
# krita-ai-diffusion
ARG USE_KRITA=false
# ComfyUI_IPAdapter_plus
ARG USE_IPAPLUS=false
# comfyui-inpaint-nodes
ARG USE_INPAINT=false
# comfyui-tooling-nodes
ARG USE_TOOLING=false
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1
ENV USE_CNAUX=$USE_CNAUX USE_IPAPLUS=$USE_IPAPLUS
ENV USE_INPAINT=$USE_INPAINT USE_TOOLING=$USE_TOOLING
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1 USE_EDGE=$USE_EDGE
ENV USE_GGUF=$USE_GGUF USE_XFLUX=$USE_XFLUX ROOT=/stable-diffusion
ENV CACHE=/home/$USE_USER/.cache USE_KRITA=$USE_KRITA
RUN apt-get update && apt-get install -y git && apt-get clean
RUN mkdir -p ${ROOT} ${CACHE}/pip /home/${USE_USER}
ENV ROOT=/stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip \
# User/Group
RUN if [ ${USE_GID} -ne 0 ]; then \
groupadd -r ${USE_GROUP} -g ${USE_GID}; \
fi; \
if [ ${USE_GID} -ne 0 ]; then \
useradd --no-log-init -m -r -g ${USE_GROUP} ${USE_USER} -u ${USE_UID}; \
fi; \
chown -R ${USE_UID}:${USE_GID} ${ROOT} ${CACHE} /home/${USE_USER}
RUN apt-get update && apt-get install -y git python3-pip
RUN if [ "${USE_XFLUX}" = "true" ] || [ "${USE_KRITA}" = "true" ] || [ "${USE_CNAUX}" = "true" ]; then \
apt-get install -y libgl1-mesa-glx python3-opencv; \
fi
RUN apt-get clean
USER ${USE_USER}:${USE_GROUP}
ENV PATH="${PATH}:/home/${USE_USER}/.local/bin"
RUN --mount=type=cache,uid=${USE_UID},gid=${USE_GID},target=${CACHE} pip --cache-dir=${CACHE}/pip install -U pip
RUN --mount=type=cache,uid=${USE_UID},gid=${USE_GID},target=${CACHE} \
git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
cd ${ROOT} && \
git checkout master && \
git reset --hard 884ea653c8d6fe19b3724f45a04a0d74cd881f2f && \
pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21-cp310-cp310-linux_x86_64.whl \
pip install /xformers-0.0.21-cp310-cp310-linux_x86_64.whl
bash -c 'VERSION=$(git describe --tags --abbrev=0) && \
if [ "${USE_EDGE}" = "true" ]; then VERSION=$(git describe --abbrev=7); fi && \
git reset --hard ${VERSION}' && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && \
if [ "${USE_KRITA}" = "true" ]; then \
git clone https://github.com/Acly/krita-ai-diffusion.git && \
cd krita-ai-diffusion && git checkout main && \
git submodule update --init && \
pip --cache-dir=${CACHE}/pip install aiohttp tqdm && cd ..; \
fi; \
if [ "${USE_GGUF}" = "true" ]; then \
git clone https://github.com/city96/ComfyUI-GGUF.git && \
cd ComfyUI-GGUF && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && cd ..; \
fi; \
if [ "${USE_XFLUX}" = "true" ]; then \
git clone https://github.com/XLabs-AI/x-flux-comfyui.git && \
cd x-flux-comfyui && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && cd ..; \
fi; \
if [ "${USE_CNAUX}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then \
git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git && \
cd comfyui_controlnet_aux && git checkout main && \
pip --cache-dir=${CACHE}/pip install -r requirements.txt && \
# This extra step to separate onnxruntime installation is required to restore onnx cuda support \
pip --cache-dir=${CACHE}/pip install onnxruntime && pip --cache-dir=${CACHE}/pip install onnxruntime-gpu && cd ..; \
fi; \
if [ "${USE_IPAPLUS}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then \
git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus.git && \
cd ComfyUI_IPAdapter_plus && git checkout main && cd ..; \
fi; \
if [ "${USE_INPAINT}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then \
git clone https://github.com/Acly/comfyui-inpaint-nodes.git && \
cd comfyui-inpaint-nodes && git checkout main && \
pip --cache-dir=${CACHE}/pip install opencv-python && cd ..; \
fi; \
if [ "${USE_TOOLING}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then \
git clone https://github.com/Acly/comfyui-tooling-nodes.git && \
cd comfyui-tooling-nodes && git checkout main && cd ..; \
fi
WORKDIR ${ROOT}
COPY --chown=${USE_UID}:${USE_GID} . /docker/
RUN chmod u+x /docker/entrypoint.sh && cp /docker/extra_model_paths.yaml ${ROOT}
ARG BRANCH=master SHA=8607c2d42d10b0108de02528e813cc703e58813f
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -r requirements.txt
# add info
COPY . /docker/
RUN cp /docker/extra_model_paths.yaml ${ROOT}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
ENV NVIDIA_VISIBLE_DEVICES=all PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u main.py --listen --port 7860 ${CLI_ARGS}
CMD python3 -u main.py --listen --port 7860 ${CLI_ARGS}

File: services/comfy/entrypoint.sh

@@ -2,11 +2,12 @@
set -Eeuo pipefail
mkdir -vp /data/config/comfy/custom_nodes
CUSTOM_NODES="/data/config/comfy/custom_nodes"
mkdir -vp "${CUSTOM_NODES}"
declare -A MOUNTS
MOUNTS["/root/.cache"]="/data/.cache"
MOUNTS["${CACHE}"]="/data/.cache"
MOUNTS["${ROOT}/input"]="/data/config/comfy/input"
MOUNTS["${ROOT}/output"]="/output/comfy"
@@ -22,4 +23,58 @@ for to_path in "${!MOUNTS[@]}"; do
echo Mounted $(basename "${from_path}")
done
if [ "${UPDATE_CUSTOM_NODES:-false}" = "true" ]; then
find /data/config/comfy/custom_nodes/ -mindepth 1 -maxdepth 1 -type d | while read NODE
do echo "---- ${NODE##*/} ----"
set +e
cd "$NODE" && git pull; cd ..;
set -e
done
fi
if [ "${USE_KRITA}" = "true" ]; then
[ -d "${ROOT}/models/upscale_models" ] && rm -rf "${ROOT}/models/upscale_models"
if [ ! -L "${ROOT}/models/upscale_models" ]; then
cd "${ROOT}/models"
ln -sfT /data/models/upscale_models upscale_models && cd ..
fi
if [ "${KRITA_DOWNLOAD_MODELS:-false}" = "true" ]; then
cd "${ROOT}/krita-ai-diffusion/scripts" && python3 download_models.py --verbose --retry-attempts 10 --continue-on-error --recommended /data && cd -
fi
fi
if [ "${USE_GGUF}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/ComfyUI-GGUF" ] && cp -a "${ROOT}/ComfyUI-GGUF" "${CUSTOM_NODES}"/
fi
if [ "${USE_XFLUX}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/x-flux-comfyui" ] && cp -a "${ROOT}/x-flux-comfyui" "${CUSTOM_NODES}"/
[ ! -e "/data/models/clip_vision" ] && mkdir -p /data/models/clip_vision
[ ! -e "/data/models/clip_vision/model.safetensors" ] && cd /data/models/clip_vision && \
python3 -c 'import sys; from urllib.request import urlopen; from pathlib import Path; Path(sys.argv[2]).write_bytes(urlopen(sys.argv[1]).read())' \
"https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors" "model.safetensors"
[ ! -e "/data/models/xlabs" ] && mkdir -p /data/models/xlabs/{ipadapters,loras,controlnets}
[ ! -e "/data/models/xlabs/ipadapters/flux-ip-adapter.safetensors" ] && cd /data/models/xlabs/ipadapters && \
python3 -c 'import sys; from urllib.request import urlopen; from pathlib import Path; Path(sys.argv[2]).write_bytes(urlopen(sys.argv[1]).read())' \
"https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/ip_adapter.safetensors" "flux-ip-adapter.safetensors"
[ -d "${ROOT}/models/xlabs" ] && rm -rf "${ROOT}/models/xlabs"
[ ! -e "${ROOT}/models/xlabs" ] && cd "${ROOT}/models" && ln -sT /data/models/xlabs xlabs && cd ..
fi
if [ "${USE_CNAUX}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui_controlnet_aux" ] && cp -a "${ROOT}/comfyui_controlnet_aux" "${CUSTOM_NODES}"/
fi
if [ "${USE_IPAPLUS}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/ComfyUI_IPAdapter_plus" ] && cp -a "${ROOT}/ComfyUI_IPAdapter_plus" "${CUSTOM_NODES}"/
fi
if [ "${USE_INPAINT}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui-inpaint-nodes" ] && cp -a "${ROOT}/comfyui-inpaint-nodes" "${CUSTOM_NODES}"/
fi
if [ "${USE_TOOLING}" = "true" ] || [ "${USE_KRITA}" = "true" ]; then
[ ! -e "${CUSTOM_NODES}/comfyui-tooling-nodes" ] && cp -a "${ROOT}/comfyui-tooling-nodes" "${CUSTOM_NODES}"/
fi
if [ -f "/data/config/comfy/startup.sh" ]; then
pushd "${ROOT}"
. /data/config/comfy/startup.sh
popd
fi
exec "$@"

File: services/comfy/extra_model_paths.yaml

@@ -15,11 +15,15 @@ a111:
gligen: models/GLIGEN
clip: models/CLIPEncoder
embeddings: embeddings
unet: models/unet
clip_vision: models/clip_vision
xlabs: models/xlabs
inpaint: models/inpaint
ipadapter: models/ipadapter
custom_nodes: config/comfy/custom_nodes
# TODO: I am unsure about these, need more testing
# style_models: config/comfy/style_models
# t2i_adapter: config/comfy/t2i_adapter
# clip_vision: config/comfy/clip_vision
# diffusers: config/comfy/diffusers

File: download service Dockerfile

@@ -1,6 +1,6 @@
FROM bash:alpine3.15
FROM bash:alpine3.19
RUN apk add parallel aria2
RUN apk update && apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]

File: services/invoke/Dockerfile (removed)

@@ -1,53 +0,0 @@
FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'
FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1
# patch match:
# https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && \
apt-get install make g++ git libopencv-dev -y && \
apt-get clean && \
cd /usr/lib/x86_64-linux-gnu/pkgconfig/ && \
ln -sf opencv4.pc opencv.pc
ENV ROOT=/InvokeAI
RUN git clone https://github.com/invoke-ai/InvokeAI.git ${ROOT}
WORKDIR ${ROOT}
RUN --mount=type=cache,target=/root/.cache/pip \
git reset --hard f3b2e02921927d9317255b1c3811f47bd40a2bf9 && \
pip install -e .
ARG BRANCH=main SHA=f3b2e02921927d9317255b1c3811f47bd40a2bf9
RUN --mount=type=cache,target=/root/.cache/pip \
git fetch && \
git reset --hard && \
git checkout ${BRANCH} && \
git reset --hard ${SHA} && \
pip install -U -e .
RUN --mount=type=cache,target=/root/.cache/pip \
--mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21-cp310-cp310-linux_x86_64.whl \
pip install -U opencv-python-headless triton /xformers-0.0.21-cp310-cp310-linux_x86_64.whl && \
python3 -c "from patchmatch import patch_match"
COPY . /docker/
ENV NVIDIA_VISIBLE_DEVICES=all
ENV PYTHONUNBUFFERED=1 PRELOAD=false HF_HOME=/root/.cache/huggingface CONFIG_DIR=/data/config/invoke CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD invokeai --web --host 0.0.0.0 --port 7860 --root_dir ${ROOT} --config ${CONFIG_DIR}/models.yaml \
--outdir /output/invoke --embedding_directory /data/embeddings/ --lora_directory /data/models/Lora \
--no-nsfw_checker --no-safety_checker ${CLI_ARGS}

File: services/invoke/entrypoint.sh (removed)

@@ -1,45 +0,0 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
mkdir -p ${CONFIG_DIR} ${ROOT}/configs/stable-diffusion/
# cache
MOUNTS["/root/.cache"]=/data/.cache/
# this is really just a hack to avoid migrations
rm -rf ${HF_HOME}/diffusers
# ui specific
MOUNTS["${ROOT}/models/codeformer"]=/data/models/Codeformer/
MOUNTS["${ROOT}/models/gfpgan/GFPGANv1.4.pth"]=/data/models/GFPGAN/GFPGANv1.4.pth
MOUNTS["${ROOT}/models/gfpgan/weights"]=/data/models/GFPGAN/
MOUNTS["${ROOT}/models/realesrgan"]=/data/models/RealESRGAN/
MOUNTS["${ROOT}/models/ldm"]=/data/.cache/invoke/ldm/
# hacks
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
# ends with slash, make it!
if [[ "$from_path" == */ ]]; then
mkdir -vp "$from_path"
fi
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
if "${PRELOAD}" == "true"; then
set -Eeuo pipefail
invokeai-configure --root ${ROOT} --yes
cp ${ROOT}/configs/models.yaml ${CONFIG_DIR}/models.yaml
fi
exec "$@"