47 Commits

Author SHA1 Message Date
AbdBarho
59892da866 Custom Models Auto (#75) 2022-09-17 13:44:00 +02:00
Abdullah Barhoum
fceb83c2b0 Dev hlky 2022-09-16 21:10:40 +02:00
AbdBarho
17b01a7627 Parallel Downloads (#74) 2022-09-16 20:07:50 +02:00
Abdullah Barhoum
b96d7c30d0 make executable 2022-09-16 18:37:41 +02:00
AbdBarho
aae83bb8f2 Update lstein to dev branch (#73) 2022-09-16 16:40:20 +02:00
AbdBarho
10763a8f61 Update Git Post Buffer 2022-09-16 06:51:14 +02:00
AbdBarho
64e8f093d2 Create stale.yml 2022-09-16 06:41:49 +02:00
AbdBarho
3e0a137c23 Remove outdated (#69) 2022-09-15 22:48:38 +02:00
AbdBarho
a1c16942ff Matrix CI/CD (#68) 2022-09-15 22:02:38 +02:00
AbdBarho
6ae3473214 Update auto (#67)
* Update auto
2022-09-15 21:05:31 +02:00
AbdBarho
5d731cb43c Add gcc (#66)
* update auto

* add gcc
2022-09-15 20:45:21 +02:00
AbdBarho
c1fa2f1457 Updates & Prepare hlky streamlit (#59) 2022-09-14 19:48:00 +02:00
AbdBarho
d8cfdd3af5 Update Versions (#58)
* Update Versions
2022-09-13 21:40:10 +02:00
AbdBarho
03d12cbcd9 Update README.md 2022-09-13 20:24:27 +02:00
AbdBarho
2e76b6c4e7 Update README.md 2022-09-13 19:51:58 +02:00
AbdBarho
5eae2076ce Update Automatic (#55)
* Update Automatic
2022-09-12 22:46:38 +02:00
AbdBarho
725e1f39ba Update (#54)
* Update hlky

* Update auto
2022-09-12 22:21:03 +02:00
AbdBarho
ab651fe0d7 Major refactor (#49)
* update folders

* Update CI

* Update CI

* Unify base docker images

* Remove dead config

* Update permissions

* Remove CPU Hack

* Update hlky

* Add Download Service

* Adapt services

* executable

* remove buggy parameter

* rename to SHA

* Update README
2022-09-11 20:18:50 +02:00
AbdBarho
f76f8d4671 Update Versions (#45)
* Update Automatic to 1b963c20

* Update hlky to dev branch

* Update automatic
2022-09-11 05:25:45 +02:00
AbdBarho
e32a48f42a Update Versions (#43) 2022-09-09 19:05:19 +02:00
AbdBarho
76989b39a6 Update Versions (#42)
* Update hlky

* Update automatic
2022-09-08 17:56:15 +02:00
AbdBarho
4d9fc381bb Remove restart on failure (#41) 2022-09-07 22:38:02 +02:00
AbdBarho
bcee253fe0 Update README.md 2022-09-07 19:52:43 +02:00
AbdBarho
499143009a Update Versions (#39) 2022-09-07 19:45:03 +02:00
AbdBarho
c614625f04 Update AUTOMATIC1111 (#37) 2022-09-06 20:16:28 +02:00
abdullah
ccd6e238b2 executable 2022-09-06 12:17:19 +02:00
Abdullah Barhoum
829864af9b Add Font 2022-09-05 21:05:43 +02:00
AbdBarho
ccc7306f48 Add AUTOMATIC1111 and lstein WebUIs (#32)
* Lstein

* Add AUTOMATIC1111 and lstein UIs

* Update Workflow
2022-09-05 19:51:22 +02:00
AbdBarho
082876aab3 SHA as Build ARG (#30) 2022-09-04 09:12:07 +02:00
AbdBarho
ae834cb764 Update Core to bb765f1 (#29) 2022-09-04 08:46:14 +02:00
AbdBarho
5f6d9fbb03 CI / Build Image (#27) 2022-09-03 17:36:19 +02:00
AbdBarho
d4da252343 Update README (#26) 2022-09-03 14:39:18 +02:00
AbdBarho
5af482ed8c Update Core to c84748a (#25) 2022-09-03 13:15:25 +02:00
AbdBarho
ce4e190f8f Force LF (#24) 2022-09-03 06:59:16 +02:00
AbdBarho
bae3590980 Make mount.sh executable in the git index (#22) 2022-09-02 16:31:55 +02:00
AbdBarho
1588d1eecf Force LF endings (#19) 2022-09-02 12:16:21 +02:00
AbdBarho
9cbd58b3f4 Update README.md (#18) 2022-09-02 09:58:26 +02:00
AbdBarho
089fc524d8 Add Latent Diffusion & Image Lab (#17)
* Add Latent Diffusion & Image Lab

* Update versions
2022-09-02 09:55:36 +02:00
AbdBarho
0d8b7d4ac8 Update Core to c5b2c86f (#15) 2022-09-01 06:27:15 +02:00
AbdBarho
561664ea6e Update FAQ (#14)
Closes #9

Add fix to green output
2022-08-31 22:32:21 +02:00
AbdBarho
77c2b2d217 Update issue templates (#13) 2022-08-31 19:06:03 +02:00
Abdullah Barhoum
6c0c610f27 Update Readme 2022-08-31 18:20:51 +02:00
Abdullah Barhoum
dc730b7f6b Typo 2022-08-31 18:18:32 +02:00
Abdullah Barhoum
15952906a1 Add Textual inversion 2022-08-31 18:17:44 +02:00
Abdullah Barhoum
4aaf38970a Update Core to ff8c2d0 2022-08-31 17:50:38 +02:00
AbdBarho
61bd38dfe4 Remove Outdated GFPGAN Comment 2022-08-31 08:51:06 +02:00
AbdBarho
bec4997639 Remove Outdated CLI Args 2022-08-31 08:50:23 +02:00
27 changed files with 782 additions and 137 deletions

.editorconfig Normal file

@@ -0,0 +1,9 @@
root = true
[*]
end_of_line = lf
indent_style = space
indent_size = 2
charset = utf-8
insert_final_newline = true
trim_trailing_whitespace = true

.gitattributes vendored Normal file

@@ -0,0 +1 @@
* text=auto eol=lf

.github/ISSUE_TEMPLATE/bug.md vendored Normal file

@@ -0,0 +1,36 @@
---
name: Bug
about: Report a bug
title: ''
labels: bug
assignees: ''
---
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue)**
**Describe the bug**
**Which UI**
hlky or auto or auto-cpu or lstein?
**Steps to Reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Hardware / Software:**
- OS: [e.g. Windows / Ubuntu and version]
- RAM:
- GPU: [Nvidia 1660 / No GPU]
- VRAM:
- Docker Version, Docker compose version
- Release version [e.g. 1.0.1]
**Additional context**
Any other context about the problem here. If applicable, add screenshots to help explain your problem.

.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,19 @@
name: Build Images
on: [push]
jobs:
  build:
    strategy:
      matrix:
        profile:
          - auto
          - hlky
          - lstein
          - download
    runs-on: ubuntu-latest
    name: ${{ matrix.profile }}
    steps:
      - uses: actions/checkout@v3
      # better caching?
      - run: docker compose --profile ${{ matrix.profile }} build --progress plain
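
For reference, any cell of this build matrix can be reproduced locally with the same command the workflow runs. A minimal sketch, assuming Docker Compose v2 is installed and the repository root is the working directory (the `hlky` profile is just one example):

```bash
# Build a single image locally, mirroring the CI matrix entry for the hlky profile.
docker compose --profile hlky build --progress plain
```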

.github/workflows/executable.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Check executable
on: [push]
jobs:
  check:
    runs-on: ubuntu-latest
    name: Check all sh
    steps:
      - run: git config --global core.fileMode true
      - uses: actions/checkout@v3
      - shell: bash
        run: |
          shopt -s globstar;
          FAIL=0
          for file in **/*.sh; do
            if [ -f "${file}" ] && [ -r "${file}" ] && [ ! -x "${file}" ]; then
              echo "$file" is not executable;
              FAIL=1
            fi
          done
          exit ${FAIL}
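
When this check flags a script, the executable bit has to be recorded in the git index itself (scripts/chmod.sh below automates this for all *.sh files). A hedged example, assuming services/hlky/mount.sh were the offending file:

```bash
# Mark the script as executable in the git index and commit the mode change.
git update-index --chmod=+x services/hlky/mount.sh
git commit -m "make executable"
```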

.github/workflows/stale.yml vendored Normal file

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '30 1 * * *'
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v5
        with:
          only-labels: awaiting-response
          stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
          close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
          days-before-issue-stale: 14
          days-before-pr-stale: 14
          days-before-issue-close: 7
          days-before-pr-close: 7

README.md

@@ -1,62 +1,74 @@
 # Stable Diffusion WebUI Docker
 Run Stable Diffusion on your machine with a nice UI without any hassle!
-This repository provides the [WebUI](https://github.com/hlky/stable-diffusion-webui) as docker for easy setup and deployment. Please note that this repo delivers all cutting-edge unstable changes from the WebUI, so expect some bugs.
-## Setup
-make sure you have docker installed and up to date. Download this repo and run:
-```
-docker compose build
-```
-you can let it build in the background while you download the different models
-- [Stable Diffusion v1.4 (4GB)](https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media), rename to `model.ckpt`
-- (Optional) [GFPGANv1.3.pth (333MB)](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) to improve generated faces.
-- (Optional) [RealESRGAN_x4plus.pth (64MB)](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth (18MB)](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) for super-sampling.
-Put all of the downloaded files in the `models` folder, it should look something like this:
-```
-models/
-├── GFPGANv1.3.pth
-├── RealESRGAN_x4plus.pth
-├── RealESRGAN_x4plus_anime_6B.pth
-└── model.ckpt
-```
-## Run
-After the build is done, you can run the app with:
-```
-docker compose up --build
-```
-Will start the app on http://localhost:7860/
-Note: the first start will take sometime as some other models will be downloaded, these will be cached in the `cache` folder, so next runs are faster.
-## Config
-in the `docker-compose.yml` you can change the `CLI_ARGS` variable contains all of the variables that will be passed to [the web ui](https://github.com/hlky/stable-diffusion/blob/fa977b3d6f9d0b264035c949fd70415476f00036/scripts/webui.py).
-By default: `--gfpgan-gpu 0 --esrgan-cpu --optimized-turbo` are given, which allow you to use this model on a 6GB GPU.
-NOTE: GFPGAN does not work on the CPU [More info here](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/4)
-# Disclaimer
+This repository provides multiple UIs for you to play around with stable diffusion:
+## Features
+### AUTOMATIC1111
+[AUTOMATIC1111's fork](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is imho the most feature rich yet elegant UI:
+- Text to image, with many samplers and even negative prompts!
+- Image to image, with masking, cropping, in-painting, out-painting, variations.
+- GFPGAN, RealESRGAN, LDSR, CodeFormer.
+- Loopback, prompt weighting, prompt matrix, X/Y plot
+- Live preview of the generated images.
+- Highly optimized 4GB GPU support, or even CPU only!
+- [Full feature list here](https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase)
+| Text to image | Image to image | Extras |
+| ------------- | -------------- | ------ |
+| ![](https://user-images.githubusercontent.com/24505302/189541954-46afd772-d0c8-4005-874c-e2eca40c02f2.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541956-5b528de7-1b5d-479f-a1db-d3f5a53afc59.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541957-cf78b352-a071-486d-8889-f26952779a61.jpg) |
+### hlky
+[hlky's fork](https://github.com/hlky/stable-diffusion-webui) is one of the most popular UIs, with many features:
+- Text to image, with many samplers
+- Image to image, with masking, cropping, in-painting, variations.
+- GFPGAN, RealESRGAN, LDSR, GoBig, GoLatent
+- Loopback, prompt weighting
+- 6GB or even 4GB GPU support!
+- [Full feature list here](https://github.com/sd-webui/stable-diffusion-webui/blob/master/README.md)
+Screenshots:
+| Text to image | Image to image | Image Lab |
+| ------------- | -------------- | --------- |
+| ![](https://user-images.githubusercontent.com/24505302/189541298-f902b021-a1eb-4e4b-b2eb-b6a696a8ec80.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541295-7d7f2162-2189-4e0a-abbd-703f4779e1cd.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541294-aa7f7735-a973-4e17-ada0-1fe3acbb1772.jpg) |
+### lstein
+[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
+| Text to image | Image to image | Extras |
+| ------------- | -------------- | ------ |
+| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
+## Setup & Usage
+Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
+## Contributing
+Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
+## Disclaimer
 The authors of this project are not responsible for any content generated using this interface.
 This license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read [the license](./LICENSE).
-# Thanks
+## Thanks
 Special thanks to everyone behind these awesome projects, without them, none of this would have been possible:
 - [hlky/stable-diffusion-webui](https://github.com/hlky/stable-diffusion-webui)
 - [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
+- [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion)
 - [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
+- [hlky/sd-enable-textual-inversion](https://github.com/hlky/sd-enable-textual-inversion)
+- [devilismyfriend/latent-diffusion](https://github.com/devilismyfriend/latent-diffusion)
+- [Hafiidz/latent-diffusion](https://github.com/Hafiidz/latent-diffusion)

build/Dockerfile

@@ -1,57 +0,0 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN git clone https://github.com/hlky/stable-diffusion.git && cd stable-diffusion && git reset --hard 554bd068e6f2f6bc55449a67fe017ddd77090f28
RUN conda env update --file stable-diffusion/environment.yaml --name base && conda clean -a -y
# fonts for generating the grid
RUN apt-get update && apt install fonts-dejavu-core && apt-get clean
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
RUN cd stable-diffusion && git pull && git reset --hard fa977b3d6f9d0b264035c949fd70415476f00036 && \
conda env update --file environment.yaml --name base && conda clean -a -y
# download dev UI version, update the sha below in case you want some other version
# RUN <<EOF
# git clone https://github.com/hlky/stable-diffusion-webui.git
# cd stable-diffusion-webui
# # map to this file: https://github.com/hlky/stable-diffusion-webui/blob/master/.github/sync.yml
# git reset --hard 49e6178fd82ca736f9bbc621c6b12487c300e493
# cp -t /stable-diffusion/scripts/ webui.py relauncher.py txt2img.yaml
# cp -t /stable-diffusion/configs/webui webui.yaml
# cp -t /stable-diffusion/frontend/ frontend/*
# cd / && rm -rf stable-diffusion-webui
# EOF
# Textual-inversion:
# RUN <<EOF
# git clone https://github.com/hlky/sd-enable-textual-inversion.git
# cp -rf sd-enable-textual-inversion /stable-diffusion
# EOF
# add info
COPY info.py /info.py
RUN python /info.py /stable-diffusion/frontend/frontend.py
WORKDIR /stable-diffusion
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS="" \
GFPGAN_PATH=/stable-diffusion/src/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth \
RealESRGAN_PATH=/stable-diffusion/src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth \
RealESRGAN_ANIME_PATH=/stable-diffusion/src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth
EXPOSE 7860
CMD \
for path in "${GFPGAN_PATH}" "${RealESRGAN_PATH}" "${RealESRGAN_ANIME_PATH}"; do \
name=$(basename "${path}"); \
base=$(dirname "${path}"); \
test -f "/models/${name}" && mkdir -p "${base}" && ln -sf "/models/${name}" "${path}" && echo "Mounted ${name}";\
done;\
# force facexlib cache
mkdir -p /cache/weights/ && rm -rf /stable-diffusion/src/facexlib/facexlib/weights && \
ln -sf /cache/weights/ /stable-diffusion/src/facexlib/facexlib/ && \
# run, -u to not buffer stdout / stderr
python3 -u scripts/webui.py --outdir /output --ckpt /models/model.ckpt --save-metadata ${CLI_ARGS}

cache/.gitignore vendored

@@ -1,3 +1,5 @@
 /torch
 /transformers
 /weights
+/models
+/custom-models

docker-compose.yml

@@ -1,17 +1,11 @@
 version: '3.9'
-services:
-  model:
-    build: ./build/
-    restart: on-failure
+x-base_service: &base_service
   ports:
     - "7860:7860"
   volumes:
-    - ./cache:/cache
-    - ./output:/output
-    - ./models:/models
-  environment:
-    - CLI_ARGS=--extra-models-cpu --optimized-turbo
+    - &v1 ./cache:/cache
+    - &v2 ./output:/output
   deploy:
     resources:
       reservations:
@@ -19,3 +13,45 @@ services:
         - driver: nvidia
           device_ids: ['0']
           capabilities: [gpu]
+name: webui-docker
+services:
+  download:
+    build: ./services/download/
+    profiles: ["download"]
+    volumes:
+      - *v1
+  hlky:
+    <<: *base_service
+    profiles: ["hlky"]
+    build: ./services/hlky/
+    environment:
+      - CLI_ARGS=--optimized-turbo
+  automatic1111: &automatic
+    <<: *base_service
+    profiles: ["auto"]
+    build: ./services/AUTOMATIC1111
+    volumes:
+      - *v1
+      - *v2
+      - ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
+    environment:
+      - CLI_ARGS=--medvram --opt-split-attention
+  automatic1111-cpu:
+    <<: *automatic
+    profiles: ["auto-cpu"]
+    deploy: {}
+    environment:
+      - CLI_ARGS=--no-half --precision full
+  lstein:
+    <<: *base_service
+    profiles: ["lstein"]
+    build: ./services/lstein/
+    environment:
+      - PRELOAD=false
+      - CLI_ARGS=
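
The profiles above are the intended entry points. A rough usage sketch, assuming the model weights have not been fetched yet; the wiki remains the authoritative guide:

```bash
# One-time download and checksum verification of the model weights into ./cache
docker compose --profile download up --build

# Start one of the UIs, e.g. AUTOMATIC1111 (swap in auto-cpu, hlky or lstein as needed)
docker compose --profile auto up --build
```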

models/.gitignore vendored

@@ -1,4 +0,0 @@
/model.ckpt
/GFPGANv1.3.pth
/RealESRGAN_x4plus.pth
/RealESRGAN_x4plus_anime_6B.pth

scripts/chmod.sh Executable file

@@ -0,0 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
find . -name "*.sh" -exec git update-index --chmod=+x {} \;

services/AUTOMATIC1111/Dockerfile Normal file

@@ -0,0 +1,65 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
RUN <<EOF
# because taming-transformers is huge
git config --global http.postBuffer 1048576000
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
EOF
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard 13eec4f3d4081fdc43883c5ef02e471a2b6c7212
conda env update --file environment-wsl2.yaml -n base
conda clean -a -y
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
ENV ROOT=/stable-diffusion-webui \
WORKDIR=/stable-diffusion-webui/repositories/stable-diffusion
COPY --from=download /git/ ${ROOT}
RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeFormer/requirements.txt
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=99585b3514e2d7e987651d5c6a0806f933af012b
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase
git reset --hard ${SHA}
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
COPY . /docker
RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
WORKDIR ${WORKDIR}
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir /cache/custom-models --ckpt /cache/models/model.ckpt ${CLI_ARGS}
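
Because the upstream revision is exposed as the `SHA` build argument, the image can be pinned to a different AUTOMATIC1111 commit without editing the Dockerfile. A sketch, where `<commit>` is a placeholder for the desired upstream SHA, not a value from this repository:

```bash
# Rebuild the auto image against a specific AUTOMATIC1111/stable-diffusion-webui commit.
docker compose --profile auto build --build-arg SHA=<commit>
```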

services/AUTOMATIC1111/config.json Normal file

@@ -0,0 +1,58 @@
{
"outdir_samples": "/output",
"outdir_txt2img_samples": "/output/txt2img-images",
"outdir_img2img_samples": "/output/img2img-images",
"outdir_extras_samples": "/output/extras-images",
"outdir_txt2img_grids": "/output/txt2img-grids",
"outdir_img2img_grids": "/output/img2img-grids",
"outdir_save": "/output/saved",
"font": "DejaVuSans.ttf",
"__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS",
"samples_filename_format": "",
"outdir_grids": "",
"save_to_dirs": false,
"grid_save_to_dirs": false,
"save_to_dirs_prompt_len": 10,
"samples_save": true,
"samples_format": "png",
"grid_save": true,
"return_grid": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"n_rows": -1,
"jpeg_quality": 80,
"export_for_4chan": true,
"enable_pnginfo": true,
"add_model_hash_to_info": false,
"enable_emphasis": true,
"save_txt": false,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"random_artist_categories": [],
"upscale_at_full_resolution_padding": 16,
"show_progressbar": true,
"show_progress_every_n_steps": 7,
"multiple_tqdm": true,
"face_restoration_model": null,
"code_former_weight": 0.5,
"save_images_before_face_restoration": false,
"face_restoration_unload": false,
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": true,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"samples_filename_pattern": "",
"directories_filename_pattern": "",
"save_selected_only": false,
"filter_nsfw": false,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"enable_quantization": false,
"enable_batch_seeds": true,
"memmon_poll_rate": 8,
"sd_model_checkpoint": null
}

services/AUTOMATIC1111/info.py Normal file

@@ -0,0 +1,14 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace(' return demo', """
with demo:
gr.Markdown(
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
)
return demo
""", 1)
)

services/AUTOMATIC1111/mount.sh Executable file

@@ -0,0 +1,35 @@
#!/bin/bash
set -e
declare -A MODELS
MODELS["${WORKDIR}/models/ldm/stable-diffusion-v1/model.ckpt"]=model.ckpt
MODELS["${ROOT}/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS_DIR=/cache/models
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
done
# force realesrgan cache
rm -rf /opt/conda/lib/python3.8/site-packages/realesrgan/weights
ln -s -T "${MODELS_DIR}" /opt/conda/lib/python3.8/site-packages/realesrgan/weights
# force facexlib cache
mkdir -p /cache/weights/ ${WORKDIR}/gfpgan/
ln -sf /cache/weights/ ${WORKDIR}/gfpgan/
# code former cache
rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/CodeFormer/weights/facelib
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models

services/download/Dockerfile Normal file

@@ -0,0 +1,6 @@
FROM bash:alpine3.15
RUN apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]

services/download/checksums.sha256 Normal file

@@ -0,0 +1,6 @@
fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 /cache/models/model.ckpt
c953a88f2727c85c3d9ae72e2bd4846bbaf59fe6972ad94130e23e7017524a70 /cache/models/GFPGANv1.3.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /cache/models/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /cache/models/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /cache/models/LDSR.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /cache/models/LDSR.yaml

services/download/download.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -Eeuo pipefail
mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models
echo "Downloading, this might take a while..."
aria2c --input-file /docker/links.txt --dir /cache/models --continue
echo "Checking SHAs..."
parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
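
The `parallel` line simply feeds every entry of checksums.sha256 through `sha256sum -c`. A minimal sketch of the equivalent check for a single model, assuming it is run inside the download container where /docker and /cache are available:

```bash
# Re-verify only model.ckpt using the same checksum list the script checks in bulk.
grep model.ckpt /docker/checksums.sha256 | sha256sum -c
```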

services/download/links.txt Normal file

@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
  out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
  out=GFPGANv1.3.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
  out=RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
  out=RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
  out=LDSR.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
  out=LDSR.ckpt

services/hlky/Dockerfile Normal file

@@ -0,0 +1,64 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
RUN <<EOF
git config --global http.postBuffer 1048576000
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# new dependency, should be added to the environment.yaml
RUN pip install -U --no-cache-dir pyperclip
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG BRANCH=master
ARG SHA=833a91047df999302f699637768741cecee9c37b
# ARG BRANCH=dev
# ARG SHA=5f3d7facdea58fc4f89b8c584d22a4639615a2f8
RUN <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# Latent diffusion
RUN <<EOF
git clone https://github.com/Hafiidz/latent-diffusion.git
cd latent-diffusion
git reset --hard e1a84a89fcbb49881546cf2acf1e7e250923dba0
# hacks all the way down
mv ldm ldm_latent &&
sed -i -- 's/from ldm/from ldm_latent/g' *.py
# dont forget to update the yaml!!
EOF
# add info
COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
WORKDIR /stable-diffusion
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt --ldsr-dir /latent-diffusion --inbrowser ${CLI_ARGS}
# STREAMLIT_SERVER_PORT=7860 python -m streamlit run scripts/webui_streamlit.py
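
The commented-out ARG pair documents a dev-branch pin; both build arguments can be overridden from the compose CLI without touching the Dockerfile. A hedged example using exactly those commented values:

```bash
# Build the hlky image against the dev branch state noted in the Dockerfile comments.
docker compose --profile hlky build \
  --build-arg BRANCH=dev \
  --build-arg SHA=5f3d7facdea58fc4f89b8c584d22a4639615a2f8
```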

services/hlky/info.py

@@ -9,7 +9,5 @@ file.write_text(
 Created using <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker">stable-diffusion-webui-docker</a>.
 </p>
 <p>For help and advanced usage guides,
-""", 1)\
-    .replace('img2img_cfg = gr.Slider(minimum=1.0, maximum=30.0', 'img2img_cfg = gr.Slider(minimum=1.0, maximum=60.0')
+""", 1)
 )

services/hlky/mount.sh Executable file

@@ -0,0 +1,38 @@
#!/bin/bash
set -e
declare -A MODELS
ROOT=/stable-diffusion/src
MODELS["${ROOT}/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth"]=RealESRGAN_x4plus.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth"]=RealESRGAN_x4plus_anime_6B.pth
MODELS["/latent-diffusion/experiments/pretrained_models/model.ckpt"]=LDSR.ckpt
# MODELS["/latent-diffusion/experiments/pretrained_models/project.yaml"]=LDSR.yaml
MODELS_DIR=/cache/models
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
done
# hack for latent-diffusion
if test -f "${MODELS_DIR}/LDSR.yaml"; then
sed 's/ldm\./ldm_latent\./g' "${MODELS_DIR}/LDSR.yaml" >/latent-diffusion/experiments/pretrained_models/project.yaml
fi
# force facexlib cache
mkdir -p /cache/weights/ /stable-diffusion/gfpgan/
ln -sf /cache/weights/ /stable-diffusion/gfpgan/
# streamlit config
ln -sf /docker/webui_streamlit.yaml /stable-diffusion/configs/webui/webui_streamlit.yaml

services/hlky/webui_streamlit.yaml Normal file

@@ -0,0 +1,155 @@
# UI defaults configuration file. It is automatically loaded if located at configs/webui/webui_streamlit.yaml.
general:
  gpu: 0
  outdir: /outputs
  default_model: "Stable Diffusion v1.4"
  default_model_config: "configs/stable-diffusion/v1-inference.yaml"
  default_model_path: "/cache/models/model.ckpt"
  fp:
  name:
  GFPGAN_dir: "./src/gfpgan"
  RealESRGAN_dir: "./src/realesrgan"
  RealESRGAN_model: "RealESRGAN_x4plus"
  outdir_txt2img: /outputs/txt2img-samples
  outdir_img2img: /outputs/img2img-samples
  gfpgan_cpu: False
  esrgan_cpu: False
  extra_models_cpu: False
  extra_models_gpu: False
  save_metadata: True
  save_format: "png"
  skip_grid: False
  skip_save: False
  grid_format: "jpg:95"
  n_rows: -1
  no_verify_input: False
  no_half: False
  use_float16: False
  precision: "autocast"
  optimized: False
  optimized_turbo: True
  optimized_config: "optimizedSD/v1-inference.yaml"
  update_preview: True
  update_preview_frequency: 5
txt2img:
  prompt:
  height: 512
  width: 512
  cfg_scale: 7.5
  seed: ""
  batch_count: 1
  batch_size: 1
  sampling_steps: 30
  default_sampler: "k_euler"
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  normalize_prompt_weights: True
  save_individual_images: True
  save_grid: True
  group_by_prompt: True
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  write_info_files: True
txt2vid:
  default_model: "CompVis/stable-diffusion-v1-4"
  custom_models_list:
    [
      "CompVis/stable-diffusion-v1-4",
      "naclbit/trinart_stable_diffusion_v2",
      "hakurei/waifu-diffusion",
      "osanseviero/BigGAN-deep-128",
    ]
  prompt:
  height: 512
  width: 512
  cfg_scale: 7.5
  seed: ""
  batch_count: 1
  batch_size: 1
  sampling_steps: 30
  num_inference_steps: 200
  default_sampler: "k_euler"
  scheduler_name: "klms"
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  dynamic_preview_frequency: True
  normalize_prompt_weights: True
  save_individual_images: True
  save_video: True
  group_by_prompt: True
  write_info_files: True
  do_loop: False
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  beta_start: 0.00085
  beta_end: 0.012
  beta_scheduler_type: "linear"
  max_frames: 1000
img2img:
  prompt:
  sampling_steps: 30
  # Adding an int to toggles enables the corresponding feature.
  # 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
  # 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
  # 2: Loopback (use images from previous batch when creating next batch)
  # 3: Random loopback seed
  # 4: Save individual images
  # 5: Save grid
  # 6: Sort samples by prompt
  # 7: Write sample info files
  # 8: jpg samples
  # 9: Fix faces using GFPGAN
  # 10: Upscale images using Real-ESRGAN
  sampler_name: "k_euler"
  denoising_strength: 0.45
  # 0: Keep masked area
  # 1: Regenerate only masked area
  mask_mode: 0
  mask_restore: False
  # 0: Just resize
  # 1: Crop and resize
  # 2: Resize and fill
  resize_mode: 0
  # Leave blank for random seed:
  seed: ""
  ddim_eta: 0.0
  cfg_scale: 7.5
  batch_count: 1
  batch_size: 1
  height: 512
  width: 512
  # Textual inversion embeddings file path:
  fp: ""
  loopback: True
  random_seed_loopback: True
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  normalize_prompt_weights: True
  save_individual_images: True
  save_grid: True
  group_by_prompt: True
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  write_info_files: True
gfpgan:
  strength: 100

services/lstein/Dockerfile Normal file

@@ -0,0 +1,44 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
RUN <<EOF
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion
git reset --hard 751283a2de81bee4bb571fbabe4adb19f1d85b97
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
ARG BRANCH=development SHA=45af30f3a4c98b50c755717831c5fff75a3a8b43
# ARG BRANCH=main SHA=89da371f4841f7e05da5a1672459d700c3920784
RUN <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
RUN pip uninstall opencv-python -y && pip install --prefer-binary --upgrade --force-reinstall --no-cache-dir opencv-python-headless
COPY . /docker/
RUN python3 /docker/info.py /stable-diffusion/static/dream_web/index.html && chmod +x /docker/mount.sh
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
CMD /docker/mount.sh && python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}

services/lstein/info.py Normal file

@@ -0,0 +1,10 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
""", 1)
)

services/lstein/mount.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
set -eu
ROOT=/stable-diffusion
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
if test -f /cache/models/GFPGANv1.3.pth; then
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.3.pth"
echo "Mounted GFPGANv1.3.pth"
fi
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.8/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py
fi