40 Commits
0.3.5 ... 2.0.1

Author SHA1 Message Date
AbdBarho
710280c7ab Update versions (#120)
- auto: 2995107fa2
  - More samplers
  - Textual inversion training
- hlky: 1a9c053cb7
  - Build times are SLOW
- lstein: 4f247a3672
  - Prepare for 2.0 release
  - very cool new UI!
2022-10-07 09:46:07 +02:00
AbdBarho
e1e03229fd Update versions (#116)
- auto: 1eb588cbf1
- hlky: 1e7bdfe3f3
2022-10-04 19:56:38 +02:00
AbdBarho
79868d88e8 Fix chmod on non-existing dir (#113)
closes #112
2022-10-02 09:25:31 +02:00
AbdBarho
6f5eef42a7 Fix typo (#111)
Closes #110
2022-10-01 19:59:54 +02:00
AbdBarho
14c4b36aff v2 (#108)
### Update versions
- auto: 3f417566b0

### Breaking changes:
* renamed `automatic-1111` service to `auto`
* the `cache` folder is now deprecated, replaced with `data` (see the migration guide and the layout sketch below)
* `embeddings` folder has been moved to `data/embeddings`
* use GFPGAN 1.4
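For orientation, the new `data` folder roughly follows the layout below (folder names taken from the `data/.gitignore` in this diff; a sketch, not an exhaustive listing):
```bash
# sketch: expected structure after the download / migration step
ls data/
# .cache/  StableDiffusion/  Codeformer/  GFPGAN/  ESRGAN/  BSRGAN/
# RealESRGAN/  SwinIR/  ScuNET/  LDSR/  embeddings/
ls data/StableDiffusion/
# model.ckpt  (plus any custom models you copy here)
```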

### Migration Guide

Note: in theory, running the command 
```
docker compose --profile download up --build
```
is all you need to use the new version. However, this also means you will
have to download everything again. A new script is available under
`scripts/migratev1tov2.sh` that copies the models into the new structure
and should get you most of the way. Run
```bash
./scripts/migratev1tov2.sh
```
or inspect the script yourself and copy the files manually.

After that, run
```
docker compose --profile download up --build
```
to validate everything.
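Once the download profile has finished, the individual UIs are started through compose profiles as well. A minimal example (profile names taken from the new `docker-compose.yml`):
```bash
# start the renamed AUTOMATIC1111 service under its new profile name
docker compose --profile auto up --build
# hlky, lstein and auto-cpu work the same way, e.g.:
# docker compose --profile hlky up --build
```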
2022-10-01 12:57:53 +02:00
AbdBarho
28f171e64d Update / Disable lstein Temporarily (#106)
- auto: f80c3696f6
  - The model merger now works! The resulting model is saved in `cache/custom-models`
- hlky: aaa3be16e0
- lstein: 8c9f2ae705
  - This UI has been temporarily disabled due to a limitation in the output path:
    8c9f2ae705/backend/modules/create_cmd_parser.py (L26)
2022-09-30 09:37:27 +02:00
AbdBarho
9af4a23ec4 Stalebot: don't ignore updates 2022-09-29 12:04:40 +02:00
AbdBarho
24ecd676ab Update versions (#104)
- auto: 15f333a266
  - Checkpoint merger NOT WORKING!!!
- hlky: 7bd785d28f
  - Streamlit UI still unstable and clunky
2022-09-28 10:18:07 +02:00
Sebastian Piechowiak
ef36c50cf9 Docker compose .gitignore update (#100)
Docker Compose allows overriding some settings in `docker-compose.yml` by
using an additional file: `docker-compose.override.yml`.
This makes it possible to keep your own settings in the override file
without conflicting with updates pulled in via `git pull`.

This feature requires three things:
1. Creating a `docker-compose.override.yml-dist` file that is distributed
inside the repo. This file can be copied to `docker-compose.override.yml`
and modified for your own needs (see the sketch below).
2. Changing `.gitignore` so that `docker-compose.override.yml` is ignored
and `git pull` / `git commit` do not complain about the file.
3. Updating the wiki entry about setup to mention this method.
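A minimal usage sketch (assuming the dist template sits at the repo root as described in point 1):
```bash
# copy the distributed template and adjust it to your needs
cp docker-compose.override.yml-dist docker-compose.override.yml
# docker compose merges the override automatically on the next run
docker compose --profile auto up --build
```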

Closes #101
2022-09-28 08:36:53 +02:00
AbdBarho
43a5e5e85f Update versions (#99)
- auto: ca3e5519e8
- hlky: 1fd28eed1e
- lstein: b40bfb5116
2022-09-26 08:31:47 +02:00
Rafael Goes
5bbc21ea3d Adding embeddings volume for auto textual inversion (#98)
Adding embeddings volume mapping for AUTOMATIC1111, enabling textual
inversion feature. As discussed in #93
2022-09-25 18:56:38 +02:00
AbdBarho
09366ed955 Ignore Updates 2022-09-25 12:42:33 +02:00
AbdBarho
d4874e7c3a Update versions (#96)
- auto: a2bea2f97a
- hlky: f585ab1923
  - New UI is still in the works & extremely unstable
- lstein: No new updates, especially not to the UI
2022-09-24 11:10:11 +02:00
AbdBarho
7638fb4e5e Fix CLIP model caching #88 (#95)
Refs #88
A hacky solution, but it works for now.
2022-09-24 09:57:57 +02:00
AbdBarho
15a61a99d6 Explicit path to GFPGAN model (#91)
Refs #89
2022-09-23 16:38:50 +02:00
AbdBarho
556a50f49b Pin transformers version (#90)
Refs #88
2022-09-23 16:24:14 +02:00
AbdBarho
b899f4e516 Update (#87)
### Update versions

- auto: d6fd71f36f
- hlky: 2a911049aa
2022-09-23 10:34:01 +02:00
AbdBarho
a8c85b4699 Update versions (#86)
- auto: 5a1951f175
  - Now with LDSR support
- hlky: fa6a31b23c
- lstein: prepare for new UI

Closes #85
2022-09-21 19:10:27 +02:00
Abdullah Barhoum
a96285d10b Update License 2022-09-20 19:35:10 +02:00
AbdBarho
83b78fe504 Update versions (#82)
### Update versions

- auto: dd911a47b3
- hlky: 17748cbc9c
- lstein: 50d607ffea
2022-09-19 22:02:46 +02:00
AbdBarho
84f9cb84e7 Update versions (#77)
AUTOMATIC1111/stable-diffusion-webui@9e892d9

lstein/stable-diffusion@9bcb0df

transformers==4.22 for caching

Refs #78
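For reference, the pin boils down to something like the line below at image build time (a sketch; where exactly it lives in the Dockerfile/requirements is not shown in this commit):
```bash
pip install --prefer-binary --no-cache-dir transformers==4.22
```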
2022-09-18 13:49:06 +02:00
AbdBarho
6a66ff6abb Update hlky to dev (#76)

abb0c1c377
2022-09-17 16:09:54 +02:00
AbdBarho
59892da866 Custom Models Auto (#75) 2022-09-17 13:44:00 +02:00
Abdullah Barhoum
fceb83c2b0 Dev hlky 2022-09-16 21:10:40 +02:00
AbdBarho
17b01a7627 Parallel Downloads (#74) 2022-09-16 20:07:50 +02:00
Abdullah Barhoum
b96d7c30d0 make executable 2022-09-16 18:37:41 +02:00
AbdBarho
aae83bb8f2 Update lstein to dev branch (#73) 2022-09-16 16:40:20 +02:00
AbdBarho
10763a8f61 Update Git Post Buffer 2022-09-16 06:51:14 +02:00
AbdBarho
64e8f093d2 Create stale.yml 2022-09-16 06:41:49 +02:00
AbdBarho
3e0a137c23 Remove outdated (#69) 2022-09-15 22:48:38 +02:00
AbdBarho
a1c16942ff Matrix CI/CD (#68) 2022-09-15 22:02:38 +02:00
AbdBarho
6ae3473214 Update auto (#67)
* Update auto
2022-09-15 21:05:31 +02:00
AbdBarho
5d731cb43c Add gcc (#66)
* update auto

* add gcc
2022-09-15 20:45:21 +02:00
AbdBarho
c1fa2f1457 Updates & Prepare hlky streamlit (#59) 2022-09-14 19:48:00 +02:00
AbdBarho
d8cfdd3af5 Update Versions (#58)
* Update Versions
2022-09-13 21:40:10 +02:00
AbdBarho
03d12cbcd9 Update README.md 2022-09-13 20:24:27 +02:00
AbdBarho
2e76b6c4e7 Update README.md 2022-09-13 19:51:58 +02:00
AbdBarho
5eae2076ce Update Automatic (#55)
* Update Automatic
2022-09-12 22:46:38 +02:00
AbdBarho
725e1f39ba Update (#54)
* Update hlky

* Update auto
2022-09-12 22:21:03 +02:00
AbdBarho
ab651fe0d7 Major refactor (#49)
* update folders

* Update CI

* Update CI

* Unify base docker images

* Remove dead config

* Update permissions

* Remove CPU Hack

* Update hlky

* Add Download Service

* Adapt services

* executable

* remove buggy parameter

* rename to SHA

* Update README
2022-09-11 20:18:50 +02:00
38 changed files with 614 additions and 389 deletions

.devscripts/chmod.sh Executable file

@@ -0,0 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
find . -name "*.sh" -exec git update-index --chmod=+x {} \;


@@ -7,13 +7,17 @@ assignees: ''
---
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue) and in [the issues in the WebUI repo](https://github.com/hlky/stable-diffusion-webui)**
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue)**
**Describe the bug**
**Which UI**
hlky or auto or auto-cpu or lstein?
**Steps to Reproduce**
1. Go to '...'
2. Click on '....'
@@ -22,8 +26,11 @@ assignees: ''
**Hardware / Software:**
- OS: [e.g. Windows / Ubuntu and version]
- RAM:
- GPU: [Nvidia 1660 / No GPU]
- Version [e.g. 22]
- VRAM:
- Docker Version, Docker compose version
- Release version [e.g. 1.0.1]
**Additional context**
Any other context about the problem here. If applicable, add screenshots to help explain your problem.

.github/pull_request_template.md vendored Normal file

@@ -0,0 +1,5 @@
### Update versions
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- hlky: https://github.com/sd-webui/stable-diffusion-webui/commit/
- lstein: https://github.com/lstein/stable-diffusion/commit/


@@ -1,24 +1,19 @@
name: Build Image
name: Build Images
on: [push]
# TODO: how to cache intermediate images?
jobs:
build_hlky:
build:
strategy:
matrix:
profile:
- auto
- hlky
- lstein
- download
runs-on: ubuntu-latest
name: hlky
name: ${{ matrix.profile }}
steps:
- uses: actions/checkout@v3
- run: docker compose build --progress plain
build_AUTOMATIC1111:
runs-on: ubuntu-latest
name: AUTOMATIC1111
steps:
- uses: actions/checkout@v3
- run: cd AUTOMATIC1111 && docker compose build --progress plain
build_lstein:
runs-on: ubuntu-latest
name: lstein
steps:
- uses: actions/checkout@v3
- run: cd lstein && docker compose build --progress plain
# better caching?
- run: docker compose --profile ${{ matrix.profile }} build --progress plain

.github/workflows/stale.yml vendored Normal file

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '30 1 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v5
with:
only-labels: awaiting-response
stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
days-before-issue-stale: 14
days-before-pr-stale: 14
days-before-issue-close: 7
days-before-pr-close: 7

.gitignore vendored

@@ -1,2 +1,2 @@
/dev
/.devcontainer
/docker-compose.override.yml


@@ -1,61 +0,0 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
RUN <<EOF
# who knows
git config --global http.postBuffer 1048576000
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
EOF
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard 13eec4f3d4081fdc43883c5ef02e471a2b6c7212
conda env update --file environment-wsl2.yaml -n base
conda clean -a -y
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
ENV ROOT=/stable-diffusion-webui \
WORKDIR=/stable-diffusion-webui/repositories/stable-diffusion
COPY --from=download /git/ ${ROOT}
RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeFormer/requirements.txt
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=06fadd2dc5c2753558a9f3971568c2673819f48c
RUN <<EOF
cd stable-diffusion-webui
git pull
git reset --hard ${SHA}
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless markupsafe==2.0.1
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
COPY . /docker
RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
WORKDIR ${WORKDIR}
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && python3 -u ../../webui.py --listen --port 7860 ${CLI_ARGS}


@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the cli parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of cil parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)


@@ -1 +0,0 @@
{"outdir_samples": "/output", "outdir_txt2img_samples": "/output/txt2img-images", "outdir_img2img_samples": "/output/img2img-images", "outdir_extras_samples": "/output/extras-images", "outdir_txt2img_grids": "/output/txt2img-grids", "outdir_img2img_grids": "/output/img2img-grids", "outdir_save": "/output/saved", "__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS", "outdir_grids": "", "save_to_dirs": false, "save_to_dirs_prompt_len": 10, "samples_save": true, "samples_format": "png", "grid_save": true, "return_grid": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "n_rows": -1, "jpeg_quality": 80, "export_for_4chan": true, "enable_pnginfo": true, "font": "DejaVuSans.ttf", "enable_emphasis": true, "save_txt": false, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "random_artist_categories": [], "upscale_at_full_resolution_padding": 16, "show_progressbar": true, "show_progress_every_n_steps": 0, "multiple_tqdm": true, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5}


@@ -1,21 +0,0 @@
version: '3.9'
services:
model:
build: .
ports:
- "7860:7860"
volumes:
- ../cache:/cache
- ../output:/output
- ../models:/models
- ./config.json:/docker/config.json
environment:
- CLI_ARGS=--medvram --opt-split-attention
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]


@@ -1,34 +0,0 @@
#!/bin/bash
set -e
declare -A MODELS
MODELS["${WORKDIR}/models/ldm/stable-diffusion-v1/model.ckpt"]=model.ckpt
MODELS["${ROOT}/GFPGANv1.3.pth"]=GFPGANv1.3.pth
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="/models/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
done
# force realesrgan cache
rm -rf /opt/conda/lib/python3.8/site-packages/realesrgan/weights
ln -s -T /models /opt/conda/lib/python3.8/site-packages/realesrgan/weights
# force facexlib cache
mkdir -p /cache/weights/ ${WORKDIR}/gfpgan/
ln -sf /cache/weights/ ${WORKDIR}/gfpgan/
# code former cache
rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/CodeFormer/weights/facelib
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
# mount config
ln -sf /docker/config.json ${WORKDIR}/config.json


@@ -86,4 +86,11 @@ administration of justice, law enforcement, immigration or asylum
processes, such as predicting an individual will commit fraud/crime
commitment (e.g. by text profiling, drawing causal relationships between
assertions made in documents, indiscriminate and arbitrarily-targeted
use).
use).
By using this software, you also agree to the following licenses:
https://github.com/CompVis/stable-diffusion/blob/main/LICENSE
https://github.com/TencentARC/GFPGAN/blob/master/LICENSE
https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE


@@ -2,84 +2,69 @@
Run Stable Diffusion on your machine with a nice UI without any hassle!
This repository provides the [WebUI](https://github.com/hlky/stable-diffusion-webui) as a docker image for easy setup and deployment.
Now with experimental support for 2 other forks:
- [AUTOMATIC1111](./AUTOMATIC1111/) (Stable, very few bugs!)
- [lstein](./lstein/)
NOTE: big update coming up!
This repository provides multiple UIs for you to play around with stable diffusion:
## Features
- Interactive UI with many features, and more on the way!
- Support for 6GB GPU cards.
- GFPGAN for face reconstruction, RealESRGAN for super-sampling.
- Experimental:
- Latent Diffusion Super Resolution
- GoBig
- GoLatent
- many more!
### AUTOMATIC1111
## Setup
[AUTOMATIC1111's fork](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is imho the most feature rich yet elegant UI:
Make sure you have an **up to date** version of docker installed. Download this repo and run:
- Text to image, with many samplers and even negative prompts!
- Image to image, with masking, cropping, in-painting, out-painting, variations.
- GFPGAN, RealESRGAN, LDSR, CodeFormer.
- Loopback, prompt weighting, prompt matrix, X/Y plot
- Live preview of the generated images.
- Highly optimized 4GB GPU support, or even CPU only!
- Textual inversion allows you to use pretrained textual inversion embeddings
- [Full feature list here](https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase)
```
docker compose build
```
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541954-46afd772-d0c8-4005-874c-e2eca40c02f2.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541956-5b528de7-1b5d-479f-a1db-d3f5a53afc59.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541957-cf78b352-a071-486d-8889-f26952779a61.jpg) |
you can let it build in the background while you download the different models
### hlky
- [Stable Diffusion v1.4 (4GB)](https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media), rename to `model.ckpt`
- (Optional) [GFPGANv1.3.pth (333MB)](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth).
- (Optional) [RealESRGAN_x4plus.pth (64MB)](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) and [RealESRGAN_x4plus_anime_6B.pth (18MB)](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth).
- (Optional) [LDSR (2GB)](https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1) and [its configuration](https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1), rename to `LDSR.ckpt` and `LDSR.yaml` respectively.
<!-- - (Optional) [RealESRGAN_x2plus.pth (64MB)](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth)
- TODO: (I still need to find the RealESRGAN_x2plus_6b.pth) -->
[hlky's fork](https://github.com/hlky/stable-diffusion-webui) is one of the most popular UIs, with many features:
Put all of the downloaded files in the `models` folder, it should look something like this:
- Text to image, with many samplers
- Image to image, with masking, cropping, in-painting, variations.
- GFPGAN, RealESRGAN, LDSR, GoBig, GoLatent
- Loopback, prompt weighting
- 6GB or even 4GB GPU support!
- [Full feature list here](https://github.com/sd-webui/stable-diffusion-webui/blob/master/README.md)
```
models/
├── model.ckpt
├── GFPGANv1.3.pth
├── RealESRGAN_x4plus.pth
├── RealESRGAN_x4plus_anime_6B.pth
├── LDSR.ckpt
└── LDSR.yaml
```
Screenshots:
## Run
| Text to image | Image to image | Image Lab |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541298-f902b021-a1eb-4e4b-b2eb-b6a696a8ec80.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541295-7d7f2162-2189-4e0a-abbd-703f4779e1cd.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541294-aa7f7735-a973-4e17-ada0-1fe3acbb1772.jpg) |
After the build is done, you can run the app with:
<!--
### lstein
```
docker compose up --build
```
[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
Will start the app on http://localhost:7860/
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
-->
Note: the first start will take sometime as some other models will be downloaded, these will be cached in the `cache` folder, so next runs are faster.
## Setup & Usage
### FAQ
Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
You can find fixes to common issues [in the wiki page.](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ)
## Contributing
## Config
Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
in the `docker-compose.yml` you can change the `CLI_ARGS` variable, which contains the arguments that will be passed to the WebUI. By default: `--extra-models-cpu --optimized-turbo` are given, which allow you to use this model on a 6GB GPU. However, some features might not be available in the mode. [You can find the full list of arguments here.](https://github.com/hlky/stable-diffusion-webui/blob/2b1ac8daf7ea82c6c56eabab7e80ec1c33106a98/scripts/webui.py)
You can set the `WEBUI_SHA` to [any SHA from the main repo](https://github.com/hlky/stable-diffusion/commits/main), this will build the container against that commit. Use at your own risk.
# Disclaimer
## Disclaimer
The authors of this project are not responsible for any content generated using this interface.
This license of this software forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please read [the license](./LICENSE).
# Thanks
## Thanks
Special thanks to everyone behind these awesome projects, without them, none of this would have been possible:
@@ -89,3 +74,4 @@ Special thanks to everyone behind these awesome projects, without them, none of
- [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- [hlky/sd-enable-textual-inversion](https://github.com/hlky/sd-enable-textual-inversion)
- [devilismyfriend/latent-diffusion](https://github.com/devilismyfriend/latent-diffusion)
- [Hafiidz/latent-diffusion](https://github.com/Hafiidz/latent-diffusion)

cache/.gitignore vendored

@@ -1,3 +0,0 @@
/torch
/transformers
/weights

data/.gitignore vendored Normal file

@@ -0,0 +1,14 @@
# for all of the stuff downloaded by transformers, pytorch, and others
/.cache
# for all stable diffusion models (main, waifu diffusion, etc..)
/StableDiffusion
# others
/Codeformer
/GFPGAN
/ESRGAN
/BSRGAN
/RealESRGAN
/SwinIR
/ScuNET
/LDSR
/embeddings


@@ -1,22 +1,11 @@
version: '3.9'
services:
model:
build:
context: ./hlky/
args:
# You can choose any commit sha from https://github.com/hlky/stable-diffusion/commits/main
# USE AT YOUR OWN RISK! otherwise just leave it empty.
BRANCH:
WEBUI_SHA:
x-base_service: &base_service
ports:
- "7860:7860"
volumes:
- ./cache:/cache
- ./output:/output
- ./models:/models
environment:
- CLI_ARGS=--extra-models-cpu --optimized-turbo
- &v1 ./data:/data
- &v2 ./output:/output
deploy:
resources:
reservations:
@@ -24,3 +13,45 @@ services:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]
name: webui-docker
services:
download:
build: ./services/download/
profiles: ["download"]
volumes:
- *v1
hlky:
<<: *base_service
profiles: ["hlky"]
build: ./services/hlky/
environment:
- CLI_ARGS=--optimized-turbo
auto: &automatic
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
volumes:
- *v1
- *v2
- ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
environment:
- CLI_ARGS=--allow-code --medvram
auto-cpu:
<<: *automatic
profiles: ["auto-cpu"]
deploy: {}
environment:
- CLI_ARGS=--no-half --precision full
lstein:
<<: *base_service
profiles: ["lstein"]
build: ./services/lstein/
environment:
- PRELOAD=false
- CLI_ARGS=


@@ -1,65 +0,0 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 2b1ac8daf7ea82c6c56eabab7e80ec1c33106a98
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# new dependency, should be added to the environment.yaml
RUN pip install -U --no-cache-dir pyperclip
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG BRANCH=dev
ARG WEBUI_SHA=be2ece06837e37d90181a17340c7e1aac91ba4fb
RUN <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${WEBUI_SHA}
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# Textual inversion
RUN <<EOF
git clone https://github.com/hlky/sd-enable-textual-inversion.git &&
cd /sd-enable-textual-inversion && git reset --hard 08f9b5046552d17cf7327b30a98410222741b070 &&
rsync -a /sd-enable-textual-inversion/ /stable-diffusion/ &&
rm -rf /sd-enable-textual-inversion
EOF
# Latent diffusion
RUN <<EOF
git clone https://github.com/devilismyfriend/latent-diffusion &&
cd /latent-diffusion &&
git reset --hard 6d61fc03f15273a457950f2cdc10dddf53ba6809 &&
# hacks all the way down
mv ldm ldm_latent &&
sed -i -- 's/from ldm/from ldm_latent/g' *.py
# dont forget to update the yaml!!
EOF
# add info
COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
WORKDIR /stable-diffusion
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && python3 -u scripts/webui.py --outdir /output --ckpt /models/model.ckpt --ldsr-dir /latent-diffusion ${CLI_ARGS}


@@ -1,30 +0,0 @@
#!/bin/bash
set -e
declare -A MODELS
MODELS["/stable-diffusion/src/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS["/stable-diffusion/src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth"]=RealESRGAN_x4plus.pth
MODELS["/stable-diffusion/src/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth"]=RealESRGAN_x4plus_anime_6B.pth
MODELS["/latent-diffusion/experiments/pretrained_models/model.ckpt"]=LDSR.ckpt
# MODELS["/latent-diffusion/experiments/pretrained_models/project.yaml"]=LDSR.yaml
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="/models/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
done
# hack for latent-diffusion
if test -f /models/LDSR.yaml; then
sed 's/ldm\./ldm_latent\./g' /models/LDSR.yaml >/latent-diffusion/experiments/pretrained_models/project.yaml
fi
# force facexlib cache
mkdir -p /cache/weights/ /stable-diffusion/gfpgan/
ln -sf /cache/weights/ /stable-diffusion/gfpgan/


@@ -1,29 +0,0 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion
git reset --hard 751283a2de81bee4bb571fbabe4adb19f1d85b97
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
ln -sf /models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}


@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```


@@ -1,20 +0,0 @@
version: '3.9'
services:
model:
build: .
ports:
- "7860:7860"
volumes:
- ../cache:/cache
- ../output:/output
- ../models:/models
environment:
- CLI_ARGS=
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ['0']
capabilities: [gpu]

models/.gitignore vendored

@@ -1,7 +0,0 @@
/model.ckpt
/GFPGANv1.3.pth
/RealESRGAN_x2plus.pth
/RealESRGAN_x4plus.pth
/RealESRGAN_x4plus_anime_6B.pth
/LDSR.ckpt
/LDSR.yaml

scripts/migratev1tov2.sh Executable file

@@ -0,0 +1,30 @@
mkdir -p data/.cache data/StableDiffusion data/Codeformer data/GFPGAN data/ESRGAN data/BSRGAN data/RealESRGAN data/SwinIR data/LDSR data/embeddings
cp -vf cache/models/model.ckpt data/StableDiffusion/model.ckpt
cp -vf cache/models/LDSR.ckpt data/LDSR/model.ckpt
cp -vf cache/models/LDSR.yaml data/LDSR/project.yaml
cp -vf cache/models/RealESRGAN_x4plus.pth data/RealESRGAN/
cp -vf cache/models/RealESRGAN_x4plus_anime_6B.pth data/RealESRGAN/
cp -vrf cache/torch data/.cache/
mkdir -p data/.cache/huggingface/transformers/
cp -vrf cache/transformers/* data/.cache/huggingface/transformers/
cp -v cache/custom-models/* data/StableDiffusion/
mkdir -p data/.cache/clip/
cp -vf cache/weights/ViT-L-14.pt data/.cache/clip/
cp -vf cache/weights/codeformer.pth data/Codeformer/codeformer-v0.1.0.pth
cp -vf cache/weights/detection_Resnet50_Final.pth data/.cache/
cp -vf cache/weights/parsing_parsenet.pth data/.cache/
cp -v embeddings/* data/embeddings/
echo this script was created 10/2022
echo Dont forget to run: docker compose --profile download up --build
echo the cache and embeddings folders can be deleted, but its not necessary.


@@ -0,0 +1,77 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
RUN git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion && cd repositories/stable-diffusion && git reset --hard 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc
RUN git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer && cd repositories/CodeFormer && git reset --hard c5b4593074ba6214284d6acd5f1719b6c5d739af
RUN git clone https://github.com/salesforce/BLIP.git repositories/BLIP && cd repositories/BLIP && git reset --hard 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN git clone https://github.com/Hafiidz/latent-diffusion.git repositories/latent-diffusion && cd repositories/latent-diffusion && git reset --hard abf33e7002d59d9085081bce93ec798dcabd49af
RUN <<EOF
# because taming-transformers is huge
git config --global http.postBuffer 1048576000
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
git reset --hard 24268930bf1dce879235a7fddd0b2355b84d7ea6
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
EOF
RUN git clone https://github.com/crowsonkb/k-diffusion.git repositories/k-diffusion && cd repositories/k-diffusion && git reset --hard f4e99857772fc3a126ba886aadf795a332774878
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard 1eb588cbf19924333b88beaa1ac0041904966640
pip install --prefer-binary --no-cache-dir -r requirements_versions.txt
EOF
ENV ROOT=/stable-diffusion-webui \
WORKDIR=/stable-diffusion-webui/repositories/stable-diffusion
COPY --from=download /git/ ${ROOT}
RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeFormer/requirements.txt
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=2995107fa24cfd72b0a991e18271dcde148c2807
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase
git reset --hard ${SHA}
pip install --prefer-binary --no-cache-dir -r requirements_versions.txt
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
RUN pip install --prefer-binary --no-cache-dir opencv-python-headless \
git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
COPY . /docker
RUN <<EOF
chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
EOF
ENV CLI_ARGS=""
WORKDIR ${WORKDIR}
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir ${ROOT}/models/Stable-diffusion ${CLI_ARGS}


@@ -0,0 +1,79 @@
{
"outdir_samples": "/output",
"outdir_txt2img_samples": "/output/txt2img-images",
"outdir_img2img_samples": "/output/img2img-images",
"outdir_extras_samples": "/output/extras-images",
"outdir_txt2img_grids": "/output/txt2img-grids",
"outdir_img2img_grids": "/output/img2img-grids",
"outdir_save": "/output/saved",
"font": "DejaVuSans.ttf",
"__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS",
"add_model_hash_to_info": false,
"code_former_weight": 0.5,
"directories_filename_pattern": "",
"directories_max_prompt_words": 8,
"enable_batch_seeds": true,
"enable_emphasis": true,
"enable_pnginfo": true,
"enable_quantization": false,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"export_for_4chan": true,
"face_restoration_model": null,
"face_restoration_unload": false,
"filter_nsfw": false,
"grid_extended_filename": false,
"grid_format": "png",
"grid_only_if_multiple": true,
"grid_save": true,
"grid_save_to_dirs": false,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_max_length": 48,
"interrogate_clip_min_length": 24,
"interrogate_clip_num_beams": 1,
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": true,
"jpeg_quality": 80,
"js_modal_lightbox": true,
"js_modal_lightbox_initialy_zoomed": true,
"ldsr_post_down": 1,
"ldsr_pre_down": 1,
"ldsr_steps": 30,
"memmon_poll_rate": 8,
"multiple_tqdm": true,
"n_rows": -1,
"outdir_grids": "",
"random_artist_categories": [],
"realesrgan_enabled_models": [
"Real-ESRGAN 4x plus",
"Real-ESRGAN 4x plus anime 6B",
"Real-ESRGAN 2x plus",
"Real-ESRGAN AnimeVideo",
"Real-ESRGAN General WDN x4x3",
"Real-ESRGAN General x4x3"
],
"return_grid": true,
"samples_filename_format": "",
"samples_filename_pattern": "",
"samples_format": "png",
"samples_log_stdout": false,
"samples_save": true,
"save_images_before_color_correction": false,
"save_images_before_face_restoration": false,
"save_selected_only": false,
"save_to_dirs": false,
"save_to_dirs_prompt_len": 10,
"save_txt": false,
"sd_model_checkpoint": null,
"show_progress_every_n_steps": 7,
"show_progressbar": true,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"upscale_at_full_resolution_padding": 16,
"upscaler_for_hires_fix": null,
"upscaler_for_img2img": null,
"use_original_name_batch": false
}


@@ -7,7 +7,7 @@ file.write_text(
.replace(' return demo', """
with demo:
gr.Markdown(
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/tree/master/AUTOMATIC1111)'
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
)
return demo
""", 1)

services/AUTOMATIC1111/mount.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
MOUNTS["/root/.cache"]="/data/.cache"
# main
MOUNTS["${ROOT}/models/Stable-diffusion"]="/data/StableDiffusion"
MOUNTS["${ROOT}/models/Codeformer"]="/data/Codeformer"
MOUNTS["${ROOT}/models/GFPGAN"]="/data/GFPGAN"
MOUNTS["${ROOT}/models/ESRGAN"]="/data/ESRGAN"
MOUNTS["${ROOT}/models/BSRGAN"]="/data/BSRGAN"
MOUNTS["${ROOT}/models/RealESRGAN"]="/data/RealESRGAN"
MOUNTS["${ROOT}/models/SwinIR"]="/data/SwinIR"
MOUNTS["${ROOT}/models/ScuNET"]="/data/ScuNET"
MOUNTS["${ROOT}/models/LDSR"]="/data/LDSR"
MOUNTS["${ROOT}/embeddings"]="/data/embeddings"
# extra hacks
MOUNTS["${ROOT}/repositories/CodeFormer/weights/facelib"]="/data/.cache"
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -vp "$from_path"
mkdir -vp "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done


@@ -0,0 +1,6 @@
FROM bash:alpine3.15
RUN apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]


@@ -0,0 +1,6 @@
fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 /data/StableDiffusion/model.ckpt
e2cd4703ab14f4d01fd1383a8a8b266f9a5833dacee8e6a79d3bf21a1b6be5ad /data/GFPGAN/GFPGANv1.4.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /data/RealESRGAN/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /data/RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /data/LDSR/model.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /data/LDSR/project.yaml

services/download/download.sh Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
set -Eeuo pipefail
mkdir -p /data/.cache /data/StableDiffusion /data/Codeformer /data/GFPGAN /data/ESRGAN /data/BSRGAN /data/RealESRGAN /data/SwinIR /data/LDSR /data/ScuNET /data/embeddings
cat <<EOF
By using this software, you agree to the following licenses:
https://github.com/CompVis/stable-diffusion/blob/main/LICENSE
https://github.com/TencentARC/GFPGAN/blob/master/LICENSE
https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE
EOF
echo "Downloading, this might take a while..."
aria2c --input-file /docker/links.txt --dir /data --continue
echo "Checking SHAs..."
parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
# fix potential permissions
# TODO: need something better than this:
# chmod -R 777 /data /output


@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
out=StableDiffusion/model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
out=GFPGAN/GFPGANv1.4.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
out=RealESRGAN/RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
out=LDSR/project.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
out=LDSR/model.ckpt

services/hlky/Dockerfile Normal file

@@ -0,0 +1,50 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
RUN <<EOF
git config --global http.postBuffer 1048576000
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 1a9c053cb7b6832695771db2555c0adc9b41e95f
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG BRANCH=master SHA=1a9c053cb7b6832695771db2555c0adc9b41e95f
# ARG BRANCH=dev SHA=1e7bdfe3f38a6dd37fc230f440ea1b0db0937240
RUN <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
RUN pip install -U --no-cache-dir pyperclip
# add info
COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
WORKDIR /stable-diffusion
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u scripts/webui.py --outdir /output --ckpt /data/StableDiffusion/model.ckpt ${CLI_ARGS}
# sed -i -- 's/8501/7860/g' .streamlit/config.toml && STREAMLIT_SERVER_HEADLESS=true python -u -m streamlit run scripts/webui_streamlit.py --theme.base dark

services/hlky/mount.sh Executable file

@@ -0,0 +1,31 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
ROOT=/stable-diffusion/src
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${PWD}/models/realesrgan"]=/data/RealESRGAN
MOUNTS["${PWD}/models/ldsr"]=/data/LDSR
MOUNTS["${PWD}/models/custom"]=/data/StableDiffusion
# hack
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.3.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
# streamlit config
ln -sf /docker/userconfig_streamlit.yaml /stable-diffusion/configs/webui/userconfig_streamlit.yaml


@@ -0,0 +1,8 @@
general:
outdir: /outputs
default_model: "Stable Diffusion v1.4"
default_model_path: /data/StableDiffusion/model.ckpt
outdir_txt2img: /outputs/txt2img-samples
outdir_img2img: /outputs/img2img-samples
optimized: True
optimized_turbo: True


@@ -0,0 +1,57 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
# now it requires python3.9
RUN conda install python=3.9 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
ENV PIP_EXISTS_ACTION=w
RUN <<EOF
git clone https://github.com/invoke-ai/InvokeAI.git stable-diffusion
cd stable-diffusion
git reset --hard 8a8be92eac17e0ef699528157596b2336bdee532
sed -i -- 's/python=3.8.5/python=3.9/g' environment.yaml
git config --global http.postBuffer 1048576000
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
ARG BRANCH=development SHA=4f247a3672474bd9c46060bab6087dbf9e2531f3
RUN <<EOF
cd stable-diffusion
git fetch
git reset --hard
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yml -n base
conda clean -a -y
EOF
RUN pip uninstall opencv-python -y && pip install --prefer-binary --force-reinstall --no-cache-dir opencv-python-headless transformers==4.19.2
COPY . /docker/
RUN <<EOF
python3 /docker/info.py /stable-diffusion/frontend/dist/index.html
chmod +x /docker/mount.sh
EOF
ENV PRELOAD=false CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
CMD /docker/mount.sh && \
# python3 -u backend/server.py --host 0.0.0.0 --port 7860 --cors http://localhost:7860
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
# echo The lstein webUI is currently deactivated due to implementation limitations: \
# https://github.com/invoke-ai/InvokeAI/blob/8c9f2ae705cf723d4a8a73c416e8d8bf2d746977/backend/modules/create_cmd_parser.py#L26 \
# Once the path the output is fixed, the UI will be activated again

services/lstein/info.py Normal file

@@ -0,0 +1,13 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace(' <div id="root"></div>', """
<div id="root"></div>
<div>
Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
</div>
""", 1)
)

services/lstein/mount.sh Executable file

@@ -0,0 +1,28 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${PWD}/models/ldm/stable-diffusion-v1/model.ckpt"]=/data/StableDiffusion/model.ckpt
MOUNTS["${PWD}/src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/ldm/dream/restoration/codeformer/weights"]=/data/CodeFormer
# hacks
MOUNTS["/opt/conda/lib/python3.9/site-packages/facexlib/weights"]=/data/.cache
MOUNTS["/opt/conda/lib/python3.9/site-packages/realesrgan/weights"]=/data/RealESRGAN
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py
fi