72 Commits
1.0.2 ... 3.2.0

Author SHA1 Message Date
AbdBarho
d20b8732b3 Update xformers (#241)
8910bb5a65
2022-11-25 13:25:10 +01:00
AbdBarho
23757d2356 Update invoke (#234)
2b7e3abe57
2022-11-20 11:35:56 +01:00
AbdBarho
9e7979b756 Update Versions (#230)
- auto: 47a44c7e42
- hlky: 269107a104

Refs #216
2022-11-20 11:05:39 +01:00
AbdBarho
8623c73741 v1.5 Inpainting (#221)
Also remove v1.4

Closes #217
2022-11-13 08:42:27 +01:00
AbdBarho
9b6750b2f6 Use cuda 11.6 for auto (#220)
auto: 98947d173e

Closes #218 #219
2022-11-13 07:12:17 +01:00
AbdBarho
5e3f20ba43 Move contribution to the top 2022-11-12 18:33:59 +01:00
AbdBarho
53ac3601d7 Update versions (#213)
- auto: ac08562854
- hlky: 09b64d4f75
- lstein:
  - On hold because of many breaking changes
2022-11-11 07:19:47 +01:00
AbdBarho
37feff58bb Reorder Readme sections 2022-11-09 18:56:22 +01:00
AbdBarho
427320475b Add Samplers (#205)
804d9fb83d

Closes #201 
Closes #204
2022-11-07 07:02:31 +01:00
AbdBarho
9a60522244 Update stale.yml 2022-11-06 14:24:02 +01:00
AbdBarho
887a16ef35 Redirect to discussions 2022-11-06 11:10:21 +01:00
AbdBarho
0a4c2a34b8 Hack to allow installing extensions (#200)
Remember to remove it if it's fixed upstream.
2022-11-05 18:04:39 +01:00
AbdBarho
73cd69075e Update versions (#198)
- auto: 30b1bcc64e
- hlky: 6f6d7571ea
2022-11-05 09:51:30 +01:00
AbdBarho
b33c0d4bcf Fix UI Layout (#196)
Closes #183
2022-11-04 23:29:44 +01:00
AbdBarho
5450583be1 Deepdanbooru Support (#194)
Builds on top of #150 

Thanks to @pirahtays

Co-authored-by: Imaginator <mriegel@gmail.com>
Co-authored-by: pirahtays <35934562+pirahtays@users.noreply.github.com>
2022-11-04 22:41:38 +01:00
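The AUTOMATIC1111 Dockerfile later in this diff gates this feature behind a `DEEPDANBOORU` build argument that defaults to "0". A minimal sketch of switching it on at build time, assuming the argument is passed straight through `docker compose build` rather than being wired into `docker-compose.yml`:
```bash
# Rebuild the auto image with the Deepdanbooru interrogator included.
# DEEPDANBOORU is the build ARG declared in the AUTOMATIC1111 Dockerfile;
# passing it on the command line like this is an assumption, not project docs.
docker compose --profile auto build --build-arg DEEPDANBOORU=1
docker compose --profile auto up
```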
AbdBarho
1cfb915d12 Invoke AI v2.1 (#195)
6b89adfa7e
2022-11-04 22:35:44 +01:00
AbdBarho
fb9d1e579c Update versions (#193)
- auto: cd5eafaf03
- hlky: 62f9706d6a
2022-11-02 21:57:01 +01:00
AbdBarho
9092aa233b Update versions (#189)
- auto: dd02889124
- hlky: d8e61a5cd3
2022-11-01 17:12:53 +01:00
AbdBarho
a5218b8639 Auto Extensions (#176)
Closes #148 
Closes #172
2022-10-30 10:01:18 +01:00
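The AUTOMATIC1111 entrypoint later in this diff symlinks `${ROOT}/extensions` to `/data/config/auto/extensions`, so extensions live on the host. A hedged sketch of installing one manually (the repository URL and folder name are placeholders, not part of this project):
```bash
# Clone an extension into the folder that is mounted into the container.
# <extension-repo-url> and <extension-name> are placeholders.
mkdir -p data/config/auto/extensions
git clone <extension-repo-url> data/config/auto/extensions/<extension-name>
# Restart the auto service so the WebUI picks the extension up.
docker compose --profile auto up --build
```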
AbdBarho
d6cbafdca8 Scripts support (#187)
Closes #186
2022-10-30 09:42:30 +01:00
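For context: the AUTOMATIC1111 entrypoint shown later in this diff copies everything under `/data/config/auto/scripts/` over the repo's `scripts` directory on startup. A minimal sketch of providing a custom script that way (the script name is a placeholder):
```bash
# Put a custom WebUI script where the entrypoint will copy it into
# ${ROOT}/scripts when the container starts. my_script.py is a placeholder.
mkdir -p data/config/auto/scripts
cp my_script.py data/config/auto/scripts/
docker compose --profile auto up --build
```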
AbdBarho
4464e9d9e9 Update versions (#185)
- auto: 35c45df28b
- hlky: 091520bed0
- lstein: fdf9b1c40c
2022-10-29 22:02:35 +02:00
AbdBarho
fb5407a6bc Smaller git clones (#179)
Closes #135
2022-10-27 16:49:30 +02:00
AbdBarho
5b4acd605d Contribution Info (#181) 2022-10-26 23:57:29 +02:00
AbdBarho
48f8650fd8 Bump Versions (#178)
- auto: 737eb28fac
- hlky: 5f6141ae7c
- lstein: 2b6d78e436
2022-10-26 20:02:00 +02:00
Sebastian Piechowiak
31c21025ea Aria fixes (#170)
Fixes the WARN / ERROR messages reported in
https://github.com/AbdBarho/stable-diffusion-webui-docker/issues/167
Closes #167
2022-10-26 19:13:31 +02:00
AbdBarho
1211e9c5de Downgrade gradio (#175)
Closes #173
2022-10-25 21:05:09 +02:00
AbdBarho
49ad173e95 Bump Versions (#171)
- auto: df0a1f8381
- hlky: bb7fce1a87
- lstein: 3081b6b7dd
2022-10-24 22:17:51 +02:00
ProducerMatt
5122f83c0f Tags for CLIP interrogation (#166)
This populates a folder with tags for the CLIP interrogator to label
images with, e.g. artists, art styles, art mediums, moods, and genres.
2022-10-23 09:05:28 +02:00
AbdBarho
3c544dd7f4 SD 1.5 (#164)
### Update versions

- auto: f49c08ea56
- hlky: 8d1e42b9c5
- lstein: 554445a985
2022-10-22 10:44:39 +02:00
AbdBarho
42cc17da74 New Url (#161)
Closes #159

I am not sure how often we will face this problem again.
2022-10-21 04:28:25 +02:00
AbdBarho
31e4dec08f Expose auto ui config (#149)
Closes #147
2022-10-17 18:55:16 +02:00
AbdBarho
0148e5e109 hotfix CI/CD 2022-10-16 17:08:24 +02:00
AbdBarho
111825ac25 xformers for auto (#136)
Closes #128
2022-10-16 16:35:14 +02:00
AbdBarho
c1e13867d9 Update versions (#146)
- auto: 36a0ba357a
  - History tab should be working, closes #138
- hlky: bd57d22f2e
  - Experimental support for the streamlit UI: use the env var `USE_STREAMLIT`, see the `docker-compose.yml` file and the sketch below, closes #105
  - Don't create issues if it fails, it is still very early in dev.
2022-10-16 16:27:20 +02:00
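As a sketch of flipping that toggle without editing the tracked compose file, assuming the `USE_STREAMLIT` environment entry on the `hlky` service (see `docker-compose.yml` later in this diff) and the override mechanism from #100:
```bash
# Enable the experimental streamlit UI for hlky via a local override file.
cat > docker-compose.override.yml <<'EOF'
services:
  hlky:
    environment:
      - USE_STREAMLIT=1
EOF
docker compose --profile hlky up --build
```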
AbdBarho
463f332d14 update condition (#141)
Closes #140
2022-10-15 08:08:36 +02:00
AbdBarho
3682303355 Update (#139)
### Update versions

- auto: 03d62538ae
  - History Tab IS NOT WORKING YET! #138
- hlky: fd51bab1ec
- lstein: fe2a2cfc8b

Closes #102: the config file has been moved to `data/config/auto`
2022-10-14 22:42:34 +02:00
Mou Lai
402c691a49 Fix & Update to use lstein (#131)
1. Update `docker-compose.yml`: use `PRELOAD=true` for lstein. See the [note](94bad8555a/docs/installation/INSTALL_LINUX.md (L60)) in the installation guide of InvokeAI.
2. Fix `services/lstein/mount.sh`: change `CodeFormer` to `Codeformer`.
3. Update `services/lstein/mount.sh`: avoid having to re-download the `${PWD}/gfpgan/weights/...` every time the container is started.
2022-10-14 15:52:01 +02:00
AbdBarho
b36113b7d8 Update shell (#134)
Closes #133
2022-10-13 20:27:32 +02:00
AbdBarho
b60c787474 Update bug.md 2022-10-12 09:11:06 +02:00
AbdBarho
161fd52c16 Bump Versions (#127)
### Update versions

- auto: 6a9ea5b41c
- hlky: 2215a3b403
- lstein: 79e79b78aa
  - Now back with v2!

Closes #123
2022-10-11 20:00:58 +02:00
AbdBarho
3b3c244c31 Add Empty dir for saving (#126)
Fixes  #124

Creates empty dir for saving, should be done by the UI...
2022-10-11 18:18:52 +02:00
AbdBarho
5698c49653 Update versions (#121)
- auto: 050a6a798c
  - Now uses python 3.10
  - requires a complete re-install
  - Image is now smaller (5.7GB vs 9.8GB)
- hlky: fe6e72fde7
- lstein: 31869885d9
  - img2img now works
2022-10-09 11:39:31 +02:00
AbdBarho
710280c7ab Update versions (#120)
- auto: 2995107fa2
  - More samplers
  - Textual inversion training
- hlky: 1a9c053cb7
  - Build times are SLOW
- lstein: 4f247a3672
  - Prepare for 2.0 release
  - very cool new UI!
2022-10-07 09:46:07 +02:00
AbdBarho
e1e03229fd Update versions (#116)
- auto: 1eb588cbf1
- hlky: 1e7bdfe3f3
2022-10-04 19:56:38 +02:00
AbdBarho
79868d88e8 Fix chmod on non-existing dir (#113)
closes #112
2022-10-02 09:25:31 +02:00
AbdBarho
6f5eef42a7 Fix typo (#111)
Closes #110
2022-10-01 19:59:54 +02:00
AbdBarho
14c4b36aff v2 (#108)
### Update versions
- auto: 3f417566b0

### Breaking changes:
* renamed `automatic-1111` service to `auto`
* the `cache` folder is now deprecated, replaced with `data` (see
migration guide below)
* `embeddings` folder has been moved to `data/embeddings`
* use GFPGAN 1.4

### Migration Guide

Note: in theory, running the command 
```
docker compose --profile download up --build
```
is all you need to use the new version; however, this also means you will
have to download everything again. A new script is available under
`scripts/migratev1tov2.sh` that will copy the models to the new structure
and should get you most of the way. Run
```bash
./scripts/migratev1tov2.sh
```
or you can manually inspect the script and copy the files

After that, run
```
docker compose --profile download up --build
```
to validate everything.
2022-10-01 12:57:53 +02:00
AbdBarho
28f171e64d Update / Disable lstein Temporarily (#106)
- auto: f80c3696f6
  - Model merger now works! The resulting model is saved in `cache/custom-models`
- hlky: aaa3be16e0
- lstein: 8c9f2ae705
  - This UI has been temporarily disabled due to a limitation in the output path: 8c9f2ae705/backend/modules/create_cmd_parser.py (L26)
2022-09-30 09:37:27 +02:00
AbdBarho
9af4a23ec4 Stalebot: don't ignore updates 2022-09-29 12:04:40 +02:00
AbdBarho
24ecd676ab Update versions (#104)
- auto: 15f333a266
  - Checkpoint merger NOT WORKING!!!
- hlky: 7bd785d28f
  - Streamlit UI still unstable and clunky
2022-09-28 10:18:07 +02:00
Sebastian Piechowiak
ef36c50cf9 Docker compose .gitignore update (#100)
Docker compose allows overriding some settings in `docker-compose.yml` by
using an additional file: `docker-compose.override.yml`.
This lets you keep your own settings in an override file that does not
conflict with updates pulled in with `git pull`.

This feature requires three things:
1. Create a `docker-compose.override.yml-dist` file that is distributed inside the repo. It can be copied as `docker-compose.override.yml` and modified for your own needs (see the sketch below).
2. Change the `.gitignore` file so that `docker-compose.override.yml` is ignored and `git pull` / `git commit` will not complain about it.
3. Update the setup wiki entry to mention this method.

Closes #101
2022-09-28 08:36:53 +02:00
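For example, a local override that changes the CLI arguments of the `auto` service (service names as in the final `docker-compose.yml` shown later in this diff) without touching the tracked file could look like this sketch; the `--lowvram` value is purely illustrative:
```bash
# docker-compose.override.yml is ignored by git (see the .gitignore change
# in this release), so local tweaks survive a `git pull`.
cat > docker-compose.override.yml <<'EOF'
services:
  auto:
    environment:
      - CLI_ARGS=--lowvram
EOF
docker compose --profile auto up --build
```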
AbdBarho
43a5e5e85f Update versions (#99)
- auto: ca3e5519e8
- hlky: 1fd28eed1e
- lstein: b40bfb5116
2022-09-26 08:31:47 +02:00
Rafael Goes
5bbc21ea3d Adding embeddings volume for auto textual inversion (#98)
Adding embeddings volume mapping for AUTOMATIC1111, enabling textual
inversion feature. As discussed in #93
2022-09-25 18:56:38 +02:00
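In the final layout in this diff, that volume ends up at `data/embeddings` (moved there in v2, #108). A sketch of using it; the embedding filename is a placeholder:
```bash
# Textual inversion embeddings dropped here get mounted into the WebUI.
# my-style.pt is a placeholder; use your own embedding file.
cp my-style.pt data/embeddings/
docker compose --profile auto up --build
```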
AbdBarho
09366ed955 Ignore Updates 2022-09-25 12:42:33 +02:00
AbdBarho
d4874e7c3a Update versions (#96)
- auto: a2bea2f97a
- hlky: f585ab1923
  - New UI is still in the works & extremely unstable
- lstein: No new updates, especially not to the UI
2022-09-24 11:10:11 +02:00
AbdBarho
7638fb4e5e Fix CLIP model caching #88 (#95)
Refs #88
hacky solution but works for now
2022-09-24 09:57:57 +02:00
AbdBarho
15a61a99d6 Explicit path to GFPGAN model (#91)
Refs #89
2022-09-23 16:38:50 +02:00
AbdBarho
556a50f49b Pin transformers version (#90)
Refs #88
2022-09-23 16:24:14 +02:00
AbdBarho
b899f4e516 Update (#87)
### Update versions

- auto: d6fd71f36f
- hlky: 2a911049aa
2022-09-23 10:34:01 +02:00
AbdBarho
a8c85b4699 Update versions (#86)
- auto: 5a1951f175
  - Now with LDSR support
- hlky: fa6a31b23c
- lstein: prepare for new UI

Closes #85
2022-09-21 19:10:27 +02:00
Abdullah Barhoum
a96285d10b Update License 2022-09-20 19:35:10 +02:00
AbdBarho
83b78fe504 Update versions (#82)
### Update versions

- auto: dd911a47b3
- hlky: 17748cbc9c
- lstein: 50d607ffea
2022-09-19 22:02:46 +02:00
AbdBarho
84f9cb84e7 Update versions (#77)
AUTOMATIC1111/stable-diffusion-webui@9e892d9

lstein/stable-diffusion@9bcb0df

transformers==4.22 for caching

Refs #78
2022-09-18 13:49:06 +02:00
AbdBarho
6a66ff6abb Update hlky to dev (#76)
Update hlky to dev

abb0c1c377
2022-09-17 16:09:54 +02:00
AbdBarho
59892da866 Custom Models Auto (#75) 2022-09-17 13:44:00 +02:00
Abdullah Barhoum
fceb83c2b0 Dev hlky 2022-09-16 21:10:40 +02:00
AbdBarho
17b01a7627 Parallel Downloads (#74) 2022-09-16 20:07:50 +02:00
Abdullah Barhoum
b96d7c30d0 make executable 2022-09-16 18:37:41 +02:00
AbdBarho
aae83bb8f2 Update lstein to dev branch (#73) 2022-09-16 16:40:20 +02:00
AbdBarho
10763a8f61 Update Git Post Buffer 2022-09-16 06:51:14 +02:00
AbdBarho
64e8f093d2 Create stale.yml 2022-09-16 06:41:49 +02:00
AbdBarho
3e0a137c23 Remove outdated (#69) 2022-09-15 22:48:38 +02:00
35 changed files with 555 additions and 406 deletions

.devscripts/chmod.sh Executable file (5 lines changed)

@@ -0,0 +1,5 @@
#!/bin/bash
set -Eeuo pipefail
find services -name "*.sh" -exec git update-index --chmod=+x {} \;

.devscripts/migratev1tov2.sh Executable file (30 lines changed)

@@ -0,0 +1,30 @@
mkdir -p data/.cache data/StableDiffusion data/Codeformer data/GFPGAN data/ESRGAN data/BSRGAN data/RealESRGAN data/SwinIR data/LDSR data/embeddings
cp -vf cache/models/model.ckpt data/StableDiffusion/model.ckpt
cp -vf cache/models/LDSR.ckpt data/LDSR/model.ckpt
cp -vf cache/models/LDSR.yaml data/LDSR/project.yaml
cp -vf cache/models/RealESRGAN_x4plus.pth data/RealESRGAN/
cp -vf cache/models/RealESRGAN_x4plus_anime_6B.pth data/RealESRGAN/
cp -vrf cache/torch data/.cache/
mkdir -p data/.cache/huggingface/transformers/
cp -vrf cache/transformers/* data/.cache/huggingface/transformers/
cp -v cache/custom-models/* data/StableDiffusion/
mkdir -p data/.cache/clip/
cp -vf cache/weights/ViT-L-14.pt data/.cache/clip/
cp -vf cache/weights/codeformer.pth data/Codeformer/codeformer-v0.1.0.pth
cp -vf cache/weights/detection_Resnet50_Final.pth data/.cache/
cp -vf cache/weights/parsing_parsenet.pth data/.cache/
cp -v embeddings/* data/embeddings/
echo this script was created 10/2022
echo Dont forget to run: docker compose --profile download up --build
echo the cache and embeddings folders can be deleted, but its not necessary.


@@ -7,30 +7,39 @@ assignees: ''
---
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue)**
<!-- PLEASE FILL THIS OUT, IT WILL MAKE BOTH OF OUR LIVES EASIER -->
**Has this issue been opened before?**
- [ ] It is not in the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ), I checked.
- [ ] It is not in the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=), I searched.
**Describe the bug**
<!-- tried to run the app, my cat exploded -->
**Which UI**
hlky or auto or auto-cpu or lstein?
**Hardware / Software**
- OS: [e.g. Windows 10 / Ubuntu ]
- OS version: <!-- on windows, use the command `winver` to find out, on ubuntu `lsb_release -d` -->
- WSL version (if applicable): <!-- get using `wsl -l -v` -->
- Docker Version: <!-- get using `docker version` -->
- Docker compose version: <!-- get using `docker compose version` -->
- Repo version: <!-- tag, commit sha, or "from master" -->
- RAM:
- GPU/VRAM:
**Steps to Reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Hardware / Software:**
- OS: [e.g. Windows / Ubuntu and version]
- RAM:
- GPU: [Nvidia 1660 / No GPU]
- VRAM:
- Docker Version, Docker compose version
- Release version [e.g. 1.0.1]
**Additional context**
Any other context about the problem here. If applicable, add screenshots to help explain your problem.

.github/ISSUE_TEMPLATE/config.yml vendored Normal file (5 lines changed)

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Feature request? Questions regarding some extension?
url: https://github.com/AbdBarho/stable-diffusion-webui-docker/discussions
about: Please use the discussions tab

.github/pull_request_template.md vendored Normal file (13 lines changed)

@@ -0,0 +1,13 @@
<!--
Have you created an issue before opening a merge request???
https://github.com/AbdBarho/stable-diffusion-webui-docker#contributing
Please create one so we can discuss it, I don't want your effort to go to waste.
-->
Closes issue #
### Update versions
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- hlky: https://github.com/sd-webui/stable-diffusion-webui/commit/
- lstein: https://github.com/invoke-ai/InvokeAI/commit/


@@ -1,6 +1,12 @@
name: Build Images
on: [push]
on:
push:
branches: master
pull_request:
paths:
- docker-compose.yml
- services
jobs:
build:


@@ -1,22 +0,0 @@
name: Check executable
on: [push]
jobs:
check:
runs-on: ubuntu-latest
name: Check all sh
steps:
- run: git config --global core.fileMode true
- uses: actions/checkout@v3
- shell: bash
run: |
shopt -s globstar;
FAIL=0
for file in **/*.sh; do
if [ -f "${file}" ] && [ -r "${file}" ] && [ ! -x "${file}" ]; then
echo "$file" is not executable;
FAIL=1
fi
done
exit ${FAIL}

.github/workflows/stale.yml vendored Normal file (20 lines changed)

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '0 0 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v6
with:
only-labels: awaiting-response
stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
days-before-issue-stale: 14
days-before-pr-stale: 14
days-before-issue-close: 7
days-before-pr-close: 7

.github/workflows/xformers.yml vendored Normal file (36 lines changed)

@@ -0,0 +1,36 @@
name: Build Xformers
on:
workflow_dispatch: {}
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 180
container:
image: python:3.10-slim
env:
DEBIAN_FRONTEND: noninteractive
XFORMERS_DISABLE_FLASH_ATTN: 1
FORCE_CUDA: 1
TORCH_CUDA_ARCH_LIST: "6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6"
NVCC_FLAGS: --use_fast_math -DXFORMERS_MEM_EFF_ATTENTION_DISABLE_BACKWARD
MAX_JOBS: 4
steps:
- run: |
apt-get update
apt-get install gpg wget git -y
wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.0-1_all.deb
dpkg -i cuda-keyring_1.0-1_all.deb
apt-get update
apt-get install cuda-nvcc-11-8 cuda-libraries-dev-11-8 -y
export PIP_CACHE_DIR=$(pwd)/cache
pip install ninja install torch --extra-index-url https://download.pytorch.org/whl/cu113
pip wheel --wheel-dir=data git+https://github.com/facebookresearch/xformers.git@3633e1afc7bffbe61957f04e7bb1a742ee910ace#egg=xformers
- name: Artifacts
uses: actions/upload-artifact@v3
with:
name: xformers
path: data/xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl

.gitignore vendored (2 lines changed)

@@ -1,2 +1,2 @@
/dev
/.devcontainer
/docker-compose.override.yml

LICENSE (13 lines changed)

@@ -87,3 +87,16 @@ processes, such as predicting an individual will commit fraud/crime
commitment (e.g. by text profiling, drawing causal relationships between
assertions made in documents, indiscriminate and arbitrarily-targeted
use).
By using this software, you also agree to the following licenses:
https://github.com/CompVis/stable-diffusion/blob/main/LICENSE
https://github.com/sd-webui/stable-diffusion-webui/blob/master/LICENSE
https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE
https://github.com/cszn/BSRGAN/blob/main/LICENSE
https://github.com/sczhou/CodeFormer/blob/master/LICENSE
https://github.com/TencentARC/GFPGAN/blob/master/LICENSE
https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE
https://github.com/xinntao/ESRGAN/blob/master/LICENSE
https://github.com/cszn/SCUNet/blob/main/LICENSE


@@ -2,10 +2,19 @@
Run Stable Diffusion on your machine with a nice UI without any hassle!
This repository provides multiple UIs for you to play around with stable diffusion:
## Setup & Usage
Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
## Contributing
Contributions are welcome! **Create a discussion first of what the problem is and what you want to contribute (before you implement anything)**
## Features
This repository provides multiple UIs for you to play around with stable diffusion:
### AUTOMATIC1111
[AUTOMATIC1111's fork](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is imho the most feature rich yet elegant UI:
@@ -16,22 +25,23 @@ This repository provides multiple UIs for you to play around with stable diffusi
- Loopback, prompt weighting, prompt matrix, X/Y plot
- Live preview of the generated images.
- Highly optimized 4GB GPU support, or even CPU only!
- Textual inversion allows you to use pretrained textual inversion embeddings
- [Full feature list here](https://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase)
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541954-46afd772-d0c8-4005-874c-e2eca40c02f2.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541956-5b528de7-1b5d-479f-a1db-d3f5a53afc59.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541957-cf78b352-a071-486d-8889-f26952779a61.jpg) |
### hlky
### hlky (sd-webui / sygil-webui)
[hlky's fork](https://github.com/hlky/stable-diffusion-webui) is one of the most popular UIs, with many features:
[hlky's fork](https://github.com/Sygil-Dev/sygil-webui) is one of the most popular UIs, with many features:
- Text to image, with many samplers
- Image to image, with masking, cropping, in-painting, variations.
- GFPGAN, RealESRGAN, LDSR, GoBig, GoLatent
- Loopback, prompt weighting
- 6GB or even 4GB GPU support!
- [Full feature list here](https://github.com/sd-webui/stable-diffusion-webui/blob/master/README.md)
- [Full feature list here](https://github.com/Sygil-Dev/sygil-webui/blob/master/README.md)
Screenshots:
@@ -39,16 +49,21 @@ Screenshots:
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/189541298-f902b021-a1eb-4e4b-b2eb-b6a696a8ec80.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541295-7d7f2162-2189-4e0a-abbd-703f4779e1cd.jpg) | ![](https://user-images.githubusercontent.com/24505302/189541294-aa7f7735-a973-4e17-ada0-1fe3acbb1772.jpg) |
### lstein
[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, but less so for the WebUI.
## Setup & Usage
### lstein (InvokeAI)
Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
[lstein's fork](https://github.com/invoke-ai/InvokeAI) is one of the earliest with a wonderful WebUI.
- Text to image, with many samplers
- Image to image
- 4GB GPU support
- More coming!
- [Full feature list here](https://github.com/invoke-ai/InvokeAI#features)
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/195158552-39f58cb6-cfcc-4141-9995-a626e3760752.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158553-152a0ab8-c0fd-4087-b121-4823bcd8d6b5.jpg) | ![](https://user-images.githubusercontent.com/24505302/195158548-e118206e-c519-4915-85d6-4c248eb10fc0.jpg) |
## Contributing
Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
## Disclaimer

cache/.gitignore vendored (4 lines changed)

@@ -1,4 +0,0 @@
/torch
/transformers
/weights
/models

data/.gitignore vendored Normal file (19 lines changed)

@@ -0,0 +1,19 @@
# for all of the stuff downloaded by transformers, pytorch, and others
/.cache
# for UIs
/config
# for all stable diffusion models (main, waifu diffusion, etc..)
/StableDiffusion
# others
/Codeformer
/GFPGAN
/ESRGAN
/BSRGAN
/RealESRGAN
/SwinIR
/ScuNET
/LDSR
/Deepdanbooru
/Hypernetworks
/VAE
/embeddings


@@ -4,7 +4,7 @@ x-base_service: &base_service
ports:
- "7860:7860"
volumes:
- &v1 ./cache:/cache
- &v1 ./data:/data
- &v2 ./output:/output
deploy:
resources:
@@ -23,32 +23,35 @@ services:
volumes:
- *v1
hlky:
<<: *base_service
profiles: ["hlky"]
build: ./services/hlky/
environment:
- CLI_ARGS=--optimized-turbo
automatic1111: &automatic
auto: &automatic
<<: *base_service
profiles: ["auto"]
build: ./services/AUTOMATIC1111
volumes:
- *v1
- *v2
- ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
image: sd-auto:18
environment:
- CLI_ARGS=--medvram --opt-split-attention
- CLI_ARGS=--allow-code --medvram --xformers
automatic1111-cpu:
auto-cpu:
<<: *automatic
profiles: ["auto-cpu"]
deploy: {}
environment:
- CLI_ARGS=--no-half --precision full
hlky:
<<: *base_service
profiles: ["hlky"]
build: ./services/hlky/
image: sd-hlky:9
environment:
- CLI_ARGS=--optimized-turbo
- USE_STREAMLIT=0
lstein:
<<: *base_service
profiles: ["lstein"]
build: ./services/lstein/
image: sd-lstein:7
environment:
- PRELOAD=true
- CLI_ARGS=--max_loaded_models=1


@@ -1,5 +0,0 @@
#!/bin/bash
set -Eeuo pipefail
find . -name "*.sh" -exec git update-index --chmod=+x {} \;


@@ -1,64 +1,96 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
SHELL ["/bin/sh", "-ceuxo", "pipefail"]
RUN <<EOF
# because taming-transformers is huge
git config --global http.postBuffer 1048576000
git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
git clone https://github.com/salesforce/BLIP.git repositories/BLIP
git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
cat <<'EOE' > /clone.sh
mkdir -p repositories/"$1" && cd repositories/"$1" && git init && git remote add origin "$2" && git fetch origin "$3" --depth=1 && git reset --hard "$3" && rm -rf .git
EOE
EOF
RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
&& rm -rf data assets **/*.ipynb
FROM continuumio/miniconda3:4.12.0
RUN . /clone.sh stable-diffusion https://github.com/CompVis/stable-diffusion.git 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc \
&& rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif
RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af \
&& rm -rf assets inputs
RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git 60e5042ca0da89c14d1dd59d73883280f8fce991
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9dbbea318df83e8
FROM alpine:3 as xformers
RUN apk add aria2
RUN aria2c -x 10 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/3.1.0/xformers-0.0.15.dev0+4e3631d.d20221125-cp310-cp310-linux_x86_64.whl'
FROM python:3.10-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1 PIP_NO_CACHE_DIR=1
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN apt-get update && apt install fonts-dejavu-core rsync git jq moreutils -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard 13eec4f3d4081fdc43883c5ef02e471a2b6c7212
conda env update --file environment-wsl2.yaml -n base
conda clean -a -y
pip install --prefer-binary --no-cache-dir -r requirements.txt
git reset --hard 98947d173e3f1667eba29c904f681047dea9de90
pip install -r requirements_versions.txt
EOF
ENV ROOT=/stable-diffusion-webui \
WORKDIR=/stable-diffusion-webui/repositories/stable-diffusion
COPY --from=xformers /wheel.whl xformers-0.0.15-cp310-cp310-linux_x86_64.whl
RUN pip install xformers-0.0.15-cp310-cp310-linux_x86_64.whl && rm xformers-0.0.15-cp310-cp310-linux_x86_64.whl
ENV ROOT=/stable-diffusion-webui
COPY --from=download /git/ ${ROOT}
RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeFormer/requirements.txt
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt
ARG DEEPDANBOORU="0"
RUN [[ "${DEEPDANBOORU:-0}" == "0" ]] && : || pip install tensorflow-cpu==2.10 tensorflow-io==0.27.0 git+https://github.com/KichangKim/DeepDanbooru.git@edf73df4cdaeea2cf00e9ac08bd8a9026b7a7b26#egg=deepdanbooru
RUN pip install opencv-python-headless \
git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
pyngrok
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=2ddaeb318a9626502ef4bf949a312253d8021ff0
ARG SHA=47a44c7e421b98ca07e92dbf88769b04c9e28f86
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase
git fetch
git reset --hard ${SHA}
pip install --prefer-binary --no-cache-dir -r requirements.txt
pip install -r requirements_versions.txt
EOF
RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
RUN pip install opencv-python-headless
COPY . /docker
RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
RUN <<EOF
python3 /docker/info.py ${ROOT}/modules/ui.py
mv ${ROOT}/style.css ${ROOT}/user.css
sed -i 's/os.rename(tmpdir, target_dir)/shutil.move(tmpdir,target_dir)/' ${ROOT}/modules/ui_extensions.py
EOF
WORKDIR ${WORKDIR}
WORKDIR ${ROOT}
ENV CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config ${CLI_ARGS}
CMD python3 -u webui.py --listen --port 7860 ${CLI_ARGS}


@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the cli parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of cil parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)


@@ -6,43 +6,5 @@
"outdir_txt2img_grids": "/output/txt2img-grids",
"outdir_img2img_grids": "/output/img2img-grids",
"outdir_save": "/output/saved",
"font": "DejaVuSans.ttf",
"__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS",
"samples_filename_format": "",
"outdir_grids": "",
"save_to_dirs": false,
"grid_save_to_dirs": false,
"save_to_dirs_prompt_len": 10,
"samples_save": true,
"samples_format": "png",
"grid_save": true,
"return_grid": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"n_rows": -1,
"jpeg_quality": 80,
"export_for_4chan": true,
"enable_pnginfo": true,
"add_model_hash_to_info": false,
"enable_emphasis": true,
"save_txt": false,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"random_artist_categories": [],
"upscale_at_full_resolution_padding": 16,
"show_progressbar": true,
"show_progress_every_n_steps": 7,
"multiple_tqdm": true,
"face_restoration_model": null,
"code_former_weight": 0.5,
"save_images_before_face_restoration": false,
"face_restoration_unload": false,
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": true,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0
"font": "DejaVuSans.ttf"
}


@@ -0,0 +1,63 @@
#!/bin/bash
set -Eeuo pipefail
# TODO: move all mkdir -p ?
mkdir -p /data/config/auto/scripts/
cp -n /docker/config.json /data/config/auto/config.json
jq '. * input' /data/config/auto/config.json /docker/config.json | sponge /data/config/auto/config.json
if [ ! -f /data/config/auto/ui-config.json ]; then
echo '{}' >/data/config/auto/ui-config.json
fi
# copy scripts, we cannot just mount the directory because it will override the already provided scripts in the repo
cp -rfT /data/config/auto/scripts/ "${ROOT}/scripts"
declare -A MOUNTS
MOUNTS["/root/.cache"]="/data/.cache"
# main
MOUNTS["${ROOT}/models/Stable-diffusion"]="/data/StableDiffusion"
MOUNTS["${ROOT}/models/VAE"]="/data/VAE"
MOUNTS["${ROOT}/models/Codeformer"]="/data/Codeformer"
MOUNTS["${ROOT}/models/GFPGAN"]="/data/GFPGAN"
MOUNTS["${ROOT}/models/ESRGAN"]="/data/ESRGAN"
MOUNTS["${ROOT}/models/BSRGAN"]="/data/BSRGAN"
MOUNTS["${ROOT}/models/RealESRGAN"]="/data/RealESRGAN"
MOUNTS["${ROOT}/models/SwinIR"]="/data/SwinIR"
MOUNTS["${ROOT}/models/ScuNET"]="/data/ScuNET"
MOUNTS["${ROOT}/models/LDSR"]="/data/LDSR"
MOUNTS["${ROOT}/models/hypernetworks"]="/data/Hypernetworks"
MOUNTS["${ROOT}/models/deepbooru"]="/data/Deepdanbooru"
MOUNTS["${ROOT}/embeddings"]="/data/embeddings"
MOUNTS["${ROOT}/config.json"]="/data/config/auto/config.json"
MOUNTS["${ROOT}/ui-config.json"]="/data/config/auto/ui-config.json"
MOUNTS["${ROOT}/extensions"]="/data/config/auto/extensions"
# extra hacks
MOUNTS["${ROOT}/repositories/CodeFormer/weights/facelib"]="/data/.cache"
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
if [ ! -f "$from_path" ]; then
mkdir -vp "$from_path"
fi
mkdir -vp "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
mkdir -p /output/saved /output/txt2img-images/ /output/img2img-images /output/extras-images/ /output/grids/ /output/txt2img-grids/ /output/img2img-grids/
if [ -f "/data/config/auto/startup.sh" ]; then
pushd ${ROOT}
. /data/config/auto/startup.sh
popd
fi
exec "$@"


@@ -1,36 +0,0 @@
#!/bin/bash
set -e
declare -A MODELS
MODELS["${WORKDIR}/models/ldm/stable-diffusion-v1/model.ckpt"]=model.ckpt
MODELS["${ROOT}/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS_DIR=/cache/models
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
done
# force realesrgan cache
rm -rf /opt/conda/lib/python3.8/site-packages/realesrgan/weights
ln -s -T "${MODELS_DIR}" /opt/conda/lib/python3.8/site-packages/realesrgan/weights
# force facexlib cache
mkdir -p /cache/weights/ ${WORKDIR}/gfpgan/
ln -sf /cache/weights/ ${WORKDIR}/gfpgan/
# code former cache
rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/CodeFormer/weights/facelib
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
# mount config
# ln -sf /docker/config.json ${WORKDIR}/config.json


@@ -1,6 +1,6 @@
FROM bash:alpine3.15
RUN apk add parallel
RUN apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]


@@ -1,6 +1,8 @@
fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556 /cache/models/model.ckpt
c953a88f2727c85c3d9ae72e2bd4846bbaf59fe6972ad94130e23e7017524a70 /cache/models/GFPGANv1.3.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /cache/models/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /cache/models/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /cache/models/LDSR.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /cache/models/LDSR.yaml
cc6cb27103417325ff94f52b7a5d2dde45a7515b25c255d8e396c90014281516 /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
c6bbc15e3224e6973459ba78de4998b80b50112b0ae5b5c67113d56b4e366b19 /data/StableDiffusion/sd-v1-5-inpainting.ckpt
c6a580b13a5bc05a5e16e4dbb80608ff2ec251a162311590c1f34c013d7f3dab /data/VAE/vae-ft-mse-840000-ema-pruned.ckpt
e2cd4703ab14f4d01fd1383a8a8b266f9a5833dacee8e6a79d3bf21a1b6be5ad /data/GFPGAN/GFPGANv1.4.pth
4fa0d38905f75ac06eb49a7951b426670021be3018265fd191d2125df9d682f1 /data/RealESRGAN/RealESRGAN_x4plus.pth
f872d837d3c90ed2e05227bed711af5671a6fd1c9f7d7e91c911a61f155e99da /data/RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
c209caecac2f97b4bb8f4d726b70ac2ac9b35904b7fc99801e1f5e61f9210c13 /data/LDSR/model.ckpt
9d6ad53c5dafeb07200fb712db14b813b527edd262bc80ea136777bdb41be2ba /data/LDSR/project.yaml


@@ -2,32 +2,27 @@
set -Eeuo pipefail
# [[ "$(sha256sum -b $file | head -c 64)" == "$sha" ]]
# TODO: maybe just use the .gitignore file to create all of these
mkdir -vp /data/.cache /data/StableDiffusion /data/Codeformer /data/GFPGAN /data/ESRGAN /data/BSRGAN /data/RealESRGAN /data/SwinIR /data/LDSR /data/ScuNET /data/embeddings /data/VAE /data/Deepdanbooru
declare -A MODELS
echo "Downloading, this might take a while..."
MODELS['model.ckpt']='https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
MODELS['GFPGANv1.3.pth']='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth'
MODELS['RealESRGAN_x4plus.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
MODELS['RealESRGAN_x4plus_anime_6B.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'
MODELS['LDSR.yaml']='https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'
MODELS['LDSR.ckpt']='https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'
echo "Downloading..."
for file in "${!MODELS[@]}"; do
url=${MODELS[$file]}
full_path="/cache/models/$file"
if [[ -f "$full_path" ]]; then
echo "- $file exists"
continue
fi
mkdir -p $(dirname $full_path)
wget --tries=10 -c -O $full_path $url
done
aria2c --disable-ipv6 --input-file /docker/links.txt --dir /data --continue
echo "Checking SHAs..."
time parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
cat <<EOF
By using this software, you agree to the following licenses:
https://github.com/CompVis/stable-diffusion/blob/main/LICENSE
https://github.com/AbdBarho/stable-diffusion-webui-docker/blob/master/LICENSE
https://github.com/sd-webui/stable-diffusion-webui/blob/master/LICENSE
https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE
https://github.com/cszn/BSRGAN/blob/main/LICENSE
https://github.com/sczhou/CodeFormer/blob/master/LICENSE
https://github.com/TencentARC/GFPGAN/blob/master/LICENSE
https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE
https://github.com/xinntao/ESRGAN/blob/master/LICENSE
https://github.com/cszn/SCUNet/blob/main/LICENSE
EOF


@@ -0,0 +1,16 @@
https://huggingface.co/ZeroCool94/stable-diffusion-v1-5/resolve/main/Stable%20Diffusion%20v1-5-Pruned-ema%20only.ckpt
out=StableDiffusion/v1-5-pruned-emaonly.ckpt
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt
out=VAE/vae-ft-mse-840000-ema-pruned.ckpt
https://huggingface.co/ZeroCool94/stable-diffusion-v1-5/resolve/main/Stable%20Diffusion-v1-5-Inpainting.ckpt
out=StableDiffusion/sd-v1-5-inpainting.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
out=GFPGAN/GFPGANv1.4.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
out=RealESRGAN/RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN/RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
out=LDSR/project.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
out=LDSR/model.ckpt


@@ -12,23 +12,20 @@ RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorc
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
ENV PIP_PREFER_BINARY=1 PIP_NO_CACHE_DIR=1
RUN <<EOF
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
git config --global http.postBuffer 1048576000
git clone https://github.com/Sygil-Dev/sygil-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
git reset --hard 091520bed06f913c9f432f9f47ccbe22b46068d7
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# new dependency, should be added to the environment.yaml
RUN pip install -U --no-cache-dir pyperclip
RUN apt-get update && apt install libsndfile1 ffmpeg -y && apt-get clean
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG BRANCH=master
ARG SHA=833a91047df999302f699637768741cecee9c37b
# ARG BRANCH=dev
# ARG SHA=b4de6caf697d311c1238c15a4c863fa529a35522
ARG BRANCH=dev SHA=269107a104fc9fee3201eb2c56cf7adb3d063e4b
RUN <<EOF
cd stable-diffusion
git fetch
@@ -38,26 +35,17 @@ conda env update --file environment.yaml -n base
conda clean -a -y
EOF
# Latent diffusion
RUN <<EOF
git clone https://github.com/Hafiidz/latent-diffusion.git
cd latent-diffusion
git reset --hard e1a84a89fcbb49881546cf2acf1e7e250923dba0
# hacks all the way down
mv ldm ldm_latent &&
sed -i -- 's/from ldm/from ldm_latent/g' *.py
# dont forget to update the yaml!!
EOF
# add info
COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
RUN <<EOF
python /docker/info.py /stable-diffusion/frontend/frontend.py
chmod +x /docker/mount.sh /docker/run.sh
# streamlit
sed -i -- 's/8501/7860/g' /stable-diffusion/.streamlit/config.toml
EOF
WORKDIR /stable-diffusion
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
ENV PYTHONPATH="${PYTHONPATH}:${PWD}" STREAMLIT_SERVER_HEADLESS=true USE_STREAMLIT=0 CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \
python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt --ldsr-dir /latent-diffusion --inbrowser ${CLI_ARGS}
# STREAMLIT_SERVER_PORT=7860 python -m streamlit run scripts/webui_streamlit.py
CMD /docker/mount.sh && /docker/run.sh


@@ -1,38 +1,32 @@
#!/bin/bash
set -e
set -Eeuo pipefail
declare -A MODELS
declare -A MOUNTS
ROOT=/stable-diffusion/src
MODELS["${ROOT}/gfpgan/experiments/pretrained_models/GFPGANv1.3.pth"]=GFPGANv1.3.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus.pth"]=RealESRGAN_x4plus.pth
MODELS["${ROOT}/realesrgan/experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth"]=RealESRGAN_x4plus_anime_6B.pth
MODELS["/latent-diffusion/experiments/pretrained_models/model.ckpt"]=LDSR.ckpt
# MODELS["/latent-diffusion/experiments/pretrained_models/project.yaml"]=LDSR.yaml
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${PWD}/models/realesrgan"]=/data/RealESRGAN
MOUNTS["${PWD}/models/ldsr"]=/data/LDSR
MOUNTS["${PWD}/models/custom"]=/data/StableDiffusion
MODELS_DIR=/cache/models
# hack
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.3.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/models/gfpgan/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${PWD}/gfpgan/weights"]=/data/.cache
for path in "${!MODELS[@]}"; do
name=${MODELS[$path]}
base=$(dirname "${path}")
from_path="${MODELS_DIR}/${name}"
if test -f "${from_path}"; then
mkdir -p "${base}" && ln -sf "${from_path}" "${path}" && echo "Mounted ${name}"
else
echo "Skipping ${name}"
fi
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
# hack for latent-diffusion
if test -f "${MODELS_DIR}/LDSR.yaml"; then
sed 's/ldm\./ldm_latent\./g' "${MODELS_DIR}/LDSR.yaml" >/latent-diffusion/experiments/pretrained_models/project.yaml
fi
# force facexlib cache
mkdir -p /cache/weights/ /stable-diffusion/gfpgan/
ln -sf /cache/weights/ /stable-diffusion/gfpgan/
# streamlit config
ln -sf /docker/webui_streamlit.yaml /stable-diffusion/configs/webui/webui_streamlit.yaml
ln -sf /docker/userconfig_streamlit.yaml /stable-diffusion/configs/webui/userconfig_streamlit.yaml

services/hlky/run.sh Executable file (10 lines changed)

@@ -0,0 +1,10 @@
#!/bin/bash
set -Eeuo pipefail
echo "USE_STREAMLIT = ${USE_STREAMLIT}"
if [ "${USE_STREAMLIT}" == "1" ]; then
python -u -m streamlit run scripts/webui_streamlit.py
else
python3 -u scripts/webui.py --outdir /output --ckpt /data/StableDiffusion/v1-5-pruned-emaonly.ckpt ${CLI_ARGS}
fi


@@ -0,0 +1,10 @@
general:
version: 1.24.6
outdir: /output
default_model: "Stable Diffusion v1.5"
default_model_path: /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
outdir_txt2img: /output/txt2img-samples
outdir_img2img: /output/img2img-samples
outdir_img2txt: /output/img2txt
optimized: True
optimized_turbo: True


@@ -1,102 +0,0 @@
# UI defaults configuration file. It is automatically loaded if located at configs/webui/webui_streamlit.yaml.
general:
gpu: 0
outdir: /outputs
ckpt: "/cache/models/model.ckpt"
fp:
name: "embeddings/alex/embeddings_gs-11000.pt"
GFPGAN_dir: "./src/gfpgan"
RealESRGAN_dir: "./src/realesrgan"
RealESRGAN_model: "RealESRGAN_x4plus"
outdir_txt2img: /outputs/txt2img-samples
outdir_img2img: /outputs/img2img-samples
gfpgan_cpu: False
esrgan_cpu: False
extra_models_cpu: False
extra_models_gpu: False
save_metadata: True
skip_grid: False
skip_save: False
grid_format: "jpg:95"
save_format: "png"
n_rows: -1
no_verify_input: False
no_half: False
precision: "autocast"
optimized: False
optimized_turbo: False
update_preview: True
update_preview_frequency: 1
txt2img:
prompt:
height: 512
width: 512
cfg_scale: 5.0
seed: ""
batch_count: 1
batch_size: 1
sampling_steps: 50
default_sampler: "k_lms"
separate_prompts: False
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
use_GFPGAN: True
use_RealESRGAN: True
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
img2img:
prompt:
sampling_steps: 50
# Adding an int to toggles enables the corresponding feature.
# 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
# 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
# 2: Loopback (use images from previous batch when creating next batch)
# 3: Random loopback seed
# 4: Save individual images
# 5: Save grid
# 6: Sort samples by prompt
# 7: Write sample info files
# 8: jpg samples
# 9: Fix faces using GFPGAN
# 10: Upscale images using Real-ESRGAN
sampler_name: k_lms
denoising_strength: 0.45
# 0: Keep masked area
# 1: Regenerate only masked area
mask_mode: 0
# 0: Just resize
# 1: Crop and resize
# 2: Resize and fill
resize_mode: 0
# Leave blank for random seed:
seed: ""
ddim_eta: 0.0
cfg_scale: 5.0
batch_count: 1
batch_size: 1
height: 512
width: 512
# Textual inversion embeddings file path:
fp: ""
loopback: True
random_seed_loopback: True
separate_prompts: False
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
use_GFPGAN: True
use_RealESRGAN: True
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
gfpgan:
strength: 100


@@ -1,31 +1,48 @@
# syntax=docker/dockerfile:1
FROM continuumio/miniconda3:4.12.0
FROM python:3.10-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1 PIP_NO_CACHE_DIR=1
RUN <<EOF
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion
git reset --hard 751283a2de81bee4bb571fbabe4adb19f1d85b97
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
RUN pip install torch==1.13.0 torchvision --extra-index-url https://download.pytorch.org/whl/cu117
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
RUN apt-get update && apt-get install git -y && apt-get clean
RUN git clone https://github.com/invoke-ai/InvokeAI.git /stable-diffusion
WORKDIR /stable-diffusion
RUN <<EOF
git reset --hard 2b7e3abe57963d199f1d825ddef87ae154c81045
git config --global http.postBuffer 1048576000
ln -sf environments-and-requirements/requirements-lin-cuda.txt requirements.txt
pip install -r requirements.txt
EOF
ARG BRANCH=development SHA=2b7e3abe57963d199f1d825ddef87ae154c81045
RUN <<EOF
git fetch
git reset --hard
git checkout ${BRANCH}
git reset --hard ${SHA}
pip install -r requirements.txt
EOF
RUN pip uninstall opencv-python -y && pip install --force-reinstall opencv-python-headless==4.5.5.64
COPY . /docker/
RUN <<EOF
python3 /docker/info.py /stable-diffusion/frontend/dist/index.html
EOF
ENV ROOT=/stable-diffusion PRELOAD=false CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
ln -sf /cache/models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python3 -u scripts/invoke.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}


@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```

services/lstein/entrypoint.sh Executable file (47 lines changed)

@@ -0,0 +1,47 @@
#!/bin/bash
set -Eeuo pipefail
declare -A MOUNTS
# cache
MOUNTS["/root/.cache"]=/data/.cache
# ui specific
MOUNTS["${ROOT}/models/codeformer"]=/data/Codeformer/
MOUNTS["${ROOT}/models/gfpgan/GFPGANv1.4.pth"]=/data/GFPGAN/GFPGANv1.4.pth
MOUNTS["${ROOT}/models/gfpgan/weights"]=/data/.cache/
MOUNTS["${ROOT}/models/realesrgan"]=/data/RealESRGAN/
MOUNTS["${ROOT}/models/bert-base-uncased"]=/data/.cache/huggingface/transformers
MOUNTS["${ROOT}/models/openai/clip-vit-large-patch14"]=/data/.cache/huggingface/transformers
MOUNTS["${ROOT}/models/CompVis/stable-diffusion-safety-checker"]=/data/.cache/huggingface/transformers
MOUNTS["${ROOT}/configs/models.yaml"]=/docker/models.yaml
# hacks
MOUNTS["/opt/conda/lib/python3.10/site-packages/facexlib/weights"]=/data/.cache/
MOUNTS["${ROOT}/models/clipseg"]=/data/.cache/invoke/clipseg/
# MOUNTS["/opt/conda/lib/python3.9/site-packages/realesrgan/weights"]=/data/RealESRGAN
for to_path in "${!MOUNTS[@]}"; do
set -Eeuo pipefail
from_path="${MOUNTS[${to_path}]}"
rm -rf "${to_path}"
mkdir -p "$(dirname "${to_path}")"
# ends with slash, make it!
if [[ "$from_path" == */ ]]; then
mkdir -vp "$from_path"
fi
ln -sT "${from_path}" "${to_path}"
echo Mounted $(basename "${from_path}")
done
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py --no-interactive
fi
exec "$@"

services/lstein/info.py Normal file (13 lines changed)

@@ -0,0 +1,13 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace(' <div id="root"></div>', """
<div id="root"></div>
<div>
Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
</div>
""", 1)
)


@@ -0,0 +1,23 @@
# This file describes the alternative machine learning models
# available to InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
description: Stable Diffusion version 1.5
weights: /data/StableDiffusion/v1-5-pruned-emaonly.ckpt
vae: /data/VAE/vae-ft-mse-840000-ema-pruned.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
default: true
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting
weights: /data/StableDiffusion/sd-v1-5-inpainting.ckpt
vae: /data/VAE/vae-ft-mse-840000-ema-pruned.ckpt
config: ./configs/stable-diffusion/v1-inpainting-inference.yaml
width: 512
height: 512
default: false