17 Commits (1.0.0 ... 1.1.0)

Author            SHA1        Message                                  Date
AbdBarho          59892da866  Custom Models Auto (#75)                 2022-09-17 13:44:00 +02:00
Abdullah Barhoum  fceb83c2b0  Dev hlky                                 2022-09-16 21:10:40 +02:00
AbdBarho          17b01a7627  Parallel Downloads (#74)                 2022-09-16 20:07:50 +02:00
Abdullah Barhoum  b96d7c30d0  make executable                          2022-09-16 18:37:41 +02:00
AbdBarho          aae83bb8f2  Update lstein to dev branch (#73)        2022-09-16 16:40:20 +02:00
AbdBarho          10763a8f61  Update Git Post Buffer                   2022-09-16 06:51:14 +02:00
AbdBarho          64e8f093d2  Create stale.yml                         2022-09-16 06:41:49 +02:00
AbdBarho          3e0a137c23  Remove outdated (#69)                    2022-09-15 22:48:38 +02:00
AbdBarho          a1c16942ff  Matrix CI/CD (#68)                       2022-09-15 22:02:38 +02:00
AbdBarho          6ae3473214  Update auto (#67)                        2022-09-15 21:05:31 +02:00
AbdBarho          5d731cb43c  Add gcc (#66): update auto, add gcc      2022-09-15 20:45:21 +02:00
AbdBarho          c1fa2f1457  Updates & Prepare hlky streamlit (#59)   2022-09-14 19:48:00 +02:00
AbdBarho          d8cfdd3af5  Update Versions (#58)                    2022-09-13 21:40:10 +02:00
AbdBarho          03d12cbcd9  Update README.md                         2022-09-13 20:24:27 +02:00
AbdBarho          2e76b6c4e7  Update README.md                         2022-09-13 19:51:58 +02:00
AbdBarho          5eae2076ce  Update Automatic (#55)                   2022-09-12 22:46:38 +02:00
AbdBarho          725e1f39ba  Update (#54): Update hlky, Update auto   2022-09-12 22:21:03 +02:00
21 changed files with 360 additions and 81 deletions

Bug report issue template

@@ -7,13 +7,17 @@ assignees: ''
 ---
-**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue) and in [the issues in the WebUI repo](https://github.com/hlky/stable-diffusion-webui)**
+**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue)**
 **Describe the bug**
+**Which UI**
+hlky or auto or auto-cpu or lstein?
 **Steps to Reproduce**
 1. Go to '...'
 2. Click on '....'
@@ -22,8 +26,11 @@ assignees: ''
 **Hardware / Software:**
 - OS: [e.g. Windows / Ubuntu and version]
+- RAM:
 - GPU: [Nvidia 1660 / No GPU]
-- Version [e.g. 22]
+- VRAM:
+- Docker Version, Docker compose version
+- Release version [e.g. 1.0.1]
 **Additional context**
 Any other context about the problem here. If applicable, add screenshots to help explain your problem.

Build Images workflow (GitHub Actions)

@@ -3,12 +3,17 @@ name: Build Images
 on: [push]
 jobs:
-  build_all:
+  build:
+    strategy:
+      matrix:
+        profile:
+          - auto
+          - hlky
+          - lstein
+          - download
     runs-on: ubuntu-latest
-    name: All
+    name: ${{ matrix.profile }}
     steps:
       - uses: actions/checkout@v3
       # better caching?
-      - run: docker compose --profile auto build --progress plain
-      - run: docker compose --profile hlky build --progress plain
-      - run: docker compose --profile lstein build --progress plain
+      - run: docker compose --profile ${{ matrix.profile }} build --progress plain
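The matrix fans the old sequential builds out into one job per compose profile. A rough local equivalent of what the jobs run (a sketch; the profile names are taken from the matrix above):

```
# Build every UI image locally, one compose profile at a time,
# mirroring the CI matrix jobs
for profile in auto hlky lstein download; do
  docker compose --profile "$profile" build --progress plain
done
```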

.github/workflows/stale.yml (new file, 20 lines)

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '30 1 * * *'
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v5
        with:
          only-labels: awaiting-response
          stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
          close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
          close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
          days-before-issue-stale: 14
          days-before-pr-stale: 14
          days-before-issue-close: 7
          days-before-pr-close: 7

README.md

@@ -41,12 +41,20 @@ Screenshots:
 ### lstein
-[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, but less so for the WebUI.
+[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
+| Text to image | Image to image | Extras |
+| --- | --- | --- |
+| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
 ## Setup & Usage
 Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
+## Contributing
+Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
 ## Disclaimer
 The authors of this project are not responsible for any content generated using this interface.

cache/.gitignore

@@ -2,3 +2,4 @@
 /transformers
 /weights
 /models
+/custom-models

docker-compose.yml

@@ -37,7 +37,7 @@ services:
     volumes:
       - *v1
       - *v2
-      - ./services/AUTOMATIC1111/config.json:/docker/config.json
+      - ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
     environment:
       - CLI_ARGS=--medvram --opt-split-attention
@@ -52,3 +52,6 @@ services:
     <<: *base_service
     profiles: ["lstein"]
     build: ./services/lstein/
+    environment:
+      - PRELOAD=false
+      - CLI_ARGS=
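The lstein service now exposes the same knobs as the other UIs: PRELOAD decides whether mount.sh runs preload_models.py at startup, and CLI_ARGS is passed straight to dream.py. A minimal sketch of using the profile (assuming you flip PRELOAD to true directly in docker-compose.yml when you want the extra weights fetched on first start):

```
# Build and start the lstein UI; PRELOAD / CLI_ARGS come from the
# environment block above (edit docker-compose.yml to change them)
docker compose --profile lstein up --build
```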

AUTOMATIC1111 (auto) Dockerfile

@@ -2,10 +2,11 @@
 FROM alpine/git:2.36.2 as download
 RUN <<EOF
-# who knows
+# because taming-transformers is huge
 git config --global http.postBuffer 1048576000
 git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
 git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
+git clone https://github.com/salesforce/BLIP.git repositories/BLIP
 git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
 rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
 EOF
@@ -40,7 +41,8 @@ RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeForme
 # Note: don't update the sha of previous versions because the install will take forever
 # instead, update the repo state in a later step
-ARG SHA=b5d1af11b7dc718d4d91d379c75e46f4bd2e2fe6
+ARG SHA=99585b3514e2d7e987651d5c6a0806f933af012b
 RUN <<EOF
 cd stable-diffusion-webui
 git pull --rebase
@@ -48,7 +50,7 @@ git reset --hard ${SHA}
 pip install --prefer-binary --no-cache-dir -r requirements.txt
 EOF
-RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless markupsafe==2.0.1
+RUN pip install --prefer-binary -U --no-cache-dir opencv-python-headless
 ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
@@ -59,4 +61,5 @@ RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
 WORKDIR ${WORKDIR}
 EXPOSE 7860
 # run, -u to not buffer stdout / stderr
-CMD /docker/mount.sh && python3 -u ../../webui.py --listen --port 7860 ${CLI_ARGS}
+CMD /docker/mount.sh && \
+  python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir /cache/custom-models --ckpt /cache/models/model.ckpt ${CLI_ARGS}
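The new flags let the auto UI pick up extra checkpoints from /cache/custom-models (the folder the cache/.gitignore change above reserves) while the default model stays at /cache/models/model.ckpt. A sketch of dropping in an additional model (the file name is a placeholder, and this assumes the repo's cache/ folder is what ends up mounted at /cache):

```
# Place an extra checkpoint where --ckpt-dir now points,
# then rebuild/restart the auto UI (my-finetune.ckpt is hypothetical)
cp my-finetune.ckpt cache/custom-models/
docker compose --profile auto up --build
```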

AUTOMATIC1111 README (file removed)

@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the cli parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of cil parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)

services/AUTOMATIC1111/config.json

@@ -1 +1,58 @@
{"outdir_samples": "/output", "outdir_txt2img_samples": "/output/txt2img-images", "outdir_img2img_samples": "/output/img2img-images", "outdir_extras_samples": "/output/extras-images", "outdir_txt2img_grids": "/output/txt2img-grids", "outdir_img2img_grids": "/output/img2img-grids", "outdir_save": "/output/saved", "__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS", "outdir_grids": "", "save_to_dirs": false, "save_to_dirs_prompt_len": 10, "samples_save": true, "samples_format": "png", "grid_save": true, "return_grid": true, "grid_format": "png", "grid_extended_filename": false, "grid_only_if_multiple": true, "n_rows": -1, "jpeg_quality": 80, "export_for_4chan": true, "enable_pnginfo": true, "font": "DejaVuSans.ttf", "enable_emphasis": true, "save_txt": false, "ESRGAN_tile": 192, "ESRGAN_tile_overlap": 8, "random_artist_categories": [], "upscale_at_full_resolution_padding": 16, "show_progressbar": true, "show_progress_every_n_steps": 5, "multiple_tqdm": true, "face_restoration_model": "CodeFormer", "code_former_weight": 0.5, "grid_save_to_dirs": false} {
"outdir_samples": "/output",
"outdir_txt2img_samples": "/output/txt2img-images",
"outdir_img2img_samples": "/output/img2img-images",
"outdir_extras_samples": "/output/extras-images",
"outdir_txt2img_grids": "/output/txt2img-grids",
"outdir_img2img_grids": "/output/img2img-grids",
"outdir_save": "/output/saved",
"font": "DejaVuSans.ttf",
"__WARNING__": "DON'T CHANGE ANYTHING BEFORE THIS",
"samples_filename_format": "",
"outdir_grids": "",
"save_to_dirs": false,
"grid_save_to_dirs": false,
"save_to_dirs_prompt_len": 10,
"samples_save": true,
"samples_format": "png",
"grid_save": true,
"return_grid": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"n_rows": -1,
"jpeg_quality": 80,
"export_for_4chan": true,
"enable_pnginfo": true,
"add_model_hash_to_info": false,
"enable_emphasis": true,
"save_txt": false,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"random_artist_categories": [],
"upscale_at_full_resolution_padding": 16,
"show_progressbar": true,
"show_progress_every_n_steps": 7,
"multiple_tqdm": true,
"face_restoration_model": null,
"code_former_weight": 0.5,
"save_images_before_face_restoration": false,
"face_restoration_unload": false,
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": true,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"samples_filename_pattern": "",
"directories_filename_pattern": "",
"save_selected_only": false,
"filter_nsfw": false,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"enable_quantization": false,
"enable_batch_seeds": true,
"memmon_poll_rate": 8,
"sd_model_checkpoint": null
}

AUTOMATIC1111 info.py

@@ -7,7 +7,7 @@ file.write_text(
 .replace(' return demo', """
 with demo:
 gr.Markdown(
-'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/tree/master/AUTOMATIC1111)'
+'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
 )
 return demo
 """, 1)

AUTOMATIC1111 mount.sh

@@ -32,5 +32,4 @@ rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/C
 ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
 ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
-# mount config
-ln -sf /docker/config.json ${WORKDIR}/config.json
+mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models

download service Dockerfile

@@ -1,6 +1,6 @@
 FROM bash:alpine3.15
-RUN apk add parallel
+RUN apk add parallel aria2
 COPY . /docker
 RUN chmod +x /docker/download.sh
 ENTRYPOINT ["/docker/download.sh"]

download.sh

@@ -2,32 +2,12 @@
 set -Eeuo pipefail
-# [[ "$(sha256sum -b $file | head -c 64)" == "$sha" ]]
-declare -A MODELS
-MODELS['model.ckpt']='https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
-MODELS['GFPGANv1.3.pth']='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth'
-MODELS['RealESRGAN_x4plus.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
-MODELS['RealESRGAN_x4plus_anime_6B.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'
-MODELS['LDSR.yaml']='https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'
-MODELS['LDSR.ckpt']='https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'
-echo "Downloading..."
-for file in "${!MODELS[@]}"; do
-  url=${MODELS[$file]}
-  full_path="/cache/models/$file"
-  if [[ -f "$full_path" ]]; then
-    echo "- $file exists"
-    continue
-  fi
-  mkdir -p $(dirname $full_path)
-  wget --tries=10 -c -O $full_path $url
-done
+mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models
+echo "Downloading, this might take a while..."
+aria2c --input-file /docker/links.txt --dir /cache/models --continue
 echo "Checking SHAs..."
-time parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
+parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
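The rewritten script leans on two files baked into the download image: links.txt tells aria2 what to fetch, and checksums.sha256 supplies one checksum line per artifact for parallel to feed into `sha256sum -c`. A sketch of what a single verification amounts to (the hash is a placeholder and the path format is assumed from the cache layout; the real lines live in /docker/checksums.sha256):

```
# Verify one downloaded file the same way the parallel call does
# (placeholder hash, assumed path format)
echo -n "0000000000000000000000000000000000000000000000000000000000000000  /cache/models/model.ckpt" | sha256sum -c
```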

links.txt (new file)

@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
  out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
  out=GFPGANv1.3.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
  out=RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
  out=RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
  out=LDSR.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
  out=LDSR.ckpt
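aria2's input-file format pairs each URL with indented option lines such as `out=`, which is how the download container resumes interrupted downloads and fetches everything from one list. A sketch of extending it (the URL and file name are hypothetical, and the path assumes the list sits next to download.sh in the download service folder; a matching entry in checksums.sha256 would be needed for verification):

```
# Append a hypothetical extra download to the aria2 input list
cat >> links.txt <<'EOF'
https://example.com/extra-upscaler.pth
  out=extra-upscaler.pth
EOF
```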

hlky Dockerfile

@@ -9,13 +9,14 @@ ENV DEBIAN_FRONTEND=noninteractive
 RUN conda install python=3.8.5 && conda clean -a -y
 RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
-RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
+RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
 RUN <<EOF
+git config --global http.postBuffer 1048576000
 git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
 cd stable-diffusion
-git reset --hard 2b1ac8daf7ea82c6c56eabab7e80ec1c33106a98
+git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
 conda env update --file environment.yaml -n base
 conda clean -a -y
 EOF
@@ -26,7 +27,9 @@ RUN pip install -U --no-cache-dir pyperclip
 # Note: don't update the sha of previous versions because the install will take forever
 # instead, update the repo state in a later step
 ARG BRANCH=master
-ARG SHA=2236e8b5854092054e2c30edc559006ace53bf96
+ARG SHA=833a91047df999302f699637768741cecee9c37b
+# ARG BRANCH=dev
+# ARG SHA=5f3d7facdea58fc4f89b8c584d22a4639615a2f8
 RUN <<EOF
 cd stable-diffusion
 git fetch
@@ -56,4 +59,6 @@ WORKDIR /stable-diffusion
 ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
 EXPOSE 7860
 # run, -u to not buffer stdout / stderr
-CMD /docker/mount.sh && python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt --ldsr-dir /latent-diffusion ${CLI_ARGS}
+CMD /docker/mount.sh && \
+  python3 -u scripts/webui.py --outdir /output --ckpt /cache/models/model.ckpt --ldsr-dir /latent-diffusion --inbrowser ${CLI_ARGS}
+# STREAMLIT_SERVER_PORT=7860 python -m streamlit run scripts/webui_streamlit.py

hlky mount.sh

@@ -33,3 +33,6 @@ fi
 # force facexlib cache
 mkdir -p /cache/weights/ /stable-diffusion/gfpgan/
 ln -sf /cache/weights/ /stable-diffusion/gfpgan/
+# streamlit config
+ln -sf /docker/webui_streamlit.yaml /stable-diffusion/configs/webui/webui_streamlit.yaml

hlky webui_streamlit.yaml (new file)

@@ -0,0 +1,155 @@
# UI defaults configuration file. It is automatically loaded if located at configs/webui/webui_streamlit.yaml.
general:
  gpu: 0
  outdir: /outputs
  default_model: "Stable Diffusion v1.4"
  default_model_config: "configs/stable-diffusion/v1-inference.yaml"
  default_model_path: "/cache/models/model.ckpt"
  fp:
  name:
  GFPGAN_dir: "./src/gfpgan"
  RealESRGAN_dir: "./src/realesrgan"
  RealESRGAN_model: "RealESRGAN_x4plus"
  outdir_txt2img: /outputs/txt2img-samples
  outdir_img2img: /outputs/img2img-samples
  gfpgan_cpu: False
  esrgan_cpu: False
  extra_models_cpu: False
  extra_models_gpu: False
  save_metadata: True
  save_format: "png"
  skip_grid: False
  skip_save: False
  grid_format: "jpg:95"
  n_rows: -1
  no_verify_input: False
  no_half: False
  use_float16: False
  precision: "autocast"
  optimized: False
  optimized_turbo: True
  optimized_config: "optimizedSD/v1-inference.yaml"
  update_preview: True
  update_preview_frequency: 5
txt2img:
  prompt:
  height: 512
  width: 512
  cfg_scale: 7.5
  seed: ""
  batch_count: 1
  batch_size: 1
  sampling_steps: 30
  default_sampler: "k_euler"
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  normalize_prompt_weights: True
  save_individual_images: True
  save_grid: True
  group_by_prompt: True
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  write_info_files: True
txt2vid:
  default_model: "CompVis/stable-diffusion-v1-4"
  custom_models_list:
    [
      "CompVis/stable-diffusion-v1-4",
      "naclbit/trinart_stable_diffusion_v2",
      "hakurei/waifu-diffusion",
      "osanseviero/BigGAN-deep-128",
    ]
  prompt:
  height: 512
  width: 512
  cfg_scale: 7.5
  seed: ""
  batch_count: 1
  batch_size: 1
  sampling_steps: 30
  num_inference_steps: 200
  default_sampler: "k_euler"
  scheduler_name: "klms"
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  dynamic_preview_frequency: True
  normalize_prompt_weights: True
  save_individual_images: True
  save_video: True
  group_by_prompt: True
  write_info_files: True
  do_loop: False
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  beta_start: 0.00085
  beta_end: 0.012
  beta_scheduler_type: "linear"
  max_frames: 1000
img2img:
  prompt:
  sampling_steps: 30
  # Adding an int to toggles enables the corresponding feature.
  # 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
  # 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
  # 2: Loopback (use images from previous batch when creating next batch)
  # 3: Random loopback seed
  # 4: Save individual images
  # 5: Save grid
  # 6: Sort samples by prompt
  # 7: Write sample info files
  # 8: jpg samples
  # 9: Fix faces using GFPGAN
  # 10: Upscale images using Real-ESRGAN
  sampler_name: "k_euler"
  denoising_strength: 0.45
  # 0: Keep masked area
  # 1: Regenerate only masked area
  mask_mode: 0
  mask_restore: False
  # 0: Just resize
  # 1: Crop and resize
  # 2: Resize and fill
  resize_mode: 0
  # Leave blank for random seed:
  seed: ""
  ddim_eta: 0.0
  cfg_scale: 7.5
  batch_count: 1
  batch_size: 1
  height: 512
  width: 512
  # Textual inversion embeddings file path:
  fp: ""
  loopback: True
  random_seed_loopback: True
  separate_prompts: False
  update_preview: True
  update_preview_frequency: 5
  normalize_prompt_weights: True
  save_individual_images: True
  save_grid: True
  group_by_prompt: True
  save_as_jpg: False
  use_GFPGAN: False
  use_RealESRGAN: False
  RealESRGAN_model: "RealESRGAN_x4plus"
  variant_amount: 0.0
  variant_seed: ""
  write_info_files: True
gfpgan:
  strength: 100

lstein Dockerfile

@@ -9,7 +9,7 @@ ENV DEBIAN_FRONTEND=noninteractive
 RUN conda install python=3.8.5 && conda clean -a -y
 RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
-RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
+RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
 RUN <<EOF
@@ -20,12 +20,25 @@ conda env update --file environment.yaml -n base
 conda clean -a -y
 EOF
-ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
+ARG BRANCH=development SHA=45af30f3a4c98b50c755717831c5fff75a3a8b43
+# ARG BRANCH=main SHA=89da371f4841f7e05da5a1672459d700c3920784
+RUN <<EOF
+cd stable-diffusion
+git fetch
+git checkout ${BRANCH}
+git reset --hard ${SHA}
+conda env update --file environment.yaml -n base
+conda clean -a -y
+EOF
+RUN pip uninstall opencv-python -y && pip install --prefer-binary --upgrade --force-reinstall --no-cache-dir opencv-python-headless
+COPY . /docker/
+RUN python3 /docker/info.py /stable-diffusion/static/dream_web/index.html && chmod +x /docker/mount.sh
+ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
 WORKDIR /stable-diffusion
 EXPOSE 7860
-CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
-  ln -sf /cache/models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
-  python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
+# run, -u to not buffer stdout / stderr
+CMD /docker/mount.sh && python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
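BRANCH and SHA are ordinary build args, so pinning the lstein image back to the commented-out main revision does not require editing the Dockerfile; a sketch, assuming compose forwards build args as usual:

```
# Rebuild the lstein image against the commented-out main pin
# instead of the development branch
docker compose --profile lstein build \
  --build-arg BRANCH=main \
  --build-arg SHA=89da371f4841f7e05da5a1672459d700c3920784
```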

lstein README (file removed)

@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```

services/lstein/info.py (new file, 10 lines)

@@ -0,0 +1,10 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
""", 1)
)

services/lstein/mount.sh (new executable file, 26 lines)

@@ -0,0 +1,26 @@
#!/bin/bash
set -eu
ROOT=/stable-diffusion
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
if test -f /cache/models/GFPGANv1.3.pth; then
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.3.pth"
echo "Mounted GFPGANv1.3.pth"
fi
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.8/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py
fi