11 Commits
1.0.2 ... 1.1.1

Author SHA1 Message Date
AbdBarho
83b78fe504 Update versions (#82)
### Update versions

- auto: dd911a47b3
- hlky: 17748cbc9c
- lstein: 50d607ffea
2022-09-19 22:02:46 +02:00
AbdBarho
84f9cb84e7 Update versions (#77)
AUTOMATIC1111/stable-diffusion-webui@9e892d9

lstein/stable-diffusion@9bcb0df

transformers==4.22 for caching

Refs #78
2022-09-18 13:49:06 +02:00
AbdBarho
6a66ff6abb Update hlky to dev (#76)
Update hlky to dev

abb0c1c377
2022-09-17 16:09:54 +02:00
AbdBarho
59892da866 Custom Models Auto (#75) 2022-09-17 13:44:00 +02:00
Abdullah Barhoum
fceb83c2b0 Dev hlky 2022-09-16 21:10:40 +02:00
AbdBarho
17b01a7627 Parallel Downloads (#74) 2022-09-16 20:07:50 +02:00
Abdullah Barhoum
b96d7c30d0 make executable 2022-09-16 18:37:41 +02:00
AbdBarho
aae83bb8f2 Update lstein to dev branch (#73) 2022-09-16 16:40:20 +02:00
AbdBarho
10763a8f61 Update Git Post Buffer 2022-09-16 06:51:14 +02:00
AbdBarho
64e8f093d2 Create stale.yml 2022-09-16 06:41:49 +02:00
AbdBarho
3e0a137c23 Remove outdated (#69) 2022-09-15 22:48:38 +02:00
18 changed files with 223 additions and 98 deletions

.github/pull_request_template.md (new file)

@@ -0,0 +1,5 @@
### Update versions
- auto: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/
- hlky: https://github.com/sd-webui/stable-diffusion-webui/commit/
- lstein: https://github.com/lstein/stable-diffusion/commit/

.github/workflows/stale.yml (new file)

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '30 1 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v5
with:
only-labels: awaiting-response
stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
days-before-issue-stale: 14
days-before-pr-stale: 14
days-before-issue-close: 7
days-before-pr-close: 7

README.md

@@ -41,13 +41,18 @@ Screenshots:
### lstein
- [lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, but less so for the WebUI.
+ [lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the cli, and the WebUI has potential.
+ | Text to image | Image to image | Extras |
+ | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
+ | ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
## Setup & Usage
Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
## Contributing
Contributions are welcome! create an issue first of what you want to contribute (before you implement anything) so we can talk about it.
## Disclaimer

cache/.gitignore

@@ -2,3 +2,4 @@
/transformers
/weights
/models
+ /custom-models
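The new `/custom-models` entry pairs with the `--ckpt-dir /cache/custom-models` flag added to the AUTOMATIC1111 `CMD` further down: extra checkpoints dropped there stay out of git and are meant to be picked up by the WebUI. A minimal host-side sketch, assuming the repo's `cache/` directory is the one mounted into the container as `/cache` and using a hypothetical checkpoint name:

```
# host side: add a custom checkpoint (hypothetical file name); it is git-ignored
# and read inside the container via --ckpt-dir /cache/custom-models
mkdir -p cache/custom-models
cp ~/Downloads/my-model.ckpt cache/custom-models/
```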

docker-compose.yml

@@ -39,7 +39,7 @@ services:
- *v2
- ./services/AUTOMATIC1111/config.json:/stable-diffusion-webui/config.json
environment:
- - CLI_ARGS=--medvram --opt-split-attention
+ - CLI_ARGS=--medvram
automatic1111-cpu:
<<: *automatic
@@ -52,3 +52,6 @@
<<: *base_service
profiles: ["lstein"]
build: ./services/lstein/
+ environment:
+ - PRELOAD=false
+ - CLI_ARGS=
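The services above are selected through compose profiles, and the new `PRELOAD` / `CLI_ARGS` entries are plain environment variables that can be edited in `docker-compose.yml`. A minimal usage sketch, assuming the `lstein` profile shown above:

```
# build and start only the lstein service; PRELOAD and CLI_ARGS come from its environment block
docker compose --profile lstein up --build
```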

services/AUTOMATIC1111/Dockerfile

@@ -1,13 +1,16 @@
# syntax=docker/dockerfile:1
FROM alpine/git:2.36.2 as download
+ RUN git clone --depth 1 https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
+ RUN git clone --depth 1 https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
+ RUN git clone --depth 1 https://github.com/salesforce/BLIP.git repositories/BLIP
RUN <<EOF
# because taming-transformers is huge
git config --global http.postBuffer 1048576000
- git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer
- git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion
- git clone https://github.com/salesforce/BLIP.git repositories/BLIP
- git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
+ git clone --depth 1 https://github.com/CompVis/taming-transformers.git repositories/taming-transformers
rm -rf repositories/taming-transformers/data repositories/taming-transformers/assets
EOF
@@ -27,22 +30,21 @@ RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
- git reset --hard 13eec4f3d4081fdc43883c5ef02e471a2b6c7212
+ git reset --hard 7e77938230d4fefb6edccdba0b80b61d8416673e
- conda env update --file environment-wsl2.yaml -n base
- conda clean -a -y
pip install --prefer-binary --no-cache-dir -r requirements.txt
EOF
ENV ROOT=/stable-diffusion-webui \
WORKDIR=/stable-diffusion-webui/repositories/stable-diffusion
COPY --from=download /git/ ${ROOT}
RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeFormer/requirements.txt
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
- ARG SHA=2ddaeb318a9626502ef4bf949a312253d8021ff0
+ ARG SHA=dd911a47b3c3313b3938b700eb26cbd5bb3e1c95
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase
@@ -61,4 +63,5 @@ RUN chmod +x /docker/mount.sh && python3 /docker/info.py ${ROOT}/modules/ui.py
WORKDIR ${WORKDIR}
EXPOSE 7860
# run, -u to not buffer stdout / stderr
- CMD /docker/mount.sh && python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config ${CLI_ARGS}
+ CMD /docker/mount.sh && \
+ python3 -u ../../webui.py --listen --port 7860 --hide-ui-dir-config --ckpt-dir /cache/custom-models --ckpt /cache/models/model.ckpt ${CLI_ARGS}
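The note about not bumping the SHA of previous versions is a Docker layer-caching trick: the expensive clone plus `pip install` lives in an early layer pinned to an old commit, and a cheap later `RUN` step moves the checkout to the commit given by `ARG SHA`. A rough sketch of that later step, following the pattern visible in the lstein Dockerfile further down (the AUTOMATIC1111 hunk is truncated here, so the exact lines may differ):

```
# later, cache-friendly layer: only this step re-runs when SHA changes
cd stable-diffusion-webui
git pull --rebase          # refresh history without invalidating the earlier install layer
git reset --hard "${SHA}"  # pin the exact commit passed via ARG SHA
pip install --prefer-binary --no-cache-dir -r requirements.txt  # already-satisfied packages are skipped
```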

AUTOMATIC1111 service README (deleted)

@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the cli parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of cil parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)

services/AUTOMATIC1111/config.json

@@ -44,5 +44,15 @@
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
- "interrogate_clip_dict_limit": 1500.0
+ "interrogate_clip_dict_limit": 1500.0,
+ "samples_filename_pattern": "",
+ "directories_filename_pattern": "",
+ "save_selected_only": false,
+ "filter_nsfw": false,
+ "img2img_color_correction": false,
+ "img2img_fix_steps": false,
+ "enable_quantization": false,
+ "enable_batch_seeds": true,
+ "memmon_poll_rate": 8,
+ "sd_model_checkpoint": null
}

AUTOMATIC1111 service mount.sh

@@ -32,5 +32,4 @@ rm -rf ${ROOT}/repositories/CodeFormer/weights/CodeFormer ${ROOT}/repositories/C
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/CodeFormer
ln -sf -T /cache/weights ${ROOT}/repositories/CodeFormer/weights/facelib
- # mount config
- # ln -sf /docker/config.json ${WORKDIR}/config.json
+ mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models

download service Dockerfile

@@ -1,6 +1,6 @@
FROM bash:alpine3.15
- RUN apk add parallel
+ RUN apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]

download service download.sh

@@ -2,32 +2,12 @@
set -Eeuo pipefail
- # [[ "$(sha256sum -b $file | head -c 64)" == "$sha" ]]
- declare -A MODELS
- MODELS['model.ckpt']='https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
- MODELS['GFPGANv1.3.pth']='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth'
- MODELS['RealESRGAN_x4plus.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
- MODELS['RealESRGAN_x4plus_anime_6B.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'
- MODELS['LDSR.yaml']='https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'
- MODELS['LDSR.ckpt']='https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'
- echo "Downloading..."
- for file in "${!MODELS[@]}"; do
- url=${MODELS[$file]}
- full_path="/cache/models/$file"
- if [[ -f "$full_path" ]]; then
- echo "- $file exists"
- continue
- fi
- mkdir -p $(dirname $full_path)
- wget --tries=10 -c -O $full_path $url
- done
+ mkdir -p /cache/torch /cache/transformers /cache/weights /cache/models /cache/custom-models
+ echo "Downloading, this might take a while..."
+ aria2c --input-file /docker/links.txt --dir /cache/models --continue
echo "Checking SHAs..."
- time parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
+ parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
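The rewritten script hands the downloads to aria2 and keeps only the integrity check: every line of `/docker/checksums.sha256` is piped to `sha256sum -c` by `parallel`. Per file, that is roughly equivalent to the following (the digest is a placeholder, not a value from the repo):

```
# what each parallel job does for one entry of checksums.sha256
# (line format: "<64-hex-digest>  <path>")
echo "<digest>  /cache/models/model.ckpt" | sha256sum -c
```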

download service links.txt (new file)

@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
out=GFPGANv1.3.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
out=RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
out=LDSR.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
out=LDSR.ckpt
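links.txt uses aria2's input-file format: a URL on one line, followed by indented per-download options such as `out=`. Adding another model is just appending such a pair of lines; a sketch with a hypothetical URL and file name:

```
# append a hypothetical extra model to links.txt; aria2 requires the option line to start with whitespace
cat >> links.txt <<'EOF'
https://example.com/extra-model.ckpt
  out=extra-model.ckpt
EOF
```

A matching line in checksums.sha256 would also be needed for the integrity check above to cover the new file.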

hlky service Dockerfile

@@ -13,6 +13,7 @@ RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clea
RUN <<EOF
+ git config --global http.postBuffer 1048576000
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 7623a5734740025d79b710f3744bff9276e1467b
@@ -20,15 +21,12 @@ conda env update --file environment.yaml -n base
conda clean -a -y
EOF
- # new dependency, should be added to the environment.yaml
- RUN pip install -U --no-cache-dir pyperclip
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG BRANCH=master
- ARG SHA=833a91047df999302f699637768741cecee9c37b
+ # ARG SHA=833a91047df999302f699637768741cecee9c37b
# ARG BRANCH=dev
- # ARG SHA=b4de6caf697d311c1238c15a4c863fa529a35522
+ ARG SHA=17748cbc9c34df44d0381c42e4f0fe1903089438
RUN <<EOF
cd stable-diffusion
git fetch
@@ -38,11 +36,12 @@ conda env update --file environment.yaml -n base
conda clean -a -y
EOF
+ RUN pip uninstall transformers -y && pip install -U --no-cache-dir pyperclip transformers==4.22
# Latent diffusion
RUN <<EOF
- git clone https://github.com/Hafiidz/latent-diffusion.git
+ git clone --depth 1 https://github.com/Hafiidz/latent-diffusion.git
cd latent-diffusion
- git reset --hard e1a84a89fcbb49881546cf2acf1e7e250923dba0
# hacks all the way down
mv ldm ldm_latent &&
sed -i -- 's/from ldm/from ldm_latent/g' *.py
@@ -55,7 +54,7 @@ COPY . /docker/
RUN python /docker/info.py /stable-diffusion/frontend/frontend.py && chmod +x /docker/mount.sh
WORKDIR /stable-diffusion
- ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
+ ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PYTHONPATH="${PYTHONPATH}:/stable-diffusion" CLI_ARGS=""
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD /docker/mount.sh && \

hlky service WebUI config (webui.yaml)

@@ -2,9 +2,12 @@
general:
gpu: 0
outdir: /outputs
- ckpt: "/cache/models/model.ckpt"
+ default_model: "Stable Diffusion v1.4"
+ default_model_config: "configs/stable-diffusion/v1-inference.yaml"
+ default_model_path: "/cache/models/model.ckpt"
fp:
- name: "embeddings/alex/embeddings_gs-11000.pt"
+ name:
GFPGAN_dir: "./src/gfpgan"
RealESRGAN_dir: "./src/realesrgan"
RealESRGAN_model: "RealESRGAN_x4plus"
@@ -15,44 +18,90 @@ general:
extra_models_cpu: False
extra_models_gpu: False
save_metadata: True
+ save_format: "png"
skip_grid: False
skip_save: False
grid_format: "jpg:95"
- save_format: "png"
n_rows: -1
no_verify_input: False
no_half: False
+ use_float16: False
precision: "autocast"
optimized: False
- optimized_turbo: False
+ optimized_turbo: True
+ optimized_config: "optimizedSD/v1-inference.yaml"
update_preview: True
- update_preview_frequency: 1
+ update_preview_frequency: 5
txt2img:
prompt:
height: 512
width: 512
- cfg_scale: 5.0
+ cfg_scale: 7.5
seed: ""
batch_count: 1
batch_size: 1
- sampling_steps: 50
+ sampling_steps: 30
- default_sampler: "k_lms"
+ default_sampler: "k_euler"
separate_prompts: False
+ update_preview: True
+ update_preview_frequency: 5
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
- use_GFPGAN: True
+ use_GFPGAN: False
- use_RealESRGAN: True
+ use_RealESRGAN: False
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
+ write_info_files: True
+ txt2vid:
+ default_model: "CompVis/stable-diffusion-v1-4"
+ custom_models_list:
+ [
+ "CompVis/stable-diffusion-v1-4",
+ "naclbit/trinart_stable_diffusion_v2",
+ "hakurei/waifu-diffusion",
+ "osanseviero/BigGAN-deep-128",
+ ]
+ prompt:
+ height: 512
+ width: 512
+ cfg_scale: 7.5
+ seed: ""
+ batch_count: 1
+ batch_size: 1
+ sampling_steps: 30
+ num_inference_steps: 200
+ default_sampler: "k_euler"
+ scheduler_name: "klms"
+ separate_prompts: False
+ update_preview: True
+ update_preview_frequency: 5
+ dynamic_preview_frequency: True
+ normalize_prompt_weights: True
+ save_individual_images: True
+ save_video: True
+ group_by_prompt: True
+ write_info_files: True
+ do_loop: False
+ save_as_jpg: False
+ use_GFPGAN: False
+ use_RealESRGAN: False
+ RealESRGAN_model: "RealESRGAN_x4plus"
+ variant_amount: 0.0
+ variant_seed: ""
+ beta_start: 0.00085
+ beta_end: 0.012
+ beta_scheduler_type: "linear"
+ max_frames: 1000
img2img:
prompt:
- sampling_steps: 50
+ sampling_steps: 30
# Adding an int to toggles enables the corresponding feature.
# 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
# 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
@@ -65,11 +114,12 @@ img2img:
# 8: jpg samples
# 9: Fix faces using GFPGAN
# 10: Upscale images using Real-ESRGAN
- sampler_name: k_lms
+ sampler_name: "k_euler"
denoising_strength: 0.45
# 0: Keep masked area
# 1: Regenerate only masked area
mask_mode: 0
+ mask_restore: False
# 0: Just resize
# 1: Crop and resize
# 2: Resize and fill
@@ -77,7 +127,7 @@ img2img:
# Leave blank for random seed:
seed: ""
ddim_eta: 0.0
- cfg_scale: 5.0
+ cfg_scale: 7.5
batch_count: 1
batch_size: 1
height: 512
@@ -87,16 +137,19 @@ img2img:
loopback: True
random_seed_loopback: True
separate_prompts: False
+ update_preview: True
+ update_preview_frequency: 5
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
- use_GFPGAN: True
+ use_GFPGAN: False
- use_RealESRGAN: True
+ use_RealESRGAN: False
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
+ write_info_files: True
gfpgan:
strength: 100

lstein service Dockerfile

@@ -6,7 +6,8 @@ SHELL ["/bin/bash", "-ceuxo", "pipefail"]
ENV DEBIAN_FRONTEND=noninteractive
- RUN conda install python=3.8.5 && conda clean -a -y
+ # now it requires python3.9
+ RUN conda install python=3.9 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
@@ -15,17 +16,39 @@ RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clea
RUN <<EOF
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion
- git reset --hard 751283a2de81bee4bb571fbabe4adb19f1d85b97
+ git reset --hard e994073b5bdfa3c77313681c5944be1544eb65b6
+ sed -i -- 's/python=3.8.5/python=3.9/g' environment.yaml
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
- ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
+ ARG BRANCH=development SHA=50d607ffea3734072a80e38b09ba0c3758af5d40
+ # ARG BRANCH=main SHA=89da371f4841f7e05da5a1672459d700c3920784
+ RUN <<EOF
+ cd stable-diffusion
+ git fetch
+ git reset --hard
+ git checkout ${BRANCH}
+ git reset --hard ${SHA}
+ conda env update --file environment.yaml -n base
+ conda clean -a -y
+ EOF
+ RUN pip uninstall opencv-python -y && pip install --prefer-binary --force-reinstall --no-cache-dir opencv-python-headless transformers==4.22
+ COPY . /docker/
+ RUN <<EOF
+ python3 /docker/info.py /stable-diffusion/static/dream_web/index.html
+ chmod +x /docker/mount.sh
+ sed -i -- 's/outputs\//\/output/g' /stable-diffusion/backend/server.py
+ EOF
+ ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
+ # run, -u to not buffer stdout / stderr
- CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
- ln -sf /cache/models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
+ CMD /docker/mount.sh && \
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
+ #python3 -u backend/server.py

lstein service README (deleted)

@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```

services/lstein/info.py (new file)

@@ -0,0 +1,10 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
""", 1)
)

services/lstein/mount.sh (new file, executable)

@@ -0,0 +1,30 @@
#!/bin/bash
set -eu
ROOT=/stable-diffusion
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
if test -f /cache/models/GFPGANv1.3.pth; then
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.3.pth"
echo "Mounted GFPGANv1.3.pth"
fi
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.9/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
REALESRGAN_WEIGHTS=/opt/conda/lib/python3.9/site-packages/realesrgan/weights
rm -rf "${REALESRGAN_WEIGHTS}"
ln -sf -T /cache/weights "${REALESRGAN_WEIGHTS}"
if [[ "${PRELOAD}" == "true" ]]; then
python3 -u scripts/preload_models.py
fi