| author | AUTOMATIC <16777216c@gmail.com> | 2022-08-31 08:04:19 +0000 |
|---|---|---|
| committer | AUTOMATIC <16777216c@gmail.com> | 2022-08-31 08:04:19 +0000 |
| commit | e38ad2ee959e73d69f451efd52417fac928e0a86 (patch) | |
| tree | 26e723b2e0eda76c43fa62121621ca55c3c355b8 /README.md | |
| parent | 765d7bc6be46064e83ed745001c3da8497b8ae86 (diff) | |
added detailed installation instructions
fixed bug with missing same dir for a new install
added ctrl+c handler to immediately stop the program instead of waiting
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 83 |
1 file changed, 55 insertions, 28 deletions
@@ -6,50 +6,77 @@ Original script with Gradio UI was written by a kind anonymous user. This is a m 
## Installing and running
-### Stable Diffusion
+You need Python and git installed to run this. I tested the installation with Python 3.8.10;
+you may be able to run it on other versions.
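If you want to check the prerequisites from a script, here is a minimal stdlib sketch (the function name is mine, not part of the repo):

```python
import shutil
import sys

def check_prereqs():
    """Report whether git is on PATH and which Python version is running.

    The instructions were tested with Python 3.8.10; other versions may work.
    """
    return {
        "git_on_path": shutil.which("git") is not None,
        "python_version": "%d.%d.%d" % sys.version_info[:3],
    }

print(check_prereqs())
```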
-This script assumes that you already have main Stable Diffusion sutff installed, assumed to be in directory `/sd`.
-If you don't have it installed, follow the guide:
+You need the Stable Diffusion model checkpoint, a large file containing the neural network weights. You
+can obtain it from the following places:
+ - [official download](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
+ - [file storage](https://drive.yerf.org/wl/?id=EBfTrmcCCUAGaQBXVIj5lJmEhjoP1tgl)
+ - [torrent](magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337)
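The checkpoint is a multi-gigabyte file; if you want to verify your download, you can hash it and compare the result against the SHA-256 checksum published alongside the official download. A minimal stdlib sketch (streamed in chunks so the file never has to fit in memory):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so a multi-gigabyte checkpoint fits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

# print(sha256_of("sd-v1-4.ckpt"))  # compare with the published checksum
```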
-- https://rentry.org/kretard
+Optionally, you can use GFPGAN to improve faces; for that you'll need to download the model from [here](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth).
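If you'd rather fetch the GFPGAN weights from a script than a browser, a stdlib sketch (the function name is mine; the URL is the one linked above):

```python
import urllib.request

# the ~348 MB weights file linked above
GFPGAN_URL = ("https://github.com/TencentARC/GFPGAN/releases/download/"
              "v1.3.0/GFPGANv1.3.pth")

def download_gfpgan(dest="GFPGANv1.3.pth"):
    """Download the GFPGAN v1.3 weights to dest."""
    urllib.request.urlretrieve(GFPGAN_URL, dest)
```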
-This repository's `webgui.py` is a replacement for `kdiff.py` from the guide.
+Instructions:
-Particularly, following files must exist:
+```commandline
+:: create a directory somewhere for stable diffusion and open cmd in it; below, the directory is assumed to be b:\src\sd
+:: make sure you are in the right directory; the command must output b:\src\sd
+echo %cd%
+
+:: install torch with CUDA support. See https://pytorch.org/get-started/locally/ for more instructions if this fails.
+pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
+
+:: check if torch supports GPU; this must output "True". You need CUDA 11 installed for this. You might be able to use
+:: a different version, but this is what I tested.
+python -c "import torch; print(torch.cuda.is_available())"
-- `/sd/configs/stable-diffusion/v1-inference.yaml`
-- `/sd/models/ldm/stable-diffusion-v1/model.ckpt`
-- `/sd/ldm/util.py`
-- `/sd/k_diffusion/__init__.py`
+:: clone Stable Diffusion repositories
+git clone https://github.com/CompVis/stable-diffusion.git
+git clone https://github.com/CompVis/taming-transformers
-### GFPGAN
+:: install requirements of Stable Diffusion
+pip install transformers==4.19.2 diffusers invisible-watermark
-If you want to use GFPGAN to improve generated faces, you need to install it separately.
-Follow instructions from https://github.com/TencentARC/GFPGAN, but when cloning it, do so into Stable Diffusion main directory, `/sd`.
-After that download [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) and put it
-into the `/sd/GFPGAN/experiments/pretrained_models` directory. If you're getting troubles with GFPGAN support, follow instructions
-from the GFPGAN's repository until `inference_gfpgan.py` script works.
+:: install k-diffusion
+pip install git+https://github.com/crowsonkb/k-diffusion.git
-The following files must exist:
+:: (optional) install GFPGAN to fix faces
+pip install git+https://github.com/TencentARC/GFPGAN.git
-- `/sd/GFPGAN/inference_gfpgan.py`
-- `/sd/GFPGAN/experiments/pretrained_models/GFPGANv1.3.pth`
+:: go into the Stable Diffusion repo directory
+cd stable-diffusion
-If the GFPGAN directory does not exist, you will not get the option to use GFPGAN in the UI. If it does exist, you will either be able
-to use it, or there will be a message in console with an error related to GFPGAN.
+:: clone web ui
+git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
-### Web UI
+:: install requirements of web ui
+pip install -r stable-diffusion-webui/requirements.txt
-Run the script as:
+:: (outside of the command line) put the Stable Diffusion model at models/ldm/stable-diffusion-v1/model.ckpt; you'll have
+:: to create one missing directory;
+:: the command below must output something like: 1 File(s) 4,265,380,512 bytes
+dir models\ldm\stable-diffusion-v1\model.ckpt
-`python webui.py`
+:: (outside of the command line) put the GFPGAN model into the same directory as the webui script;
+:: the command below must output something like: 1 File(s) 348,632,874 bytes
+dir stable-diffusion-webui\GFPGANv1.3.pth
+```
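To double-check the layout the steps above produce, here is a small Python sketch (the function name and size thresholds are mine; the expected byte counts come from the `dir` output quoted in the instructions):

```python
from pathlib import Path

# minimum plausible sizes, based on the `dir` output quoted in the instructions
EXPECTED = {
    "models/ldm/stable-diffusion-v1/model.ckpt": 4_000_000_000,  # ~4.3 GB checkpoint
    "stable-diffusion-webui/GFPGANv1.3.pth": 300_000_000,        # optional, ~348 MB
}

def check_install(root="."):
    """Return {relative path: True/False} for each expected model file."""
    results = {}
    for rel, min_size in EXPECTED.items():
        p = Path(root) / rel
        results[rel] = p.is_file() and p.stat().st_size >= min_size
    return results
```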
+
+After that, the installation is finished.
-When running the script, you must be in the main Stable Diffusion directory, `/sd`. If you cloned this repository into a subdirectory
-of `/sd`, say, the `stable-diffusion-webui` directory, you will run it as:
+Run this command to start the web UI:
-`python stable-diffusion-webui/webui.py`
+```
+python stable-diffusion-webui/webui.py
+```
+
+If you have a 4GB video card, run the command with the `--lowvram` argument:
+
+```
+python stable-diffusion-webui/webui.py --lowvram
+```
-When launching, you may get a very long warning message related to some weights not being used. You may freely ignore it.
After a while, you will get a message like this:
```