author    AUTOMATIC <16777216c@gmail.com>  2022-08-29 07:23:57 +0000
committer AUTOMATIC <16777216c@gmail.com>  2022-08-29 07:23:57 +0000
commit    d7acb9975462acc53deb36369d0265cc7a2d446d (patch)
tree      7637e9879efefd20c4374650286105192f55b357
parent    808590654eba1414ec9753a33715cdac0be436d4 (diff)
readme for --lowvram
-rw-r--r--  README.md | 14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/README.md b/README.md
index 8e40dc68..3ce53db6 100644
--- a/README.md
+++ b/README.md
@@ -248,3 +248,17 @@ print("Seed was: " + str(processed.seed))
display(processed.images, processed.seed, processed.info)
```
+
+### `--lowvram`
+Optimizations for GPUs with low VRAM. This should make it possible to generate 512x512 images on video cards with 4GB of memory.
+
+The original idea behind these optimizations comes from basujindal: https://github.com/basujindal/stable-diffusion. The model is split into modules,
+and only one module is kept in GPU memory; when another module needs to run, the previous one is moved out of GPU memory.
+
+As one would expect, the nature of these optimizations makes processing slower -- about 10 times slower
+than normal operation on my RTX 3090.
+
+This is an independent implementation that does not require any modification to the original Stable Diffusion code, and
+it keeps all the code concentrated in one place rather than scattered around the program.
+
+
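
For illustration, here is a minimal PyTorch sketch of the module-swapping idea the added README text describes: keep modules off the GPU by default and move one onto the GPU only while it runs, evicting whatever was loaded before. This is not the code this commit documents; the toy model, the per-child hook granularity, and the device handling are assumptions made purely for the example.

```python
# Minimal sketch of the module-swapping idea (not the actual webui code).
# Every module normally lives on the CPU; a forward pre-hook moves a module
# to the GPU just before its forward pass and pushes the previous one back.
import torch
import torch.nn as nn

gpu = torch.device("cuda")
cpu = torch.device("cpu")
currently_loaded = None  # the single module allowed to occupy GPU memory


def swap_in(module, _inputs):
    """Forward pre-hook: load `module` onto the GPU, evict the previous one."""
    global currently_loaded
    if currently_loaded is not None and currently_loaded is not module:
        currently_loaded.to(cpu)
    module.to(gpu)
    currently_loaded = module


# Hypothetical model: treat each top-level child as one swappable "module".
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
for child in model.children():
    child.register_forward_pre_hook(swap_in)

x = torch.randn(1, 512, device=gpu)
y = model(x)  # each child is moved to the GPU just before it runs
```

The trade-off shown here matches the README text: GPU memory holds only one module at a time, at the cost of repeated CPU/GPU transfers, which is why the option is markedly slower than normal operation.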