author    brkirch <brkirch@users.noreply.github.com>    2023-01-25 05:23:10 +0000
committer brkirch <brkirch@users.noreply.github.com>    2023-01-25 06:13:04 +0000
commit    e3b53fd295aca784253dfc8668ec87b537a72f43 (patch)
tree      6fb26afd730c0561a2506ead2d2c8295d326de40 /modules/shared.py
parent    84d9ce30cb427759547bc7876ed80ab91787d175 (diff)
Add UI setting for upcasting attention to float32
Adds an "Upcast cross attention layer to float32" option in the Stable Diffusion settings. This allows generating images with SD 2.1 models without --no-half or xFormers.
To make the upcast possible in the cross attention layer optimizations, several sections of code in sd_hijack_optimizations.py had to be indented so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) upcast only q and k, my finding was that most of the cross attention layer optimizations do not work correctly unless v is upcast as well.
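The upcast pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual code from sd_hijack_optimizations.py; the function name and tensor shapes are assumed:

```python
import torch

def upcast_attention_sketch(q, k, v):
    # Remember the incoming (possibly float16) dtype so the result can be
    # cast back for the rest of the pipeline.
    dtype = q.dtype
    device_type = "cuda" if q.is_cuda else "cpu"
    # Disable autocast so the float32 casts below are not silently undone
    # by an enclosing autocast context.
    with torch.autocast(device_type=device_type, enabled=False):
        # Upcast q and k (as Stable Diffusion/Diffusers do) and also v,
        # which the commit notes most optimizations require as well.
        q, k, v = q.float(), k.float(), v.float()
        scale = q.shape[-1] ** -0.5
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
        out = attn @ v
    return out.to(dtype)
```

The key point is that casting alone is not enough: under autocast, the matmuls would be re-lowered to half precision, so autocast has to be disabled around the attention math for the float32 computation to actually take place.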
Diffstat (limited to 'modules/shared.py')
 modules/shared.py | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/modules/shared.py b/modules/shared.py
index 4ce1209b..6a0b96cb 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -410,6 +410,7 @@ options_templates.update(options_section(('sd', "Stable Diffusion"), {
     "comma_padding_backtrack": OptionInfo(20, "Increase coherency by padding from the last comma within n tokens when using more than 75 tokens", gr.Slider, {"minimum": 0, "maximum": 74, "step": 1 }),
     "CLIP_stop_at_last_layers": OptionInfo(1, "Clip skip", gr.Slider, {"minimum": 1, "maximum": 12, "step": 1}),
     "extra_networks_default_multiplier": OptionInfo(1.0, "Multiplier for extra networks", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
+    "upcast_attn": OptionInfo(False, "Upcast cross attention layer to float32"),
 }))
 options_templates.update(options_section(('compatibility', "Compatibility"), {