Commit message | Author | Age | Files | Lines
---|---|---|---|---
Add UI setting for upcasting attention to float32 | brkirch | 2023-01-25 | 1 | -2/+2
  Adds an "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows generating images with SD 2.1 models without --no-half or xFormers. To make the upcast cross attention layer optimizations possible, it was necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager could be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) upcast only q and k, my finding was that most of the cross attention layer optimizations could not function unless v was upcast as well.
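  A minimal sketch of the mechanism the commit message describes: upcast q, k, and v to float32, run the attention math inside a context manager that disables autocast (so the cast is not silently undone), and cast the result back. This is an illustrative reconstruction, not the repo's actual code; the `upcast_attn` flag stands in for the UI setting and `_attention_math` is a hypothetical helper.

  ```python
  # Illustrative sketch; not the code from sd_hijack_optimizations.py.
  import torch

  def attention(q, k, v, upcast_attn=True):
      dtype = q.dtype
      if upcast_attn:
          # Disable autocast so the float32 tensors are not re-cast to
          # half precision inside the block.
          with torch.autocast(device_type="cuda", enabled=False):
              q, k, v = q.float(), k.float(), v.float()  # upcast q, k, and v
              out = _attention_math(q, k, v)
      else:
          out = _attention_math(q, k, v)
      return out.to(dtype)  # cast the result back to the original dtype

  def _attention_math(q, k, v):
      sim = torch.einsum("b i d, b j d -> b i j", q, k) * q.shape[-1] ** -0.5
      return torch.einsum("b i j, b j d -> b i d", sim.softmax(dim=-1), v)
  ```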
Remove fallback for Protocol import and remove instances of Protocol in code | AUTOMATIC | 2023-01-09 | 1 | -8/+11
  Also adds some whitespace between functions to be in line with other code in the repo.
Add fallback for Protocol import | ProGamerGov | 2023-01-07 | 1 | -1/+7
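  The fallback added here (and removed two days later, per the entry above) is presumably the standard try/except import pattern, since `typing.Protocol` only exists on Python 3.8+. A sketch of that pattern, with a hypothetical protocol class for illustration:

  ```python
  # typing.Protocol is available on Python 3.8+; older interpreters
  # need the typing_extensions backport.
  try:
      from typing import Protocol
  except ImportError:
      from typing_extensions import Protocol

  class SliceableTensor(Protocol):  # hypothetical protocol, for illustration
      def narrow(self, dim: int, start: int, length: int): ...
  ```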
Added license | brkirch | 2023-01-06 | 1 | -1/+1
Use narrow instead of dynamic_slice | brkirch | 2023-01-06 | 1 | -15/+19
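  A sketch of what such a swap might look like, assuming the original helper mirrored `jax.lax.dynamic_slice` as a chain of Python slices (the `dynamic_slice` below is a reconstruction, not the removed code):

  ```python
  import torch

  def dynamic_slice(x, starts, sizes):
      # Reconstruction of a jax.lax.dynamic_slice-style helper: slice every
      # dimension d of x from starts[d] for sizes[d] elements.
      return x[tuple(slice(s, s + sz) for s, sz in zip(starts, sizes))]

  x = torch.arange(24).reshape(4, 6)
  a = dynamic_slice(x, (1, 2), (2, 3))
  # Tensor.narrow(dim, start, length) expresses the same selection one
  # dimension at a time and returns a view of the input.
  b = x.narrow(0, 1, 2).narrow(1, 2, 3)
  assert torch.equal(a, b)
  ```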
Add Birch-san's sub-quadratic attention implementation | brkirch | 2023-01-06 | 1 | -0/+201
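  Birch-san's implementation is based on Rabe & Staats, "Self-attention Does Not Need O(n²) Memory": queries and keys/values are processed in chunks with a running (streaming) softmax, so the full tokens-by-tokens attention matrix is never materialized. A minimal sketch of the idea; the chunk sizes are illustrative, and the real file also handles masking, chunk-size heuristics, and the upcasting described above.

  ```python
  import torch

  def sub_quadratic_attention(q, k, v, q_chunk=1024, kv_chunk=4096):
      # q, k, v: (batch, tokens, dim). Memory scales with the chunk sizes
      # rather than with tokens squared.
      scale = q.shape[-1] ** -0.5
      out = torch.empty_like(q)
      for i in range(0, q.shape[1], q_chunk):
          qc = q[:, i:i + q_chunk] * scale
          acc = torch.zeros_like(qc)  # running weighted sum of v
          denom = torch.zeros(*qc.shape[:2], 1, dtype=q.dtype, device=q.device)
          running_max = torch.full_like(denom, float("-inf"))
          for j in range(0, k.shape[1], kv_chunk):
              kc, vc = k[:, j:j + kv_chunk], v[:, j:j + kv_chunk]
              sim = torch.einsum("b i d, b j d -> b i j", qc, kc)
              new_max = torch.maximum(running_max, sim.amax(dim=-1, keepdim=True))
              correction = torch.exp(running_max - new_max)  # rescale old partial sums
              p = torch.exp(sim - new_max)
              acc = acc * correction + torch.einsum("b i j, b j d -> b i d", p, vc)
              denom = denom * correction + p.sum(dim=-1, keepdim=True)
              running_max = new_max
          out[:, i:i + q_chunk] = acc / denom
      return out
  ```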