path: root/modules/sd_hijack_optimizations.py
Commit message  (Author, Date, Files changed, Lines removed/added)
* Merge pull request #11066 from aljungberg/patch-1  (AUTOMATIC1111, 2023-06-07, 1, -1/+1)
|\
| * Fix upcast attention dtype error.  (Alexander Ljungberg, 2023-06-06, 1, -1/+1)  [see the upcast sketch below]
* | Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix  (AUTOMATIC1111, 2023-06-04, 1, -1/+1)
|\ \
| * | fix the broken line for #10990  (AUTOMATIC, 2023-06-04, 1, -1/+1)
| * | torch.cuda.is_available() check for SdOptimizationXformers  (Vivek K. Vasishtha, 2023-06-03, 1, -1/+1)  [see the availability-gate sketch below]
| |/
| * revert default cross attention optimization to Doggettx  (AUTOMATIC, 2023-06-01, 1, -3/+3)
* | revert default cross attention optimization to Doggettx  (AUTOMATIC, 2023-06-01, 1, -3/+3)
* | rename print_error to report, use it with together with package name  (AUTOMATIC, 2023-05-31, 1, -2/+1)
* | Add & use modules.errors.print_error where currently printing exception info ...  (Aarni Koskela, 2023-05-29, 1, -4/+2)
|/
* Add a couple `from __future__ import annotations`es for Py3.9 compat  (Aarni Koskela, 2023-05-20, 1, -0/+1)
* Apply suggestions from code review  (AUTOMATIC1111, 2023-05-19, 1, -38/+28)
* fix linter issues  (AUTOMATIC, 2023-05-18, 1, -1/+1)
* make it possible for scripts to add cross attention optimizations  (AUTOMATIC, 2023-05-18, 1, -3/+132)
* Autofix Ruff W (not W605) (mostly whitespace)  (Aarni Koskela, 2023-05-11, 1, -16/+16)
* ruff auto fixes  (AUTOMATIC, 2023-05-10, 1, -7/+7)
* autofixes from ruff  (AUTOMATIC, 2023-05-10, 1, -1/+0)
* Fix for Unet NaNs  (brkirch, 2023-05-08, 1, -0/+3)
* Update sd_hijack_optimizations.py  (FNSpd, 2023-03-24, 1, -1/+1)
* Update sd_hijack_optimizations.py  (FNSpd, 2023-03-21, 1, -1/+1)
* sdp_attnblock_forward hijack  (Pam, 2023-03-10, 1, -0/+24)
* argument to disable memory efficient for sdp  (Pam, 2023-03-10, 1, -0/+4)
* scaled dot product attention  (Pam, 2023-03-06, 1, -0/+42)  [see the SDP sketch below]
* Add UI setting for upcasting attention to float32  (brkirch, 2023-01-25, 1, -60/+99)  [see the upcast sketch below]
* better support for xformers flash attention on older versions of torch  (AUTOMATIC, 2023-01-23, 1, -24/+18)
* add --xformers-flash-attention option & impl  (Takuma Mori, 2023-01-21, 1, -2/+24)
* extra networks UI  (AUTOMATIC, 2023-01-21, 1, -5/+5)
* Added license  (brkirch, 2023-01-06, 1, -0/+1)
* Change sub-quad chunk threshold to use percentage  (brkirch, 2023-01-06, 1, -9/+9)
* Add Birch-san's sub-quadratic attention implementation  (brkirch, 2023-01-06, 1, -25/+99)  [see the chunking sketch below]
* Use other MPS optimization for large q.shape[0] * q.shape[1]  (brkirch, 2022-12-21, 1, -4/+6)
* cleanup some unneeded imports for hijack files  (AUTOMATIC, 2022-12-10, 1, -3/+0)
* do not replace entire unet for the resolution hack  (AUTOMATIC, 2022-12-10, 1, -28/+0)
* Patch UNet Forward to support resolutions that are not multiples of 64  (Billy Cao, 2022-11-23, 1, -0/+31)
* Remove wrong self reference in CUDA support for invokeai  (Cheka, 2022-10-19, 1, -1/+1)
* Update sd_hijack_optimizations.py  (C43H66N12O12S2, 2022-10-18, 1, -0/+3)
* readd xformers attnblock  (C43H66N12O12S2, 2022-10-18, 1, -0/+15)
* delete xformers attnblock  (C43H66N12O12S2, 2022-10-18, 1, -12/+0)
* Use apply_hypernetwork function  (brkirch, 2022-10-11, 1, -10/+4)
* Add InvokeAI and lstein to credits, add back CUDA support  (brkirch, 2022-10-11, 1, -0/+13)
* Add check for psutil  (brkirch, 2022-10-11, 1, -4/+15)  [see the psutil sketch below]
* Add cross-attention optimization from InvokeAI  (brkirch, 2022-10-11, 1, -0/+79)
* rename hypernetwork dir to hypernetworks to prevent clash with an old filenam...  (AUTOMATIC, 2022-10-11, 1, -1/+1)
* fixes related to merge  (AUTOMATIC, 2022-10-11, 1, -1/+2)
* replace duplicate code with a function  (AUTOMATIC, 2022-10-11, 1, -29/+15)
* remove functorch  (C43H66N12O12S2, 2022-10-10, 1, -2/+0)
* Fix VRAM Issue by only loading in hypernetwork when selected in settings  (Fampai, 2022-10-09, 1, -3/+3)
* make --force-enable-xformers work without needing --xformers  (AUTOMATIC, 2022-10-08, 1, -1/+1)
* add fallback for xformers_attnblock_forward  (AUTOMATIC, 2022-10-08, 1, -1/+4)
* simplify xfrmers options: --xformers to enable and that's it  (AUTOMATIC, 2022-10-08, 1, -7/+13)
* emergency fix for xformers (continue + shared)  (AUTOMATIC, 2022-10-08, 1, -8/+8)
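
The "torch.cuda.is_available() check for SdOptimizationXformers" entry (Vivek K. Vasishtha, 2023-06-03) and its follow-up fix gate the xformers optimizer on an actual CUDA device rather than on the package alone, so CPU-only machines do not select an optimizer they cannot run. A minimal sketch of that gate; `SdOptimizationXformers` is the class named in the commit, but the helper and attributes around it are assumptions for illustration, not the file's exact code.

```python
# Sketch: gate the xformers optimizer on CUDA availability.
# Only the class name SdOptimizationXformers comes from the log entry;
# the helper and attributes are illustrative assumptions.
import torch


def xformers_importable() -> bool:
    """Hypothetical stand-in for the webui's xformers import/flag checks."""
    try:
        import xformers  # noqa: F401
        return True
    except ImportError:
        return False


class SdOptimizationXformers:
    name = "xformers"

    def is_available(self) -> bool:
        # The 2023-06-03 change: also require a real CUDA device, so
        # CPU-only setups skip this optimization instead of crashing.
        return xformers_importable() and torch.cuda.is_available()
```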
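
The "scaled dot product attention" entry (Pam, 2023-03-06) added an optimization built on PyTorch 2.0's fused `torch.nn.functional.scaled_dot_product_attention` kernel, and "argument to disable memory efficient for sdp" exposed a variant that avoids the memory-efficient backend. A sketch of how a cross-attention forward can route through that API, assuming an ldm-style `CrossAttention` module with `heads`, `to_q`/`to_k`/`to_v`, and `to_out`; this illustrates the technique, not the repository's exact forward.

```python
# Sketch: cross-attention forward routed through PyTorch 2.0's fused SDP
# kernel. Assumes an ldm-style CrossAttention module; illustrative only.
import torch
import torch.nn.functional as F


def sdp_attention_forward(self, x, context=None):
    h = self.heads
    context = x if context is None else context

    q = self.to_q(x)
    k = self.to_k(context)
    v = self.to_v(context)

    # (batch, seq, heads*dim_head) -> (batch, heads, seq, dim_head)
    b = q.shape[0]
    q, k, v = (t.view(b, -1, h, t.shape[-1] // h).transpose(1, 2) for t in (q, k, v))

    # The fused kernel dispatches to flash / memory-efficient / math
    # backends internally depending on hardware and dtypes.
    out = F.scaled_dot_product_attention(q, k, v)

    # (batch, heads, seq, dim_head) -> (batch, seq, heads*dim_head)
    out = out.transpose(1, 2).reshape(b, -1, h * out.shape[-1])
    return self.to_out(out)
```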
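
"Add UI setting for upcasting attention to float32" (brkirch, 2023-01-25) and the later "Fix upcast attention dtype error." both concern computing the attention similarity and softmax in float32 while the model weights stay float16, which avoids the overflow/NaN failures that fp16 softmax is prone to. A hedged sketch of the pattern; the function name and signature are illustrative, not the file's exact code.

```python
# Sketch of the upcast-attention pattern: do the q@k matmul and the softmax
# in float32 even when q/k arrive in float16, then cast back.
import torch


def attention_scores(q: torch.Tensor, k: torch.Tensor, scale: float,
                     upcast: bool = True) -> torch.Tensor:
    in_dtype = q.dtype
    if upcast:
        q, k = q.float(), k.float()  # fp16 -> fp32 before the matmul
    sim = torch.einsum('b i d, b j d -> b i j', q, k) * scale
    # Softmax is the numerically risky step for fp16 activations.
    return sim.softmax(dim=-1).to(in_dtype)  # cast back for the value matmul
```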
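
"Add Birch-san's sub-quadratic attention implementation" (brkirch, 2023-01-06) brought in an attention that processes queries in chunks so the full seq-by-seq score matrix is never materialized, and "Change sub-quad chunk threshold to use percentage" switched the chunking trigger from a fixed byte count to a percentage of available memory. A deliberately simplified sketch of query chunking only; the real implementation also chunks keys/values and merges partial softmaxes in a numerically stable way.

```python
# Simplified sketch of sub-quadratic attention: process queries in chunks so
# peak memory grows with chunk_size * seq_len instead of seq_len**2.
# Assumes q, k, v are (batch, seq, dim) with a shared head dim.
import torch


def chunked_attention(q, k, v, chunk_size=1024):
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[1], chunk_size):
        qc = q[:, i:i + chunk_size]  # (batch, chunk, dim)
        # Only a (batch, chunk, seq) slice of the score matrix exists at once.
        attn = (qc @ k.transpose(-2, -1) * scale).softmax(dim=-1)
        out[:, i:i + chunk_size] = attn @ v
    return out
```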
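
"Add check for psutil" (brkirch, 2022-10-11) made the InvokeAI cross-attention path degrade gracefully when psutil is not installed instead of failing on import. The guarded-import pattern, sketched; the fallback value here is an assumption for illustration.

```python
# Sketch of the guarded psutil import: use available system RAM when the
# package is installed, otherwise fall back to a conservative assumption.
try:
    import psutil
    mem_available_gb = psutil.virtual_memory().available / 2**30
except ImportError:
    mem_available_gb = 4.0  # assumed fallback for illustration
```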