path: root/modules/sd_hijack_optimizations.py
Commit message | Author | Date | Files | Lines
* Remove wrong self reference in CUDA support for invokeai | Cheka | 2022-10-19 | 1 | -1/+1
|
* Update sd_hijack_optimizations.py | C43H66N12O12S2 | 2022-10-18 | 1 | -0/+3
|
* readd xformers attnblock | C43H66N12O12S2 | 2022-10-18 | 1 | -0/+15
|
* delete xformers attnblock | C43H66N12O12S2 | 2022-10-18 | 1 | -12/+0
|
* Use apply_hypernetwork function | brkirch | 2022-10-11 | 1 | -10/+4
|
* Add InvokeAI and lstein to credits, add back CUDA support | brkirch | 2022-10-11 | 1 | -0/+13
|
* Add check for psutil | brkirch | 2022-10-11 | 1 | -4/+15
|
* Add cross-attention optimization from InvokeAI | brkirch | 2022-10-11 | 1 | -0/+79
|       * Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
|       * Add command line option for it
|       * Make it default when CUDA is unavailable
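The InvokeAI optimization referenced in the entry above is a sliced (chunked) cross-attention computation. Below is a minimal sketch of the idea only; the function name, fixed slice size, and (batch*heads, tokens, head_dim) layout are assumptions for illustration, not the repository's actual code, which sizes its slices from available memory (hence the psutil check above).

import torch

def sliced_cross_attention(q, k, v, slice_size=1024):
    """Compute softmax(q @ k^T) @ v one query slice at a time, so the full
    (q_tokens x k_tokens) attention matrix is never held in memory at once."""
    # q: (batch*heads, q_tokens, head_dim); k, v: (batch*heads, k_tokens, head_dim)
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    for start in range(0, q.shape[1], slice_size):
        end = min(start + slice_size, q.shape[1])
        # attention scores for this query slice only: (batch*heads, slice, k_tokens)
        attn = torch.bmm(q[:, start:end] * scale, k.transpose(-1, -2))
        out[:, start:end] = torch.bmm(attn.softmax(dim=-1), v)
    return out

Splitting over the query dimension trades a little launch overhead for a much lower peak allocation, which is why this approach helps most on MPS and other memory-constrained backends.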
* rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have | AUTOMATIC | 2022-10-11 | 1 | -1/+1
|
* fixes related to merge | AUTOMATIC | 2022-10-11 | 1 | -1/+2
|
* replace duplicate code with a function | AUTOMATIC | 2022-10-11 | 1 | -29/+15
|
* remove functorch | C43H66N12O12S2 | 2022-10-10 | 1 | -2/+0
|
* Fix VRAM issue by only loading in hypernetwork when selected in settings | Fampai | 2022-10-09 | 1 | -3/+3
|
* make --force-enable-xformers work without needing --xformers | AUTOMATIC | 2022-10-08 | 1 | -1/+1
|
* add fallback for xformers_attnblock_forward | AUTOMATIC | 2022-10-08 | 1 | -1/+4
|
* simplify xformers options: --xformers to enable and that's it | AUTOMATIC | 2022-10-08 | 1 | -7/+13
|
* emergency fix for xformers (continue + shared) | AUTOMATIC | 2022-10-08 | 1 | -8/+8
|
* Merge pull request #1851 from C43H66N12O12S2/flash | AUTOMATIC1111 | 2022-10-08 | 1 | -1/+37
|\
| |     xformers attention
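The branch merged here swaps the hand-rolled attention for xformers' fused kernel. A hedged sketch of the call pattern follows, assuming a (batch*heads, tokens, head_dim) layout and a simple import-guard fallback; the repository's real gating via --xformers / --force-enable-xformers and environment checks is more involved.

import torch

try:
    import xformers.ops
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False

def attention(q, k, v):
    # q, k, v: (batch*heads, tokens, head_dim)
    if HAS_XFORMERS:
        # fused, memory-efficient kernel; never materializes the full score matrix
        return xformers.ops.memory_efficient_attention(q, k, v)
    # plain PyTorch fallback
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(torch.bmm(q * scale, k.transpose(-1, -2)), dim=-1)
    return torch.bmm(attn, v)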
| * update sd_hijack_opt to respect new env variables | C43H66N12O12S2 | 2022-10-08 | 1 | -3/+8
| |
| * Update sd_hijack_optimizations.py | C43H66N12O12S2 | 2022-10-08 | 1 | -1/+1
| |
| * add xformers attnblock and hypernetwork support | C43H66N12O12S2 | 2022-10-08 | 1 | -2/+18
| |
| * switch to the proper way of calling xformers | C43H66N12O12S2 | 2022-10-08 | 1 | -25/+3
| |
| * add xformers attention | C43H66N12O12S2 | 2022-10-07 | 1 | -1/+38
| |
* | Add hypernetwork support to split cross attention v1 | brkirch | 2022-10-08 | 1 | -4/+14
| |     * Add hypernetwork support to split_cross_attention_forward_v1
| |     * Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
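The hypernetwork hook these commits wire into the attention functions amounts to passing the text conditioning through an optional pair of learned modules before the key/value projections. This is a minimal sketch with assumed names; the repository's helper is the apply_hypernetwork function mentioned above, but the exact signature and calling convention here are illustrative only.

def apply_hypernetwork(hypernetwork_layers, context):
    # hypernetwork_layers: optional (k_module, v_module) pair of trained modules
    # matching this context width; None means no hypernetwork is loaded.
    if hypernetwork_layers is None:
        return context, context
    k_module, v_module = hypernetwork_layers
    return k_module(context), v_module(context)

# Inside a cross-attention forward, the projections would then read roughly:
#   context_k, context_v = apply_hypernetwork(hypernetwork_layers, context)
#   k = self.to_k(context_k)
#   v = self.to_v(context_v)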
* | added support for hypernetworks (???) | AUTOMATIC | 2022-10-07 | 1 | -2/+15
|/
* Merge branch 'master' into stable | Jairo Correa | 2022-10-02 | 1 | -8/+0
|
* initial support for training textual inversion | AUTOMATIC | 2022-10-02 | 1 | -0/+164