| Age | Commit message | Author | Lines |
|---|---|---|---|
| 2023-01-06 | Add Birch-san's sub-quadratic attention implementation | brkirch | -25/+99 |
| 2022-12-20 | Use other MPS optimization for large q.shape[0] * q.shape[1] | brkirch | -4/+6 |
| | Check if q.shape[0] * q.shape[1] is 2**18 or larger and use the lower memory usage MPS optimization if it is. This should prevent most crashes that were occurring at certain resolutions (e.g. 1024x1024, 2048x512, 512x2048). Also included is a change to check slice_size and prevent it from being divisible by 4096, which also results in a crash. Otherwise a crash can occur at 1024x512 or 512x1024 resolution. | | |
| 2022-12-10 | cleanup some unneeded imports for hijack files | AUTOMATIC | -3/+0 |
| 2022-12-10 | do not replace entire unet for the resolution hack | AUTOMATIC | -28/+0 |
| 2022-11-23 | Patch UNet Forward to support resolutions that are not multiples of 64 | Billy Cao | -0/+31 |
| | Also modified the UI to no longer step in increments of 64 | | |
| 2022-10-19 | Remove wrong self reference in CUDA support for invokeai | Cheka | -1/+1 |
| 2022-10-18 | Update sd_hijack_optimizations.py | C43H66N12O12S2 | -0/+3 |
| 2022-10-18 | readd xformers attnblock | C43H66N12O12S2 | -0/+15 |
| 2022-10-18 | delete xformers attnblock | C43H66N12O12S2 | -12/+0 |
| 2022-10-11 | Use apply_hypernetwork function | brkirch | -10/+4 |
| 2022-10-11 | Add InvokeAI and lstein to credits, add back CUDA support | brkirch | -0/+13 |
| 2022-10-11 | Add check for psutil | brkirch | -4/+15 |
| 2022-10-11 | Add cross-attention optimization from InvokeAI | brkirch | -0/+79 |
| | * Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS) * Add command line option for it * Make it default when CUDA is unavailable | | |
| 2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have | AUTOMATIC | -1/+1 |
| 2022-10-11 | fixes related to merge | AUTOMATIC | -1/+2 |
| 2022-10-11 | replace duplicate code with a function | AUTOMATIC | -29/+15 |
| 2022-10-10 | remove functorch | C43H66N12O12S2 | -2/+0 |
| 2022-10-09 | Fix VRAM Issue by only loading in hypernetwork when selected in settings | Fampai | -3/+3 |
| 2022-10-08 | make --force-enable-xformers work without needing --xformers | AUTOMATIC | -1/+1 |
| 2022-10-08 | add fallback for xformers_attnblock_forward | AUTOMATIC | -1/+4 |
| 2022-10-08 | simplify xformers options: --xformers to enable and that's it | AUTOMATIC | -7/+13 |
| 2022-10-08 | emergency fix for xformers (continue + shared) | AUTOMATIC | -8/+8 |
| 2022-10-08 | Merge pull request #1851 from C43H66N12O12S2/flash | AUTOMATIC1111 | -1/+37 |
| | xformers attention | | |
| 2022-10-08 | update sd_hijack_opt to respect new env variables | C43H66N12O12S2 | -3/+8 |
| 2022-10-08 | Update sd_hijack_optimizations.py | C43H66N12O12S2 | -1/+1 |
| 2022-10-08 | add xformers attnblock and hypernetwork support | C43H66N12O12S2 | -2/+18 |
| 2022-10-08 | Add hypernetwork support to split cross attention v1 | brkirch | -4/+14 |
| | * Add hypernetwork support to split_cross_attention_forward_v1 * Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device | | |
| 2022-10-08 | switch to the proper way of calling xformers | C43H66N12O12S2 | -25/+3 |
| 2022-10-07 | added support for hypernetworks (???) | AUTOMATIC | -2/+15 |
| 2022-10-07 | add xformers attention | C43H66N12O12S2 | -1/+38 |
| 2022-10-02 | Merge branch 'master' into stable | Jairo Correa | -0/+156 |
| 2022-10-02 | initial support for training textual inversion | AUTOMATIC | -0/+164 |
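
The 2022-12-20 MPS entry describes two guards rather than showing them. A minimal sketch of that logic follows, grounded only in the commit message; `choose_mps_path` and the way `slice_size` is adjusted are illustrative assumptions, not the repository's actual code.

```python
import torch

def choose_mps_path(q: torch.Tensor, slice_size: int) -> tuple[bool, int]:
    """Sketch of the checks described in the 2022-12-20 commit message.

    Returns whether to take the lower-memory MPS attention path and an
    adjusted slice size. Names are illustrative, not the repo's API.
    """
    # Large attention shapes (q.shape[0] * q.shape[1] >= 2**18) crashed on MPS
    # at resolutions such as 1024x1024, 2048x512 and 512x2048, so the
    # lower-memory optimization is used above that threshold.
    use_low_memory = q.shape[0] * q.shape[1] >= 2**18

    # A slice size divisible by 4096 also crashed (e.g. at 1024x512 or
    # 512x1024), so nudge it off that boundary.
    if slice_size % 4096 == 0:
        slice_size -= 1

    return use_low_memory, slice_size
```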
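
Several 2022-10-08 and 2022-10-11 entries add hypernetwork support to the cross-attention paths. As a rough illustration of the idea rather than the repository's actual code, a hypernetwork provides a pair of small modules that transform the conditioning tensor separately before the key and value projections; `project_kv`, `hypernetwork_layers`, `to_k` and `to_v` below are assumed names.

```python
import torch
import torch.nn as nn

def project_kv(context: torch.Tensor,
               to_k: nn.Linear,
               to_v: nn.Linear,
               hypernetwork_layers=None):
    """Illustrative sketch: apply optional hypernetwork modules to the
    conditioning tensor before the k/v projections of cross-attention."""
    if hypernetwork_layers is not None:
        # One module transforms the context for keys, another for values.
        k = to_k(hypernetwork_layers[0](context))
        v = to_v(hypernetwork_layers[1](context))
    else:
        k = to_k(context)
        v = to_v(context)
    return k, v
```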
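
The xformers entries from 2022-10-07 and 2022-10-08 route attention through the library's memory-efficient kernel. A minimal sketch of that call, assuming q, k and v have already been projected and reshaped to (batch, tokens, heads, head_dim); the surrounding reshapes and the module this would replace are omitted.

```python
import torch
import xformers.ops

def xformers_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Memory-efficient attention avoids materializing the full attention
    # matrix; inputs are expected as (batch, tokens, heads, head_dim).
    return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
```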