stable-diffusion-webui-gfx803.git (branch: master)
stable-diffusion-webui by AUTOMATIC1111 with patches for gfx803 GPU and Dockerfile
Commit log for modules/sd_hijack_optimizations.py
| Commit message | Author | Date | Files | Lines (-/+) |
|----------------|--------|------|-------|-------------|
| Merge pull request #11066 from aljungberg/patch-1 | AUTOMATIC1111 | 2023-06-07 | 1 | -1 / +1 |
| Fix upcast attention dtype error. | Alexander Ljungberg | 2023-06-06 | 1 | -1 / +1 |
| Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix | AUTOMATIC1111 | 2023-06-04 | 1 | -1 / +1 |
| fix the broken line for #10990 | AUTOMATIC | 2023-06-04 | 1 | -1 / +1 |
| torch.cuda.is_available() check for SdOptimizationXformers | Vivek K. Vasishtha | 2023-06-03 | 1 | -1 / +1 |
| revert default cross attention optimization to Doggettx | AUTOMATIC | 2023-06-01 | 1 | -3 / +3 |
| revert default cross attention optimization to Doggettx | AUTOMATIC | 2023-06-01 | 1 | -3 / +3 |
| rename print_error to report, use it with together with package name | AUTOMATIC | 2023-05-31 | 1 | -2 / +1 |
| Add & use modules.errors.print_error where currently printing exception info ... | Aarni Koskela | 2023-05-29 | 1 | -4 / +2 |
| Add a couple `from __future__ import annotations`es for Py3.9 compat | Aarni Koskela | 2023-05-20 | 1 | -0 / +1 |
| Apply suggestions from code review | AUTOMATIC1111 | 2023-05-19 | 1 | -38 / +28 |
| fix linter issues | AUTOMATIC | 2023-05-18 | 1 | -1 / +1 |
| make it possible for scripts to add cross attention optimizations | AUTOMATIC | 2023-05-18 | 1 | -3 / +132 |
| Autofix Ruff W (not W605) (mostly whitespace) | Aarni Koskela | 2023-05-11 | 1 | -16 / +16 |
| ruff auto fixes | AUTOMATIC | 2023-05-10 | 1 | -7 / +7 |
| autofixes from ruff | AUTOMATIC | 2023-05-10 | 1 | -1 / +0 |
| Fix for Unet NaNs | brkirch | 2023-05-08 | 1 | -0 / +3 |
| Update sd_hijack_optimizations.py | FNSpd | 2023-03-24 | 1 | -1 / +1 |
| Update sd_hijack_optimizations.py | FNSpd | 2023-03-21 | 1 | -1 / +1 |
| sdp_attnblock_forward hijack | Pam | 2023-03-10 | 1 | -0 / +24 |
| argument to disable memory efficient for sdp | Pam | 2023-03-10 | 1 | -0 / +4 |
| scaled dot product attention | Pam | 2023-03-06 | 1 | -0 / +42 |
| Add UI setting for upcasting attention to float32 | brkirch | 2023-01-25 | 1 | -60 / +99 |
| better support for xformers flash attention on older versions of torch | AUTOMATIC | 2023-01-23 | 1 | -24 / +18 |
| add --xformers-flash-attention option & impl | Takuma Mori | 2023-01-21 | 1 | -2 / +24 |
| extra networks UI | AUTOMATIC | 2023-01-21 | 1 | -5 / +5 |
| Added license | brkirch | 2023-01-06 | 1 | -0 / +1 |
| Change sub-quad chunk threshold to use percentage | brkirch | 2023-01-06 | 1 | -9 / +9 |
| Add Birch-san's sub-quadratic attention implementation | brkirch | 2023-01-06 | 1 | -25 / +99 |
| Use other MPS optimization for large q.shape[0] * q.shape[1] | brkirch | 2022-12-21 | 1 | -4 / +6 |
| cleanup some unneeded imports for hijack files | AUTOMATIC | 2022-12-10 | 1 | -3 / +0 |
| do not replace entire unet for the resolution hack | AUTOMATIC | 2022-12-10 | 1 | -28 / +0 |
| Patch UNet Forward to support resolutions that are not multiples of 64 | Billy Cao | 2022-11-23 | 1 | -0 / +31 |
| Remove wrong self reference in CUDA support for invokeai | Cheka | 2022-10-19 | 1 | -1 / +1 |
| Update sd_hijack_optimizations.py | C43H66N12O12S2 | 2022-10-18 | 1 | -0 / +3 |
| readd xformers attnblock | C43H66N12O12S2 | 2022-10-18 | 1 | -0 / +15 |
| delete xformers attnblock | C43H66N12O12S2 | 2022-10-18 | 1 | -12 / +0 |
| Use apply_hypernetwork function | brkirch | 2022-10-11 | 1 | -10 / +4 |
| Add InvokeAI and lstein to credits, add back CUDA support | brkirch | 2022-10-11 | 1 | -0 / +13 |
| Add check for psutil | brkirch | 2022-10-11 | 1 | -4 / +15 |
| Add cross-attention optimization from InvokeAI | brkirch | 2022-10-11 | 1 | -0 / +79 |
| rename hypernetwork dir to hypernetworks to prevent clash with an old filenam... | AUTOMATIC | 2022-10-11 | 1 | -1 / +1 |
| fixes related to merge | AUTOMATIC | 2022-10-11 | 1 | -1 / +2 |
| replace duplicate code with a function | AUTOMATIC | 2022-10-11 | 1 | -29 / +15 |
| remove functorch | C43H66N12O12S2 | 2022-10-10 | 1 | -2 / +0 |
| Fix VRAM Issue by only loading in hypernetwork when selected in settings | Fampai | 2022-10-09 | 1 | -3 / +3 |
| make --force-enable-xformers work without needing --xformers | AUTOMATIC | 2022-10-08 | 1 | -1 / +1 |
| add fallback for xformers_attnblock_forward | AUTOMATIC | 2022-10-08 | 1 | -1 / +4 |
| simplify xfrmers options: --xformers to enable and that's it | AUTOMATIC | 2022-10-08 | 1 | -7 / +13 |
| emergency fix for xformers (continue + shared) | AUTOMATIC | 2022-10-08 | 1 | -8 / +8 |