stable-diffusion-webui-gfx803.git (branch: master)
stable-diffusion-webui by AUTOMATIC1111 with patches for gfx803 GPU and Dockerfile
Log of modules/sd_hijack_optimizations.py
Date        Author          Files  Lines    Commit message
2023-05-11  Aarni Koskela   1      -16/+16  Autofix Ruff W (not W605) (mostly whitespace)
2023-05-10  AUTOMATIC       1      -7/+7    ruff auto fixes
2023-05-10  AUTOMATIC       1      -1/+0    autofixes from ruff
2023-05-08  brkirch         1      -0/+3    Fix for Unet NaNs
2023-03-24  FNSpd           1      -1/+1    Update sd_hijack_optimizations.py
2023-03-21  FNSpd           1      -1/+1    Update sd_hijack_optimizations.py
2023-03-10  Pam             1      -0/+24   sdp_attnblock_forward hijack
2023-03-10  Pam             1      -0/+4    argument to disable memory efficient for sdp
2023-03-06  Pam             1      -0/+42   scaled dot product attention
2023-01-25  brkirch         1      -60/+99  Add UI setting for upcasting attention to float32
2023-01-23  AUTOMATIC       1      -24/+18  better support for xformers flash attention on older versions of torch
2023-01-21  Takuma Mori     1      -2/+24   add --xformers-flash-attention option & impl
2023-01-21  AUTOMATIC       1      -5/+5    extra networks UI
2023-01-06  brkirch         1      -0/+1    Added license
2023-01-06  brkirch         1      -9/+9    Change sub-quad chunk threshold to use percentage
2023-01-06  brkirch         1      -25/+99  Add Birch-san's sub-quadratic attention implementation
2022-12-21  brkirch         1      -4/+6    Use other MPS optimization for large q.shape[0] * q.shape[1]
2022-12-10  AUTOMATIC       1      -3/+0    cleanup some unneeded imports for hijack files
2022-12-10  AUTOMATIC       1      -28/+0   do not replace entire unet for the resolution hack
2022-11-23  Billy Cao       1      -0/+31   Patch UNet Forward to support resolutions that are not multiples of 64
2022-10-19  Cheka           1      -1/+1    Remove wrong self reference in CUDA support for invokeai
2022-10-18  C43H66N12O12S2  1      -0/+3    Update sd_hijack_optimizations.py
2022-10-18  C43H66N12O12S2  1      -0/+15   readd xformers attnblock
2022-10-18  C43H66N12O12S2  1      -12/+0   delete xformers attnblock
2022-10-11  brkirch         1      -10/+4   Use apply_hypernetwork function
2022-10-11  brkirch         1      -0/+13   Add InvokeAI and lstein to credits, add back CUDA support
2022-10-11  brkirch         1      -4/+15   Add check for psutil
2022-10-11  brkirch         1      -0/+79   Add cross-attention optimization from InvokeAI
2022-10-11  AUTOMATIC       1      -1/+1    rename hypernetwork dir to hypernetworks to prevent clash with an old filenam...
2022-10-11  AUTOMATIC       1      -1/+2    fixes related to merge
2022-10-11  AUTOMATIC       1      -29/+15  replace duplicate code with a function
2022-10-10  C43H66N12O12S2  1      -2/+0    remove functorch
2022-10-09  Fampai          1      -3/+3    Fix VRAM Issue by only loading in hypernetwork when selected in settings
2022-10-08  AUTOMATIC       1      -1/+1    make --force-enable-xformers work without needing --xformers
2022-10-08  AUTOMATIC       1      -1/+4    add fallback for xformers_attnblock_forward
2022-10-08  AUTOMATIC       1      -7/+13   simplify xfrmers options: --xformers to enable and that's it
2022-10-08  AUTOMATIC       1      -8/+8    emergency fix for xformers (continue + shared)
2022-10-08  AUTOMATIC1111   1      -1/+37   Merge pull request #1851 from C43H66N12O12S2/flash
2022-10-08  C43H66N12O12S2  1      -3/+8    update sd_hijack_opt to respect new env variables
2022-10-08  C43H66N12O12S2  1      -1/+1    Update sd_hijack_optimizations.py
2022-10-08  C43H66N12O12S2  1      -2/+18   add xformers attnblock and hypernetwork support
2022-10-08  C43H66N12O12S2  1      -25/+3   switch to the proper way of calling xformers
2022-10-07  C43H66N12O12S2  1      -1/+38   add xformers attention
2022-10-08  brkirch         1      -4/+14   Add hypernetwork support to split cross attention v1
2022-10-07  AUTOMATIC       1      -2/+15   added support for hypernetworks (???)
2022-10-02  Jairo Correa    1      -8/+0    Merge branch 'master' into stable
2022-10-02  AUTOMATIC       1      -0/+164  initial support for training textual inversion
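
Several of the later entries above track the file's successive attention back-ends (the InvokeAI cross-attention optimization, Birch-san's sub-quadratic attention, xformers, and finally PyTorch's scaled dot product attention). For orientation only, below is a minimal sketch of the PyTorch 2.x call that the "scaled dot product attention" and "sdp_attnblock_forward hijack" commits build on; the tensor shapes are hypothetical stand-ins, and the webui's actual hijack code reshapes q/k/v from the UNet's attention layers before making an equivalent call.

    import torch
    import torch.nn.functional as F

    # Hypothetical shapes: (batch, heads, query tokens, head_dim) for q,
    # with 77 text tokens for k/v, roughly like SD cross-attention.
    q = torch.randn(2, 8, 4096, 40)
    k = torch.randn(2, 8, 77, 40)
    v = torch.randn(2, 8, 77, 40)

    # PyTorch >= 2.0 dispatches to a fused flash / memory-efficient kernel
    # when one is available for the given dtype and device, otherwise it
    # falls back to the plain math implementation.
    out = F.scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([2, 8, 4096, 40])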