stable-diffusion-webui-gfx803.git
master
stable-diffusion-webui by AUTOMATIC1111 with patches for gfx803 GPU and Dockerfile
path: root/modules/sd_hijack.py
| Age        | Commit message                                                                   | Author         | Lines     |
|------------|----------------------------------------------------------------------------------|----------------|-----------|
| 2022-10-18 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -1/+1     |
| 2022-10-18 | use legacy attnblock                                                             | C43H66N12O12S2 | -1/+1     |
| 2022-10-15 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -1/+1     |
| 2022-10-12 | fix iterator bug for #2295                                                       | AUTOMATIC      | -4/+4     |
| 2022-10-12 | Account when lines are mismatched                                                | hentailord85ez | -1/+11    |
| 2022-10-11 | Add check for psutil                                                             | brkirch        | -2/+8     |
| 2022-10-11 | Add cross-attention optimization from InvokeAI                                   | brkirch        | -1/+4     |
| 2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filenam... | AUTOMATIC      | -1/+1     |
| 2022-10-11 | Merge branch 'master' into hypernetwork-training                                 | AUTOMATIC      | -30/+93   |
| 2022-10-11 | Comma backtrack padding (#2192)                                                  | hentailord85ez | -1/+18    |
| 2022-10-10 | allow pascal onwards                                                             | C43H66N12O12S2 | -1/+1     |
| 2022-10-10 | Add back in output hidden states parameter                                       | hentailord85ez | -1/+1     |
| 2022-10-10 | Pad beginning of textual inversion embedding                                     | hentailord85ez | -0/+5     |
| 2022-10-10 | Unlimited Token Works                                                            | hentailord85ez | -23/+46   |
| 2022-10-09 | Removed unnecessary tmp variable                                                 | Fampai         | -4/+3     |
| 2022-10-09 | Updated code for legibility                                                      | Fampai         | -2/+5     |
| 2022-10-09 | Optimized code for Ignoring last CLIP layers                                     | Fampai         | -8/+4     |
| 2022-10-08 | Added ability to ignore last n layers in FrozenCLIPEmbedder                      | Fampai         | -2/+9     |
| 2022-10-08 | add --force-enable-xformers option and also add messages to console regarding... | AUTOMATIC      | -1/+5     |
| 2022-10-08 | check for ampere without destroying the optimizations. again.                    | C43H66N12O12S2 | -4/+3     |
| 2022-10-08 | check for ampere                                                                 | C43H66N12O12S2 | -3/+4     |
| 2022-10-08 | why did you do this                                                              | AUTOMATIC      | -1/+1     |
| 2022-10-08 | restore old opt_split_attention/disable_opt_split_attention logic                | AUTOMATIC      | -1/+1     |
| 2022-10-08 | simplify xfrmers options: --xformers to enable and that's it                     | AUTOMATIC      | -1/+1     |
| 2022-10-08 | Merge pull request #1851 from C43H66N12O12S2/flash                               | AUTOMATIC1111  | -4/+6     |
| 2022-10-08 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -1/+1     |
| 2022-10-08 | default to split attention if cuda is available and xformers is not              | C43H66N12O12S2 | -2/+2     |
| 2022-10-08 | fix bug where when using prompt composition, hijack_comments generated before... | MrCheeze       | -1/+4     |
| 2022-10-08 | fix bugs related to variable prompt lengths                                      | AUTOMATIC      | -5/+9     |
| 2022-10-08 | do not let user choose his own prompt token count limit                          | AUTOMATIC      | -13/+12   |
| 2022-10-08 | let user choose his own prompt token count limit                                 | AUTOMATIC      | -6/+7     |
| 2022-10-08 | use new attnblock for xformers path                                              | C43H66N12O12S2 | -1/+1     |
| 2022-10-08 | delete broken and unnecessary aliases                                            | C43H66N12O12S2 | -6/+4     |
| 2022-10-07 | hypernetwork training mk1                                                        | AUTOMATIC      | -1/+3     |
| 2022-10-07 | make it possible to use hypernetworks without opt split attention                | AUTOMATIC      | -2/+4     |
| 2022-10-07 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -1/+1     |
| 2022-10-07 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -2/+2     |
| 2022-10-07 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -2/+1     |
| 2022-10-07 | Update sd_hijack.py                                                              | C43H66N12O12S2 | -4/+9     |
| 2022-10-02 | Merge branch 'master' into stable                                                | Jairo Correa   | -266/+52  |
| 2022-10-02 | fix for incorrect embedding token length calculation (will break seeds that u... | AUTOMATIC      | -4/+4     |
| 2022-10-02 | initial support for training textual inversion                                   | AUTOMATIC      | -273/+51  |
| 2022-09-30 | Merge branch 'master' into fix-vram                                              | Jairo Correa   | -5/+113   |
| 2022-09-30 | add embeddings dir                                                               | AUTOMATIC      | -1/+6     |
| 2022-09-29 | fix for incorrect model weight loading for #814                                  | AUTOMATIC      | -0/+9     |
| 2022-09-29 | new implementation for attention/emphasis                                        | AUTOMATIC      | -4/+98    |
| 2022-09-29 | Move silu to sd_hijack                                                           | Jairo Correa   | -9/+3     |
| 2022-09-27 | switched the token counter to use hidden buttons instead of api call             | Liam           | -2/+1     |
| 2022-09-27 | added token counter next to txt2img and img2img prompts                          | Liam           | -8/+22    |
| 2022-09-25 | potential fix for embeddings no loading on AMD cards                             | AUTOMATIC      | -2/+2     |