path: root/modules/sd_hijack.py
Age         Commit message  (Author, lines removed/added)
2022-10-16  Merge branch 'master' into test_resolve_conflicts  (MalumaDev, -2/+2)
2022-10-15  Update sd_hijack.py  (C43H66N12O12S2, -1/+1)
2022-10-15  fix token length, add embedding generator, add new features to edit the embedding before generation using text  (MalumaDev, -38/+73)
2022-10-14  init  (MalumaDev, -2/+78)
2022-10-12  fix iterator bug for #2295  (AUTOMATIC, -4/+4)
2022-10-12  Account when lines are mismatched  (hentailord85ez, -1/+11)
2022-10-11  Add check for psutil  (brkirch, -2/+8)
2022-10-11  Add cross-attention optimization from InvokeAI  (brkirch, -1/+4)
            * Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
            * Add command line option for it
            * Make it default when CUDA is unavailable
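The InvokeAI optimization referenced above is a sliced (chunked) cross-attention: instead of materializing the full tokens-by-tokens similarity matrix, queries are processed in slices so peak memory stays bounded. A minimal numpy sketch of that idea follows; it is illustrative only, not the repo's actual implementation (which operates on torch tensors and picks slice sizes from available memory):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_full(q, k, v):
    # Standard scaled dot-product attention: builds the entire
    # (tokens x tokens) similarity matrix at once.
    scale = q.shape[-1] ** -0.5
    return softmax(q @ k.T * scale) @ v

def attention_sliced(q, k, v, slice_size=16):
    # Sliced attention: process query rows in chunks, so only a
    # (slice_size x tokens) similarity matrix is live at any time.
    # The result is numerically identical to the full computation.
    scale = q.shape[-1] ** -0.5
    out = np.empty_like(q)
    for start in range(0, q.shape[0], slice_size):
        end = start + slice_size
        sim = q[start:end] @ k.T * scale
        out[start:end] = softmax(sim) @ v
    return out
```

Because softmax is applied per query row, slicing over queries changes memory use but not the output, which is why it can be swapped in transparently.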
2022-10-11  rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have  (AUTOMATIC, -1/+1)
2022-10-11  Merge branch 'master' into hypernetwork-training  (AUTOMATIC, -30/+93)
2022-10-11  Comma backtrack padding (#2192)  (hentailord85ez, -1/+18)
2022-10-10  allow pascal onwards  (C43H66N12O12S2, -1/+1)
2022-10-10  Add back in output hidden states parameter  (hentailord85ez, -1/+1)
2022-10-10  Pad beginning of textual inversion embedding  (hentailord85ez, -0/+5)
2022-10-10  Unlimited Token Works  (hentailord85ez, -23/+46)
            Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
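The "unlimited tokens" approach works around CLIP's fixed 77-token window (75 content tokens plus begin/end markers) by splitting a long prompt into fixed-size chunks, encoding each chunk separately, and concatenating the resulting embeddings. A simplified sketch of the chunking step, with names and structure of my own choosing rather than the repo's:

```python
# CLIP special token ids; padding with the end-of-text token is one
# common convention (an assumption here, not taken from the source).
BOS, EOS, PAD = 49406, 49407, 49407
CHUNK = 75  # content tokens per CLIP window

def chunk_tokens(token_ids):
    """Split a tokenized prompt into padded 77-token windows."""
    chunks = []
    for start in range(0, max(len(token_ids), 1), CHUNK):
        body = token_ids[start:start + CHUNK]
        body = body + [PAD] * (CHUNK - len(body))   # pad the short tail
        chunks.append([BOS] + body + [EOS])          # 77 tokens each
    return chunks
```

Each chunk is then a valid input to the text encoder on its own; the later "Comma backtrack padding" commit refines where chunk boundaries fall so a comma is not split from the phrase it terminates.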
2022-10-09  Removed unnecessary tmp variable  (Fampai, -4/+3)
2022-10-09  Updated code for legibility  (Fampai, -2/+5)
2022-10-09  Optimized code for ignoring last CLIP layers  (Fampai, -8/+4)
2022-10-08  Added ability to ignore last n layers in FrozenCLIPEmbedder  (Fampai, -2/+9)
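Ignoring the last n CLIP layers ("CLIP skip") means conditioning on the hidden states of an earlier transformer layer instead of the final one, which some models (notably anime-style checkpoints) were trained against. A hedged sketch of just the selection logic, assuming a list of per-layer hidden states is available (the webui additionally reapplies the final layer norm, omitted here):

```python
def select_clip_hidden_state(hidden_states, stop_at_last_layers=1):
    """Pick the hidden state 'stop_at_last_layers' from the end.

    hidden_states: outputs of each transformer layer, in order.
    stop_at_last_layers=1 keeps the usual final layer's output;
    2 backs off one layer, and so on.
    """
    if stop_at_last_layers <= 1:
        return hidden_states[-1]
    return hidden_states[-stop_at_last_layers]
```

This is why the "Add back in output hidden states parameter" commit matters: the text encoder must be asked to return all intermediate layer outputs for there to be an earlier layer to select.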
2022-10-08  add --force-enable-xformers option and also add messages to console regarding cross attention optimizations  (AUTOMATIC, -1/+5)
2022-10-08  check for ampere without destroying the optimizations. again.  (C43H66N12O12S2, -4/+3)
2022-10-08  check for ampere  (C43H66N12O12S2, -3/+4)
2022-10-08  why did you do this  (AUTOMATIC, -1/+1)
2022-10-08  restore old opt_split_attention/disable_opt_split_attention logic  (AUTOMATIC, -1/+1)
2022-10-08  simplify xformers options: --xformers to enable and that's it  (AUTOMATIC, -1/+1)
2022-10-08  Merge pull request #1851 from C43H66N12O12S2/flash  (AUTOMATIC1111, -4/+6)
            xformers attention
2022-10-08  Update sd_hijack.py  (C43H66N12O12S2, -1/+1)
2022-10-08  default to split attention if cuda is available and xformers is not  (C43H66N12O12S2, -2/+2)
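Taken together, the xformers commits above converge on a priority order for picking the cross-attention implementation. The sketch below is a simplification assembled from the commit messages (flag names match the options they mention; the function itself and its dict-of-flags interface are illustrative, not the repo's actual code):

```python
def choose_attention(opts, cuda_available, xformers_available):
    """Pick a cross-attention implementation from command-line flags.

    opts: dict of boolean flags, e.g. {"xformers": True}.
    Priority: --force-enable-xformers always wins; --xformers applies
    only when the library is importable; split attention is the
    default whenever CUDA is available (or explicitly requested);
    otherwise fall back to the vanilla implementation.
    """
    if opts.get("force_enable_xformers"):
        return "xformers"
    if opts.get("xformers") and xformers_available:
        return "xformers"
    if opts.get("opt_split_attention") or cuda_available:
        return "split"
    return "vanilla"
```

The "check for ampere" commits further gate the automatic xformers path on GPU architecture, a detail omitted from this sketch.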
2022-10-08  fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped  (MrCheeze, -1/+4)
2022-10-08  fix bugs related to variable prompt lengths  (AUTOMATIC, -5/+9)
2022-10-08  do not let user choose his own prompt token count limit  (AUTOMATIC, -13/+12)
2022-10-08  let user choose his own prompt token count limit  (AUTOMATIC, -6/+7)
2022-10-08  use new attnblock for xformers path  (C43H66N12O12S2, -1/+1)
2022-10-08  delete broken and unnecessary aliases  (C43H66N12O12S2, -6/+4)
2022-10-07  hypernetwork training mk1  (AUTOMATIC, -1/+3)
2022-10-07  make it possible to use hypernetworks without opt split attention  (AUTOMATIC, -2/+4)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2, -1/+1)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2, -2/+2)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2, -2/+1)
2022-10-07  Update sd_hijack.py  (C43H66N12O12S2, -4/+9)
2022-10-02  Merge branch 'master' into stable  (Jairo Correa, -266/+52)
2022-10-02  fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings  (AUTOMATIC, -4/+4)
2022-10-02  initial support for training textual inversion  (AUTOMATIC, -273/+51)
2022-09-30  Merge branch 'master' into fix-vram  (Jairo Correa, -5/+113)
2022-09-30  add embeddings dir  (AUTOMATIC, -1/+6)
2022-09-29  fix for incorrect model weight loading for #814  (AUTOMATIC, -0/+9)
2022-09-29  new implementation for attention/emphasis  (AUTOMATIC, -4/+98)
2022-09-29  Move silu to sd_hijack  (Jairo Correa, -9/+3)
2022-09-27  switched the token counter to use hidden buttons instead of api call  (Liam, -2/+1)
2022-09-27  added token counter next to txt2img and img2img prompts  (Liam, -8/+22)