path: root/modules/sd_hijack.py
Commit message  (Author, Date, Files, Lines)
...
* | Update sd_hijack.py  (C43H66N12O12S2, 2022-10-15, 1 file, -1/+1)
|/
* fix iterator bug for #2295  (AUTOMATIC, 2022-10-12, 1 file, -4/+4)
|
* Account when lines are mismatched  (hentailord85ez, 2022-10-12, 1 file, -1/+11)
|
* Add check for psutil  (brkirch, 2022-10-11, 1 file, -2/+8)
|
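A note on the psutil entry above: the InvokeAI-style optimization (next entry) reads available system memory to size its attention slices, so the import needs a fallback when the package is missing. A minimal sketch of such a guard; the fallback value is an illustrative assumption, not the repository's actual code.

    # Sketch of an optional psutil probe with a graceful fallback.
    try:
        import psutil

        mem_available = psutil.virtual_memory().available
    except ImportError:
        mem_available = 4 * 1024 ** 3  # assumed conservative default when psutil is absent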
* Add cross-attention optimization from InvokeAI  (brkirch, 2022-10-11, 1 file, -1/+4)
|     Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS); add command line option for it; make it default when CUDA is unavailable.
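The entry above makes the InvokeAI cross-attention optimization selectable from the command line and the default when CUDA is unavailable. A hedged sketch of that selection logic; the option and function names are illustrative, not the actual sd_hijack identifiers.

    import torch

    # Illustrative selection of a cross-attention implementation; names are assumptions.
    def choose_cross_attention(opt_invokeai: bool, opt_split: bool) -> str:
        if opt_invokeai or (not torch.cuda.is_available() and not opt_split):
            return "invokeai_sliced"  # ~30% faster on MPS per the commit message
        if opt_split or torch.cuda.is_available():
            return "split_v1"
        return "vanilla"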
* rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have  (AUTOMATIC, 2022-10-11, 1 file, -1/+1)
|
* Merge branch 'master' into hypernetwork-training  (AUTOMATIC, 2022-10-11, 1 file, -30/+93)
|\
| * Comma backtrack padding (#2192)  (hentailord85ez, 2022-10-11, 1 file, -1/+18)
| |     Comma backtrack padding
| * allow pascal onwards  (C43H66N12O12S2, 2022-10-10, 1 file, -1/+1)
| |
| * Add back in output hidden states parameter  (hentailord85ez, 2022-10-10, 1 file, -1/+1)
| |
| * Pad beginning of textual inversion embedding  (hentailord85ez, 2022-10-10, 1 file, -0/+5)
| |
| * Unlimited Token Works  (hentailord85ez, 2022-10-10, 1 file, -23/+46)
| |     Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
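The "Unlimited Token Works" entry above lifts the 75-token prompt limit by encoding the prompt in 75-token chunks and concatenating the per-chunk embeddings. A rough sketch of that idea using the Hugging Face transformers CLIP classes; the chunking details are simplified assumptions, not the exact webui implementation.

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    def encode_long_prompt(prompt: str) -> torch.Tensor:
        ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
        bos, eos = tokenizer.bos_token_id, tokenizer.eos_token_id
        chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)] or [[]]
        outputs = []
        for chunk in chunks:
            # wrap each chunk with BOS and pad with EOS so it is exactly 77 tokens long
            chunk = [bos] + chunk + [eos] * (76 - len(chunk))
            tokens = torch.tensor([chunk])
            outputs.append(text_model(input_ids=tokens).last_hidden_state)
        # concatenate per-chunk embeddings along the sequence dimension
        return torch.cat(outputs, dim=1)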
| * Removed unnecessary tmp variable  (Fampai, 2022-10-09, 1 file, -4/+3)
| |
| * Updated code for legibility  (Fampai, 2022-10-09, 1 file, -2/+5)
| |
| * Optimized code for Ignoring last CLIP layers  (Fampai, 2022-10-09, 1 file, -8/+4)
| |
| * Added ability to ignore last n layers in FrozenCLIPEmbedder  (Fampai, 2022-10-08, 1 file, -2/+9)
| |
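The two entries above add and then streamline the ability to stop at an earlier CLIP layer ("CLIP skip"): take the text encoder's hidden state n layers before the end and re-apply the final layer norm. A hedged sketch of the general technique with transformers; variable names are illustrative.

    import torch
    from transformers import CLIPTokenizer, CLIPTextModel

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    def encode_with_clip_skip(prompt: str, stop_at_last_layers: int = 1) -> torch.Tensor:
        tokens = tokenizer(prompt, padding="max_length", max_length=77,
                           truncation=True, return_tensors="pt")
        outputs = text_model(input_ids=tokens["input_ids"], output_hidden_states=True)
        if stop_at_last_layers > 1:
            # use a hidden state n layers before the end, then re-apply the final layer norm
            z = outputs.hidden_states[-stop_at_last_layers]
            z = text_model.text_model.final_layer_norm(z)
        else:
            z = outputs.last_hidden_state
        return z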
| * add --force-enable-xformers option and also add messages to console regarding cross attention optimizations  (AUTOMATIC, 2022-10-08, 1 file, -1/+5)
| |
| * check for ampere without destroying the optimizations. again.  (C43H66N12O12S2, 2022-10-08, 1 file, -4/+3)
| |
| * check for ampere  (C43H66N12O12S2, 2022-10-08, 1 file, -3/+4)
| |
| * why did you do this  (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
| |
| * restore old opt_split_attention/disable_opt_split_attention logic  (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
| |
| * simplify xfrmers options: --xformers to enable and that's it  (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
| |
| * Merge pull request #1851 from C43H66N12O12S2/flash  (AUTOMATIC1111, 2022-10-08, 1 file, -4/+6)
| |\      xformers attention
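PR #1851 above routes cross-attention through xformers' memory-efficient kernels. A minimal sketch of such a call; the tensor layout and the wrapper function name are assumptions, not the exact hijack code.

    import torch
    import xformers.ops

    def xformers_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # q, k, v assumed shaped (batch * heads, seq_len, head_dim)
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)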
| | * Update sd_hijack.py  (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| | |
| | * default to split attention if cuda is available and xformers is not  (C43H66N12O12S2, 2022-10-08, 1 file, -2/+2)
| | |
| | * use new attnblock for xformers path  (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| | |
| | * delete broken and unnecessary aliases  (C43H66N12O12S2, 2022-10-08, 1 file, -6/+4)
| | |
| | * Update sd_hijack.py  (C43H66N12O12S2, 2022-10-07, 1 file, -1/+1)
| | |
| | * Update sd_hijack.py  (C43H66N12O12S2, 2022-10-07, 1 file, -2/+2)
| | |
| | * Update sd_hijack.py  (C43H66N12O12S2, 2022-10-07, 1 file, -2/+1)
| | |
| | * Update sd_hijack.py  (C43H66N12O12S2, 2022-10-07, 1 file, -4/+9)
| | |
| * | fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped  (MrCheeze, 2022-10-08, 1 file, -1/+4)
| | |
| * | fix bugs related to variable prompt lengths  (AUTOMATIC, 2022-10-08, 1 file, -5/+9)
| | |
| * | do not let user choose his own prompt token count limit  (AUTOMATIC, 2022-10-08, 1 file, -13/+12)
| | |
| * | let user choose his own prompt token count limit  (AUTOMATIC, 2022-10-08, 1 file, -6/+7)
| | |
* | | hypernetwork training mk1  (AUTOMATIC, 2022-10-07, 1 file, -1/+3)
|/ /
* / make it possible to use hypernetworks without opt split attention  (AUTOMATIC, 2022-10-07, 1 file, -2/+4)
|/
* Merge branch 'master' into stable  (Jairo Correa, 2022-10-02, 1 file, -266/+52)
|\
| * fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!) add option to input initialization text for embeddings  (AUTOMATIC, 2022-10-02, 1 file, -4/+4)
| |
| * initial support for training textual inversion  (AUTOMATIC, 2022-10-02, 1 file, -273/+51)
| |
* | Merge branch 'master' into fix-vram  (Jairo Correa, 2022-09-30, 1 file, -5/+113)
|\|
| * add embeddings dir  (AUTOMATIC, 2022-09-30, 1 file, -1/+6)
| |
| * fix for incorrect model weight loading for #814  (AUTOMATIC, 2022-09-29, 1 file, -0/+9)
| |
| * new implementation for attention/emphasis  (AUTOMATIC, 2022-09-29, 1 file, -4/+98)
| |
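The "new implementation for attention/emphasis" entry above reworks prompt emphasis, where syntax like (word:1.2) scales the influence of the emphasized tokens. A rough sketch of the general idea: multiply the affected token embeddings by per-token weights, then rescale so the overall mean matches the unweighted result. The parsing step is omitted and the rescaling shown is a simplified assumption.

    import torch

    def apply_emphasis(z: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # z: (batch, seq, dim) token embeddings from the text encoder
        # weights: (batch, seq) per-token emphasis multipliers parsed from the prompt
        original_mean = z.mean()
        z = z * weights.unsqueeze(-1)
        # rescale so overall magnitude stays comparable to the unweighted embedding
        return z * (original_mean / z.mean())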
* | Move silu to sd_hijack  (Jairo Correa, 2022-09-29, 1 file, -9/+3)
|/
* switched the token counter to use hidden buttons instead of api call  (Liam, 2022-09-27, 1 file, -2/+1)
|
* added token counter next to txt2img and img2img prompts  (Liam, 2022-09-27, 1 file, -8/+22)
|
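The token counter entries above show how many CLIP tokens a prompt consumes as the user types. Counting amounts to tokenizing without the special tokens; a small sketch with the transformers CLIP tokenizer, model name assumed for illustration.

    from transformers import CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

    def count_prompt_tokens(prompt: str) -> int:
        # number of tokens the prompt occupies, excluding the BOS/EOS specials
        return len(tokenizer(prompt, add_special_tokens=False)["input_ids"])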
* potential fix for embeddings no loading on AMD cards  (AUTOMATIC, 2022-09-25, 1 file, -2/+2)
|
* Fix token max length  (guaneec, 2022-09-25, 1 file, -1/+1)
|
* --opt-split-attention now on by default for torch.cuda, off for others (cpu and MPS; because the option does not work there according to reports)  (AUTOMATIC, 2022-09-21, 1 file, -1/+1)