path: root/modules/sd_hijack.py
Age | Commit message | Author | Lines
2022-10-08 | do not let user choose his own prompt token count limit | AUTOMATIC | -13/+12
2022-10-08 | let user choose his own prompt token count limit | AUTOMATIC | -6/+7
2022-10-08 | use new attnblock for xformers path | C43H66N12O12S2 | -1/+1
2022-10-08 | delete broken and unnecessary aliases | C43H66N12O12S2 | -6/+4
2022-10-07 | hypernetwork training mk1 | AUTOMATIC | -1/+3
2022-10-07 | make it possible to use hypernetworks without opt split attention | AUTOMATIC | -2/+4
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2 | -1/+1
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2 | -2/+2
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2 | -2/+1
2022-10-07 | Update sd_hijack.py | C43H66N12O12S2 | -4/+9
2022-10-02 | Merge branch 'master' into stable | Jairo Correa | -266/+52
2022-10-02 | fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings | AUTOMATIC | -4/+4
2022-10-02 | initial support for training textual inversion | AUTOMATIC | -273/+51
2022-09-30 | Merge branch 'master' into fix-vram | Jairo Correa | -5/+113
2022-09-30 | add embeddings dir | AUTOMATIC | -1/+6
2022-09-29 | fix for incorrect model weight loading for #814 | AUTOMATIC | -0/+9
2022-09-29 | new implementation for attention/emphasis | AUTOMATIC | -4/+98
2022-09-29 | Move silu to sd_hijack | Jairo Correa | -9/+3
2022-09-27 | switched the token counter to use hidden buttons instead of api call | Liam | -2/+1
2022-09-27 | added token counter next to txt2img and img2img prompts | Liam | -8/+22
2022-09-25 | potential fix for embeddings not loading on AMD cards | AUTOMATIC | -2/+2
2022-09-25 | Fix token max length | guaneec | -1/+1
2022-09-21 | --opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS; because the option does not work there according to reports) | AUTOMATIC | -1/+1
2022-09-21 | fix for too large embeddings causing an error | AUTOMATIC | -1/+1
2022-09-20 | fix an off-by-one error with embedding at the start of the sentence | AUTOMATIC | -1/+1
2022-09-20 | add the part that was missing for word textual inversion checksums | AUTOMATIC | -1/+1
2022-09-18 | Making opt split attention the default. Are you upset about this? Sorry. | AUTOMATIC | -3/+3
2022-09-18 | ..... | C43H66N12O12S2 | -2/+2
2022-09-18 | Move scale multiplication to the front | C43H66N12O12S2 | -2/+2
2022-09-15 | fix typo | C43H66N12O12S2 | -1/+1
2022-09-15 | pass dtype to torch.zeros as well | C43H66N12O12S2 | -1/+1
2022-09-13 | Complete cross attention update | C43H66N12O12S2 | -1/+73
2022-09-12 | Update cross attention to the newest version | C43H66N12O12S2 | -3/+4
2022-09-11 | added --opt-split-attention-v1 | AUTOMATIC | -0/+33
2022-09-10 | Update to cross attention from https://github.com/Doggettx/stable-diffusion #219 | AUTOMATIC | -10/+37
2022-09-08 | support for sd-concepts as alternatives for textual inversion #151 | AUTOMATIC | -5/+15
2022-09-07 | directly convert list to tensor | xeonvs | -4/+1
2022-09-07 | Added support for launching on Apple Silicon | xeonvs | -1/+4
2022-09-05 | re-integrated tiling option as a UI element | AUTOMATIC | -0/+20
2022-09-05 | add an option to enable tiling image generation | AUTOMATIC | -0/+5
2022-09-05 | add split attention layer optimization from https://github.com/basujindal/stable-diffusion/pull/117 | AUTOMATIC | -1/+43
2022-09-03 | split codebase into multiple files; to anyone this affects negatively: sorry | AUTOMATIC | -0/+208