path: root/modules/sd_hijack.py
Age | Commit message | Author | Lines
2022-09-25 | potential fix for embeddings not loading on AMD cards | AUTOMATIC | -2/+2
2022-09-25 | Fix token max length | guaneec | -1/+1
2022-09-21 | --opt-split-attention now on by default for torch.cuda, off for others (CPU and MPS; because the option does not work there according to reports) | AUTOMATIC | -1/+1
2022-09-21 | fix for too large embeddings causing an error | AUTOMATIC | -1/+1
2022-09-20 | fix an off-by-one error with embedding at the start of the sentence | AUTOMATIC | -1/+1
2022-09-20 | add the part that was missing for word textual inversion checksums | AUTOMATIC | -1/+1
2022-09-18 | Making opt split attention the default. Are you upset about this? Sorry. | AUTOMATIC | -3/+3
2022-09-18 | ..... | C43H66N12O12S2 | -2/+2
2022-09-18 | Move scale multiplication to the front | C43H66N12O12S2 | -2/+2
2022-09-15 | fix typo | C43H66N12O12S2 | -1/+1
2022-09-15 | pass dtype to torch.zeros as well | C43H66N12O12S2 | -1/+1
2022-09-13 | Complete cross attention update | C43H66N12O12S2 | -1/+73
2022-09-12 | Update cross attention to the newest version | C43H66N12O12S2 | -3/+4
2022-09-11 | added --opt-split-attention-v1 | AUTOMATIC | -0/+33
2022-09-10 | Update to cross attention from https://github.com/Doggettx/stable-diffusion #219 | AUTOMATIC | -10/+37
2022-09-08 | support for sd-concepts as alternatives for textual inversion #151 | AUTOMATIC | -5/+15
2022-09-07 | directly convert list to tensor | xeonvs | -4/+1
2022-09-07 | Added support for launching on Apple Silicon | xeonvs | -1/+4
2022-09-05 | re-integrated tiling option as a UI element | AUTOMATIC | -0/+20
2022-09-05 | add an option to enable tiling image generation | AUTOMATIC | -0/+5
2022-09-05 | add split attention layer optimization from https://github.com/basujindal/stable-diffusion/pull/117 | AUTOMATIC | -1/+43
2022-09-03 | split codebase into multiple files; to anyone this affects negatively: sorry | AUTOMATIC | -0/+208