Age        | Commit message | Author | Lines
2023-11-04 | Update requirements_versions.txt | w-e-w | -0/+1
2023-11-03 | Fix parenthesis auto selection | missionfloyd | -1/+1
    Fixes #13813
2023-11-02 | added accordion settings options | Emily Zeng | -250/+254
2023-11-02 | no idea what I'm doing, trying to support both types of OFT; kblueleaf diag_oft has MultiheadAttn which kohya's doesn't; attempt to create a new module based off network_lora.py; errors about tensor dim mismatch | v0xie | -47/+145
2023-11-02 | detect diag_oft type | v0xie | -0/+7
2023-11-01 | test implementation based on kohaku diag-oft implementation | v0xie | -21/+38
2023-10-29 | Fix #13796 | Meerkov | -1/+1
    Fix a comment error that made the scheduling harder to understand.
2023-10-29 | Remove blank line whitespace | Nick Harrison | -1/+1
2023-10-29 | Add assertions for checking additional settings freezing parameters | Nick Harrison | -4/+19
2023-10-29 | Add new arguments to known command prompts | Nick Harrison | -5/+6
2023-10-28 | Add MPS manual cast | KohakuBlueleaf | -1/+5
2023-10-28 | ManualCast for 10/16 series GPUs | Kohaku-Blueleaf | -16/+64
2023-10-25 | call state.jobnext() before postproces*() | Won-Kyu Park | -2/+2
2023-10-25 | change torch version | Kohaku-Blueleaf | -2/+2
2023-10-25 | ignore mps for fp8 | Kohaku-Blueleaf | -1/+3
2023-10-25 | Fix alphas cumprod | Kohaku-Blueleaf | -2/+3
2023-10-25 | Fix alphas_cumprod dtype | Kohaku-Blueleaf | -0/+1
2023-10-25 | fp8 for TE | Kohaku-Blueleaf | -0/+7
2023-10-24 | Fix lint | Kohaku-Blueleaf | -1/+1
2023-10-24 | Add CPU fp8 support | Kohaku-Blueleaf | -6/+22
    Since norm layers need fp32, only the linear operation layers (conv2d/linear) are converted. The TE also uses some PyTorch functions that do not support bf16 amp on CPU, so a condition was added to indicate whether the autocast is for the unet.
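The fp8 commits above describe casting only the linear-operation layers (linear/conv2d) to low precision while keeping norm layers in fp32. A minimal sketch of that selection logic, using hypothetical stand-in module classes rather than the webui's or PyTorch's actual code:

```python
# Stand-in module classes (hypothetical; the real code walks torch.nn modules).
class Module:
    def __init__(self):
        self.dtype = "fp32"  # everything starts in full precision

class Linear(Module): pass
class Conv2d(Module): pass
class LayerNorm(Module): pass

def manual_cast(modules, low_dtype="fp8"):
    """Cast only the linear-operation layers to low precision.

    Norm layers are skipped because they need fp32 for numerical stability,
    mirroring the rationale in the commit body above.
    """
    for m in modules:
        if isinstance(m, (Linear, Conv2d)):
            m.dtype = low_dtype
        # anything else (e.g. LayerNorm) is left in fp32

net = [Linear(), LayerNorm(), Conv2d()]
manual_cast(net)
print([m.dtype for m in net])  # → ['fp8', 'fp32', 'fp8']
```

The same predicate could carry an extra flag for the unet-vs-TE distinction the commit mentions; that condition is omitted here for brevity.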
2023-10-23 | linting issue | David Benson | -1/+1
2023-10-23 | Update prompts_from_file script to allow concatenating entries with the general prompt. | David Benson | -2/+15
2023-10-22 | style: conform style | v0xie | -1/+1
2023-10-22 | fix: multiplier applied twice in finalize_updown | v0xie | -1/+22
2023-10-22 | refactor: remove unused OFT functions | v0xie | -72/+10
2023-10-21 | fix: use merge_weight to cache value | v0xie | -17/+40
2023-10-21 | style: cleanup oft | v0xie | -75/+7
2023-10-21 | fix: support multiplier, no forward pass hook | v0xie | -10/+33
2023-10-21 | fix: return orig weights during updown, merge weights before forward | v0xie | -21/+69
2023-10-21 | refactor: use forward hook instead of custom forward | v0xie | -9/+24
2023-10-22 | fix "blank line contains whitespace" lint | avantcontra | -1/+1
2023-10-22 | fix bug when using --gfpgan-models-path | avantcontra | -5/+20
2023-10-21 | fix the situation with emphasis editing (aaaa:1.1) bbbb (cccc:1.1) | AUTOMATIC1111 | -0/+6
2023-10-21 | rework some of the changes for emphasis editing keys, force conversion of old-style emphasis | AUTOMATIC1111 | -57/+47
2023-10-19 | style: fix ambiguous variable name | v0xie | -2/+2
2023-10-19 | style: formatting | v0xie | -37/+2
2023-10-19 | refactor: fix constraint, re-use get_weight | v0xie | -24/+16
2023-10-19 | Add sdxl only arg | Kohaku-Blueleaf | -0/+4
2023-10-19 | Add fp8 for sd unet | Kohaku-Blueleaf | -32/+36
2023-10-18 | faster by calculating R in updown and using cached R in forward | v0xie | -7/+8
2023-10-18 | faster by using cached R in forward | v0xie | -3/+14
2023-10-18 | inference working but SLOW | v0xie | -40/+75
2023-10-17 | wip incorrect OFT implementation | v0xie | -0/+87
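The OFT commits above ("cached R in forward", "merge weights before forward") revolve around building the orthogonal matrix R once and reusing it, rather than re-deriving it on every forward pass. A minimal sketch of the underlying idea, using the Cayley transform R = (I + Q)(I - Q)^-1 on a hypothetical 2x2 case with pure-Python matrix helpers (the real implementation operates on per-block torch tensors):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Invert a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
# Q is the trainable skew-symmetric parameter (Q^T = -Q); 0.3 is arbitrary.
Q = [[0.0, 0.3], [-0.3, 0.0]]

# Cayley transform: R = (I + Q)(I - Q)^-1 is orthogonal by construction.
# Computing R once and caching it is what makes the "cached R in forward"
# commits faster than recomputing it on every call.
I_plus_Q  = [[I[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
I_minus_Q = [[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
R = matmul(I_plus_Q, inv2(I_minus_Q))

# A forward pass would then apply W' = R @ W using the cached R.
# Sanity check that R is orthogonal: R^T R should be the identity.
RtR = matmul([[R[j][i] for j in range(2)] for i in range(2)], R)
```

This is only an illustration of why caching pays off: Q is what training updates, so R only needs rebuilding when Q changes, not per inference step.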
2023-10-16 | feat: refactor | Anthony Fu | -7/+13
2023-10-16 | Interrupt after current generation | Anthony Fu | -8/+17
2023-10-15 | Merge pull request #13644 from XpucT/dev | AUTOMATIC1111 | -10/+14
    Start / Restart generation by Ctrl (Alt) + Enter
2023-10-15 | Add files via upload | Khachatur Avanesian | -167/+167
    LF
2023-10-15 | Update script.js | Khachatur Avanesian | -67/+61
2023-10-15 | Update script.js | Khachatur Avanesian | -62/+68
    LF instead of CRLF
2023-10-15 | Update script.js | Khachatur Avanesian | -10/+12
    Exclude lambda