Commit message | Author | Age | Files | Lines
---|---|---|---|---
... | | | |
use legacy attnblock | C43H66N12O12S2 | 2022-10-18 | 1 | -1/+1
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-15 | 1 | -1/+1
fix iterator bug for #2295 | AUTOMATIC | 2022-10-12 | 1 | -4/+4
Account when lines are mismatched | hentailord85ez | 2022-10-12 | 1 | -1/+11
Add check for psutil | brkirch | 2022-10-11 | 1 | -2/+8
Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS); add command line option for it; make it default when CUDA is unavailable | brkirch | 2022-10-11 | 1 | -1/+4
rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have | AUTOMATIC | 2022-10-11 | 1 | -1/+1
Merge branch 'master' into hypernetwork-training | AUTOMATIC | 2022-10-11 | 1 | -30/+93
Comma backtrack padding (#2192) | hentailord85ez | 2022-10-11 | 1 | -1/+18
allow pascal onwards | C43H66N12O12S2 | 2022-10-10 | 1 | -1/+1
Add back in output hidden states parameter | hentailord85ez | 2022-10-10 | 1 | -1/+1
Pad beginning of textual inversion embedding | hentailord85ez | 2022-10-10 | 1 | -0/+5
Unlimited Token Works (unlimited tokens actually work now; works with textual inversion too; replaces the previous not-so-much-working implementation) | hentailord85ez | 2022-10-10 | 1 | -23/+46
Removed unnecessary tmp variable | Fampai | 2022-10-09 | 1 | -4/+3
Updated code for legibility | Fampai | 2022-10-09 | 1 | -2/+5
Optimized code for Ignoring last CLIP layers | Fampai | 2022-10-09 | 1 | -8/+4
Added ability to ignore last n layers in FrozenCLIPEmbedder | Fampai | 2022-10-08 | 1 | -2/+9
add --force-enable-xformers option and also add messages to console regarding cross attention optimizations | AUTOMATIC | 2022-10-08 | 1 | -1/+5
check for ampere without destroying the optimizations. again. | C43H66N12O12S2 | 2022-10-08 | 1 | -4/+3
check for ampere | C43H66N12O12S2 | 2022-10-08 | 1 | -3/+4
why did you do this | AUTOMATIC | 2022-10-08 | 1 | -1/+1
restore old opt_split_attention/disable_opt_split_attention logic | AUTOMATIC | 2022-10-08 | 1 | -1/+1
simplify xfrmers options: --xformers to enable and that's it | AUTOMATIC | 2022-10-08 | 1 | -1/+1
Merge pull request #1851 from C43H66N12O12S2/flash (xformers attention) | AUTOMATIC1111 | 2022-10-08 | 1 | -4/+6
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-08 | 1 | -1/+1
default to split attention if cuda is available and xformers is not | C43H66N12O12S2 | 2022-10-08 | 1 | -2/+2
use new attnblock for xformers path | C43H66N12O12S2 | 2022-10-08 | 1 | -1/+1
delete broken and unnecessary aliases | C43H66N12O12S2 | 2022-10-08 | 1 | -6/+4
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-07 | 1 | -1/+1
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-07 | 1 | -2/+2
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-07 | 1 | -2/+1
Update sd_hijack.py | C43H66N12O12S2 | 2022-10-07 | 1 | -4/+9
fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped | MrCheeze | 2022-10-08 | 1 | -1/+4
fix bugs related to variable prompt lengths | AUTOMATIC | 2022-10-08 | 1 | -5/+9
do not let user choose his own prompt token count limit | AUTOMATIC | 2022-10-08 | 1 | -13/+12
let user choose his own prompt token count limit | AUTOMATIC | 2022-10-08 | 1 | -6/+7
hypernetwork training mk1 | AUTOMATIC | 2022-10-07 | 1 | -1/+3
make it possible to use hypernetworks without opt split attention | AUTOMATIC | 2022-10-07 | 1 | -2/+4
Merge branch 'master' into stable | Jairo Correa | 2022-10-02 | 1 | -266/+52
fix for incorrect embedding token length calculation (will break seeds that use embeddings, you're welcome!); add option to input initialization text for embeddings | AUTOMATIC | 2022-10-02 | 1 | -4/+4
initial support for training textual inversion | AUTOMATIC | 2022-10-02 | 1 | -273/+51
Merge branch 'master' into fix-vram | Jairo Correa | 2022-09-30 | 1 | -5/+113
add embeddings dir | AUTOMATIC | 2022-09-30 | 1 | -1/+6
fix for incorrect model weight loading for #814 | AUTOMATIC | 2022-09-29 | 1 | -0/+9
new implementation for attention/emphasis | AUTOMATIC | 2022-09-29 | 1 | -4/+98
Move silu to sd_hijack | Jairo Correa | 2022-09-29 | 1 | -9/+3
switched the token counter to use hidden buttons instead of api call | Liam | 2022-09-27 | 1 | -2/+1
added token counter next to txt2img and img2img prompts | Liam | 2022-09-27 | 1 | -8/+22
potential fix for embeddings no loading on AMD cards | AUTOMATIC | 2022-09-25 | 1 | -2/+2
Fix token max length | guaneec | 2022-09-25 | 1 | -1/+1