Commit message (Author, Date, Files, Lines)
* add caption image with overlay (DepFA, 2022-10-09, 1 file, -0/+46)
|
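The caption-overlay change above adds a routine that draws caption text onto a generated image. A minimal sketch of the idea with Pillow follows; the function name, band height, and font handling are illustrative assumptions, not the repo's actual implementation in modules/images.py.

    # Sketch only: draw a caption on a semi-transparent band over an image.
    # Names and layout are assumptions; the real webui code differs in detail.
    from PIL import Image, ImageDraw, ImageFont

    def caption_image_overlay(src: Image.Image, caption: str) -> Image.Image:
        image = src.convert("RGBA")
        overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        draw.rectangle((0, 0, image.width, 32), fill=(0, 0, 0, 160))  # dark band
        draw.text((8, 8), caption, font=ImageFont.load_default(), fill=(255, 255, 255, 255))
        return Image.alpha_composite(image, overlay).convert("RGB")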
* change source of step count (DepFA, 2022-10-09, 1 file, -8/+2)
|
* source checkpoint hash from current checkpoint (DepFA, 2022-10-09, 1 file, -4/+2)
|
* correct case on embeddingFromB64 (DepFA, 2022-10-09, 1 file, -1/+1)
|
* change json tensor key name (DepFA, 2022-10-09, 1 file, -3/+3)
|
* add encoder and decoder classes (DepFA, 2022-10-09, 1 file, -0/+21)
|
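The encoder/decoder commit, together with the b64 JSON load/save commit that follows, revolves around round-tripping embedding tensors through JSON. A hedged sketch of that pattern, assuming a base64-bytes representation and the key name "TORCHTENSOR" purely for illustration:

    # Sketch of base64 JSON round-tripping for torch tensors.
    # Class names and the "TORCHTENSOR" key are illustrative assumptions.
    import base64
    import json

    import numpy as np
    import torch

    class EmbeddingEncoder(json.JSONEncoder):
        # Tensors become base64-encoded raw bytes plus shape/dtype metadata.
        def default(self, obj):
            if isinstance(obj, torch.Tensor):
                arr = obj.detach().cpu().numpy()
                return {"TORCHTENSOR": base64.b64encode(arr.tobytes()).decode("ascii"),
                        "shape": list(arr.shape),
                        "dtype": str(arr.dtype)}
            return super().default(obj)

    class EmbeddingDecoder(json.JSONDecoder):
        def __init__(self, *args, **kwargs):
            super().__init__(object_hook=self._hook, *args, **kwargs)

        def _hook(self, d):
            if "TORCHTENSOR" in d:
                raw = base64.b64decode(d["TORCHTENSOR"])
                arr = np.frombuffer(raw, dtype=np.dtype(d["dtype"])).reshape(d["shape"])
                return torch.from_numpy(arr.copy())  # copy: frombuffer is read-only
            return d

    # Usage: json.dumps(data, cls=EmbeddingEncoder) / json.loads(s, cls=EmbeddingDecoder)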
* add alternate checkpoint hash source (DepFA, 2022-10-09, 1 file, -2/+5)
|
* add embedding load and save from b64 json (DepFA, 2022-10-09, 1 file, -9/+21)
|
* Add pretty image captioning functions (DepFA, 2022-10-09, 1 file, -0/+31)
|
* add embed embedding to ui (DepFA, 2022-10-09, 1 file, -1/+3)
|
* Update textual_inversion.py (DepFA, 2022-10-09, 1 file, -3/+22)
|
* support loading .yaml config with same name as model (AUTOMATIC, 2022-10-08, 2 files, -8/+24)
| | | | support EMA weights in processing (????)
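Loading a .yaml config with the same name as the model amounts to probing for a sibling file before falling back to the default; a sketch, with the lookup logic assumed:

    import os

    def find_model_config(checkpoint_path: str, default_config: str) -> str:
        # model.ckpt -> model.yaml, if such a file exists next to the checkpoint
        candidate = os.path.splitext(checkpoint_path)[0] + ".yaml"
        return candidate if os.path.exists(candidate) else default_config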
* chore: Fix typos (Aidan Holland, 2022-10-08, 10 files, -17/+17)
|
* Break after finding the local directory of stable diffusion (Edouard Leurent, 2022-10-08, 1 file, -0/+1)
| | | | | Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../. Fixes https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
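The fix boils down to breaking out of the candidate-path loop at the first hit, so the later '.' and '..' entries cannot clobber an already-found install. A sketch of that loop; the probe file and variable names are assumptions modeled on modules/paths.py:

    import os

    script_path = os.path.dirname(os.path.abspath(__file__))
    possible_sd_paths = [os.path.join(script_path, 'repositories/stable-diffusion'), '.', '..']

    sd_path = None
    for possible_path in possible_sd_paths:
        if os.path.exists(os.path.join(possible_path, 'ldm/models/diffusion/ddpm.py')):
            sd_path = os.path.abspath(possible_path)
            break  # the one-line fix: stop at the first match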
* add 'Ignore last layers of CLIP model' option as a parameter to the infotext (AUTOMATIC, 2022-10-08, 1 file, -1/+5)
|
* make --force-enable-xformers work without needing --xformers (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
|
* Added ability to ignore last n layers in FrozenCLIPEmbedder (Fampai, 2022-10-08, 2 files, -2/+10)
|
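Ignoring the last n layers of the CLIP text encoder ("CLIP skip") means conditioning on an earlier hidden state instead of the final layer output. A hedged sketch with HuggingFace transformers; the re-applied final LayerNorm and the indexing convention describe the general technique, not the repo's exact code:

    import torch
    from transformers import CLIPTextModel, CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    def encode_ignoring_last_layers(prompt: str, n: int) -> torch.Tensor:
        tokens = tokenizer(prompt, return_tensors="pt")
        # Ask for every layer's hidden state, then step back n layers from the end.
        out = text_model(**tokens, output_hidden_states=True)
        if n > 0:
            hidden = out.hidden_states[-(n + 1)]
            # The model normally ends with a final LayerNorm; reapply it here.
            return text_model.text_model.final_layer_norm(hidden)
        return out.last_hidden_state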
* Update ui.py (DepFA, 2022-10-08, 1 file, -1/+1)
|
* TI preprocess wording (DepFA, 2022-10-08, 1 file, -3/+3)
| | | I had to check the code to work out what splitting was 🤷🏿
* add --force-enable-xformers option and also add messages to console regarding cross attention optimizations (AUTOMATIC, 2022-10-08, 2 files, -1/+6)
|
* add fallback for xformers_attnblock_forward (AUTOMATIC, 2022-10-08, 1 file, -1/+4)
|
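The fallback commit wraps the xformers attention block so that inputs xformers refuses are retried on the stock kernel. A minimal sketch of the guard; both inner functions are stand-ins, not the real implementations from sd_hijack_optimizations.py:

    def _xformers_attnblock(block, x):
        raise NotImplementedError  # stand-in: xformers rejects some shapes/dtypes

    def _stock_attnblock(block, x):
        return x  # stand-in for the original softmax attention block

    def xformers_attnblock_forward(block, x):
        try:
            return _xformers_attnblock(block, x)
        except NotImplementedError:
            # If xformers cannot handle this input, fall back transparently.
            return _stock_attnblock(block, x)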
* alternate prompt (Artem Zagidulin, 2022-10-08, 1 file, -2/+7)
|
* Add GZipMiddleware to root demo (DepFA, 2022-10-08, 1 file, -1/+5)
|
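GZipMiddleware is Starlette's response-compression middleware; attaching it to the app that serves the UI looks roughly like this (the bare FastAPI app here is a stand-in for the Gradio-created one):

    from fastapi import FastAPI
    from starlette.middleware.gzip import GZipMiddleware

    app = FastAPI()
    # Compress responses larger than ~1 KB; smaller ones aren't worth the CPU.
    app.add_middleware(GZipMiddleware, minimum_size=1000)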
* check for ampere without destroying the optimizations. again. (C43H66N12O12S2, 2022-10-08, 1 file, -4/+3)
|
* check for ampere (C43H66N12O12S2, 2022-10-08, 1 file, -3/+4)
|
* check for 3.10 (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
|
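The three "check for ..." commits above gate the automatic xformers install in launch.py on the interpreter version and the GPU generation. A sketch of the tests involved; the exact gating logic is assumed:

    import sys
    import torch

    def xformers_autoinstall_ok() -> bool:
        # Prebuilt wheels targeted Python 3.10 specifically.
        if (sys.version_info.major, sys.version_info.minor) != (3, 10):
            return False
        # Ampere = CUDA compute capability 8.x (e.g. RTX 30xx).
        if not torch.cuda.is_available():
            return False
        major, _minor = torch.cuda.get_device_capability()
        return major == 8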
* why did you do this (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
|
* Fixed typo (Milly, 2022-10-08, 1 file, -1/+1)
|
* restore old opt_split_attention/disable_opt_split_attention logic (AUTOMATIC, 2022-10-08, 1 file, -1/+1)
|
* simplify xformers options: --xformers to enable and that's it (AUTOMATIC, 2022-10-08, 4 files, -10/+16)
|
* emergency fix for xformers (continue + shared) (AUTOMATIC, 2022-10-08, 1 file, -8/+8)
|
* Merge pull request #1851 from C43H66N12O12S2/flash (AUTOMATIC1111, 2022-10-08, 6 files, -6/+55)
|\ | | | | xformers attention
| * Update sd_hijack.py (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| |
| * Update requirements_versions.txt (C43H66N12O12S2, 2022-10-08, 1 file, -0/+1)
| |
| * Update launch.py (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| |
| * update sd_hijack_opt to respect new env variables (C43H66N12O12S2, 2022-10-08, 1 file, -3/+8)
| |
| * add xformers_available shared variable (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| |
| * default to split attention if cuda is available and xformers is not (C43H66N12O12S2, 2022-10-08, 1 file, -2/+2)
| |
| * check for OS and env variable (C43H66N12O12S2, 2022-10-08, 1 file, -2/+7)
| |
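The "check for OS and env variable" commit adds a platform test plus an opt-in environment variable before installing anything. A sketch; the variable name here is hypothetical, not the one launch.py actually reads:

    import os
    import platform

    # Hypothetical opt-in switch; the real launch.py defines its own variable.
    OPT_IN_VAR = "WEBUI_INSTALL_XFORMERS"

    def env_allows_xformers_install() -> bool:
        if platform.system() not in ("Linux", "Windows"):
            return False  # no prebuilt wheels elsewhere
        return os.environ.get(OPT_IN_VAR, "0") == "1"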
| * Update requirements.txt (C43H66N12O12S2, 2022-10-08, 1 file, -1/+0)
| |
| * install xformers (C43H66N12O12S2, 2022-10-08, 1 file, -0/+3)
| |
| * use new attnblock for xformers path (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| |
| * Update sd_hijack_optimizations.py (C43H66N12O12S2, 2022-10-08, 1 file, -1/+1)
| |
| * add xformers attnblock and hypernetwork support (C43H66N12O12S2, 2022-10-08, 1 file, -2/+18)
| |
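At the core of the attnblock and hypernetwork support is xformers' memory-efficient attention kernel, which avoids materializing the full token-by-token attention matrix. A minimal sketch of the call; the (batch, tokens, heads, head_dim) layout is one the op accepts, but the reshaping done in the real forward is omitted:

    import torch
    import xformers.ops

    def memory_efficient_attn(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # q, k, v: (batch, tokens, heads, head_dim)
        # Equivalent to softmax(q @ k^T / sqrt(d)) @ v, computed blockwise,
        # which is what lets large resolutions fit in VRAM.
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)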
| * delete broken and unnecessary aliases (C43H66N12O12S2, 2022-10-08, 1 file, -6/+4)
| |
| * switch to the proper way of calling xformers (C43H66N12O12S2, 2022-10-08, 1 file, -25/+3)
| |
| * Update sd_hijack.py (C43H66N12O12S2, 2022-10-07, 1 file, -1/+1)
| |
| * Update sd_hijack.py (C43H66N12O12S2, 2022-10-07, 1 file, -2/+2)
| |
| * Update sd_hijack.py (C43H66N12O12S2, 2022-10-07, 1 file, -2/+1)
| |
| * Update requirements.txt (C43H66N12O12S2, 2022-10-07, 1 file, -0/+2)
| |