path: root/modules
Age | Commit message | Author | Lines
2022-10-09 | Fix typo | Nicolas Noullet | -1/+1
2022-10-09 | remove line break | frostydad | -1/+0
2022-10-09 | Fix incorrect sampler name in output | frostydad | -1/+8
2022-10-09 | Fix VRAM Issue by only loading in hypernetwork when selected in settings | Fampai | -16/+20
2022-10-09 | Merge pull request #1752 from Greendayle/dev/deepdanbooru | AUTOMATIC1111 | -5/+98
    Added DeepDanbooru interrogator
2022-10-09 | Support `Download` for txt files. | aoirusann | -3/+41
2022-10-09 | Add `Download` & `Download as zip` | aoirusann | -5/+34
2022-10-09 | fixed incorrect message about loading config; thanks anon! | AUTOMATIC | -1/+1
2022-10-09 | make main model loading and model merger use the same code | AUTOMATIC | -8/+12
2022-10-08 | support loading .yaml config with same name as model | AUTOMATIC | -8/+24
    support EMA weights in processing (????)
2022-10-08 | chore: Fix typos | Aidan Holland | -15/+15
2022-10-08 | Break after finding the local directory of stable diffusion | Edouard Leurent | -0/+1
    Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../. Fixes https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
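A minimal sketch of the search-order issue described in the entry above, assuming a candidate-path loop like the one in the webui's paths module; the candidate list, marker file, and variable names below are illustrative assumptions, not the repository's exact code:

    import os

    # Candidate locations searched for the Stable Diffusion checkout (assumed order):
    # the bundled repository first, then the current and parent directories.
    possible_sd_paths = ['repositories/stable-diffusion', '.', '..']

    sd_path = None
    for candidate in possible_sd_paths:
        # A file known to live inside the checkout identifies it.
        if os.path.exists(os.path.join(candidate, 'ldm/models/diffusion/ddpm.py')):
            sd_path = os.path.abspath(candidate)
            # The fix: stop at the first match so '.' or '..' cannot overwrite it,
            # which would leave the sibling checkouts expected under sd_path/../
            # (taming transformers, CodeFormer, etc.) unresolved.
            break

Without the break, a later candidate that also contains the marker file would silently replace the first match, which is the failure mode reported in the linked issue.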
2022-10-08 | add 'Ignore last layers of CLIP model' option as a parameter to the infotext | AUTOMATIC | -1/+5
2022-10-08 | make --force-enable-xformers work without needing --xformers | AUTOMATIC | -1/+1
2022-10-08 | Added ability to ignore last n layers in FrozenCLIPEmbedder | Fampai | -2/+10
2022-10-08 | Update ui.py | DepFA | -1/+1
2022-10-08 | TI preprocess wording | DepFA | -3/+3
    I had to check the code to work out what splitting was 🤷🏿
2022-10-08 | Merge branch 'master' into dev/deepdanbooru | Greendayle | -4/+17
2022-10-08 | add --force-enable-xformers option and also add messages to console regarding cross attention optimizations | AUTOMATIC | -1/+6
2022-10-08 | add fallback for xformers_attnblock_forward | AUTOMATIC | -1/+4
2022-10-08 | made deepdanbooru optional, added to readme, automatic download of deepbooru model | Greendayle | -17/+23
2022-10-08 | alternate prompt | Artem Zagidulin | -2/+7
2022-10-08 | check for ampere without destroying the optimizations. again. | C43H66N12O12S2 | -4/+3
2022-10-08 | check for ampere | C43H66N12O12S2 | -3/+4
2022-10-08 | Merge branch 'master' into dev/deepdanbooru | Greendayle | -1/+1
2022-10-08 | why did you do this | AUTOMATIC | -1/+1
2022-10-08 | fix conflicts | Greendayle | -46/+159
2022-10-08 | Fixed typo | Milly | -1/+1
2022-10-08 | restore old opt_split_attention/disable_opt_split_attention logic | AUTOMATIC | -1/+1
2022-10-08 | simplify xformers options: --xformers to enable and that's it | AUTOMATIC | -9/+15
2022-10-08 | emergency fix for xformers (continue + shared) | AUTOMATIC | -8/+8
2022-10-08 | Merge pull request #1851 from C43H66N12O12S2/flash | AUTOMATIC1111 | -6/+45
    xformers attention
2022-10-08 | Update sd_hijack.py | C43H66N12O12S2 | -1/+1
2022-10-08 | update sd_hijack_opt to respect new env variables | C43H66N12O12S2 | -3/+8
2022-10-08 | add xformers_available shared variable | C43H66N12O12S2 | -1/+1
2022-10-08 | default to split attention if cuda is available and xformers is not | C43H66N12O12S2 | -2/+2
2022-10-08 | fix bug where, when using prompt composition, hijack_comments generated before the final AND will be dropped | MrCheeze | -1/+5
2022-10-08 | fix glob path in hypernetwork.py | ddPn08 | -1/+1
2022-10-08 | fix AND broken for long prompts | AUTOMATIC | -0/+9
2022-10-08 | fix bugs related to variable prompt lengths | AUTOMATIC | -12/+37
2022-10-08 | do not let user choose his own prompt token count limit | AUTOMATIC | -21/+12
2022-10-08 | check specifically for skipped | Trung Ngo | -7/+3
2022-10-08 | Add button to skip the current iteration | Trung Ngo | -0/+21
2022-10-08 | Merge remote-tracking branch 'origin/master' | AUTOMATIC | -1/+5
2022-10-08 | let user choose his own prompt token count limit | AUTOMATIC | -8/+16
2022-10-08 | fix: handles when state_dict does not exist | leko | -1/+5
2022-10-08 | use new attnblock for xformers path | C43H66N12O12S2 | -1/+1
2022-10-08 | Update sd_hijack_optimizations.py | C43H66N12O12S2 | -1/+1
2022-10-08 | add xformers attnblock and hypernetwork support | C43H66N12O12S2 | -2/+18
2022-10-08 | Add hypernetwork support to split cross attention v1 | brkirch | -5/+15
    * Add hypernetwork support to split_cross_attention_forward_v1
    * Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
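A hedged, self-contained sketch of the first change listed in the entry above: when a hypernetwork is loaded, its per-layer modules transform the attention context before the key and value projections. The module pair, dimensions, and variable names here are assumptions for illustration, not the repository's code:

    import torch
    import torch.nn as nn

    dim = 768                               # context width (e.g. CLIP text features), assumed
    to_k = nn.Linear(dim, dim, bias=False)  # key projection of the cross-attention block
    to_v = nn.Linear(dim, dim, bias=False)  # value projection

    # A loaded hypernetwork is assumed to provide a (k_module, v_module) pair for this
    # context width; None stands for "no hypernetwork selected".
    hypernetwork_layers = (nn.Linear(dim, dim), nn.Linear(dim, dim))

    context = torch.randn(1, 77, dim)       # conditioning tensor

    if hypernetwork_layers is not None:
        k = to_k(hypernetwork_layers[0](context))
        v = to_v(hypernetwork_layers[1](context))
    else:
        k = to_k(context)
        v = to_v(context)

The second change in that entry is a one-line device fix: per the commit description, the ESRGAN upscaler is moved to devices.device_esrgan instead of the generic shared.device.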