path: root/modules
Age | Commit message | Author | Lines
2022-10-29 | Merge pull request #3877 from Yaiol/master | AUTOMATIC1111 | -4/+5
    Filename tags wrongly reference the process size instead of the image size.
2022-10-29 | Merge pull request #3717 from benkyoujouzu/master | AUTOMATIC1111 | -0/+1
    Add missing support for linear activation in hypernetwork.
2022-10-29 | Merge pull request #3771 from aria1th/patch-12 | AUTOMATIC1111 | -1/+2
    Disable unavailable or duplicate options for activation functions.
2022-10-28 | Merge branch 'AUTOMATIC1111:master' into master | Bruno Seoane | -1/+1
2022-10-29 | Re-enable linear | AngelBottomless | -1/+1
2022-10-29 | Update images.py | Yaiol | -4/+5
    Filename tags [height] and [width] wrongly reference the process size instead of the resulting image size, so all upscaled files are named incorrectly.
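For illustration, a minimal sketch of filling such pattern tags from the final image rather than the requested processing size; `apply_filename_pattern` and the tag set here are assumptions, not the webui's actual code:

```python
# Illustrative sketch: [width]/[height] filename tags should come from the image
# that is actually being saved, not from the processing parameters.
from PIL import Image

def apply_filename_pattern(pattern: str, image: Image.Image, seed: int, prompt: str) -> str:
    replacements = {
        "[width]": str(image.width),    # final image size, e.g. after upscaling
        "[height]": str(image.height),  # NOT the requested process/target size
        "[seed]": str(seed),
        "[prompt_words]": "_".join(prompt.split()[:8]),
    }
    for tag, value in replacements.items():
        pattern = pattern.replace(tag, value)
    return pattern

# Example: an image upscaled to 1024x1024 is named with 1024, not the 512 process size.
print(apply_filename_pattern("[seed]-[width]x[height]", Image.new("RGB", (1024, 1024)), 1234, "a cat"))
```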
2022-10-28 | extras: upscaler blending should not be considered in cache key | Chris OBryan | -1/+1
2022-10-28 | extras-tweaks: autoformat changed lines | Chris OBryan | -15/+15
2022-10-28 | extras: Make image cache LRU | Chris OBryan | -29/+43
    This changes the extras image cache into a least-recently-used cache, which allows more experimentation with different upscalers without missing the cache. The maximum cache size is increased to 5, and the cache is cleared when the source image changes.
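A minimal sketch of the idea, assuming a key built from the source image and upscaler settings; the class and field names are illustrative, not the actual extras code:

```python
# Minimal LRU result cache for upscaled images (illustrative, not the webui implementation).
from collections import OrderedDict

class LruCache:
    def __init__(self, max_size: int = 5):
        self.max_size = max_size
        self._data: "OrderedDict[tuple, object]" = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)   # evict the least recently used entry

    def clear(self):
        self._data.clear()                   # e.g. when the source image changes

# A key like (image_hash, upscaler_name, scale) lets switching between upscalers
# still hit the cache; blending weights are deliberately left out of the key.
cache = LruCache(max_size=5)
cache.put(("abc123", "ESRGAN", 2), "upscaled-image")
assert cache.get(("abc123", "ESRGAN", 2)) == "upscaled-image"
```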
2022-10-28 | extras: Rework image cache | Chris OBryan | -20/+32
    A bit of a refactor of the image cache to make it easier to extend. It also takes the entire image into account instead of just a cropped portion.
2022-10-28 | extras: Add option to run upscaling before face fixing | Chris OBryan | -50/+99
    Face restoration can look much better if run after upscaling, since it lets the restoration fix upscaling artifacts. This patch adds an option to choose the order in which upscaling and face fixing are run.
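A rough sketch of what such an ordering switch amounts to; the function names are placeholders, not the real extras pipeline:

```python
# Placeholder sketch of the ordering option.
def run_extras(image, upscale, restore_faces, upscale_first: bool):
    # Running face restoration after upscaling lets it clean up upscaling artifacts.
    steps = [upscale, restore_faces] if upscale_first else [restore_faces, upscale]
    for step in steps:
        image = step(image)
    return image
```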
2022-10-28 | Fix log off by 1 | Muhammad Rizqi Nur | -18/+20
2022-10-28 | Always ignore "None.pt" in the hypernet directory. | timntorres | -2/+5
2022-10-27 | Explicitly state when Hypernet is none. | timntorres | -1/+1
2022-10-27 | Read hypernet strength from PNG info. | timntorres | -0/+1
2022-10-27 | Add strength to textinfo. | timntorres | -0/+1
2022-10-28 | Add missing support for linear activation in hypernetwork | benkyoujouzu | -0/+1
2022-10-28 | Natural sorting for dropdown checkpoint list | Antonio | -2/+5
    Example:
    Before                      After
    11.ckpt                     11.ckpt
    ab.ckpt                     ab.ckpt
    ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
    ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
    ade_step_1000.ckpt          ade_step_500.ckpt
    ade_step_1500.ckpt          ade_step_1000.ckpt
    ade_step_2000.ckpt          ade_step_1500.ckpt
    ade_step_2500.ckpt          ade_step_2000.ckpt
    ade_step_3000.ckpt          ade_step_2500.ckpt
    ade_step_500.ckpt           ade_step_3000.ckpt
    atp_step_5500.ckpt          atp_step_5500.ckpt
    model1.ckpt                 model1.ckpt
    model10.ckpt                model10.ckpt
    model1000.ckpt              model33.ckpt
    model33.ckpt                model50.ckpt
    model400.ckpt               model400.ckpt
    model50.ckpt                model1000.ckpt
    moo44.ckpt                  moo44.ckpt
    v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
    v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
    v1-5-pruned.ckpt            v1-5-pruned.ckpt
    v1-5-vae.ckpt               v1-5-vae.ckpt
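The usual way to get this ordering is a natural-sort key that splits names into digit and non-digit runs; this is a generic sketch, and the commit may implement it differently:

```python
# Generic natural-sort key: digit runs compare numerically, so "model50" sorts
# before "model1000" instead of after it.
import re

def natural_sort_key(name: str):
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

checkpoints = ["model1000.ckpt", "model50.ckpt", "model10.ckpt", "model1.ckpt"]
print(sorted(checkpoints, key=natural_sort_key))
# ['model1.ckpt', 'model10.ckpt', 'model50.ckpt', 'model1000.ckpt']
```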
2022-10-27 | Reduce peak memory usage when changing models | Josh Watzman | -4/+7
    A few tweaks to reduce peak memory usage, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away. On my machine with 16GB of RAM, this change means I can typically change models, whereas before it would typically OOM.
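A hedged sketch of the idea, assuming a PyTorch-style state dict and a plain dict as the checkpoint cache; the copy is made only when the cache will actually hold onto it:

```python
# Sketch: only keep a copy of the weights when the checkpoint cache is enabled,
# instead of always duplicating them and then throwing the duplicate away.
def load_model_weights(model, state_dict, checkpoint_cache: dict, cache_size: int, key: str):
    if cache_size > 0:
        checkpoint_cache[key] = dict(state_dict)     # keep for fast model switching
    model.load_state_dict(state_dict, strict=False)  # load directly, no extra duplicate
    return model
```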
2022-10-27 | Updated name and hover text. | random_thoughtss | -1/+1
2022-10-27 | Moved mask weight config to SD section | random_thoughtss | -1/+1
2022-10-27 | Highres fix works with unmasked latent. | random_thoughtss | -58/+76
    Also refactors the mask creation to make it more accessible.
2022-10-27 | Merge branch 'AUTOMATIC1111:master' into master | random-thoughtss | -72/+567
2022-10-28 | Fix random dataset shuffle on TI | FlameLaw | -2/+2
2022-10-27 | fixed indentation | Florian Horn | -1/+1
2022-10-27 | added save button and shortcut (s) to Modal View | Florian Horn | -3/+5
2022-10-27 | create send to buttons by extensions | yfszzx | -3/+5
2022-10-27 | Disable unavailable or duplicate options | AngelBottomless | -1/+2
2022-10-27 | create send to buttons in one module | yfszzx | -247/+184
2022-10-26 | Add id access to scripts list in the css | xmodar | -1/+1
2022-10-26 | prototype progress api | evshiron | -14/+88
2022-10-26 | typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir | DepFA | -1/+1
2022-10-26 | Remove folder endpoint | Bruno Seoane | -14/+1
2022-10-26 | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui | Bruno Seoane | -87/+637
2022-10-26 | add script callback for before image save and change callback for after image save to use a class with parameters | AUTOMATIC | -26/+64
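A hedged sketch of what a parameterized save callback can look like; the real API lives in modules/script_callbacks.py and the names here are illustrative:

```python
# Illustrative sketch of a parameterized image-save callback registration.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ImageSaveParams:            # illustrative name and fields
    image: object                 # the PIL image about to be saved
    filename: str
    pnginfo: dict = field(default_factory=dict)

_before_image_save_callbacks: List[Callable[[ImageSaveParams], None]] = []

def on_before_image_save(callback: Callable[[ImageSaveParams], None]) -> None:
    """Extensions register a callback that may modify params before saving."""
    _before_image_save_callbacks.append(callback)

def run_before_image_save(params: ImageSaveParams) -> ImageSaveParams:
    for cb in _before_image_save_callbacks:
        cb(params)                # callbacks mutate params in place
    return params
```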
2022-10-26 | add override_settings to API as an alternative to #3629 | AUTOMATIC | -7/+22
2022-10-26 | Implement PR #3625 but for embeddings. | timntorres | -1/+1
2022-10-26 | Implement PR #3309 but for embeddings. | timntorres | -1/+8
2022-10-26 | Implement PR #3189 but for embeddings. | timntorres | -5/+5
2022-10-26 | patch bug (SeverianVoid's comment on 5245c7a) | timntorres | -1/+1
2022-10-26 | img2img, use smartphone photos' EXIF orientation | timntorres | -0/+8
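Pillow's ImageOps.exif_transpose handles this; the wiring shown here is an illustrative sketch rather than the commit's exact code:

```python
# Apply EXIF orientation so phone photos load upright before img2img processing.
from PIL import Image, ImageOps

def load_oriented(path: str) -> Image.Image:
    image = Image.open(path)
    # Rotates/flips the image according to its EXIF Orientation tag.
    return ImageOps.exif_transpose(image)
```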
2022-10-26 | fix typo in on_save_imaged/on_image_saved; hope no extension is using it yet | AUTOMATIC | -1/+1
2022-10-26 | default_time_format if format is blank | w-e-w | -1/+1
2022-10-26 | images: allow nested bracket in filename pattern | Milly | -7/+4
2022-10-26 | clean | Stephen | -5/+1
2022-10-26 | [Bugfix][API] - Fix API response for colab users | Stephen | -8/+19
2022-10-26 | enable creating embedding with --medvram | AUTOMATIC | -0/+3
2022-10-26 | Merge pull request #3139 from captin411/focal-point-cropping | AUTOMATIC1111 | -5/+392
    [Preprocess image] New option to auto-crop based on complexity, edges, and faces.
2022-10-26 | remove duplicate keys and lowercase | AngelBottomless | -1/+1
2022-10-26 | Weight initialization and More activation func | AngelBottomless | -11/+44
    - add weight init
    - add weight init option in create_hypernetwork
    - fstringify hypernet info
    - save weight initialization info for further debugging
    - fill bias with zero for He/Xavier
    - initialize LayerNorm with Normal
    - fix loading weight_init
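A hedged sketch of the described initialization choices using standard PyTorch init functions; the option names and wiring are illustrative, not the hypernetwork module's actual code:

```python
# Illustrative weight-initialization helpers for a hypernetwork layer.
import torch.nn as nn

def init_linear(layer: nn.Linear, weight_init: str = "KaimingNormal") -> None:
    if weight_init == "KaimingNormal":        # He initialization
        nn.init.kaiming_normal_(layer.weight)
    elif weight_init == "XavierNormal":
        nn.init.xavier_normal_(layer.weight)
    else:                                      # default: keep PyTorch's built-in init
        return
    if layer.bias is not None:
        nn.init.zeros_(layer.bias)             # "fill bias with zero for He/Xavier"

def init_layernorm(norm: nn.LayerNorm) -> None:
    nn.init.normal_(norm.weight, mean=1.0, std=0.02)  # "initialize LayerNorm with Normal"
    nn.init.zeros_(norm.bias)
```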