path: root/modules/sd_models.py
Age  Commit message  (Author, lines -deleted/+added)
2022-12-11  unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False)  (MrCheeze, -1/+3)
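A minimal sketch of the kind of guard this commit describes, assuming an OmegaConf-style LDM config; the path and access pattern are illustrative, not the commit's actual diff:

```python
from omegaconf import OmegaConf

config = OmegaConf.load("configs/v1-inference.yaml")  # hypothetical path
if config.model.params.get("use_ema") is None:
    config.model.params.use_ema = False  # True never worked, so default off
```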
2022-12-11  fix: fallback model_checkpoint if it's empty  (Dean van Dugteren, -0/+4)
    This fixes the following error when SD attempts to start with a deleted checkpoint:
```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
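A hedged sketch of the fallback (not the exact patch): if the stored checkpoint setting is empty or not a string key, for example a stale list left behind after the file was deleted, fall back to the first known checkpoint instead of crashing inside `checkpoints_list.get()`:

```python
def select_checkpoint_info(model_checkpoint, checkpoints_list):
    # an unhashable or empty value can't be used as a dict key; recover
    # by picking the first registered checkpoint title instead
    if not model_checkpoint or not isinstance(model_checkpoint, str):
        model_checkpoint = next(iter(checkpoints_list), None)
    return checkpoints_list.get(model_checkpoint, None)
```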
2022-12-10  fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model  (MrCheeze, -0/+1)
2022-12-10  Merge pull request #4841 from R-N/vae-fix-none  (AUTOMATIC1111, -0/+2)
    Fix None option of VAE selector
2022-12-08  Depth2img model support  (Jay Smith, -0/+46)
2022-11-28  make it possible to save nai model using safetensors  (AUTOMATIC, -2/+2)
2022-11-27  add safetensors support for model merging #4869  (AUTOMATIC, -11/+15)
2022-11-27  add safetensors to requirements  (AUTOMATIC, -6/+5)
2022-11-27  Merge pull request #4930 from Narsil/allow_to_load_safetensors_file  (AUTOMATIC1111, -2/+9)
    Supporting `*.safetensors` format.
2022-11-26  no-half support for SD 2.0  (MrCheeze, -0/+3)
2022-11-21  Supporting `*.safetensors` format.  (Nicolas Patry, -2/+9)
    If a model file exists with extension `.safetensors`, then we can load it more safely than with PyTorch weights.
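A sketch of the dual-format loader this commit describes; the helper name is illustrative, but `safetensors.torch.load_file` is the real API:

```python
import torch
from safetensors.torch import load_file

def read_state_dict(path: str, device: str = "cpu") -> dict:
    if path.endswith(".safetensors"):
        # safetensors files hold raw tensors only: no pickle, so no code
        # execution on load, which is what makes the format safer
        return load_file(path, device=device)
    # legacy PyTorch checkpoint: unpickling can execute arbitrary code
    return torch.load(path, map_location=device)
```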
2022-11-19  Merge branch 'a1111' into vae-fix-none  (Muhammad Rizqi Nur, -8/+2)
2022-11-19  Remove no longer necessary parts and add vae_file safeguard  (Muhammad Rizqi Nur, -8/+2)
2022-11-19  Misc  (Muhammad Rizqi Nur, -0/+1)
2022-11-19  Fix base VAE caching (it was done after loading the VAE), also add safeguard  (Muhammad Rizqi Nur, -0/+1)
2022-11-09  restore #4035 behavior  (cluder, -1/+1)
    - if checkpoint cache is set to 1, keep 2 models in cache (current + 1 more)
2022-11-09  do not use ckpt cache if disabled; cache model after it has been loaded from file  (cluder, -10/+17)
2022-11-04  fix one of previous merges breaking the program  (AUTOMATIC, -0/+2)
2022-11-04  Merge branch 'master' into fix-ckpt-cache  (AUTOMATIC1111, -2/+3)
2022-11-04  fix: loading models without vae from cache  (digburn, -2/+3)
2022-11-02  Merge branch 'master' into fix-ckpt-cache  (Muhammad Rizqi Nur, -18/+28)
2022-11-02  fix #3986 breaking --no-half-vae  (AUTOMATIC, -0/+9)
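An illustrative sketch of the --no-half-vae idea (not the exact fix for #3986): halve the model but keep the VAE in float32, since a half-precision VAE is a common source of black or NaN outputs:

```python
def apply_precision(model, no_half_vae: bool):
    model.half()
    if no_half_vae:
        model.first_stage_model.float()  # the VAE in LDM checkpoints
    return model
```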
2022-11-02  Reload VAE without reloading sd checkpoint  (Muhammad Rizqi Nur, -8/+7)
2022-11-02  Merge branch 'master' into vae-picker  (Muhammad Rizqi Nur, -1/+13)
2022-11-01  Unload sd_model before loading the other  (Jairo Correa, -1/+13)
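A sketch of the unload-first approach, using standard gc/torch calls; `shared.sd_model` stands in for the webui's global model reference:

```python
import gc
import torch

def unload_current_model(shared):
    if shared.sd_model is not None:
        shared.sd_model.to("cpu")  # move weights out of VRAM first
        shared.sd_model = None
    gc.collect()                   # drop lingering Python references
    torch.cuda.empty_cache()       # hand cached VRAM back to the driver
```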
2022-10-31  Fix #4035 for real now  (Muhammad Rizqi Nur, -6/+7)
2022-10-31  Fix #4035  (Muhammad Rizqi Nur, -1/+1)
2022-10-31  Checkpoint cache by combination key of checkpoint and vae  (Muhammad Rizqi Nur, -11/+16)
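A minimal sketch of the combination key, assuming an OrderedDict-style cache like the one these commits describe (names are illustrative):

```python
import collections

checkpoints_loaded = collections.OrderedDict()

def cache_state_dict(checkpoint_file, vae_file, state_dict, max_entries):
    key = (checkpoint_file, vae_file)  # the VAE is now part of the identity
    checkpoints_loaded[key] = state_dict
    while len(checkpoints_loaded) > max_entries:
        checkpoints_loaded.popitem(last=False)  # evict the oldest entry
```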
2022-10-30  Settings to select VAE  (Muhammad Rizqi Nur, -21/+10)
2022-10-29  Merge pull request #3818 from jwatzman/master  (AUTOMATIC1111, -4/+7)
    Reduce peak memory usage when changing models
2022-10-28  Natural sorting for dropdown checkpoint list  (Antonio, -2/+5)
    Example:
        Before                      After
        11.ckpt                     11.ckpt
        ab.ckpt                     ab.ckpt
        ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
        ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
        ade_step_1000.ckpt          ade_step_500.ckpt
        ade_step_1500.ckpt          ade_step_1000.ckpt
        ade_step_2000.ckpt          ade_step_1500.ckpt
        ade_step_2500.ckpt          ade_step_2000.ckpt
        ade_step_3000.ckpt          ade_step_2500.ckpt
        ade_step_500.ckpt           ade_step_3000.ckpt
        atp_step_5500.ckpt          atp_step_5500.ckpt
        model1.ckpt                 model1.ckpt
        model10.ckpt                model10.ckpt
        model1000.ckpt              model33.ckpt
        model33.ckpt                model50.ckpt
        model400.ckpt               model400.ckpt
        model50.ckpt                model1000.ckpt
        moo44.ckpt                  moo44.ckpt
        v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
        v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
        v1-5-pruned.ckpt            v1-5-pruned.ckpt
        v1-5-vae.ckpt               v1-5-vae.ckpt
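A typical natural-sort key that produces the ordering in the table above (it may differ in detail from the commit itself):

```python
import re

def natural_key(name: str):
    # split into digit and non-digit runs so "model50" sorts before "model400"
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["model1000.ckpt", "model50.ckpt", "model400.ckpt"]
print(sorted(names, key=natural_key))
# ['model50.ckpt', 'model400.ckpt', 'model1000.ckpt']
```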
2022-10-27  Reduce peak memory usage when changing models  (Josh Watzman, -4/+7)
    A few tweaks to reduce peak memory usage, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away. On my machine with 16GB of RAM, this change means I can typically change models, whereas before it would typically OOM.
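A sketch of the principle (illustrative, not the actual diff): keep a reference to the full state dict only when the cache needs it, so that with the cache off the dict can be freed as soon as the weights are in the model:

```python
import torch

def load_model_weights(model, checkpoint_path, cache, cache_enabled):
    pl_sd = torch.load(checkpoint_path, map_location="cpu")
    sd = pl_sd.pop("state_dict", pl_sd)  # take the dict, don't copy it
    if cache_enabled:
        cache[checkpoint_path] = sd      # the cache holds the only copy
    model.load_state_dict(sd, strict=False)
    # with the cache off, nothing else references `sd`, so its tensors
    # are collectable instead of doubling peak RAM during the switch
```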
2022-10-22  call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that  (AUTOMATIC, -1/+2)
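The ordering this commit establishes, sketched with the webui's shared and script_callbacks modules passed in as assumed context:

```python
def finish_model_load(shared, script_callbacks, sd_model):
    shared.sd_model = sd_model  # publish the new model first...
    # ...then notify scripts, so a callback that reads shared.sd_model
    # sees the model that was just loaded, not the previous one
    script_callbacks.model_loaded_callback(sd_model)
```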
2022-10-22  fix aesthetic gradients doing nothing after loading a different model  (MrCheeze, -2/+2)
2022-10-22  removed aesthetic gradients as built-in  (AUTOMATIC, -2/+5)
    added support for extensions
2022-10-21  loading SD VAE, see PR #3303  (AUTOMATIC, -1/+4)
2022-10-21  do not load aesthetic clip model until it's needed  (AUTOMATIC, -3/+0)
    add refresh button for aesthetic embeddings
    add aesthetic params to images' infotext
2022-10-21  Merge branch 'ae'  (AUTOMATIC, -1/+4)
2022-10-20  XY grid correctly re-assigns model when config changes  (random_thoughtss, -3/+3)
2022-10-20  Added PLMS hijack and made sure to always replace methods  (random_thoughtss, -2/+1)
2022-10-19  Added support for RunwayML inpainting model  (random_thoughtss, -1/+15)
2022-10-19  fix for broken checkpoint merger  (AUTOMATIC, -1/+4)
2022-10-19  Merge branch 'master' into test_resolve_conflicts  (MalumaDev, -3/+25)
2022-10-19  more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names)  (AUTOMATIC, -3/+25)
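A hedged sketch of key normalization along the lines this commit describes; the replacement table below is illustrative, not the commit's actual list:

```python
KEY_REPLACEMENTS = {
    "cond_stage_model.transformer.embeddings.":
        "cond_stage_model.transformer.text_model.embeddings.",
}

def transform_key(k: str) -> str:
    # map known-bad key prefixes onto the expected ones
    for old, new in KEY_REPLACEMENTS.items():
        if k.startswith(old):
            return new + k[len(old):]
    return k

def normalize_state_dict(sd: dict) -> dict:
    return {transform_key(k): v for k, v in sd.items()}
```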
2022-10-16  ui fix, reorganization of the code  (MalumaDev, -1/+4)
2022-10-15  Merge branch 'master' into ckpt-cache  (AUTOMATIC1111, -3/+2)
2022-10-14  add checkpoint cache option to UI for faster model switching  (Rae Fu, -22/+32)
    switching time reduced from ~1500ms to ~280ms
2022-10-14  rework the code for lowram a bit  (AUTOMATIC, -10/+2)
2022-10-14  load models to VRAM when using --lowram param  (Ljzd-PRO, -2/+13)
    load models to VRAM instead of RAM (for machines with more VRAM than RAM, such as the free Google Colab server)
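A sketch of the --lowram behavior (the flag handling is illustrative, the file path hypothetical): map checkpoint tensors straight into VRAM instead of staging them in system RAM:

```python
import torch

def load_checkpoint(path: str, lowram: bool):
    map_location = "cuda" if lowram else "cpu"
    return torch.load(path, map_location=map_location)

pl_sd = load_checkpoint("model.ckpt", lowram=True)
```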
2022-10-10  no to different messages plus fix using != to compare to None  (AUTOMATIC, -6/+3)
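Why `!= None` is fragile: `!=` dispatches to `__ne__`, which classes can overload (tensors, for instance, compare elementwise), while `is not None` is an identity test that cannot be intercepted and is what PEP 8 prescribes. A small sketch with a hypothetical attribute:

```python
def checkpoint_title(checkpoint_info):
    # identity test: immune to any __eq__/__ne__ overloads on the object
    if checkpoint_info is not None:
        return checkpoint_info.title  # hypothetical attribute
    return "none"
```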