path: root/modules/sd_models.py
Age | Commit message | Author | Lines
2023-01-11 | possible fix for fallback for fast model creation from config, attempt 2 | AUTOMATIC | -0/+1
2023-01-11 | possible fix for fallback for fast model creation from config | AUTOMATIC | -0/+3
2023-01-10 | add support for transformers==4.25.1 | AUTOMATIC | -2/+6
    add fallback for when quick model creation fails
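    Note: the fallback here is a plain try/except around the fast path. A minimal sketch, assuming the `instantiate_from_config` helper from the `ldm` package and some context manager providing the creation speedups (the exact names are assumptions):
    ```
    import contextlib

    from ldm.util import instantiate_from_config  # helper from the Stable Diffusion codebase

    def create_model(sd_config, fast_context=contextlib.nullcontext):
        # Fast path: build the model with initialization shortcuts enabled.
        # If a patched code path breaks (e.g. a new transformers release),
        # fall back to the slow but reliable default construction.
        try:
            with fast_context():
                return instantiate_from_config(sd_config.model)
        except Exception as e:
            print(f"Failed to create model quickly; retrying with slow method: {e}")
            return instantiate_from_config(sd_config.model)
    ```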
2023-01-10 | add more stuff to ignore when creating model from config | AUTOMATIC | -4/+28
    prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 | disable torch weight initialization and CLIP downloading/reading checkpoint to speed up creating sd model from config | AUTOMATIC | -2/+3
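    Note: the speedup works because the checkpoint's weights overwrite whatever the constructors initialize, so the random init is wasted time. A sketch of the idea, patching two of PyTorch's init routines to no-ops (the real change also avoids downloading CLIP weights through transformers):
    ```
    import torch

    class DisableInitialization:
        # While active, torch's weight-init helpers do nothing; the model is
        # built with uninitialized tensors and real weights are loaded from
        # the checkpoint right afterwards.
        def __enter__(self):
            def do_nothing(*args, **kwargs):
                pass

            self.kaiming_uniform_ = torch.nn.init.kaiming_uniform_
            self.no_grad_normal_ = torch.nn.init._no_grad_normal_
            torch.nn.init.kaiming_uniform_ = do_nothing
            torch.nn.init._no_grad_normal_ = do_nothing
            return self

        def __exit__(self, *exc):
            # always restore the originals, even if model creation raised
            torch.nn.init.kaiming_uniform_ = self.kaiming_uniform_
            torch.nn.init._no_grad_normal_ = self.no_grad_normal_
    ```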
2023-01-09 | allow model load if previous model failed | Vladimir Mandic | -5/+10
2023-01-04 | use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything | AUTOMATIC | -1/+1
2023-01-04 | Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' | AUTOMATIC | -1/+4
2023-01-04 | fix broken inpainting model | AUTOMATIC | -3/+0
2023-01-04 | find configs for models at runtime rather than when starting | AUTOMATIC | -13/+18
2023-01-04 | helpful error message when trying to load 2.0 without config | AUTOMATIC | -8/+18
    failing to load model weights from settings won't break generation for the currently loaded model anymore
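    Note: a sketch of the not-breaking-generation part, with `load_model_weights` as a hypothetical helper and the previous checkpoint tracked by the caller; on failure the old weights are restored so the UI keeps a working model:
    ```
    def reload_model_weights(sd_model, new_checkpoint, current_checkpoint):
        try:
            load_model_weights(sd_model, new_checkpoint)  # hypothetical helper
        except Exception:
            print(f"Failed to load {new_checkpoint}; restoring {current_checkpoint}")
            load_model_weights(sd_model, current_checkpoint)
            raise  # still surface the error (with a helpful message) to the user
    ```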
2023-01-03 | call script callbacks for reloaded model after loading embeddings | AUTOMATIC | -2/+2
2023-01-02 | fix the issue with training on SD2.0 | AUTOMATIC | -0/+2
2022-12-31 | validate textual inversion embeddings | Vladimir Mandic | -0/+3
2022-12-27 | Attempting to solve slow loads for `safetensors`. | Nicolas Patry | -1/+4
    Fixes #5893
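    Note: the slow loads came from reading tensors onto one device and moving them afterwards; `safetensors.torch.load_file` accepts a `device` argument, so tensors can be materialized on the target device directly. A simplified sketch (the webui later replaced the hardcoded CUDA device with the commandline-supplied name, per the 2023-01-04 entry above):
    ```
    import torch
    import safetensors.torch

    def read_safetensors(checkpoint_file, map_location=None):
        # materialize tensors directly on the target device instead of
        # loading to CPU and copying afterwards
        device = map_location or ("cuda" if torch.cuda.is_available() else "cpu")
        return safetensors.torch.load_file(checkpoint_file, device=device)
    ```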
2022-12-24 | fix F541 f-string without any placeholders | Yuval Aboulafia | -4/+4
2022-12-24 | Removed length check in sd_model at line 115 | linuxmobile ( リナックス ) | -3/+0
    Commit eba60a4 caused this error; deleting the length check starting at line 115 of sd_models.py fixes it. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 | Merge pull request #5627 from deanpress/patch-1 | AUTOMATIC1111 | -0/+4
    fix: fallback model_checkpoint if it's empty
2022-12-11 | unconditionally set use_ema=False if value not specified | MrCheeze | -1/+3
    (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False)
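    Note: a sketch of the override, assuming the config is the OmegaConf object the webui loads from the model's .yaml:
    ```
    def ensure_use_ema(sd_config):
        # if the yaml doesn't specify use_ema, force it to False: True never
        # worked here, and nearly all shipped configs already say False
        if "use_ema" not in sd_config.model.params:
            sd_config.model.params.use_ema = False
    ```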
2022-12-11 | fix: fallback model_checkpoint if it's empty | Dean van Dugteren | -0/+4
    This fixes the following error when SD attempts to start with a deleted checkpoint:
    ```
    Traceback (most recent call last):
      File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
        start()
      File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
        webui.webui()
      File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
        initialize()
      File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
        modules.sd_models.load_model()
      File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
        checkpoint_info = checkpoint_info or select_checkpoint()
      File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
        checkpoint_info = checkpoints_list.get(model_checkpoint, None)
    TypeError: unhashable type: 'list'
    ```
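    Note: the crash happened because the settings held a list where a checkpoint title was expected, and the unhashable key blew up `checkpoints_list.get()`. A minimal sketch of the guard, with the parameters inferred from the traceback (the webui's actual signature differs):
    ```
    def select_checkpoint(model_checkpoint, checkpoints_list):
        # settings may hold an empty or invalid (unhashable) value, e.g. after
        # the configured checkpoint file was deleted; fall back to any known
        # checkpoint instead of crashing on checkpoints_list.get()
        if not isinstance(model_checkpoint, str) and checkpoints_list:
            model_checkpoint = next(iter(checkpoints_list))

        return checkpoints_list.get(model_checkpoint, None)
    ```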
2022-12-10 | fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model | MrCheeze | -0/+1
2022-12-10 | Merge pull request #4841 from R-N/vae-fix-none | AUTOMATIC1111 | -0/+2
    Fix None option of VAE selector
2022-12-08 | Depth2img model support | Jay Smith | -0/+46
2022-11-28 | make it possible to save nai model using safetensors | AUTOMATIC | -2/+2
2022-11-27 | add safetensors support for model merging #4869 | AUTOMATIC | -11/+15
2022-11-27 | add safetensors to requirements | AUTOMATIC | -6/+5
2022-11-27 | Merge pull request #4930 from Narsil/allow_to_load_safetensors_file | AUTOMATIC1111 | -2/+9
    Supporting `*.safetensors` format.
2022-11-26 | no-half support for SD 2.0 | MrCheeze | -0/+3
2022-11-21 | Supporting `*.safetensors` format. | Nicolas Patry | -2/+9
    If a model file exists with extension `.safetensors` then we can load it more safely than with PyTorch weights.
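    Note: "more safely" because .safetensors files contain raw tensors only, while .ckpt files are pickles that can execute arbitrary code on load. A sketch of the extension dispatch:
    ```
    import torch
    import safetensors.torch

    def load_checkpoint(checkpoint_file, map_location="cpu"):
        if checkpoint_file.lower().endswith(".safetensors"):
            # tensor-only format: no pickled Python objects, safe to load
            return safetensors.torch.load_file(checkpoint_file, device=map_location)
        # legacy .ckpt goes through pickle: only load from trusted sources
        return torch.load(checkpoint_file, map_location=map_location)
    ```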
2022-11-19 | Merge branch 'a1111' into vae-fix-none | Muhammad Rizqi Nur | -8/+2
2022-11-19 | Remove no longer necessary parts and add vae_file safeguard | Muhammad Rizqi Nur | -8/+2
2022-11-19 | Misc | Muhammad Rizqi Nur | -0/+1
2022-11-19 | Fix base VAE caching being done after loading VAE, also add safeguard | Muhammad Rizqi Nur | -0/+1
2022-11-09 | restore #4035 behavior | cluder | -1/+1
    if checkpoint cache is set to 1, keep 2 models in cache (current + 1 more)
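    Note: a sketch of the restored policy, assuming the cache is the `checkpoints_loaded` ordered dict the webui keeps; with the setting at N, the current model plus N more stay cached:
    ```
    import collections

    checkpoints_loaded = collections.OrderedDict()

    def cache_state_dict(cache_key, state_dict, cache_size):
        checkpoints_loaded[cache_key] = state_dict
        checkpoints_loaded.move_to_end(cache_key)        # mark as most recently used
        while len(checkpoints_loaded) > cache_size + 1:  # current model + N more
            checkpoints_loaded.popitem(last=False)       # evict the oldest entry
    ```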
2022-11-09 | do not use ckpt cache if disabled | cluder | -10/+17
    cache model after it has been loaded from file
2022-11-04 | fix one of previous merges breaking the program | AUTOMATIC | -0/+2
2022-11-04 | Merge branch 'master' into fix-ckpt-cache | AUTOMATIC1111 | -2/+3
2022-11-04 | fix: loading models without vae from cache | digburn | -2/+3
2022-11-02 | Merge branch 'master' into fix-ckpt-cache | Muhammad Rizqi Nur | -18/+28
2022-11-02 | fix #3986 breaking --no-half-vae | AUTOMATIC | -0/+9
2022-11-02 | Reload VAE without reloading sd checkpoint | Muhammad Rizqi Nur | -8/+7
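    Note: a sketch of swapping only the VAE. In ldm's LatentDiffusion the VAE lives under `first_stage_model`, so its weights can be replaced without touching the UNet or text encoder; the key filtering follows what the webui does, but treat the details as assumptions:
    ```
    import torch

    def load_vae(sd_model, vae_file):
        vae_ckpt = torch.load(vae_file, map_location="cpu")
        # standalone VAE checkpoints ship training-loss weights too; skip them
        vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items()
                    if not k.startswith("loss")}
        sd_model.first_stage_model.load_state_dict(vae_dict)
    ```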
2022-11-02 | Merge branch 'master' into vae-picker | Muhammad Rizqi Nur | -1/+13
2022-11-01 | Unload sd_model before loading the other | Jairo Correa | -1/+13
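    Note: the point is that the outgoing and incoming models should never occupy memory at the same time. A sketch, assuming the webui's convention of keeping the live model on `shared.sd_model`:
    ```
    import gc

    import torch
    from modules import shared  # webui module holding the live model (assumption)

    def unload_current_model():
        if shared.sd_model is not None:
            shared.sd_model.to("cpu")  # move weights out of VRAM first
            shared.sd_model = None     # drop the last live reference
        gc.collect()                   # free the host RAM the old model held
        if torch.cuda.is_available():
            torch.cuda.empty_cache()   # hand freed VRAM back to the allocator
    ```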
2022-10-31 | Fix #4035 for real now | Muhammad Rizqi Nur | -6/+7
2022-10-31 | Fix #4035 | Muhammad Rizqi Nur | -1/+1
2022-10-31 | Checkpoint cache by combination key of checkpoint and vae | Muhammad Rizqi Nur | -11/+16
2022-10-30 | Settings to select VAE | Muhammad Rizqi Nur | -21/+10
2022-10-29 | Merge pull request #3818 from jwatzman/master | AUTOMATIC1111 | -4/+7
    Reduce peak memory usage when changing models
2022-10-28 | Natural sorting for dropdown checkpoint list | Antonio | -2/+5
    Example:
        Before                     After
        11.ckpt                    11.ckpt
        ab.ckpt                    ab.ckpt
        ade_pablo_step_1000.ckpt   ade_pablo_step_500.ckpt
        ade_pablo_step_500.ckpt    ade_pablo_step_1000.ckpt
        ade_step_1000.ckpt         ade_step_500.ckpt
        ade_step_1500.ckpt         ade_step_1000.ckpt
        ade_step_2000.ckpt         ade_step_1500.ckpt
        ade_step_2500.ckpt         ade_step_2000.ckpt
        ade_step_3000.ckpt         ade_step_2500.ckpt
        ade_step_500.ckpt          ade_step_3000.ckpt
        atp_step_5500.ckpt         atp_step_5500.ckpt
        model1.ckpt                model1.ckpt
        model10.ckpt               model10.ckpt
        model1000.ckpt             model33.ckpt
        model33.ckpt               model50.ckpt
        model400.ckpt              model400.ckpt
        model50.ckpt               model1000.ckpt
        moo44.ckpt                 moo44.ckpt
        v1-4-pruned-emaonly.ckpt   v1-4-pruned-emaonly.ckpt
        v1-5-pruned-emaonly.ckpt   v1-5-pruned-emaonly.ckpt
        v1-5-pruned.ckpt           v1-5-pruned.ckpt
        v1-5-vae.ckpt              v1-5-vae.ckpt
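    Note: natural sorting splits names into text and digit runs and compares the digit runs as integers. A sketch of the key function:
    ```
    import re

    def natural_sort_key(name):
        # "model10.ckpt" -> ["model", 10, ".ckpt"], so numeric runs compare
        # as numbers rather than character by character
        return [int(part) if part.isdigit() else part.lower()
                for part in re.split(r"(\d+)", name)]

    names = ["model1000.ckpt", "model10.ckpt", "model1.ckpt", "model33.ckpt"]
    print(sorted(names, key=natural_sort_key))
    # ['model1.ckpt', 'model10.ckpt', 'model33.ckpt', 'model1000.ckpt']
    ```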
2022-10-27 | Reduce peak memory usage when changing models | Josh Watzman | -4/+7
    A few tweaks to reduce peak memory usage, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away. On my machine with 16GB of RAM, this change means I can typically change models, whereas before it would typically OOM.
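    Note: a sketch of the idea with a hypothetical loader: copy the state dict only when the checkpoint cache actually needs to keep it, and drop the reference as soon as the weights are in the model:
    ```
    import copy

    import torch

    def load_model_weights(model, checkpoint_file, cache=None):
        state_dict = torch.load(checkpoint_file, map_location="cpu")
        state_dict = state_dict.get("state_dict", state_dict)  # some ckpts nest it

        if cache is not None:
            # cache enabled: it needs its own copy that outlives this call
            cache[checkpoint_file] = copy.deepcopy(state_dict)

        model.load_state_dict(state_dict, strict=False)
        del state_dict  # release the only remaining copy before anything else runs
    ```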