path: root/modules/sd_models.py
Commit message (Author, Date, Files, Lines -/+)
* add support for transformers==4.25.1 (AUTOMATIC, 2023-01-10, 1 file, -2/+6)
|     add fallback for when quick model creation fails
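A minimal sketch of such a fallback, assuming `instantiate_from_config` from the bundled stable-diffusion code; `disable_torch_init` is a hypothetical helper (sketched under the weight-initialization entry below):

```
from ldm.util import instantiate_from_config  # ships with the stable-diffusion codebase

def create_sd_model(sd_config):
    try:
        # fast path: skip redundant weight initialization
        with disable_torch_init():  # hypothetical helper, sketched below
            return instantiate_from_config(sd_config.model)
    except Exception:
        print("Failed to create model quickly; falling back to normal creation")
        return instantiate_from_config(sd_config.model)
```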
* add more stuff to ignore when creating model from config (AUTOMATIC, 2023-01-10, 1 file, -4/+28)
|     prevent .vae.safetensors files from being listed as stable diffusion models
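A sketch of the filtering idea, assuming a plain directory scan (illustrative names, not the actual webui code):

```
import os

def list_sd_checkpoints(dirname):
    for filename in sorted(os.listdir(dirname)):
        if filename.endswith((".vae.ckpt", ".vae.safetensors")):
            continue  # VAE weights, not a full Stable Diffusion checkpoint
        if filename.endswith((".ckpt", ".safetensors")):
            yield os.path.join(dirname, filename)
```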
* disable torch weight initialization and CLIP downloading/reading checkpoint to speed up creating sd model from config (AUTOMATIC, 2023-01-10, 1 file, -2/+3)
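The point of the trick: checkpoint weights overwrite whatever random initialization torch performs at construction time, so that work can be skipped. A minimal sketch of one way to do it (not the exact webui code):

```
import contextlib
import torch

@contextlib.contextmanager
def disable_torch_init():
    """Temporarily replace common init functions with no-ops."""
    saved = (torch.nn.init.kaiming_uniform_, torch.nn.init.uniform_, torch.nn.init.normal_)
    noop = lambda tensor, *args, **kwargs: tensor
    torch.nn.init.kaiming_uniform_ = noop
    torch.nn.init.uniform_ = noop
    torch.nn.init.normal_ = noop
    try:
        yield
    finally:
        torch.nn.init.kaiming_uniform_, torch.nn.init.uniform_, torch.nn.init.normal_ = saved
```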
* allow model load if previous model failed (Vladimir Mandic, 2023-01-09, 1 file, -5/+10)
|
* use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything (AUTOMATIC, 2023-01-04, 1 file, -1/+1)
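`safetensors` can place tensors on a target device at load time, so the device string only needs to be passed through instead of being hard-coded. Sketch (the device value here is illustrative):

```
from safetensors.torch import load_file

device = "cuda:1"  # would come from the webui command line, not hard-coded "cuda:0"
state_dict = load_file("model.safetensors", device=device)
```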
* Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' (AUTOMATIC, 2023-01-04, 1 file, -1/+4)
|\
| * Attempting to solve slow loads for `safetensors`. (Nicolas Patry, 2022-12-27, 1 file, -1/+4)
| |     Fixes #5893
* | fix broken inpainting model (AUTOMATIC, 2023-01-04, 1 file, -3/+0)
| |
* | find configs for models at runtime rather than when starting (AUTOMATIC, 2023-01-04, 1 file, -13/+18)
| |
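A sketch of the lookup convention, under the assumption that a .yaml sitting next to the checkpoint takes priority over the default config:

```
import os

def find_checkpoint_config(checkpoint_path, default_config):
    config = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.exists(config):
        return config  # per-model config found next to the weights
    return default_config
```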
* | helpful error message when trying to load 2.0 without config (AUTOMATIC, 2023-01-04, 1 file, -8/+18)
| |     failing to load model weights from settings won't break generation for currently loaded model anymore
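One way such a check can work (an assumption about the detection, not necessarily the exact webui logic): SD 2.x checkpoints carry open_clip text-encoder keys that 1.x checkpoints lack, so a missing config can be diagnosed from the state dict alone:

```
SD2_MARKER_KEY = "cond_stage_model.model.transformer.resblocks.0.attn.in_proj_weight"

def check_config(state_dict, config_found):
    if SD2_MARKER_KEY in state_dict and not config_found:
        raise RuntimeError(
            "This looks like a Stable Diffusion 2.0 checkpoint; "
            "place the matching .yaml config file next to it and retry."
        )
```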
* | call script callbacks for reloaded model after loading embeddings (AUTOMATIC, 2023-01-03, 1 file, -2/+2)
| |
* | fix the issue with training on SD2.0 (AUTOMATIC, 2023-01-01, 1 file, -0/+2)
| |
* | validate textual inversion embeddings (Vladimir Mandic, 2022-12-31, 1 file, -0/+3)
|/
* fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -4/+4)
|
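For reference, flake8's F541 fires on f-strings that contain no placeholders; the fix is dropping the redundant prefix:

```
print(f"loading weights")  # F541: f-string without any placeholders
print("loading weights")   # fixed: plain string literal
```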
* Removed length check in sd_model at line 115 (linuxmobile ( リナックス ), 2022-12-24, 1 file, -3/+0)
|     Commit eba60a4 is what was causing this error; deleting the length check in sd_model starting at line 115 fixes it. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
* Merge pull request #5627 from deanpress/patch-1 (AUTOMATIC1111, 2022-12-24, 1 file, -0/+4)
|\      fix: fallback model_checkpoint if it's empty
| * fix: fallback model_checkpoint if it's empty (Dean van Dugteren, 2022-12-11, 1 file, -0/+4)
| |     This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
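The crash happens because the saved setting is no longer a valid string key into the checkpoint dict. A hedged sketch of the fallback idea (names follow the traceback; the exact fix may differ):

```
def select_checkpoint_title(saved_setting, checkpoints_list):
    """Return a usable checkpoint title, falling back when the setting is stale."""
    if isinstance(saved_setting, str) and saved_setting in checkpoints_list:
        return saved_setting
    if not checkpoints_list:
        return None  # no models installed at all
    return next(iter(checkpoints_list))  # first available checkpoint
```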
* | unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) (MrCheeze, 2022-12-11, 1 file, -1/+3)
| |
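Sketched against a plain dict of model parameters (the real code works on the parsed yaml config object):

```
def apply_use_ema_default(model_params: dict):
    # loading with use_ema=True never worked, so default it off when unset
    model_params.setdefault("use_ema", False)
```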
* | fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model (MrCheeze, 2022-12-10, 1 file, -0/+1)
|/
* Merge pull request #4841 from R-N/vae-fix-none (AUTOMATIC1111, 2022-12-10, 1 file, -0/+2)
|\      Fix None option of VAE selector
| * Merge branch 'a1111' into vae-fix-none (Muhammad Rizqi Nur, 2022-11-19, 1 file, -8/+2)
| |\
| * | Misc (Muhammad Rizqi Nur, 2022-11-19, 1 file, -0/+1)
| | |
| * | Fix base VAE caching being done after loading the VAE; also add a safeguard (Muhammad Rizqi Nur, 2022-11-19, 1 file, -0/+1)
| | |
* | | Depth2img model support (Jay Smith, 2022-12-09, 1 file, -0/+46)
| | |
* | | make it possible to save nai model using safetensors (AUTOMATIC, 2022-11-28, 1 file, -2/+2)
| | |
* | | add safetensors support for model merging #4869 (AUTOMATIC, 2022-11-27, 1 file, -11/+15)
| | |
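A sketch of extension-based dispatch when writing merged weights (illustrative helper, not the exact merge code):

```
import torch
import safetensors.torch

def save_state_dict(state_dict, path):
    if path.endswith(".safetensors"):
        # safetensors stores a flat name -> tensor mapping
        safetensors.torch.save_file(state_dict, path)
    else:
        torch.save({"state_dict": state_dict}, path)
```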
* | | add safetensors to requirements (AUTOMATIC, 2022-11-27, 1 file, -6/+5)
| | |
* | | Merge pull request #4930 from Narsil/allow_to_load_safetensors_file (AUTOMATIC1111, 2022-11-27, 1 file, -2/+9)
|\ \ \      Supporting `*.safetensors` format.
| * | | Supporting `*.safetensors` format. (Nicolas Patry, 2022-11-21, 1 file, -2/+9)
| | | |     If a model file exists with extension `.safetensors` then we can load it more safely than with PyTorch weights.
| |/ /
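A sketch of loading by extension (hedged; webui's actual reader evolved across the commits above):

```
import torch
import safetensors.torch

def read_state_dict(checkpoint_file, map_location="cpu"):
    if checkpoint_file.endswith(".safetensors"):
        # no pickle involved: safetensors files cannot execute code on load
        return safetensors.torch.load_file(checkpoint_file, device=map_location)
    pl_sd = torch.load(checkpoint_file, map_location=map_location)
    return pl_sd.get("state_dict", pl_sd)  # some checkpoints nest the weights
```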
* | / no-half support for SD 2.0 (MrCheeze, 2022-11-26, 1 file, -0/+3)
| |/
|/|
* | Remove no longer necessary parts and add vae_file safeguard (Muhammad Rizqi Nur, 2022-11-19, 1 file, -8/+2)
|/
* restore #4035 behavior (cluder, 2022-11-09, 1 file, -1/+1)
|     - if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
* - do not use ckpt cache, if disabled (cluder, 2022-11-09, 1 file, -10/+17)
|     - cache model after it has been loaded from file
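A sketch of the cache behavior these two entries describe, using an ordered dict as an LRU (illustrative names; sized by the checkpoint-cache setting):

```
import collections

checkpoints_loaded = collections.OrderedDict()

def cache_checkpoint(key, state_dict, cache_size):
    if cache_size <= 0:
        return  # cache disabled: keep nothing in RAM
    checkpoints_loaded[key] = state_dict             # cache after loading from file
    while len(checkpoints_loaded) > cache_size + 1:  # current model + N more
        checkpoints_loaded.popitem(last=False)       # evict the oldest entry
```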
* fix one of previous merges breaking the program (AUTOMATIC, 2022-11-04, 1 file, -0/+2)
|
* Merge branch 'master' into fix-ckpt-cache (AUTOMATIC1111, 2022-11-04, 1 file, -2/+3)
|\
| * fix: loading models without vae from cache (digburn, 2022-11-04, 1 file, -2/+3)
| |
* | Merge branch 'master' into fix-ckpt-cache (Muhammad Rizqi Nur, 2022-11-02, 1 file, -18/+28)
|\|
| * fix #3986 breaking --no-half-vae (AUTOMATIC, 2022-11-02, 1 file, -0/+9)
| |
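For context, `--no-half-vae` keeps the VAE in fp32 while the rest of the model runs in fp16. A minimal sketch, assuming an ldm `LatentDiffusion` model whose VAE lives in `first_stage_model`:

```
import torch

def apply_half(model: torch.nn.Module, no_half_vae: bool):
    model.half()  # run the UNet and text encoder in fp16
    if no_half_vae:
        model.first_stage_model.float()  # keep the VAE in fp32
    return model
```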
| * Reload VAE without reloading sd checkpoint (Muhammad Rizqi Nur, 2022-11-02, 1 file, -8/+7)
| |
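A hedged sketch of swapping just the VAE on the live model instead of reloading the whole checkpoint (the key filtering is an assumption; the real code also deals with caching):

```
import torch

def load_vae(model, vae_file):
    vae_ckpt = torch.load(vae_file, map_location="cpu")
    vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items()
                if not k.startswith("loss.")}  # skip training-only tensors
    model.first_stage_model.load_state_dict(vae_dict)
```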
| * Merge branch 'master' into vae-picker (Muhammad Rizqi Nur, 2022-11-01, 1 file, -1/+13)
| |\
| | * Unload sd_model before loading the other (Jairo Correa, 2022-11-01, 1 file, -1/+13)
| | |
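A sketch of the unload step (hedged; the actual commit also reworks how the current model is tracked):

```
import gc
import torch

def unload_model(sd_model):
    sd_model.to("cpu")  # move the weights out of VRAM first
    gc.collect()        # drop lingering Python-side references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached VRAM to the driver
```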
| * | Checkpoint cache by combination key of checkpoint and vae (Muhammad Rizqi Nur, 2022-10-31, 1 file, -11/+16)
| | |
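The idea, sketched: the same checkpoint paired with a different VAE must not hit the same cache entry, so the key becomes a tuple (illustrative names):

```
from typing import Optional

def cache_key(checkpoint_file: str, vae_file: Optional[str]) -> tuple:
    # (checkpoint, vae) pairs are distinct cache entries
    return (checkpoint_file, vae_file or "none")
```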
| * | Settings to select VAE (Muhammad Rizqi Nur, 2022-10-30, 1 file, -21/+10)
| |/
* | Fix #4035 for real now (Muhammad Rizqi Nur, 2022-10-31, 1 file, -6/+7)
| |
* | Fix #4035 (Muhammad Rizqi Nur, 2022-10-31, 1 file, -1/+1)
|/
* Merge pull request #3818 from jwatzman/master (AUTOMATIC1111, 2022-10-29, 1 file, -4/+7)
|\      Reduce peak memory usage when changing models
| * Reduce peak memory usage when changing models (Josh Watzman, 2022-10-27, 1 file, -4/+7)
| |     A few tweaks to reduce peak memory usage, the biggest being that if we
| |     aren't using the checkpoint cache, we shouldn't duplicate the model
| |     state dict just to immediately throw it away. On my machine with 16GB
| |     of RAM, this change means I can typically change models, whereas
| |     before it would typically OOM.
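Sketched in code, the biggest tweak amounts to not keeping a second reference to the weights when caching is off (hypothetical flow, reusing names from the sketches above):

```
def load_model_weights(model, checkpoint_file, use_cache: bool):
    state_dict = read_state_dict(checkpoint_file)  # sketched earlier in this log
    if use_cache:
        checkpoints_loaded[checkpoint_file] = state_dict  # cached copy stays alive
    model.load_state_dict(state_dict, strict=False)
    del state_dict  # without the cache, this was the only extra reference
```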
* | Natural sorting for dropdown checkpoint list (Antonio, 2022-10-28, 1 file, -2/+5)
|/      Example:
|
|       Before                      After
|       11.ckpt                     11.ckpt
|       ab.ckpt                     ab.ckpt
|       ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
|       ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
|       ade_step_1000.ckpt          ade_step_500.ckpt
|       ade_step_1500.ckpt          ade_step_1000.ckpt
|       ade_step_2000.ckpt          ade_step_1500.ckpt
|       ade_step_2500.ckpt          ade_step_2000.ckpt
|       ade_step_3000.ckpt          ade_step_2500.ckpt
|       ade_step_500.ckpt           ade_step_3000.ckpt
|       atp_step_5500.ckpt          atp_step_5500.ckpt
|       model1.ckpt                 model1.ckpt
|       model10.ckpt                model10.ckpt
|       model1000.ckpt              model33.ckpt
|       model33.ckpt                model50.ckpt
|       model400.ckpt               model400.ckpt
|       model50.ckpt                model1000.ckpt
|       moo44.ckpt                  moo44.ckpt
|       v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
|       v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
|       v1-5-pruned.ckpt            v1-5-pruned.ckpt
|       v1-5-vae.ckpt               v1-5-vae.ckpt
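The standard way to get this ordering is a sort key that splits out digit runs and compares them numerically; a sketch (an assumption that this matches the behavior shown above):

```
import re

def natural_sort_key(name):
    # split "model1000.ckpt" into ["model", "1000", ".ckpt"] so digit runs
    # compare as integers instead of character by character
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

print(sorted(["model1000.ckpt", "model33.ckpt", "model50.ckpt"], key=natural_sort_key))
# ['model33.ckpt', 'model50.ckpt', 'model1000.ckpt']
```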
* call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that (AUTOMATIC, 2022-10-22, 1 file, -1/+2)
|
* fix aesthetic gradients doing nothing after loading a different model (MrCheeze, 2022-10-22, 1 file, -2/+2)
|