path: root/modules/sd_models.py
Commit message · Author · Date · Files · Lines changed
...
* change hypernets to use sha256 hashes (AUTOMATIC, 2023-01-14, 1 file, -1/+1)
|
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -42/+74)
|
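The sha256 switch above boils down to hashing the full checkpoint file contents. A minimal sketch of the technique (the helper name is illustrative, not the repository's actual function):

```python
import hashlib

def sha256_of_file(path, blksize=1024 * 1024):
    # Read the checkpoint in fixed-size chunks so multi-gigabyte
    # model files never have to fit in memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(blksize), b""):
            h.update(chunk)
    return h.hexdigest()
```

A full sha256 digest also allows showing users a short, stable prefix of the hash, where the old scheme hashed only a small slice of the file.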
* fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType (AUTOMATIC, 2023-01-11, 1 file, -0/+1)
|
* possible fix for fallback for fast model creation from config, attempt 2 (AUTOMATIC, 2023-01-11, 1 file, -0/+1)
|
* possible fix for fallback for fast model creation from config (AUTOMATIC, 2023-01-11, 1 file, -0/+3)
|
* add support for transformers==4.25.1 (AUTOMATIC, 2023-01-10, 1 file, -2/+6)
|     add fallback for when quick model creation fails
* add more stuff to ignore when creating model from config (AUTOMATIC, 2023-01-10, 1 file, -4/+28)
|     prevent .vae.safetensors files from being listed as stable diffusion models
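Keeping standalone VAE files out of the model listing amounts to a filename check before a file is treated as a checkpoint. A hypothetical sketch of that filter (not the repository's code):

```python
def is_checkpoint_file(filename):
    # A file like "model.vae.safetensors" is a standalone VAE,
    # not a full stable diffusion checkpoint; skip it when listing models.
    if filename.endswith(".vae.safetensors"):
        return False
    return filename.endswith((".safetensors", ".ckpt"))
```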
* disable torch weight initialization and CLIP downloading/reading checkpoint to speed up creating sd model from config (AUTOMATIC, 2023-01-10, 1 file, -2/+3)
|
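Skipping weight initialization is a monkeypatching trick: the expensive init routines are temporarily replaced with no-ops while the model skeleton is built, since the weights are overwritten by the checkpoint anyway. A framework-agnostic sketch of the pattern (names illustrative, not the webui's actual implementation):

```python
from contextlib import contextmanager

@contextmanager
def patched(obj, name, replacement):
    # Temporarily replace an attribute and restore it on exit --
    # the same idea used to turn weight-init calls into no-ops.
    original = getattr(obj, name)
    setattr(obj, name, replacement)
    try:
        yield
    finally:
        setattr(obj, name, original)
```

Inside the `with` block the patched function runs instead of the original; on exit the original is restored even if an exception occurs.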
* allow model load if previous model failed (Vladimir Mandic, 2023-01-09, 1 file, -5/+10)
|
* use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything (AUTOMATIC, 2023-01-04, 1 file, -1/+1)
|
* Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' (AUTOMATIC, 2023-01-04, 1 file, -1/+4)
|\
| * Attempting to solve slow loads for `safetensors`. (Nicolas Patry, 2022-12-27, 1 file, -1/+4)
| |     Fixes #5893
* | fix broken inpainting model (AUTOMATIC, 2023-01-04, 1 file, -3/+0)
| |
* | find configs for models at runtime rather than when starting (AUTOMATIC, 2023-01-04, 1 file, -13/+18)
| |
* | helpful error message when trying to load 2.0 without config (AUTOMATIC, 2023-01-04, 1 file, -8/+18)
| |     failing to load model weights from settings won't break generation for currently loaded model anymore
* | call script callbacks for reloaded model after loading embeddings (AUTOMATIC, 2023-01-03, 1 file, -2/+2)
| |
* | fix the issue with training on SD2.0 (AUTOMATIC, 2023-01-01, 1 file, -0/+2)
| |
* | validate textual inversion embeddings (Vladimir Mandic, 2022-12-31, 1 file, -0/+3)
|/
* fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -4/+4)
|
* Removed length check in sd_model at line 115 (linuxmobile ( リナックス ), 2022-12-24, 1 file, -3/+0)
|     Commit eba60a4 is what causes this error; delete the length check in sd_model starting at line 115 and it's fine. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
* Merge pull request #5627 from deanpress/patch-1 (AUTOMATIC1111, 2022-12-24, 1 file, -0/+4)
|\      fix: fallback model_checkpoint if it's empty
| * fix: fallback model_checkpoint if it's empty (Dean van Dugteren, 2022-12-11, 1 file, -0/+4)
| |
| |     This fixes the following error when SD attempts to start with a deleted checkpoint:
| |     ```
| |     Traceback (most recent call last):
| |       File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
| |         start()
| |       File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
| |         webui.webui()
| |       File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
| |         initialize()
| |       File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
| |         modules.sd_models.load_model()
| |       File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
| |         checkpoint_info = checkpoint_info or select_checkpoint()
| |       File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
| |         checkpoint_info = checkpoints_list.get(model_checkpoint, None)
| |     TypeError: unhashable type: 'list'
| |     ```
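The traceback shows the saved setting arriving as a list, which cannot be used as a dict key. The fix amounts to guarding the lookup and falling back to a known checkpoint; the function below is a simplified stand-in for the repository's `select_checkpoint`, assuming `checkpoints_list` is a dict keyed by checkpoint title:

```python
def select_checkpoint(checkpoints_list, model_checkpoint):
    # A deleted checkpoint can leave the saved setting empty or as a
    # non-string value (here, a list), which is unhashable as a dict
    # key.  Guard the lookup and fall back to any known checkpoint.
    if not isinstance(model_checkpoint, str):
        model_checkpoint = next(iter(checkpoints_list), None)
    return checkpoints_list.get(model_checkpoint)
```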
* | unconditionally set use_ema=False if value not specified (MrCheeze, 2022-12-11, 1 file, -1/+3)
| |     (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False)
* | fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model (MrCheeze, 2022-12-10, 1 file, -0/+1)
|/
* Merge pull request #4841 from R-N/vae-fix-none (AUTOMATIC1111, 2022-12-10, 1 file, -0/+2)
|\      Fix None option of VAE selector
| * Merge branch 'a1111' into vae-fix-none (Muhammad Rizqi Nur, 2022-11-19, 1 file, -8/+2)
| |\
| * | Misc (Muhammad Rizqi Nur, 2022-11-19, 1 file, -0/+1)
| | |
| * | Fix: base VAE caching was done after loading VAE; also add safeguard (Muhammad Rizqi Nur, 2022-11-19, 1 file, -0/+1)
| | |
* | | Depth2img model support (Jay Smith, 2022-12-09, 1 file, -0/+46)
| | |
* | | make it possible to save nai model using safetensors (AUTOMATIC, 2022-11-28, 1 file, -2/+2)
| | |
* | | add safetensors support for model merging #4869 (AUTOMATIC, 2022-11-27, 1 file, -11/+15)
| | |
* | | add safetensors to requirements (AUTOMATIC, 2022-11-27, 1 file, -6/+5)
| | |
* | | Merge pull request #4930 from Narsil/allow_to_load_safetensors_file (AUTOMATIC1111, 2022-11-27, 1 file, -2/+9)
|\ \ \      Supporting `*.safetensors` format.
| * | | Supporting `*.safetensors` format. (Nicolas Patry, 2022-11-21, 1 file, -2/+9)
| |/ /      If a model file exists with extension `.safetensors` then we can load it more safely than with PyTorch weights.
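Supporting both formats comes down to dispatching on the file extension. To keep this sketch dependency-free, the loader callables are injected; in the real code they would be along the lines of `safetensors.torch.load_file` and `torch.load` (an assumption about the implementation, not a quote of it):

```python
import os

def read_state_dict(path, loaders):
    # Pick a loader by extension: ".safetensors" files can be read
    # without unpickling arbitrary Python objects, unlike ".ckpt"
    # files, which go through pickle.
    _, ext = os.path.splitext(path)
    if ext.lower() == ".safetensors":
        return loaders["safetensors"](path)
    return loaders["torch"](path)
```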
* | / no-half support for SD 2.0 (MrCheeze, 2022-11-26, 1 file, -0/+3)
| |/
|/|
* | Remove no longer necessary parts and add vae_file safeguard (Muhammad Rizqi Nur, 2022-11-19, 1 file, -8/+2)
|/
* restore #4035 behavior (cluder, 2022-11-09, 1 file, -1/+1)
|     - if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
* - do not use ckpt cache, if disabled (cluder, 2022-11-09, 1 file, -10/+17)
|     - cache model after it has been loaded from file
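The cache policy described in these two commits (a setting of 1 keeps the current model plus one more; 0 disables caching) can be sketched with an `OrderedDict`. The function name and structure are illustrative, not the repository's code:

```python
from collections import OrderedDict

def cache_model(cache, key, model, cache_size):
    # cache_size == 0 disables the cache entirely; otherwise keep the
    # current model plus up to cache_size previous ones, evicting the
    # oldest entry (insertion order) first.
    if cache_size == 0:
        return
    cache[key] = model
    cache.move_to_end(key)
    while len(cache) > cache_size + 1:  # current + cache_size more
        cache.popitem(last=False)
```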
* fix one of previous merges breaking the program (AUTOMATIC, 2022-11-04, 1 file, -0/+2)
|
* Merge branch 'master' into fix-ckpt-cache (AUTOMATIC1111, 2022-11-04, 1 file, -2/+3)
|\
| * fix: loading models without vae from cache (digburn, 2022-11-04, 1 file, -2/+3)
| |
* | Merge branch 'master' into fix-ckpt-cache (Muhammad Rizqi Nur, 2022-11-02, 1 file, -18/+28)
|\|
| * fix #3986 breaking --no-half-vae (AUTOMATIC, 2022-11-02, 1 file, -0/+9)
| |
| * Reload VAE without reloading sd checkpoint (Muhammad Rizqi Nur, 2022-11-02, 1 file, -8/+7)
| |
| * Merge branch 'master' into vae-picker (Muhammad Rizqi Nur, 2022-11-01, 1 file, -1/+13)
| |\
| | * Unload sd_model before loading the other (Jairo Correa, 2022-11-01, 1 file, -1/+13)
| | |
| * | Checkpoint cache by combination key of checkpoint and vae (Muhammad Rizqi Nur, 2022-10-31, 1 file, -11/+16)
| | |
| * | Settings to select VAE (Muhammad Rizqi Nur, 2022-10-30, 1 file, -21/+10)
| |/
* | Fix #4035 for real now (Muhammad Rizqi Nur, 2022-10-31, 1 file, -6/+7)
| |
* | Fix #4035 (Muhammad Rizqi Nur, 2022-10-31, 1 file, -1/+1)
|/