path: root/modules/sd_models.py
Commit message (Author, Date, Files changed, Lines -/+)
* Merge pull request #8780 from Brawlence/master (AUTOMATIC1111, 2023-03-25, 1 file, -1/+21)
|\     Unload and re-load checkpoint to VRAM on request (API & Manual)
| * Unload checkpoints on Request (Φφ, 2023-03-21, 1 file, -1/+21)
| |    …to free VRAM. New Action buttons in the settings to manually free and reload checkpoints, essentially juggling models between RAM and VRAM.
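Conceptually, the RAM/VRAM juggling described above is just moving the weights between devices and letting CUDA reclaim the freed memory. A minimal sketch, assuming a PyTorch model object; the function names are illustrative, not the webui's actual API:

```python
import torch

def unload_to_ram(sd_model):
    # Park the weights in system RAM and let CUDA release the freed VRAM.
    sd_model.to("cpu")
    torch.cuda.empty_cache()

def reload_to_vram(sd_model, device="cuda"):
    # Bring the weights back onto the GPU for inference.
    sd_model.to(device)
    return sd_model
```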
* | fix variable typo (carat-johyun, 2023-03-23, 1 file, -2/+2)
|/
* fix an error loading Lora with empty values in metadata (AUTOMATIC, 2023-03-14, 1 file, -1/+1)
|
* Add view metadata button for Lora cards. (AUTOMATIC, 2023-03-14, 1 file, -0/+24)
|
* when exists (w-e-w, 2023-02-19, 1 file, -1/+1)
|
* fix auto sd download issue (w-e-w, 2023-02-19, 1 file, -1/+7)
|
* Add ".vae.ckpt" to ext_blacklistmissionfloyd2023-02-161-1/+1
|
* Download model if none are found (missionfloyd, 2023-02-15, 1 file, -1/+1)
|
* make it possible to load SD1 checkpoints without CLIP (AUTOMATIC, 2023-02-05, 1 file, -1/+5)
|
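One common way to tolerate a checkpoint that omits a submodule's weights (here, the CLIP text encoder) is non-strict state-dict loading; whether this matches the commit's exact approach is an assumption, so treat it as a sketch of the technique:

```python
def load_weights_tolerant(model, state_dict):
    # strict=False lets checkpoints with missing keys load; the absent
    # parameters (e.g. CLIP weights) keep their current values.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    if missing:
        print(f"{len(missing)} keys missing from checkpoint")
    return model
```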
* fix issue with switching back to checkpoint that had its checksum calculated during runtime, mentioned in #7506 (AUTOMATIC, 2023-02-04, 1 file, -2/+3)
|
* Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint (AUTOMATIC1111, 2023-02-04, 1 file, -1/+1)
|\     Update error message WRT missing checkpoint file
| * Update error message WRT missing checkpoint file (Cody Brownstein, 2023-02-01, 1 file, -1/+1)
| |    The Safetensors format is also supported.
* | add --no-hashing (AUTOMATIC, 2023-02-04, 1 file, -0/+3)
|/
* support for searching subdirectory names for extra networks (AUTOMATIC, 2023-01-29, 1 file, -0/+1)
|
* fixed a bug where after switching to a checkpoint with unknown hash, you'd get empty space instead of checkpoint name in UI (AUTOMATIC, 2023-01-28, 1 file, -3/+1)
|    fixed a bug where if you update a selected checkpoint on disk and then restart the program, a different checkpoint loads, but the name shown is for the old one.
* add data-dir flag and set all user data directories based on it (Max Audron, 2023-01-27, 1 file, -3/+3)
|
* support detecting midas model (AUTOMATIC, 2023-01-27, 1 file, -5/+5)
|    fix broken api for checkpoint list
* remove the need to place configs near models (AUTOMATIC, 2023-01-27, 1 file, -115/+113)
|
* Merge pull request #6510 from brkirch/unet16-upcast-precision (AUTOMATIC1111, 2023-01-25, 1 file, -0/+10)
|\     Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half
| * Add option for float32 sampling with float16 UNet (brkirch, 2023-01-25, 1 file, -0/+10)
| |    This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, and some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS), so in sd_models.py model.depth_model is removed for model.half().
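The depth_model workaround mentioned above amounts to a detach/restore around the half-precision conversion. A sketch of the idea; `model` is a hypothetical SD model object with an optional depth estimator attached:

```python
def half_except_depth(model):
    # Keep the depth estimator out of the float16 conversion; it misbehaves
    # in half precision on some backends (notably MPS).
    depth_model = getattr(model, "depth_model", None)
    if depth_model is not None:
        model.depth_model = None          # detach so model.half() skips it
    model.half()                          # convert the rest to float16
    if depth_model is not None:
        model.depth_model = depth_model   # reattach, still in float32
    return model
```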
* | Add instruct-pix2pix hijack (Kyle, 2023-01-25, 1 file, -1/+11)
|/     Allows loading instruct-pix2pix models via the same method as inpainting models in sd_models.py and sd_hijack_ip2p.py. Adds ddpm_edit.py, necessary for instruct-pix2pix.
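Loading instruct-pix2pix "via the same method as inpainting models" comes down to telling the variants apart from the checkpoint itself. A plausible sketch based on the UNet's first convolution; the state-dict key follows the standard SD layout, but treat the exact mapping as an assumption:

```python
def detect_model_variant(state_dict):
    # Standard SD UNets take 4 latent channels; inpainting adds a mask plus
    # masked-image latents (9); instruct-pix2pix concatenates the
    # conditioning image's latents instead (8).
    weight = state_dict["model.diffusion_model.input_blocks.0.0.weight"]
    in_channels = weight.shape[1]
    return {4: "standard", 8: "instruct-pix2pix", 9: "inpainting"}.get(in_channels, "unknown")
```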
* bring back short hashes to sd checkpoint selection (AUTOMATIC, 2023-01-19, 1 file, -4/+11)
|
* fix bug with "Ignore selected VAE for..." option completely disabling VAE ↵AUTOMATIC2023-01-141-3/+3
| | | | | | election rework VAE resolving code to be more simple
* load hashes from cache for checkpoints that have them (AUTOMATIC, 2023-01-14, 1 file, -3/+6)
|    add checkpoint hash to footer
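A hash cache of the kind this commit reads from can be as simple as a JSON file keyed by checkpoint title. A minimal sketch; the cache path and layout are assumptions, not the webui's exact format:

```python
import json
import os

CACHE_FILE = "cache.json"  # assumed path; the webui keeps a similar JSON cache

def cached_sha256(title, compute_hash):
    # Reuse a previously computed checkpoint hash when present; otherwise
    # compute it once and persist it for the next start.
    cache = {}
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cache = json.load(f)
    hashes = cache.setdefault("hashes", {})
    if title not in hashes:
        hashes[title] = compute_hash()
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f, indent=4)
    return hashes[title]
```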
* update key to use with checkpoints' sha256 in cache (AUTOMATIC, 2023-01-14, 1 file, -1/+1)
|
* change hypernets to use sha256 hashes (AUTOMATIC, 2023-01-14, 1 file, -1/+1)
|
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -42/+74)
|
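Computing a full-file SHA-256 touches every byte of a multi-gigabyte checkpoint, which is why the hashes above are cached and why a --no-hashing flag appears further up the log. A minimal chunked sketch:

```python
import hashlib

def sha256_of_file(path, block_size=1024 * 1024):
    # Stream the file in 1 MiB chunks so large checkpoints don't have to
    # fit in memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            h.update(chunk)
    return h.hexdigest()
```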
* fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType (AUTOMATIC, 2023-01-11, 1 file, -0/+1)
|
* possible fix for fallback for fast model creation from config, attempt 2 (AUTOMATIC, 2023-01-11, 1 file, -0/+1)
|
* possible fix for fallback for fast model creation from config (AUTOMATIC, 2023-01-11, 1 file, -0/+3)
|
* add support for transformers==4.25.1 (AUTOMATIC, 2023-01-10, 1 file, -2/+6)
|    add fallback for when quick model creation fails
* add more stuff to ignore when creating model from config (AUTOMATIC, 2023-01-10, 1 file, -4/+28)
|    prevent .vae.safetensors files from being listed as stable diffusion models
* disable torch weight initialization and CLIP downloading/reading checkpoint to speed up creating sd model from config (AUTOMATIC, 2023-01-10, 1 file, -2/+3)
|
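The weight-initialization part of that speed-up works because every parameter is about to be overwritten by the checkpoint anyway, so the random init is wasted work. A hedged sketch of the idea; the webui's real implementation patches more entry points than this:

```python
from contextlib import contextmanager

import torch

@contextmanager
def disable_weight_init():
    # Temporarily replace a few torch.nn.init routines with no-ops; the
    # subsequent load_state_dict fills in real values from the checkpoint.
    patched = ["kaiming_uniform_", "uniform_", "normal_"]
    saved = {name: getattr(torch.nn.init, name) for name in patched}
    for name in patched:
        setattr(torch.nn.init, name, lambda tensor, *args, **kwargs: tensor)
    try:
        yield
    finally:
        for name, fn in saved.items():
            setattr(torch.nn.init, name, fn)
```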
* allow model load if previous model failed (Vladimir Mandic, 2023-01-09, 1 file, -5/+10)
|
* use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything (AUTOMATIC, 2023-01-04, 1 file, -1/+1)
|
* Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' (AUTOMATIC, 2023-01-04, 1 file, -1/+4)
|\
| * Attempting to solve slow loads for `safetensors`. (Nicolas Patry, 2022-12-27, 1 file, -1/+4)
| |    Fixes #5893
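The slow-load issue centered on how safetensors files are read at startup. A minimal sketch of loading tensors directly onto the target device, which the "cuda:0" follow-up commit above then parameterizes; treat this as an approximation of the PR rather than its exact diff:

```python
import torch
import safetensors.torch

def load_safetensors_state_dict(path, device=None):
    # Load tensors straight onto the destination device instead of staging
    # them on the CPU first and copying them over afterwards.
    if device is None:
        device = "cuda" if torch.cuda.is_available() else "cpu"
    return safetensors.torch.load_file(path, device=device)
```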
* | fix broken inpainting model (AUTOMATIC, 2023-01-04, 1 file, -3/+0)
| |
* | find configs for models at runtime rather than when starting (AUTOMATIC, 2023-01-04, 1 file, -13/+18)
| |
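Runtime config discovery usually amounts to looking for a .yaml next to the checkpoint at load time instead of snapshotting configs at startup. A minimal sketch, with the fallback name as an assumption:

```python
import os

def find_checkpoint_config(checkpoint_path, fallback_config="v1-inference.yaml"):
    # Prefer a config sitting next to the checkpoint (model.ckpt -> model.yaml);
    # otherwise fall back to the default SD config.
    config = os.path.splitext(checkpoint_path)[0] + ".yaml"
    return config if os.path.exists(config) else fallback_config
```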
* | helpful error message when trying to load 2.0 without config (AUTOMATIC, 2023-01-04, 1 file, -8/+18)
| |    failing to load model weights from settings won't break generation for currently loaded model anymore
* | call script callbacks for reloaded model after loading embeddings (AUTOMATIC, 2023-01-03, 1 file, -2/+2)
| |
* | fix the issue with training on SD2.0 (AUTOMATIC, 2023-01-01, 1 file, -0/+2)
| |
* | validate textual inversion embeddings (Vladimir Mandic, 2022-12-31, 1 file, -0/+3)
|/
* fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -4/+4)
|
* Removed length check in sd_model at line 115 (linuxmobile ( リナックス ), 2022-12-24, 1 file, -3/+0)
|    Commit eba60a4 is what is causing this error; delete the length check in sd_model starting at line 115 and it's fine. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
* Merge pull request #5627 from deanpress/patch-1 (AUTOMATIC1111, 2022-12-24, 1 file, -0/+4)
|\     fix: fallback model_checkpoint if it's empty
| * fix: fallback model_checkpoint if it's empty (Dean van Dugteren, 2022-12-11, 1 file, -0/+4)
| |    This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
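The crash comes from the persisted setting arriving as a list rather than a title string, hence the unhashable-key TypeError. A hedged sketch of the kind of guard the fix adds; the function name and fallback choice are illustrative:

```python
def select_checkpoint_safe(model_checkpoint, checkpoints_list):
    # The saved setting can come back as an empty list instead of a title
    # string; fall back to the first known checkpoint rather than crashing.
    if isinstance(model_checkpoint, list) or not model_checkpoint:
        titles = sorted(checkpoints_list)
        model_checkpoint = titles[0] if titles else None
    return checkpoints_list.get(model_checkpoint, None)
```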
* | unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) (MrCheeze, 2022-12-11, 1 file, -1/+3)
| |
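In config terms the change above is a one-line default. A sketch assuming the parsed model YAML behaves like a nested mapping; attribute-style OmegaConf access would look slightly different:

```python
def apply_use_ema_default(sd_config):
    # `sd_config` stands in for the parsed model YAML. Default use_ema to
    # False when the config omits it, since True never worked here anyway.
    params = sd_config["model"]["params"]
    if "use_ema" not in params:
        params["use_ema"] = False
    return sd_config
```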
* | fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model (MrCheeze, 2022-12-10, 1 file, -0/+1)
|/