path: root/modules/sd_models.py
Commit message (Author, Date, Files changed, Lines -/+)
* Remove no longer necessary parts and add vae_file safeguard (Muhammad Rizqi Nur, 2022-11-19, 1 file, -8/+2)
|
* restore #4035 behavior (cluder, 2022-11-09, 1 file, -1/+1)
|   - if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
* - do not use ckpt cache, if disabled (cluder, 2022-11-09, 1 file, -10/+17)
|   - cache model after it has been loaded from file
* fix one of previous merges breaking the program (AUTOMATIC, 2022-11-04, 1 file, -0/+2)
|
* Merge branch 'master' into fix-ckpt-cache (AUTOMATIC1111, 2022-11-04, 1 file, -2/+3)
|\
| * fix: loading models without vae from cache (digburn, 2022-11-04, 1 file, -2/+3)
| |
* | Merge branch 'master' into fix-ckpt-cache (Muhammad Rizqi Nur, 2022-11-02, 1 file, -18/+28)
|\|
| * fix #3986 breaking --no-half-vae (AUTOMATIC, 2022-11-02, 1 file, -0/+9)
| |
| * Reload VAE without reloading sd checkpoint (Muhammad Rizqi Nur, 2022-11-02, 1 file, -8/+7)
| |
| * Merge branch 'master' into vae-picker (Muhammad Rizqi Nur, 2022-11-01, 1 file, -1/+13)
| |\
| | * Unload sd_model before loading the other (Jairo Correa, 2022-11-01, 1 file, -1/+13)
| | |
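(Unloading before loading: a minimal sketch of the idea behind the "Unload sd_model before loading the other" change, assuming a PyTorch model object. The holder/function names are illustrative, not the exact webui code.)

    import gc
    import torch

    def unload_model(holder):
        """Release the model referenced by holder.sd_model before a new
        checkpoint is loaded (illustrative sketch)."""
        if holder.sd_model is None:
            return
        holder.sd_model.to("cpu")        # move weights out of VRAM first
        holder.sd_model = None           # drop the reference so GC can free it
        gc.collect()                     # reclaim host memory
        if torch.cuda.is_available():
            torch.cuda.empty_cache()     # hand freed VRAM back to the driver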
| * | Checkpoint cache by combination key of checkpoint and vae (Muhammad Rizqi Nur, 2022-10-31, 1 file, -11/+16)
| | |
| * | Settings to select VAE (Muhammad Rizqi Nur, 2022-10-30, 1 file, -21/+10)
| |/
* | Fix #4035 for real now (Muhammad Rizqi Nur, 2022-10-31, 1 file, -6/+7)
| |
* | Fix #4035 (Muhammad Rizqi Nur, 2022-10-31, 1 file, -1/+1)
|/
* Merge pull request #3818 from jwatzman/master (AUTOMATIC1111, 2022-10-29, 1 file, -4/+7)
|\   Reduce peak memory usage when changing models
| * Reduce peak memory usage when changing models (Josh Watzman, 2022-10-27, 1 file, -4/+7)
| |   A few tweaks to reduce peak memory usage, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away. On my machine with 16GB of RAM, this change means I can typically change models, whereas before it would typically OOM.
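(The pattern described above, sketched in Python: skip the extra state-dict copy entirely unless the checkpoint cache is actually in use. Names and structure are illustrative, not the exact webui code.)

    import copy
    import torch

    def load_model_weights(model, checkpoint_path, cache, cache_enabled):
        """Load checkpoint weights without duplicating the state dict
        when the checkpoint cache is disabled (sketch)."""
        state_dict = torch.load(checkpoint_path, map_location="cpu")
        state_dict = state_dict.get("state_dict", state_dict)

        if cache_enabled:
            # Only pay for a second copy of the weights when they will
            # actually be kept around for fast switching later.
            cache[checkpoint_path] = copy.deepcopy(state_dict)

        model.load_state_dict(state_dict, strict=False)
        del state_dict  # release the temporary reference as soon as possible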
* | Natural sorting for dropdown checkpoint list (Antonio, 2022-10-28, 1 file, -2/+5)
|/    Example:
|       Before                      After
|       11.ckpt                     11.ckpt
|       ab.ckpt                     ab.ckpt
|       ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
|       ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
|       ade_step_1000.ckpt          ade_step_500.ckpt
|       ade_step_1500.ckpt          ade_step_1000.ckpt
|       ade_step_2000.ckpt          ade_step_1500.ckpt
|       ade_step_2500.ckpt          ade_step_2000.ckpt
|       ade_step_3000.ckpt          ade_step_2500.ckpt
|       ade_step_500.ckpt           ade_step_3000.ckpt
|       atp_step_5500.ckpt          atp_step_5500.ckpt
|       model1.ckpt                 model1.ckpt
|       model10.ckpt                model10.ckpt
|       model1000.ckpt              model33.ckpt
|       model33.ckpt                model50.ckpt
|       model400.ckpt               model400.ckpt
|       model50.ckpt                model1000.ckpt
|       moo44.ckpt                  moo44.ckpt
|       v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
|       v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
|       v1-5-pruned.ckpt            v1-5-pruned.ckpt
|       v1-5-vae.ckpt               v1-5-vae.ckpt
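(A natural-sort key like the one below produces the "After" ordering; this is a generic sketch, not necessarily the exact implementation used by the commit.)

    import re

    def natural_sort_key(name):
        """Split a filename into text and number chunks so that, e.g.,
        'model50.ckpt' sorts before 'model400.ckpt'."""
        return [int(part) if part.isdigit() else part.lower()
                for part in re.split(r"(\d+)", name)]

    names = ["model1000.ckpt", "model33.ckpt", "model50.ckpt", "model400.ckpt"]
    print(sorted(names, key=natural_sort_key))
    # ['model33.ckpt', 'model50.ckpt', 'model400.ckpt', 'model1000.ckpt']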
* call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that (AUTOMATIC, 2022-10-22, 1 file, -1/+2)
|
* fix aesthetic gradients doing nothing after loading a different model (MrCheeze, 2022-10-22, 1 file, -2/+2)
|
* removed aesthetic gradients as built-in (AUTOMATIC, 2022-10-22, 1 file, -2/+5)
|   added support for extensions
* loading SD VAE, see PR #3303 (AUTOMATIC, 2022-10-21, 1 file, -1/+4)
|
* do not load aesthetic clip model until it's needed (AUTOMATIC, 2022-10-21, 1 file, -3/+0)
|   add refresh button for aesthetic embeddings
|   add aesthetic params to images' infotext
* Merge branch 'ae' (AUTOMATIC, 2022-10-21, 1 file, -1/+4)
|\
| * Merge branch 'master' into test_resolve_conflicts (MalumaDev, 2022-10-19, 1 file, -3/+25)
| |\
| * | ui fix, reorganization of the code (MalumaDev, 2022-10-16, 1 file, -1/+4)
| | |
* | | XY grid correctly re-assigns model when config changes (random_thoughtss, 2022-10-20, 1 file, -3/+3)
| | |
* | | Added PLMS hijack and made sure to always replace methods (random_thoughtss, 2022-10-20, 1 file, -2/+1)
| | |
* | | Added support for RunwayML inpainting model (random_thoughtss, 2022-10-19, 1 file, -1/+15)
| | |
* | | fix for broken checkpoint merger (AUTOMATIC, 2022-10-19, 1 file, -1/+4)
| |/
|/|
* | more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names) (AUTOMATIC, 2022-10-19, 1 file, -3/+25)
|/
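(The usual way to tolerate "weird cond_stage_model layer names" is to rewrite known key prefixes before applying the state dict with strict=False; the replacement table below is illustrative, not a copy of the real one.)

    # Illustrative prefix remapping for checkpoints whose CLIP text-encoder
    # weights were saved under an older layer naming scheme.
    KEY_REPLACEMENTS = {
        "cond_stage_model.transformer.embeddings.":
            "cond_stage_model.transformer.text_model.embeddings.",
    }

    def normalize_state_dict_keys(state_dict):
        fixed = {}
        for key, value in state_dict.items():
            for old, new in KEY_REPLACEMENTS.items():
                if key.startswith(old):
                    key = new + key[len(old):]
            fixed[key] = value
        return fixed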
* Merge branch 'master' into ckpt-cache (AUTOMATIC1111, 2022-10-15, 1 file, -3/+2)
|\
| * rework the code for lowram a bit (AUTOMATIC, 2022-10-14, 1 file, -10/+2)
| |
| * load models to VRAM when using `--lowram` param (Ljzd-PRO, 2022-10-14, 1 file, -2/+13)
| |   load models to VRAM instead of RAM (for machines that have more VRAM than RAM, such as a free Google Colab server)
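(A sketch of the --lowram idea, assuming CUDA is available: map the checkpoint tensors straight onto the GPU so system RAM never has to hold the full set of weights. The flag plumbing is illustrative.)

    import torch

    def read_state_dict(checkpoint_path, lowram=False):
        """Read a checkpoint into system RAM by default, or directly into
        VRAM when RAM is the scarcer resource (sketch of --lowram)."""
        map_location = "cuda" if lowram else "cpu"
        return torch.load(checkpoint_path, map_location=map_location)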
* | add checkpoint cache option to UI for faster model switching (Rae Fu, 2022-10-14, 1 file, -22/+32)
|/    switching time reduced from ~1500ms to ~280ms
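(The checkpoint cache amounts to keeping recently read state dicts in memory keyed by checkpoint, so switching back avoids the disk read; a rough sketch, with names chosen for illustration.)

    import collections
    import torch

    # Recently read state dicts, keyed by checkpoint path.
    checkpoints_loaded = collections.OrderedDict()

    def get_state_dict(checkpoint_path, cache_size=1):
        if checkpoint_path in checkpoints_loaded:
            checkpoints_loaded.move_to_end(checkpoint_path)   # mark as most recently used
            return checkpoints_loaded[checkpoint_path]

        state_dict = torch.load(checkpoint_path, map_location="cpu")
        if cache_size > 0:
            checkpoints_loaded[checkpoint_path] = state_dict
            while len(checkpoints_loaded) > cache_size:
                checkpoints_loaded.popitem(last=False)        # evict the oldest entry
        return state_dict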
* no to different messages plus fix using != to compare to None (AUTOMATIC, 2022-10-10, 1 file, -6/+3)
|
* Merge pull request #2131 from ssysm/upstream-master (AUTOMATIC1111, 2022-10-10, 1 file, -1/+8)
|\   Add VAE Path Arguments
| * change vae loading method (ssysm, 2022-10-10, 1 file, -2/+9)
| |
| * Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master (ssysm, 2022-10-10, 1 file, -8/+12)
| |\
| * | add vae path args (ssysm, 2022-10-10, 1 file, -1/+1)
| | |
* | | --no-half-vae (AUTOMATIC, 2022-10-10, 1 file, -0/+3)
| |/
|/|
* | change up #2056 to make it work how i want it to plus make xy plot write correct values to images (AUTOMATIC, 2022-10-09, 1 file, -2/+0)
| |
* | Added extended model details to infotext (William Moorehouse, 2022-10-09, 1 file, -1/+2)
| |
* | fix model switching not working properly if there is a different yaml config (AUTOMATIC, 2022-10-09, 1 file, -1/+2)
| |
* | fixed incorrect message about loading config; thanks anon! (AUTOMATIC, 2022-10-09, 1 file, -1/+1)
| |
* | make main model loading and model merger use the same code (AUTOMATIC, 2022-10-09, 1 file, -5/+9)
|/
* support loading .yaml config with same name as model (AUTOMATIC, 2022-10-08, 1 file, -7/+23)
|   support EMA weights in processing (????)
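(Per-model config resolution boils down to looking for a .yaml sitting next to the checkpoint; a minimal sketch, with the default config path as an assumption.)

    import os

    def find_checkpoint_config(checkpoint_path, default_config="configs/v1-inference.yaml"):
        """Prefer 'model.yaml' next to 'model.ckpt' over the default config
        (sketch; the default path is illustrative)."""
        config_path = os.path.splitext(checkpoint_path)[0] + ".yaml"
        return config_path if os.path.exists(config_path) else default_config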
* chore: Fix typos (Aidan Holland, 2022-10-08, 1 file, -2/+2)
|
* fix: handles when state_dict does not exist (leko, 2022-10-08, 1 file, -1/+5)
|
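(Some checkpoints wrap their weights in a "state_dict" key and some store them at the top level; the fix amounts to a fallback like this sketch.)

    def get_state_dict_from_checkpoint(pl_sd):
        """Return the inner state dict if present, otherwise assume the
        checkpoint itself is the state dict (sketch)."""
        return pl_sd.get("state_dict", pl_sd)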
* support loading VAE (AUTOMATIC, 2022-10-07, 1 file, -0/+8)
|
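(Loading a separately distributed VAE means overlaying its weights onto the checkpoint's first-stage autoencoder; a sketch under the assumption that training-only loss keys are skipped.)

    import torch

    def load_vae_weights(model, vae_path):
        """Apply standalone VAE weights to the model's first_stage_model (sketch)."""
        vae_ckpt = torch.load(vae_path, map_location="cpu")
        vae_sd = vae_ckpt.get("state_dict", vae_ckpt)
        # Keep only encoder/decoder tensors; loss-related keys are dropped.
        vae_sd = {k: v for k, v in vae_sd.items() if not k.startswith("loss.")}
        model.first_stage_model.load_state_dict(vae_sd)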