path: root/modules/sd_models.py
Commit message                                      Author            Age         Files  Lines
...
*   Merge pull request #3818 from jwatzman/master  (AUTOMATIC1111, 2022-10-29, 1 file, -4/+7)
|\      Reduce peak memory usage when changing models
| *     Reduce peak memory usage when changing models  (Josh Watzman, 2022-10-27, 1 file, -4/+7)
| |     A few tweaks to reduce peak memory usage, the biggest being that if we
| |     aren't using the checkpoint cache, we shouldn't duplicate the model
| |     state dict just to immediately throw it away. On my machine with 16GB
| |     of RAM, this change means I can typically change models, whereas
| |     before it would typically OOM.
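The idea in the commit above can be sketched in a few lines. This is a hedged illustration, not the actual webui code: `read_state_dict`, the `cache` dict, and the flag name are stand-ins.

```python
def read_weights_for_load(read_state_dict, path, cache, cache_enabled):
    # Hypothetical sketch: before this change, the state dict was copied
    # into the cache unconditionally and then immediately discarded when
    # caching was off, doubling peak RAM. Copy only when we will keep it.
    sd = read_state_dict(path)
    if cache_enabled:
        cache[path] = sd.copy()  # duplicate only if the copy will be reused
    return sd
```

With caching disabled, the loaded dict is the only copy in memory and can be freed as soon as the weights are moved into the model.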
* |     Natural sorting for dropdown checkpoint list  (Antonio, 2022-10-28, 1 file, -2/+5)
|/      Example:
|
|       Before                      After
|       11.ckpt                     11.ckpt
|       ab.ckpt                     ab.ckpt
|       ade_pablo_step_1000.ckpt    ade_pablo_step_500.ckpt
|       ade_pablo_step_500.ckpt     ade_pablo_step_1000.ckpt
|       ade_step_1000.ckpt          ade_step_500.ckpt
|       ade_step_1500.ckpt          ade_step_1000.ckpt
|       ade_step_2000.ckpt          ade_step_1500.ckpt
|       ade_step_2500.ckpt          ade_step_2000.ckpt
|       ade_step_3000.ckpt          ade_step_2500.ckpt
|       ade_step_500.ckpt           ade_step_3000.ckpt
|       atp_step_5500.ckpt          atp_step_5500.ckpt
|       model1.ckpt                 model1.ckpt
|       model10.ckpt                model10.ckpt
|       model1000.ckpt              model33.ckpt
|       model33.ckpt                model50.ckpt
|       model400.ckpt               model400.ckpt
|       model50.ckpt                model1000.ckpt
|       moo44.ckpt                  moo44.ckpt
|       v1-4-pruned-emaonly.ckpt    v1-4-pruned-emaonly.ckpt
|       v1-5-pruned-emaonly.ckpt    v1-5-pruned-emaonly.ckpt
|       v1-5-pruned.ckpt            v1-5-pruned.ckpt
|       v1-5-vae.ckpt               v1-5-vae.ckpt
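Natural sorting of the kind shown above is typically done with a key function that splits names into digit and non-digit runs. A minimal sketch (the function name is illustrative, not the webui's):

```python
import re

def natural_sort_key(name):
    # split into digit and non-digit runs so numeric parts compare as
    # numbers ("step_500" < "step_1000") instead of as strings
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r'(\d+)', name)]

checkpoints = ["model10.ckpt", "model1000.ckpt", "model1.ckpt", "model33.ckpt"]
print(sorted(checkpoints, key=natural_sort_key))
# → ['model1.ckpt', 'model10.ckpt', 'model33.ckpt', 'model1000.ckpt']
```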
*   call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that  (AUTOMATIC, 2022-10-22, 1 file, -1/+2)
|
*   fix aesthetic gradients doing nothing after loading a different model  (MrCheeze, 2022-10-22, 1 file, -2/+2)
|
*   removed aesthetic gradients as built-in  (AUTOMATIC, 2022-10-22, 1 file, -2/+5)
|       added support for extensions
*   loading SD VAE, see PR #3303  (AUTOMATIC, 2022-10-21, 1 file, -1/+4)
|
*   do not load aesthetic clip model until it's needed  (AUTOMATIC, 2022-10-21, 1 file, -3/+0)
|       add refresh button for aesthetic embeddings
|       add aesthetic params to images' infotext
*   Merge branch 'ae'  (AUTOMATIC, 2022-10-21, 1 file, -1/+4)
|\
| *     Merge branch 'master' into test_resolve_conflicts  (MalumaDev, 2022-10-19, 1 file, -3/+25)
| |\
| * |   ui fix, reorganization of the code  (MalumaDev, 2022-10-16, 1 file, -1/+4)
| | |
* | |   XY grid correctly re-assigns model when config changes  (random_thoughtss, 2022-10-20, 1 file, -3/+3)
| | |
* | |   Added PLMS hijack and made sure to always replace methods  (random_thoughtss, 2022-10-20, 1 file, -2/+1)
| | |
* | |   Added support for RunwayML inpainting model  (random_thoughtss, 2022-10-19, 1 file, -1/+15)
| | |
* | |   fix for broken checkpoint merger  (AUTOMATIC, 2022-10-19, 1 file, -1/+4)
| |/
|/|
* |     more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names)  (AUTOMATIC, 2022-10-19, 1 file, -3/+25)
|/
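Tolerant loading of odd layer names usually means remapping legacy key prefixes before feeding the state dict to the model. A hedged sketch — the replacement table below is an illustrative assumption, not the file's actual table:

```python
# assumed example entry: old CLIP embedding prefix -> current layout
key_replacements = {
    'cond_stage_model.transformer.embeddings.':
        'cond_stage_model.transformer.text_model.embeddings.',
}

def transform_checkpoint_dict_key(key):
    # rewrite known legacy prefixes so checkpoints with unusual
    # cond_stage_model layer names still load; unknown keys pass through
    for old, new in key_replacements.items():
        if key.startswith(old):
            return new + key[len(old):]
    return key
```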
*   Merge branch 'master' into ckpt-cache  (AUTOMATIC1111, 2022-10-15, 1 file, -3/+2)
|\
| *     rework the code for lowram a bit  (AUTOMATIC, 2022-10-14, 1 file, -10/+2)
| |
| *     load models to VRAM when using `--lowram` param  (Ljzd-PRO, 2022-10-14, 1 file, -2/+13)
| |     load models to VRAM instead of RAM (for machines which have more VRAM
| |     than RAM, such as a free Google Colab server)
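One way to implement the behavior described in the commit above: choose the `map_location` for `torch.load` based on the flag, loading to CPU by default but letting tensors go straight to their saved (GPU) device under `--lowram`. The function name and flag handling here are assumptions based on the commit message.

```python
def weight_load_location(lowram):
    # With --lowram, skip forcing tensors onto CPU at load time;
    # a map_location of None lets torch.load restore tensors to the
    # device they were saved for, keeping scarce system RAM free.
    return None if lowram else "cpu"

# usage (requires torch + a checkpoint; not executed here):
# state = torch.load(checkpoint_path, map_location=weight_load_location(lowram))
```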
* |     add checkpoint cache option to UI for faster model switching  (Rae Fu, 2022-10-14, 1 file, -22/+32)
|/      switching time reduced from ~1500ms to ~280ms
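A checkpoint cache of this kind is essentially a small LRU of state dicts kept in RAM, so switching back to a recent model becomes a dict lookup instead of a multi-second disk read. An illustrative sketch with assumed names (the real cache lives in `sd_models.py` and is sized by a UI option):

```python
from collections import OrderedDict

checkpoints_loaded = OrderedDict()  # path -> cached state dict

def get_checkpoint_state_dict(path, read_fn, cache_size=1):
    if path in checkpoints_loaded:
        checkpoints_loaded.move_to_end(path)   # mark as most recently used
        return checkpoints_loaded[path]
    sd = read_fn(path)                         # slow path: read from disk
    if cache_size > 0:
        checkpoints_loaded[path] = sd
        while len(checkpoints_loaded) > cache_size:
            checkpoints_loaded.popitem(last=False)  # evict oldest entry
    return sd
```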
*   no to different messages, plus fix using != to compare to None  (AUTOMATIC, 2022-10-10, 1 file, -6/+3)
|
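The `!=` fix above reflects a standard Python pitfall: `x != None` dispatches to `__eq__`/`__ne__`, which objects may override, while `is not None` checks identity and cannot be hijacked. A self-contained demonstration (the class is purely illustrative):

```python
class AlwaysEqual:
    # some objects override __eq__ (NumPy arrays are the classic case),
    # so equality comparison with None can give surprising answers
    def __eq__(self, other):
        return True

x = AlwaysEqual()
print(x != None)       # False: the overridden __eq__ hijacks the comparison
print(x is not None)   # True: identity comparison is never overridden
```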
*   Merge pull request #2131 from ssysm/upstream-master  (AUTOMATIC1111, 2022-10-10, 1 file, -1/+8)
|\      Add VAE Path Arguments
| *     change vae loading method  (ssysm, 2022-10-10, 1 file, -2/+9)
| |
| *     Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master  (ssysm, 2022-10-10, 1 file, -8/+12)
| |\
| * |   add vae path args  (ssysm, 2022-10-10, 1 file, -1/+1)
| | |
* | |   --no-half-vae  (AUTOMATIC, 2022-10-10, 1 file, -0/+3)
| |/
|/|
* |     change up #2056 to make it work how I want it to, plus make xy plot write correct values to images  (AUTOMATIC, 2022-10-09, 1 file, -2/+0)
| |
* |     Added extended model details to infotext  (William Moorehouse, 2022-10-09, 1 file, -1/+2)
| |
* |     fix model switching not working properly if there is a different yaml config  (AUTOMATIC, 2022-10-09, 1 file, -1/+2)
| |
* |     fixed incorrect message about loading config; thanks anon!  (AUTOMATIC, 2022-10-09, 1 file, -1/+1)
| |
* |     make main model loading and model merger use the same code  (AUTOMATIC, 2022-10-09, 1 file, -5/+9)
|/
*   support loading .yaml config with same name as model  (AUTOMATIC, 2022-10-08, 1 file, -7/+23)
|       support EMA weights in processing (????)
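The same-name config lookup described above amounts to checking for a `.yaml` with the checkpoint's stem before falling back to the default. A minimal sketch with an assumed function name:

```python
import os

def find_checkpoint_config(checkpoint_path, default_config):
    # prefer a config sitting next to the checkpoint with the same stem,
    # e.g. model.ckpt -> model.yaml; otherwise use the default config
    config_path = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.exists(config_path):
        return config_path
    return default_config
```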
*   chore: Fix typos  (Aidan Holland, 2022-10-08, 1 file, -2/+2)
|
*   fix: handle the case when state_dict does not exist  (leko, 2022-10-08, 1 file, -1/+5)
|
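The missing-`state_dict` fix reflects that some checkpoint files wrap their weights under a `"state_dict"` key while others store them at the top level. A hedged sketch of handling both layouts:

```python
def get_state_dict_from_checkpoint(pl_sd):
    # unwrap the nested "state_dict" key if present, otherwise assume
    # the dict itself already holds the weights (both layouts occur
    # in checkpoints found in the wild)
    if "state_dict" in pl_sd:
        return pl_sd["state_dict"]
    return pl_sd
```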
*   support loading VAE  (AUTOMATIC, 2022-10-07, 1 file, -0/+8)
|
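VAE loading of this kind is typically a lookup for a companion weights file next to the checkpoint. The `.vae.pt` naming convention below is an assumption for illustration:

```python
import os

def find_vae_near_checkpoint(checkpoint_path):
    # assumed convention: a file named <checkpoint stem>.vae.pt beside
    # the checkpoint supplies replacement first-stage (VAE) weights
    vae_path = os.path.splitext(checkpoint_path)[0] + ".vae.pt"
    return vae_path if os.path.exists(vae_path) else None
```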
*   emergency fix for disabling SD model download after multiple complaints  (AUTOMATIC, 2022-10-02, 1 file, -2/+2)
|
*   disabled SD model download after multiple complaints  (AUTOMATIC, 2022-10-02, 1 file, -10/+8)
|
*   fix --ckpt option breaking model selection  (AUTOMATIC, 2022-10-02, 1 file, -1/+1)
|
*   initial support for training textual inversion  (AUTOMATIC, 2022-10-02, 1 file, -1/+3)
|
*   if --ckpt option is specified, load that model  (AUTOMATIC, 2022-09-30, 1 file, -0/+1)
|
*   revert the annotation not supported by old pythons  (AUTOMATIC, 2022-09-30, 1 file, -1/+1)
|
*   remove unwanted formatting/functionality from the PR  (AUTOMATIC, 2022-09-30, 1 file, -39/+17)
|
*   fix bugs in the PR  (AUTOMATIC, 2022-09-30, 1 file, -1/+1)
|
*   Merge pull request #1109 from d8ahazard/ModelLoader  (AUTOMATIC1111, 2022-09-30, 1 file, -11/+50)
|\      Model Loader, Fixes
| *     Merge remote-tracking branch 'upstream/master' into ModelLoader  (d8ahazard, 2022-09-30, 1 file, -8/+46)
| |\
| * |   Holy $hit.  (d8ahazard, 2022-09-29, 1 file, -1/+2)
| | |   Yep.
| | |   Fix gfpgan_model_arch requirement(s).
| | |   Add Upscaler base class, move from images.
| | |   Add a lot of methods to Upscaler.
| | |   Re-work all the child upscalers to be proper classes.
| | |   Add BSRGAN scaler.
| | |   Add ldsr_model_arch class, removing the dependency for another repo that just uses regular latent-diffusion stuff.
| | |   Add one universal method that will always find and load new upscaler models without having to add new "setup_model" calls. Still need to add command line params, but that could probably be automated.
| | |   Add a "self.scale" property to all Upscalers so the scalers themselves can do "things" in response to the requested upscaling size.
| | |   Ensure LDSR doesn't get stuck in a longer loop of "upscale/downscale/upscale" as we try to reach the target upscale size.
| | |   Add typehints for IDE sanity.
| | |   PEP-8 improvements.
| | |   Moar.
| * |   Use model loader with stable-diffusion too.  (d8ahazard, 2022-09-27, 1 file, -19/+29)
| | |   Hook the model loader into the SD_models file.
| | |   Add default url/download if checkpoint is not found.
| | |   Add matching stablediffusion-models-path argument.
| | |   Add message that --ckpt-dir will be removed in the future, but have it pipe to stablediffusion-models-path for now.
| | |   Update help strings for models-path args so they're more or less uniform.
| | |   Move sd_model "setup" call to webUI with the others.
| | |   Ensure "cleanup_models" method moves existing models to the new locations, including SD, and that we aren't deleting folders that still have stuff in them.
* | |   return shortest checkpoint title match  (DepFA, 2022-09-30, 1 file, -11/+1)
| | |
* | |   add get_closet_checkpoint_match  (DepFA, 2022-09-30, 1 file, -0/+15)
| |/
|/|
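The two commits above combine into a lookup that filters checkpoint titles by a search string and returns the shortest hit. A hedged sketch — the function keeps the codebase's actual spelling ("closet"), but the signature and title list are illustrative:

```python
def get_closet_checkpoint_match(search_string, titles):
    # collect titles containing the search string; the shortest one is
    # the least-decorated, most specific match (per the commit above)
    applicable = [t for t in titles if search_string in t]
    if applicable:
        return min(applicable, key=len)
    return None
```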
* |     fix for incorrect model weight loading for #814  (AUTOMATIC, 2022-09-29, 1 file, -1/+5)
| |