Commit message | Author | Date | Files | Lines
---|---|---|---|---
fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType | AUTOMATIC | 2023-01-11 | 1 | -0/+1
possible fix for fallback for fast model creation from config, attempt 2 | AUTOMATIC | 2023-01-11 | 1 | -0/+1
possible fix for fallback for fast model creation from config | AUTOMATIC | 2023-01-11 | 1 | -0/+3
add support for transformers==4.25.1; add fallback for when quick model creation fails | AUTOMATIC | 2023-01-10 | 1 | -2/+6
add more stuff to ignore when creating model from config; prevent .vae.safetensors files from being listed as stable diffusion models | AUTOMATIC | 2023-01-10 | 1 | -4/+28
disable torch weight initialization and CLIP checkpoint downloading/reading to speed up creating the sd model from config | AUTOMATIC | 2023-01-10 | 1 | -2/+3
allow model load if previous model failed | Vladimir Mandic | 2023-01-09 | 1 | -5/+10
use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything | AUTOMATIC | 2023-01-04 | 1 | -1/+1
Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' | AUTOMATIC | 2023-01-04 | 1 | -1/+4
Attempting to solve slow loads for `safetensors`. Fixes #5893 | Nicolas Patry | 2022-12-27 | 1 | -1/+4
fix broken inpainting model | AUTOMATIC | 2023-01-04 | 1 | -3/+0
find configs for models at runtime rather than when starting | AUTOMATIC | 2023-01-04 | 1 | -13/+18
helpful error message when trying to load 2.0 without config; failing to load model weights from settings won't break generation for the currently loaded model anymore | AUTOMATIC | 2023-01-04 | 1 | -8/+18
call script callbacks for reloaded model after loading embeddings | AUTOMATIC | 2023-01-03 | 1 | -2/+2
fix the issue with training on SD2.0 | AUTOMATIC | 2023-01-01 | 1 | -0/+2
validate textual inversion embeddings | Vladimir Mandic | 2022-12-31 | 1 | -0/+3
fix F541 f-string without any placeholders | Yuval Aboulafia | 2022-12-24 | 1 | -4/+4
Removed length check in sd_model at line 115: commit eba60a4 caused this error; deleting the length check starting at line 115 of sd_model fixes it. https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379 | linuxmobile ( リナックス ) | 2022-12-24 | 1 | -3/+0
Merge pull request #5627 from deanpress/patch-1: fix: fallback model_checkpoint if it's empty | AUTOMATIC1111 | 2022-12-24 | 1 | -0/+4
fix: fallback model_checkpoint if it's empty. This fixes a startup crash when SD attempts to start with a deleted checkpoint: launch.py → webui.initialize() → modules.sd_models.load_model() reaches select_checkpoint() (sd_models.py line 117), where `checkpoint_info = checkpoints_list.get(model_checkpoint, None)` raises `TypeError: unhashable type: 'list'`. A hedged guard sketch follows the log. | Dean van Dugteren | 2022-12-11 | 1 | -0/+4
unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) | MrCheeze | 2022-12-11 | 1 | -1/+3
fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model | MrCheeze | 2022-12-10 | 1 | -0/+1
Merge pull request #4841 from R-N/vae-fix-none: Fix None option of VAE selector | AUTOMATIC1111 | 2022-12-10 | 1 | -0/+2
Merge branch 'a1111' into vae-fix-none | Muhammad Rizqi Nur | 2022-11-19 | 1 | -8/+2
Misc | Muhammad Rizqi Nur | 2022-11-19 | 1 | -0/+1
Fix base VAE caching being done after loading the VAE; also add a safeguard | Muhammad Rizqi Nur | 2022-11-19 | 1 | -0/+1
Depth2img model support | Jay Smith | 2022-12-09 | 1 | -0/+46
make it possible to save nai model using safetensors | AUTOMATIC | 2022-11-28 | 1 | -2/+2
add safetensors support for model merging #4869 | AUTOMATIC | 2022-11-27 | 1 | -11/+15
add safetensors to requirements | AUTOMATIC | 2022-11-27 | 1 | -6/+5
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file: Supporting `*.safetensors` format | AUTOMATIC1111 | 2022-11-27 | 1 | -2/+9
Supporting `*.safetensors` format: if a model file exists with extension `.safetensors`, it can be loaded more safely than with PyTorch weights (see the loading sketch after the log) | Nicolas Patry | 2022-11-21 | 1 | -2/+9
no-half support for SD 2.0 | MrCheeze | 2022-11-26 | 1 | -0/+3
Remove no longer necessary parts and add vae_file safeguard | Muhammad Rizqi Nur | 2022-11-19 | 1 | -8/+2
restore #4035 behavior: if checkpoint cache is set to 1, keep 2 models in cache (the current one plus 1 more); see the cache sketch after the log | cluder | 2022-11-09 | 1 | -1/+1
do not use the ckpt cache if it is disabled; cache the model after it has been loaded from file | cluder | 2022-11-09 | 1 | -10/+17
fix one of previous merges breaking the program | AUTOMATIC | 2022-11-04 | 1 | -0/+2
Merge branch 'master' into fix-ckpt-cache | AUTOMATIC1111 | 2022-11-04 | 1 | -2/+3
fix: loading models without vae from cache | digburn | 2022-11-04 | 1 | -2/+3
Merge branch 'master' into fix-ckpt-cache | Muhammad Rizqi Nur | 2022-11-02 | 1 | -18/+28
fix #3986 breaking --no-half-vae | AUTOMATIC | 2022-11-02 | 1 | -0/+9
Reload VAE without reloading sd checkpoint | Muhammad Rizqi Nur | 2022-11-02 | 1 | -8/+7
Merge branch 'master' into vae-picker | Muhammad Rizqi Nur | 2022-11-01 | 1 | -1/+13
Unload sd_model before loading the other | Jairo Correa | 2022-11-01 | 1 | -1/+13
Checkpoint cache by combination key of checkpoint and vae | Muhammad Rizqi Nur | 2022-10-31 | 1 | -11/+16
Settings to select VAE | Muhammad Rizqi Nur | 2022-10-30 | 1 | -21/+10
Fix #4035 for real now | Muhammad Rizqi Nur | 2022-10-31 | 1 | -6/+7
Fix #4035 | Muhammad Rizqi Nur | 2022-10-31 | 1 | -1/+1
Merge pull request #3818 from jwatzman/master: reduce peak memory usage when changing models | AUTOMATIC1111 | 2022-10-29 | 1 | -4/+7
Reduce peak memory usage when changing models: a few tweaks, the biggest being that if we aren't using the checkpoint cache, we shouldn't duplicate the model state dict just to immediately throw it away; on a machine with 16 GB of RAM this makes model switching typically work where it previously tended to OOM (see the memory sketch after the log) | Josh Watzman | 2022-10-27 | 1 | -4/+7
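
The checkpoint-fallback entry (PR #5627) above describes a crash in `select_checkpoint` when the stored selection is an empty list. As a rough illustration only, the guard amounts to refusing to use a non-string or unknown selection as a dictionary key and falling back to the first known checkpoint; the function signature and behaviour below are assumptions, not the literal PR diff.

```python
# Hypothetical sketch of the fallback described in PR #5627; not the actual diff.
def select_checkpoint(model_checkpoint, checkpoints_list):
    """Return the checkpoint info for the stored selection, falling back to the
    first available checkpoint when the selection is missing or not a string
    (e.g. an empty list left behind by a deleted checkpoint file)."""
    if isinstance(model_checkpoint, str) and model_checkpoint in checkpoints_list:
        return checkpoints_list[model_checkpoint]

    if not checkpoints_list:
        raise FileNotFoundError("No Stable Diffusion checkpoints found")

    fallback = next(iter(checkpoints_list))
    print(f"Checkpoint {model_checkpoint!r} not found; falling back to {fallback}")
    return checkpoints_list[fallback]
```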
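For the `*.safetensors` entries above (format support from PR #4930, plus the load-speed and cuda-device follow-ups), here is a minimal loading sketch. It relies only on the public `safetensors.torch.load_file` and `torch.load` APIs; the helper name `read_state_dict` and the device handling are illustrative assumptions, not the exact webui code.

```python
# Minimal sketch: choose the loader by file extension.
import torch
import safetensors.torch

def read_state_dict(checkpoint_file: str, device: str = "cpu") -> dict:
    """Load a checkpoint state dict, preferring the safer safetensors path."""
    if checkpoint_file.endswith(".safetensors"):
        # safetensors deserializes straight onto the requested device, which is
        # the point of the "slow loads" / "commandline-supplied cuda device" entries.
        return safetensors.torch.load_file(checkpoint_file, device=device)
    # Legacy pickle-based checkpoint; map the tensors to the chosen device.
    return torch.load(checkpoint_file, map_location=device)
```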
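The two checkpoint-cache entries by cluder describe the intended policy: a disabled cache keeps no extra copy, and a cache size of 1 keeps the current model plus one more. A small sketch of that policy, assuming an `OrderedDict` keyed by checkpoint; all names are illustrative rather than the actual webui globals.

```python
# Hypothetical cache-policy sketch; names and structure are assumptions.
import collections

checkpoints_loaded = collections.OrderedDict()  # checkpoint key -> state dict

def cache_checkpoint(checkpoint_key, state_dict, cache_size: int):
    """Store a loaded state dict, keeping the current model plus `cache_size` extras."""
    if cache_size == 0:
        return  # cache disabled: never keep a second copy of the weights in RAM
    checkpoints_loaded[checkpoint_key] = state_dict
    checkpoints_loaded.move_to_end(checkpoint_key)  # mark as most recently used
    while len(checkpoints_loaded) > cache_size + 1:
        checkpoints_loaded.popitem(last=False)  # evict the oldest entry
```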
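Josh Watzman's memory note explains the idea behind PR #3818; the sketch below illustrates it under the assumption that caching is the only reason to copy the state dict. The name `load_model_weights` and the call flow are illustrative, not the exact function in `modules/sd_models.py`.

```python
# Sketch of "don't duplicate the state dict unless it is going to be cached".
import copy

def load_model_weights(model, checkpoint_file, checkpoints_loaded, cache_enabled: bool):
    state_dict = read_state_dict(checkpoint_file)  # see the loading sketch above
    if cache_enabled:
        # Only the cached path pays for a second copy of the weights.
        checkpoints_loaded[checkpoint_file] = copy.deepcopy(state_dict)
    model.load_state_dict(state_dict, strict=False)
    del state_dict  # drop the temporary dict before anything else is allocated
```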