path: root/modules/sd_models.py
Age         Commit message  (Author, Lines -/+)
2023-12-06  Fix forced reload  (Kohaku-Blueleaf, -1/+1)
2023-12-02  Ensure the cached weight will not be affected  (Kohaku-Blueleaf, -2/+2)
2023-12-02  Merge branch 'dev' into test-fp8  (Kohaku-Blueleaf, -4/+13)
2023-12-01  Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load  (MrCheeze, -4/+13)
2023-11-25  Fix pre-fp8  (Kohaku-Blueleaf, -1/+1)
2023-11-21  Option for using fp16 weight when applying lora  (Kohaku-Blueleaf, -3/+11)
2023-11-19  Use options instead of cmd_args  (Kohaku-Blueleaf, -29/+32)
2023-11-16  Merge branch 'dev' into test-fp8  (Kohaku-Blueleaf, -1/+4)
2023-11-05  more changes for #13865: fix formatting, rename the function, add comment and add a readme entry  (AUTOMATIC1111, -1/+1)
2023-11-05  linter  (AUTOMATIC1111, -2/+2)
2023-11-05  Merge branch 'dev' into master  (AUTOMATIC1111, -20/+30)
2023-11-05  Use devices.torch_gc() instead of empty_cache()  (Ritesh Gangnani, -1/+0)
2023-11-05  Add SSD-1B as a supported model  (Ritesh Gangnani, -2/+6)
2023-10-28  ManualCast for 10/16 series GPUs  (Kohaku-Blueleaf, -9/+12)
2023-10-25  ignore mps for fp8  (Kohaku-Blueleaf, -1/+3)
2023-10-25  Fix alphas_cumprod  (Kohaku-Blueleaf, -1/+2)
2023-10-25  Fix alphas_cumprod dtype  (Kohaku-Blueleaf, -0/+1)
2023-10-25  fp8 for TE  (Kohaku-Blueleaf, -0/+7)
2023-10-24  Fix lint  (Kohaku-Blueleaf, -1/+1)
2023-10-24  Add CPU fp8 support  (Kohaku-Blueleaf, -4/+16)
    Since norm layers need fp32, only the linear operation layers (conv2d/linear) are converted. The TE also has some PyTorch functions that do not support bf16 autocast on CPU, so a condition is added to indicate whether the autocast is for the unet.
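
The selective-cast idea described in that commit body can be sketched as follows. This is a minimal illustration of the selection logic only, with made-up names (`selective_cast`, the layer-kind strings) that do not come from the webui codebase:

```python
# Hedged sketch: only matmul-heavy layer kinds (linear/conv2d) receive the
# low-precision dtype; norm layers stay in fp32, as the commit body explains.
CASTABLE = {"linear", "conv2d"}  # norm layers need fp32, so they are excluded

def selective_cast(layers, low_precision="float8"):
    """layers maps layer name -> layer kind, e.g. 'linear' or 'layernorm'."""
    return {
        name: (low_precision if kind in CASTABLE else "float32")
        for name, kind in layers.items()
    }

# Example: a toy unet description
print(selective_cast({"in_proj": "linear", "mid_conv": "conv2d", "norm1": "layernorm"}))
# {'in_proj': 'float8', 'mid_conv': 'float8', 'norm1': 'float32'}
```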
2023-10-19  Add sdxl only arg  (Kohaku-Blueleaf, -0/+3)
2023-10-19  Add fp8 for sd unet  (Kohaku-Blueleaf, -0/+3)
2023-10-15  repair unload sd checkpoint button  (AUTOMATIC1111, -12/+1)
2023-10-14  use shallow copy for #13535  (AUTOMATIC1111, -2/+1)
2023-10-14  Merge pull request #13535 from chu8129/dev  (AUTOMATIC1111, -4/+5)
    fix: checkpoints_loaded {checkpoint: state_dict}; model.load_state_dict issue when a dict value is empty
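
The shallow-copy fix referenced by those commits can be sketched like this. The cache and helper names here are illustrative, not the actual webui implementation; the point is that consumers get a fresh dict object, so mutating it cannot empty the cached original:

```python
# Hedged sketch: a state-dict cache that hands out shallow copies, so a
# loader that pops keys from the dict it receives leaves the cache intact.
import copy

checkpoints_loaded = {}  # checkpoint name -> cached state dict (illustrative)

def get_state_dict(name, loader):
    if name not in checkpoints_loaded:
        checkpoints_loaded[name] = loader(name)
    # shallow copy: same underlying tensors, but a fresh dict object
    return copy.copy(checkpoints_loaded[name])

sd = get_state_dict("model-a", lambda n: {"w": [1.0], "b": [0.0]})
sd.pop("w")  # consumer mutates its copy
print(checkpoints_loaded["model-a"])  # cached dict still intact
```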
2023-10-07  revert  (wangqiuwen, -0/+2)
2023-10-07  up  (wangqiuwen, -6/+5)
2023-09-30  Merge pull request #13139 from AUTOMATIC1111/ckpt-dir-path-separator  (AUTOMATIC1111, -2/+3)
    fix `--ckpt-dir` path separator and option to use `short name` for checkpoint dropdown
2023-09-30  add missing import, simplify code, use patches module for #13276  (AUTOMATIC1111, -7/+12)
2023-09-30  Merge pull request #13276 from woweenie/patch-1  (AUTOMATIC1111, -1/+14)
    patch DDPM.register_betas so that users can put given_betas in model yaml
2023-09-18  fix  (王秋文/qwwang, -0/+1)
2023-09-15  patch DDPM.register_betas so that users can put given_betas in model yaml  (woweenie, -1/+14)
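
The patching approach that commit describes can be sketched as a wrapper around the original method. The `DDPM` class body and the `model_config` attribute below are minimal stand-ins for illustration, not the real ldm code:

```python
# Hedged sketch: wrap the original register_betas so that given_betas read
# from a (hypothetical) parsed model yaml reaches it when not passed directly.
class DDPM:
    def register_betas(self, given_betas=None):
        # stand-in: the real method builds a full noise schedule
        self.betas = given_betas if given_betas is not None else "default-schedule"

_original_register_betas = DDPM.register_betas

def patched_register_betas(self, given_betas=None, **kwargs):
    if given_betas is None:
        # fall back to betas supplied in the model yaml (illustrative attribute)
        given_betas = getattr(self, "model_config", {}).get("given_betas")
    return _original_register_betas(self, given_betas=given_betas, **kwargs)

DDPM.register_betas = patched_register_betas

model = DDPM()
model.model_config = {"given_betas": [0.1, 0.2, 0.3]}
model.register_betas()
print(model.betas)  # [0.1, 0.2, 0.3]
```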
2023-09-15  use dict[key]=model did not update the OrderedDict order; should use move_to_end  (qiuwen.wang, -0/+1)
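
The behavior that commit points out is standard-library fact: re-assigning an existing key does not move it within an `OrderedDict`; `move_to_end` must be called explicitly. A minimal illustration with an LRU-style checkpoint cache (names illustrative):

```python
# Re-assignment keeps the original position; move_to_end marks most-recent.
from collections import OrderedDict

cache = OrderedDict([("ckpt-a", "model-a"), ("ckpt-b", "model-b")])
cache["ckpt-a"] = "model-a"   # re-assignment: order is unchanged
print(list(cache))            # ['ckpt-a', 'ckpt-b']
cache.move_to_end("ckpt-a")   # explicitly mark as most recently used
print(list(cache))            # ['ckpt-b', 'ckpt-a']
```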
2023-09-08  parsing string to path  (w-e-w, -2/+3)
2023-08-30  keep order in list of checkpoints when loading model that doesn't have a checksum  (AUTOMATIC1111, -1/+21)
2023-08-23  set devices.dtype_unet correctly  (AUTOMATIC1111, -1/+2)
2023-08-22  add --medvram-sdxl  (AUTOMATIC1111, -8/+8)
2023-08-21  Fix for consistency with shared.opts.sd_vae of UI  (Uminosachi, -0/+1)
2023-08-20  Change where VAE state is stored in the model  (Uminosachi, -2/+0)
2023-08-20  Change to access sd_model attributes with dot notation  (Uminosachi, -2/+2)
2023-08-20  Store base_vae and loaded_vae_file in sd_model  (Uminosachi, -16/+8)
2023-08-20  Fix SD VAE switch error after model reuse  (Uminosachi, -2/+20)
2023-08-17  resolve the issue with loading fp16 checkpoints while using --no-half  (AUTOMATIC1111, -1/+4)
2023-08-16  send weights to target device instead of CPU memory  (AUTOMATIC1111, -1/+16)
2023-08-16  Revert "send weights to target device instead of CPU memory"  (AUTOMATIC1111, -1/+1)
    This reverts commit 0815c45bcdec0a2e5c60bdd5b33d95813d799c01.
2023-08-16  send weights to target device instead of CPU memory  (AUTOMATIC1111, -1/+1)
2023-08-12  put refiner into main UI, into the new accordions section  (AUTOMATIC1111, -0/+3)
    add VAE from main model into infotext, not from refiner model
    option to make scripts UI without gr.Group
    fix inconsistencies with refiner when using samplers that do more denoising than steps
2023-08-10  resolve merge issues  (AUTOMATIC1111, -2/+5)
2023-08-10  Merge branch 'dev' into refiner  (AUTOMATIC1111, -11/+12)