Age  Commit message  [Author, lines changed]
2023-06-25  feat(extensions): add toggle all checkbox to Installed tab  [Martín (Netux) Rodríguez, -2/+23]
    Small quality-of-life addition. While the radio buttons at the top offer an
    option to disable all extensions, that only sets an added flag and doesn't
    actually change the state of the extensions in the UI. A use case for this
    checkbox is disabling all extensions except a few, which is important when
    debugging extensions. You could do that before, but you had to uncheck and
    recheck every extension one by one.
2023-06-09  add changelog for 1.4.0  [AUTOMATIC, -0/+57]
2023-06-09  linter  [AUTOMATIC, -1/+1]
2023-06-09  Merge pull request #11092 from AUTOMATIC1111/Generate-Forever-during-generation  [AUTOMATIC1111, -4/+10]
    Allow activation of Generate Forever during generation
2023-06-09  Merge pull request #11087 from AUTOMATIC1111/persistent_conds_cache  [AUTOMATIC1111, -10/+17]
    persistent conds cache
2023-06-09  Merge pull request #11123 from akx/dont-die-on-bad-symlink-lora  [AUTOMATIC1111, -1/+5]
    Don't die when a LoRA is a broken symlink
2023-06-09  Merge pull request #10295 from Splendide-Imaginarius/mk2-blur-mask  [AUTOMATIC1111, -13/+38]
    Split mask blur into X and Y components, patch Outpainting MK2 accordingly
2023-06-09  Merge pull request #11048 from DGdev91/force_python1_navi_renoir  [AUTOMATIC1111, -1/+16]
    Force Torch version 1.13.1 for RX 5000 series GPUs
2023-06-09  Don't die when a LoRA is a broken symlink  [Aarni Koskela, -1/+5]
    Fixes #11098
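The guard behind this fix can be sketched as follows; `list_lora_files` and the extension list are illustrative stand-ins, not the webui's actual loader code:

```python
import os

def list_lora_files(lora_dir):
    """Collect model files from a directory, skipping broken symlinks
    instead of crashing (the idea behind #11123)."""
    found = []
    for name in sorted(os.listdir(lora_dir)):
        full = os.path.join(lora_dir, name)
        # os.path.exists() follows symlinks and returns False for a
        # dangling one, so a broken link is logged and skipped.
        if not os.path.exists(full):
            print(f"skipping broken symlink: {full}")
            continue
        if name.endswith((".safetensors", ".pt", ".ckpt")):
            found.append(full)
    return found
```

Without the `os.path.exists` check, any later `stat` or `open` on the dangling link would raise and abort the whole scan.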
2023-06-09  Split Outpainting MK2 mask blur into X and Y components  [Splendide Imaginarius, -9/+21]
    Fixes unexpected noise in non-outpainted borders when using the MK2 script.
2023-06-09  Split mask blur into X and Y components  [Splendide Imaginarius, -4/+17]
    Prerequisite to fixing the Outpainting MK2 mask blur bug.
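Why independent X and Y radii matter can be illustrated with a pure-Python separable box blur (a stand-in for the Gaussian blur the webui actually applies; the function names are illustrative). With the radius along the non-outpainted axis set to 0, that border is left untouched:

```python
def box_blur_separable(grid, radius_x, radius_y):
    """Blur a 2-D list of floats with independent horizontal and
    vertical radii. A radius of 0 disables blur along that axis."""
    def blur_row(row, r):
        if r == 0:
            return list(row)
        n = len(row)
        out = []
        for i in range(n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            out.append(sum(row[lo:hi]) / (hi - lo))
        return out

    # horizontal pass with radius_x
    rows = [blur_row(row, radius_x) for row in grid]
    # vertical pass with radius_y: transpose, blur, transpose back
    cols = [blur_row(col, radius_y) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For example, `box_blur_separable(mask, blur_x, 0)` spreads the mask only horizontally, which is what an outpaint that extends the image sideways needs.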
2023-06-08  Generate Forever during generation  [w-e-w, -4/+10]
2023-06-08  persistent conds cache  [w-e-w, -10/+17]
    Update shared.py
2023-06-07  Merge pull request #11058 from AUTOMATIC1111/api-wiki  [AUTOMATIC1111, -2/+2]
    link footer API to Wiki when API is not active
2023-06-07  Merge pull request #11066 from aljungberg/patch-1  [AUTOMATIC1111, -1/+1]
    Fix upcast attention dtype error.
2023-06-06  Fix upcast attention dtype error.  [Alexander Ljungberg, -1/+1]
    Without this fix, enabling the "Upcast cross attention layer to float32"
    option while also using `--opt-sdp-attention` breaks generation with an error:

    ```
    File "/ext3/automatic1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 612, in sdp_attnblock_forward
        out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
    RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
    ```

    The fix is to make sure to upcast the value tensor too.
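The rule behind the fix is that `scaled_dot_product_attention` requires q, k and v to share a single dtype. A pure-Python stand-in (dtypes are plain strings here so the sketch runs without torch; the function is illustrative, not the webui's code):

```python
def upcast_qkv_dtypes(q_dtype, k_dtype, v_dtype, upcast=True):
    """Return the dtypes q, k and v must carry into SDP attention.
    The bug upcast q and k to float32 but left v in half precision;
    the fix upcasts all three."""
    if upcast:
        return ("float32", "float32", "float32")
    dtypes = {q_dtype, k_dtype, v_dtype}
    if len(dtypes) != 1:
        # Mixed dtypes are exactly what triggered the RuntimeError above.
        raise ValueError(f"q/k/v dtypes differ: {sorted(dtypes)}")
    return (q_dtype, k_dtype, v_dtype)
```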
2023-06-06  Skip forcing Python and PyTorch versions if TORCH_COMMAND is already set  [DGdev91, -9/+12]
2023-06-06  link footer API to Wiki when API is not active  [w-e-w, -2/+2]
2023-06-06  Write "RX 5000 Series" instead of "Navi" in the error message  [DGdev91, -1/+1]
2023-06-06  Check Python version for Navi 1 only  [DGdev91, -1/+1]
2023-06-06  Force python1 for Navi 1 only, use python_cmd for python  [DGdev91, -15/+8]
2023-06-06  Fix error in webui.sh  [DGdev91, -0/+1]
2023-06-06  Force Torch version 1.13.1 for Navi and Renoir GPUs  [DGdev91, -5/+23]
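The combined effect of these commits (pin Torch 1.13.1 for Navi 1 cards, but respect a user-supplied `TORCH_COMMAND`) can be sketched like this; the actual logic lives in the shell launcher, and the GPU test and version strings here are illustrative:

```python
import os

def pick_torch_command(gpu_name, env=None):
    """Decide which pip command installs torch, honoring an explicit
    TORCH_COMMAND override as the follow-up fix requires."""
    env = os.environ if env is None else env
    if env.get("TORCH_COMMAND"):
        # User already chose; don't force anything (the skip fix).
        return env["TORCH_COMMAND"]
    if "RX 5" in gpu_name:  # crude stand-in for Navi 1 detection
        return "pip install torch==1.13.1 torchvision==0.14.1"
    return "pip install torch torchvision"
```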
2023-06-06  Merge pull request #11047 from AUTOMATIC1111/parse_generation_parameters_with_error  [AUTOMATIC1111, -9/+12]
    handles exception when parsing generation parameters from png info
2023-06-06  print error and continue  [w-e-w, -9/+12]
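The "print error and continue" pattern from #11047 looks roughly like this; `parse_generation_parameters` here is a deliberately minimal hypothetical parser, not the webui's real one:

```python
def parse_generation_parameters(text):
    """Toy parser: pull 'key: value' pairs from a comma-separated
    PNG-info line (illustrative only)."""
    params = {}
    for chunk in text.split(","):
        key, sep, value = chunk.partition(":")
        if sep:
            params[key.strip()] = value.strip()
    return params

def safe_parse(text):
    # The fix: log the failure and return an empty result instead of
    # letting a malformed infotext crash the UI.
    try:
        return parse_generation_parameters(text)
    except Exception as exc:
        print(f"Error parsing generation parameters: {exc}")
        return {}
```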
2023-06-05  Merge pull request #11037 from AUTOMATIC1111/restart-autolaunch  [AUTOMATIC1111, -1/+2]
    fix rework-disable-autolaunch for new restart method
2023-06-06  SD_WEBUI_RESTARTING  [w-e-w, -2/+2]
2023-06-06  restore old disable --autolaunch  [w-e-w, -0/+3]
2023-06-05  Merge pull request #11031 from akx/zoom-and-pan-namespace  [AUTOMATIC1111, -124/+111]
    Zoom and pan: namespace & simplify
2023-06-05  Merge pull request #11043 from akx/restart-envvar  [AUTOMATIC1111, -13/+33]
    Restart: only do restart if running via the wrapper script
2023-06-05  Restart: only do restart if running via the wrapper script  [Aarni Koskela, -13/+33]
2023-06-06  rework-disable-autolaunch  [w-e-w, -4/+2]
2023-06-05  revert the message to how it was  [AUTOMATIC, -1/+3]
2023-06-05  Merge pull request #10956 from akx/len  [AUTOMATIC1111, -48/+47]
    Simplify a bunch of `len(x) > 0`/`len(x) == 0` style expressions
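The simplification in #10956 relies on Python's truthiness rules: an empty sequence is falsy, so explicit `len()` comparisons are redundant. A minimal illustration (the helper name is made up):

```python
def first_or_none(items):
    # before: if len(items) > 0: return items[0]
    if items:  # truthy for any non-empty sequence
        return items[0]
    return None

# The pairs of expressions the PR converts between are equivalent:
xs = ["a"]
assert (len(xs) > 0) == bool(xs)
assert (len(xs) == 0) == (not xs)
```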
2023-06-05  Zoom and Pan: simplify waitForOpts  [Aarni Koskela, -8/+6]
2023-06-05  Zoom and Pan: use for instead of forEach  [Aarni Koskela, -10/+7]
2023-06-05  Zoom and Pan: simplify getElements (it's not actually async)  [Aarni Koskela, -10/+4]
2023-06-05  Zoom and Pan: use elementIDs from closure scope  [Aarni Koskela, -18/+18]
2023-06-05  Zoom and Pan: move helpers into a namespace to avoid littering the global scope  [Aarni Koskela, -97/+95]
2023-06-05  Merge branch 'master' into dev  [AUTOMATIC, -0/+6]
2023-06-05  Merge branch 'release_candidate'  [AUTOMATIC, -13/+20]
2023-06-04  Merge pull request #11013 from ramyma/get_latent_upscale_modes_api  [AUTOMATIC1111, -0/+12]
    Get latent upscale modes API endpoint
2023-06-04  Add endpoint to get latent_upscale_modes for hires fix  [ramyma, -0/+12]
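The shape of such an endpoint's payload might look like the sketch below. This is an assumption for illustration only: the real route is registered on the webui's FastAPI app, and the actual mode names come from the webui's upscaler registry, not this hard-coded subset:

```python
# Illustrative subset of mode names (assumption, not the real registry).
LATENT_UPSCALE_MODES = ["Latent", "Latent (nearest)"]

def get_latent_upscale_modes():
    """Return modes in a list-of-objects shape, so clients can list
    them the same way they list samplers or upscalers."""
    return {"latent_upscale_modes": [{"name": name} for name in LATENT_UPSCALE_MODES]}
```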
2023-06-04  prevent calculating conds for second pass of hires fix when they are the same as for the first pass  [AUTOMATIC, -7/+13]
2023-06-04  fix for conds of second hires fix pass being calculated using first pass's networks, and add an option to revert to old behavior  [AUTOMATIC, -3/+36]
2023-06-04  Merge pull request #10997 from AUTOMATIC1111/fix-conds-caching-with-extra-network  [AUTOMATIC1111, -15/+16]
    fix conds caching with extra network
2023-06-04  Merge pull request #10990 from vkage/sd_hijack_optimizations_bugfix  [AUTOMATIC1111, -1/+1]
    torch.cuda.is_available() check for SdOptimizationXformers
2023-06-04  fix the broken line for #10990  [AUTOMATIC, -1/+1]
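The guard from #10990 amounts to checking CUDA availability before offering the optimization. A hedged sketch (the function name is illustrative; wrapped in try/except so it also runs where torch isn't installed):

```python
def xformers_applicable():
    """Only report the Xformers optimization as usable when CUDA
    actually is available."""
    try:
        import torch
    except ImportError:
        return False
    return torch.cuda.is_available()
```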
2023-06-04  Merge pull request #10975 from AUTOMATIC1111/restart3  [AUTOMATIC1111, -15/+38]
    Yet another method to restart the webui.
2023-06-04  Merge pull request #10980 from AUTOMATIC1111/sysinfo  [AUTOMATIC1111, -2/+242]
    Added sysinfo tab to settings