Age | Commit message | Author | Lines | |
---|---|---|---|---|
2023-06-18 | Fix typo in hints.js | zhtttylz | -2/+2 | |
2023-06-18 | update the description of --add-stop-route | w-e-w | -1/+1 | |
2023-06-16 | :bug: Allow Script to have metaclass | huchenlei | -1/+2 | |
2023-06-16 | fix very slow loading speed of .safetensors files | dhwz | -2/+6 | |
2023-06-15 | Add an opt-in infotext user name setting | Jared Deckard | -1/+2 | |
2023-06-15 | Add a user pattern to the filename generator | Jared Deckard | -0/+1 | |
2023-06-14 | Note the Gradio user in the Exif data | Jared Deckard | -3/+11 | |
2023-06-15 | git clone show progress | w-e-w | -2/+2 | |
2023-06-14 | Fix gradio special args in the call queue | Jared Deckard | -0/+3 | |
2023-06-14 | terminate -> stop | w-e-w | -3/+3 | |
2023-06-14 | respond 501 if not able to restart | w-e-w | -0/+1 | |
2023-06-14 | update workflow kill test server | w-e-w | -1/+1 | |
2023-06-14 | rename routes | w-e-w | -5/+5 | |
2023-06-14 | Formatting code with Prettier | Danil Boldyrev | -75/+95 | |
2023-06-14 | Reworked the disabling of functions, refactored part of the code | Danil Boldyrev | -135/+121 | |
2023-06-13 | textual_inversion/logging.py: clean up duplicate key from sets (and sort them) (Ruff B033) | Aarni Koskela | -4/+44 | |
2023-06-13 | Upgrade ruff to 272 | Aarni Koskela | -1/+1 | |
2023-06-13 | Remove stray space from SwinIR model URL | Aarni Koskela | -3/+2 | |
2023-06-13 | Upscaler.load_model: don't return None, just use exceptions | Aarni Koskela | -64/+52 | |
2023-06-13 | Add TODO comments to sus model loads | Aarni Koskela | -0/+2 | |
2023-06-13 | Fix up `if "http" in ...:` checks to be more sensible `startswith` prefix tests (see the sketch below the table) | Aarni Koskela | -8/+8 | |
2023-06-13 | Move `load_file_from_url` to modelloader | Aarni Koskela | -18/+39 | |
2023-06-13 | Use os.makedirs(..., exist_ok=True) | Aarni Koskela | -15/+6 | |
2023-06-12 | remove console.log | Danil Boldyrev | -2/+0 | |
2023-06-12 | Improved error output, improved settings menu | Danil Boldyrev | -38/+145 | |
2023-06-12 | remove fastapi.Response | w-e-w | -1/+1 | |
2023-06-12 | move _stop route to api | w-e-w | -9/+9 | |
2023-06-12 | update model checkpoint switch code | Su Wei | -5/+4 | |
2023-06-10 | quit restart | w-e-w | -1/+10 | |
2023-06-09 | fixed typos | arch-fan | -2/+2 | |
2023-06-09 | Merge branch 'dev' into release_candidate | AUTOMATIC | -977/+3244 | |
2023-06-09 | add changelog for 1.4.0 | AUTOMATIC | -0/+57 | |
2023-06-09 | linter | AUTOMATIC | -1/+1 | |
2023-06-09 | Merge pull request #11092 from AUTOMATIC1111/Generate-Forever-during-generation: Allow activation of Generate Forever during generation | AUTOMATIC1111 | -4/+10 | |
2023-06-09 | Merge pull request #11087 from AUTOMATIC1111/persistent_conds_cache: persistent conds cache | AUTOMATIC1111 | -10/+17 | |
2023-06-09 | Merge pull request #11123 from akx/dont-die-on-bad-symlink-lora: Don't die when a LoRA is a broken symlink | AUTOMATIC1111 | -1/+5 | |
2023-06-09 | Merge pull request #10295 from Splendide-Imaginarius/mk2-blur-mask: Split mask blur into X and Y components, patch Outpainting MK2 accordingly | AUTOMATIC1111 | -13/+38 | |
2023-06-09 | Merge pull request #11048 from DGdev91/force_python1_navi_renoir: Forcing Torch Version to 1.13.1 for RX 5000 series GPUs | AUTOMATIC1111 | -1/+16 | |
2023-06-09 | Don't die when a LoRA is a broken symlink (fixes #11098; see the sketch below the table) | Aarni Koskela | -1/+5 | |
2023-06-09 | Split Outpainting MK2 mask blur into X and Y components: fixes unexpected noise in non-outpainted borders when using the MK2 script (see the sketch below the table) | Splendide Imaginarius | -9/+21 | |
2023-06-09 | Split mask blur into X and Y components: prerequisite to fixing the Outpainting MK2 mask blur bug | Splendide Imaginarius | -4/+17 | |
2023-06-09 | add model exists status check to modules/api/api.py, /sdapi/v1/options [POST] | Su Wei | -1/+6 | |
2023-06-08 | Generate Forever during generation | w-e-w | -4/+10 | |
2023-06-08 | persistent conds cache: Update shared.py | w-e-w | -10/+17 | |
2023-06-07 | Merge pull request #11058 from AUTOMATIC1111/api-wiki: link footer API to Wiki when API is not active | AUTOMATIC1111 | -2/+2 | |
2023-06-07 | Merge pull request #11066 from aljungberg/patch-1: Fix upcast attention dtype error. | AUTOMATIC1111 | -1/+1 | |
2023-06-06 | Fix upcast attention dtype error: enabling "Upcast cross attention layer to float32" together with `--opt-sdp-attention` made `sdp_attnblock_forward` fail with `RuntimeError: Expected query, key, and value to have the same dtype`; the fix upcasts the value tensor as well (see the sketch below the table) | Alexander Ljungberg | -1/+1 | |
2023-06-06 | Skip forcing Python and PyTorch versions if TORCH_COMMAND is already set | DGdev91 | -9/+12 | |
2023-06-06 | link footer API to Wiki when API is not active | w-e-w | -2/+2 | |
2023-06-06 | Write "RX 5000 Series" instead of "Navi" in err | DGdev91 | -1/+1 | |
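
The sketches below expand on a few of the entries above. First, the 2023-06-13 "startswith" cleanup: a minimal, hypothetical illustration of why a substring test for URLs is fragile and what a prefix check looks like. The `is_url` helper name is an assumption for illustration, not code from the repository.

```python
# Hypothetical illustration of the `"http" in ...` -> startswith cleanup.
# A substring test also matches local paths that merely contain "http",
# while an explicit prefix check only matches real URLs.

def is_url(name: str) -> bool:
    # str.startswith accepts a tuple of prefixes
    return name.startswith(("http://", "https://"))


print(is_url("https://example.com/model.safetensors"))  # True
print(is_url("models/http_notes.txt"))                  # False, but '"http" in name' would be True
```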
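For the "Don't die when a LoRA is a broken symlink" fix (#11098), here is a minimal sketch of the general idea: while scanning a model directory, skip symlinks whose target is missing instead of raising. The `list_available_loras` helper and its extension filter are assumptions for illustration, not the webui's actual implementation.

```python
import os


def list_available_loras(lora_dir: str) -> list[str]:
    """Collect LoRA files, skipping broken symlinks instead of crashing."""
    found = []
    for root, _dirs, files in os.walk(lora_dir, followlinks=True):
        for name in files:
            path = os.path.join(root, name)
            # A broken symlink is still listed by os.walk(), but its target is
            # gone: os.path.exists() follows the link and returns False.
            if not os.path.exists(path):
                print(f"Skipping broken symlink: {path}")
                continue
            if os.path.splitext(name)[1].lower() in {".safetensors", ".pt", ".ckpt"}:
                found.append(path)
    return found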
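For the mask-blur split (#10295): a hedged sketch of blurring a mask with independent horizontal and vertical radii, which is what lets Outpainting MK2 soften only the edge being extended and keep the non-outpainted borders free of noise. The `blur_mask` helper and the use of OpenCV are assumptions for illustration; the actual patch lives in the webui's processing code.

```python
import numpy as np
import cv2  # OpenCV's GaussianBlur takes separate x/y kernel sizes and sigmas


def blur_mask(mask: np.ndarray, blur_x: float, blur_y: float) -> np.ndarray:
    """Blur a uint8 mask with independent horizontal and vertical radii."""
    # Kernel sizes must be odd and positive; derive them from the radii.
    kx = 2 * int(round(blur_x)) + 1
    ky = 2 * int(round(blur_y)) + 1
    return cv2.GaussianBlur(mask, (kx, ky), blur_x, sigmaY=blur_y)


# Outpainting-style usage: blur only along the axis being extended, so the
# untouched borders keep a hard edge.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[:, 256:] = 255                        # extending the image to the right
soft = blur_mask(mask, blur_x=32, blur_y=0)
```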
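Finally, for the upcast attention dtype error: a minimal sketch, following the idea described in the commit body, of promoting q, k, and v to a common dtype before calling `torch.nn.functional.scaled_dot_product_attention`, so a float16 value tensor no longer clashes with float32 query/key tensors. The wrapper name and tensor shapes are illustrative, not the webui's `sdp_attnblock_forward` itself.

```python
import torch


def sdp_attention_upcast(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Run SDP attention after promoting q, k and v to one common dtype."""
    dtype = torch.promote_types(torch.promote_types(q.dtype, k.dtype), v.dtype)
    q, k, v = q.to(dtype), k.to(dtype), v.to(dtype)
    return torch.nn.functional.scaled_dot_product_attention(
        q, k, v, dropout_p=0.0, is_causal=False
    )


# The failing combination from the bug report: float32 query/key tensors with
# a half-precision value tensor ("query.dtype: float ... value.dtype: c10::Half").
q = torch.randn(1, 8, 64, 40)
k = torch.randn(1, 8, 64, 40)
v = torch.randn(1, 8, 64, 40, dtype=torch.float16)
out = sdp_attention_upcast(q, k, v)
print(out.dtype)  # torch.float32
```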