path: root/modules
Age         Commit message  (Author, Lines changed)
2023-01-28  uses autos new regex, checks len of re_param  (EllangoK, -2/+2)

2023-01-26  adds components to infotext_fields  (EllangoK, -0/+14)
    allows for loading script params

2023-01-25  re_param captures quotes with commas properly  (EllangoK, -3/+3)
    and removes unnecessary regex

2023-01-25  fix for unet hijack breaking the train tab  (AUTOMATIC, -2/+5)

2023-01-25  make clicking extra networks button one more time close the extra networks UI  (AUTOMATIC, -2/+7)

2023-01-25  Merge pull request #6510 from brkirch/unet16-upcast-precision  (AUTOMATIC1111, -71/+188)
    Add upcast options, full precision sampling from float16 UNet and
    upcasting attention for inference using SD 2.1 models without --no-half

2023-01-25  change to code for live preview fix on OSX to be bit more obvious  (AUTOMATIC, -2/+2)

2023-01-25  Merge pull request #7151 from brkirch/fix-approx-nn  (AUTOMATIC1111, -1/+5)
    Fix Approx NN previews changing first generation result

2023-01-25  Add instruct-pix2pix hijack  (Kyle, -1/+1483)
    Allows loading instruct-pix2pix models via the same method as inpainting
    models in sd_models.py and sd_hijack_ip2p.py. Adds ddpm_edit.py, necessary
    for instruct-pix2pix.

2023-01-25  Merge pull request #7146 from EllangoK/master  (AUTOMATIC1111, -1/+1)
    Adds X/Y/Z Grid Script

2023-01-25  Add UI setting for upcasting attention to float32  (brkirch, -64/+108)
    Adds "Upcast cross attention layer to float32" option in Stable Diffusion
    settings. This allows for generating images using SD 2.1 models without
    --no-half or xFormers. In order to make upcasting cross attention layer
    optimizations possible it is necessary to indent several sections of code
    in sd_hijack_optimizations.py so that a context manager can be used to
    disable autocast. Also, even though Stable Diffusion (and Diffusers) only
    upcast q and k, unfortunately my findings were that most of the cross
    attention layer optimizations could not function unless v is upcast also.
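    The commit above describes upcasting q, k, and (per the author's findings)
    v to float32 before the attention math. A minimal sketch of the idea, with
    NumPy standing in for torch; the function name and toy single-head
    attention are illustrative, not the repo's actual
    sd_hijack_optimizations.py code:

```python
import numpy as np

def cross_attention(q, k, v, upcast=True):
    """Toy single-head attention illustrating the float32 upcast.

    With upcast=True the matmul/softmax runs in float32 even though the
    model hands in float16 tensors; the result is cast back to float16
    for the rest of the half-precision model.
    """
    if upcast:
        # per the commit, v must be upcast too, not just q and k
        q, k, v = (t.astype(np.float32) for t in (q, k, v))
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ v).astype(np.float16)
```

    Running the softmax in float16 is where SD 2.1 checkpoints tend to
    produce NaNs without --no-half, which is what makes the upcast worthwhile.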
2023-01-25  Add option for float32 sampling with float16 UNet  (brkirch, -8/+81)
    This also handles type casting so that ROCm and MPS torch devices work
    correctly without --no-half. One cast is required for deepbooru in
    deepbooru_model.py, some explicit casting is required for img2img and
    inpainting. depth_model can't be converted to float16 or it won't work
    correctly on some systems (it's known to have issues on MPS), so in
    sd_models.py model.depth_model is removed for model.half().
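    The casting pattern this commit describes can be sketched as a thin
    wrapper: the sampler keeps its math in float32, and tensors are cast
    down/up at the UNet boundary. A hedged sketch with NumPy standing in for
    torch and a plain callable standing in for the UNet (all names here are
    illustrative):

```python
import numpy as np

class CastingUnetWrapper:
    """Let the sampler work in float32 while the UNet weights stay float16.

    Sketch only: the real change does the equivalent casts on torch tensors
    so ROCm/MPS devices work without --no-half.
    """
    def __init__(self, unet, unet_dtype=np.float16):
        self.unet = unet
        self.unet_dtype = unet_dtype

    def __call__(self, x):
        eps = self.unet(x.astype(self.unet_dtype))  # UNet sees float16 input
        return eps.astype(np.float32)               # sampler keeps float32 math
```

    The design choice is to pay two casts per UNet call in exchange for
    keeping the sampler's accumulation in full precision.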
2023-01-24  remove fairscale requirement, add fake fairscale to make BLIP not complain about it mk2  (AUTOMATIC, -1/+1)

2023-01-24  remove fairscale requirement, add fake fairscale to make BLIP not complain about it  (AUTOMATIC, -2/+9)

2023-01-24  handling sub grids and merging into one  (EllangoK, -1/+1)

2023-01-24  also return the removed field to sdapi/v1/upscalers because someone might have relied on it existing  (AUTOMATIC, -0/+2)

2023-01-24  repair sdapi/v1/upscalers returning bogus results  (AUTOMATIC, -8/+10)

2023-01-23  Fix different first gen with Approx NN previews  (brkirch, -1/+5)
    The loading of the model for approx nn live previews can change the
    internal state of PyTorch, resulting in a different image. This can be
    avoided by preloading the approx nn model in advance.
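    The fix described above boils down to a preload-and-cache pattern: do the
    state-perturbing load once, up front, so it can never happen mid-way
    through the first generation. A hedged sketch (function names are
    illustrative, not the repo's actual API):

```python
_approx_model = None  # module-level cache

def get_approx_model(load_fn):
    """Load the approx NN preview model exactly once and reuse it."""
    global _approx_model
    if _approx_model is None:
        # the load itself can perturb global PyTorch state, so it must
        # happen before, not during, the first image generation
        _approx_model = load_fn()
    return _approx_model

def preload_approx_model(load_fn):
    # called at startup, before any generation begins
    get_approx_model(load_fn)
```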
2023-01-23  add image decod exception handling  (Vladimir Mandic, -1/+5)

2023-01-24  fix BLIP failing to import depending on configuration  (AUTOMATIC, -1/+16)

2023-01-24  Merge pull request #7113 from vladmandic/interrogate  (AUTOMATIC1111, -16/+26)
    Add selector to interrogate categories

2023-01-23  add support for apostrophe in extra network names  (AUTOMATIC, -2/+4)

2023-01-23  add option to skip interrogate categories  (Vladimir Mandic, -15/+19)

2023-01-23  Merge branch 'AUTOMATIC1111:master' into interrogate  (Vladimir Mandic, -12/+31)

2023-01-23  Merge pull request #7032 from gmq/extra-network-styles  (AUTOMATIC1111, -1/+6)
    Extra network view style

2023-01-23  api-image-format  (Vladimir Mandic, -10/+24)

2023-01-23  a possible fix for broken image upscaling  (AUTOMATIC, -1/+1)

2023-01-23  improve interrogate  (Vladimir Mandic, -12/+18)

2023-01-23  better support for xformers flash attention on older versions of torch  (AUTOMATIC, -24/+30)

2023-01-23  Merge remote-tracking branch 'takuma104/xformers-flash-attention'  (AUTOMATIC, -2/+25)

2023-01-23  fix open directory button failing  (AUTOMATIC, -2/+1)

2023-01-23  Merge pull request #7031 from EllangoK/master  (AUTOMATIC1111, -1/+1)
    Fixes various button overflowing UI and compact checkbox

2023-01-23  Merge pull request #7093 from Shondoit/fix-dark-mode  (AUTOMATIC1111, -2/+2)
    Fix dark mode

2023-01-23  third time's the charm  (AUTOMATIC, -1/+1)

2023-01-23  add missing import to previous commit  (AUTOMATIC, -0/+1)

2023-01-23  Fix dark mode  (Shondoit, -2/+2)
    Fixes #7048. Co-Authored-By: J.J. Tolton <jjtolton@gmail.com>

2023-01-23  rework extras tab to use script system  (AUTOMATIC, -459/+500)

2023-01-22  feat(extra-networks): remove view dropdown  (Guillermo Moreno, -11/+9)

2023-01-22  feat(extra-networks): add default view setting  (Guillermo Moreno, -4/+8)

2023-01-22  feat(extra-networks): add thumbs view style  (Guillermo Moreno, -9/+12)

2023-01-22  split oversize extras.py to postprocessing.py  (AUTOMATIC, -473/+18)

2023-01-22  Split history extras.py to postprocessing.py  (Andrey, -0/+0)

2023-01-22  Split history extras.py to postprocessing.py  (Andrey, -0/+466)

2023-01-22  Split history extras.py to postprocessing.py  (Andrey, -0/+0)

2023-01-22  Split history extras.py to postprocessing.py  (Andrey, -0/+0)

2023-01-22  amend previous commit to work in a proper fashion when saving previews  (AUTOMATIC, -2/+2)

2023-01-22  add an option to reorder tabs for extra networks  (AUTOMATIC, -1/+18)

2023-01-22  add option to discard weights in checkpoint merger UI  (AUTOMATIC, -1/+12)
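    A discard-weights option in a checkpoint merger presumably filters the
    merged state dict by a user-supplied regex before saving. A minimal
    sketch under that assumption (the function name and key layout are
    illustrative, not the repo's actual implementation):

```python
import re

def discard_weights(state_dict, pattern):
    """Drop every tensor whose key matches the regex; keep the rest.

    An empty pattern discards nothing, mirroring a blank UI field.
    """
    if not pattern:
        return dict(state_dict)
    rx = re.compile(pattern)
    return {k: v for k, v in state_dict.items() if not rx.search(k)}
```

    For example, a pattern matching the VAE's key prefix would strip a
    bundled VAE from the merged checkpoint.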
2023-01-22  fix missing field for aesthetic embedding extension  (AUTOMATIC, -1/+3)

2023-01-22  attention ctrl+up/down enhancements  (AUTOMATIC, -3/+5)