path: root/modules
Commit message   Author   Date   Files   Lines (-/+)
...
| | * | | | | removing underscores and colons   Greendayle   2022-10-05   1   -1/+1
| | * | | | | better model search   Greendayle   2022-10-05   1   -2/+9
| | * | | | | removing problematic tag   Greendayle   2022-10-05   1   -3/+2
| | * | | | | deepdanbooru interrogator   Greendayle   2022-10-05   2   -5/+79
| * | | | | | Support `Download` for txt files.   aoirusann   2022-10-09   2   -3/+41
| * | | | | | Add `Download` & `Download as zip`   aoirusann   2022-10-09   1   -5/+34
| * | | | | | fixed incorrect message about loading config; thanks anon!   AUTOMATIC   2022-10-09   1   -1/+1
| * | | | | | make main model loading and model merger use the same code   AUTOMATIC   2022-10-09   2   -8/+12
* | | | | | | remove braces from steps   DepFA   2022-10-09   1   -1/+1
* | | | | | | change caption method   DepFA   2022-10-09   1   -9/+21
* | | | | | | add caption image with overlay   DepFA   2022-10-09   1   -0/+46
* | | | | | | change source of step count   DepFA   2022-10-09   1   -8/+2
* | | | | | | source checkpoint hash from current checkpoint   DepFA   2022-10-09   1   -4/+2
* | | | | | | correct case on embeddingFromB64   DepFA   2022-10-09   1   -1/+1
* | | | | | | change json tensor key name   DepFA   2022-10-09   1   -3/+3
* | | | | | | add encoder and decoder classes   DepFA   2022-10-09   1   -0/+21
* | | | | | | add alternate checkpoint hash source   DepFA   2022-10-09   1   -2/+5
* | | | | | | add embedding load and save from b64 json   DepFA   2022-10-09   1   -9/+21
* | | | | | | Add pretty image captioning functions   DepFA   2022-10-09   1   -0/+31
* | | | | | | add embed embedding to ui   DepFA   2022-10-09   1   -1/+3
* | | | | | | Update textual_inversion.py   DepFA   2022-10-09   1   -3/+22
|/ / / / / /
* | | | | | support loading .yaml config with same name as model   AUTOMATIC   2022-10-08   2   -8/+24
* | | | | | chore: Fix typos   Aidan Holland   2022-10-08   8   -15/+15
* | | | | | Break after finding the local directory of stable diffusion   Edouard Leurent   2022-10-08   1   -0/+1
* | | | | | add 'Ignore last layers of CLIP model' option as a parameter to the infotext   AUTOMATIC   2022-10-08   1   -1/+5
* | | | | | make --force-enable-xformers work without needing --xformers   AUTOMATIC   2022-10-08   1   -1/+1
* | | | | | Added ability to ignore last n layers in FrozenCLIPEmbedder   Fampai   2022-10-08   2   -2/+10
* | | | | | Update ui.py   DepFA   2022-10-08   1   -1/+1
* | | | | | TI preprocess wording   DepFA   2022-10-08   1   -3/+3
| |_|_|_|/ |/| | | |
* | | | | add --force-enable-xformers option and also add messages to console regarding...   AUTOMATIC   2022-10-08   2   -1/+6
* | | | | add fallback for xformers_attnblock_forward   AUTOMATIC   2022-10-08   1   -1/+4
* | | | | alternate prompt   Artem Zagidulin   2022-10-08   1   -2/+7
* | | | | check for ampere without destroying the optimizations. again.   C43H66N12O12S2   2022-10-08   1   -4/+3
* | | | | check for ampere   C43H66N12O12S2   2022-10-08   1   -3/+4
| |_|_|/ |/| | |
* | | | why did you do this   AUTOMATIC   2022-10-08   1   -1/+1
| |_|/ |/| |
* | | Fixed typo   Milly   2022-10-08   1   -1/+1
* | | restore old opt_split_attention/disable_opt_split_attention logic   AUTOMATIC   2022-10-08   1   -1/+1
* | | simplify xfrmers options: --xformers to enable and that's it   AUTOMATIC   2022-10-08   3   -9/+15
* | | emergency fix for xformers (continue + shared)   AUTOMATIC   2022-10-08   1   -8/+8
* | | Merge pull request #1851 from C43H66N12O12S2/flash   AUTOMATIC1111   2022-10-08   3   -6/+45
|\ \ \
| * | | Update sd_hijack.py   C43H66N12O12S2   2022-10-08   1   -1/+1
| * | | update sd_hijack_opt to respect new env variables   C43H66N12O12S2   2022-10-08   1   -3/+8
| * | | add xformers_available shared variable   C43H66N12O12S2   2022-10-08   1   -1/+1
| * | | default to split attention if cuda is available and xformers is not   C43H66N12O12S2   2022-10-08   1   -2/+2
| * | | use new attnblock for xformers path   C43H66N12O12S2   2022-10-08   1   -1/+1
| * | | Update sd_hijack_optimizations.py   C43H66N12O12S2   2022-10-08   1   -1/+1
| * | | add xformers attnblock and hypernetwork support   C43H66N12O12S2   2022-10-08   1   -2/+18
| * | | delete broken and unnecessary aliases   C43H66N12O12S2   2022-10-08   1   -6/+4
| * | | switch to the proper way of calling xformers   C43H66N12O12S2   2022-10-08   1   -25/+3
| * | | Update sd_hijack.py   C43H66N12O12S2   2022-10-07   1   -1/+1