path: root/modules/textual_inversion
Commit message | Author | Age | Files | Lines
* move embedding logic to separate file | DepFA | 2022-10-11 | 1 | -0/+234
* Merge branch 'master' into embed-embeddings-in-images | DepFA | 2022-10-11 | 2 | -6/+8
|\
| * fixes related to merge | AUTOMATIC | 2022-10-11 | 1 | -5/+7
| * Merge branch 'master' into hypernetwork-training | AUTOMATIC | 2022-10-11 | 3 | -14/+22
| |\
| * | hypernetwork training mk1 | AUTOMATIC | 2022-10-07 | 1 | -1/+0
* | | use simple lcg in xor | DepFA | 2022-10-11 | 1 | -2/+8
* | | colour depth conversion fix | DepFA | 2022-10-10 | 1 | -1/+1
* | | add dependency | DepFA | 2022-10-10 | 1 | -1/+1
* | | update data display style | DepFA | 2022-10-10 | 1 | -23/+65
* | | convert back to rgb as some hosts add alpha | DepFA | 2022-10-10 | 1 | -1/+1
* | | add pixel data footer | DepFA | 2022-10-10 | 1 | -2/+46
* | | Merge branch 'master' into embed-embeddings-in-images | DepFA | 2022-10-10 | 3 | -14/+22
|\ \ \
| | |/
| |/|
| * | Custom Width and Height | alg-wiki | 2022-10-10 | 3 | -19/+18
| * | Fixed progress bar output for epoch | alg-wiki | 2022-10-10 | 1 | -1/+1
| * | Textual Inversion: Added custom training image size and number of repeats per... | alg-wiki | 2022-10-10 | 3 | -8/+17
| |/
* | remove braces from steps | DepFA | 2022-10-09 | 1 | -1/+1
* | change caption method | DepFA | 2022-10-09 | 1 | -9/+21
* | change source of step count | DepFA | 2022-10-09 | 1 | -8/+2
* | source checkpoint hash from current checkpoint | DepFA | 2022-10-09 | 1 | -4/+2
* | correct case on embeddingFromB64 | DepFA | 2022-10-09 | 1 | -1/+1
* | change json tensor key name | DepFA | 2022-10-09 | 1 | -3/+3
* | add encoder and decoder classes | DepFA | 2022-10-09 | 1 | -0/+21
* | add alternate checkpoint hash source | DepFA | 2022-10-09 | 1 | -2/+5
* | add embedding load and save from b64 json | DepFA | 2022-10-09 | 1 | -9/+21
* | Update textual_inversion.py | DepFA | 2022-10-09 | 1 | -3/+22
|/
* removed unused import, fixed typo | Raphael Stoeckli | 2022-10-06 | 1 | -2/+1
* Add sanitizer for captions in Textual inversion | Raphael Stoeckli | 2022-10-06 | 1 | -0/+28
* add support for gelbooru tags in filenames for textual inversion | AUTOMATIC | 2022-10-04 | 2 | -3/+8
* fix broken date in TI | AUTOMATIC | 2022-10-03 | 1 | -1/+1
* keep textual inversion dataset latents in CPU memory to save a bit of VRAM | AUTOMATIC | 2022-10-02 | 2 | -0/+5
* preprocessing for textual inversion added | AUTOMATIC | 2022-10-02 | 3 | -3/+87
* disabled SD model download after multiple complaints | AUTOMATIC | 2022-10-02 | 1 | -1/+1
* add checkpoint info to saved embeddings | AUTOMATIC | 2022-10-02 | 1 | -1/+12
* fix using aaaa-100 embedding when the prompt has aaaa-10000 and you have both... | AUTOMATIC | 2022-10-02 | 1 | -1/+2
* fix for incorrect embedding token length calculation (will break seeds that u... | AUTOMATIC | 2022-10-02 | 2 | -10/+7
* initial support for training textual inversion | AUTOMATIC | 2022-10-02 | 3 | -0/+366