path: root/modules/textual_inversion/textual_inversion.py
Commit message (author, date, files changed, lines -/+)
...
* add job info to modules (Vladimir Mandic, 2023-01-03, 1 file, -0/+1)
* feat(api): return more data for embeddings (Philpax, 2023-01-02, 1 file, -4/+4)
* fix the issue with training on SD2.0 (AUTOMATIC, 2023-01-01, 1 file, -2/+1)
* changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 (AUTOMATIC, 2022-12-31, 1 file, -24/+6)
* validate textual inversion embeddings (Vladimir Mandic, 2022-12-31, 1 file, -5/+38)
* fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -1/+1)
* Fix various typos (Jim Hays, 2022-12-15, 1 file, -8/+8)
* Merge branch 'master' into racecond_fix (AUTOMATIC1111, 2022-12-03, 1 file, -148/+186)
* Use devices.autocast instead of torch.autocast (brkirch, 2022-11-30, 1 file, -1/+1)
* Merge remote-tracking branch 'flamelaw/master' (AUTOMATIC, 2022-11-27, 1 file, -141/+182)
* set TI AdamW default weight decay to 0 (flamelaw, 2022-11-26, 1 file, -1/+1)
* small fixes (flamelaw, 2022-11-22, 1 file, -1/+1)
* fix pin_memory with different latent sampling method (flamelaw, 2022-11-21, 1 file, -6/+1)
* Gradient accumulation, autocast fix, new latent sampling method, etc (flamelaw, 2022-11-20, 1 file, -137/+183)
* Add support Stable Diffusion 2.0 (AUTOMATIC, 2022-11-26, 1 file, -4/+3)
* change StableDiffusionProcessing to internally use sampler name instead of sampler index (AUTOMATIC, 2022-11-19, 1 file, -2/+2)
* Fixes race condition in training when VAE is unloaded (Fampai, 2022-11-04, 1 file, -0/+5)
    set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
* fixed textual inversion training with inpainting models (Nerogar, 2022-11-01, 1 file, -1/+26)
* Simplify grad clip (Muhammad Rizqi Nur, 2022-11-05, 1 file, -9/+7)
* Merge branch 'master' into gradient-clipping (Muhammad Rizqi Nur, 2022-11-02, 1 file, -0/+10)
* Fixed minor bug (Fampai, 2022-10-31, 1 file, -0/+1)
    when unloading vae during TI training, generating images after training will error out
* Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations (Fampai, 2022-10-31, 1 file, -25/+62)
* Added TI training optimizations (Fampai, 2022-10-31, 1 file, -0/+9)
    option to use xattention optimizations when training
    option to unload vae when training
* Merge master (Muhammad Rizqi Nur, 2022-10-30, 1 file, -30/+66)
* Fix dataset still being loaded even when training will be skipped (Muhammad Rizqi Nur, 2022-10-29, 1 file, -1/+1)
* Add missing info on hypernetwork/embedding model log (Muhammad Rizqi Nur, 2022-10-29, 1 file, -13/+26)
    Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
    Also group the saving into one
* Revert "Add cleanup after training" (Muhammad Rizqi Nur, 2022-10-29, 1 file, -95/+90)
    This reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1.
* Add cleanup after training (Muhammad Rizqi Nur, 2022-10-29, 1 file, -90/+95)
* Add input validations before loading dataset for training (Muhammad Rizqi Nur, 2022-10-29, 1 file, -12/+36)
* Merge branch 'master' into gradient-clipping (Muhammad Rizqi Nur, 2022-10-29, 1 file, -12/+12)
* Fix log off by 1 (Muhammad Rizqi Nur, 2022-10-28, 1 file, -12/+12)
* Learning rate sched syntax support for grad clipping (Muhammad Rizqi Nur, 2022-10-28, 1 file, -3/+9)
* Gradient clipping for textual embedding (Muhammad Rizqi Nur, 2022-10-28, 1 file, -1/+10)
* typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir (DepFA, 2022-10-26, 1 file, -1/+1)
* Implement PR #3625 but for embeddings. (timntorres, 2022-10-26, 1 file, -1/+1)
* Implement PR #3309 but for embeddings. (timntorres, 2022-10-26, 1 file, -1/+8)
* Implement PR #3189 but for embeddings. (timntorres, 2022-10-26, 1 file, -5/+5)
* enable creating embedding with --medvram (AUTOMATIC, 2022-10-26, 1 file, -0/+3)
* Merge branch 'ae' (AUTOMATIC, 2022-10-21, 1 file, -0/+1)
* Merge branch 'master' into test_resolve_conflicts (MalumaDev, 2022-10-18, 1 file, -0/+1)
* Merge branch 'master' into test_resolve_conflicts (MalumaDev, 2022-10-16, 1 file, -0/+1)
* Merge branch 'master' into test_resolve_conflicts (MalumaDev, 2022-10-15, 1 file, -4/+13)
* Merge branch 'master' into test_resolve_conflicts (MalumaDev, 2022-10-15, 1 file, -25/+54)
* init (MalumaDev, 2022-10-14, 1 file, -10/+25)
* allow overwrite old embedding (DepFA, 2022-10-19, 1 file, -2/+3)
* Removed two unused imports (Melan, 2022-10-24, 1 file, -1/+0)
* Some changes to the tensorboard code and hypernetwork support (Melan, 2022-10-20, 1 file, -18/+27)
* Fixed a typo in a variable (Melan, 2022-10-20, 1 file, -5/+5)
* Add support for Tensorboard for training embeddings (Melan, 2022-10-20, 1 file, -1/+30)
* print list of embeddings on reload (DepFA, 2022-10-17, 1 file, -0/+1)