path: root/modules/textual_inversion
Commit message | Author | Date | Files | Lines
...
| | | | * Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes  (AUTOMATIC1111, 2022-12-03, 2 files, -3/+3)
| | | | |\
| | | | | |     Use devices.autocast() and fix MPS randn issues
| | | | | * Use devices.autocast instead of torch.autocast  (brkirch, 2022-11-30, 2 files, -3/+3)
| | | | | |
| | | | * | Fix divide by 0 error  (PhytoEpidemic, 2022-12-02, 1 file, -3/+3)
| | | | |/
| | | | |       Fixes an edge case where a weight of 0, which occasionally appears in some specific situations, crashed the script.
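The divide-by-zero fix above guards a normalization step against an all-zero weight. A hypothetical sketch of that kind of guard (function and fallback are illustrative, not the project's actual code):

```python
def normalize_weights(weights):
    """Divide each weight by the total, guarding the 0-total edge case."""
    total = sum(weights)
    if total == 0:
        # edge case: every weight is 0 -- fall back to a uniform distribution
        # instead of dividing by zero and crashing
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]
```

The fallback choice (uniform weights) is one reasonable option; simply skipping the sample would be another.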
| | | | * Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords  (AUTOMATIC1111, 2022-11-27, 1 file, -1/+1)
| | | | |\
| | | | | |     resolve [name] after resolving [filewords] in training
| | | | | * resolve [name] after resolving [filewords] in training  (parasi, 2022-11-13, 1 file, -1/+1)
| | | | | |
| | | | * | Merge remote-tracking branch 'flamelaw/master'  (AUTOMATIC, 2022-11-27, 2 files, -189/+274)
| | | | |\ \
| | | | | * | set TI AdamW default weight decay to 0  (flamelaw, 2022-11-26, 1 file, -1/+1)
| | | | | | |
| | | | | * | small fixes  (flamelaw, 2022-11-22, 1 file, -1/+1)
| | | | | | |
| | | | | * | fix pin_memory with different latent sampling method  (flamelaw, 2022-11-21, 2 files, -10/+20)
| | | | | | |
| | | | | * | fix random sampling with pin_memory  (flamelaw, 2022-11-20, 1 file, -1/+1)
| | | | | | |
| | | | | * | remove unnecessary comment  (flamelaw, 2022-11-20, 1 file, -9/+0)
| | | | | | |
| | | | | * | Gradient accumulation, autocast fix, new latent sampling method, etc  (flamelaw, 2022-11-20, 2 files, -185/+269)
| | | | | | |
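The gradient-accumulation commit above averages gradients over several micro-batches before each optimizer update, emulating a larger batch size. A simplified pure-Python sketch of that loop (the names and the float-valued "gradient" are illustrative; the real code accumulates tensor gradients in PyTorch):

```python
def train_with_accumulation(batches, grad_fn, apply_step, accum_steps):
    """Run grad_fn over batches, applying one optimizer step per
    accum_steps micro-batches using the averaged gradient.

    grad_fn(batch) -> gradient (a float here for simplicity);
    apply_step(avg_grad) performs one optimizer update.
    Returns the number of optimizer steps taken."""
    buffer = []
    steps = 0
    for batch in batches:
        buffer.append(grad_fn(batch))
        if len(buffer) == accum_steps:
            # one update per accum_steps micro-batches, using the mean gradient
            apply_step(sum(buffer) / len(buffer))
            buffer.clear()
            steps += 1
    return steps
```

With `accum_steps=2` and four batches, two optimizer steps are taken, each on the mean of two micro-batch gradients.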
| | | | * | | Add support for Stable Diffusion 2.0  (AUTOMATIC, 2022-11-26, 1 file, -4/+3)
| | | | | | |
| | | | * | | moved deepdanbooru to pure pytorch implementation  (AUTOMATIC, 2022-11-20, 1 file, -8/+4)
| | | | |/ /
| | | | * | Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing  (AUTOMATIC1111, 2022-11-19, 1 file, -1/+1)
| | | | |\ \
| | | | | | |   Add interrupt button to preprocessing
| | | | | * | Add interrupt button to preprocessing  (space-nuko, 2022-11-18, 1 file, -1/+1)
| | | | | |/
| | | | * / change StableDiffusionProcessing to internally use sampler name instead of sampler index  (AUTOMATIC, 2022-11-19, 1 file, -2/+2)
| | | | |/
| | | | * Merge pull request #4117 from TinkTheBoush/master  (AUTOMATIC1111, 2022-11-11, 1 file, -1/+6)
| | | | |\
| | | | | |     Adding optional tag shuffling for training
| | | | | * Update dataset.py  (KyuSeok Jung, 2022-11-11, 1 file, -1/+1)
| | | | | |
| | | | | * Update dataset.py  (KyuSeok Jung, 2022-11-11, 1 file, -1/+1)
| | | | | |
| | | | | * adding tag drop out option  (KyuSeok Jung, 2022-11-11, 1 file, -4/+4)
| | | | | |
| | | | | * change option position to Training setting  (TinkTheBoush, 2022-11-04, 2 files, -5/+4)
| | | | | |
| | | | | * Merge branch 'master' into master  (KyuSeok Jung, 2022-11-02, 2 files, -2/+15)
| | | | | |\
| | | | | * | append_tag_shuffle  (TinkTheBoush, 2022-11-01, 2 files, -4/+10)
| | | | | | |
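The tag shuffling and tag drop-out commits above regularize training by perturbing the comma-separated tags in each caption. A hypothetical sketch of what those options do (`process_tags` and its parameters are illustrative, not the project's actual function):

```python
import random

def process_tags(caption, shuffle=False, drop_out=0.0, rng=None):
    """Optionally drop each comma-separated tag with probability drop_out,
    then optionally shuffle the surviving tags, and re-join the caption."""
    rng = rng or random.Random()
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    if drop_out > 0:
        # keep a tag only if its random draw clears the drop-out threshold
        tags = [t for t in tags if rng.random() > drop_out]
    if shuffle:
        rng.shuffle(tags)
    return ", ".join(tags)
```

Applied once per epoch, this makes the embedding less dependent on any fixed tag order or on any single tag always being present.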
| | | * | | | Fixes race condition in training when VAE is unloaded  (Fampai, 2022-11-04, 1 file, -0/+5)
| | | | |_|/
| | | |/| |     set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
| | * / | | fixed textual inversion training with inpainting models  (Nerogar, 2022-11-01, 1 file, -1/+26)
| | |/ / /
| * | | | Merge branch 'master' into gradient-clipping  (Muhammad Rizqi Nur, 2022-11-09, 1 file, -69/+93)
| |\ \ \ \
| | | |/ /
| | |/| |
| | * | | move functions out of main body for image preprocessing for easier hijacking  (AUTOMATIC, 2022-11-08, 1 file, -69/+93)
| | |/ /
| * | | Simplify grad clip  (Muhammad Rizqi Nur, 2022-11-05, 1 file, -9/+7)
| | | |
| * | | Merge branch 'master' into gradient-clipping  (Muhammad Rizqi Nur, 2022-11-02, 2 files, -2/+15)
| |\| |
| | * | Fixed minor bug  (Fampai, 2022-10-31, 1 file, -0/+1)
| | | |     when unloading the VAE during TI training, generating images after training would error out
| | * | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations  (Fampai, 2022-10-31, 3 files, -39/+85)
| | |\|
| | * | Added TI training optimizations  (Fampai, 2022-10-31, 2 files, -2/+14)
| | | |     option to use xattention optimizations when training
| | | |     option to unload vae when training
| * | | Merge master  (Muhammad Rizqi Nur, 2022-10-30, 3 files, -44/+89)
| |\ \ \
| | | |/
| | |/|
| | * | Merge pull request #3928 from R-N/validate-before-load  (AUTOMATIC1111, 2022-10-30, 2 files, -25/+64)
| | |\ \
| | | | |   Optimize training a little
| | | * | Fix dataset still being loaded even when training will be skipped  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -1/+1)
| | | | |
| | | * | Add missing info on hypernetwork/embedding model log  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -13/+26)
| | | | |     Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513
| | | | |     Also groups the saving into one operation
| | | * | Revert "Add cleanup after training"  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -95/+90)
| | | | |     This reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1.
| | | * | Additional assert on dataset  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -0/+2)
| | | | |
| | | * | Add cleanup after training  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -90/+95)
| | | | |
| | | * | Add input validations before loading dataset for training  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -12/+36)
| | | |/
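The input-validation commit above fails fast on bad training arguments before the slow dataset load begins. A hypothetical sketch of that pattern (function name, parameters, and messages are illustrative, not the project's exact code):

```python
import os

def validate_train_inputs(learn_rate, batch_size, data_root, steps):
    """Raise early, before loading the dataset, if any training input is invalid."""
    assert learn_rate, "learning rate is empty"
    assert batch_size > 0, "batch size must be positive"
    assert steps > 0, "max steps must be positive"
    assert data_root and os.path.isdir(data_root), "dataset directory does not exist"
```

Validating up front means a typo in a text field costs seconds rather than surfacing after minutes of dataset preprocessing.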
| | * | Improve lr schedule error message  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -2/+2)
| | | |
| | * | Allow trailing comma in learning rate  (Muhammad Rizqi Nur, 2022-10-29, 1 file, -13/+20)
| | |/
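The trailing-comma commit above concerns the textual "rate:step" learning-rate schedule syntax, which previously rejected input like `0.005:100, 1e-3:1000,`. A simplified illustrative parser, not the project's exact code, showing how a trailing comma can be tolerated:

```python
def parse_lr_schedule(text):
    """Parse 'rate:step' pairs separated by commas; a rate without a step
    applies until the end of training. Tolerates a trailing comma."""
    pairs = []
    for chunk in text.split(","):
        chunk = chunk.strip()
        if not chunk:
            continue  # skip the empty piece left by a trailing comma
        if ":" in chunk:
            rate, step = chunk.split(":", 1)
            pairs.append((float(rate), int(step)))
        else:
            pairs.append((float(chunk), None))
    if not pairs:
        raise ValueError("empty learning rate schedule")
    return pairs
```

Skipping empty chunks instead of erroring is what makes `"0.005:100,"` parse the same as `"0.005:100"`.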
| * | Merge branch 'master' into gradient-clipping  (Muhammad Rizqi Nur, 2022-10-29, 3 files, -15/+15)
| |\|
| | * Merge pull request #3858 from R-N/log-csv  (AUTOMATIC1111, 2022-10-29, 2 files, -13/+13)
| | |\
| | | |     Fix log off by 1 #3847
| | | * Fix log off by 1  (Muhammad Rizqi Nur, 2022-10-28, 2 files, -13/+13)
| | | |
| | * | Fix random dataset shuffle on TI  (FlameLaw, 2022-10-27, 1 file, -2/+2)
| | |/
| * | Learning rate sched syntax support for grad clipping  (Muhammad Rizqi Nur, 2022-10-28, 2 files, -6/+17)
| | |
| * | Gradient clipping for textual embedding  (Muhammad Rizqi Nur, 2022-10-28, 1 file, -1/+10)
| |/
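The gradient-clipping commits above bound the gradient's global norm before each optimizer step (in PyTorch this is typically done with `torch.nn.utils.clip_grad_norm_`). A pure-Python sketch of clipping by global norm, for illustration only:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale the gradient vector down so its L2 norm is at most max_norm."""
    total = math.sqrt(sum(g * g for g in grads))
    if total > max_norm:
        scale = max_norm / total
        return [g * scale for g in grads]
    return grads  # already within bounds; leave untouched
```

Clipping by norm preserves the gradient's direction while capping its magnitude, which keeps occasional outlier batches from destabilizing embedding training.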
| * typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir  (DepFA, 2022-10-26, 1 file, -1/+1)
| |