path: root/modules/textual_inversion
Age         Commit message  (Author, lines removed/added)
2023-01-13  Merge branch 'master' into tensorboard  (AUTOMATIC1111, -338/+1128)
2023-01-13  Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file  (AUTOMATIC1111, -1/+1)
            add gradient settings to training settings log files
2023-01-13  print bucket sizes for training without resizing images #6620  (AUTOMATIC, -3/+19)
            fix an error when generating a picture with embedding in it
2023-01-13  Merge pull request #6620 from guaneec/varsize_batch  (AUTOMATIC1111, -4/+32)
            Enable batch_size>1 for mixed-sized training
2023-01-12  add gradient settings to training settings log files  (Josh R, -1/+1)
2023-01-12  Allow creation of zero vectors for TI  (Shondoit, -3/+6)
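A minimal sketch of the zero-vector idea: instead of initializing a new textual inversion embedding from an existing token's weights, start from all-zero vectors. Names and dimensions are illustrative, not the actual webui code:

```python
import torch

# Hypothetical sketch: create a TI embedding as all-zero vectors rather than
# copying the weights of an initialization token.
num_vectors, embed_dim = 4, 768  # 768 is the SD1.x CLIP width; an assumption here
vec = torch.zeros(num_vectors, embed_dim)
```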
2023-01-11  set descriptions  (Vladimir Mandic, -2/+9)
2023-01-10  Support loading textual inversion embeddings from safetensors files  (Lee Bousfield, -0/+3)
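A hedged sketch of loading an embedding from either format; `safetensors.torch.load_file` is the real safetensors API, but the surrounding key handling is simplified compared to the webui's loader:

```python
import torch
from safetensors.torch import load_file  # returns a dict of tensors

def load_embedding_data(path: str) -> dict:
    # Sketch only: the webui's actual loader does more key/shape handling.
    if path.lower().endswith(".safetensors"):
        return load_file(path, device="cpu")
    return torch.load(path, map_location="cpu")
```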
2023-01-11  Enable batch_size>1 for mixed-sized training  (dan, -4/+32)
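Mixed-size training with batch_size > 1 generally relies on bucketing: samples are grouped by resolution so every batch is internally uniform. A minimal sketch of that idea, with hypothetical helper names:

```python
from collections import defaultdict

def make_buckets(items):
    # Group samples by (width, height) so a batch never mixes sizes.
    buckets = defaultdict(list)
    for item in items:
        buckets[item["size"]].append(item)
    return buckets

def iter_batches(buckets, batch_size):
    for size, group in buckets.items():
        for i in range(0, len(group), batch_size):
            yield group[i:i + batch_size]
```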
2023-01-09  make a dropdown for prompt template selection  (AUTOMATIC, -8/+27)
2023-01-09  remove/simplify some changes from #6481  (AUTOMATIC, -11/+7)
2023-01-09  Merge branch 'master' into varsize  (AUTOMATIC1111, -62/+103)
2023-01-08  make it possible for extensions/scripts to add their own embedding directories  (AUTOMATIC, -66/+104)
2023-01-08  skip images in embeddings dir if they have a second .preview extension  (AUTOMATIC, -0/+4)
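The `.preview` check amounts to a double-extension test; a hypothetical version:

```python
import os

def is_preview_image(filename: str) -> bool:
    # "embedding.preview.png" carries a second ".preview" extension before
    # the image extension; such files are previews, not embeddings.
    base, _img_ext = os.path.splitext(filename)
    return os.path.splitext(base)[1].lower() == ".preview"
```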
2023-01-08  Move batch size check  (dan, -2/+2)
2023-01-08  Add checkbox for variable training dims  (dan, -4/+4)
2023-01-08  Allow variable img size  (dan, -9/+13)
2023-01-07  CLIP hijack rework  (AUTOMATIC, -1/+0)
2023-01-06  rework saving training params to file #6372  (AUTOMATIC, -20/+27)
2023-01-06  Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised  (AUTOMATIC1111, -1/+25)
            Save hypernet and textual inversion settings to text file, revised.
2023-01-06  allow loading embeddings from subdirectories  (Faber, -11/+12)
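Subdirectory loading typically swaps a flat directory listing for a recursive walk; a sketch under that assumption:

```python
import os

def embedding_paths(root: str):
    # Recurse into subdirectories instead of listing only the top level.
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            yield os.path.join(dirpath, name)
```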
2023-01-05  typo in TI  (Kuma, -1/+1)
2023-01-05  Include model in log file. Exclude directory.  (timntorres, -13/+9)
2023-01-05  Clean up ti, add same behavior to hypernetwork.  (timntorres, -5/+9)
2023-01-05  Add option to save ti settings to file.  (timntorres, -3/+27)
2023-01-04  Merge branch 'master' into gradient-clipping  (AUTOMATIC1111, -219/+354)
2023-01-04  use shared function from processing for creating dummy mask when training inpainting model  (AUTOMATIC, -24/+9)
2023-01-04  fix the merge  (AUTOMATIC, -9/+5)
2023-01-04  Merge branch 'master' into inpaint_textual_inversion  (AUTOMATIC1111, -285/+439)
2023-01-04  Merge pull request #6253 from Shondoit/ti-optim  (AUTOMATIC1111, -8/+32)
            Save Optimizer next to TI embedding
2023-01-03  add job info to modules  (Vladimir Mandic, -0/+2)
2023-01-03  Save Optimizer next to TI embedding  (Shondoit, -8/+32)
            Also adds a check to load only .pt and .bin files as embeddings, since .optim files are now saved in the same directory.
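A hypothetical version of that extension whitelist:

```python
import os

# .optim files sit in the same directory and must be skipped.
EMBEDDING_EXTS = {".pt", ".bin"}

def is_embedding_file(path: str) -> bool:
    return os.path.splitext(path)[1].lower() in EMBEDDING_EXTS
```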
2023-01-02  feat(api): return more data for embeddings  (Philpax, -4/+4)
2023-01-02  fix the issue with training on SD2.0  (AUTOMATIC, -2/+1)
2022-12-31  changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149  (AUTOMATIC, -24/+6)
2022-12-31  validate textual inversion embeddings  (Vladimir Mandic, -5/+38)
2022-12-24  fix F541 f-string without any placeholders  (Yuval Aboulafia, -1/+1)
2022-12-14  Fix various typos  (Jim Hays, -13/+13)
2022-12-03  Merge branch 'master' into racecond_fix  (AUTOMATIC1111, -274/+381)
2022-12-03  Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes  (AUTOMATIC1111, -3/+3)
            Use devices.autocast() and fix MPS randn issues
2022-12-02  Fix divide by 0 error  (PhytoEpidemic, -3/+3)
            Fixes the edge case of a 0 weight that occasionally pops up in specific situations and was crashing the script.
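The usual fix for this class of bug is a guard before the division; a generic sketch, not the actual patch:

```python
def weighted_mean(values, weights):
    total = sum(weights)
    if total == 0:
        # Degenerate all-zero-weight case: return a neutral value instead of
        # raising ZeroDivisionError.
        return 0.0
    return sum(v * w for v, w in zip(values, weights)) / total
```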
2022-11-30  Use devices.autocast instead of torch.autocast  (brkirch, -3/+3)
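`devices.autocast` is the webui's device-aware wrapper; a hedged approximation of the idea (the real helper also accounts for the configured dtype):

```python
import contextlib
import torch

def autocast():
    # Use mixed precision on CUDA; fall back to a no-op context on
    # devices where torch.autocast support is limited (e.g. MPS).
    if torch.cuda.is_available():
        return torch.autocast("cuda")
    return contextlib.nullcontext()
```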
2022-11-27  Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords  (AUTOMATIC1111, -1/+1)
            resolve [name] after resolving [filewords] in training
2022-11-27  Merge remote-tracking branch 'flamelaw/master'  (AUTOMATIC, -189/+274)
2022-11-27  set TI AdamW default weight decay to 0  (flamelaw, -1/+1)
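Zero weight decay matters here because AdamW's decoupled decay would otherwise shrink the learned embedding vectors toward zero on every step; a sketch with placeholder values:

```python
import torch

embedding = torch.nn.Parameter(torch.zeros(4, 768))  # placeholder TI vectors
optimizer = torch.optim.AdamW(
    [embedding],
    lr=5e-3,           # placeholder; not necessarily the webui default
    weight_decay=0.0,  # decay would pull the embedding toward zero each step
)
```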
2022-11-26  Add support for Stable Diffusion 2.0  (AUTOMATIC, -4/+3)
2022-11-23  small fixes  (flamelaw, -1/+1)
2022-11-21  fix pin_memory with different latent sampling method  (flamelaw, -10/+20)
2022-11-20  moved deepdanbooru to pure pytorch implementation  (AUTOMATIC, -8/+4)
2022-11-20  fix random sampling with pin_memory  (flamelaw, -1/+1)