| Age | Commit message | Author | Lines |
|---|---|---|---|
| 2022-12-03 | Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes: Use devices.autocast() and fix MPS randn issues | AUTOMATIC1111 | -3/+3 |
| 2022-12-02 | Fix divide by 0 error: Fix of the edge case 0 weight that occasionally will pop up in some specific situations. This was crashing the script. | PhytoEpidemic | -3/+3 |
| 2022-11-30 | Use devices.autocast instead of torch.autocast | brkirch | -3/+3 |
| 2022-11-27 | Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords: resolve [name] after resolving [filewords] in training | AUTOMATIC1111 | -1/+1 |
| 2022-11-27 | Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC | -189/+274 |
| 2022-11-27 | set TI AdamW default weight decay to 0 | flamelaw | -1/+1 |
| 2022-11-26 | Add support Stable Diffusion 2.0 | AUTOMATIC | -4/+3 |
| 2022-11-23 | small fixes | flamelaw | -1/+1 |
| 2022-11-21 | fix pin_memory with different latent sampling method | flamelaw | -10/+20 |
| 2022-11-20 | moved deepdanbooru to pure pytorch implementation | AUTOMATIC | -8/+4 |
| 2022-11-20 | fix random sampling with pin_memory | flamelaw | -1/+1 |
| 2022-11-20 | remove unnecessary comment | flamelaw | -9/+0 |
| 2022-11-20 | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw | -185/+269 |
| 2022-11-19 | Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing: Add interrupt button to preprocessing | AUTOMATIC1111 | -1/+1 |
| 2022-11-19 | change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC | -2/+2 |
| 2022-11-17 | Add interrupt button to preprocessing | space-nuko | -1/+1 |
| 2022-11-13 | resolve [name] after resolving [filewords] in training | parasi | -1/+1 |
| 2022-11-11 | Merge pull request #4117 from TinkTheBoush/master: Adding optional tag shuffling for training | AUTOMATIC1111 | -1/+6 |
| 2022-11-11 | Update dataset.py | KyuSeok Jung | -1/+1 |
| 2022-11-11 | Update dataset.py | KyuSeok Jung | -1/+1 |
| 2022-11-11 | adding tag drop out option | KyuSeok Jung | -4/+4 |
| 2022-11-09 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur | -69/+93 |
| 2022-11-08 | move functions out of main body for image preprocessing for easier hijacking | AUTOMATIC | -69/+93 |
| 2022-11-05 | Simplify grad clip | Muhammad Rizqi Nur | -9/+7 |
| 2022-11-04 | change option position to Training setting | TinkTheBoush | -5/+4 |
| 2022-11-04 | Fixes race condition in training when VAE is unloaded: set_current_image can attempt to use the VAE when it is unloaded to the CPU while training | Fampai | -0/+5 |
| 2022-11-02 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur | -2/+15 |
| 2022-11-02 | Merge branch 'master' into master | KyuSeok Jung | -2/+15 |
| 2022-11-01 | fixed textual inversion training with inpainting models | Nerogar | -1/+26 |
| 2022-11-01 | append_tag_shuffle | TinkTheBoush | -4/+10 |
| 2022-10-31 | Fixed minor bug: when unloading vae during TI training, generating images after training will error out | Fampai | -0/+1 |
| 2022-10-31 | Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations | Fampai | -39/+85 |
| 2022-10-31 | Added TI training optimizations: option to use xattention optimizations when training; option to unload vae when training | Fampai | -2/+14 |
| 2022-10-30 | Merge master | Muhammad Rizqi Nur | -44/+89 |
| 2022-10-30 | Merge pull request #3928 from R-N/validate-before-load: Optimize training a little | AUTOMATIC1111 | -25/+64 |
| 2022-10-30 | Fix dataset still being loaded even when training will be skipped | Muhammad Rizqi Nur | -1/+1 |
| 2022-10-30 | Add missing info on hypernetwork/embedding model log. Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513. Also group the saving into one | Muhammad Rizqi Nur | -13/+26 |
| 2022-10-30 | Revert "Add cleanup after training": This reverts commit 3ce2bfdf95bd5f26d0f6e250e67338ada91980d1. | Muhammad Rizqi Nur | -95/+90 |
| 2022-10-29 | Additional assert on dataset | Muhammad Rizqi Nur | -0/+2 |
| 2022-10-29 | Add cleanup after training | Muhammad Rizqi Nur | -90/+95 |
| 2022-10-29 | Add input validations before loading dataset for training | Muhammad Rizqi Nur | -12/+36 |
| 2022-10-29 | Improve lr schedule error message | Muhammad Rizqi Nur | -2/+2 |
| 2022-10-29 | Allow trailing comma in learning rate | Muhammad Rizqi Nur | -13/+20 |
| 2022-10-29 | Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur | -15/+15 |
| 2022-10-29 | Merge pull request #3858 from R-N/log-csv: Fix log off by 1 #3847 | AUTOMATIC1111 | -13/+13 |
| 2022-10-28 | Fix log off by 1 | Muhammad Rizqi Nur | -13/+13 |
| 2022-10-28 | Learning rate sched syntax support for grad clipping | Muhammad Rizqi Nur | -6/+17 |
| 2022-10-28 | Gradient clipping for textual embedding | Muhammad Rizqi Nur | -1/+10 |
| 2022-10-28 | Fix random dataset shuffle on TI | FlameLaw | -2/+2 |
| 2022-10-26 | typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir | DepFA | -1/+1 |
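Several commits above touch the learning-rate schedule parser ("Improve lr schedule error message", "Allow trailing comma in learning rate", "Learning rate sched syntax support for grad clipping", all 2022-10-28/29). A minimal sketch of how such a schedule string might be parsed, assuming a comma-separated `value:end_step` syntax where a bare value runs to the end of training; `parse_lr_schedule` is a hypothetical helper for illustration, not the repository's actual implementation:

```python
def parse_lr_schedule(schedule: str):
    """Parse a schedule like "0.005:100, 1e-3:1000, 1e-5" into a list of
    (value, end_step) pairs. A bare value (no colon) applies until the end
    of training, represented here as end_step=None. A trailing comma is
    tolerated, matching the "Allow trailing comma in learning rate" fix."""
    pairs = []
    for chunk in schedule.split(","):
        chunk = chunk.strip()
        if not chunk:
            # Skip the empty piece produced by a trailing comma.
            continue
        if ":" in chunk:
            value, end_step = chunk.split(":", 1)
            pairs.append((float(value), int(end_step)))
        else:
            pairs.append((float(chunk), None))
    return pairs
```

With this shape, the commits' gradient-clipping schedule support would amount to running the same parser over the clip-value string as over the learning rate.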