Commit message | Author | Date | Files | Lines
---|---|---|---|---
Merge branch 'master' into gradient-clipping | AUTOMATIC1111 | 2023-01-04 | 5 | -219/+354
use shared function from processing for creating dummy mask when training inpainting model | AUTOMATIC | 2023-01-04 | 1 | -24/+9
fix the merge | AUTOMATIC | 2023-01-04 | 1 | -9/+5
Merge branch 'master' into inpaint_textual_inversion | AUTOMATIC1111 | 2023-01-04 | 5 | -285/+439
Merge pull request #6253 from Shondoit/ti-optim (Save Optimizer next to TI embedding) | AUTOMATIC1111 | 2023-01-04 | 1 | -8/+32
Save Optimizer next to TI embedding. Also add check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory) | Shondoit | 2023-01-03 | 1 | -8/+32
add job info to modules | Vladimir Mandic | 2023-01-03 | 2 | -0/+2
feat(api): return more data for embeddings | Philpax | 2023-01-02 | 1 | -4/+4
fix the issue with training on SD2.0 | AUTOMATIC | 2023-01-01 | 1 | -2/+1
changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | AUTOMATIC | 2022-12-31 | 1 | -24/+6
validate textual inversion embeddings | Vladimir Mandic | 2022-12-31 | 1 | -5/+38
fix F541 f-string without any placeholders | Yuval Aboulafia | 2022-12-24 | 1 | -1/+1
Fix various typos | Jim Hays | 2022-12-15 | 2 | -13/+13
Merge branch 'master' into racecond_fix | AUTOMATIC1111 | 2022-12-03 | 5 | -274/+381
Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes (Use devices.autocast() and fix MPS randn issues) | AUTOMATIC1111 | 2022-12-03 | 2 | -3/+3
Use devices.autocast instead of torch.autocast | brkirch | 2022-11-30 | 2 | -3/+3
Fix divide by 0 error. Fixes the edge case where a 0 weight occasionally pops up in some specific situations and was crashing the script | PhytoEpidemic | 2022-12-02 | 1 | -3/+3
Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords (resolve [name] after resolving [filewords] in training) | AUTOMATIC1111 | 2022-11-27 | 1 | -1/+1
resolve [name] after resolving [filewords] in training | parasi | 2022-11-13 | 1 | -1/+1
Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC | 2022-11-27 | 2 | -189/+274
set TI AdamW default weight decay to 0 | flamelaw | 2022-11-26 | 1 | -1/+1
small fixes | flamelaw | 2022-11-22 | 1 | -1/+1
fix pin_memory with different latent sampling method | flamelaw | 2022-11-21 | 2 | -10/+20
fix random sampling with pin_memory | flamelaw | 2022-11-20 | 1 | -1/+1
remove unnecessary comment | flamelaw | 2022-11-20 | 1 | -9/+0
Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw | 2022-11-20 | 2 | -185/+269
Add support Stable Diffusion 2.0 | AUTOMATIC | 2022-11-26 | 1 | -4/+3
moved deepdanbooru to pure pytorch implementation | AUTOMATIC | 2022-11-20 | 1 | -8/+4
Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing (Add interrupt button to preprocessing) | AUTOMATIC1111 | 2022-11-19 | 1 | -1/+1
Add interrupt button to preprocessing | space-nuko | 2022-11-18 | 1 | -1/+1
change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC | 2022-11-19 | 1 | -2/+2
Merge pull request #4117 from TinkTheBoush/master (Adding optional tag shuffling for training) | AUTOMATIC1111 | 2022-11-11 | 1 | -1/+6
Update dataset.py | KyuSeok Jung | 2022-11-11 | 1 | -1/+1
Update dataset.py | KyuSeok Jung | 2022-11-11 | 1 | -1/+1
adding tag drop out option | KyuSeok Jung | 2022-11-11 | 1 | -4/+4
change option position to Training setting | TinkTheBoush | 2022-11-04 | 2 | -5/+4
Merge branch 'master' into master | KyuSeok Jung | 2022-11-02 | 2 | -2/+15
append_tag_shuffle | TinkTheBoush | 2022-11-01 | 2 | -4/+10
Fixes race condition in training when VAE is unloaded (set_current_image can attempt to use the VAE when it is unloaded to the CPU while training) | Fampai | 2022-11-04 | 1 | -0/+5
fixed textual inversion training with inpainting models | Nerogar | 2022-11-01 | 1 | -1/+26
Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur | 2022-11-09 | 1 | -69/+93
move functions out of main body for image preprocessing for easier hijacking | AUTOMATIC | 2022-11-08 | 1 | -69/+93
Simplify grad clip | Muhammad Rizqi Nur | 2022-11-05 | 1 | -9/+7
Merge branch 'master' into gradient-clipping | Muhammad Rizqi Nur | 2022-11-02 | 2 | -2/+15
Fixed minor bug: when unloading vae during TI training, generating images after training will error out | Fampai | 2022-10-31 | 1 | -0/+1
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations | Fampai | 2022-10-31 | 3 | -39/+85
Added TI training optimizations: option to use xattention optimizations when training; option to unload vae when training | Fampai | 2022-10-31 | 2 | -2/+14
Merge master | Muhammad Rizqi Nur | 2022-10-30 | 3 | -44/+89
Merge pull request #3928 from R-N/validate-before-load (Optimize training a little) | AUTOMATIC1111 | 2022-10-30 | 2 | -25/+64
Fix dataset still being loaded even when training will be skipped | Muhammad Rizqi Nur | 2022-10-29 | 1 | -1/+1