Commit message | Author | Date | Files | Lines | |
---|---|---|---|---|---|
* | Fix NoneType error for TI module | butaixianran | 2023-03-24 | 1 | -1/+5 |
| | | | | | | | | | | | | | | | | | | | | | | | | | | When a user uses model_name.png as a preview image, textual_inversion.py still treats it as an embedding and doesn't handle the resulting error, letting Python raise a NoneType error like the following: ```bash File "D:\Work\Dev\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 155, in load_from_file name = data.get('name', name) AttributeError: 'NoneType' object has no attribute 'get' ``` With a simple `if data:` check, there is no error, nothing breaks, and the module now works fine with users' preview images. Old code: ```python data = extract_image_data_embed(embed_image) name = data.get('name', name) ``` New code: ```python data = extract_image_data_embed(embed_image) if data: name = data.get('name', name) else: # if data is None, this is not an embedding, just a preview image return ``` Also, since the textual inversion module no longer raises this error, extra networks can now set "model_name.png" as the preview image for embeddings. | ||||
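The guard described in that commit can be sketched as a standalone function. Note that `extract_image_data_embed` is stubbed here purely for illustration; the real implementation, which decodes data hidden inside an embedding's image, lives in textual_inversion.py.

```python
def extract_image_data_embed(embed_image):
    # Hypothetical stub: return the dict stored in the image, or None
    # when the file is just a preview picture with no embedded payload.
    return getattr(embed_image, "data", None)

def load_name(embed_image, default_name):
    """Sketch of the guarded lookup: only call .get() when the image
    actually carried embedded data; otherwise treat it as a preview."""
    data = extract_image_data_embed(embed_image)
    if data:                      # None (or empty) -> not an embedding
        return data.get('name', default_name)
    return None                   # just a preview image; nothing to load
```

The key point is that the falsy check covers both `None` (no payload at all) and an empty payload, so the `.get()` call can never hit a `NoneType`.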
* | Add ability to choose using weighted loss or not | Shondoit | 2023-02-15 | 1 | -4/+9 |
| | |||||
* | Call weighted_forward during training | Shondoit | 2023-02-15 | 1 | -1/+2 |
| | |||||
* | do not display the message for TI unless the list of loaded embeddings changed | AUTOMATIC | 2023-01-29 | 1 | -3/+7 |
| | |||||
* | allow symlinks in the textual inversion embeddings folder | Alex "mcmonkey" Goodwin | 2023-01-25 | 1 | -1/+1 |
| | |||||
* | extra networks UI | AUTOMATIC | 2023-01-21 | 1 | -0/+2 |
| | | | | rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight> | ||||
* | add option to show/hide warnings | AUTOMATIC | 2023-01-18 | 1 | -1/+5 |
| | | | | removed hiding of warnings from LDSR; fixed/reworked a few places that produced warnings | ||||
* | big rework of progressbar/preview system to allow multiple users to prompt at the same time without getting each other's previews | AUTOMATIC | 2023-01-15 | 1 | -3/+3 |
| |||||
* | change hash to sha256 | AUTOMATIC | 2023-01-14 | 1 | -3/+3 |
| | |||||
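The "change hash to sha256" entry above swaps the old short hash for a full SHA-256 digest. A stdlib sketch of hashing a large checkpoint or embedding file in streaming fashion might look like this; the function name and chunk size are illustrative, not the webui's actual helper.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Illustrative sketch: stream a file through SHA-256 one chunk at a
    time, so large model files are never read fully into memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```

Streaming through `update()` gives the same digest as hashing the whole file at once, which is why the chunk size is a free parameter.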
* | fix a bug caused by merge | AUTOMATIC | 2023-01-13 | 1 | -0/+1 |
| | |||||
* | Merge branch 'master' into tensorboard | AUTOMATIC1111 | 2023-01-13 | 1 | -205/+434 |
|\ | |||||
| * | print bucket sizes for training without resizing images #6620 | AUTOMATIC | 2023-01-13 | 1 | -1/+1 |
| | | | | | | | fix an error when generating a picture with an embedding in it | ||||
| * | Allow creation of zero vectors for TI | Shondoit | 2023-01-12 | 1 | -3/+6 |
| | | |||||
| * | set descriptions | Vladimir Mandic | 2023-01-11 | 1 | -1/+3 |
| | | |||||
| * | Support loading textual inversion embeddings from safetensors files | Lee Bousfield | 2023-01-11 | 1 | -0/+3 |
| | | |||||
| * | make a dropdown for prompt template selection | AUTOMATIC | 2023-01-09 | 1 | -8/+27 |
| | | |||||
| * | remove/simplify some changes from #6481 | AUTOMATIC | 2023-01-09 | 1 | -2/+2 |
| | | |||||
| * | Merge branch 'master' into varsize | AUTOMATIC1111 | 2023-01-09 | 1 | -62/+103 |
| |\ | |||||
| | * | make it possible for extensions/scripts to add their own embedding directories | AUTOMATIC | 2023-01-08 | 1 | -66/+104 |
| | | | |||||
| | * | skip images in embeddings dir if they have a second .preview extension | AUTOMATIC | 2023-01-08 | 1 | -0/+4 |
| | | | |||||
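The "second .preview extension" rule in the entry above can be sketched with a small predicate; the helper name is hypothetical, but the rule matches the commit: a file like `my-embed.preview.png` is a thumbnail for an embedding, not an embedding itself.

```python
import os

def is_preview_file(filename):
    """Hypothetical sketch: True when the file has '.preview' as a
    second extension (e.g. 'word.preview.png'), meaning it is the
    preview image that sits next to an embedding in the same folder."""
    base, _ext = os.path.splitext(filename)        # strip '.png' etc.
    return os.path.splitext(base)[1].lower() == '.preview'
```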
| * | | Add checkbox for variable training dims | dan | 2023-01-07 | 1 | -2/+2 |
| | | | |||||
| * | | Allow variable img size | dan | 2023-01-07 | 1 | -2/+2 |
| |/ | |||||
| * | CLIP hijack rework | AUTOMATIC | 2023-01-06 | 1 | -1/+0 |
| | | |||||
| * | rework saving training params to file #6372 | AUTOMATIC | 2023-01-06 | 1 | -20/+3 |
| | | |||||
| * | Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised | AUTOMATIC1111 | 2023-01-06 | 1 | -1/+25 |
| |\ | | | | | | | | | | | | Save hypernet and textual inversion settings to text file, revised. | ||||
| | * | Include model in log file. Exclude directory. | timntorres | 2023-01-05 | 1 | -13/+9 |
| | | | |||||
| | * | Clean up ti, add same behavior to hypernetwork. | timntorres | 2023-01-05 | 1 | -5/+9 |
| | | | |||||
| | * | Add option to save ti settings to file. | timntorres | 2023-01-05 | 1 | -3/+27 |
| | | | |||||
| * | | allow loading embeddings from subdirectories | Faber | 2023-01-05 | 1 | -11/+12 |
| | | | |||||
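The "allow loading embeddings from subdirectories" entry above amounts to walking the embeddings folder recursively instead of listing only its top level. A hedged stdlib sketch, with an illustrative function name:

```python
import os

def walk_embedding_files(root):
    """Illustrative sketch: yield every file path under the embeddings
    folder, descending into subdirectories rather than reading only
    the top-level listing."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            yield os.path.join(dirpath, name)
```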
| * | | typo in TI | Kuma | 2023-01-05 | 1 | -1/+1 |
| |/ | |||||
| * | Merge branch 'master' into gradient-clipping | AUTOMATIC1111 | 2023-01-04 | 1 | -162/+251 |
| |\ | |||||
| | * | use shared function from processing for creating dummy mask when training inpainting model | AUTOMATIC | 2023-01-04 | 1 | -24/+9 |
| | |
| | * | fix the merge | AUTOMATIC | 2023-01-04 | 1 | -9/+5 |
| | | | |||||
| | * | Merge branch 'master' into inpaint_textual_inversion | AUTOMATIC1111 | 2023-01-04 | 1 | -160/+244 |
| | |\ | |||||
| | | * | Merge pull request #6253 from Shondoit/ti-optim | AUTOMATIC1111 | 2023-01-04 | 1 | -8/+32 |
| | | |\ | | | | | | | | | | | Save Optimizer next to TI embedding | ||||
| | | | * | Save Optimizer next to TI embedding | Shondoit | 2023-01-03 | 1 | -8/+32 |
| | | | | | | | | | | | | | | | | | | | | Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory) | ||||
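The extension check described above (load only `.pt` and `.bin` files as embeddings, so the `.optim` files saved alongside them are skipped) can be sketched like this; the helper name is illustrative, not the webui's actual code.

```python
import os

EMBEDDING_EXTS = {'.pt', '.bin'}   # per the commit: only these load as embeddings

def embedding_files(dirpath):
    """Illustrative sketch: list the files that should be loaded as
    embeddings, skipping companion files such as 'name.optim' that
    are saved next to 'name.pt' in the same directory."""
    return sorted(
        name for name in os.listdir(dirpath)
        if os.path.splitext(name)[1].lower() in EMBEDDING_EXTS
    )
```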
| | | * | | add job info to modules | Vladimir Mandic | 2023-01-03 | 1 | -0/+1 |
| | | |/ | |||||
| | | * | feat(api): return more data for embeddings | Philpax | 2023-01-02 | 1 | -4/+4 |
| | | | | |||||
| | | * | fix the issue with training on SD2.0 | AUTOMATIC | 2023-01-01 | 1 | -2/+1 |
| | | | | |||||
| | | * | changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 | AUTOMATIC | 2022-12-31 | 1 | -24/+6 |
| | | |
| | | * | validate textual inversion embeddings | Vladimir Mandic | 2022-12-31 | 1 | -5/+38 |
| | | | | |||||
| | | * | fix F541 f-string without any placeholders | Yuval Aboulafia | 2022-12-24 | 1 | -1/+1 |
| | | | | |||||
| | | * | Fix various typos | Jim Hays | 2022-12-15 | 1 | -8/+8 |
| | | | | |||||
| | | * | Merge branch 'master' into racecond_fix | AUTOMATIC1111 | 2022-12-03 | 1 | -148/+186 |
| | | |\ | |||||
| | | | * | Use devices.autocast instead of torch.autocast | brkirch | 2022-11-30 | 1 | -1/+1 |
| | | | | | |||||
| | | | * | Merge remote-tracking branch 'flamelaw/master' | AUTOMATIC | 2022-11-27 | 1 | -141/+182 |
| | | | |\ | |||||
| | | | | * | set TI AdamW default weight decay to 0 | flamelaw | 2022-11-26 | 1 | -1/+1 |
| | | | | | | |||||
| | | | | * | small fixes | flamelaw | 2022-11-22 | 1 | -1/+1 |
| | | | | | | |||||
| | | | | * | fix pin_memory with different latent sampling method | flamelaw | 2022-11-21 | 1 | -6/+1 |
| | | | | | | |||||
| | | | | * | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw | 2022-11-20 | 1 | -137/+183 |
| | | | | | |