path: root/modules/textual_inversion/textual_inversion.py
Commit message (Author, Date, Files, Lines)
* sort self.word_embeddings without instantiating a new dict (Brad Smith, 2023-04-14, 1 file, -3/+6)
|
* sort embeddings by name (case-insensitive) (Brad Smith, 2023-04-08, 1 file, -2/+5)
|
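The two sorting commits above can be illustrated with a short sketch. The dict contents are made up for illustration; sorting the keys with `str.lower` gives the case-insensitive order, and popping and re-inserting keys is one plausible way to reorder the existing dict without building a new one (the repository's actual code may differ):

```python
# Illustrative embeddings dict; the real one maps names to embedding objects.
embeddings = {"Zebra": 1, "apple": 2, "Mango": 3}

# Case-insensitive ordering of the names.
sorted_names = sorted(embeddings, key=str.lower)

# Reorder the existing dict in place instead of instantiating a new one:
# popping a key and re-assigning it moves it to the end (Python 3.7+ dicts
# preserve insertion order).
for name in sorted_names:
    embeddings[name] = embeddings.pop(name)

print(list(embeddings))  # → ['apple', 'Mango', 'Zebra']
```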
* Fix NoneType error for TI module (butaixianran, 2023-03-24, 1 file, -1/+5)
|   When a user uses model_name.png as a preview image, textual_inversion.py
|   still treats it as an embedding, doesn't handle the resulting error, and
|   lets Python throw a NoneType error like the following:
|
|   ```bash
|   File "D:\Work\Dev\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 155, in load_from_file
|       name = data.get('name', name)
|   AttributeError: 'NoneType' object has no attribute 'get'
|   ```
|
|   With a simple `if data:` check, as below, there is no error, nothing
|   breaks, and the module works fine with users' preview images.
|
|   Old code:
|   ```python
|   data = extract_image_data_embed(embed_image)
|   name = data.get('name', name)
|   ```
|
|   New code:
|   ```python
|   data = extract_image_data_embed(embed_image)
|   if data:
|       name = data.get('name', name)
|   else:
|       # if data is None, this is not an embedding, just a preview image
|       return
|   ```
|
|   Also, since the textual inversion module no longer raises this error,
|   extra networks can now set "model_name.png" as a preview image for
|   embeddings.
* Add ability to choose whether to use weighted loss (Shondoit, 2023-02-15, 1 file, -4/+9)
|
* Call weighted_forward during training (Shondoit, 2023-02-15, 1 file, -1/+2)
|
* do not display the message for TI unless the list of loaded embeddings changed (AUTOMATIC, 2023-01-29, 1 file, -3/+7)
|
* allow symlinks in the textual inversion embeddings folder (Alex "mcmonkey" Goodwin, 2023-01-25, 1 file, -1/+1)
|
* extra networks UI (AUTOMATIC, 2023-01-21, 1 file, -0/+2)
|   rework of hypernets: rather than via settings, hypernets are added
|   directly to the prompt as <hypernet:name:weight>
* add option to show/hide warnings (AUTOMATIC, 2023-01-18, 1 file, -1/+5)
|   removed hiding warnings from LDSR; fixed/reworked a few places that
|   produced warnings
* big rework of progressbar/preview system to allow multiple users to prompt at the same time and not get previews of each other (AUTOMATIC, 2023-01-15, 1 file, -3/+3)
|
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -3/+3)
|
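The hash change above moves from the old short hash to SHA-256. A minimal sketch of computing a file's SHA-256 with Python's hashlib, reading in chunks so large checkpoint files never have to fit in memory; the function name and chunk size are illustrative, not the repository's exact implementation:

```python
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    # Stream the file in 1 MiB chunks and feed each one to the hash object.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

SHA-256 digests are 64 hex characters, much longer than the truncated hashes used previously, which reduces the chance of two different models sharing a hash.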
* fix a bug caused by merge (AUTOMATIC, 2023-01-13, 1 file, -0/+1)
|
* Merge branch 'master' into tensorboard (AUTOMATIC1111, 2023-01-13, 1 file, -205/+434)
|\
| * print bucket sizes for training without resizing images #6620 (AUTOMATIC, 2023-01-13, 1 file, -1/+1)
| |   fix an error when generating a picture with an embedding in it
| * Allow creation of zero vectors for TI (Shondoit, 2023-01-12, 1 file, -3/+6)
| |
| * set descriptions (Vladimir Mandic, 2023-01-11, 1 file, -1/+3)
| |
| * Support loading textual inversion embeddings from safetensors files (Lee Bousfield, 2023-01-11, 1 file, -0/+3)
| |
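The safetensors commit above adds a second on-disk format for embeddings. A hedged sketch of dispatching on the file extension: the loader calls shown in comments (`safetensors.torch.load_file`, `torch.load`) are the standard APIs for each format, but the helper name and structure are illustrative, not the module's actual code:

```python
import os

def choose_embedding_loader(path):
    """Illustrative helper: pick a loader based on the file extension."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".safetensors":
        # Loaded via: from safetensors.torch import load_file
        #             data = load_file(path, device="cpu")
        return "safetensors"
    if ext in (".pt", ".bin"):
        # Loaded via: data = torch.load(path, map_location="cpu")
        return "torch"
    # Anything else (e.g. preview images) is not an embedding file.
    return None

print(choose_embedding_loader("style.safetensors"))  # → safetensors
```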
| * make a dropdown for prompt template selection (AUTOMATIC, 2023-01-09, 1 file, -8/+27)
| |
| * remove/simplify some changes from #6481 (AUTOMATIC, 2023-01-09, 1 file, -2/+2)
| |
| * Merge branch 'master' into varsize (AUTOMATIC1111, 2023-01-09, 1 file, -62/+103)
| |\
| | * make it possible for extensions/scripts to add their own embedding directories (AUTOMATIC, 2023-01-08, 1 file, -66/+104)
| | |
| | * skip images in embeddings dir if they have a second .preview extension (AUTOMATIC, 2023-01-08, 1 file, -0/+4)
| | |
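The skip rule above can be expressed as a small check: a file such as `style.preview.png` carries a second `.preview` extension and should be ignored when scanning the embeddings directory. The helper name is illustrative:

```python
import os

def has_preview_extension(filename):
    # Strip the outer extension (e.g. .png), then look at the inner one.
    base, _ext = os.path.splitext(filename)
    return os.path.splitext(base)[1].lower() == ".preview"

print(has_preview_extension("style.preview.png"))  # → True
print(has_preview_extension("style.png"))          # → False
```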
| * | Add checkbox for variable training dims (dan, 2023-01-07, 1 file, -2/+2)
| | |
| * | Allow variable img size (dan, 2023-01-07, 1 file, -2/+2)
| |/
| * CLIP hijack rework (AUTOMATIC, 2023-01-06, 1 file, -1/+0)
| |
| * rework saving training params to file #6372 (AUTOMATIC, 2023-01-06, 1 file, -20/+3)
| |
| * Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised (AUTOMATIC1111, 2023-01-06, 1 file, -1/+25)
| |\
| | |   Save hypernet and textual inversion settings to text file, revised.
| | * Include model in log file. Exclude directory. (timntorres, 2023-01-05, 1 file, -13/+9)
| | |
| | * Clean up ti, add same behavior to hypernetwork. (timntorres, 2023-01-05, 1 file, -5/+9)
| | |
| | * Add option to save ti settings to file. (timntorres, 2023-01-05, 1 file, -3/+27)
| | |
| * | allow loading embeddings from subdirectories (Faber, 2023-01-05, 1 file, -11/+12)
| | |
| * | typo in TI (Kuma, 2023-01-05, 1 file, -1/+1)
| |/
| * Merge branch 'master' into gradient-clipping (AUTOMATIC1111, 2023-01-04, 1 file, -162/+251)
| |\
| | * use shared function from processing for creating dummy mask when training inpainting model (AUTOMATIC, 2023-01-04, 1 file, -24/+9)
| | |
| | * fix the merge (AUTOMATIC, 2023-01-04, 1 file, -9/+5)
| | |
| | * Merge branch 'master' into inpaint_textual_inversion (AUTOMATIC1111, 2023-01-04, 1 file, -160/+244)
| | |\
| | | * Merge pull request #6253 from Shondoit/ti-optim (AUTOMATIC1111, 2023-01-04, 1 file, -8/+32)
| | | |\
| | | | |   Save Optimizer next to TI embedding
| | | | * Save Optimizer next to TI embedding (Shondoit, 2023-01-03, 1 file, -8/+32)
| | | | |   Also add a check to load only .PT and .BIN files as embeddings
| | | | |   (since we add .optim files in the same directory).
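The check described above, treating only .PT and .BIN files as embeddings so the sibling .optim files saved alongside them are skipped, can be sketched like this (the helper name and the file list are illustrative):

```python
import os

def embedding_candidates(filenames):
    # Only .pt and .bin files are treated as embeddings; the .optim files
    # saved next to them in the same directory are ignored.
    return [f for f in filenames
            if os.path.splitext(f)[1].lower() in (".pt", ".bin")]

print(embedding_candidates(["style.pt", "style.optim", "char.bin", "char.optim"]))
# → ['style.pt', 'char.bin']
```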
| | | * | add job info to modules (Vladimir Mandic, 2023-01-03, 1 file, -0/+1)
| | | |/
| | | * feat(api): return more data for embeddings (Philpax, 2023-01-02, 1 file, -4/+4)
| | | |
| | | * fix the issue with training on SD2.0 (AUTOMATIC, 2023-01-01, 1 file, -2/+1)
| | | |
| | | * changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 (AUTOMATIC, 2022-12-31, 1 file, -24/+6)
| | | |
| | | * validate textual inversion embeddings (Vladimir Mandic, 2022-12-31, 1 file, -5/+38)
| | | |
| | | * fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -1/+1)
| | | |
| | | * Fix various typos (Jim Hays, 2022-12-15, 1 file, -8/+8)
| | | |
| | | * Merge branch 'master' into racecond_fix (AUTOMATIC1111, 2022-12-03, 1 file, -148/+186)
| | | |\
| | | | * Use devices.autocast instead of torch.autocast (brkirch, 2022-11-30, 1 file, -1/+1)
| | | | |
| | | | * Merge remote-tracking branch 'flamelaw/master' (AUTOMATIC, 2022-11-27, 1 file, -141/+182)
| | | | |\
| | | | | * set TI AdamW default weight decay to 0 (flamelaw, 2022-11-26, 1 file, -1/+1)
| | | | | |
| | | | | * small fixes (flamelaw, 2022-11-22, 1 file, -1/+1)
| | | | | |