path: root/modules/textual_inversion
Commit message (author, date, files changed, lines removed/added)
* Fix None type error for TI module (butaixianran, 2023-03-24, 1 file, -1/+5)

When a user uses model_name.png as a preview image, textual_inversion.py still treats it as an embedding, does not handle the resulting error, and just lets Python throw a NoneType error like the following:

```bash
File "D:\Work\Dev\AI\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 155, in load_from_file
    name = data.get('name', name)
AttributeError: 'NoneType' object has no attribute 'get'
```

With a simple `if data:` check, as shown below, there is no error, nothing breaks, and the module works fine with user preview images.

Old code:

```python
data = extract_image_data_embed(embed_image)
name = data.get('name', name)
```

New code:

```python
data = extract_image_data_embed(embed_image)
if data:
    name = data.get('name', name)
else:
    # if data is None, this is not an embedding, just a preview image
    return
```

Also, since the textual inversion module no longer throws this error, extra networks can now use "model_name.png" as the preview image for embeddings.
* fix for #6700 (AUTOMATIC, 2023-02-19, 1 file, -1/+1)
* Add ability to choose using weighted loss or not (Shondoit, 2023-02-15, 2 files, -9/+19)
* Call weighted_forward during training (Shondoit, 2023-02-15, 1 file, -1/+2)
* Add PNG alpha channel as weight maps to data entries (Shondoit, 2023-02-15, 1 file, -13/+38)
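The three entries above form one feature: a data image's PNG alpha channel becomes a per-pixel weight map, and training can optionally apply a weighted loss. A minimal sketch of the idea follows; the helper names are illustrative placeholders rather than the webui's actual `weighted_forward` code, and the real trainer computes its loss on latents, so a weight map would need to be resized to the latent resolution.

```python
# Minimal sketch of the weighted-loss idea: read a PNG's alpha channel as a
# per-pixel weight map and plug it into a weighted MSE. Helper names are
# illustrative, not the webui's actual implementation.
from typing import Optional

import numpy as np
import torch
from PIL import Image


def load_alpha_weight_map(path: str) -> Optional[torch.Tensor]:
    """Return the alpha channel scaled to [0, 1], or None if the image has no alpha."""
    image = Image.open(path)
    if "A" not in image.getbands():
        return None  # no alpha channel -> caller falls back to the unweighted loss
    alpha = np.asarray(image.getchannel("A"), dtype=np.float32) / 255.0
    return torch.from_numpy(alpha)


def weighted_mse(pred: torch.Tensor, target: torch.Tensor, weights: Optional[torch.Tensor]) -> torch.Tensor:
    """Plain MSE when weights is None, otherwise a per-pixel weighted MSE."""
    if weights is None:
        return ((pred - target) ** 2).mean()
    return (weights * (pred - target) ** 2).sum() / weights.sum().clamp(min=1e-8)
```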
* do not display the message for TI unless the list of loaded embeddings changed (AUTOMATIC, 2023-01-29, 1 file, -3/+7)
* add data-dir flag and set all user data directories based on it (Max Audron, 2023-01-27, 1 file, -3/+2)
* allow symlinks in the textual inversion embeddings folder (Alex "mcmonkey" Goodwin, 2023-01-25, 1 file, -1/+1)
* extra networks UI (AUTOMATIC, 2023-01-21, 1 file, -0/+2)
    rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
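As a purely illustrative aside, the <hypernet:name:weight> tag format described above can be sketched with a small regex-based extractor. This is not the webui's extra-networks parser, and the example name `anime_style` is made up.

```python
# Illustrative sketch of pulling <hypernet:name:weight> tags out of a prompt.
# Not the webui's own parser, just a minimal example of the tag format.
import re

TAG_RE = re.compile(r"<hypernet:(?P<name>[^:>]+)(?::(?P<weight>[0-9.]+))?>")


def extract_hypernet_tags(prompt: str) -> tuple:
    """Return the prompt with tags removed plus a list of (name, weight) pairs."""
    tags = [(m["name"], float(m["weight"] or 1.0)) for m in TAG_RE.finditer(prompt)]
    return TAG_RE.sub("", prompt).strip(), tags


# Example: ('a portrait photo', [('anime_style', 0.8)])
print(extract_hypernet_tags("a portrait photo <hypernet:anime_style:0.8>"))
```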
* Merge pull request #6844 from guaneec/crop-ui (AUTOMATIC1111, 2023-01-19, 1 file, -4/+34)
    Add auto-sized cropping UI
| * Fix of fix (dan, 2023-01-19, 1 file, -2/+2)
| * Simplification and bugfix (dan, 2023-01-19, 1 file, -7/+5)
| * Add auto-sized cropping UI (dan, 2023-01-17, 1 file, -3/+35)
* | add option to show/hide warnings (AUTOMATIC, 2023-01-18, 1 file, -1/+5)
    removed hiding warnings from LDSR; fixed/reworked a few places that produced warnings
* add fields to settings file (Vladimir Mandic, 2023-01-15, 1 file, -1/+1)
* big rework of progressbar/preview system to allow multiple users to prompt at the same time and not get previews of each other (AUTOMATIC, 2023-01-15, 2 files, -4/+4)
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -3/+3)
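For the sha256 switch above, the core operation is just a chunked hashlib digest of the model or embedding file. A short sketch follows; the helper name is made up, and any caching the webui keeps around this is omitted.

```python
# Minimal sketch of computing a SHA-256 hash of a large file in chunks,
# as opposed to the short partial hashes used previously.
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```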
* fix a bug caused by merge (AUTOMATIC, 2023-01-13, 1 file, -0/+1)
* Merge branch 'master' into tensorboard (AUTOMATIC1111, 2023-01-13, 8 files, -338/+1128)
| * Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file (AUTOMATIC1111, 2023-01-13, 1 file, -1/+1)
    add gradient settings to training settings log files
| | * add gradient settings to training settings log files (Josh R, 2023-01-13, 1 file, -1/+1)
| * | print bucket sizes for training without resizing images #6620 (AUTOMATIC, 2023-01-13, 3 files, -3/+19)
    fix an error when generating a picture with an embedding in it
| * | Merge pull request #6620 from guaneec/varsize_batch (AUTOMATIC1111, 2023-01-13, 1 file, -4/+32)
    Enable batch_size>1 for mixed-sized training
| | * Enable batch_size>1 for mixed-sized training (dan, 2023-01-10, 1 file, -4/+32)
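Training with batch_size > 1 on mixed-size images implies grouping images into same-resolution buckets, which is also what the "print bucket sizes" entry two commits up reports. A rough sketch of that grouping, with illustrative names rather than the webui's actual dataset code:

```python
# Rough sketch of the bucketing idea behind mixed-size batches: group training
# images by (width, height) so each batch only contains one resolution, then
# report the bucket sizes.
from collections import defaultdict


def bucket_by_size(entries: list) -> dict:
    """entries are dicts with 'width'/'height'; returns (w, h) -> list of entries."""
    buckets = defaultdict(list)
    for entry in entries:
        buckets[(entry["width"], entry["height"])].append(entry)
    return buckets


def print_bucket_sizes(buckets: dict) -> None:
    for (w, h), items in sorted(buckets.items()):
        print(f"bucket {w}x{h}: {len(items)} images")
```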
| * | Allow creation of zero vectors for TI (Shondoit, 2023-01-12, 1 file, -3/+6)
| * | set descriptions (Vladimir Mandic, 2023-01-11, 2 files, -2/+9)
| * | Support loading textual inversion embeddings from safetensors files (Lee Bousfield, 2023-01-11, 1 file, -0/+3)
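Loading a .safetensors embedding boils down to safetensors.torch.load_file; the sketch below also shows the legacy torch.load path for .pt/.bin files. How the returned tensors are interpreted afterwards is omitted, and the function name is illustrative.

```python
# Sketch of loading a textual inversion embedding stored as .safetensors.
# The real loader inspects specific keys inside the file; here we just return
# whatever tensors are present, which is a simplification for illustration.
import torch
from safetensors.torch import load_file


def load_embedding_tensors(path: str):
    if path.lower().endswith(".safetensors"):
        return load_file(path, device="cpu")
    # legacy .pt/.bin embeddings go through torch.load instead
    return torch.load(path, map_location="cpu")
```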
| * make a dropdown for prompt template selection (AUTOMATIC, 2023-01-09, 1 file, -8/+27)
| * remove/simplify some changes from #6481 (AUTOMATIC, 2023-01-09, 2 files, -11/+7)
| * Merge branch 'master' into varsize (AUTOMATIC1111, 2023-01-09, 1 file, -62/+103)
| | * make it possible for extensions/scripts to add their own embedding directories (AUTOMATIC, 2023-01-08, 1 file, -66/+104)
| | * skip images in embeddings dir if they have a second .preview extension (AUTOMATIC, 2023-01-08, 1 file, -0/+4)
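The ".preview" convention above (a file such as myembedding.preview.png sitting next to myembedding.pt) can be detected by splitting the file extension twice; a short sketch with an assumed filename:

```python
# Short sketch of the ".preview" convention: a file named like
# "myembedding.preview.png" is a preview image, not an embedding.
import os


def is_preview_image(filename: str) -> bool:
    stem, _ext = os.path.splitext(filename)      # "myembedding.preview", ".png"
    _stem, second_ext = os.path.splitext(stem)   # "myembedding", ".preview"
    return second_ext.lower() == ".preview"


print(is_preview_image("myembedding.preview.png"))  # True
print(is_preview_image("myembedding.png"))          # False
```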
| * | Move batchsize check (dan, 2023-01-07, 1 file, -2/+2)
| * | Add checkbox for variable training dims (dan, 2023-01-07, 2 files, -4/+4)
| * | Allow variable img size (dan, 2023-01-07, 2 files, -9/+13)
| * CLIP hijack rework (AUTOMATIC, 2023-01-06, 1 file, -1/+0)
| * rework saving training params to file #6372 (AUTOMATIC, 2023-01-06, 2 files, -20/+27)
| * Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised (AUTOMATIC1111, 2023-01-06, 1 file, -1/+25)
    Save hypernet and textual inversion settings to text file, revised.
| | * Include model in log file. Exclude directory. (timntorres, 2023-01-05, 1 file, -13/+9)
| | * Clean up ti, add same behavior to hypernetwork. (timntorres, 2023-01-05, 1 file, -5/+9)
| | * Add option to save ti settings to file. (timntorres, 2023-01-05, 1 file, -3/+27)
| * | allow loading embeddings from subdirectories (Faber, 2023-01-05, 1 file, -11/+12)
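Scanning for embeddings in subdirectories (and, per the earlier symlink commit, through symlinked folders) is essentially an os.walk with followlinks=True. A sketch follows; the extension set and function name are assumptions for illustration, not the webui's exact loader.

```python
# Sketch of scanning an embeddings directory recursively, following symlinks,
# and collecting candidate embedding files by extension.
import os

EMBEDDING_EXTS = {".pt", ".bin", ".safetensors", ".png"}


def list_embedding_files(root: str) -> list:
    found = []
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EMBEDDING_EXTS:
                found.append(os.path.join(dirpath, name))
    return sorted(found)
```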
| * | typo in TI (Kuma, 2023-01-05, 1 file, -1/+1)
| * Merge branch 'master' into gradient-clipping (AUTOMATIC1111, 2023-01-04, 5 files, -219/+354)
| | * use shared function from processing for creating dummy mask when training inpainting model (AUTOMATIC, 2023-01-04, 1 file, -24/+9)
| | * fix the merge (AUTOMATIC, 2023-01-04, 1 file, -9/+5)
| | * Merge branch 'master' into inpaint_textual_inversion (AUTOMATIC1111, 2023-01-04, 5 files, -285/+439)
| | | * Merge pull request #6253 from Shondoit/ti-optim (AUTOMATIC1111, 2023-01-04, 1 file, -8/+32)
    Save Optimizer next to TI embedding
| | | | * Save Optimizer next to TI embedding (Shondoit, 2023-01-03, 1 file, -8/+32)
    Also add a check to load only .PT and .BIN files as embeddings (since we add .optim files in the same directory).
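Because the optimizer state is written to a companion .optim file in the same directory, the loader has to restrict itself to known embedding extensions. A rough sketch of both halves; the file naming and function names are illustrative, not the webui's exact scheme.

```python
# Rough sketch: save the optimizer state next to the embedding in a ".optim"
# companion file, and only treat known extensions as embeddings when loading.
import os
import torch


def save_embedding_with_optimizer(embedding: dict, optimizer: torch.optim.Optimizer, path: str) -> None:
    torch.save(embedding, path)
    # companion optimizer file; exact naming is an assumption for illustration
    torch.save(optimizer.state_dict(), f"{path}.optim")


def is_loadable_embedding(filename: str) -> bool:
    return os.path.splitext(filename)[1].lower() in {".pt", ".bin"}
```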
| | | * add job info to modules (Vladimir Mandic, 2023-01-03, 2 files, -0/+2)