path: root/modules/hypernetworks/hypernetwork.py
Commit message (author, date, files changed, -deleted/+added lines)
* sort hypernetworks and checkpoints by name (AUTOMATIC, 2023-03-28, 1 file, -1/+1)
|
* Merge branch 'master' into weighted-learning (AUTOMATIC1111, 2023-02-19, 1 file, -2/+2)
|\
| * Support for hypernetworks with --upcast-sampling (brkirch, 2023-02-06, 1 file, -2/+2)
| |
* | Add ability to choose using weighted loss or not (Shondoit, 2023-02-15, 1 file, -4/+9)
| |
* | Call weighted_forward during training (Shondoit, 2023-02-15, 1 file, -1/+2)
|/
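The two weighted-learning commits above add the option to scale each element's training loss by a per-element weight before averaging. A minimal, framework-free sketch of that idea; the name `weighted_mean_loss` is hypothetical, not taken from the repository (the actual code accumulates torch tensors via `weighted_forward`):

```python
def weighted_mean_loss(errors, weights):
    """Weighted mean of squared errors.

    Each element's squared error is multiplied by its weight before
    averaging, so high-weight regions dominate the gradient. With all
    weights equal to 1 this reduces to the ordinary mean squared error.
    """
    if len(errors) != len(weights):
        raise ValueError("errors and weights must have the same length")
    total = sum(w * e * e for e, w in zip(errors, weights))
    return total / sum(weights)

print(weighted_mean_loss([1.0, 2.0], [1.0, 1.0]))  # 2.5 (plain MSE)
print(weighted_mean_loss([1.0, 2.0], [0.0, 1.0]))  # 4.0 (only the second element counts)
```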
* add --no-hashing (AUTOMATIC, 2023-02-04, 1 file, -1/+1)
|
* enable compact view for train tab (AUTOMATIC, 2023-01-21, 1 file, -0/+2)
|       prevent previews from ruining hypernetwork training
* extra networks UI (AUTOMATIC, 2023-01-21, 1 file, -32/+75)
|       rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
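The `<hypernet:name:weight>` syntax above is the new way to activate a hypernetwork from the prompt. A hedged sketch of how such tags could be pulled out of a prompt string; the regex and function name are illustrative, the real webui uses its own extra-networks parser:

```python
import re

# Matches tags of the form <hypernet:name:weight>, per the commit's syntax.
TAG_RE = re.compile(r"<hypernet:(?P<name>[^:>]+):(?P<weight>[0-9.]+)>")

def extract_hypernet_tags(prompt):
    """Return (cleaned_prompt, [(name, weight), ...])."""
    tags = [(m.group("name"), float(m.group("weight")))
            for m in TAG_RE.finditer(prompt)]
    cleaned = TAG_RE.sub("", prompt).strip()
    return cleaned, tags

print(extract_hypernet_tags("a photo <hypernet:anime:0.8>"))
# ('a photo', [('anime', 0.8)])
```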
* add option to show/hide warnings (AUTOMATIC, 2023-01-18, 1 file, -1/+6)
|       removed hiding warnings from LDSR; fixed/reworked a few places that produced warnings
* Fix tensorboard related functions (aria1th, 2023-01-15, 1 file, -7/+6)
|
* Fix loss_dict problem (aria1th, 2023-01-15, 1 file, -1/+3)
|
* fix missing 'mean loss' for tensorboard integration (AngelBottomless, 2023-01-15, 1 file, -1/+1)
|
* big rework of progressbar/preview system to allow multiple users to prompt at the same time and not get previews of each other (AUTOMATIC, 2023-01-15, 1 file, -3/+3)
* change hypernets to use sha256 hashes (AUTOMATIC, 2023-01-14, 1 file, -17/+23)
|
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -2/+2)
|
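The two sha256 commits above switch hypernetwork identification to SHA-256 digests of the model files. Streaming a file through `hashlib` in chunks is the standard way to hash large checkpoint files without loading them into memory; the helper name below is illustrative, not the repository's:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks
    so large model files never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```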
* Merge branch 'master' into tensorboard (AUTOMATIC1111, 2023-01-13, 1 file, -143/+484)
|\
| * set descriptions (Vladimir Mandic, 2023-01-11, 1 file, -1/+3)
| |
| * Variable dropout rate (aria1th, 2023-01-10, 1 file, -25/+76)
| |     Implements variable dropout rate from #4549.
| |     Fixes the hypernetwork multiplier being able to be modified during training; also fixes user errors by setting the multiplier to lower values for training.
| |     Changes a function name to match the torch.nn.Module standard.
| |     Fixes the RNG reset issue when generating previews by restoring the RNG state.
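The RNG fix mentioned in this commit captures the generator state before a side task (preview generation) and restores it afterwards, so training randomness is unaffected. A sketch of the same idea using the stdlib RNG instead of torch's (the context-manager name is hypothetical):

```python
import random
from contextlib import contextmanager

@contextmanager
def preserved_rng_state():
    """Run a side task (e.g. preview generation) without disturbing
    the main RNG stream: capture the state, restore it on exit."""
    state = random.getstate()
    try:
        yield
    finally:
        random.setstate(state)

random.seed(0)
expected = random.random()   # the draw training would have made

random.seed(0)
with preserved_rng_state():
    random.random()          # a "preview" consumes randomness...
print(random.random() == expected)  # True: the main stream is intact
```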
| * make a dropdown for prompt template selection (AUTOMATIC, 2023-01-09, 1 file, -2/+5)
| |
| * Move batchsize check (dan, 2023-01-07, 1 file, -1/+1)
| |
| * Add checkbox for variable training dims (dan, 2023-01-07, 1 file, -1/+1)
| |
| * rework saving training params to file #6372 (AUTOMATIC, 2023-01-06, 1 file, -21/+7)
| |
| * Include model in log file. Exclude directory. (timntorres, 2023-01-05, 1 file, -18/+10)
| |
| * Clean up ti, add same behavior to hypernetwork. (timntorres, 2023-01-05, 1 file, -1/+30)
| |
| * Merge branch 'master' into gradient-clipping (AUTOMATIC1111, 2023-01-04, 1 file, -139/+202)
| |\
| | * add job info to modules (Vladimir Mandic, 2023-01-03, 1 file, -0/+1)
| | |
| | * Merge pull request #5992 from yuvalabou/F541 (AUTOMATIC1111, 2022-12-25, 1 file, -2/+2)
| | |\
| | | |     Fix F541: f-string without any placeholders
| | | * fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -2/+2)
| | | |
| | * | implement train api (Vladimir Mandic, 2022-12-24, 1 file, -0/+26)
| | |/
| | * Merge branch 'master' into racecond_fix (AUTOMATIC1111, 2022-12-03, 1 file, -130/+216)
| | |\
| | | * Use devices.autocast instead of torch.autocast (brkirch, 2022-11-30, 1 file, -1/+1)
| | | |
| | | * last_layer_dropout default to False (flamelaw, 2022-11-23, 1 file, -1/+1)
| | | |
| | | * fix dropout, implement train/eval mode (flamelaw, 2022-11-23, 1 file, -6/+18)
| | | |
| | | * small fixes (flamelaw, 2022-11-22, 1 file, -3/+3)
| | | |
| | | * fix pin_memory with different latent sampling method (flamelaw, 2022-11-21, 1 file, -1/+4)
| | | |
| | | * Gradient accumulation, autocast fix, new latent sampling method, etc (flamelaw, 2022-11-20, 1 file, -123/+146)
| | | |
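The gradient-accumulation commit above lets training simulate a larger batch by summing gradients across several micro-batches and stepping once on their mean. A plain-Python sketch of that control flow under stated assumptions (scalar parameter, precomputed gradients, plain SGD); the real code accumulates torch gradients inside the training loop:

```python
def sgd_with_accumulation(grads, lr=0.1, accum_steps=4, param=0.0):
    """Apply one SGD update per `accum_steps` micro-batch gradients,
    stepping on their mean - numerically like one large batch."""
    buffer = 0.0
    for i, g in enumerate(grads, start=1):
        buffer += g
        if i % accum_steps == 0:
            param -= lr * (buffer / accum_steps)  # step on the mean gradient
            buffer = 0.0                          # then reset, as optimizers zero grads
    return param

print(sgd_with_accumulation([1.0, 1.0, 1.0, 1.0], lr=0.1, accum_steps=4))  # -0.1
```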
| | | * change StableDiffusionProcessing to internally use sampler name instead of sampler index (AUTOMATIC, 2022-11-19, 1 file, -2/+2)
| | | |
| | * | Fixes race condition in training when VAE is unloaded (Fampai, 2022-11-04, 1 file, -0/+4)
| | | |     set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
| * | | Merge branch 'master' into gradient-clipping (Muhammad Rizqi Nur, 2022-11-07, 1 file, -6/+53)
| |\ \ \
| | | |/
| | |/|
| | * | rework the code to not use the walrus operator because colab's 3.7 does not support it (AUTOMATIC, 2022-11-05, 1 file, -2/+5)
| | | |
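The walrus-operator commit above works around Python 3.7 (Colab's then-default interpreter), where the `:=` assignment expression from PEP 572 is a syntax error; the fix is to hoist the assignment out into a separate statement. An illustrative before/after, not the repository's actual diff:

```python
# Python 3.8+ form: assign and test in one expression.
#
#   if (n := len(data)) > 2:
#       print(n)

# Python 3.7-compatible rewrite: assignment becomes its own statement.
data = [1, 2, 3]
n = len(data)
if n > 2:
    print(n)  # 3
```

Both forms are equivalent; the rewrite merely trades one line of brevity for compatibility with the older interpreter.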
| | * | Merge pull request #4273 from Omegastick/ordered_hypernetworks (AUTOMATIC1111, 2022-11-05, 1 file, -1/+1)
| | |\ \
| | | | |     Sort hypernetworks list
| | | * | Sort straight out of the glob (Isaac Poulton, 2022-11-04, 1 file, -2/+2)
| | | | |
| | | * | Sort hypernetworks (Isaac Poulton, 2022-11-04, 1 file, -1/+1)
| | | |/
| | * | only save if option is enabled (aria1th, 2022-11-04, 1 file, -1/+1)
| | | |
| | * | split before declaring file name (aria1th, 2022-11-04, 1 file, -1/+1)
| | | |
| | * | apply (aria1th, 2022-11-04, 1 file, -5/+49)
| | |/
| * | Simplify grad clip (Muhammad Rizqi Nur, 2022-11-05, 1 file, -9/+7)
| | |
| * | Merge branch 'master' into gradient-clipping (Muhammad Rizqi Nur, 2022-11-04, 1 file, -12/+24)
| |\|
| | * Merge branch 'master' into hn-activation (AUTOMATIC1111, 2022-11-04, 1 file, -33/+56)
| | |\
| | * | Revert unresolved changes in Bias initialization (AngelBottomless, 2022-10-27, 1 file, -1/+1)
| | | |     it should be zeros_ or properly parameterized in the future.