path: root/modules/hypernetworks
Commit message | Author | Date | Files | Lines
* fix whitespace for #13084 (AUTOMATIC1111, 2023-09-09, 1 file, -1/+1)
|
* Fix #13080 - Hypernetwork/TI preview generation (AngelBottomless, 2023-09-05, 1 file, -2/+2)
|   Fixes sampler name reference. The same patch will be done for TI.
* resolve some of circular import issues for kohaku (AUTOMATIC1111, 2023-08-04, 1 file, -3/+2)
|
* get attention optimizations to work (AUTOMATIC1111, 2023-07-13, 1 file, -1/+1)
|
* Use closing() with processing classes everywhere (Aarni Koskela, 2023-07-10, 1 file, -2/+4)
|   Follows up on #11569
* Remove a bunch of unused/vestigial code (Aarni Koskela, 2023-06-05, 1 file, -24/+0)
|   As found by Vulture and some eyes
* rename print_error to report, use it together with package name (AUTOMATIC, 2023-05-31, 1 file, -4/+3)
|
* Add & use modules.errors.print_error where currently printing exception info by hand (Aarni Koskela, 2023-05-29, 1 file, -9/+5)
|
* Autofix Ruff W (not W605) (mostly whitespace) (Aarni Koskela, 2023-05-11, 1 file, -6/+6)
|
* suggestions and fixes from the PR (AUTOMATIC, 2023-05-10, 1 file, -2/+2)
|
* fixes for B007 (AUTOMATIC, 2023-05-10, 1 file, -6/+6)
|
* ruff auto fixes (AUTOMATIC, 2023-05-10, 2 files, -3/+3)
|
* imports cleanup for ruff (AUTOMATIC, 2023-05-10, 2 files, -4/+1)
|
* sort hypernetworks and checkpoints by name (AUTOMATIC, 2023-03-28, 1 file, -1/+1)
|
* Merge branch 'master' into weighted-learning (AUTOMATIC1111, 2023-02-19, 1 file, -2/+2)
|\
| * Support for hypernetworks with --upcast-sampling (brkirch, 2023-02-06, 1 file, -2/+2)
| |
* | Add ability to choose using weighted loss or not (Shondoit, 2023-02-15, 1 file, -4/+9)
| |
* | Call weighted_forward during training (Shondoit, 2023-02-15, 1 file, -1/+2)
|/
* add --no-hashing (AUTOMATIC, 2023-02-04, 1 file, -1/+1)
|
* enable compact view for train tab (AUTOMATIC, 2023-01-21, 1 file, -0/+2)
|   prevent previews from ruining hypernetwork training
* extra networks UI (AUTOMATIC, 2023-01-21, 2 files, -35/+77)
|   rework of hypernets: rather than via settings, hypernets are added directly to prompt as <hypernet:name:weight>
* add option to show/hide warnings (AUTOMATIC, 2023-01-18, 1 file, -1/+6)
|   removed hiding of warnings from LDSR; fixed/reworked a few places that produced warnings
* Fix tensorboard related functions (aria1th, 2023-01-15, 1 file, -7/+6)
|
* Fix loss_dict problem (aria1th, 2023-01-15, 1 file, -1/+3)
|
* fix missing 'mean loss' for tensorboard integration (AngelBottomless, 2023-01-15, 1 file, -1/+1)
|
* big rework of progressbar/preview system to allow multiple users to prompt at the same time and not get previews of each other (AUTOMATIC, 2023-01-15, 1 file, -3/+3)
|
* change hypernets to use sha256 hashes (AUTOMATIC, 2023-01-14, 1 file, -17/+23)
|
* change hash to sha256 (AUTOMATIC, 2023-01-14, 1 file, -2/+2)
|
* Merge branch 'master' into tensorboard (AUTOMATIC1111, 2023-01-13, 2 files, -165/+491)
|\
| * set descriptions (Vladimir Mandic, 2023-01-11, 1 file, -1/+3)
| |
| * Variable dropout rate (aria1th, 2023-01-10, 2 files, -27/+78)
| |   Implements variable dropout rate from #4549.
| |   Fixes the hypernetwork multiplier being able to be modified during training; also prevents user errors by setting the multiplier to lower values for training.
| |   Changes a function name to match the torch.nn.Module standard.
| |   Fixes an RNG reset issue when generating previews by restoring RNG state.
| * make a dropdown for prompt template selection (AUTOMATIC, 2023-01-09, 1 file, -2/+5)
| |
| * Move batchsize check (dan, 2023-01-07, 1 file, -1/+1)
| |
| * Add checkbox for variable training dims (dan, 2023-01-07, 1 file, -1/+1)
| |
| * rework saving training params to file #6372 (AUTOMATIC, 2023-01-06, 1 file, -21/+7)
| |
| * Include model in log file. Exclude directory. (timntorres, 2023-01-05, 1 file, -18/+10)
| |
| * Clean up ti, add same behavior to hypernetwork. (timntorres, 2023-01-05, 1 file, -1/+30)
| |
| * Merge branch 'master' into gradient-clipping (AUTOMATIC1111, 2023-01-04, 2 files, -166/+206)
| |\
| | * add job info to modules (Vladimir Mandic, 2023-01-03, 1 file, -0/+1)
| | |
| | * Merge pull request #5992 from yuvalabou/F541 (AUTOMATIC1111, 2022-12-25, 1 file, -2/+2)
| | |\
| | | |   Fix F541: f-string without any placeholders
| | | * fix F541 f-string without any placeholders (Yuval Aboulafia, 2022-12-24, 1 file, -2/+2)
| | | |
| | * | implement train api (Vladimir Mandic, 2022-12-24, 2 files, -27/+30)
| | |/
| | * Merge branch 'master' into racecond_fix (AUTOMATIC1111, 2022-12-03, 2 files, -131/+217)
| | |\
| | | * Use devices.autocast instead of torch.autocast (brkirch, 2022-11-30, 1 file, -1/+1)
| | | |
| | | * last_layer_dropout default to False (flamelaw, 2022-11-23, 1 file, -1/+1)
| | | |
| | | * fix dropout, implement train/eval mode (flamelaw, 2022-11-23, 1 file, -6/+18)
| | | |
| | | * small fixes (flamelaw, 2022-11-22, 1 file, -3/+3)
| | | |
| | | * fix pin_memory with different latent sampling method (flamelaw, 2022-11-21, 1 file, -1/+4)
| | | |
| | | * Gradient accumulation, autocast fix, new latent sampling method, etc (flamelaw, 2022-11-20, 1 file, -123/+146)
| | | |
| | | * change StableDiffusionProcessing to internally use sampler name instead of sampler index (AUTOMATIC, 2022-11-19, 1 file, -2/+2)