Commit message | Author | Age | Files | Lines |
---|---|---|---|---|
* | fix whitespace for #13084 | AUTOMATIC1111 | 2023-09-09 | 1 | -1/+1 |
* | Fix #13080 - Hypernetwork/TI preview generation (fixes sampler name reference; same patch will be done for TI) | AngelBottomless | 2023-09-05 | 1 | -2/+2 |
* | resolve some of circular import issues for kohaku | AUTOMATIC1111 | 2023-08-04 | 1 | -3/+2 |
* | get attention optimizations to work | AUTOMATIC1111 | 2023-07-13 | 1 | -1/+1 |
* | Use closing() with processing classes everywhere (follows up on #11569) | Aarni Koskela | 2023-07-10 | 1 | -2/+4 |
* | Remove a bunch of unused/vestigial code (as found by Vulture and some eyes) | Aarni Koskela | 2023-06-05 | 1 | -24/+0 |
* | rename print_error to report, use it together with package name | AUTOMATIC | 2023-05-31 | 1 | -4/+3 |
* | Add & use modules.errors.print_error where currently printing exception info by hand | Aarni Koskela | 2023-05-29 | 1 | -9/+5 |
* | Autofix Ruff W (not W605) (mostly whitespace) | Aarni Koskela | 2023-05-11 | 1 | -6/+6 |
* | suggestions and fixes from the PR | AUTOMATIC | 2023-05-10 | 1 | -2/+2 |
* | fixes for B007 | AUTOMATIC | 2023-05-10 | 1 | -6/+6 |
* | ruff auto fixes | AUTOMATIC | 2023-05-10 | 2 | -3/+3 |
* | imports cleanup for ruff | AUTOMATIC | 2023-05-10 | 2 | -4/+1 |
* | sort hypernetworks and checkpoints by name | AUTOMATIC | 2023-03-28 | 1 | -1/+1 |
* | Merge branch 'master' into weighted-learning | AUTOMATIC1111 | 2023-02-19 | 1 | -2/+2 |
|\ | |||||
| * | Support for hypernetworks with --upcast-sampling | brkirch | 2023-02-06 | 1 | -2/+2 |
* | | Add ability to choose using weighted loss or not | Shondoit | 2023-02-15 | 1 | -4/+9 |
* | | Call weighted_forward during training | Shondoit | 2023-02-15 | 1 | -1/+2 |
|/ | |||||
* | add --no-hashing | AUTOMATIC | 2023-02-04 | 1 | -1/+1 |
* | enable compact view for train tab (prevent previews from ruining hypernetwork training) | AUTOMATIC | 2023-01-21 | 1 | -0/+2 |
* | extra networks UI (rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>) | AUTOMATIC | 2023-01-21 | 2 | -35/+77 |
* | add option to show/hide warnings (removed hiding warnings from LDSR; fixed/reworked a few places that produced warnings) | AUTOMATIC | 2023-01-18 | 1 | -1/+6 |
* | Fix tensorboard related functions | aria1th | 2023-01-15 | 1 | -7/+6 |
* | Fix loss_dict problem | aria1th | 2023-01-15 | 1 | -1/+3 |
* | fix missing 'mean loss' for tensorboard integration | AngelBottomless | 2023-01-15 | 1 | -1/+1 |
| | |||||
* | big rework of progressbar/preview system to allow multiple users to prompt at the same time without getting previews of each other | AUTOMATIC | 2023-01-15 | 1 | -3/+3 |
* | change hypernets to use sha256 hashes | AUTOMATIC | 2023-01-14 | 1 | -17/+23 |
* | change hash to sha256 | AUTOMATIC | 2023-01-14 | 1 | -2/+2 |
* | Merge branch 'master' into tensorboard | AUTOMATIC1111 | 2023-01-13 | 2 | -165/+491 |
|\ | |||||
| * | set descriptions | Vladimir Mandic | 2023-01-11 | 1 | -1/+3 |
| * | Variable dropout rate (implements variable dropout rate from #4549; prevents the hypernetwork multiplier from being modified during training; also guards against user error by setting the multiplier to lower values for training; renames a function to match the torch.nn.Module standard; fixes an RNG reset issue when generating previews by restoring RNG state) | aria1th | 2023-01-10 | 2 | -27/+78 |
| * | make a dropdown for prompt template selection | AUTOMATIC | 2023-01-09 | 1 | -2/+5 |
| * | Move batchsize check | dan | 2023-01-07 | 1 | -1/+1 |
| * | Add checkbox for variable training dims | dan | 2023-01-07 | 1 | -1/+1 |
| * | rework saving training params to file #6372 | AUTOMATIC | 2023-01-06 | 1 | -21/+7 |
| * | Include model in log file. Exclude directory. | timntorres | 2023-01-05 | 1 | -18/+10 |
| * | Clean up ti, add same behavior to hypernetwork. | timntorres | 2023-01-05 | 1 | -1/+30 |
| * | Merge branch 'master' into gradient-clipping | AUTOMATIC1111 | 2023-01-04 | 2 | -166/+206 |
| |\ | |||||
| | * | add job info to modules | Vladimir Mandic | 2023-01-03 | 1 | -0/+1 |
| | * | Merge pull request #5992 from yuvalabou/F541 (Fix F541: f-string without any placeholders) | AUTOMATIC1111 | 2022-12-25 | 1 | -2/+2 |
| | |\
| | | * | fix F541 f-string without any placeholders | Yuval Aboulafia | 2022-12-24 | 1 | -2/+2 |
| | * | | implement train api | Vladimir Mandic | 2022-12-24 | 2 | -27/+30 |
| | |/ | |||||
| | * | Merge branch 'master' into racecond_fix | AUTOMATIC1111 | 2022-12-03 | 2 | -131/+217 |
| | |\ | |||||
| | | * | Use devices.autocast instead of torch.autocast | brkirch | 2022-11-30 | 1 | -1/+1 |
| | | * | last_layer_dropout default to False | flamelaw | 2022-11-23 | 1 | -1/+1 |
| | | * | fix dropout, implement train/eval mode | flamelaw | 2022-11-23 | 1 | -6/+18 |
| | | * | small fixes | flamelaw | 2022-11-22 | 1 | -3/+3 |
| | | * | fix pin_memory with different latent sampling method | flamelaw | 2022-11-21 | 1 | -1/+4 |
| | | * | Gradient accumulation, autocast fix, new latent sampling method, etc | flamelaw | 2022-11-20 | 1 | -123/+146 |
| | | * | change StableDiffusionProcessing to internally use sampler name instead of sampler index | AUTOMATIC | 2022-11-19 | 1 | -2/+2 |
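
The `closing()` pattern referenced in the 2023-07-10 commit ("Use closing() with processing classes everywhere") can be sketched as follows. The `Processing` class below is a hypothetical stand-in for illustration, not the repository's actual `StableDiffusionProcessing`; only the use of `contextlib.closing` itself reflects the commit:

```python
from contextlib import closing

class Processing:
    """Hypothetical stand-in for a processing object that owns resources."""
    def __init__(self):
        self.closed = False

    def run(self):
        # Do the actual work while resources are held.
        return "result"

    def close(self):
        # Release held resources; closing() guarantees this is called.
        self.closed = True

p = Processing()
with closing(p):  # p.close() runs on scope exit, even if run() raises
    result = p.run()
```

The benefit over calling `close()` manually is that cleanup happens even when the body raises, without the class needing to implement `__enter__`/`__exit__` itself.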