Age | Commit message | Author | Lines
---|---|---|---
2022-10-12 | xy_grid: Find hypernetwork by closest name | Milly | -0/+11 | |
2022-10-12 | Merge branch 'master' into feature/scale_to | AUTOMATIC1111 | -259/+2003 | |
2022-10-12 | Ensure the directory exists before saving to it | brkirch | -0/+2 | |
The directory for the images saved with the Save button may still not exist, so it needs to be created prior to opening the log.csv file. | ||||
2022-10-12 | just add the deepdanbooru settings unconditionally | AUTOMATIC | -9/+4 | |
2022-10-12 | Merge remote-tracking branch 'origin/steve3d' | AUTOMATIC | -1/+1 | |
2022-10-12 | Merge pull request #2143 from JC-Array/deepdanbooru_pre_process | AUTOMATIC1111 | -27/+114 | |
deepbooru tags for textual inversion preprocessing | ||||
2022-10-12 | Truncate error text to fix service lockup / stall | Greg Fuller | -1/+8 | |
What: update wrap_gradio_call to cap the maximum amount of error text it outputs. Why: wrap_gradio_call prints the arguments passed to the failing function; if that function is save_image, the entire image is printed to stderr, and for a large image this can lock up the service while it prints. Large images are easy to produce with the x/y plot script, and image save exceptions are easy to hit (e.g. the output directory does not exist or cannot be written to, or the file is too big). The huge amount of log spam is confusing and not particularly helpful. | ||||
2022-10-12 | create dir for hypernetworks | AUTOMATIC | -0/+1 | |
2022-10-12 | Update shared.py | supersteve3d | -1/+1 | |
Correct typo to "Unload VAE and CLIP from VRAM when training" in settings tab. | ||||
2022-10-11 | resolved conflicts, moved settings under interrogate section, settings only show if deepbooru flag is enabled | JC_Array | -13/+12 | |
2022-10-11 | Merge branch 'AUTOMATIC1111:master' into deepdanbooru_pre_process | JC-Array | -195/+736 | |
2022-10-11 | reports that training with medvram is possible. | AUTOMATIC | -2/+2 | |
2022-10-11 | apply lr schedule to hypernets | AUTOMATIC | -45/+54 | |
2022-10-11 | Merge branch 'learning_rate-scheduling' into learnschedule | AUTOMATIC1111 | -223/+1787 | |
2022-10-11 | prevent extra modules from being saved/loaded with hypernet | AUTOMATIC | -1/+1 | |
2022-10-11 | add an option to unload models during hypernetwork training to save VRAM | AUTOMATIC | -18/+46 | |
2022-10-11 | produce error when training with medvram/lowvram enabled | AUTOMATIC | -0/+5 | |
2022-10-11 | removed unneeded print | JC_Array | -1/+0 | |
2022-10-11 | add option to select hypernetwork modules when creating | AUTOMATIC | -4/+6 | |
2022-10-11 | Merge pull request #2201 from alg-wiki/textual__inversion | AUTOMATIC1111 | -10/+11 | |
Textual Inversion: Preprocess and Training will only pick up image files instead | ||||
2022-10-11 | Use apply_hypernetwork function | brkirch | -10/+4 | |
2022-10-11 | Add InvokeAI and lstein to credits, add back CUDA support | brkirch | -0/+13 | |
2022-10-11 | Add check for psutil | brkirch | -6/+23 | |
2022-10-11 | Add cross-attention optimization from InvokeAI | brkirch | -3/+86 | |
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS) * Add command line option for it * Make it default when CUDA is unavailable | ||||
2022-10-11 | Merge pull request #2227 from papuSpartan/master | AUTOMATIC1111 | -0/+1 | |
Refresh list of models/ckpts upon hitting restart gradio in the setti… | ||||
2022-10-11 | become even stricter with pickles | AUTOMATIC | -0/+17 | |
no pickle shall pass. Thank you again, RyotaK | ||||
2022-10-11 | move list refresh to webui.py and add stdout indicating it's doing so | papuSpartan | -3/+0 | |
2022-10-11 | more renames | AUTOMATIC | -4/+4 | |
2022-10-11 | rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have | AUTOMATIC | -5/+5 | |
2022-10-11 | Added new line at the end of ngrok.py | JamnedZ | -1/+1 | |
2022-10-11 | Cleaned ngrok integration | JamnedZ | -0/+21 | |
2022-10-11 | add a space holder | Ben | -1/+4 | |
2022-10-11 | Layout fix | Ben | -14/+14 | |
2022-10-11 | Fix typo in comments | Martin Cairns | -1/+1 | |
2022-10-11 | Remove debug code for checking that first sigma value is same after code cleanup | Martin Cairns | -1/+0 | |
2022-10-11 | Handle different parameters for DPM fast & adaptive | Martin Cairns | -7/+18 | |
2022-10-11 | fixes related to merge | AUTOMATIC | -147/+73 | |
2022-10-11 | Removed my local edits to checkpoint image generation | alg-wiki | -5/+2 | |
2022-10-11 | Switched to exception handling | alg-wiki | -18/+18 | |
2022-10-11 | Merge branch 'master' into hypernetwork-training | AUTOMATIC | -183/+1853 | |
2022-10-11 | replace duplicate code with a function | AUTOMATIC | -38/+29 | |
2022-10-11 | Comma backtrack padding (#2192) | hentailord85ez | -1/+19 | |
Comma backtrack padding | ||||
2022-10-11 | Added slider for deepbooru score threshold in settings | Kenneth | -1/+2 | |
2022-10-11 | Make the ctrl+enter shortcut use the generate button on the current tab | Jairo Correa | -1/+1 | |
2022-10-10 | Refresh list of models/ckpts upon hitting restart gradio in the settings pane | papuSpartan | -0/+4 | |
2022-10-10 | added alpha sort and threshold variables to create process method in preprocessing | JC_Array | -1/+1 | |
2022-10-10 | Merge branch 'deepdanbooru_pre_process' into master | JC-Array | -22/+111 | |
2022-10-10 | added deepbooru settings (threshold and sort by alpha or likelihood) | JC_Array | -11/+31 | |
2022-10-10 | corrected tag return in get_deepbooru_tags | JC_Array | -1/+0 | |
2022-10-10 | import time was missing; added to deepbooru, fixing error on get_deepbooru_tags | JC_Array | -0/+1 | |