path: root/modules/sd_hijack.py
Age         Commit message                                              Author          Lines
2022-12-10  cleanup some unneeded imports for hijack files  (AUTOMATIC)  -8/+2
2022-12-10  do not replace entire unet for the resolution hack  (AUTOMATIC)  -2/+3
2022-12-10  Merge pull request #4978 from aliencaocao/support_any_resolution  (AUTOMATIC1111)  -0/+1
            Patch UNet Forward to support resolutions that are not multiples of 64
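For the resolution patch above, a minimal sketch of the usual padding approach: pad the latent before the UNet forward so the downsampling stack divides evenly, then crop afterwards. The wrapper name and the multiple are assumptions, not PR #4978's actual code.

```python
import math
import torch.nn.functional as F


def make_forward_with_padding(original_forward, multiple=8):
    """Wrap UNetModel.forward so odd resolutions work (8 latent units = 64 px)."""
    def forward(x, *args, **kwargs):
        h, w = x.shape[-2:]
        pad_h = math.ceil(h / multiple) * multiple - h
        pad_w = math.ceil(w / multiple) * multiple - w
        if pad_h or pad_w:
            # pad on the bottom/right so every downsample stage divides evenly
            x = F.pad(x, (0, pad_w, 0, pad_h), mode="reflect")
        out = original_forward(x, *args, **kwargs)
        return out[..., :h, :w]  # crop back to the requested size
    return forward
```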
2022-12-03  move #5216 to the extension  (AUTOMATIC)  -1/+1
2022-12-03  Merge remote-tracking branch 'wywywywy/autoencoder-hijack'  (AUTOMATIC)  -1/+1
2022-12-03  Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes  (AUTOMATIC1111)  -5/+1
            Use devices.autocast() and fix MPS randn issues
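A hedged sketch of the two ideas in the merge above: a device-aware autocast helper, and drawing random noise on the CPU to sidestep MPS randn problems. The names mirror the repo's devices module, but the bodies here are assumptions.

```python
import contextlib

import torch


def autocast():
    # enable autocast only where it is supported; otherwise a no-op context
    if torch.cuda.is_available():
        return torch.autocast("cuda")
    return contextlib.nullcontext()


def randn(seed, shape, device):
    # MPS randn was unreliable, so draw on the CPU and move the result over
    generator = torch.Generator(device="cpu").manual_seed(int(seed))
    return torch.randn(shape, generator=generator, device="cpu").to(device)
```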
2022-12-02  Fixed AttributeError where openaimodel is not found  (SmirkingFace)  -0/+1
2022-11-29  Add autoencoder to sd_hijack  (wywywywy)  -1/+1
2022-11-28  Refactor to check whether MPS is in use, not merely available  (brkirch)  -5/+1
2022-11-27  Merge remote-tracking branch 'flamelaw/master'  (AUTOMATIC)  -2/+7
2022-11-27  Merge branch 'master' into support_any_resolution  (Billy Cao)  -269/+29
2022-11-26  restore hypernetworks to seemingly working state  (AUTOMATIC)  -1/+2
2022-11-26  Add support for Stable Diffusion 2.0  (AUTOMATIC)  -269/+28
2022-11-23  Patch UNet Forward to support resolutions that are not multiples of 64  (Billy Cao)  -0/+2
            Also modified the UI to no longer step in increments of 64
2022-11-20  Gradient accumulation, autocast fix, new latent sampling method, etc.  (flamelaw)  -2/+7
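The gradient accumulation mentioned above follows the standard PyTorch pattern: scale each micro-batch loss, let gradients sum across backward calls, and step the optimizer every N batches. A self-contained toy sketch; the model, data, and step counts are illustrative, not flamelaw's code.

```python
import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
accumulation_steps = 4  # effective batch = 4 micro-batches

for step in range(16):
    batch, target = torch.randn(8, 4), torch.zeros(8, 1)
    loss = nn.functional.mse_loss(model(batch), target)
    (loss / accumulation_steps).backward()  # gradients add up across calls
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()        # one optimizer step per N micro-batches
        optimizer.zero_grad()
```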
2022-11-18  cleanly undo circular hijack #4818  (killfrenzy96)  -1/+1
2022-11-12  use the new devices.has_mps() function in register_buffer for DDIM/PLMS fix for OSX  (AUTOMATIC)  -2/+1
2022-11-11  move DDIM/PLMS fix for OSX out of the file with inpainting code  (AUTOMATIC)  -0/+23
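The DDIM/PLMS fix referenced in the two entries above works around missing float64 support on Apple's MPS backend: the samplers' register_buffer is replaced so buffers are cast to float32 before being moved to the device. A hedged sketch; the details are assumed.

```python
import torch


def register_buffer(self, name, attr):
    """Replacement for the samplers' register_buffer on macOS."""
    if isinstance(attr, torch.Tensor) and attr.device != torch.device("mps"):
        # MPS cannot hold float64 buffers, so cast before moving them over
        attr = attr.to(device="mps", dtype=torch.float32)
    setattr(self, name, attr)
```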
2022-11-01  Unload sd_model before loading the other  (Jairo Correa)  -0/+4
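Unloading one checkpoint before loading another frees VRAM for the incoming weights. A minimal sketch of the idea; the function name and exact steps are assumptions.

```python
import gc

import torch


def unload_model(model):
    model.to("cpu")               # move the weights out of VRAM first
    gc.collect()                  # drop Python-side references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand the freed memory back to the driver
```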
2022-10-22  removed aesthetic gradients as built-in  (AUTOMATIC)  -1/+0
            added support for extensions
2022-10-21  make aesthetic embedding compatible with prompts longer than 75 tokens  (AUTOMATIC)  -1/+1
2022-10-21  Merge branch 'ae'  (AUTOMATIC)  -15/+15
2022-10-18  Update sd_hijack.py  (C43H66N12O12S2)  -1/+1
2022-10-18  use legacy attnblock  (C43H66N12O12S2)  -1/+1
2022-10-16  ui fix, reorganization of the code  (MalumaDev)  -97/+5
2022-10-16  ui fix  (MalumaDev)  -1/+1
2022-10-16  ui fix  (MalumaDev)  -2/+1
2022-10-16  Merge remote-tracking branch 'origin/test_resolve_conflicts' into test_resolve_conflicts  (MalumaDev)  -2/+2
2022-10-16  fixed dropbox update  (MalumaDev)  -2/+2
2022-10-16  Merge branch 'master' into test_resolve_conflicts  (MalumaDev)  -2/+2
2022-10-15  Update sd_hijack.py  (C43H66N12O12S2)  -1/+1
2022-10-15  fix token length, add embedding generator, add new features to edit the embedding before generation using text  (MalumaDev)  -38/+73
2022-10-14  init  (MalumaDev)  -2/+78
2022-10-12  fix iterator bug for #2295  (AUTOMATIC)  -4/+4
2022-10-12  Account for when lines are mismatched  (hentailord85ez)  -1/+11
2022-10-11  Add check for psutil  (brkirch)  -2/+8
2022-10-11  Add cross-attention optimization from InvokeAI  (brkirch)  -1/+4
            * Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
            * Add command line option for it
            * Make it default when CUDA is unavailable
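The InvokeAI optimization computes attention in slices so that only part of the attention map is ever materialized, which caps peak memory and is what makes it attractive on MPS. A simplified sketch; the slice sizing and layout are assumptions, not InvokeAI's exact heuristic.

```python
import torch


def sliced_attention(q, k, v, slice_size=1024):
    """q, k, v: (batch*heads, tokens, dim); softmax(QK^T/sqrt(d))V in slices."""
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    for i in range(0, q.shape[1], slice_size):
        s = slice(i, min(i + slice_size, q.shape[1]))
        weights = torch.softmax(q[:, s] @ k.transpose(-1, -2) * scale, dim=-1)
        out[:, s] = weights @ v  # only a slice of the attention map is live
    return out
```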
2022-10-11  rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have  (AUTOMATIC)  -1/+1
2022-10-11  Merge branch 'master' into hypernetwork-training  (AUTOMATIC)  -30/+93
2022-10-11  Comma backtrack padding (#2192)  (hentailord85ez)  -1/+18
            Comma backtrack padding
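Comma backtrack padding keeps phrases intact when a prompt is split into 75-token chunks: if a chunk boundary falls shortly after a comma, the split is moved back to that comma and the gap is padded, so the phrase starts fresh in the next chunk. A hedged sketch; the constants and names are assumptions, not PR #2192's exact code.

```python
CHUNK = 75
BACKTRACK_LIMIT = 20  # only backtrack if the last comma is this close to the end


def split_with_comma_backtrack(tokens, comma_token, pad_token):
    chunks, current = [], []
    last_comma = -1
    for tok in tokens:
        current.append(tok)
        if tok == comma_token:
            last_comma = len(current) - 1
        if len(current) == CHUNK:
            carry = []
            if CHUNK - BACKTRACK_LIMIT <= last_comma < CHUNK - 1:
                # move everything after the comma into the next chunk
                carry = current[last_comma + 1:]
                current = current[:last_comma + 1]
                current += [pad_token] * (CHUNK - len(current))
            chunks.append(current)
            current = carry
            last_comma = max((i for i, t in enumerate(current)
                              if t == comma_token), default=-1)
    if current:
        chunks.append(current + [pad_token] * (CHUNK - len(current)))
    return chunks
```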
2022-10-10  allow Pascal onwards  (C43H66N12O12S2)  -1/+1
2022-10-10  Add back in output hidden states parameter  (hentailord85ez)  -1/+1
2022-10-10  Pad beginning of textual inversion embedding  (hentailord85ez)  -0/+5
2022-10-10  Unlimited Token Works  (hentailord85ez)  -23/+46
            Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
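The working scheme encodes a long prompt chunk by chunk: each 75-token chunk gets its own BOS/EOS window through CLIP, and the per-chunk embeddings are concatenated along the token axis. A sketch assuming a Hugging Face CLIPTextModel-style interface; the names are illustrative, not the actual hijack code.

```python
import torch


def encode_long_prompt(clip, tokens, bos, eos, chunk=75):
    outputs = []
    for i in range(0, len(tokens), chunk):
        part = tokens[i:i + chunk]
        part = part + [eos] * (chunk - len(part))   # pad a short final chunk
        ids = torch.tensor([[bos] + part + [eos]])  # one 77-token window
        outputs.append(clip(input_ids=ids).last_hidden_state)
    return torch.cat(outputs, dim=1)  # (1, 77 * n_chunks, hidden_size)
```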
2022-10-09  Removed unnecessary tmp variable  (Fampai)  -4/+3
2022-10-09  Updated code for legibility  (Fampai)  -2/+5
2022-10-09  Optimized code for ignoring last CLIP layers  (Fampai)  -8/+4
2022-10-08  Added ability to ignore last n layers in FrozenCLIPEmbedder  (Fampai)  -2/+9
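Ignoring the last n layers (commonly called CLIP skip) means taking the hidden state n layers from the end instead of the encoder's final output, then re-applying the final layer norm. A hedged sketch assuming a Hugging Face CLIPTextModel-style interface.

```python
def encode_with_clip_skip(clip, ids, stop_at_last_layers=1):
    outputs = clip(input_ids=ids, output_hidden_states=True)
    if stop_at_last_layers > 1:
        # hidden state n layers from the end, then the final layer norm
        z = outputs.hidden_states[-stop_at_last_layers]
        z = clip.text_model.final_layer_norm(z)
    else:
        z = outputs.last_hidden_state
    return z
```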
2022-10-08  add --force-enable-xformers option and also add messages to console regarding cross attention optimizations  (AUTOMATIC)  -1/+5
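The flag name comes from the commit above; the wiring below is an assumed sketch of how such an option and its console messages could fit together, with a placeholder GPU check standing in for the real one.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--force-enable-xformers", action="store_true",
                    help="enable xformers attention even if the GPU check fails")
args = parser.parse_args()


def gpu_supports_xformers():
    return False  # placeholder; the real check inspects the CUDA device


def apply_optimizations():
    if args.force_enable_xformers or gpu_supports_xformers():
        print("Applying xformers cross attention optimization.")
        # ...swap in the xformers attention forward here...
    else:
        print("Applying cross attention optimization.")
```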
2022-10-08  check for Ampere without destroying the optimizations. again.  (C43H66N12O12S2)  -4/+3