author | d8ahazard <d8ahazard@gmail.com> | 2022-09-29 22:46:23 +0000
committer | d8ahazard <d8ahazard@gmail.com> | 2022-09-29 22:46:23 +0000
commit | 0dce0df1ee63b2f158805c1a1f1a3743cc4a104b (patch)
tree | dfcec33656d06835e71961b117b63e510cb9bff2 /modules/sd_samplers.py
parent | 31ad536c331df14dd785bfd2a1f93f91a8f7839e (diff)
Holy $hit.
Yep.
Fix gfpgan_model_arch requirement(s).
Add Upscaler base class, moved out of images.py.
Add a lot of methods to Upscaler.
Re-work all the child upscalers to be proper classes (see the base-class sketch below).
Add BSRGAN scaler.
Add ldsr_model_arch class, removing the dependency on another repo that just uses regular latent-diffusion stuff.
Add one universal method that will always find and load new upscaler models without having to add new "setup_model" calls (a discovery sketch follows below). Still need to add command-line params, but that could probably be automated.
Add a "self.scale" property to all Upscalers so the scalers themselves can do "things" in response to the requested upscaling size.
Ensure LDSR doesn't get stuck in a long loop of "upscale/downscale/upscale" as we try to reach the target upscale size (see the loop-guard sketch below).
Add typehints for IDE sanity.
PEP-8 improvements.
Moar.
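
The Upscaler base class and proper child classes described above could be shaped roughly like the sketch below. All names, signatures, and the PIL-based types are illustrative assumptions, not the repo's exact API.

```python
# Minimal sketch of an Upscaler base class with proper child classes.
# All names and signatures here are illustrative assumptions.
from abc import ABC, abstractmethod
from PIL import Image


class Upscaler(ABC):
    def __init__(self, name: str, model_path: str = None):
        self.name = name
        self.model_path = model_path
        # Set per request, so child scalers can react to the target size.
        self.scale: int = 4

    @abstractmethod
    def load_model(self, path: str):
        """Load network weights; each child knows its own architecture."""

    @abstractmethod
    def do_upscale(self, img: Image.Image, selected_model: str) -> Image.Image:
        """Run one upscaling pass; each child wraps its own model."""


class UpscalerBSRGAN(Upscaler):
    # A child scaler only supplies loading and a single pass; shared
    # behavior (target sizing, final resize) stays in the base class.
    def load_model(self, path: str):
        self.model = None  # placeholder: deserialize BSRGAN weights from `path`

    def do_upscale(self, img: Image.Image, selected_model: str) -> Image.Image:
        return img  # placeholder: feed `img` through the BSRGAN network
```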
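The universal find-and-load method can be as small as a recursive directory scan, so a new model dropped into the folder is picked up with no new "setup_model" call. `find_models` and its extension tuple are hypothetical stand-ins:

```python
# Hedged sketch of universal model discovery: scan a directory tree once,
# so dropping in a new model file needs no extra setup call.
# find_models and the extension tuple are assumptions for illustration.
import os


def find_models(model_dir: str, extensions=(".pth", ".ckpt")) -> list:
    """Return every model checkpoint found under model_dir, recursively."""
    found = []
    for root, _, files in os.walk(model_dir):
        for name in files:
            if name.lower().endswith(extensions):
                found.append(os.path.join(root, name))
    return sorted(found)
```

Each upscaler would then call something like `find_models(self.model_path)` at setup time and register whatever it finds.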
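The LDSR loop guard amounts to capping the number of fixed-factor passes and finishing with a single downscale, never cycling between the two. `ldsr_pass` is a hypothetical stand-in for one 4x LDSR pass:

```python
# Sketch of the loop guard: upscale until the target is met, with a hard
# cap on passes, then downscale once at the end if we overshot.
# ldsr_pass is a hypothetical stand-in for a single 4x LDSR pass.
from PIL import Image


def upscale_to_target(img: Image.Image, target_w: int, target_h: int,
                      ldsr_pass, max_passes: int = 2) -> Image.Image:
    for _ in range(max_passes):
        if img.width >= target_w and img.height >= target_h:
            break  # target reached; stop instead of cycling
        img = ldsr_pass(img)
    if (img.width, img.height) != (target_w, target_h):
        img = img.resize((target_w, target_h), resample=Image.LANCZOS)
    return img
```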
Diffstat (limited to 'modules/sd_samplers.py')
-rw-r--r-- | modules/sd_samplers.py | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/modules/sd_samplers.py b/modules/sd_samplers.py
index 666ee1ee..cfc3ee40 100644
--- a/modules/sd_samplers.py
+++ b/modules/sd_samplers.py
@@ -154,9 +154,9 @@ class VanillaStableDiffusionSampler:
         # existing code fails with certain step counts, like 9
         try:
-            samples_ddim, _ = self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=p.ddim_eta)
+            samples_ddim, _ = self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_t=x, eta=p.ddim_eta)
         except Exception:
-            samples_ddim, _ = self.sampler.sample(S=steps+1, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=p.ddim_eta)
+            samples_ddim, _ = self.sampler.sample(S=steps+1, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_t=x, eta=p.ddim_eta)
         return samples_ddim