Age | Commit message | Author | Lines
---|---|---|---
2023-01-30 | make the program read Eta and Eta DDIM from generation parameters | AUTOMATIC | -1/+0
2023-01-29 | Extra network in hr abomination fix | invincibledude | -1/+2
2023-01-29 | Extra networks loading fix | invincibledude | -10/+3
2023-01-29 | Extra networks loading fix | invincibledude | -2/+2
2023-01-29 | Extra networks loading fix | invincibledude | -2/+14
2023-01-29 | Merge branch 'master' into master | InvincibleDude | -7/+26
2023-01-29 | remove Batch size and Batch pos from textinfo (goodbye) | AUTOMATIC | -2/+0
2023-01-28 | Merge pull request #7309 from brkirch/fix-embeddings<br>Fix embeddings, upscalers, and refactor `--upcast-sampling` | AUTOMATIC1111 | -7/+8
2023-01-28 | Refactor conditional casting, fix upscalers | brkirch | -7/+8
2023-01-27 | add data-dir flag and set all user data directories based on it | Max Audron | -1/+2
2023-01-26 | add an option to enable sections from extras tab in txt2img/img2img<br>fix some style inconsistenices | AUTOMATIC | -1/+6
2023-01-26 | Fix full previews, --no-half-vae | brkirch | -4/+4
2023-01-25 | add edit_image_conditioning from my earlier edits in case there's an attempt to inegrate pix2pix properly<br>this allows to use pix2pix model in img2img though it won't work well this way | AUTOMATIC | -1/+9
2023-01-25 | Merge pull request #6510 from brkirch/unet16-upcast-precision<br>Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half | AUTOMATIC1111 | -7/+8
2023-01-25 | change to code for live preview fix on OSX to be bit more obvious | AUTOMATIC | -2/+2
2023-01-25 | Add UI setting for upcasting attention to float32<br>Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers. In order to make upcasting cross attention layer optimizations possible it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, unfortunately my findings were that most of the cross attention layer optimizations could not function unless v is upcast also. | brkirch | -1/+1
2023-01-25 | Add option for float32 sampling with float16 UNet<br>This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS) so in sd_models.py model.depth_model is removed for model.half(). | brkirch | -7/+8
2023-01-24 | Merge branch 'AUTOMATIC1111:master' into master | InvincibleDude | -2/+6
2023-01-23 | Fix different first gen with Approx NN previews<br>The loading of the model for approx nn live previews can change the internal state of PyTorch, resulting in a different image. This can be avoided by preloading the approx nn model in advance. | brkirch | -1/+5
2023-01-22 | Gen params paste improvement | invincibledude | -2/+2
2023-01-22 | Gen params paste improvement | invincibledude | -2/+2
2023-01-22 | UI and PNG info improvements | invincibledude | -2/+2
2023-01-22 | UI and PNG info improvements | invincibledude | -0/+3
2023-01-22 | hr conditioning | invincibledude | -1/+1
2023-01-22 | hr conditioning | invincibledude | -2/+2
2023-01-22 | hr conditioning | invincibledude | -7/+12
2023-01-22 | hr conditioning | invincibledude | -4/+5
2023-01-22 | hr conditioning | invincibledude | -21/+13
2023-01-22 | hr conditioning | invincibledude | -26/+46
2023-01-22 | Hr-fix separate prompt experimentation | invincibledude | -21/+22
2023-01-22 | Logging for debugging | invincibledude | -0/+3
2023-01-22 | Fix | invincibledude | -1/+1
2023-01-22 | Hr separate prompt test | invincibledude | -1/+22
2023-01-22 | PLMS edge-case handling fix 5 | invincibledude | -2/+0
2023-01-22 | PLMS edge-case handling fix 3 | invincibledude | -2/+2
2023-01-22 | PLMS edge-case handling fix 2 | invincibledude | -2/+6
2023-01-22 | PLMS edge-case handling fix | invincibledude | -1/+1
2023-01-22 | enable compact view for train tab<br>prevent previews from ruining hypernetwork training | AUTOMATIC | -2/+6
2023-01-21 | Type mismatch fix | invincibledude | -2/+2
2023-01-21 | First test of different sampler for hi-res fix | invincibledude | -1/+6
2023-01-21 | extract extra network data from prompt earlier | AUTOMATIC | -2/+2
2023-01-21 | make it so that extra networks are not removed from infotext | AUTOMATIC | -1/+3
2023-01-21 | extra networks UI<br>rework of hypernets: rather than via settings, hypernets are added directly to prompt as `<hypernet:name:weight>` | AUTOMATIC | -11/+13
2023-01-18 | Merge pull request #6854 from EllangoK/master<br>Saves Extra Generation Parameters to params.txt | AUTOMATIC1111 | -4/+4
2023-01-18 | use DDIM in hires fix is the sampler is PLMS | AUTOMATIC | -1/+2
2023-01-17 | Changed params.txt save to after manual init call | EllangoK | -4/+4
2023-01-16 | make StableDiffusionProcessing class not hold a reference to shared.sd_model object | AUTOMATIC | -4/+5
2023-01-16 | Add a check and explanation for tensor with all NaNs. | AUTOMATIC | -0/+3
2023-01-14 | change hypernets to use sha256 hashes | AUTOMATIC | -1/+1
2023-01-12 | Fix extension parameters not being saved to last used parameters | space-nuko | -4/+4
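The 2023-01-25 entries describe upcasting the cross-attention q, k, and v tensors to float32 inside a context manager that disables autocast, so the intermediates are not silently cast back to float16. The sketch below is a minimal illustration of that idea only; `upcast_cross_attention` is a hypothetical helper name, not the actual function in sd_hijack_optimizations.py:

```python
import torch

def upcast_cross_attention(q, k, v):
    # Hypothetical sketch of the upcast described in the log: cast q, k,
    # and v to float32 before the attention matmuls (the commit notes that
    # upcasting only q and k was not enough for most optimizations), with
    # autocast disabled so results stay in float32 until the final cast.
    orig_dtype = q.dtype
    with torch.autocast(device_type="cpu", enabled=False):
        q, k, v = q.float(), k.float(), v.float()
        scale = q.shape[-1] ** -0.5
        attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
        out = attn @ v
    # Cast back to the caller's dtype (e.g. float16 under --upcast-sampling)
    return out.to(orig_dtype)
```

The real change additionally reindents several optimization paths so this context manager wraps each of them; the sketch keeps only the dtype handling.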