path: root/modules/gfpgan_model.py
Commit history (message, author, date, files changed, lines -/+), newest first:
* Be more clear about Spandrel model nomenclature (Aarni Koskela, 2023-12-30; 1 file changed, -4/+6)
* Verify architecture for loaded Spandrel models (Aarni Koskela, 2023-12-30; 1 file changed, -0/+1)
* Unify CodeFormer and GFPGAN restoration backends, use Spandrel for GFPGAN (Aarni Koskela, 2023-12-30; 1 file changed, -112/+54)
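
  A hedged sketch of what loading a checkpoint through Spandrel and verifying its
  architecture can look like. The exact Spandrel attributes used here (ModelLoader,
  descriptor.architecture, descriptor.model) vary by version and are assumptions,
  not the webui implementation.

      import spandrel

      def load_face_restorer(model_path: str, expected: str = "GFPGAN"):
          # Spandrel inspects the state dict and reports which architecture it is.
          descriptor = spandrel.ModelLoader().load_from_file(model_path)
          arch = str(descriptor.architecture)  # string or object, depending on version
          if expected not in arch:
              raise ValueError(f"{model_path} is not a {expected} model (got {arch})")
          return descriptor.model.eval()
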
* Use Spandrel for upscaling and face restoration architectures (aside from GFPGAN and LDSR) (Aarni Koskela, 2023-12-30; 1 file changed, -6/+7)
* fix "blank line contains whitespace" lint warning (avantcontra, 2023-10-21; 1 file changed, -1/+1)
* fix bug when using --gfpgan-models-path (avantcontra, 2023-10-21; 1 file changed, -5/+20)
* Merge pull request #10823 from akx/model-loady: Upscaler model loading cleanup (AUTOMATIC1111, 2023-06-27; 1 file changed, -1/+1)
* Fix up `if "http" in ...:` checks to use more sensible startswith checks (Aarni Koskela, 2023-06-13; 1 file changed, -1/+1)
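
  A small sketch of the distinction this commit points at: a substring check like
  '"http" in path' also matches local file names, while a startswith check only
  matches real URL prefixes. The helper name below is illustrative.

      def is_url(path: str) -> bool:
          # '"http" in path' would also match e.g. "models/GFPGAN/http_mirror.pth"
          return path.startswith(("http://", "https://"))

      assert is_url("https://example.com/GFPGANv1.4.pth")
      assert not is_url("models/GFPGAN/http_mirror.pth")
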
* Use os.makedirs(..., exist_ok=True) (Aarni Koskela, 2023-06-13; 1 file changed, -4/+1)
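
  A minimal before/after sketch of the cleanup this commit applies; the directory
  path is a placeholder.

      import os

      model_dir = "models/GFPGAN"  # placeholder path

      # Before: check-then-create, several lines and a race window.
      if not os.path.exists(model_dir):
          os.makedirs(model_dir)

      # After: one call that is a no-op when the directory already exists.
      os.makedirs(model_dir, exist_ok=True)
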
* rename print_error to report, use it together with the package name (AUTOMATIC, 2023-05-31; 1 file changed, -3/+2)
* Add & use modules.errors.print_error where exception info is currently printed by hand (Aarni Koskela, 2023-05-29; 1 file changed, -4/+2)
* F401 (unused import) fixes for ruff (AUTOMATIC, 2023-05-10; 1 file changed, -1/+1)
* add data-dir flag and set all user data directories based on it (Max Audron, 2023-01-27; 1 file changed, -3/+2)
* Set device for facelib/facexlib and gfpgan (brkirch, 2022-11-12; 1 file changed, -1/+3)
  - FaceXLib/FaceLib doesn't pass the device argument to RetinaFace but instead chooses one itself and sets it as a global; to use a device other than its internally chosen default, the default value must be replaced manually.
  - The GFPGAN constructor needs the device argument to work with MPS or with a CUDA device ID that differs from the default.
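
  A hedged sketch of the second point above: pass the target device to the GFPGAN
  constructor explicitly. It assumes the gfpgan package's GFPGANer accepts a
  `device` argument; the facexlib default still has to be patched separately, as
  the commit message notes, and that part is omitted here.

      import torch
      from gfpgan import GFPGANer

      device = torch.device("mps")  # or e.g. torch.device("cuda:1")

      gfpgan = GFPGANer(
          model_path="models/GFPGAN/GFPGANv1.4.pth",  # placeholder path
          upscale=1,
          arch="clean",
          channel_multiplier=2,
          bg_upsampler=None,
          device=device,  # without this, GFPGAN picks its own default device
      )
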
* Merge branch 'master' into cpu-cmdline-opt (brkirch, 2022-10-04; 1 file changed, -3/+13)
* send all three of GFPGAN's and CodeFormer's models to CPU memory instead of just one, for #1283 (AUTOMATIC, 2022-10-04; 1 file changed, -2/+12)
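
  A generic sketch of the pattern: keep restoration models parked in CPU memory
  and move one onto the working device only while it runs. Names are illustrative,
  not the webui functions.

      import contextlib
      import torch

      @contextlib.contextmanager
      def on_device(model: torch.nn.Module, device: torch.device):
          model.to(device)
          try:
              yield model
          finally:
              model.to("cpu")  # park it back in CPU memory
              if torch.cuda.is_available():
                  torch.cuda.empty_cache()

      # Usage: with on_device(gfpgan_model, torch.device("cuda")) as m: ...
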
* Merge branch 'master' into master (brkirch, 2022-10-04; 1 file changed, -5/+1)
* use existing function for gfpgan (AUTOMATIC, 2022-10-03; 1 file changed, -5/+1)
* When device is MPS, use CPU for GFPGAN instead (brkirch, 2022-10-01; 1 file changed, -3/+3)
  GFPGAN will not work if the device is MPS, so default to CPU instead.
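
  A one-line sketch of the fallback described here; the helper around it is
  assumed, not the actual webui device-selection code.

      import torch

      def gfpgan_device(preferred: torch.device) -> torch.device:
          # GFPGAN did not work on MPS at the time, so fall back to the CPU there.
          return torch.device("cpu") if preferred.type == "mps" else preferred
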
* remove unwanted formatting/functionality from the PR (AUTOMATIC, 2022-09-30; 1 file changed, -9/+3)
* Holy $hit. (d8ahazard, 2022-09-29; 1 file changed, -18/+40)
  Yep.
  - Fix gfpgan_model_arch requirement(s).
  - Add Upscaler base class, moving it out of images; add a lot of methods to Upscaler.
  - Re-work all the child upscalers to be proper classes.
  - Add BSRGAN scaler.
  - Add ldsr_model_arch class, removing the dependency on another repo that just uses regular latent-diffusion stuff.
  - Add one universal method that always finds and loads new upscaler models without needing new "setup_model" calls. Command line params still need to be added, but that could probably be automated.
  - Add a "self.scale" property to all Upscalers so the scalers themselves can react to the requested upscaling size.
  - Ensure LDSR doesn't get stuck in a long "upscale/downscale/upscale" loop while trying to reach the target upscale size.
  - Add type hints for IDE sanity.
  - PEP 8 improvements.
  - Moar.
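
  A rough, illustrative sketch of the kind of base class described above: a shared
  Upscaler that remembers the requested scale and discovers model files in one
  place. Class and method names are made up for illustration, not the actual webui
  classes.

      import os
      from abc import ABC, abstractmethod

      from PIL import Image

      class UpscalerBase(ABC):
          def __init__(self, model_dir: str, scale: int = 4):
              self.model_dir = model_dir
              self.scale = scale  # subclasses can react to the requested size

          def find_models(self, ext: str = ".pth") -> list:
              # One shared discovery method instead of per-scaler setup_model calls.
              if not os.path.isdir(self.model_dir):
                  return []
              return [os.path.join(self.model_dir, name)
                      for name in sorted(os.listdir(self.model_dir))
                      if name.endswith(ext)]

          @abstractmethod
          def do_upscale(self, img: Image.Image, model_path: str) -> Image.Image:
              ...
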
* Re-implement universal model loading (d8ahazard, 2022-09-26; 1 file changed, -31/+29)
* gfpgan: just download the damn model (AUTOMATIC, 2022-09-23; 1 file changed, -6/+13)
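
  A hedged sketch of the download-on-first-use behaviour the commit title implies,
  using torch.hub's generic file downloader; URL and paths are placeholders and the
  real code may use a different helper.

      import os
      import torch

      def ensure_model(model_dir: str, url: str) -> str:
          os.makedirs(model_dir, exist_ok=True)
          dst = os.path.join(model_dir, os.path.basename(url))
          if not os.path.isfile(dst):
              torch.hub.download_url_to_file(url, dst, progress=True)
          return dst
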
* Instance of CUDA out of memory on a low-res batch, even with --opt-split-attention-v1 (found cause) #255 (AUTOMATIC, 2022-09-12; 1 file changed, -5/+10)
* codeformer support (AUTOMATIC, 2022-09-07; 1 file changed, -2/+18)
* option to unload GFPGAN after using (AUTOMATIC, 2022-09-03; 1 file changed, -3/+12)
* split codebase into multiple files; to anyone this affects negatively: sorry (AUTOMATIC, 2022-09-03; 1 file changed, -0/+58)