| author | Kohaku-Blueleaf <59680068+KohakuBlueleaf@users.noreply.github.com> | 2023-10-23 17:49:05 +0000 |
|---|---|---|
| committer | Kohaku-Blueleaf <59680068+KohakuBlueleaf@users.noreply.github.com> | 2023-10-23 17:49:05 +0000 |
| commit | eaa9f5162fbca2ebcb2682eb861bc7e5510a2b66 (patch) | |
| tree | f8bf60786db8d42a0a0e85deb56c885780bda654 /modules/extras.py | |
| parent | 5f9ddfa46f28ca2aa9e0bd832f6bbd67069be63e (diff) | |
Add CPU fp8 support
Since norm layers need fp32, I only convert the linear operation layers (Conv2d/Linear).
The TE also uses some PyTorch functions that do not support bf16 amp on CPU, so I added a condition to indicate whether the autocast is for the unet.
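
Below is a minimal sketch of that approach, not the actual webui code: only `nn.Linear`/`nn.Conv2d` weights are stored in fp8 while norm layers stay in fp32, and a CPU bf16 autocast is enabled only when it wraps the unet. The function names and the `for_unet` flag are illustrative assumptions.

```python
import contextlib

import torch
import torch.nn as nn


def convert_linear_ops_to_fp8(model: nn.Module) -> nn.Module:
    """Cast only the linear operation layers to fp8 storage (PyTorch >= 2.1)."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            # Weights are stored in fp8; they still need to be cast back to a
            # compute dtype (fp16/bf16/fp32) before the actual matmul runs.
            module.to(torch.float8_e4m3fn)
    return model


def autocast_for(for_unet: bool = False):
    """bf16 autocast on CPU only for the unet; the TE path runs without amp
    because some of its ops lack bf16 CPU support."""
    if for_unet:
        return torch.autocast("cpu", dtype=torch.bfloat16)
    return contextlib.nullcontext()
```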
Diffstat (limited to 'modules/extras.py')
0 files changed, 0 insertions, 0 deletions