A light-weight FaceRender supporting more than 100x faster rendering on macOS. #457
Replies: 28 comments 27 replies
-
fatal: couldn't find remote ref pull/458/head
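For reference, a sketch of the usual GitHub pattern for fetching a PR head into a local branch. This error typically means the remote you fetched from does not host PR #458 (for example, `origin` points at a fork rather than the upstream repository); the remote and branch names below are the ones used elsewhere in this thread:

```shell
# Sketch: fetch PR 458's head into a local branch named "pirender".
# "couldn't find remote ref pull/458/head" usually means the remote
# does not host that PR (e.g. it points at a fork, not upstream).
PR=458
BRANCH=pirender
REFSPEC="pull/${PR}/head:${BRANCH}"
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git fetch origin "${REFSPEC}" && git checkout "${BRANCH}"
else
  echo "run this inside the SadTalker clone"
fi
```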
-
I noticed a small issue. When using the "--facerender pirender --size 512" mode, the images are heavily distorted. However, when using size 256, the images appear normal.
-
WOW! Impressive work... I am running Windows on an AMD Ryzen 5000 series laptop with a small 2 GB Nvidia GPU (weak laptop), and I can confirm this works 10X faster for the image rendering. It did take the GPU right up to 100%. A quick test on my standard 9-second benchmark file showed 36 mins with facevid2vid, while pirender came in under 4 mins... Nice-looking interface in Gradio too!
-
Hi Vinthony, how are you, bro? I need one more bit of help: I've tried, but I'm not getting big movements with the lips; the mouth is half open most of the time. Can you help me with a better setup to fix this? Thank you in advance once again.
-
Hi! The image passed as input has closed lips, no teeth visible; the same image passed to the earlier model, which used facevid2vid, gives good results with teeth. Thank you.
-
On a MacBook Pro running Ventura 13.4, Python crashes with the following message:
-
Finally got things sort of running on my M1 Max with macOS 13.4.1, torch==2.1.0.dev20230710, torchvision==0.16.0.dev20230710. The input image was 512x512 in both cases. Using the 256 model: Using the 512 model: Additional changes made to the sauce: I had to make the following changes in addition to this PR:
Change the file
-
Dear @vinthony, thanks a ton for coming up with this awesome model for facerender enhancement; this is really a game changer for me on a Mac M1 Pro chip, and huge thanks to @jjmlovesgit as well for introducing me to this page 🙏🏻 Currently, the facerender is much faster than before; however, the Face Enhancer step now seems to be taking way too long to complete. Is there any specific model/weight or steps you would recommend I try? Any advice would be greatly appreciated!!
-
I think SadTalker is of excellent quality. Thank you very much for releasing this wonderful library. Currently I have the normal SadTalker extension working in my local environment with the Stable Diffusion web-ui, but the light-weight FaceRender is not working!
Sorry for the rudimentary question. Best regards.
-
I am running Windows on a 12th-gen Intel i5 laptop with a small 4 GB Nvidia GPU. I am getting a CUDA out-of-memory error, and it did not take the GPU right up to 100%. Can someone help me?
-
Please make sure you have the correct access rights and that the repository exists.
-
Hi @vinthony, the license of PIRender is CC-BY-NC; will this cause issues with the Apache 2.0 license under which this project is licensed? Will this mean that, in this mode, the project can no longer be used for commercial purposes?
-
As of today, the facerender option just disappeared from the webui and rendering is extremely slow again. `git status` confirmed that I'm on the pirender branch. What could be the problem?
-
Hey, I was wondering if there's a cost-efficient API for this somewhere?
-
Dear @vinthony, you've done an amazing job with SadTalker. I've been creating real-time avatar assistants since 2005 (Virtual Assistant Denise), and AI has improved everything. I also understand the complexity of a project such as this. As an Open Source project, yours is getting really close to the big players, and I'm sure they are following your progress. Please add to your wish list a very light, real-time version. This will be a game-changer for us! It's not by chance that they are charging USD 1.00/minute to use their real-time APIs. As a note related to this pull request, I've tested it on several Windows/Nvidia computers with the same results. The face render using pirender is extremely fast, but for some reason, as already noticed here, it does not show the avatar's teeth. When using the facevid2vid option with GFPGAN, it takes longer compared to the current regular build, and the resulting video is blurry, as if we hadn't used the face enhancer. I hope you find time to keep working on this project, as I'm sure there are many people like me wishing you luck.
-
Can the face enhancer use the GPU on a Mac M1? It runs very slowly. @vinthony
-
This is really good! I'd suggest changing the title and description, though. I am not sure why you said this feature was meant only to speed things up on macOS; I just tested it on WSL (Ubuntu) under Windows, and it worked very fast too.
-
Mac with Intel CPU, using mps: the output is blurry. Is there any good solution?
-
Thank you for the software improvement. This is my usage experience, and I want to mention that I'm using it as an extension in Automatic1111 due to installation problems. Again, thank you; I hope my experience helps your work.
-
@sahreen-haider,
-
I'm using an Apple Silicon M2 Pro and have an error showing this:
Nearly every time FaceRender reaches 75%, it stops working. The error occurred when using the Stable Diffusion extension mode, but I succeeded in command-line mode. Any ideas?
-
We can check out the branch from Colab.
-
This is awesome! Big thanks for the faster face render. You just made our AI project twice as fast ;) You rock!
-
Hey guys, I am very new to coding and AI tools, but I am learning. I am currently using SadTalker as an extension of the Stable Diffusion webui. I want to use pirender in SadTalker, but I could not complete the installation. I couldn't do the "checkout to the new branch" section; when I try, the terminal reports an error. I also could not do the pirender installation section: I don't know which files I should extract or how to run it. My PC is a Mac M3. If any of you succeeded with the installation, please explain briefly. Thanks.
-
First of all, this is awesome - so FAST, thank you! Example HTML page with the generated video:
-
Amazing speed, so fast!!!
-
Can this be merged into the current version? I looked, and it seems like it may need to be re-merged. Also, is this planned to be merged officially at some point? I really need this for my Mac and can't see the program as useful for me without it, since it is so slow otherwise on mps. The mps support would be very much appreciated.
-
I was able to run SadTalker using Automatic1111 as an extension. However, it is only using the CPU on my Mac Silicon M3.
git fetch origin pull/458/head:pirender
fatal: not a git repository (or any of the parent directories): .git
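The `fatal: not a git repository` error means the fetch was run from outside the SadTalker checkout. A minimal sketch, assuming the Automatic1111 extension layout (the `EXT_DIR` path below is an assumption; adjust it to wherever SadTalker is actually cloned):

```shell
# Sketch: run the PR fetch from inside the SadTalker clone.
# EXT_DIR is an assumed Automatic1111 extension path; adjust as needed.
EXT_DIR="stable-diffusion-webui/extensions/SadTalker"
if [ -d "${EXT_DIR}/.git" ]; then
  cd "${EXT_DIR}"
  git fetch origin pull/458/head:pirender && git checkout pirender
else
  echo "no git checkout at ${EXT_DIR}; adjust the path to your clone"
fi
```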
-
Hi all,
I created a new pull request (#458) to include a new facerender for FAST inference.
CONV3D is not optimized on macOS (device == `mps`), which makes generation on macOS very slow. Here, I integrate the weights/model of https://github.com/RenYurui/PIRender as an alternative facerender for generation.
This facerender is much faster than our previous one, but with (slightly) different quality (I have not tested the visual quality of the different facerenders due to time limitations), for a 3-second audio,
To use the new facerender:
Download the checkpoint from https://drive.google.com/file/d/1-0xOf6g58OmtKtEWJlU3VlnfRqPN9Uq7/view or from the PIRender GitHub page, and put the file `epoch_00190_iteration_000400000_checkpoint.pt` directly in the `checkpoints` folder as before.
For command-line users, a new option `--facerender` is introduced with the choices `facevid2vid` and `pirender`.
For WebUI/Gradio users, a new button is added.
This facerender can also run very fast on GPU; I have not tested that yet, but I expect the conclusion to be the same.
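A minimal sketch of the steps above, assuming the commands are run from the SadTalker repository root; the input file names are illustrative assumptions, and only the `--facerender` flag and its two values come from this PR:

```shell
# Sketch: run SadTalker with the new PIRender facerender.
# Only --facerender {facevid2vid,pirender} is introduced by this PR;
# the input file names here are illustrative assumptions.
CKPT="checkpoints/epoch_00190_iteration_000400000_checkpoint.pt"
if [ ! -f "${CKPT}" ]; then
  echo "download the PIRender checkpoint into checkpoints/ first"
elif [ -f inference.py ]; then
  python inference.py \
    --source_image input.png \
    --driven_audio input.wav \
    --facerender pirender
else
  echo "run this from the SadTalker repository root"
fi
```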