Reduce jitter #63
The multi-segmenter model seems to be more stable than MLKit or MEET. You can see my attempt in this CodePen demo, where I try to use a 15x15 blur kernel to apply a Gaussian blur: https://codepen.io/edamlmmv/pen/PoLYmGG. I've also tried implementing backgroundBlurStage into my demo. My understanding of WebGL is limited, and I was wondering if you could give me some cues on how to advance my implementation. I've tried using the model in Volcomix's demo, but the buffer overflows and I cannot build the TFLite tool. Also, Google's ImageSegmenter now uses WebGPU, and it seems on par with or faster than using our own WASM functions.
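For context, a minimal sketch of what a single-pass 15x15 background-blur fragment shader could look like in a WebGL2 pipeline; the uniform names (u_inputFrame, u_personMask, u_texelSize) are illustrative assumptions, and a production implementation would normally split the blur into two separable 1D passes:

```ts
// Illustrative fragment shader: Gaussian-blur the background, keep the person
// sharp by mixing with the original frame using the segmentation mask.
const blurFragmentShaderSource = `#version 300 es
precision highp float;

uniform sampler2D u_inputFrame;   // current video frame
uniform sampler2D u_personMask;   // segmentation mask, 1.0 = person
uniform vec2 u_texelSize;         // 1.0 / (width, height)

in vec2 v_texCoord;
out vec4 outColor;

void main() {
  vec3 sum = vec3(0.0);
  float totalWeight = 0.0;
  // 15x15 neighborhood with Gaussian weights (sigma = 3.5, so 2*sigma^2 = 24.5)
  for (int x = -7; x <= 7; x++) {
    for (int y = -7; y <= 7; y++) {
      vec2 offset = vec2(float(x), float(y)) * u_texelSize;
      float weight = exp(-float(x * x + y * y) / 24.5);
      sum += texture(u_inputFrame, v_texCoord + offset).rgb * weight;
      totalWeight += weight;
    }
  }
  vec3 blurred = sum / totalWeight;
  vec3 original = texture(u_inputFrame, v_texCoord).rgb;
  float person = texture(u_personMask, v_texCoord).a;
  // Blur only where the mask says "background"
  outColor = vec4(mix(blurred, original, person), 1.0);
}`;
```

A single pass like this samples 225 texels per fragment; two 1D passes (15 horizontal, then 15 vertical) produce the same result with only 30 samples.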
Correction: it's almost working. https://codepen.io/edamlmmv/pen/xxBKqma

To bind the mask, use:

I think it's because I am missing the loadSegmentationStage.
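In case it helps, a hypothetical sketch of binding the mask as a second texture before drawing; the uniform name u_personMask and the texture-unit layout are assumptions, not necessarily what this repo uses:

```ts
// Bind the segmentation mask to texture unit 1 and point the shader's
// sampler uniform at that unit (unit 0 is assumed to hold the video frame).
function bindMaskTexture(
  gl: WebGL2RenderingContext,
  program: WebGLProgram,
  maskTexture: WebGLTexture
): void {
  gl.useProgram(program);
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, maskTexture);
  const maskLocation = gl.getUniformLocation(program, 'u_personMask');
  gl.uniform1i(maskLocation, 1); // the sampler reads from texture unit 1
}
```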
There is jitter when using segmentation with a video or camera. Select one of the three videos in the demo and notice that the mask isn't stable.
Is there a way to reduce the jitter? Maybe by averaging the masks of consecutive frames, as in the sketch below?
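One way to do that is an exponential moving average over successive masks. A minimal CPU-side sketch in TypeScript, assuming a single-channel 0-255 mask buffer (the function and parameter names are illustrative, not from this repo):

```ts
// Blend each new mask into a running average: lower alpha means stronger
// smoothing (less jitter) but more ghosting during fast motion.
function smoothMask(
  currentMask: Uint8ClampedArray,
  runningAverage: Float32Array, // initialize from the first frame's mask
  alpha = 0.3
): Float32Array {
  for (let i = 0; i < currentMask.length; i++) {
    runningAverage[i] =
      alpha * currentMask[i] + (1 - alpha) * runningAverage[i];
  }
  return runningAverage;
}
```

The same blend can run on the GPU as mix(previousMask, currentMask, alpha) in a shader, which avoids reading the mask back from WebGL every frame.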
I've found this issue and this issue discussing the same problem in MediaPipe.
When I upload the Dance - 32938.mp4 video from the demo to the selfie_segmentation demo, the result looks much better.
Is there a way to improve the segmentation, or should we replace the segmentation step with the MediaPipe solution?
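If swapping is worth prototyping, a minimal sketch using the @mediapipe/selfie_segmentation JS package might look like this; the CDN path and the canvas compositing follow MediaPipe's published examples, while the page's video and canvas elements are assumed to exist:

```ts
import { Results, SelfieSegmentation } from '@mediapipe/selfie_segmentation';

const video = document.querySelector('video')!;
const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

const selfieSegmentation = new SelfieSegmentation({
  locateFile: (file) =>
    `https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation/${file}`,
});
selfieSegmentation.setOptions({ modelSelection: 1 }); // 1 = landscape model

selfieSegmentation.onResults((results: Results) => {
  // Use the mask as a stencil: keep only the person pixels from the frame.
  ctx.save();
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(results.segmentationMask, 0, 0, canvas.width, canvas.height);
  ctx.globalCompositeOperation = 'source-in';
  ctx.drawImage(results.image, 0, 0, canvas.width, canvas.height);
  ctx.restore();
});

async function loop() {
  await selfieSegmentation.send({ image: video });
  requestAnimationFrame(loop);
}
loop();
```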