
Hello, excellent work! If my primary focus is on depth accuracy and I'm working on dynamic scenes, would it be feasible to omit the Gaussian splatting process in the later stages? #41

booker-max opened this issue Dec 23, 2024 · 2 comments


@booker-max

Hello, this is a fantastic idea!
I have a question: since I only care about depth-estimation performance and do not need novel view synthesis, I want to train at large scale on my own dataset.
My dataset primarily consists of dynamic scenes from autonomous-driving scenarios, which suggests that Gaussian splatting may not be effective for further improving the depth estimation model.
Instead, I plan to use only your proposed mono-MVS fusion strategy for this task.
I suspect this approach would handle dynamic objects well due to the injection of monocular depth features.
Do you think this is feasible? Or do you have other suggestions?
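As a rough illustration of the fusion strategy mentioned above, the sketch below shows one common way monocular features can be injected into multi-view (cost-volume) features: channel-wise concatenation followed by a learned 1x1 projection. This is not the repository's actual implementation; all names, shapes, and the plain-matrix "projection" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 4, 4  # toy channel count and feature-map size

def fuse_mono_mvs(mono_feat, mvs_feat, weight, bias):
    """Concatenate monocular and MVS features along the channel axis,
    then project back to C channels with a 1x1-style matmul."""
    fused = np.concatenate([mono_feat, mvs_feat], axis=0)  # (2C, H, W)
    flat = fused.reshape(2 * C, H * W)                     # (2C, H*W)
    out = weight @ flat + bias[:, None]                    # (C, H*W)
    return out.reshape(C, H, W)

mono = rng.standard_normal((C, H, W))  # e.g. from a monocular depth backbone
mvs = rng.standard_normal((C, H, W))   # e.g. from a plane-sweep cost volume
weight = rng.standard_normal((C, 2 * C)) * 0.1  # stands in for a learned 1x1 conv
bias = np.zeros(C)

fused = fuse_mono_mvs(mono, mvs, weight, bias)
print(fused.shape)  # (8, 4, 4)
```

Because the monocular branch carries single-image depth cues that do not depend on cross-view consistency, a fusion of this shape is what makes the dynamic-object argument above plausible.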

@haofeixu
Member

haofeixu commented Jan 5, 2025

Yes, this is feasible and our depth model could also work for dynamic scenes thanks to the monocular features.
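If the splatting stage is dropped as discussed, training would presumably be supervised by a depth objective alone rather than a rendering loss. A minimal sketch (hypothetical; not taken from the repository) of such a depth-only objective, here a masked L1 loss:

```python
import numpy as np

def depth_only_loss(pred_depth, gt_depth, valid_mask):
    """Masked L1 depth loss: average absolute error over valid pixels,
    with no rendering / splatting term involved."""
    diff = np.abs(pred_depth - gt_depth)
    return float(diff[valid_mask].mean())

pred = np.array([[1.0, 2.0], [3.0, 4.0]])  # predicted depth (toy 2x2 map)
gt = np.array([[1.5, 2.0], [3.0, 5.0]])    # ground-truth depth
mask = gt > 0                              # valid ground-truth pixels

print(depth_only_loss(pred, gt, mask))  # 0.375
```

In practice one might prefer a scale-invariant or affine-invariant variant, since monocular depth cues are often defined only up to scale; the masked L1 form above is just the simplest stand-in.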

@booker-max
Author

> Yes, this is feasible and our depth model could also work for dynamic scenes thanks to the monocular features.

Thank you very much. A side question: if I use a fisheye camera, would this approach still work, given the monocular features?
