
Speeds of (GPU 8x, 14x and yolov4dense) running on desktop GPU (RTX2080Ti) are same #7

Closed
pham-cong-nguyen opened this issue Jan 11, 2021 · 5 comments
Labels
documentation Improvements or additions to documentation

Comments

@pham-cong-nguyen

I ran:
detect.py --weights 'weights/best14x-49.pt' --img-size 512 --> running time (11 ms on RTX 2080 Ti)
detect.py --weights 'weights/best8x-514.pt' --img-size 512 --> running time (11 ms on RTX 2080 Ti)
detect.py --weights 'weights/yolov4dense.pt' --img-size 512 --> running time (11 ms on RTX 2080 Ti)

But when I run check_compression.py, I see that the FLOPs of these weights still look good.

I just ran pip install -U -r requirements.txt, without the docker build.

So can you explain this problem to me?

@nightsnack
Owner

Yes, sure. Our work targets mobile devices and consists of two parts: pruning and the compiler. This GitHub repo covers only the pruning part. The pruned model needs the mobile compiler's support for inference acceleration, and the mobile compiler is not open sourced right now.
For the RTX 2080 Ti: because PyTorch (running on a desktop GPU) doesn't support sparse matrix computation, the inference speed of sparse models and dense models is the same.
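For anyone landing here, a minimal standalone sketch (not from this repo; the layer sizes and pruning ratio are made up) of why magnitude-pruned weights don't run faster in plain PyTorch on a desktop GPU: the zeros still live in a dense tensor, so the GPU kernels do the same amount of work.

```python
# Sketch only: zeroing out weights does not change dense-kernel runtime.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
conv = nn.Conv2d(256, 256, kernel_size=3, padding=1).to(device).eval()
x = torch.randn(1, 256, 64, 64, device=device)

def bench(module, n=100):
    # Warm up, then time n forward passes (ms per pass).
    with torch.no_grad():
        for _ in range(10):
            module(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(n):
            module(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.time() - start) / n * 1000.0

dense_ms = bench(conv)

# Zero out ~80% of the weights by magnitude; the tensor layout stays dense.
with torch.no_grad():
    w = conv.weight
    k = int(0.8 * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values
    conv.weight.mul_((w.abs() > threshold).float())

pruned_ms = bench(conv)
print(f"dense:  {dense_ms:.2f} ms/forward")
print(f"pruned: {pruned_ms:.2f} ms/forward  # roughly the same on a desktop GPU")
```

The FLOP count reported by a compression checker drops because it counts only nonzero weights, but the GPU still multiplies through the zeros unless a sparsity-aware compiler or runtime skips them.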

@nightsnack nightsnack changed the title from "Speed detect of Yolobile run with best8x-514.pt, best14x-49.pt and yolov4dense.pt is same" to "Speeds of (GPU 8x, 14x and yolov4dense) running on desktop GPU (RTX2080Ti) are same" Jan 11, 2021
@nightsnack nightsnack added the documentation label Jan 11, 2021
@nightsnack nightsnack pinned this issue Jan 11, 2021
@nightsnack
Owner

This is a fairly common question. I pinned your issue.

@pham-cong-nguyen
Author

Thanks for your answer. So I will wait for Nvidia's update.

@Wuqiman

Wuqiman commented Jan 12, 2021

The mobile compiler
Hi, may I ask whether the mobile compiler is based on MNN? And when will the mobile compiler be open sourced? Thank you very much~

@nightsnack
Owner

Hi @Wuqiman, the mobile compiler is not based on Alibaba MNN. Please refer to README.md. The compiler source code is associated with our collaborators at William & Mary and involves joint IP, so we cannot open source this part now. Sorry for the inconvenience.
