Integration of different Neural Network Optimizer #2694
jihwankimqd asked this question in Q&A

-
My understanding is that Autoware Universe uses a layered architecture in which individual modules can be separated or extended. Currently, some of the neural-network-related functionality in the Perception module has a CUDA dependency. I want to try implementing that functionality with a different optimizer, such as OpenVINO (as opposed to TensorRT). As of now, I can't find any guidelines in the docs on developing and testing this functionality. I would appreciate any guidance or help!
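
To make the idea more concrete, here is a rough sketch of the kind of abstraction I have in mind. None of these names exist in Autoware today; the `InferenceBackend` interface, the `Tensor` struct, and `make_backend` are hypothetical and only illustrate how a perception node could stay independent of the underlying optimizer:

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Minimal tensor container used only for this sketch.
struct Tensor
{
  std::vector<float> data;
  std::vector<int64_t> shape;
};

// Hypothetical backend-agnostic interface a perception node could depend on
// instead of calling TensorRT (and hence CUDA) directly.
class InferenceBackend
{
public:
  virtual ~InferenceBackend() = default;
  virtual bool load(const std::string & model_path) = 0;
  virtual bool infer(const std::vector<Tensor> & inputs, std::vector<Tensor> & outputs) = 0;
};

// An OpenVINO-based implementation would compile the model with ov::Core and run it
// through an ov::InferRequest, while a TensorRT-based one would keep the current
// behavior. The concrete backend could be chosen at runtime, e.g. via a ROS parameter,
// so that the node logic itself does not change.
std::unique_ptr<InferenceBackend> make_backend(const std::string & backend_name);
```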

-
We did have similar discussions in the past: https://discourse.ros.org/t/unified-ml-inference-in-autoware-a-proposal/14058. Based on that discussion, we decided to use TVM, so we are thinking of converting the TensorRT implementations into TVM implementations when we move the Universe packages into Core as a stable release. However, we don't have enough resources to do that, so they currently remain TensorRT-based.
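
For reference only, running a model through the TVM C++ runtime looks roughly like the sketch below. This is not code that exists in Autoware; the library path, input name, and tensor shape are placeholders for illustration:

```cpp
#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/ndarray.h>
#include <tvm/runtime/packed_func.h>

int main()
{
  // Load a library previously compiled with TVM (path is a placeholder).
  tvm::runtime::Module mod_factory =
    tvm::runtime::Module::LoadFromFile("compiled_model.so");

  // Create a graph executor module on the CPU.
  DLDevice dev{kDLCPU, 0};
  tvm::runtime::Module gmod = mod_factory.GetFunction("default")(dev);

  tvm::runtime::PackedFunc set_input = gmod.GetFunction("set_input");
  tvm::runtime::PackedFunc run = gmod.GetFunction("run");
  tvm::runtime::PackedFunc get_output = gmod.GetFunction("get_output");

  // Input name and shape are assumptions for illustration only.
  tvm::runtime::NDArray input = tvm::runtime::NDArray::Empty(
    {1, 3, 224, 224}, DLDataType{kDLFloat, 32, 1}, dev);
  set_input("input", input);
  run();
  tvm::runtime::NDArray output = get_output(0);
  return 0;
}
```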