
This page lists the typical timings we observe for the relevant pre-processing and GPU inference steps when deploying this code in a ROS node.

All times are in ms.

## Preprocessing

- Sample Dict: 20-22
  - Create BEV: 12-15
  - (optional) Ground plane estimation: <1
  - Anchors:
    - Create: <1 (pre-generated positions and sizes)
    - (optional) Filter: 8-9
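
One way to reproduce per-step measurements like these is to wrap each preprocessing call in a small wall-clock timer. The sketch below is a minimal example only; the commented-out calls use hypothetical function names (`build_sample_dict`, `create_bev_maps`, `filter_anchors`), not the actual AVOD preprocessing entry points.

```python
import time


def timed(label, fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and print its wall-clock time in milliseconds."""
    start = time.time()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.time() - start) * 1000.0
    print('{}: {:.1f} ms'.format(label, elapsed_ms))
    return result


# Hypothetical calls -- substitute the real AVOD preprocessing functions:
# sample_dict = timed('Sample dict', build_sample_dict, point_cloud, image)
# bev_maps = timed('Create BEV', create_bev_maps, point_cloud, ground_plane)
# anchors = timed('Filter anchors', filter_anchors, all_anchors, bev_maps)
```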

## GPU Inference

### AVOD

- Total: 60
  - RPN: 50
  - Second stage: 7.5
  - Python overhead: 2.5

### AVOD-FPN

- Total: 80
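
For reference, below is a minimal sketch of how these timings could be logged from a ROS node. The topic name, node name, and the `build_feed_dict`, `sess`, and `prediction_ops` identifiers are placeholders, not the actual AVOD ROS interface; in practice the TensorFlow session and prediction tensors would be built once at startup.

```python
import time

import rospy
from sensor_msgs.msg import PointCloud2


def cloud_callback(msg):
    # build_feed_dict, sess, and prediction_ops are hypothetical placeholders
    # for the preprocessing wrapper and the TensorFlow session/tensors.
    t0 = time.time()
    feed_dict = build_feed_dict(msg)                   # pre-processing (~20-22 ms above)
    t1 = time.time()
    predictions = sess.run(prediction_ops, feed_dict)  # RPN + second stage (~60 ms for AVOD)
    t2 = time.time()
    rospy.loginfo('preprocess: %.1f ms, inference: %.1f ms',
                  (t1 - t0) * 1000.0, (t2 - t1) * 1000.0)


if __name__ == '__main__':
    rospy.init_node('avod_timing_example')
    rospy.Subscriber('/velodyne_points', PointCloud2, cloud_callback)
    rospy.spin()
```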