Update notebook to use postprocessing_use_tensorrt_nms:bool instead of engine
BloodAxe committed Apr 19, 2024
1 parent a6cf4d3 commit 0a3c076
Showing 1 changed file with 11 additions and 12 deletions.
23 changes: 11 additions & 12 deletions src/super_gradients/examples/model_export/models_export.ipynb
@@ -2209,31 +2209,30 @@
"source": [
"### Supported backends\n",
"\n",
"Currently, we support two backends for exporting models:\n",
"Currently, model can be exported to ONNX format and be run on these inference engines:\n",
"\n",
"* ONNX Runtime\n",
"* TensorRT\n",
"* OpenVINO\n",
"\n",
"The only difference between these two backends is what NMS implementation will be used.\n",
"ONNX Runtime uses NMS implementation from ONNX opset, while TensorRT uses its own NMS implementation which is expected to be faster.\n",
"When model is exported with postprocessing enabled, the NMS module is attached to the ONNX graph.\n",
"There are two implementations of NMS that can be used: ONNX NMS and TensorRT NMS.\n",
"\n",
"A disadvantage of TensorRT backend is that you cannot run model exported for TensorRT backend by ONNX Runtime.\n",
"You can, however, run models exported for ONNX Runtime backend inside TensorRT.\n",
"A TensorRT NMS as named suggests is NMS implementation available exclusively for TensorRT runtime.\n",
"It is generally slightly faster than ONNX NMS. \n",
"A disadvantage of TensorRT NMS is that you cannot run model exported with this NMS on other inference engines like ONNX Runtime.\n",
"You can, however, run models exported with ONNX NMS inside TensorRT.\n",
"\n",
"Therefore, ONNX Runtime backend is recommended for most use-cases and is used by default.\n",
"\n",
"You can specify the desired execution backend by setting the `execution_backend` argument of `export()` method:\n",
"You can specify the desired NMS by setting the `postprocessing_use_tensorrt_nms_` argument of `export()` method:\n",
"\n",
"```python\n",
"from super_gradients.conversion import ExportTargetBackend\n",
"\n",
"model.export(..., engine=ExportTargetBackend.ONNXRUNTIME)\n",
"model.export(..., postprocessing_use_tensorrt_nms=False)\n",
"```\n",
"\n",
"```python\n",
"from super_gradients.conversion import ExportTargetBackend\n",
"\n",
"model.export(..., engine=ExportTargetBackend.TENSORRT)\n",
"model.export(..., postprocessing_use_tensorrt_nms=True)\n",
"```"
],
"metadata": {

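For context on the notebook cell changed above, here is a minimal sketch of consuming a model exported with ONNX NMS (`postprocessing_use_tensorrt_nms=False`) through ONNX Runtime. This example is not part of the commit; the file name `yolo_nas_s.onnx`, the dummy input, and the CPU provider are illustrative assumptions, and the exact input dtype and output layout depend on the export settings used.

```python
# Minimal sketch: run a model exported with postprocessing_use_tensorrt_nms=False.
# "yolo_nas_s.onnx" is a hypothetical path to such an export; adjust to your own file.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolo_nas_s.onnx", providers=["CPUExecutionProvider"])

# Inspect the graph input; dtype and shape depend on how the model was exported.
inp = session.get_inputs()[0]
np_dtype = np.uint8 if "uint8" in inp.type else np.float32
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # substitute 1 for dynamic dims

# Dummy tensor just to exercise the graph; real usage feeds a preprocessed image batch.
dummy = np.zeros(shape, dtype=np_dtype)

outputs = session.run(None, {inp.name: dummy})
for meta, out in zip(session.get_outputs(), outputs):
    print(meta.name, out.shape)
```

The same ONNX-NMS export can also be consumed by TensorRT (for example by building an engine with `trtexec --onnx=...`), whereas an export that embeds TensorRT NMS is usable only in TensorRT, as the cell above notes.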