
# Detectron2 ONNX Inference

Exporting [detectron2](https://github.com/facebookresearch/detectron2) models to [ONNX](https://github.com/onnx/onnx) and running inference on them is surprisingly hard.

This repository contains my personal learnings with detectron2 and ONNX inference.
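
As a starting point, here is a rough sketch of the export step. It is not a definitive recipe, just what the flow looks like with detectron2's `Caffe2Tracer` API (assuming a detectron2 version that still ships it); the model zoo config and the `input.jpg` sample image are placeholder assumptions, so swap in your own model and image.

```python
import cv2
import onnx
import torch

from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.export import Caffe2Tracer
from detectron2.modeling import build_model

CONFIG = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"  # assumption

# Build the model on CPU and load the pretrained weights.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG)
cfg.MODEL.DEVICE = "cpu"

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# Tracing needs one sample input in detectron2's standard format:
# a list of dicts, each holding a (C, H, W) float tensor.
image = cv2.imread("input.jpg")  # any sample image
tensor = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
inputs = [{"image": tensor}]

# Trace the model and serialize the ONNX graph.
onnx_model = Caffe2Tracer(cfg, model, inputs).export_onnx()
onnx.save(onnx_model, "model.onnx")
```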

Running the exported model through the [caffe2 ONNX backend](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html) lets you run inference on a Raspberry Pi with acceptable inference times.
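
Below is a minimal inference sketch along the lines of the linked caffe2 backend tutorial, assuming the `model.onnx` file and preprocessing from the export sketch above. Note that detectron2's exported graphs can contain custom caffe2 operators, so this only runs as-is if your caffe2 build provides them.

```python
import caffe2.python.onnx.backend as backend
import cv2
import numpy as np
import onnx

# Load the exported graph and prepare a caffe2 representation of it.
model = onnx.load("model.onnx")
rep = backend.prepare(model, device="CPU")  # CPU only on a Raspberry Pi

# Rebuild the same kind of input the model was traced with; detectron2's
# caffe2-style graphs typically take an NCHW image blob plus an im_info
# row of (height, width, scale) per image.
image = cv2.imread("input.jpg")
data = image.transpose(2, 0, 1).astype(np.float32)[np.newaxis]
im_info = np.array([[image.shape[0], image.shape[1], 1.0]], dtype=np.float32)

# Map blobs to the graph's declared inputs (names depend on the export).
inputs = dict(zip([i.name for i in model.graph.input], (data, im_info)))
outputs = rep.run(inputs)
print(outputs)  # boxes, scores, classes, ... depending on the model
```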
