🛠 A lite C++ toolkit of awesome AI models, supporting ONNXRuntime, MNN, TNN, NCNN, and TensorRT.
Deep Learning API and server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented in TensorRT and C++, achieving real-time performance on Orin
Deploy a stable diffusion model with ONNX/TensorRT + Triton Inference Server
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 with a CUDA-capable GPU
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)
YOLOv5 TensorRT implementations
Based on TensorRT v8.0+, deploys YOLOv8 detection, pose, segmentation, and tracking with C++ and Python APIs.
Traffic analysis at a roundabout using computer vision
Using TensorRT for inference model deployment (see the minimal C++ sketch after this list).
You can use DBNet to detect words or barcodes; knowledge distillation is provided, and Python TensorRT inference is also provided.
A TensorRT version of UNet, inspired by tensorrtx
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
ViTPose without MMCV dependencies
C++ TensorRT Implementation of NanoSAM
C++ inference code for the SMOKE 3D object detection model
Based on TensorRT version 8.2.4; compares inference speed across different TensorRT APIs.
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
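Many of the repositories above share the same deserialize-then-execute pattern in the TensorRT C++ API. Below is a minimal sketch of that pattern, assuming TensorRT 8.x, a prebuilt serialized engine at the hypothetical path `model.engine`, and hypothetical binding shapes (one 1x3x640x640 float input, one 1000-float output); a real deployment would query binding shapes from the engine and check every return value.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger implementation required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // Read a serialized engine from disk ("model.engine" is a placeholder path).
    std::ifstream file("model.engine", std::ios::binary);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize the engine and create an execution context.
    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Allocate device buffers for one input and one output binding.
    // Shapes here are assumptions for illustration (1x3x640x640 in, 1x1000 out).
    float *dIn = nullptr, *dOut = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&dIn),  1 * 3 * 640 * 640 * sizeof(float));
    cudaMalloc(reinterpret_cast<void**>(&dOut), 1 * 1000 * sizeof(float));
    void* bindings[] = {dIn, dOut};

    // Run synchronous inference; enqueueV2 with a CUDA stream is the async variant.
    bool ok = context->executeV2(bindings);
    std::cout << (ok ? "inference ok" : "inference failed") << std::endl;

    cudaFree(dIn); cudaFree(dOut);
    // Since TensorRT 8, objects are released with delete (destroy() is deprecated).
    delete context; delete engine; delete runtime;
    return ok ? 0 : 1;
}
```

Input preprocessing (copying image data to `dIn` with `cudaMemcpy`) and output postprocessing are omitted; they depend on the specific model, which is where the repositories in this list differ.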