# InternImage for Object Detection

This folder contains the implementation of InternImage for object detection.

Our detection code is developed on top of MMDetection v2.28.1.

## Usage

### Install

- Clone this repo:

```bash
git clone https://github.com/OpenGVLab/InternImage.git
cd InternImage
```

- Create a conda virtual environment and activate it:

```bash
conda create -n internimage python=3.7 -y
conda activate internimage
```

- Install PyTorch and torchvision. For example, to install torch==1.11 with CUDA==11.3:

```bash
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
```

- Install timm==0.6.11, mmcv-full==1.5.0, and mmdet==2.28.1:

```bash
pip install -U openmim
mim install mmcv-full==1.5.0
pip install timm==0.6.11 mmdet==2.28.1
```

- Install other requirements:

```bash
pip install opencv-python termcolor yacs pyyaml scipy
```

- Compile the CUDA operators (a quick sanity check of the installation is sketched after this list):

```bash
cd ./ops_dcnv3
sh ./make.sh
# unit test (you should see that all checks are True)
python test.py
```

- You can also install the operators using the pre-built .whl files: DCNv3-1.0-whl
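
Once everything is installed, a quick version check can catch mismatches early. This is just a convenience snippet, not part of the repo's scripts:

```python
# Print the versions of the key packages; the expected values match the
# pinned versions in the install steps above.
import torch
import mmcv
import mmdet
import timm

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("mmcv-full:", mmcv.__version__)  # expected 1.5.0
print("mmdet:", mmdet.__version__)     # expected 2.28.1
print("timm:", timm.__version__)       # expected 0.6.11
```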

### Data Preparation

Prepare COCO according to the guidelines in MMDetection v2.28.1.
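
The sketch below (not part of the repo) checks that the expected files are in place; the `data/coco` layout is an assumption based on MMDetection's defaults, so adjust `data_root` if your configs point elsewhere.

```python
# Verify the standard MMDetection-style COCO layout (assumed paths; adjust as needed).
from pathlib import Path

data_root = Path("data/coco")
expected = [
    "annotations/instances_train2017.json",
    "annotations/instances_val2017.json",
    "train2017",
    "val2017",
]
for rel in expected:
    path = data_root / rel
    print(("OK      " if path.exists() else "MISSING ") + str(path))
```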

### Evaluation

To evaluate our InternImage on COCO val, run:

```bash
sh dist_test.sh <config-file> <checkpoint> <gpu-num> --eval bbox segm
```

For example, to evaluate InternImage-T with a single GPU:

```bash
python test.py configs/coco/mask_rcnn_internimage_t_fpn_1x_coco.py checkpoint_dir/det/mask_rcnn_internimage_t_fpn_1x_coco.pth --eval bbox segm
```

For example, to evaluate InternImage-B with a single node with 8 GPUs:

```bash
sh dist_test.sh configs/coco/mask_rcnn_internimage_b_fpn_1x_coco.py checkpoint_dir/det/mask_rcnn_internimage_b_fpn_1x_coco.pth 8 --eval bbox segm
```
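
Besides the evaluation scripts, you can run qualitative inference on a single image with MMDetection's Python API. The sketch below is not one of the repo's scripts; it reuses the config and checkpoint from the example above, and the `mmdet_custom` import hint is an assumption about how the repo registers its custom backbone.

```python
# Single-image inference with the MMDetection v2.x API (convenience sketch).
# If init_detector reports an unknown backbone, the repo's custom modules may
# need to be imported first so InternImage is registered, e.g.:
#   import mmdet_custom  # assumption: adjust to the repo's actual module name
from mmdet.apis import init_detector, inference_detector

config = "configs/coco/mask_rcnn_internimage_t_fpn_1x_coco.py"
checkpoint = "checkpoint_dir/det/mask_rcnn_internimage_t_fpn_1x_coco.pth"

model = init_detector(config, checkpoint, device="cuda:0")
result = inference_detector(model, "./deploy/demo.jpg")
model.show_result("./deploy/demo.jpg", result, score_thr=0.3, out_file="demo_out.jpg")
```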

### Training on COCO

To train an InternImage on COCO, run:

```bash
sh dist_train.sh <config-file> <gpu-num>
```

For example, to train InternImage-T with 8 GPUs on 1 node, run:

```bash
sh dist_train.sh configs/coco/mask_rcnn_internimage_t_fpn_1x_coco.py 8
```
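
If you need to adjust hyperparameters before launching, configs can be inspected and overridden programmatically with mmcv. This is a sketch, not a repo script; the field names (`optimizer.lr`, `data.samples_per_gpu`) assume the standard MMDetection v2.x config schema.

```python
# Inspect or tweak a config before training (convenience sketch).
from mmcv import Config

cfg = Config.fromfile("configs/coco/mask_rcnn_internimage_t_fpn_1x_coco.py")
print("base lr:", cfg.optimizer.lr)
print("images per GPU:", cfg.data.samples_per_gpu)

# Example override: halve the learning rate, dump a new config, and pass the
# dumped file to dist_train.sh instead of the original one.
cfg.optimizer.lr = cfg.optimizer.lr / 2
cfg.dump("configs/coco/mask_rcnn_internimage_t_fpn_1x_coco_halflr.py")
```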

### Manage Jobs with Slurm

For example, to train InternImage-XL with 32 GPUs on 4 nodes, run:

```bash
GPUS=32 sh slurm_train.sh <partition> <job-name> configs/coco/cascade_internimage_xl_fpn_3x_coco.py work_dirs/cascade_internimage_xl_fpn_3x_coco
```

### Export

To export a detection model from PyTorch to TensorRT, run:

```bash
MODEL="model_name"
CKPT_PATH="/path/to/model/ckpt.pth"

python deploy.py \
    "./deploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py" \
    "./configs/coco/${MODEL}.py" \
    "${CKPT_PATH}" \
    "./deploy/demo.jpg" \
    --work-dir "./work_dirs/mmdet/instance-seg/${MODEL}" \
    --device cuda \
    --dump-info
```

For example, to export mask_rcnn_internimage_t_fpn_1x_coco from PyTorch to TensorRT, run:

```bash
MODEL="mask_rcnn_internimage_t_fpn_1x_coco"
CKPT_PATH="/path/to/model/ckpt/mask_rcnn_internimage_t_fpn_1x_coco.pth"

python deploy.py \
    "./deploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py" \
    "./configs/coco/${MODEL}.py" \
    "${CKPT_PATH}" \
    "./deploy/demo.jpg" \
    --work-dir "./work_dirs/mmdet/instance-seg/${MODEL}" \
    --device cuda \
    --dump-info
```
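
After conversion, you can sanity-check the exported engine with MMDeploy's Python API. This is a sketch rather than one of the repo's scripts; it assumes MMDeploy is installed and that `deploy.py` follows MMDeploy's convention of writing an `end2end.engine` file into the work directory.

```python
# Run the exported TensorRT engine on one image via MMDeploy (hedged sketch).
# The engine filename (end2end.engine) is an assumption based on MMDeploy's
# default output layout; check the work dir produced by deploy.py.
from mmdeploy.apis import inference_model

model_cfg = "./configs/coco/mask_rcnn_internimage_t_fpn_1x_coco.py"
deploy_cfg = "./deploy/configs/mmdet/instance-seg/instance-seg_tensorrt_dynamic-320x320-1344x1344.py"
backend_files = ["./work_dirs/mmdet/instance-seg/mask_rcnn_internimage_t_fpn_1x_coco/end2end.engine"]

result = inference_model(model_cfg, deploy_cfg, backend_files,
                         img="./deploy/demo.jpg", device="cuda:0")
print(result)  # bbox / mask predictions for the demo image
```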