From 4d541cee13bc5916d237e43082c89bfc66e1c52b Mon Sep 17 00:00:00 2001
From: zifeng-radxa
Date: Wed, 11 Sep 2024 09:53:18 +0800
Subject: [PATCH 1/2] docs: add zhouyi sdk quick start

signed-off-by: "Morgan ZHANG"
---
 docs/sirider/s1/app-development/zhouyi_npu.md | 90 ++++++++++++++-----
 .../sirider/s1/app-development/zhouyi_npu.md  | 78 ++++++++++++----
 2 files changed, 129 insertions(+), 39 deletions(-)

diff --git a/docs/sirider/s1/app-development/zhouyi_npu.md b/docs/sirider/s1/app-development/zhouyi_npu.md
index b421d59c..1a822c2e 100644
--- a/docs/sirider/s1/app-development/zhouyi_npu.md
+++ b/docs/sirider/s1/app-development/zhouyi_npu.md
@@ -1,6 +1,7 @@
 ---
 sidebar_position: 1
 ---
+
 # 周易Z2 AIPU

 “周易” AIPU 是由安谋中国针对深度学习而自主研发的创新性 AI 专用处理器，它采用了创新性的架构设计，提供完整的硬件和软件生态，并且具有 PPA 最佳平衡。

@@ -8,13 +9,42 @@ sidebar_position: 1
 “周易” AIPU 也支持业界主流的 AI 框架，包括 TensorFlow、ONNX 等，未来也将支持更多不同的扩展框架。

 “周易” Z2 AIPU 将主要面向中高端安防、智能座舱和 ADAS、边缘服务器等应用场景。
+
+## 快速例子
+
+Radxa 提供一个开箱即用的目标分类例子，让用户可以直接在 Sirider S1 上使用 AIPU 推理 ResNet50 模型，免去复杂的模型编译和执行程序编译，
+对想快速使用 AIPU 而不想从头编译模型的用户来说，这是最佳选择。如您对完整工作流程感兴趣，可以参考 [周易 Z2 AIPU 使用教程](zhouyi_npu#周易-z2-aipu-使用教程)。
+
+- 克隆仓库代码
+  ```bash
+  git clone https://github.com/zifeng-radxa/siriders1_NPU_example.git
+  ```
+- 安装依赖
+  ```bash
+  cd siriders1_NPU_example
+  pip3 install -r requirements.txt
+  ```
+- 生成用于模型输入的文件
+
+  ```bash
+  python3 input_gen.py --img_path
+  ```
+
+- 模型推理
+  ```bash
+  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/libs
+  ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin
+  ```
+
 ## 周易 Z2 AIPU 使用教程

 ### x86 PC 端安装周易 AIPU SDK
+
 周易 SDK 是一个全栈平台，可为用户提供快速上市的开发和部署能力。

 ![image](https://user-images.githubusercontent.com/85479712/198521602-49e13a31-bb49-424f-b782-5108274d63c3.png)

 - 准备一个 python3.8 的环境
+
   - (可选)安装 [Anaconda](https://www.anaconda.com/)

     如果系统中没有安装 Python 3.8（必要版本），或者同时有多个版本的 Python 环境，建议使用 [Anaconda](https://www.anaconda.com/) 创建新的 Python 3.8 环境

@@ -40,16 +70,16 @@ sidebar_position: 1
      conda activate aipu
      ```

   - 退出环境
     ```bash
     conda
deactivate ``` - + - 在[瑞莎下载站](https://dl.radxa.com/sirider/s1/)下载周易 Z2 SDK 安装包后解压安装 - ```bash - tar -xvf Zhouyi_Z2.tar.gz - cd Zhouyi_Z2 && bash +x SETUP.SH - ``` + ```bash + tar -xvf Zhouyi_Z2.tar.gz + cd Zhouyi_Z2 && bash +x SETUP.SH + ``` - 安装后得到的完整 SDK 文件如下 - `AI610-SDK-r1p3-AIoT` : ARM ZhouYi Z2 工具包 @@ -57,18 +87,22 @@ sidebar_position: 1 - `siengine` : siengine 提供的 ARM ZhouYi Z2 模型编译(nn-compiler-user-case-example)及板子部署(nn-runtime-user-case-example)的 demos - 配置 nn-compiler 环境 - ```bash - cd AI610-SDK-r1p3-AIoT/AI610-SDK-r1p3-00eac0/Out-Of-Box/out-of-box-nn-compiler - pip3 install -r lib_dependency.txt - ``` + + ```bash + cd AI610-SDK-r1p3-AIoT/AI610-SDK-r1p3-00eac0/Out-Of-Box/out-of-box-nn-compiler + pip3 install -r lib_dependency.txt + ``` + 因为此 SDK 不包含模拟功能, 故安装过程会出现安装 AIPUSimProfiler 的报错,可以忽略 - + 若使用 venv 的用户请在 env_setup.sh 中 pip3 install 部分去掉 --user 选项 - ```bash - source env_setup.sh - ``` + + ```bash + source env_setup.sh + ``` ### x86 PC 端模型转换 + nn-compiler 可以将 TensorFlow、ONNX 等框架模型转换成可以在周易 AIPU 进行硬件加速推理的模型文件 :::tip 此案例中将介绍开箱即用案例:resnet50 目标分类 @@ -89,20 +123,24 @@ nn-compiler 可以将 TensorFlow、ONNX 等框架模型转换成可以在周易 python3 generate_calibration_data.py ``` - 生成用于模型推理的照片文件 + ```bash python3 generate_input_binary.py ``` + 文件在 ./resnet50/input_3_224_224.bin -- (可选) 配置 build.cfg (开箱即用案例已提供) +- (可选) 配置 build.cfg (开箱即用案例已提供) ```bash vim ./resnet50/build.cfg ``` - 生成 aipu 模型 + ```bash cd ./restnet50 aipubuild build.cfg ``` + 在 ./restnet50 中得到 aipu_mlperf_resnet50.bin :::tip @@ -110,8 +148,11 @@ nn-compiler 可以将 TensorFlow、ONNX 等框架模型转换成可以在周易 ::: ### 板端使用周易 Z2 推理 AIPU 模型 + 在使用周易 Z2 AIPU 推理前需要在 x86 主机进行交叉编译生成可执行文件 `aiputest`,然后拷贝到 Sirider S1 中执行 + #### 在 x86 PC 端交叉编译二进制可执行文件 + - 安装 [gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu](https://releases.linaro.org/components/toolchain/binaries/latest-7/aarch64-linux-gnu/) 交叉编译工具链 ```bash tar -xvf gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu.tar @@ -120,30 +161,37 @@ nn-compiler 可以将 TensorFlow、ONNX 等框架模型转换成可以在周易 - 编译 
aiputest - 修改 UMDSRC 变量 + ```bash - cd siengine/nn-runtime-user-case-example + cd siengine/nn-runtime-user-case-example vim CMakeLists.txt #set(UMDSRC "${CMAKE_SOURCE_DIR}/../AI610-SDK-${AIPU_VERSION}-00eac0/AI610-SDK-1012-${AIPU_VERSION}-eac0/Linux-driver/driver/umd") set(UMDSRC "${CMAKE_SOURCE_DIR}/../../AI610-SDK-${AIPU_VERSION}-AIoT/AI610-SDK-r1p3-00eac0/AI610-SDK-1012-${AIPU_VERSION}-eac0/Linux-driver/driver/umd") ``` + - 交叉编译 + ```bash mkdir build && cd build cmake -DCMAKE_BUILD_TYPE=Release .. make ``` + 编译生成的文件在 `siengine/nn-runtime-user-case-example/out/linux/aipu_test` #### 在 Sirider S1 进行板端推理 + - 将生成的 `aipu_mlperf_resnet50.bin` 模型文件,`input_3_224_224.bin` 照片文件,`aipu_test` 可执行文件,`out/linux/libs` 动态库文件夹复制到 Sirider S1 中 - 执行 aipu_test + ```bash export LD_LIBRARY_PATH=$LD_LIBRARY_PATH: ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin ``` + ```bash - (aiot-focal_overlayfs)root@linux:~/ssd# ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin - usage: ./aipu_test aipu.bin input0.bin + (aiot-focal_overlayfs)root@linux:~/ssd# ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin + usage: ./aipu_test aipu.bin input0.bin aipu_init_context success aipu_load_graph_helper success: aipu_mlperf_resnet50.bin aipu_create_job success @@ -171,13 +219,15 @@ nn-compiler 可以将 TensorFlow、ONNX 等框架模型转换成可以在周易 aipu_unload_graph success aipu_deinit_ctx success ``` + 两次的推理总时间 + ```bash real 0m0.043s user 0m0.008s sys 0m0.023s ``` - + 这里结果仅显示 推理结果的标签值,最大置信度 637 即对应 [imagenet1000](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a) 中的 `mailbag, postbag` - ![input.webp](/img/sirider/s1/aipu_1.webp) \ No newline at end of file + ![input.webp](/img/sirider/s1/aipu_1.webp) diff --git a/i18n/en/docusaurus-plugin-content-docs/current/sirider/s1/app-development/zhouyi_npu.md b/i18n/en/docusaurus-plugin-content-docs/current/sirider/s1/app-development/zhouyi_npu.md index d9d2ed7a..0c771bba 100644 --- 
a/i18n/en/docusaurus-plugin-content-docs/current/sirider/s1/app-development/zhouyi_npu.md
+++ b/i18n/en/docusaurus-plugin-content-docs/current/sirider/s1/app-development/zhouyi_npu.md
@@ -10,6 +10,31 @@ The "Zhouyi" AIPU supports mainstream AI frameworks, including TensorFlow and ON

 The "Zhouyi" Z2 AIPU is primarily targeted at mid-to-high-end security, intelligent cockpits and ADAS (Advanced Driver Assistance Systems), edge servers, and other application scenarios.
+
+## Quick Example
+
+Radxa provides a ready-to-use object classification example that lets users run ResNet50 inference on the AIPU of the Sirider S1 directly, eliminating the need to compile the model and the inference executable themselves. This is the best choice for users who want to try the AIPU without compiling models from scratch. If you are interested in the complete workflow, please refer to the [Zhouyi Z2 AIPU User Guide](zhouyi_npu#zhouyi-z2-aipu-user-guide).
+
+- Clone the repository
+  ```bash
+  git clone https://github.com/zifeng-radxa/siriders1_NPU_example.git
+  ```
+- Install dependencies
+  ```bash
+  cd siriders1_NPU_example
+  pip3 install -r requirements.txt
+  ```
+- Generate the input file for the model
+
+  ```bash
+  python3 input_gen.py --img_path
+  ```
+
+- Run model inference
+  ```bash
+  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/libs
+  ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin
+  ```
+
 ## Zhouyi Z2 AIPU User Guide

 ### Install Zhouyi AIPU SDK on x86 PC

@@ -18,8 +43,8 @@ The Zhouyi SDK is a full-stack platform that provides users with rapid developme

 ![image](https://user-images.githubusercontent.com/85479712/198521602-49e13a31-bb49-424f-b782-5108274d63c3.png)

-- Prepare a python 3.8 environment
+- Prepare a Python 3.8 environment
+
   - (Optional) Install [Anaconda](https://www.anaconda.com/)

     If Python 3.8 (required version) is not installed on your system or if you have multiple Python versions, it is recommended to use [Anaconda](https://www.anaconda.com/) to create a new Python 3.8 environment.
@@ -54,10 +79,10 @@ The Zhouyi SDK is a full-stack platform that provides users with rapid developme ``` - Download the Zhouyi Z2 SDK installation package from the [Radxa Download Station](https://dl.radxa.com/sirider/s1/) and extract it for installation: - ```bash - tar -xvf Zhouyi_Z2.tar.gz - cd Zhouyi_Z2 && bash +x SETUP.SH - ``` + ```bash + tar -xvf Zhouyi_Z2.tar.gz + cd Zhouyi_Z2 && bash +x SETUP.SH + ``` - After installation, the complete SDK files are as follows: - `AI610-SDK-r1p3-AIoT`: ARM Zhouyi Z2 toolkit @@ -65,16 +90,19 @@ The Zhouyi SDK is a full-stack platform that provides users with rapid developme - `siengine`: Demos provided by siengine for ARM Zhouyi Z2 model compilation (nn-compiler-user-case-example) and board deployment (nn-runtime-user-case-example) - Configure the nn-compiler environment: - ```bash - cd AI610-SDK-r1p3-AIoT/AI610-SDK-r1p3-00eac0/Out-Of-Box/out-of-box-nn-compiler - pip3 install -r lib_dependency.txt - ``` + + ```bash + cd AI610-SDK-r1p3-AIoT/AI610-SDK-r1p3-00eac0/Out-Of-Box/out-of-box-nn-compiler + pip3 install -r lib_dependency.txt + ``` + Since this SDK does not include simulation functionality, errors may occur when installing AIPUSimProfiler. These can be ignored. If using a virtual environment (venv), please remove the --user option from the pip3 install part in env_setup.sh: - ```bash - source env_setup.sh - ``` + + ```bash + source env_setup.sh + ``` ### Model Conversion on x86 PC @@ -99,9 +127,11 @@ For the complete SDK documentation, please refer to `AI610-SDK-r1p3-AIoT/AI610-S python3 generate_calibration_data.py ``` - Generate image files for model inference: + ```bash python3 generate_input_binary.py ``` + The file is located in ./resnet50/input_3_224_224.bin. 
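For readers curious what the generated `.bin` input actually contains, here is a hypothetical sketch of the kind of preprocessing a script like `generate_input_binary.py` performs: resize the image to 224x224, reorder it to CHW, and dump the raw bytes. The real script's normalization, channel order, and dtype are defined by the SDK and may well differ, so treat this as illustration only.

```python
import numpy as np

def image_to_input_bin(img: np.ndarray, out_path: str = "input_3_224_224.bin") -> str:
    # img: an HxWx3 uint8 array; a real pipeline would load and resize it
    # with PIL or OpenCV before this step.
    if img.shape != (224, 224, 3):
        raise ValueError("expected a 224x224 RGB image")
    chw = np.transpose(img, (2, 0, 1))  # HWC -> CHW, matching the 3_224_224 name
    chw.astype(np.uint8).tofile(out_path)  # raw bytes, no header
    return out_path

# Synthetic example image (all zeros) just to show the call:
path = image_to_input_bin(np.zeros((224, 224, 3), dtype=np.uint8))
```

The `3_224_224` in the file name suggests a CHW (channels-first) layout, which is what the sketch assumes.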
- (Optional) Configure build.cfg (provided in out-of-the-box example): @@ -109,15 +139,18 @@ For the complete SDK documentation, please refer to `AI610-SDK-r1p3-AIoT/AI610-S vim ./resnet50/build.cfg ``` - Generate the aipu model: + ```bash cd ./restnet50 aipubuild build.cfg ``` - The aipu model is generated in ./restnet50 as aipu_mlperf_resnet50.bin. - - :::tip + + The aipu model is generated in ./restnet50 as aipu_mlperf_resnet50.bin. + + :::tip If `aipubuild` command not found, try `export PATH=$PATH:/root/.local/bin`. ::: + ### Use Zhouyi Z2 for AIPU Model Inference on the Board Before using Zhouyi Z2 AIPU for inference, a cross-compiled executable file `aiputest` needs to be generated on the x86 host and then copied to the Sirider S1 for execution. @@ -132,31 +165,37 @@ Before using Zhouyi Z2 AIPU for inference, a cross-compiled executable file `aip - Compile aiputest: - Modify the UMDSRC variable: + ```bash - cd siengine/nn-runtime-user-case-example + cd siengine/nn-runtime-user-case-example vim CMakeLists.txt #set(UMDSRC "${CMAKE_SOURCE_DIR}/../AI610-SDK-${AIPU_VERSION}-00eac0/AI610-SDK-1012-${AIPU_VERSION}-eac0/Linux-driver/driver/umd") set(UMDSRC "${CMAKE_SOURCE_DIR}/../../AI610-SDK-${AIPU_VERSION}-AIoT/AI610-SDK-r1p3-00eac0/AI610-SDK-1012-${AIPU_VERSION}-eac0/Linux-driver/driver/umd") ``` + - Cross-compile: + ```bash mkdir build && cd build cmake -DCMAKE_BUILD_TYPE=Release .. make ``` + The compiled file is located in `siengine/nn-runtime-user-case-example/out/linux/aipu_test`. #### Inference on the Sirider S1 - Copy the generated `aipu_mlperf_resnet50.bin` model file, `input_3_224_224.bin` image file, `aipu_test` executable file, and `out/linux/libs` dynamic library folder to the Sirider S1. 
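Once these files are on the board, `aipu_test` reports the class index with the highest score (637 in the log shown in this guide). Conceptually, that post-processing step is just an argmax over the 1000 ResNet50 class scores. A minimal sketch, assuming the output is a flat float32 buffer — the actual buffer layout and dtype depend on the compiled model and should be checked against its output descriptor:

```python
import numpy as np

def top1(raw: bytes):
    # Assumption: the output buffer holds 1000 float32 class scores.
    scores = np.frombuffer(raw, dtype=np.float32)
    idx = int(np.argmax(scores))          # index of the best class
    return idx, float(scores[idx])        # (label index, its score)

# Synthetic buffer where class 637 ("mailbag, postbag" in ImageNet) wins:
fake = np.zeros(1000, dtype=np.float32)
fake[637] = 9.5
idx, score = top1(fake.tobytes())
```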
- Execute aipu_test:
+
   ```bash
   export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:
   ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin
   ```
+
   ```bash
-  (aiot-focal_overlayfs)root@linux:~/ssd# ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin
-  usage: ./aipu_test aipu.bin input0.bin
+  (aiot-focal_overlayfs)root@linux:~/ssd# ./aipu_test aipu_mlperf_resnet50.bin input_3_224_224.bin
+  usage: ./aipu_test aipu.bin input0.bin
   aipu_init_context success
   aipu_load_graph_helper success: aipu_mlperf_resnet50.bin
   aipu_create_job success
@@ -184,7 +223,9 @@ Before using Zhouyi Z2 AIPU for inference, a cross-compiled executable file `aip
   aipu_unload_graph success
   aipu_deinit_ctx success
   ```
+
   The total time for two inferences:
+
   ```bash
   real 0m0.043s
   user 0m0.008s
   sys 0m0.023s
   ```

 The output only shows the label indices of the inference results; the index with the highest confidence, 637, corresponds to `mailbag, postbag` in [imagenet1000](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
![input.webp](/img/sirider/s1/aipu_1.webp) - From 3061312fde108d5b75485078320b0573c8f46b05 Mon Sep 17 00:00:00 2001 From: zifeng-radxa Date: Wed, 11 Sep 2024 10:55:19 +0800 Subject: [PATCH 2/2] docs: add python requirements in yolov5 docs signed-off-by: "Morgan ZHANG" --- .../common/dev/_rknn-toolkit-lite2-yolov5.mdx | 52 +++++++++++------- .../common/dev/_rknn-toolkit-lite2-yolov5.mdx | 55 ++++++++++++------- 2 files changed, 65 insertions(+), 42 deletions(-) diff --git a/docs/common/dev/_rknn-toolkit-lite2-yolov5.mdx b/docs/common/dev/_rknn-toolkit-lite2-yolov5.mdx index 32a60308..37e67e17 100644 --- a/docs/common/dev/_rknn-toolkit-lite2-yolov5.mdx +++ b/docs/common/dev/_rknn-toolkit-lite2-yolov5.mdx @@ -10,9 +10,11 @@ - 板端利用 rknn-toolkit2-lite 的 Python API 板端推理模型 ### PC端模型转换 + :::tip Radxa 已提供预转换好的 `yolov5s_rk35XX.rknn` 模型,用户可直接参考[板端推理 YOLOv5 ](#板端推理-yolov5)跳过 PC 端模型转换章节 ::: + - 如使用 conda 请先激活 rknn conda 环境 ```bash @@ -49,41 +51,49 @@ Radxa 已提供预转换好的 `yolov5s_rk35XX.rknn` 模型,用户可直接参 - 将 yolov5.rknn 模型拷贝到板端 ### 板端推理 YOLOv5 + :::tip RK3566/3568 芯片用户使用 NPU 前需要在 rsetup overlays 中开启, 具体请参考 [rsetup](../radxa-os/rsetup) ::: - (可选)下载 radxa 准备的 yolov5s rknn 模型 - | 平台 | 下载链接 | - | -------- | ------------------------------------------------------------ | - | rk3566 | [yolov5s_rk3566.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3566.rknn) | - | rk3568 | [yolov5s_rk3568.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3568.rknn) | - | rk3588 | [yolov5s_rk3588.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3588.rknn) | + | 平台 | 下载链接 | + | -------- | ------------------------------------------------------------ | + | rk3566 | [yolov5s_rk3566.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3566.rknn) | + | rk3568 | 
[yolov5s_rk3568.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3568.rknn) | + | rk3588 | [yolov5s_rk3588.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3588.rknn) | - 修改 `rknn_model_zoo/py_utils/rknn_executor.py` 代码 - ```python - 1 # from rknn.api import RKNN - 2 try: - 3 from rknn.api import RKNN - 4 except: - 5 from rknnlite.api import RKNNLite as RKNN - ... - ... - 18 ret = rknn.init_runtime() - ``` -- 修改 `rknn_model_zoo/examples/yolov5/python` 代码 - ```python - 262 outputs = model.run([np.expand_dims(input_data, 0)]) - ``` + + 请根据[板端安装 RKNN Model Zoo](./rknn_install#可选-板端安装-rknn-model-zoo) 配置 RKNN Model Zoo 代码仓库 + + ```python + 1 # from rknn.api import RKNN + 2 try: + 3 from rknn.api import RKNN + 4 except: + 5 from rknnlite.api import RKNNLite as RKNN + ... + ... + 18 ret = rknn.init_runtime() + ``` + +- 修改 `rknn_model_zoo/examples/yolov5/python/yolov5.py` 代码 + ```python + 262 outputs = model.run([np.expand_dims(input_data, 0)]) + ``` +- 安装依赖环境 + ```bash + pip3 install opencv-python-headless + ``` - 运行 yolov5 示例代码 - 请根据[板端安装 RKNN Model Zoo](./rknn_install#可选-板端安装-rknn-model-zoo)配置 RKNN Model Zoo 代码仓库 ```bash cd rknn_model_zoo/examples/yolov5/python python3 yolov5.py --model_path --img_save ``` - 如你使用的是自己转换的模型需从 PC 端拷贝到板端,并用 --model_path 参数指定模型路径 + 如你使用的是自己转换的模型需从 PC 端拷贝到板端,并用 --model_path 参数指定模型路径 ```bash rock@radxa-zero3:~/rknn_model_zoo/examples/yolov5/python$ python3 yolov5.py --model_path ./yolov5s_rk3566.rknn --target rk3566 --img_save diff --git a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx index f6e9f157..ea288b5a 100644 --- a/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx +++ b/i18n/en/docusaurus-plugin-content-docs/current/common/dev/_rknn-toolkit-lite2-yolov5.mdx @@ -10,9 +10,11 @@ 
Deploying YOLOv5 with RKNN requires two steps: - On the board, use the Python API of `rknn-toolkit2-lite` for on-board model inference. ### PC Model Conversion + :::tip Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model, and users can directly refer to [YOLOv5 On-Board Inference](#yolov5-on-board-inference) to skip the PC model conversion section. ::: + - If you are using conda, please activate the rknn conda environment first. ```bash @@ -49,39 +51,50 @@ Radxa provides a pre-converted `yolov5s_rk35XX.rknn` model, and users can direct - Copy the `yolov5.rknn` model to the board. ### YOLOv5 On-Board Inference + :::tip For RK3566/3568 chip users, NPU must be enabled in rsetup overlays before use. Please refer to [rsetup](../radxa-os/rsetup) for details. ::: - (Optional) Download the Radxa-provided YOLOv5s RKNN model. - | Platform | Download Link | - | -------- | --------------------------------------------------------------------- | - | rk3566 | [yolov5s_rk3566.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3566.rknn) | - | rk3568 | [yolov5s_rk3568.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3568.rknn) | - | rk3588 | [yolov5s_rk3588.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3588.rknn) | + | Platform | Download Link | + | -------- | --------------------------------------------------------------------- | + | rk3566 | [yolov5s_rk3566.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3566.rknn) | + | rk3568 | [yolov5s_rk3568.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3568.rknn) | + | rk3588 | [yolov5s_rk3588.rknn](https://github.com/zifeng-radxa/rknn_model_zoo/releases/download/yolov5s_rknn/yolov5s_rk3588.rknn) | - Modify the `rknn_model_zoo/py_utils/rknn_executor.py` code - ```python - 1 # from rknn.api import RKNN - 2 
try: - 3 from rknn.api import RKNN - 4 except: - 5 from rknnlite.api import RKNNLite as RKNN - ... - ... - 18 ret = rknn.init_runtime() - ``` -- Modify the `rknn_model_zoo/examples/yolov5/python` code - ```python - 262 outputs = model.run([np.expand_dims(input_data, 0)]) - ``` -- Run the YOLOv5 example code + Please configure the RKNN Model Zoo code repository according to [Install RKNN Model Zoo on the Board](./rknn_install#optional-install-rknn-model-zoo-on-the-board). + ```python + 1 # from rknn.api import RKNN + 2 try: + 3 from rknn.api import RKNN + 4 except: + 5 from rknnlite.api import RKNNLite as RKNN + ... + ... + 18 ret = rknn.init_runtime() + ``` + +- Modify the `rknn_model_zoo/examples/yolov5/python/yolov5.py` code + + ```python + 262 outputs = model.run([np.expand_dims(input_data, 0)]) + ``` + +- Install the required environment + ```bash + pip3 install opencv-python-headless + ``` +- Run the YOLOv5 example code + ```bash cd rknn_model_zoo/examples/yolov5/python python3 yolov5.py --model_path --img_save ``` + If you are using a self-converted model, copy it from the PC to the board, and specify the model path with the `--model_path` parameter. ```bash @@ -117,4 +130,4 @@ For RK3566/3568 chip users, NPU must be enabled in rsetup overlays before use. P - All inference results are saved in `./result`. -![result0](/img/general-tutorial/rknn/result.webp) \ No newline at end of file +![result0](/img/general-tutorial/rknn/result.webp)
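A closing note on the `model.run([np.expand_dims(input_data, 0)])` modification shown for `yolov5.py` above: the call wraps the single preprocessed image in a batch dimension before inference. The shape change is simply the following, assuming YOLOv5's conventional 640x640 HWC input (your exported model's size and layout may differ):

```python
import numpy as np

# A single preprocessed image as an HxWxC array (640x640 is YOLOv5's
# usual input size; this is an assumption for illustration).
input_data = np.zeros((640, 640, 3), dtype=np.uint8)

# Prepend the batch axis so the runtime receives a 4-D tensor.
batched = np.expand_dims(input_data, 0)
print(batched.shape)  # (1, 640, 640, 3)
```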