Commit 32e1034: update README and example
zhangwm-pt committed Nov 3, 2022
1 parent e52dcc7

Showing 5 changed files with 2,288 additions and 16 deletions.
105 changes: 89 additions & 16 deletions README.md
@@ -1,30 +1,103 @@
English | [简体中文](./README_CN.md)

SHL (Structure of Heterogeneous Library, Chinese name: ShiHulan) is a high-performance heterogeneous computing library provided by T-HEAD. Its interface is CSI-NN2, T-HEAD's neural network library API for the XuanTie CPU platform, and it ships with a series of optimized binary libraries.

Features of SHL:

- Reference implementation in C
- Assembly-optimized implementations for the XuanTie CPU series
- Supports symmetric and asymmetric quantization
- Supports 8-bit and 16-bit fixed-point and 16-bit floating-point data types
- Compatible with NCHW and NHWC formats
- Works with [HHB](https://www.yuque.com/za4k4z/kvkcoh) to call the API automatically
- Covers different architectures, such as CPU and NPU
- Includes a reference heterogeneous schedule implementation

SHL provides complete interface declarations and reference implementations; device vendors can use them to implement targeted optimizations for each interface. In principle, SHL itself only provides the reference implementation for the XuanTie CPU platform, and the optimization for each NPU target platform is completed by that platform's vendor.

# Use SHL

- [SHL API](https://www.yuque.com/za4k4z/kkzsw9)
- [SHL deployment tools](https://www.yuque.com/za4k4z/kvkcoh)

# Installation

## Official Python packages

SHL release packages are published on PyPI and are installed together with hhb:

```
pip3 install hhb
```

The binary libraries are installed under /usr/local/lib/python3.6/dist-packages/tvm/install_nn2/.
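
A program written against SHL can be linked directly against these prebuilt libraries. A minimal sketch, assuming the install tree keeps the include/ and lib/ layout that example/Makefile uses, and where app.c is a hypothetical source file of your own:

```
SHL=/usr/local/lib/python3.6/dist-packages/tvm/install_nn2
riscv64-unknown-linux-gnu-gcc app.c -o app.elf -I${SHL}/include ${SHL}/lib/libshl_c906.a -lm -static
```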

## Build SHL from Source

Here is an example of building the C906 library.

Building for C906 requires T-HEAD RISC-V GCC 2.6, which can be obtained from T-HEAD OCC. Download it, decompress it, and add it to the PATH environment variable:

```
wget https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1663142514282/Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1-20220906.tar.gz
tar xf Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1-20220906.tar.gz
export PATH=${PWD}/Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1/bin:$PATH
```
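
To confirm the toolchain is on PATH, query the cross compiler (the same one invoked by example/Makefile):

```
riscv64-unknown-linux-gnu-gcc --version
```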

Download source code

```
git clone https://github.com/T-head-Semi/csi-nn2.git
```

Compile the C906 library:

```
cd csi-nn2
make nn2_c906
```

Install it:

```
make install_nn2
```
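
The static library lands in the in-tree install_nn2/ directory, which example/Makefile links against. A quick check, assuming the default install location:

```
ls install_nn2/lib
# should list libshl_c906.a, the static library linked by example/Makefile
```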

# Quick Start Example

Here is an example of running mobilenetv1 on the XuanTie C906. It shows how to call the SHL API to run inference over the whole model.

Compile command:

```
cd example
make c906_m1_f16
```
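
The c906_m1_f16 target simply cross-compiles the example source and links it statically against the C906 library (rule quoted from example/Makefile in this commit):

```
c906_m1_f16:
	riscv64-unknown-linux-gnu-gcc c906_mobilenetv1_f16.c -o c906_mobilenetv1_f16.elf -I../include ../install_nn2/lib/libshl_c906.a -lm -static
```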

c906_mobilenetv1_f16.elf is generated on completion. Copy it to a development board with a C906 CPU (such as the D1), then execute:

```
./c906_mobilenetv1_f16.elf
```

NOTE: In the original mobilenetv1, every conv2d is followed by a batch norm (BN) layer; this example assumes BN has already been fused into conv2d. For how to use the deployment tools to fuse BN and emit the corresponding float16 weights, see [HHB](https://www.yuque.com/za4k4z/kvkcoh).
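
For reference, fusing a BN layer y = gamma * (x - mean) / sqrt(var + eps) + beta into the preceding conv2d only rescales the conv weights and bias per output channel. A minimal sketch of the algebra in plain C (a hypothetical helper, not an SHL or HHB API):

```
#include <math.h>

/* Fold BN into the preceding conv2d, per output channel o:
 *   w'[o][i] = w[o][i] * gamma[o] / sqrt(var[o] + eps)
 *   b'[o]    = (b[o] - mean[o]) * gamma[o] / sqrt(var[o] + eps) + beta[o]
 * w has out_ch * w_per_out elements; b, gamma, beta, mean, var have out_ch. */
void fuse_bn_into_conv2d(float *w, float *b, const float *gamma,
                         const float *beta, const float *mean,
                         const float *var, float eps,
                         int out_ch, int w_per_out)
{
    for (int o = 0; o < out_ch; o++) {
        float scale = gamma[o] / sqrtf(var[o] + eps);
        for (int i = 0; i < w_per_out; i++) {
            w[o * w_per_out + i] *= scale;
        }
        b[o] = (b[o] - mean[o]) * scale + beta[o];
    }
}
```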

# Resources

- [T-HEAD Open Chip Community](https://xrvm.com/)
- [Use SHL to run MLPerf tiny](https://github.com/mlcommons/tiny_results_v0.7/tree/main/open/Alibaba)

# Acknowledgement

SHL refers to the following projects:

- [Caffe](https://github.com/BVLC/caffe)
- [Tensorflow](https://github.com/tensorflow/tensorflow)
- [ncnn](https://github.com/Tencent/ncnn)
- [MNN](https://github.com/alibaba/MNN)
- [Tengine](https://github.com/OAID/Tengine)
- [CMSIS_5](https://github.com/ARM-software/CMSIS_5)
- [ONNX](https://github.com/onnx/onnx)
- [XNNPACK](https://github.com/google/XNNPACK)
100 changes: 100 additions & 0 deletions README_CN.md
@@ -0,0 +1,100 @@
[English](./README.md) | 简体中文

SHL is a set of neural network library APIs provided by T-HEAD for the XuanTie CPU platform. It abstracts the interfaces of common network layers and provides a series of optimized binary libraries.

Features of SHL:

- Reference implementation in C
- Assembly-optimized implementations for the XuanTie CPU series
- Supports symmetric and asymmetric quantization
- Supports 8-bit and 16-bit fixed-point and 16-bit floating-point data types
- Compatible with NCHW and NHWC formats
- Works with [HHB](https://www.yuque.com/za4k4z/oxlbxl) to call the API automatically
- Covers different architectures, such as CPU and NPU
- Includes a reference heterogeneous implementation

SHL provides complete interface declarations and reference implementations; device vendors can use them to implement targeted optimizations for each interface.

# Use SHL

- [SHL API and design documents](https://www.yuque.com/za4k4z/isgz8o)
- [SHL deployment tools](https://www.yuque.com/za4k4z/oxlbxl)

# Installation

## Install via PyPI

Prebuilt SHL libraries are installed together with hhb from PyPI:

```
pip3 install hhb
```

The binary libraries are installed under /usr/local/lib/python3.6/dist-packages/tvm/install_nn2/.

## Build from Source

Take building the C906 optimizations on Ubuntu as an example.

Building for C906 requires T-HEAD RISC-V GCC. Download GCC version 2.6 from OCC, decompress it, and set the path:

```
wget https://occ-oss-prod.oss-cn-hangzhou.aliyuncs.com/resource//1663142514282/Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1-20220906.tar.gz
tar xf Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1-20220906.tar.gz
export PATH=${PWD}/Xuantie-900-gcc-linux-5.10.4-glibc-x86_64-V2.6.1/bin:$PATH
```

Download the source code:

```
git clone https://github.com/T-head-Semi/csi-nn2.git
```

Compile the C906 library:

```
cd csi-nn2
make nn2_c906
```

Install it:

```
make install_nn2
```

# Quick Start Example

Take running mobilenetv1 on the XuanTie C906 CPU as an example; the sample in the example directory shows, in a fairly simple way, how to call SHL's interfaces.

The compile commands are as follows:

```
cd example
make c906_m1_f16
```

c906_mobilenetv1_f16.elf is generated on completion. Copy it to a development board with a C906 CPU (such as the D1), then execute:

```
./c906_mobilenetv1_f16.elf
```

NOTE: In the original mobilenetv1, every conv2d is followed by a batch norm layer; the example assumes batch norm has already been fused into conv2d by the deployment tools. For how to use the deployment tools to fuse batch norm and generate the corresponding weight values, see [HHB](https://www.yuque.com/za4k4z/oxlbxl).

# Resources

- [T-HEAD Open Chip Community](https://occ.t-head.cn/)
- [Using SHL to run MLPerf tiny](https://github.com/mlcommons/tiny_results_v0.7/tree/main/open/Alibaba)

# Acknowledgement

SHL refers to the following projects:
- [Caffe](https://github.com/BVLC/caffe)
- [Tensorflow](https://github.com/tensorflow/tensorflow)
- [ncnn](https://github.com/Tencent/ncnn)
- [MNN](https://github.com/alibaba/MNN)
- [Tengine](https://github.com/OAID/Tengine)
- [CMSIS_5](https://github.com/ARM-software/CMSIS_5)
- [ONNX](https://github.com/onnx/onnx)
- [XNNPACK](https://github.com/google/XNNPACK)
9 changes: 9 additions & 0 deletions example/Makefile
@@ -0,0 +1,9 @@
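# Both targets link statically against the SHL C906 library built via
# `make nn2_c906 && make install_nn2` in the repository root.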

c906_m1_f16:
riscv64-unknown-linux-gnu-gcc c906_mobilenetv1_f16.c -o c906_mobilenetv1_f16.elf -I../include ../install_nn2/lib/libshl_c906.a -lm -static

c906_c2d_f32:
riscv64-unknown-linux-gnu-gcc c906_conv2d_f32.c -o c906_conv2d_f32.elf -I../include ../install_nn2/lib/libshl_c906.a -lm -static

clean:
rm -rf *.elf
107 changes: 107 additions & 0 deletions example/c906_conv2d_f32.c
@@ -0,0 +1,107 @@
/*
 * Copyright (C) 2016-2022 T-Head Semiconductor Co., Ltd. All rights reserved.
 *
 * SPDX-License-Identifier: Apache-2.0
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/* SHL version 2.1.x */

#include <stdio.h>
#include <stdlib.h>

#include <shl_ref.h>

int main(int argc, char **argv)
{
    struct csinn_session *sess = csinn_alloc_session();
    sess->base_run_mode = CSINN_RM_LAYER;
    struct csinn_tensor *input = csinn_alloc_tensor(sess);
    struct csinn_tensor *output = csinn_alloc_tensor(sess);
    struct csinn_tensor *kernel = csinn_alloc_tensor(sess);
    struct csinn_tensor *bias = csinn_alloc_tensor(sess);
    struct csinn_conv2d_params *params =
        csinn_alloc_params(sizeof(struct csinn_conv2d_params), sess);

    input->dim[0] = 1;    // batch
    input->dim[1] = 512;  // in_channel
    input->dim[2] = 14;   // height
    input->dim[3] = 14;   // width
    kernel->dim[0] = 512;
    kernel->dim[1] = 512;
    kernel->dim[2] = 1;
    kernel->dim[3] = 1;
    bias->dim[0] = 512;
    output->dim[0] = 1;    // batch
    output->dim[1] = 512;  // out_channel
    output->dim[2] = 14;   // height
    output->dim[3] = 14;   // width
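
    /* A 1x1 pointwise convolution: stride 1, no padding, a single group,
     * and dilation 1 (i.e. a standard, non-dilated convolution). */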

    params->stride_height = 1;
    params->stride_width = 1;
    params->pad_left = 0;
    params->pad_right = 0;
    params->pad_top = 0;
    params->pad_down = 0;
    params->dilation_width = 1;
    params->dilation_height = 1;
    params->base.layout = CSINN_LAYOUT_NCHW;
    params->group = 1;
    params->conv_extra.fuse_zp2bias = false;

    input->dim_count = 4;
    input->layout = CSINN_LAYOUT_NCHW;
    input->is_const = 0;
    input->quant_channel = 1;

    kernel->dim_count = 4;
    kernel->layout = CSINN_LAYOUT_OIHW;
    kernel->is_const = 1;
    kernel->quant_channel = 1;

    bias->dim_count = 1;
    bias->layout = CSINN_LAYOUT_O;
    bias->is_const = 1;
    bias->quant_channel = 1;

    output->dim_count = 4;
    output->layout = CSINN_LAYOUT_NCHW;
    output->is_const = 0;
    output->quant_channel = 1;

    input->dtype = CSINN_DTYPE_FLOAT32;
    kernel->dtype = CSINN_DTYPE_FLOAT32;
    bias->dtype = CSINN_DTYPE_FLOAT32;
    output->dtype = CSINN_DTYPE_FLOAT32;

    params->base.api = CSINN_C906;

    /* alloc random input */
    input->data = malloc(14 * 14 * 512 * 4);
    /* alloc random kernel */
    kernel->data = malloc(512 * 512 * 1 * 1 * 4);
    /* alloc random bias */
    bias->data = malloc(512 * 4);
    /* alloc random output */
    output->data = malloc(14 * 14 * 512 * 4);
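    /* Note: malloc() leaves the buffers uninitialized; the indeterminate
     * contents act as random data for this timing-oriented example. */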

    csinn_conv2d_init(input, output, kernel, bias, params);
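
    /* shl_get_timespec() returns a monotonic timestamp in nanoseconds,
     * consistent with the millisecond and FPS arithmetic in the printf
     * below (inferred from that arithmetic rather than documented here). */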

    uint64_t start_time, end_time;
    start_time = shl_get_timespec();
    csinn_conv2d(input, output, kernel, bias, params);
    end_time = shl_get_timespec();
    printf("Run graph execution time: %.5fms, FPS=%.2f\n",
           ((float)(end_time - start_time)) / 1000000,
           1000000000.0 / ((float)(end_time - start_time)));

    return 0;
}
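
To build and run this layer-level example, use the c906_c2d_f32 target from example/Makefile, then copy the binary to a C906 board as with the mobilenetv1 example:

```
cd example
make c906_c2d_f32
# on the development board:
./c906_conv2d_f32.elf
```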