
Releases: XUANTIE-RV/tvm

v2.6.0

19 Oct 12:19

This is the release note for HHB 2.6, applicable to development boards built around chips with Xuantie CPUs.
This release includes functional enhancements and bug fixes.

Features and enhancements

This version adds the following features and enhancements:

  • Added the c920v2 target (see the sketch after this list)
  • Added support for the optional third-party quantization tool ppq
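
As a rough illustration of the new target, the sketch below compiles a model for c920v2. The flag names (--model-file, --board, --quantization-scheme, --calibrate-dataset, --output) and the file names are assumptions based on typical HHB command lines, not taken from this release note; check hhb --help in your installation for the authoritative options.

    # Hypothetical HHB invocation targeting the new c920v2 board.
    # All flag and file names below are illustrative assumptions.
    hhb --model-file mobilenetv2.onnx \
        --board c920v2 \
        --quantization-scheme int8_asym_w_sym \
        --calibrate-dataset calibration_samples.npz \
        --output c920v2_out

The same command shape should apply when experimenting with the optional ppq tool, although this release note does not document the exact switch for enabling it.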

Limitations

This version has the following limitations:

  • On the NPU of the th1520 platform, softmax cannot be the first layer of a model. For more details, refer to the list of operators supported by the platform.
  • Quantizing the weights of the BERT, mobileVit, swin-transformer, and facedetect models with the 8-bit quantization algorithm can cause accuracy problems.

Bug fixes

This version fixes the following issues:

Known issues

This version has the following known issues:

  • On the NPU of the th1520 platform, some combinations of leaky relu + add, split + concat, and concat + concat can cause abnormal accuracy.

Deprecated features

Starting from this version, the following features are no longer supported or no longer recommended:

  • --channel-quantization: no longer available; the corresponding int4_asym_w_sym, int8_asym_w_sym, and float16_w_int8 schemes use per-channel quantization by default (see the sketch after this list).
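
For existing command lines, dropping the removed flag should be enough, since per-channel quantization is now applied automatically for the schemes named above. A minimal before/after sketch, assuming typical HHB flags (only --channel-quantization itself appears in this release note; the other flag and file names are illustrative):

    # HHB 2.4 and earlier: per-channel quantization requested explicitly.
    hhb --model-file model.onnx --board th1520 \
        --quantization-scheme int8_asym_w_sym --channel-quantization

    # HHB 2.6: the flag is gone; per-channel quantization is the default
    # for int4_asym_w_sym, int8_asym_w_sym, and float16_w_int8.
    hhb --model-file model.onnx --board th1520 \
        --quantization-scheme int8_asym_w_sym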

v2.4.0

19 Jul 11:37

This is the release note for HHB 2.4, applicable to development boards built around chips with Xuantie CPUs.
This release includes functional enhancements and bug fixes.

Features and enhancements

This version adds the following features and enhancements:

  • HHB can run on Python 3.8 and Python 3.10
  • HHB can run on Ubuntu 20.04 and Ubuntu 22.04
  • Added per-channel quantization to the float16_w_int8 quantization mode (see the sketch after this list)
  • Added support for the following models:
    • BERT
    • Swin transformer
    • MobileViT
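
As a rough illustration of the new per-channel behaviour, the sketch below requests the float16_w_int8 scheme (float16 activations with int8 weights). The flag and file names are assumptions based on typical HHB command lines, not taken from this release note; check hhb --help for the options your installation actually provides.

    # Hypothetical invocation using the float16_w_int8 scheme, whose
    # weights are now quantized per channel. Names are illustrative.
    hhb --model-file bert.onnx \
        --board th1520 \
        --quantization-scheme float16_w_int8 \
        --calibrate-dataset calibration_samples.npz \
        --output float16_w_int8_out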

Limitations

This version has the following limitations:

  • On the NPU of the th1520 platform, softmax cannot be the first layer of a model. For more details, refer to the list of operators supported by the platform.
  • Quantizing the weights of the BERT, mobileVit, swin-transformer, and facedetect models with the 8-bit quantization algorithm can cause accuracy problems.

Bug fixes

This version fixes the following issues:

  • Fixed an issue with the split operator when converting from NCHW to NHWC layout
  • Fixed a compilation error in the generated main.c with GCC versions above 9.4
  • Fixed incorrect parameter passing in the generated main.c when multiple parameters are used
  • Fixed a per-channel quantization problem in heterogeneous execution

Known issues

This version has the following known issues:

  • On the NPU of the th1520 platform, some combinations of leaky relu + add, split + concat, and concat + concat can cause abnormal accuracy.

Deprecated features

Starting from this version, the following features are no longer supported or no longer recommended:

  • Per-channel quantization no longer affects inputs and outputs
  • The anole platform is no longer recommended
  • int8 symmetric quantization is no longer recommended (see the sketch after this list)
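
Migrating away from symmetric int8 typically means selecting an asymmetric scheme instead. The sketch below shows one possible switch; the scheme names int8_sym and int8_asym and the other flags are assumptions based on typical HHB usage, not taken from this release note.

    # Before: symmetric int8 (no longer recommended); after: asymmetric int8.
    # Flag, file, and scheme names are illustrative assumptions.
    hhb --model-file model.onnx --board th1520 --quantization-scheme int8_sym
    hhb --model-file model.onnx --board th1520 --quantization-scheme int8_asym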

v2.2.0

20 Apr 06:57

v2.0.4

06 Sep 03:21

v1.12.17

18 Jul 08:38