
Releases: Samsung/ONE

ONERT-MICRO 2.0.0

30 Sep 08:54
c74c988

Release Notes for onert-micro 2.0.0

Operations updated

  • New ops supported: Cast, Ceil, Elu, Fill
  • New CMSIS-NN accelerated ops: SVDF, Relu, Relu6

New features for on-device training

  • New Trainable Operations: GRU, StridedSlice
    • Limitation: only GRU's weights can be trained. Since the input gradient is not supported yet, the GRU layer should be the last trainable layer.
  • New Loss Function: Sparse Categorical Cross Entropy (experimental)

ONE Release 1.29.0

29 Aug 14:51
1eabd3f

Release Note 1.29.0

ONE Compiler

  • Support more optimization option(s): --transform_sqrt_div_to_rsqrt_mul, --fold_mul,
    --fuse_add_to_fullyconnected_bias, --fuse_mul_to_fullyconnected_weights,
    --fuse_mul_with_fullyconnected.
  • Add a new optimization: CanonicalizePass.
  • tflite2circle supports more data types: FLOAT64, UINT64, UINT32, UINT16.
  • luci-interpreter supports more data types on some ops.
  • Support multiple option names in command schema.

ONE Release 1.28.0

18 Jul 02:39
44fd15b

Release Note 1.28.0

ONE Runtime

Python API

  • Support experimental Python API
    • Refer to the howto document for more details

On-device Training

  • Support on-device training with circle models
    • Training parameters can be passed to onert via onert's experimental API or by loading a new model format that includes training information: circle_plus
    • A trained model can be exported to a circle model via the experimental API nnfw_train_export_circle (see the sketch after this list)
    • Support transfer learning from a pre-trained circle model
  • Introduce circle_plus_gen tool
    • Generates a circle_plus model file with given training hyperparameters
    • Shows the details of a circle_plus model
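
A rough sketch of this flow with the nnfw C API is shown below. Only nnfw_train_export_circle is named in these notes; the other training calls (nnfw_train_prepare, nnfw_train) and all signatures are assumptions in the style of the regular nnfw API, so check the experimental header for the exact interface.

    #include <nnfw.h>
    #include <nnfw_experimental.h> /* assumed location of the training API */
    #include <stdbool.h>

    int main(void)
    {
      nnfw_session *session = NULL;

      /* Standard nnfw C API; error handling omitted for brevity. */
      nnfw_create_session(&session);
      nnfw_load_model_from_file(session, "model.circle"); /* or a circle_plus package */

      /* Assumed training call -- the real name/signature is in the experimental header. */
      nnfw_train_prepare(session);

      for (int step = 0; step < 100; ++step)
      {
        /* Fill input and expected-output buffers here (omitted), then run one step. */
        nnfw_train(session, true /* update weights */);
      }

      /* Export the trained weights back to a circle model (API named in these notes). */
      nnfw_train_export_circle(session, "trained.circle");

      nnfw_close_session(session);
      return 0;
    }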

Runtime configuration API

  • onert supports a runtime configuration API for the prepare and execution phases via experimental APIs (a usage sketch follows this list)
    • nnfw_set_prepare_config sets a configuration for the prepare phase, and nnfw_reset_prepare_config resets it to the default value
    • nnfw_set_execution_config sets a configuration for the execution phase, and nnfw_reset_execution_config resets it to the default value
    • Supported prepare phase configuration: execution time profiling during prepare
    • Supported execution phase configurations: dump minmax data, dump execution trace, dump execution time
  • Introduce a new API to set the onert workspace directory: nnfw_set_workspace
    • The onert workspace directory is used to store intermediate files during the prepare and execution phases
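
The configuration APIs above could be used roughly as sketched below. The function names come from these notes, but the configuration key constants, string values, and exact signatures are assumptions for illustration only.

    #include <nnfw.h>
    #include <nnfw_experimental.h> /* assumed location of the configuration API */

    void configure(nnfw_session *session)
    {
      /* Workspace directory for intermediate files (minmax dumps, traces, ...); signature assumed. */
      nnfw_set_workspace(session, "/tmp/onert_workspace");

      /* Prepare phase configuration; the key name and value encoding are hypothetical. */
      nnfw_set_prepare_config(session, NNFW_PREPARE_CONFIG_PROFILE, "true");

      /* Execution phase configuration, e.g. dumping minmax data; key name is hypothetical. */
      nnfw_set_execution_config(session, NNFW_RUN_CONFIG_DUMP_MINMAX, "true");

      /* Reset both phases back to the defaults when the extra dumps are no longer needed. */
      nnfw_reset_prepare_config(session);
      nnfw_reset_execution_config(session);
    }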

Minmax Recorder

  • onert's minmax recorder now dumps to a raw file format instead of the HDF5 format
  • onert dumps minmax data into the workspace directory

On-device Compilation

  • onert supports full quantization of uint8/int16 type weights and activations.
    • To quantize activations, onert requires their minmax data.
  • onert supports on-device code generation for backends that require a special binary format, such as a DSP or NPU.
    • Introduce a new experimental API for code generation: nnfw_codegen (sketched below)
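
A minimal sketch of invoking on-device code generation follows; nnfw_codegen is named in these notes, but the target string, the preference constant, and the argument order are assumptions that depend on the installed codegen backend.

    #include <nnfw.h>
    #include <nnfw_experimental.h> /* assumed location of nnfw_codegen */

    void compile_for_backend(nnfw_session *session)
    {
      /* Generate backend-specific binary code for the loaded model.
         "npu-codegen" and NNFW_CODEGEN_PREF_DEFAULT are hypothetical placeholders. */
      nnfw_codegen(session, "npu-codegen", NNFW_CODEGEN_PREF_DEFAULT);
    }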

Type-aware model I/O usage

  • If the loaded model is a quantized model, onert allows float type I/O buffers (see the sketch after this list)
    • onert converts float type input buffers to the quantized type internally
    • onert fills float type output buffers by internally converting quantized output data to float type
  • In a multi-model package, onert allows edges between a quantized model and a float type model
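
For example, a quantized model can be driven with plain float buffers; onert converts between float and the quantized type internally. The sketch below uses the standard nnfw I/O calls; the tensor shapes are illustrative only.

    #include <nnfw.h>

    void run_with_float_io(nnfw_session *session)
    {
      /* The model's tensors are quantized (e.g. uint8), but float buffers are allowed. */
      static float input[1 * 224 * 224 * 3]; /* illustrative input shape */
      static float output[1000];             /* illustrative output shape */

      /* onert quantizes the float input internally before running the model. */
      nnfw_set_input(session, 0, NNFW_TYPE_TENSOR_FLOAT32, input, sizeof(input));

      /* onert dequantizes the model's output into this float buffer. */
      nnfw_set_output(session, 0, NNFW_TYPE_TENSOR_FLOAT32, output, sizeof(output));

      nnfw_run(session);
    }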

ONE Release 1.27.2

16 Jul 07:58
2fd3af9

Release Note 1.27.2

ONE Compiler

  • Support target option with command args in one-profile, one-codegen.
  • Update man pages of one-cmds tools.

ONE Release 1.27.1

11 Jul 12:41
20b23fa

Release Note 1.27.1

ONE Compiler

  • Command schema supports multiple names.
  • Fix invalid warning on boolean type option in onecc.

ONERT-MICRO 2.0.0-pre

09 Jul 03:52
e3929a0

Release Notes for onert-micro 2.0.0-pre

Overall Structure Refactored

  • The C++ API has been changed: see the onert-micro C++ API
  • 60 ops supported: Abs, Add, AddN, AveragePool2D, ArgMax, ArgMin, Concatenation, BatchToSpaceND, Cos, Div, DepthwiseConv2D, Dequantize, FullyConnected, Conv2D, Logistic, Log, Gather, GatherND, Exp, Greater, GreaterEqual, ExpandDims, Equal, Floor, FloorDiv, FloorMod, Pad, Reshape, ReLU, ReLU6, Round, Less, L2Normalize, L2Pool2D, LessEqual, LeakyReLU, LogSoftmax, Mul, Maximum, MaxPool2D, Minimum, NotEqual, Sin, SquaredDifference, Slice, Sub, Split, SpaceToBatchND, StridedSlice, Square, Sqrt, SpaceToDepth, Tanh, Transpose, TransposeConv, Softmax, While, Rsqrt, Unpack

onert-micro supports on-device training feature

  • Trainable operations: 5 operations (Conv2D, FullyConnected, MaxPool2D, Reshape, Softmax)
  • Loss: MSE, Categorical Cross Entropy
  • Optimizer: ADAM, SGD
  • C API for the training feature: see the onert-micro C API header
  • Limitation: for now, only topologically linear models can be trained

ONE Release 1.27.0

28 Jun 05:12
a74a468

Release Note 1.27.0

ONE Compiler

  • Support more Op(s): CircleGRU, CircleRelu0To1
  • Support more optimization option(s): --resolve_former_customop, --forward_transpose_op,
    --fold_shape, --remove_gather_guard, --fuse_add_with_conv, --fold_squeeze, --fuse_rsqrt
  • Support INT4, UINT4 data types
  • Support 4bit quantization of ONNX fake quantize model
  • Introduce global configuration target feature
  • Introduce command schema feature
  • Use C++17

ONE Release 1.26.0

04 Jan 08:20
3f51fd8

Release Note 1.26.0

ONE Compiler

  • Support more Op(s): HardSwish, CumSum, BroadcastTo
  • Support more optimization option(s): decompose_softmax, decompose_hardswish, fuse_slice_with_tconv,
    fuse_mul_with_conv, remove_unnecessary_add, fuse_horizontal_fc_layers, common_subexpression_elimination,
    remove_unnecessary_transpose
  • one-quantize supports more options
    • Requantization option to convert TF2-quantized int8 model to uint8 model (--requantize)
    • A new option to automatically find mixed-precision configuration (--ampq)
    • A new option to save calibrated min/max values (--save_min_max)
    • Add new parameters for moving average calibration (--moving_avg_batch, --moving_avg_const)
  • Introduce q-implant that writes quantization parameters and weights into the circle model
  • Introduce minmax-embedder that embeds min/max values into the circle model

ONE Release 1.24.1

12 Oct 10:31

Release Note 1.24.1

ONE Compiler

  • Update the error message of the rawdata2hdf5 test

ONERT-MICRO 1.0.0

27 Sep 00:05
4d5a78f

onert-micro-cortexm.tar.gz

Release Notes for onert-micro 1.0

Supported operations

More operations are supported as follows:

  • AveragePool2D, Elu, Exp, Abs, Neg, Div, AddN, Relu, Relu6, Leaky_Relu, Pad, PadV2, ArgMin, ArgMax, Resize_Bilinear, LogicalAnd, LogicalOr, Equal, NotEqual, Greater, GreaterEqual, LessEqual

Etc

  • Address sanitizer build option (ENABLE_SANITIZER) is added
  • Fix buffer overflow defects reported by the static analyzer