
v0.6.0

@nathanielsimard released this 21 Mar 14:40

Backend API

  • Almost all tensor operations now receive owned tensors instead of references, which enables backend implementations to reuse tensor-allocated memory (see the sketch after this list).
  • Backends now have a different type for their int tensor, with its own set of operations.
  • Removed the IntegerBackend type.
  • Simpler Element trait with fewer functions.
  • New index-related operations: index_select, index_select_assign, index_select_dim, and index_select_dim_assign.
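
A minimal, Burn-independent sketch of why owned arguments matter: with sole ownership of the underlying buffer, a backend can write results in place instead of allocating. The Arc-backed Storage type below is a stand-in for a backend's tensor storage, not Burn's actual internals.

```rust
use std::sync::Arc;

// Illustrative stand-in for a backend's tensor storage (not a Burn type).
type Storage = Arc<Vec<f32>>;

// Taking the input by value lets the operation check for sole ownership
// and reuse the allocation instead of copying.
fn add_scalar(mut data: Storage, rhs: f32) -> Storage {
    if let Some(buffer) = Arc::get_mut(&mut data) {
        // Sole owner: mutate the existing allocation in place.
        buffer.iter_mut().for_each(|x| *x += rhs);
        data
    } else {
        // The buffer is shared elsewhere: fall back to a fresh allocation.
        Arc::new(data.iter().map(|x| x + rhs).collect())
    }
}
```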

Tensor API

  • The Tensor struct now has a third generic parameter Kind with a default value of Float.
  • There are three kinds of tensors: Float, Bool, and Int.
    • Float Tensor ⇒ Tensor<B, D> or Tensor<B, D, Float>
    • Bool Tensor ⇒ Tensor<B, D, Bool>
    • Int Tensor ⇒ Tensor<B, D, Int>
  • You still don’t have to import any trait for tensor functions to be enabled, but each function now carries an extra constraint based on the tensor’s kind, so you can’t call matmul on a bool tensor. All of this is achieved with zero match or if statements: pure zero-cost abstraction (see the sketch after this list).
  • The BoolTensor struct has been removed.
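
A short sketch of the kind parameter in action. The import path follows the burn::tensor module layout of this release; treat it as an assumption if your paths differ.

```rust
use burn::tensor::{backend::Backend, Bool, Int, Tensor};

fn kinds<B: Backend>(
    x: Tensor<B, 2>,            // Float is the default kind
    mask: Tensor<B, 2, Bool>,
    indices: Tensor<B, 2, Int>,
) {
    // matmul is only defined for the Float kind:
    let _y = x.clone().matmul(x);

    // These would be compile-time errors, with no runtime checks involved:
    // let _ = mask.matmul(mask);       // no matmul for Bool tensors
    // let _ = indices.matmul(indices); // no matmul for Int tensors
    let _ = (mask, indices);
}
```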

Autodiff

  • Tensors are no longer all tracked by default: you now have to call require_grad on tensors that should receive gradients.
  • Operation state is no longer captured unconditionally: each operation now clones only the state it needs for its backward step, which results in a massive performance improvement (see the sketch after this list).
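
A hedged sketch of opting into gradient tracking. The require_grad call comes from these notes; the ADBackend trait and the backward and grad calls follow the autodiff API of this era and are assumptions.

```rust
use burn::tensor::{backend::ADBackend, Tensor};

fn grad_example<B: ADBackend>(x: Tensor<B, 2>) {
    // Untracked by default: no autodiff graph is built for this tensor.
    let frozen = x.clone();

    // Opt in explicitly; only tracked tensors receive gradients.
    let tracked = x.require_grad();

    let loss = tracked.clone().matmul(frozen).mean();
    let grads = loss.backward();
    let _grad = tracked.grad(&grads); // None for untracked tensors
}
```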

No Std

  • Some Burn crates no longer require std, which enables them to run on any platform (see the sketch after this list):
    • burn-core
    • burn-ndarray
    • burn-common
    • burn-tensor
  • We have a WebAssembly demo running MNIST inference. The code is also available here, with detailed notes explaining the process of compiling a model to WebAssembly.
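
A minimal sketch of what this enables: a crate compiled without the standard library, e.g. for WebAssembly or embedded targets. It assumes the crates are pulled in with default-features = false and uses the backend type name from this era (NdArrayBackend); treat the exact names as assumptions.

```rust
#![no_std]

use burn_ndarray::NdArrayBackend;
use burn_tensor::Tensor;

// Nothing in this crate requires the Rust standard library.
type B = NdArrayBackend<f32>;

pub fn double(input: Tensor<B, 2>) -> Tensor<B, 2> {
    input.clone() + input
}
```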

Performance

  • The Tch and NdArray backends now leverage in-place operations.
  • The convolution and max-pooling layers in the NdArray backend have been rewritten for much better performance.
  • The cross-entropy loss module leverages the new index_select operation, resulting in a big performance boost when the number of classes is high (see the sketch below).
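
To illustrate why index_select pays off here, a sketch of the idea rather than the module's actual implementation; the gather-style signature and the [batch, 1] target shape are assumptions.

```rust
use burn::tensor::{backend::Backend, Int, Tensor};

// Gather each sample's target log-probability directly. The work grows with
// the batch size only, instead of batch * num_classes as a one-hot
// formulation would.
fn nll_loss<B: Backend>(
    log_probs: Tensor<B, 2>,    // [batch, num_classes]
    targets: Tensor<B, 2, Int>, // [batch, 1], one class index per sample
) -> Tensor<B, 1> {
    log_probs.index_select(targets).neg().mean()
}
```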

And of course, a lot of fixes and enhancements everywhere.

Thanks to all the contributors for their work: @antimora, @twitchax, @h4rr9.