
BigDL release 2.3.0

@glorysdj released this 24 Apr 02:17 · ce43fac

Highlights

Note: BigDL v2.3.0 includes functional and security updates. Users should update to the latest version.

Nano

  • Enhanced trace and quantization processes (for PyTorch and TensorFlow model optimization; see the sketch after this list)
  • New inference optimization methods (including Intel ARC series GPU support, CPU fp16, JIT int8, etc.)
  • New inference/training features (including TorchCCL support, async inference pipeline, compressed model saving, automatic channels_last_3d, multi-instance training for customized TF train loop, etc.)
  • Performance enhancement and overhead reduction for inference optimized model
  • More user-friendly documentation and API design
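
The trace and quantization entry points above are exposed through Nano's `InferenceOptimizer`. Below is a minimal sketch, assuming the `bigdl.nano.pytorch.InferenceOptimizer` API; argument names such as `calib_data` and `input_sample` are assumptions and may differ between releases.

```python
# Minimal sketch of Nano's trace/quantize flow for a PyTorch model.
# Assumes bigdl.nano.pytorch.InferenceOptimizer; argument names such as
# `calib_data` and `input_sample` may differ between releases.
import torch
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import InferenceOptimizer

model = torch.nn.Sequential(
    torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2)
)
x = torch.rand(16, 8)
calib_loader = DataLoader(
    TensorDataset(x, torch.randint(0, 2, (16,))), batch_size=4
)

# JIT-trace the model for lower inference overhead
traced = InferenceOptimizer.trace(model, accelerator="jit", input_sample=x[:1])

# Post-training int8 quantization driven by a calibration dataloader
quantized = InferenceOptimizer.quantize(model, precision="int8", calib_data=calib_loader)

print(traced(x[:1]), quantized(x[:1]))
```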

Orca

  • Step-by-step distributed TensorFlow and PyTorch tutorials for different data inputs
  • Improvements and examples for distributed MMCV pipelines
  • Further enhancements to the Orca Estimator (more flexible PyTorch train loops via Hook, improved multi-output prediction, memory optimization for OpenVINO, etc.; see the sketch after this list)
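
For reference, here is a minimal sketch of driving a distributed PyTorch train loop through the Orca Estimator. It assumes the `Estimator.from_torch` creator-function API with the `spark` backend; the exact creator signatures may vary between releases.

```python
# Minimal sketch of a distributed PyTorch train loop with the Orca Estimator.
# Assumes the Estimator.from_torch creator-function API with the "spark"
# backend; creator signatures may differ between releases.
import torch
from torch.utils.data import DataLoader, TensorDataset
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca.learn.pytorch import Estimator

def model_creator(config):
    return torch.nn.Linear(8, 2)

def optimizer_creator(model, config):
    return torch.optim.SGD(model.parameters(), lr=config.get("lr", 0.01))

def train_loader_creator(config, batch_size):
    x, y = torch.rand(64, 8), torch.randint(0, 2, (64,))
    return DataLoader(TensorDataset(x, y), batch_size=batch_size)

init_orca_context(cluster_mode="local", cores=2)
est = Estimator.from_torch(model=model_creator,
                           optimizer=optimizer_creator,
                           loss=torch.nn.CrossEntropyLoss(),
                           backend="spark")
est.fit(data=train_loader_creator, epochs=1, batch_size=16)
stop_orca_context()
```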

Chronos

  • 70% latency reduction for Forecasters
  • New bigdl.chronos.aiops module for AIOps use cases on top of Chronos algorithms
  • Enhanced TF-based TCNForecaster for better accuracy (see the sketch after this list)
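
A minimal sketch of the TCNForecaster fit/predict flow on dummy data is shown below; it assumes the `bigdl.chronos.forecaster.TCNForecaster` constructor arguments listed here, which may differ slightly between releases.

```python
# Minimal sketch of the Chronos TCNForecaster fit/predict flow on dummy data.
# Assumes the bigdl.chronos.forecaster.TCNForecaster constructor arguments
# shown here; names may differ slightly between releases.
import numpy as np
from bigdl.chronos.forecaster import TCNForecaster

# 100 windows of 48 past steps predicting the next 5 steps, 1 feature
x_train = np.random.rand(100, 48, 1).astype(np.float32)
y_train = np.random.rand(100, 5, 1).astype(np.float32)

forecaster = TCNForecaster(past_seq_len=48,
                           future_seq_len=5,
                           input_feature_num=1,
                           output_feature_num=1)
forecaster.fit((x_train, y_train), epochs=1)
pred = forecaster.predict(x_train[:1])
print(pred.shape)  # expected (1, 5, 1)
```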

Friesian

  • Automatic deployment of RecSys serving pipeline on Kubernetes with Helm Chart

PPML

  • TDX (both VM and CoCo) support for Big Data, DL Training & Serving (including TDX-VM orchestration & k8s deployment, TDXCC installation & deployment, attestation and key management support, etc.)
  • New Trusted Machine Learning toolkit (with secure and distributed SparkML & LightGBM support)
  • Trusted Big Data toolkit upgrade (>2x EPC usage reduction, Apache Flink support, Azure MAA support, multi-KMS support, etc.)
  • Trusted Deep Learning toolkit upgrade (with improved performance using BigDL Nano, tcmalloc, etc.)
  • Trusted DL Serving toolkit upgrade (with Torch Serve, TF-Serving, and improved throughput and latency)