🎉 We're happy to announce the PyEPO 0.4.0 release. 🎉
Happy Holidays! We're thrilled to bring you an exciting new feature in this release:
We are excited to announce the addition of a new module, `perturbationGradient`, which implements the Perturbation Gradient (PG) loss. The module provides flexibility for various optimization tasks with configurable parameters such as `sigma` (the finite-difference step size) and `two_sides` (one-sided vs. two-sided differencing).
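
Below is a minimal sketch of how the new loss can plug into a PyTorch training loop. It assumes `perturbationGradient` follows the same pattern as our other loss modules in `pyepo.func` (optimization model in the constructor, predicted and true costs in the forward pass); please check the documentation for the exact signature.

```python
import torch
import pyepo
from pyepo.model.grb import shortestPathModel

# shortest-path model on a 5x5 grid (40 edges, Gurobi backend)
optmodel = shortestPathModel(grid=(5, 5))

# PG loss with step size sigma and two-sided differencing
# (assumed signature; see the docs for the exact one)
pgloss = pyepo.func.perturbationGradient(optmodel, sigma=0.1, two_sides=True)

# toy linear predictor: 5 features -> 40 edge costs
predictor = torch.nn.Linear(5, 40)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-2)

x = torch.randn(32, 5)          # features
c = torch.randn(32, 40)         # true costs
cp = predictor(x)               # predicted costs
loss = pgloss(cp, c).mean()     # PG loss against the true costs
optimizer.zero_grad()
loss.backward()
optimizer.step()
```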
This feature is based on the paper "Decision-Focused Learning with Directional Gradients". The PG loss is a surrogate for the objective value, which measures the decision quality of the optimization problem. Following Danskin's Theorem, it is derived from different zeroth-order (finite-difference) approximations of the objective value and has an informative gradient, so it allows us to design algorithms based on stochastic gradient descent.
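
For intuition, here is a self-contained PyTorch sketch of the finite-difference construction (illustrative only, not the module's internal code). It treats the solver as a black-box oracle `solve` that returns an optimal solution for given costs; the function name and `solve` oracle are hypothetical stand-ins.

```python
import torch

def pg_loss(cp, c, solve, sigma=0.1, two_sides=True):
    """Illustrative PG loss: a finite difference of the optimal-value
    function z(p) = min_w p @ w along the true-cost direction c.

    cp:    predicted costs, shape (batch, d), requires grad
    c:     true costs, shape (batch, d)
    solve: hypothetical black-box oracle mapping costs to optimal solutions
    """
    if two_sides:
        # two-sided (central) difference of z at cp in direction c
        p_plus, p_minus = cp + sigma * c, cp - sigma * c
        # per Danskin's Theorem, solutions are treated as constants:
        # gradients flow only through the linear terms p @ w
        w_plus, w_minus = solve(p_plus.detach()), solve(p_minus.detach())
        z_plus = (p_plus * w_plus).sum(dim=-1)
        z_minus = (p_minus * w_minus).sum(dim=-1)
        return (z_plus - z_minus) / (2 * sigma)
    # one-sided (forward) difference
    p_plus = cp + sigma * c
    w_plus, w_zero = solve(p_plus.detach()), solve(cp.detach())
    z_plus = (p_plus * w_plus).sum(dim=-1)
    z_zero = (cp * w_zero).sum(dim=-1)
    return (z_plus - z_zero) / sigma
```

The `detach` calls are where Danskin's Theorem does its work: each value term p·w*(p) is differentiated with the solution w*(p) held fixed, so the gradient of the loss is assembled entirely from solver outputs.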
In addition, thank you, @RuoyuChen615, for sharing her implementation of the PG loss; it provided valuable insights that helped us refine and enhance this module.
We're eager for you to test it out and share your feedback with us. As always, thank you for being a part of our growing community!