Improved Hypergradient optimizers, providing better generalization and faster convergence.
Updated Apr 3, 2024 · Jupyter Notebook
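For a rough picture of what a hypergradient optimizer does, the sketch below implements a basic hypergradient-descent loop, in which the learning rate is itself updated by a gradient step on the loss. The toy quadratic objective and all names are illustrative assumptions and are not taken from the repository above.

```python
# Minimal sketch of hypergradient descent on a toy quadratic loss
# f(w) = 0.5 * ||w||^2. The learning rate alpha is adapted online using
# the hypergradient d f(w_t) / d alpha = -g_t . g_{t-1}.
import numpy as np

def grad(w):
    # Gradient of the toy quadratic loss.
    return w

w = np.array([1.0, -2.0])
alpha = 0.01                     # learning rate, adapted during training
beta = 1e-4                      # hypergradient step size
g_prev = np.zeros_like(w)

for step in range(100):
    g = grad(w)
    h = -np.dot(g, g_prev)       # hypergradient w.r.t. the learning rate
    alpha = alpha - beta * h     # gradient step on the learning rate itself
    w = w - alpha * g            # standard descent step with the adapted alpha
    g_prev = g

print("final w:", w, "final alpha:", alpha)
```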
Code for the paper Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates by Riccardo Grazzi, Massimiliano Pontil and Saverio Salzo (ICML 2024).
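In bilevel problems of this kind, the hypergradient is obtained by differentiating through the solution of the inner problem. The following is a minimal sketch of the classical smooth implicit-differentiation formula on a toy quadratic bilevel problem; it is not the nonsmooth estimators analysed in the paper, and all names and objectives are illustrative assumptions.

```python
# Sketch of a hypergradient via (smooth) implicit differentiation for
#   min_lam  f(w*(lam), lam)   with   w*(lam) = argmin_w g(w, lam),
# using  g(w, lam) = 0.5 w^T A w - lam^T w  and  f(w, lam) = 0.5 ||w - 1||^2.
import numpy as np

d = 3
A = 2.0 * np.eye(d)              # inner Hessian, assumed positive definite

def inner_solution(lam):
    # Stationarity of g:  A w - lam = 0,  so  w*(lam) = A^{-1} lam.
    return np.linalg.solve(A, lam)

def outer_grad_w(w, lam):
    # nabla_w f for the toy outer objective (no direct dependence on lam).
    return w - np.ones(d)

def hypergradient(lam):
    w = inner_solution(lam)
    # General formula:  dF/dlam = nabla_lam f - (nabla^2_{lam w} g) (nabla^2_{ww} g)^{-1} nabla_w f.
    # Here nabla_lam f = 0 and nabla^2_{lam w} g = -I, so the hypergradient is
    # A^{-1} nabla_w f(w*(lam), lam).
    return np.linalg.solve(A, outer_grad_w(w, lam))

print("hypergradient at lam = 0:", hypergradient(np.zeros(d)))
```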