Implementation of (regularized) Anderson acceleration (also known as Approximate Minimal Polynomial Extrapolation, AMPE). This repository accompanies the following paper:
T. D. Nguyen, A. R. Balef, C. T. Dinh, N. H. Tran, D. T. Ngo, T. A. Le, and P. L. Vo. "Accelerating federated edge learning," in IEEE Communications Letters, 25(10):3282–3286, 2021.
Check out my blog post here for an introduction.
You can find the implementation in the aa.py file. The AndersonAcceleration class should be instantiated with the window size (window_size) and the regularization parameter (reg):
>>> import numpy as np
>>> from aa import AndersonAcceleration
>>> acc = AndersonAcceleration(window_size=2, reg=0)
>>> x = np.random.rand(100) # some iterate
>>> x_acc = acc.apply(x) # accelerated from x
At every iteration, you call acc.apply on the current iterate, which will solve a small regularized least-squares problem for the extrapolation coefficients and return the accelerated point. A sketch of this mechanism is shown below.
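Conceptually (this is a sketch of the standard AMPE/RNA update, not necessarily the exact code in aa.py), each call stacks the recent iterates, solves for weights that minimize the norm of the residual differences plus a regularization term, and returns the weighted combination of past iterates:

```python
import numpy as np

def anderson_step(X, reg):
    """Sketch of the AMPE/RNA update; the exact code in aa.py may differ.

    X holds the last m+1 iterates as columns. The coefficients alpha minimize
    ||R @ alpha||^2 + reg * ||alpha||^2 subject to sum(alpha) == 1, where the
    columns of R are successive iterate differences (residual estimates).
    """
    R = np.diff(X, axis=1)              # residual estimates, shape (n, m)
    m = R.shape[1]
    RR = R.T @ R + reg * np.eye(m)      # regularized Gram matrix
    alpha = np.linalg.solve(RR, np.ones(m))
    alpha /= alpha.sum()                # normalize so the weights sum to one
    return X[:, 1:] @ alpha             # extrapolated iterate
```

The reg term keeps the small linear system well conditioned when the residual differences become nearly collinear, which is the usual motivation for the regularized variant.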
We will minimize a strictly convex quadratic objective; check quadratic_example.ipynb for more detail. The plot below shows the optimality gap of vanilla gradient descent (GD) and of its Anderson-accelerated counterpart.
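As an illustrative sketch of the setup (the data, window size, and step size here are made up for the example and may differ from those used in quadratic_example.ipynb):

```python
import numpy as np
from aa import AndersonAcceleration

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + np.eye(50)                # positive definite -> strictly convex objective
b = rng.standard_normal(50)
x_star = np.linalg.solve(A, b)          # minimizer of f(x) = 0.5 x^T A x - b^T x
step = 1.0 / np.linalg.eigvalsh(A)[-1]  # 1/L step size for gradient descent

acc = AndersonAcceleration(window_size=5, reg=1e-10)
x_gd = np.zeros(50)
x_aa = np.zeros(50)
for _ in range(100):
    x_gd -= step * (A @ x_gd - b)                    # vanilla GD
    x_aa = acc.apply(x_aa - step * (A @ x_aa - b))   # GD step + acceleration

print(np.linalg.norm(x_gd - x_star), np.linalg.norm(x_aa - x_star))
```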
We will also minimize a logistic regression objective; check logistic_regression_example.ipynb
for more detail. Similarly, AA is much more favorable than vanilla GD when optimizing this objective.
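Again as a sketch (the dataset and hyperparameters below are illustrative, not those from logistic_regression_example.ipynb), the same pattern applies with the gradient of an l2-regularized logistic loss:

```python
import numpy as np
from aa import AndersonAcceleration

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
y = rng.integers(0, 2, size=200).astype(float)  # binary labels in {0, 1}
lam = 1e-2                                      # l2 regularization strength

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted probabilities
    return X.T @ (p - y) / len(y) + lam * w     # gradient of the regularized loss

acc = AndersonAcceleration(window_size=5, reg=1e-10)
w_gd = np.zeros(20)
w_aa = np.zeros(20)
for _ in range(300):
    w_gd -= 0.5 * grad(w_gd)                    # vanilla GD
    w_aa = acc.apply(w_aa - 0.5 * grad(w_aa))   # GD step + acceleration
```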
TODO:
- Add a version for torch's model parameters (a possible approach is sketched below)
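One possible route for this TODO (purely a sketch, not part of the current code, assuming acc.apply accepts a flat 1-D numpy array as above) is to flatten the model parameters with torch's built-in utilities, accelerate the flat vector, and write it back:

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters
from aa import AndersonAcceleration

model = torch.nn.Linear(10, 1)
acc = AndersonAcceleration(window_size=5, reg=1e-10)

# After each optimizer step: flatten the parameters, accelerate the flat
# vector with the existing numpy implementation, and write it back.
with torch.no_grad():
    flat = parameters_to_vector(model.parameters())
    flat_acc = acc.apply(flat.detach().cpu().numpy())
    vector_to_parameters(torch.as_tensor(flat_acc, dtype=flat.dtype),
                         model.parameters())
```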