Code for "An adaptively weighted stochastic gradient MCMC algorithm for Monte Carlo simulation and global optimization (Statistics and Computing 2022)"
The adaptively weighted scheme can outperform the vanilla alternative by up to hundreds of times on the benchmark functions below (among others) and substantially outperforms existing baselines. (A sketch of several of these test functions is given after the table.)
Index | Function name | Dimension | Link |
---|---|---|---|
1 | Rastrigin | 20 | link |
2 | Griewank | 20 | link |
3 | Sum Squares | 20 | link |
4 | Rosenbrock | 20 | link |
5 | Zakharov | 20 | link |
6 | Powell | 24 | link |
7 | Dixon & Price | 25 | link |
8 | Levy | 30 | link |
9 | Sphere | 30 | link |
10 | Ackley | 30 | link |
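For orientation, below is a minimal NumPy sketch of a few of the benchmark functions in the table, written from their standard textbook definitions. The exact variants, search domains, and settings used in the paper and in this repo may differ, so treat these as illustrative only.

```python
import numpy as np

def rastrigin(x):
    # Standard Rastrigin: 10 d + sum_i (x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at x = 0.
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def griewank(x):
    # Standard Griewank: 1 + sum_i x_i^2 / 4000 - prod_i cos(x_i / sqrt(i)); global minimum 0 at x = 0.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def sum_squares(x):
    # Sum Squares: sum_i i * x_i^2; global minimum 0 at x = 0.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(i * x ** 2)

def rosenbrock(x):
    # Rosenbrock: sum_i [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]; global minimum 0 at x = (1, ..., 1).
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

if __name__ == "__main__":
    # Evaluate at the known minimizers as a quick sanity check.
    print(rastrigin(np.zeros(20)), griewank(np.zeros(20)),
          sum_squares(np.zeros(20)), rosenbrock(np.ones(20)))
```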
Although MNIST has been studied extensively, standard stochastic gradient MCMC algorithms with a fixed learning rate cannot explore the loss landscape freely (i.e., they do not produce the fluctuating losses needed for exploration). The adaptively weighted scheme implemented in this code resolves this issue.
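To make the baseline concrete, here is a minimal sketch of a vanilla fixed-learning-rate SGLD update on an MNIST-shaped toy model, i.e., the kind of sampler whose exploration limitation is described above. The model, helper name `sgld_step`, and hyperparameters are illustrative assumptions, not the API or settings of this repo; the adaptive weighting proposed in the paper is not shown here.

```python
# Minimal sketch of vanilla SGLD with a fixed learning rate (baseline only).
import math
import torch
import torch.nn as nn

def sgld_step(model, minibatch_loss, lr, temperature=1.0, data_size=60000):
    """One SGLD update: theta <- theta - lr * grad(N * loss) + sqrt(2 * lr * T) * N(0, I)."""
    model.zero_grad()
    (data_size * minibatch_loss).backward()   # rescale minibatch loss to the full data set
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p) * math.sqrt(2.0 * lr * temperature)
            p.add_(-lr * p.grad + noise)       # fixed learning rate throughout

# Toy usage with random MNIST-shaped inputs (28x28 images, 10 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
criterion = nn.CrossEntropyLoss()
x = torch.randn(128, 1, 28, 28)
y = torch.randint(0, 10, (128,))
for _ in range(5):
    loss = criterion(model(x), y)
    sgld_step(model, loss, lr=1e-6)
    print(float(loss))
```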