Hand-coded step-size optimizers (SGD with momentum and Adam) for deep learning neural networks, implemented by extending the update logic of plain stochastic gradient descent.
The project contains a ComputationalGraphPrimer file that provides all the classes required for training a model with plain stochastic gradient descent (SGD). The two other files, one for the single-neuron classifier and one for the multi-neuron classifier, contain classes whose training functions are overridden to implement SGD with momentum and Adam.
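As a rough illustration of how the plain SGD update is extended, the sketch below shows the three update rules side by side. It is a minimal NumPy example, not the project's actual API: the function names, the parameter/gradient dictionaries, and the hyperparameter names (lr, mu, beta1, beta2, eps) are all illustrative and assumed here, not taken from ComputationalGraphPrimer.

```python
import numpy as np

def sgd_step(params, grads, lr=1e-3):
    # Plain SGD: move each parameter against its gradient.
    for k in params:
        params[k] -= lr * grads[k]

def sgd_momentum_step(params, grads, velocity, lr=1e-3, mu=0.9):
    # SGD with momentum: accumulate a running velocity of past gradients
    # and step along it, which damps oscillation between iterations.
    for k in params:
        velocity[k] = mu * velocity[k] + lr * grads[k]
        params[k] -= velocity[k]

def adam_step(params, grads, moment1, moment2, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: keep exponential moving averages of the gradient (first moment)
    # and the squared gradient (second moment), correct their bias for early
    # iterations (t starts at 1), and scale the step per parameter.
    for k in params:
        moment1[k] = beta1 * moment1[k] + (1 - beta1) * grads[k]
        moment2[k] = beta2 * moment2[k] + (1 - beta2) * grads[k] ** 2
        m_hat = moment1[k] / (1 - beta1 ** t)
        v_hat = moment2[k] / (1 - beta2 ** t)
        params[k] -= lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy usage with made-up parameter and gradient values:
params = {"w": np.array([0.5, -0.3]), "b": np.array([0.1])}
grads  = {"w": np.array([0.2,  0.1]), "b": np.array([0.05])}
m1 = {k: np.zeros_like(v) for k, v in params.items()}
m2 = {k: np.zeros_like(v) for k, v in params.items()}
adam_step(params, grads, m1, m2, t=1)
```

In the actual code, the equivalent of these update rules lives in the overridden training functions of the single-neuron and multi-neuron classifier classes, which otherwise reuse the forward/backward machinery from ComputationalGraphPrimer.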