Classify images of handwritten digits with a LeNet Convolutional Neural Network and a Deep Neural Network
This repository contains 2 Python files that both:
- Import images of handwritten digits from MNIST
- Train a neural network using Keras to classify the images
The difference is that Convolutional_Neural_Network.ipynb trains the model with a convolutional neural network, while mnist_deep_learning.py uses a fully connected deep neural network to achieve the same goal, at the cost of lower accuracy on test data.
The first step was to import 60,000 labelled images of handwritten digits from the MNIST dataset.
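The import step can be sketched as follows with the standard Keras dataset loader (a minimal sketch; the notebook may split or preprocess the data differently):

```python
# Load the MNIST dataset via the tf.keras datasets API.
from tensorflow.keras.datasets import mnist

# 60,000 28x28 grayscale training images and 10,000 test images,
# each labelled with the digit (0-9) it depicts.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)  # (60000, 28, 28)
```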
Figure 1: Subset of Training Data
Figure 2: Dataset Distribution
Figure 3: LeNet Model Summary
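A LeNet-style architecture like the one summarised in Figure 3 can be built in Keras roughly as below. The filter counts and dense-layer size here are assumptions for illustration, not necessarily the exact values used in the notebook:

```python
# Minimal LeNet-style CNN sketch for 28x28 grayscale digit images.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def lenet_model():
    model = Sequential([
        # Convolution + pooling stages extract spatial features.
        Conv2D(30, (5, 5), activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(15, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        # Flatten feature maps and classify with dense layers.
        Flatten(),
        Dense(500, activation='relu'),
        Dense(10, activation='softmax'),  # one output per digit class
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```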
The model achieved a 98.75% training accuracy and a 99.12% validation accuracy.
Figure 4: Accuracy and Loss Plots of Training and Validation Data
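Plots like those in Figure 4 can be produced from the `History` object returned by `model.fit`. The sketch below mocks the history with placeholder values (the final entries match the accuracies reported above) so it runs standalone:

```python
# Sketch: plotting training/validation accuracy and loss curves.
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

# Placeholder history; in the notebook this would be history.history
# from model.fit(..., validation_split=...).
history = {
    'accuracy':     [0.95, 0.98, 0.9875],
    'val_accuracy': [0.96, 0.985, 0.9912],
    'loss':         [0.15, 0.06, 0.04],
    'val_loss':     [0.12, 0.05, 0.03],
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history['accuracy'], label='training')
ax1.plot(history['val_accuracy'], label='validation')
ax1.set_title('Accuracy')
ax1.set_xlabel('epoch')
ax1.legend()
ax2.plot(history['loss'], label='training')
ax2.plot(history['val_loss'], label='validation')
ax2.set_title('Loss')
ax2.set_xlabel('epoch')
ax2.legend()
fig.savefig('training_curves.png')
```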
We then tested the model on numerous unseen test images found online, all of which were classified correctly. Below is an example of a handwritten image of the number 2, which was successfully classified by the model.
Figure 5: Example of Unseen Test Image
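Before an unseen image like Figure 5 can be fed to the model, it must be scaled and reshaped to match the network's input. A minimal sketch of that step, assuming the image has already been converted to a 28x28 grayscale array:

```python
# Sketch: prepare a 28x28 grayscale array for prediction.
import numpy as np

def preprocess(img):
    """Scale pixel values to [0, 1] and add batch and channel dimensions."""
    img = img.astype('float32') / 255.0
    return img.reshape(1, 28, 28, 1)

# The prepared array can then be passed to model.predict(...)
batch = preprocess(np.zeros((28, 28), dtype=np.uint8))
print(batch.shape)  # (1, 28, 28, 1)
```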
You can run Convolutional_Neural_Network.ipynb on Google Colab. To improve the runtime significantly, select Runtime > Change runtime type > Hardware accelerator > GPU.