Author: Ivan Bongiorni, Data Scientist at GfK; LinkedIn.
This is a collection of my notebooks on TensorFlow 2.0.
Model training is based on TensorFlow's eager execution method. I'll try to minimize references to Keras.
- Basic feed forward stuff
- Autoencoders
- Convolutional Neural Networks
- Recurrent Neural Networks
- Applications to NLP
Basic feed forward stuff:
- Basic classifier: implementation of a feed forward classifier trained with simple, full-batch Gradient Descent in eager execution.
- Mini-batch gradient descent: training a model with Mini-Batch Gradient Descent.
- Save and restore models: how to train a model, save it, then restore it and keep training.
- Train a Neural Network with frozen layers.
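A minimal sketch of the eager-execution training pattern these notebooks revolve around: a feed forward classifier built from raw `tf.Variable`s (no Keras model) and trained with full-batch Gradient Descent under `tf.GradientTape`. The toy dataset, layer sizes, and learning rate are illustrative, not the notebooks' actual ones:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy dataset: 100 samples, 4 features; the label is the index of the
# largest of the first 3 features, so the task is actually learnable.
X_np = np.random.randn(100, 4).astype(np.float32)
y_np = np.argmax(X_np[:, :3], axis=1)
X = tf.constant(X_np)
y = tf.constant(y_np)

# A small feed forward classifier defined from raw Variables
W1 = tf.Variable(tf.random.normal([4, 16], stddev=0.1))
b1 = tf.Variable(tf.zeros([16]))
W2 = tf.Variable(tf.random.normal([16, 3], stddev=0.1))
b2 = tf.Variable(tf.zeros([3]))
params = [W1, b1, W2, b2]

def forward(x):
    h = tf.nn.relu(x @ W1 + b1)
    return h @ W2 + b2  # class logits

learning_rate = 0.1
losses = []
for epoch in range(100):
    with tf.GradientTape() as tape:
        logits = forward(X)
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=y, logits=logits))
    grads = tape.gradient(loss, params)
    for p, g in zip(params, grads):
        p.assign_sub(learning_rate * g)  # one full-batch gradient step
    losses.append(float(loss))
```

Mini-batch gradient descent replaces the single full-batch update with updates over shuffled slices of `(X, y)`, and freezing layers amounts to leaving the frozen Variables out of the `params` list passed to `tape.gradient`.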
Autoencoders:
- Autoencoder for dimensionality reduction: implementation of a stacked Autoencoder for dimensionality reduction of datasets.
- Denoising Autoencoder (see CNN section below).
- Recurrent Autoencoder (see RNN section below).
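A sketch of the stacked-autoencoder idea for dimensionality reduction, assuming a 20-dimensional toy dataset and a 2-dimensional bottleneck (both chosen for illustration); Keras `Dense` layers are used for brevity, but the training loop itself is a plain eager `GradientTape` loop:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy data: 200 samples that actually live near a 2-D subspace of R^20
X = tf.constant(np.random.randn(200, 2) @ np.random.randn(2, 20),
                dtype=tf.float32)

# Stacked (multi-layer) autoencoder: 20 -> 8 -> 2 -> 8 -> 20
enc1 = tf.keras.layers.Dense(8, activation='relu')
enc2 = tf.keras.layers.Dense(2)   # 2-D bottleneck = reduced representation
dec1 = tf.keras.layers.Dense(8, activation='relu')
dec2 = tf.keras.layers.Dense(20)

def encode(x): return enc2(enc1(x))
def decode(z): return dec2(dec1(z))

opt = tf.keras.optimizers.Adam(1e-2)
losses = []
for epoch in range(100):
    with tf.GradientTape() as tape:
        recon = decode(encode(X))
        loss = tf.reduce_mean(tf.square(X - recon))  # reconstruction MSE
    variables = (enc1.trainable_variables + enc2.trainable_variables
                 + dec1.trainable_variables + dec2.trainable_variables)
    grads = tape.gradient(loss, variables)
    opt.apply_gradients(zip(grads, variables))
    losses.append(float(loss))

Z = encode(X)  # the dimensionality-reduced dataset, shape (200, 2)
```

After training, `Z` plays the role a PCA projection would, but obtained through a nonlinear encoder.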
Convolutional Neural Networks:
- Basic CNN classifier: a basic Convolutional Neural Network for multiclass classification.
- Advanced CNN classifier with custom data augmentation.
- Mixed-CNN classifier.
- Denoising Autoencoder.
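A minimal sketch of a basic CNN classifier's forward pass, on made-up 16x16 greyscale images with 4 classes (all sizes are illustrative); training would reuse the same `GradientTape` loop shown for the feed forward models:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy batch: 64 greyscale 16x16 images, 4 classes (illustrative only)
X = tf.constant(np.random.rand(64, 16, 16, 1), dtype=tf.float32)
y = tf.constant(np.random.randint(0, 4, size=64))

conv1 = tf.keras.layers.Conv2D(8, 3, activation='relu', padding='same')
pool = tf.keras.layers.MaxPool2D()
conv2 = tf.keras.layers.Conv2D(16, 3, activation='relu', padding='same')
flat = tf.keras.layers.Flatten()
head = tf.keras.layers.Dense(4)   # one logit per class

def forward(x):
    # 16x16 -> pool -> 8x8 -> pool -> 4x4, then flatten to logits
    return head(flat(pool(conv2(pool(conv1(x))))))

logits = forward(X)         # shape (64, 4)
probs = tf.nn.softmax(logits)
```

A denoising autoencoder reuses a convolutional encoder like this one but trains the network to reconstruct clean images from inputs corrupted with noise, rather than to predict class labels.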
Recurrent Neural Networks:
- Seq2seq models.
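A sketch of the seq2seq / recurrent-autoencoder shape: an LSTM encoder compresses a sequence into its final state, and an LSTM decoder unrolls from that state to reproduce a sequence. Keras recurrent layers are used for brevity, and the batch, length, and feature sizes are made up:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
np.random.seed(0)

# Toy sequences: 32 sequences, 10 timesteps, 5 features (illustrative)
X = tf.constant(np.random.randn(32, 10, 5), dtype=tf.float32)

# Encoder LSTM compresses each sequence into its final (h, c) state
encoder = tf.keras.layers.LSTM(16, return_state=True)
_, state_h, state_c = encoder(X)

# Decoder LSTM unrolls from the encoder state, one hidden vector per step
decoder = tf.keras.layers.LSTM(16, return_sequences=True)
proj = tf.keras.layers.Dense(5)  # map hidden states back to feature space

# Feed the bottleneck vector at every timestep, a common recurrent-autoencoder setup
repeated = tf.keras.layers.RepeatVector(10)(state_h)
recon = proj(decoder(repeated, initial_state=[state_h, state_c]))
```

Training this pair on a reconstruction loss (e.g. MSE between `X` and `recon`) gives a recurrent autoencoder; swapping the target sequence for a different one gives a seq2seq model.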
RNN + Natural Language Processing:
- LSTM Text generator from this repository of mine.
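The text-generation setup can be sketched as next-character prediction with an LSTM; this toy example (not the linked repository's actual model) shows the data layout and forward pass on a tiny made-up corpus:

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)

text = "hello world, hello tensorflow. "
chars = sorted(set(text))
char2idx = {c: i for i, c in enumerate(chars)}
ids = np.array([char2idx[c] for c in text])

# Training pairs: each window of 8 characters predicts the next character
seq_len = 8
X = np.stack([ids[i:i + seq_len] for i in range(len(ids) - seq_len)])
y = ids[seq_len:]

embed = tf.keras.layers.Embedding(len(chars), 8)
lstm = tf.keras.layers.LSTM(32)
head = tf.keras.layers.Dense(len(chars))  # logits over the character vocabulary

def forward(batch):
    return head(lstm(embed(batch)))

logits = forward(tf.constant(X))  # one logit vector per training window
```

Trained with sparse cross-entropy against `y`, the model generates text by repeatedly sampling the next character and sliding the window forward.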