Here I have three major types of NLP projects that I worked on: embeddings, LSTM/GRU models, and text generation/prediction. Note that I learned these methods as I went, so better approaches with fewer potential errors very likely exist. However, I believe these models are relatively robust for the problems tackled in each file. The files below are listed from the least complex to the most complex models/problems:
- NLP_Embeddings
- LSTM_GRU_models
- NLP_Predictions
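
For orientation, here is a minimal sketch of the kind of model these notebooks build: an Embedding layer feeding a recurrent layer for next-token prediction. This is a generic Keras example with placeholder hyperparameters (`vocab_size`, `embedding_dim`, `lstm_units`, `max_len`), not the exact code from any of the files above.

```python
# Minimal, illustrative Embedding -> LSTM sketch (placeholder hyperparameters).
import tensorflow as tf

vocab_size = 10000    # number of tokens in the vocabulary (placeholder)
embedding_dim = 64    # size of each token's embedding vector
lstm_units = 128      # number of units in the recurrent layer
max_len = 40          # length of each input sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.LSTM(lstm_units),   # swap in tf.keras.layers.GRU to compare the two cells
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # predict the next token
])

model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam",
              metrics=["accuracy"])
model.summary()
```

Changing values like `embedding_dim` or `lstm_units` in a sketch like this is the quickest way to see how hyperparameters shift the loss and accuracy curves.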
Feel free to adjust the hyperparameters to see how they affect the loss/accuracy! As always, thanks for taking an interest in my learning process!