Neural Computing and Deep Learning Course - Fall 2022
This repository contains 8 projects developed for the Deep Learning and Neural Computing course, Fall 2022, at AUT University. Each project focuses on a different neural network architecture and application, using a variety of datasets and techniques. The course is taught by Professor Safabakhsh, and the projects are designed to give students a comprehensive, practical understanding of the subject under his guidance.
Project 1: Binary Classification with Adaline and Perceptron
This project implements the Adaline and perceptron learning algorithms for binary classification. Both algorithms are tested on a synthetic dataset and evaluated on accuracy and convergence rate.
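As a rough illustration of the perceptron learning rule used in this project, here is a minimal NumPy sketch on a toy linearly separable dataset (the project's actual data and code may differ; Adaline is similar but applies the LMS update to the continuous output before thresholding):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """Perceptron with labels in {-1, +1}; returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only when the sample is misclassified
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy data: the class is the sign of the first feature
X = np.array([[2.0, 1.0], [1.5, -0.5], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```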
Project 2: Smoke Detection with a Multilayer Perceptron
In this project, a multilayer perceptron (MLP) is trained on the Smoke Detection Dataset to classify whether smoke is present. The network's performance is evaluated with accuracy and other metrics.
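The core of an MLP is a hidden layer of nonlinear units trained by backpropagation. The following is a minimal NumPy sketch of a one-hidden-layer network on a trivially separable toy task (illustrative only; the project trains a larger network on the real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary task: the label equals the first feature
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [1.], [1.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
lr = 0.1

for _ in range(3000):
    h = sigmoid(X @ W1 + b1)       # forward pass: hidden activations
    p = sigmoid(h @ W2 + b2)       # output probabilities
    dp = p - y                     # gradient of BCE w.r.t. output logits
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * h * (1 - h) # backprop through the hidden layer
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = ((p > 0.5).astype(float) == y).mean()
```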
Project 3: Clustering the MNIST Dataset with MiniSom and Surface Reconstruction of a Rabbit Image with a SOM
This project uses self-organizing maps (SOMs) to cluster the MNIST dataset and to reconstruct the surface of a rabbit image. MiniSom is used to cluster the MNIST digits, while a SOM reconstructs the rabbit image from its low-dimensional representation.
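The SOM training loop boils down to two steps: find the best-matching unit (BMU) for each sample, then pull the BMU and its neighbors toward that sample. A minimal 1-D SOM sketch in NumPy (illustrative; the project uses the MiniSom library and a 2-D map):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=10, epochs=100, lr0=0.5, sigma0=2.0):
    """1-D SOM: a chain of units competing for input samples."""
    W = rng.random((n_units, data.shape[1]))
    idx = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best match
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2)) # neighborhood
            W += lr * h[:, None] * (x - W)
    return W

# Two tight clusters; trained units should settle near the data
data = np.vstack([rng.normal(0.0, 0.05, (20, 2)),
                  rng.normal(1.0, 0.05, (20, 2))])
W = train_som(data)
```

After training, the quantization error (mean distance from each sample to its nearest unit) should be small.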
Project 4: Image Classification with Convolutional Neural Networks: A Case Study on the Linnaeus 5 Dataset
In this project, a convolutional neural network (CNN) is trained on the Linnaeus 5 dataset to classify images into 5 categories. Feature extraction, transfer learning, and the Inception network are also explored to improve performance.
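The defining operation of a CNN is the 2-D convolution: sliding a small kernel over the image and summing elementwise products. A minimal NumPy sketch of a single valid-padding, stride-1 convolution (illustrative; the project itself trains full deep networks):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-padding, stride-1 2-D convolution (cross-correlation form)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # horizontal difference kernel
out = conv2d(image, edge)        # each entry is image[i,j] - image[i,j+1]
```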
Project 5: Time Series Analysis with Recurrent Neural Networks
This project explores recurrent neural networks (RNNs) and their application to time series analysis. The Tehran stock market dataset is used to predict future stock prices, and an autoencoder network is introduced as a dimensionality reduction technique.
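Two pieces of this pipeline can be sketched compactly: turning a price series into supervised (window, next value) pairs, and running a vanilla RNN cell over one window. Both are illustrative NumPy sketches, not the project's actual model:

```python
import numpy as np

def make_windows(series, window):
    """Each sample is `window` consecutive values; the target is the next one."""
    X = np.array([series[i:i+window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.arange(10, dtype=float)
X, y = make_windows(series, window=3)

# Vanilla RNN cell: h_t = tanh(W_x * x_t + h_{t-1} @ W_h + b)
rng = np.random.default_rng(0)
W_x = rng.normal(size=(1, 8))   # input-to-hidden weights
W_h = rng.normal(size=(8, 8))   # hidden-to-hidden weights
b = np.zeros(8)
h = np.zeros(8)
for x_t in X[0]:                # unroll over one window
    h = np.tanh(x_t * W_x[0] + h @ W_h + b)
# The final h summarizes the window and would feed a prediction head
```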
Project 6: Sentiment Analysis with LSTM, GRU, and CNN Models
This project focuses on sentiment analysis with different neural network architectures. LSTM, GRU, and CNN models are trained on the Large Movie Review Dataset to classify movie reviews as positive or negative.
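Before any of these models can see a review, the text must be mapped to fixed-length integer sequences. A minimal sketch of that preprocessing (vocabulary building, token-to-id encoding, padding); names like `<pad>` and `<unk>` are illustrative conventions, not necessarily the project's:

```python
def build_vocab(texts):
    """Assign an integer id to each token; 0 and 1 are reserved."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for text in texts:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab, max_len):
    """Map tokens to ids, truncate, and pad to a fixed length."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]
    ids = ids[:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))

reviews = ["a great movie", "a terrible movie"]
vocab = build_vocab(reviews)
batch = [encode(r, vocab, max_len=5) for r in reviews]
```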
Project 7: Handwritten Digit Generation with a Conditional GAN
In this project, a conditional generative adversarial network (CGAN) is trained on the MNIST dataset to generate handwritten digits. The generator is trained to produce realistic digits from random noise conditioned on a specified class label.
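What makes the GAN "conditional" is that the generator's input is noise concatenated with an encoding of the desired class. A minimal NumPy sketch of building that conditioned input (the generator and discriminator networks themselves are omitted):

```python
import numpy as np

def generator_input(batch_size, noise_dim, label, n_classes=10, rng=None):
    """Concatenate random noise with a one-hot class label per sample."""
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.normal(size=(batch_size, noise_dim))   # latent noise
    onehot = np.zeros((batch_size, n_classes))
    onehot[:, label] = 1.0                         # the class condition
    return np.concatenate([z, onehot], axis=1)

# Input for generating four samples of the digit 7
batch = generator_input(batch_size=4, noise_dim=100, label=7)
```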
Project 8: Sentiment Analysis with Transformers and BERT
This project introduces the transformer architecture and its related concepts. BERT, a transformer-based model, is applied to sentiment analysis on the Large Movie Review Dataset, and the Hugging Face Transformers library is explored to facilitate model training and evaluation.
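The operation at the heart of every transformer layer is scaled dot-product attention, softmax(QK^T / sqrt(d)) V. A minimal single-head NumPy sketch (illustrative only; in practice BERT's implementation comes from the Transformers library):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention; returns (output, attention weights)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # similarity of queries to keys
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights            # weighted average of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 tokens, dimension 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = attention(Q, K, V)
```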
Feel free to explore each project's folder for more details, including the code, datasets, and results.