Welcome to our repository for traffic prediction with neural network architectures: the Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). All models are trained and evaluated on historical traffic data collected from multiple junctions.
Our dataset, collected from [mention source or collection methodology], contains historical traffic data for several junctions over specific time frames. Each record includes a DateTime stamp, a Vehicle Count, and a Junction ID. Before training, we apply the following preprocessing steps:
- Feature Engineering: extracting features such as year, month, day, hour, and day of the week from the DateTime column.
- Normalization: scaling the numerical features to a common range so that no single feature dominates training.
- Data Segmentation: splitting the dataset into training and testing subsets for model evaluation (see the sketch after this list).
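Below is a minimal sketch of these preprocessing steps in Python. The CSV path and the column names (`DateTime`, `Junction`, `Vehicles`) are assumptions for illustration; adjust them to match the actual dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the raw data; the file name and column names are assumptions.
df = pd.read_csv("traffic.csv", parse_dates=["DateTime"])

# Feature engineering: derive calendar features from the timestamp.
df["year"] = df["DateTime"].dt.year
df["month"] = df["DateTime"].dt.month
df["day"] = df["DateTime"].dt.day
df["hour"] = df["DateTime"].dt.hour
df["day_of_week"] = df["DateTime"].dt.dayofweek

features = ["year", "month", "day", "hour", "day_of_week", "Junction"]
X = df[features].values
y = df["Vehicles"].values

# Data segmentation: hold out 20% for testing; shuffle=False keeps the
# split chronological, which is usually preferable for time series.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

# Normalization: fit the scaler on training data only to avoid leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```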
To get started:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/traffic-prediction.git
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the main script:

  ```bash
  python traffic_prediction.py
  ```
The MLP model is a feedforward neural network trained to map the engineered input features to traffic volume. Key features include:
- Dense layers with ReLU activation functions.
- Dropout layers for regularization, reducing overfitting.
- A Stochastic Gradient Descent (SGD) optimizer with a dynamic learning rate schedule (sketched after this list).
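A rough sketch of such an MLP is shown below, assuming a Keras/TensorFlow stack (the README does not fix a framework) and reusing `X_train`/`y_train` from the preprocessing sketch. The layer sizes, dropout rate, and decay parameters are illustrative, not the repository's actual hyperparameters.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A dynamic learning rate schedule: the rate decays by 10% every
# 1,000 optimizer steps (illustrative values).
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9
)

mlp = keras.Sequential([
    layers.Input(shape=(X_train.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.2),   # regularization against overfitting
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(1),       # single regression output: vehicle count
])
mlp.compile(optimizer=keras.optimizers.SGD(learning_rate=lr_schedule),
            loss="mse")
mlp.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
```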
The GRU is a gated variant of the recurrent neural network that captures temporal dependencies in sequential data. Our GRU model includes:
- Stacked GRU layers for uncovering temporal patterns in the traffic data.
- Dropout layers between recurrent layers to improve generalization.
- An SGD optimizer with a dynamic learning rate schedule (sketched after this list).
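Recurrent layers consume sequences rather than flat feature vectors, so the sketch below first windows the scaled training data into fixed-length sequences. The window length of 24 (one day of hourly records), the layer sizes, and the reuse of `X_train`, `y_train`, and `lr_schedule` from the earlier sketches are all illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Window the features into overlapping sequences of `window` consecutive
# timesteps; each window of past observations predicts the next count.
window = 24  # illustrative: one day of hourly records
X_seq = np.stack([X_train[i:i + window] for i in range(len(X_train) - window)])
y_seq = y_train[window:]

gru = keras.Sequential([
    layers.Input(shape=(window, X_train.shape[1])),
    layers.GRU(64, return_sequences=True),  # pass the full sequence onward
    layers.Dropout(0.2),
    layers.GRU(32),                         # summarize the sequence
    layers.Dropout(0.2),
    layers.Dense(1),
])
gru.compile(optimizer=keras.optimizers.SGD(learning_rate=lr_schedule),
            loss="mse")
gru.fit(X_seq, y_seq, epochs=50, batch_size=32, validation_split=0.1)
```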
LSTM networks are well suited to capturing long-term dependencies in sequential data. Our LSTM model includes:
- LSTM layers whose memory cells retain and update information over time.
- Dropout layers to mitigate overfitting.
- An SGD optimizer with a dynamic learning rate schedule for stable convergence (sketched after this list).
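The LSTM sketch mirrors the GRU one with LSTM cells swapped in, reusing the assumed `window`, `X_seq`, `y_seq`, and `lr_schedule` from the sketches above.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Same architecture as the GRU sketch, with LSTM cells swapped in.
lstm = keras.Sequential([
    layers.Input(shape=(window, X_train.shape[1])),
    layers.LSTM(64, return_sequences=True),  # memory cells carry state over time
    layers.Dropout(0.2),
    layers.LSTM(32),
    layers.Dropout(0.2),
    layers.Dense(1),
])
lstm.compile(optimizer=keras.optimizers.SGD(learning_rate=lr_schedule),
             loss="mse")
lstm.fit(X_seq, y_seq, epochs=50, batch_size=32, validation_split=0.1)
```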
We evaluate model performance with standard regression metrics, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), which quantify the accuracy and robustness of each model. We also generate comparative plots of predicted traffic volumes against actual values for qualitative assessment.
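A minimal evaluation sketch, scoring the hypothetical `mlp` from the earlier sketch on the held-out `X_test`/`y_test`:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Score the MLP on the held-out test set (the recurrent models would be
# scored the same way on windowed test sequences).
preds = mlp.predict(X_test).ravel()
rmse = np.sqrt(mean_squared_error(y_test, preds))
mae = mean_absolute_error(y_test, preds)
print(f"RMSE: {rmse:.2f}  MAE: {mae:.2f}")

# Comparative plot: predicted vs. actual vehicle counts over the test period.
plt.plot(y_test, label="Actual")
plt.plot(preds, label="Predicted")
plt.xlabel("Test sample")
plt.ylabel("Vehicle count")
plt.legend()
plt.show()
```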