The prerequisites for starting the 100 Days of Machine Learning challenge are the following:
- Basic mathematical concepts: Calculus and Probability are fundamental concepts used in ML.
- Programming skills: Familiarity with a programming language such as Python is required, as it is commonly used in ML.
- Basic knowledge of statistics: An understanding of statistical concepts such as mean, median, standard deviation, and probability distributions is important.
- Familiarity with data structures and algorithms: Basic knowledge of data structures such as arrays, lists, and matrices, as well as algorithms such as sorting and searching, is helpful.
- Basic knowledge of data analysis: An understanding of data analysis concepts such as data cleaning, preprocessing, and data visualization will help in understanding the data and making the best use of it.
Learning Agenda of Day 1:
- What is ML? How does it work?
- Examples and Real World Applications
- Nature of ML Problems
- Traditional Computer Science vs Machine Learning
- Machine Learning Flow
- Machine Learning Advantages vs Disadvantages
Day 2: Supervised Learning Setup
Learning Agenda of Day 2:
- What is Supervised Learning?
- Algorithm vs. Model & Rules vs. Learning
- Formalizing the setup or Formulation: Regression vs Classification
- Feature Space vs Label Space
Day 3: Hypothesis Space
Learning Agenda of Day 3:
- What is a Hypothesis Space?
- How to choose a Hypothesis Space?
- How to evaluate a Hypothesis Space? (i.e., how do we evaluate performance?)
Day 4: Hypothesis Space Cont.
Learning Agenda of Day 4:
- Loss Functions
- 0/1 Loss
- Squared Loss
- Absolute Loss
- Root Mean Squared Loss
- How NOT to reduce the Loss?
- Concept of Generalization: Generalization Loss
- Training, Validation and Testing Sets
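For reference, here is a minimal NumPy sketch of the loss functions listed above (the function and argument names are illustrative, with `y_true` holding the true targets and `y_pred` the model's predictions):

```python
import numpy as np

def zero_one_loss(y_true, y_pred):
    # 0/1 loss: fraction of misclassified examples
    return np.mean(y_true != y_pred)

def squared_loss(y_true, y_pred):
    # squared loss: average of (y - y_hat)^2
    return np.mean((y_true - y_pred) ** 2)

def absolute_loss(y_true, y_pred):
    # absolute loss: average of |y - y_hat|
    return np.mean(np.abs(y_true - y_pred))

def root_mean_squared_loss(y_true, y_pred):
    # root mean squared loss: square root of the squared loss
    return np.sqrt(squared_loss(y_true, y_pred))
```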
Day 5: Nearest Neighbors Methods
Learning Agenda of Day 5:
- KNN Algorithm
- Basic Idea
- Formal Definition
- KNN Decision Boundary
- A supervised, non-parametric algorithm
- Used for classification and regression
- An instance-based learning algorithm
- A lazy learning algorithm
- Characteristics of KNN
- Practical Issues
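To make the basic idea concrete, here is a minimal sketch of KNN classification in plain NumPy; the function and variable names are illustrative, not from any library:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by a majority vote among its k nearest training points."""
    # Euclidean distance from the query point to every training point
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # indices of the k closest training points
    nearest = np.argsort(distances)[:k]
    # majority vote over their labels (classification); for regression,
    # one would average y_train[nearest] instead
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy data: two well-separated classes
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.5, 0.5])))  # -> 0
```

Note that KNN is "lazy": there is no training step at all; all the work happens at query time.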
Learning Agenda of Day 6:
- Similarity/Distance Metrics
- Constraints/Properties on Distance Metrics
- Euclidean Distance
- Manhattan Distance
- Minkowski Distance
- Chebyshev Distance
- Norm of a Vector and Its Properties
- Cosine Distance
- Practical Issues in Computing Distance
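A minimal NumPy sketch of the metrics above, for two feature vectors `a` and `b`:

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))          # L2 norm of the difference

def manhattan(a, b):
    return np.sum(np.abs(a - b))                  # L1 norm

def minkowski(a, b, p):
    return np.sum(np.abs(a - b) ** p) ** (1 / p)  # generalizes L1 (p=1) and L2 (p=2)

def chebyshev(a, b):
    return np.max(np.abs(a - b))                  # L-infinity norm: the limit as p -> infinity

def cosine_distance(a, b):
    # 1 - cosine similarity; compares direction rather than magnitude
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```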
Learning Agenda of Day 7:
- KNN Algorithm Formulation: Regression vs Classification
- Complexity of KNN
- Choosing the value of K - The Theory
- Tuning the hyperparameter K - The Method
- KNN - The good, the bad, the ugly
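One standard method for tuning K is cross-validation. Below is a sketch using scikit-learn; the Iris dataset and the candidate range of K are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Score each candidate K with 5-fold cross-validation and keep the best one
scores = {}
for k in range(1, 16):
    model = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(model, X, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```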
Learning Agenda of Day 8:
- Algorithm Convergence
- Error Convergence
- Learning Problem
- Bayes Optimal Classifier
- 1-NN Error as $n \to \infty$
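For reference, the classical Cover and Hart (1967) result for binary classification: as $n \to \infty$, the 1-NN error rate $\epsilon_{1\text{-NN}}$ is sandwiched by the Bayes optimal error $\epsilon^{*}$,

$$\epsilon^{*} \;\le\; \epsilon_{1\text{-NN}} \;\le\; 2\,\epsilon^{*}(1 - \epsilon^{*}) \;\le\; 2\,\epsilon^{*},$$

i.e., with enough data, 1-NN is at most twice as bad as the best possible classifier.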
Learning Agenda of Day 80:
- Overview of deep learning and its applications
- Artificial neural networks and their structure
- Perceptrons and the concept of linear separation
- Activation functions and their role in neural networks
- Gradient descent and backpropagation algorithms
- Types of deep learning architectures: feedforward, convolutional, and recurrent
- The concept of overfitting and regularization techniques
- Introduction to popular deep learning frameworks such as TensorFlow and PyTorch
- Terminology and key concepts such as layers, weights, biases, loss function, optimizer, etc.
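To tie a few of these terms together, here is a minimal NumPy sketch of common activation functions and a single artificial neuron; all values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))   # squashes any input into (0, 1)

def relu(z):
    return np.maximum(0, z)       # zero for negative inputs, identity otherwise

def tanh(z):
    return np.tanh(z)             # squashes any input into (-1, 1)

# One artificial neuron: a weighted sum of the inputs plus a bias,
# passed through an activation function
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # weights (learned during training)
b = 0.5                          # bias (also learned)
print(sigmoid(np.dot(w, x) + b))
```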
Learning Agenda of Day 81:
- Supervised learning and the concept of labeled data
- Understanding datasets and data preprocessing techniques
- Examining the number of examples and features, and checking for any missing values or outliers.
- Normalization, Feature scaling, and Handling missing values
- Building a simple feedforward neural network using a popular deep learning framework such as TensorFlow or PyTorch
- Understanding the basic components of a neural network such as layers, weights, biases, and activation functions.
- The process of creating and defining the architecture of the model, loading and preparing the data, and training the model using popular libraries and frameworks such as TensorFlow and PyTorch.
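As a starting point, here is one minimal sketch of a feedforward network in PyTorch; the layer sizes (784 inputs, 128 hidden units, 10 classes) match a flattened 28x28 image such as an MNIST digit but are otherwise illustrative:

```python
import torch
import torch.nn as nn

# input layer -> hidden layer -> output layer
model = nn.Sequential(
    nn.Linear(784, 128),  # weights and biases of the hidden layer
    nn.ReLU(),            # activation function
    nn.Linear(128, 10),   # output layer: one logit per class
)

x = torch.randn(32, 784)  # a dummy batch of 32 flattened images
logits = model(x)
print(logits.shape)       # torch.Size([32, 10])
```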
Learning Agenda of Day 82:
- Understanding the basic components of a neural network:
- A neural network is made up of layers of interconnected nodes or artificial neurons.
- Each layer contains a set of weights and biases that are updated during training, and an activation function that determines the output of each neuron.
- Understanding how these components work together is important for building and training a neural network.
- Training a neural network using a supervised learning algorithm such as stochastic gradient descent:
- The process of training a neural network involves adjusting the weights and biases of the network to minimize the error between the predicted output and the true output.
- Stochastic gradient descent (SGD) is a popular optimization algorithm used to update the weights and biases during training.
- Evaluating the performance of a neural network using metrics such as accuracy, precision, and recall:
- Once the model is trained, it's important to evaluate its performance on new, unseen data.
- Common metrics used to evaluate the performance of a neural network include accuracy, precision, and recall.
- These metrics provide insight into how well the model is able to make predictions and identify patterns in the data.
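Putting the three items above together, here is a hedged sketch of a PyTorch training loop with SGD followed by evaluation with scikit-learn metrics; it assumes a `model` and tensors `X_train`, `y_train`, `X_test`, `y_test` already exist (float features, integer class labels), and trains on the full batch for brevity where mini-batches would be more typical:

```python
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, precision_score, recall_score

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    optimizer.zero_grad()                      # reset gradients from the previous step
    loss = criterion(model(X_train), y_train)  # error between predictions and true labels
    loss.backward()                            # backpropagation computes the gradients
    optimizer.step()                           # SGD updates the weights and biases

with torch.no_grad():
    y_pred = model(X_test).argmax(dim=1).numpy()

print(accuracy_score(y_test.numpy(), y_pred))
print(precision_score(y_test.numpy(), y_pred, average="macro"))
print(recall_score(y_test.numpy(), y_pred, average="macro"))
```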
Learning Agenda of Day 83:
- Practice building simple feedforward neural networks for image classification tasks using popular datasets such as MNIST and CIFAR-10:
- These datasets are widely used in the machine learning community and are a great way to practice building and training neural networks for image classification tasks.
- Practice building simple feedforward neural networks for text classification tasks using popular datasets such as IMDB and 20 Newsgroups:
- These datasets are also widely used in the machine learning community and are a great way to practice building and training neural networks for text classification tasks.
- Introduction to hyperparameter tuning:
- Hyperparameter tuning is the process of adjusting the configuration of a model in order to improve its performance.
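As a first step for the practice items above, here is a sketch of loading and normalizing MNIST with torchvision; the normalization constants are the commonly quoted MNIST mean and standard deviation:

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),                       # pixel values scaled to [0, 1]
    transforms.Normalize((0.1307,), (0.3081,)),  # then standardized channel-wise
])

train_set = datasets.MNIST(root="data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # torch.Size([64, 1, 28, 28]) torch.Size([64])
```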
Learning Agenda of Day 84:
- Understanding the architecture of CNNs:
- This includes learning about the different types of layers that make up a CNN, such as convolutional layers, pooling layers, and fully connected layers.
- Understanding the purpose and function of each layer is important for building and training a CNN.
- Convolutional layers and filters:
- Convolutional layers are a key component of CNNs; they are responsible for detecting patterns and features in images.
- The filters in convolutional layers are used to detect specific features in the image, such as edges or textures.
- Understanding how convolutional layers and filters work is important for building and training a CNN.
- Pooling layers:
- Pooling layers are used to reduce the spatial size of the feature maps produced by convolutional layers.
- They are used to reduce the dimensionality of the data and make the network more robust to small translations and distortions.
- Understanding the role and function of pooling layers is important for building and training a CNN.
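A minimal PyTorch sketch showing how these layer types fit together, sized for 32x32 RGB images such as CIFAR-10 (all widths are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned filters detect local features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling halves the spatial size: 32 -> 16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16 -> 8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # fully connected layer makes the prediction
)

x = torch.randn(1, 3, 32, 32)  # one dummy RGB image
print(model(x).shape)          # torch.Size([1, 10])
```

(The fully connected layer at the end is covered in more detail on Day 85.)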
Learning Agenda of Day 85:
- Fully connected layers:
- Fully connected layers are the final layers in a CNN and are used to classify images. They take the output of the previous layers and use it to make a prediction about the image.
- Understanding the role of fully connected layers is important for building and training a CNN.
- Training a CNN:
- Training a CNN involves adjusting the weights and biases of the network to minimize the error between the predicted output and the true output.
- The process of training a CNN involves hyperparameters such as the learning rate, batch size, and number of epochs; it's important to understand how to adjust these hyperparameters to improve the performance of the model.
- Transfer learning in CNNs:
- Transfer learning is the process of using a pre-trained model as a starting point to train a new model for a different task. This can save a significant amount of time and computational resources.
- Learn about the use of pre-trained models in CNNs and how to fine-tune them for different tasks.
Learning Agenda of Day 86:
- Applications of CNNs:
- CNNs have a wide range of applications in image recognition and classification tasks.
- Learn about the different tasks that CNNs can be used for such as object detection, semantic segmentation, and facial recognition.
- Understanding the different applications of CNNs will help you understand the potential of this type of model and how to apply it to different problems.
- Practice building and training CNNs for image classification tasks using popular datasets such as CIFAR-10 and ImageNet:
- Practicing building and training CNNs for image classification tasks using popular datasets is important for gaining hands-on experience and understanding how to apply CNNs to real-world problems. It also provides a way to benchmark your progress and compare your results with other models.
Learning Agenda of Day 87:
- Understanding the architecture of RNNs:
- This includes learning about the different types of layers that make up an RNN, such as the input layer, recurrent layer, and output layer.
- Understanding the purpose and function of each layer is important for building and training an RNN.
- Recurrent layers and memory cells:
- Recurrent layers are a key component of RNNs; they are responsible for processing sequential data.
- Memory cells in recurrent layers are used to store information from previous time steps and use it to inform the current time step.
- Understanding how recurrent layers and memory cells work is important for building and training an RNN.
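A minimal PyTorch sketch of a single recurrent layer; the input size, hidden size, and sequence length are illustrative:

```python
import torch
import torch.nn as nn

# 10 features per time step; a hidden state (the network's "memory") of 20 units
rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(4, 15, 10)  # a batch of 4 sequences, 15 time steps each
output, h_n = rnn(x)
print(output.shape)  # torch.Size([4, 15, 20]) -- the hidden state at every time step
print(h_n.shape)     # torch.Size([1, 4, 20])  -- the final hidden state only
```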
Learning Agenda of Day 88:
- Training an RNN:
- Training an RNN involves adjusting the weights and biases of the network to minimize the error between the predicted output and the true output.
- The process of training an RNN involves hyperparameters such as the learning rate, batch size, and number of epochs; it's important to understand how to adjust these hyperparameters to improve the performance of the model.
- Applications of RNNs:
- RNNs have a wide range of applications in sequence data tasks.
- Learn about the different tasks that RNNs can be used for such as natural language processing, speech recognition, and time series forecasting.
- Understanding the different applications of RNNs will help you understand the potential of this type of model and how to apply it to different problems.
Learning Agenda of Day 89:
- Practice building and training RNNs for text generation, language translation, and speech recognition tasks using popular datasets such as IMDB, Wikipedia, and Common Voice:
- Practicing building and training RNNs for different tasks using popular datasets is important for gaining hands-on experience and understanding how to apply RNNs to real-world problems.
- It also provides a way to benchmark your progress and compare your results with other models.
- Introduction to variants of RNNs such as LSTM and GRU:
- RNNs have several variants, such as LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit), which are designed to better handle the vanishing gradient problem that occurs when training traditional RNNs.
- Understanding the differences between these variants and how they work will help you choose the right type of RNN for different tasks.
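In PyTorch, LSTM and GRU layers are near drop-in replacements for a plain RNN layer; a minimal sketch (sizes illustrative):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
gru = nn.GRU(input_size=10, hidden_size=20, batch_first=True)

x = torch.randn(4, 15, 10)
out_lstm, (h_n, c_n) = lstm(x)  # LSTM keeps both a hidden state and a cell state
out_gru, h_n_gru = gru(x)       # GRU keeps a single hidden state (fewer parameters)
print(out_lstm.shape, out_gru.shape)  # both torch.Size([4, 15, 20])
```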
Learning Agenda of Day 90:
- Understanding Generative models: Learn about the different types of generative models and their applications, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
- GAN architecture:
- Learn about the architecture of GANs, including the generator and discriminator networks, and how they are used to generate new data.
- Training GANs:
- Learn about the process of training GANs, including the different hyperparameters that can be adjusted and how to choose appropriate values for them.
- Applications of GANs:
- Learn about the different tasks that GANs can be used for such as image synthesis, style transfer, and data augmentation.
- Practice building and training GANs for image generation tasks using popular datasets such as CIFAR-10 and ImageNet.
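To make the generator/discriminator pairing concrete, here is a minimal PyTorch sketch; the layer sizes and the flattened 28x28 image shape are illustrative, not a recommended architecture:

```python
import torch
import torch.nn as nn

latent_dim = 100  # size of the random noise vector fed to the generator

# Generator: maps noise to a flattened 28x28 image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: maps an image to the probability that it is real
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)          # a batch of noise vectors
fake_images = generator(z)
print(discriminator(fake_images).shape)  # torch.Size([16, 1])
```

During training, the discriminator is pushed to tell real from fake while the generator is pushed to fool it; the two losses pull against each other.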
Learning Agenda of Day 91:
- Understanding Generative models:
- Learn about the different types of generative models and their applications, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
- VAE architecture:
- Learn about the architecture of VAEs and how they are used to generate new data.
- Training VAEs:
- Learn about the process of training VAEs, including the different hyperparameters that can be adjusted and how to choose appropriate values for them.
- Applications of VAEs:
- Learn about the different tasks that VAEs can be used for such as image synthesis, style transfer, and data augmentation.
- Practice building and training VAEs for image generation tasks using popular datasets such as CIFAR-10 and ImageNet.
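A minimal PyTorch sketch of a VAE, showing the encoder's mean and log-variance outputs and the reparameterization trick (all sizes illustrative):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: the encoder outputs a mean and log-variance; the decoder
    reconstructs from a latent sample drawn via the reparameterization trick."""

    def __init__(self, input_dim=784, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

x = torch.rand(8, 784)        # a dummy batch of flattened images
recon, mu, logvar = VAE()(x)
print(recon.shape)            # torch.Size([8, 784])
```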
Learning Agenda of Day 92:
- Understanding Transfer Learning:
- Learn about the concept of transfer learning, and how pre-trained models can be used as a starting point to train models for new tasks.
- Using pre-trained models:
- Learn about the different types of pre-trained models available and how to use them for different tasks such as image classification, object detection, and natural language processing.
- Fine-tuning pre-trained models:
- Learn about the process of fine-tuning pre-trained models for different tasks, including how to adjust the hyperparameters and how to choose appropriate values for them.
- Applications of Transfer Learning:
- Learn about the different tasks that transfer learning can be used for such as image classification, object detection, and natural language processing.
- Practice using pre-trained models for image classification tasks using popular datasets such as CIFAR-10 and ImageNet.
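A minimal sketch of the fine-tuning workflow using a torchvision ResNet-18 pre-trained on ImageNet (torchvision 0.13+ API; the 10-class head and learning rate are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 with weights pre-trained on ImageNet
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head, e.g. 10 classes for CIFAR-10
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001)
```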
Learning Agenda of Day 93:
- Understanding Autoencoders:
- Learn about the concept of Autoencoders and their architecture, including the encoder and decoder networks.
- Types of Autoencoders:
- Learn about the different types of Autoencoders, such as the vanilla Autoencoder, Denoising Autoencoder, and Variational Autoencoder.
- Training Autoencoders:
- Learn about the process of training Autoencoders, including the different hyperparameters that can be adjusted and how to choose appropriate values for them.
- Applications of Autoencoders:
- Learn about the different tasks that Autoencoders can be used for, such as dimensionality reduction, data denoising, and feature learning.
- Practice building and training Autoencoders for image generation tasks using popular datasets such as MNIST and CIFAR-10.
- Introduction to Autoencoder variants such as the Convolutional Autoencoder and Recurrent Autoencoder.
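A minimal PyTorch sketch of a vanilla Autoencoder trained to reconstruct its own input, sized for flattened 28x28 images such as MNIST (widths illustrative):

```python
import torch
import torch.nn as nn

# Encoder compresses 784 pixels to a 32-dimensional code; the decoder reconstructs
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())

x = torch.rand(16, 784)  # a dummy batch of flattened images
reconstruction = decoder(encoder(x))
loss = nn.functional.mse_loss(reconstruction, x)  # the training target is the input itself
print(loss.item())
```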
Learning Agenda of Day 94:
- Review:
- Review the key concepts and techniques covered throughout the course, and practice building deep learning models using the knowledge and skills you have acquired.
- Practice building your own deep learning models:
- Use the knowledge and skills you have acquired to build your own deep learning models for different tasks, such as image classification, text generation, and time series forecasting.
- Next steps:
- Learn about the different resources available for further learning and development, including tutorials, articles, and online courses.