Commit 2590595: renumbering chapters

dvgodoy committed Aug 21, 2023
1 parent 20602f7
Showing 38 changed files with 36,363 additions and 36,363 deletions.
16 changes: 8 additions & 8 deletions labs/Lab 1A.ipynb
@@ -48,7 +48,7 @@
"id": "7d922f0a"
},
"source": [
"## 1.7 Lab 1A: Non-Linear Regression\n",
"## 2.7 Lab 1A: Non-Linear Regression\n",
"\n",
"In this lab, you will use the same [Auto MPG Dataset](https://archive.ics.uci.edu/ml/datasets/auto+mpg), but we'll bring more features to the mix, as you will also learn how to encode discrete/categorical features so they can be used to train the model.\n",
"\n",
@@ -115,7 +115,7 @@
"id": "4c3ac779"
},
"source": [
"### 1.7.1 Train-Validation-Test Split\n",
"### 2.7.1 Train-Validation-Test Split\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step1.png)\n",
"\n",
@@ -147,7 +147,7 @@
"id": "dcabaa3f"
},
"source": [
"### 1.7.2 Cleaning Data\n",
"### 2.7.2 Cleaning Data\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step2.png)\n",
"\n",
@@ -172,7 +172,7 @@
"id": "69fad9fe"
},
"source": [
"### 1.7.3 Continuous Attributes\n",
"### 2.7.3 Continuous Attributes\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step3.png)\n",
"\n",
@@ -233,7 +233,7 @@
"id": "492d5d39"
},
"source": [
"### 1.7.4 Categorical Attributes\n",
"### 2.7.4 Categorical Attributes\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step3.png)\n",
"\n",
@@ -342,7 +342,7 @@
"id": "92deaced"
},
"source": [
"### 1.7.5 Target and Task\n",
"### 2.7.5 Target and Task\n",
"\n",
"Your features are already taken care of, so it's time to create column tensors for your target attribute. Make sure they are of the type `float32`."
]
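For instance, assuming the target column is `mpg`:

```python
import torch

# Column tensors: shape (N, 1) and dtype float32, as expected by MSE-style losses.
y_train = torch.as_tensor(train_df['mpg'].values).float().view(-1, 1)
y_val = torch.as_tensor(val_df['mpg'].values).float().view(-1, 1)
```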
@@ -368,7 +368,7 @@
"id": "10723837"
},
"source": [
"### 1.7.6 Custom Dataset\n",
"### 2.7.6 Custom Dataset\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step4.png)\n",
"\n",
@@ -485,7 +485,7 @@
"id": "89150a50"
},
"source": [
"### 1.7.7 Data Loaders\n",
"### 2.7.7 Data Loaders\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step5.png)\n",
"\n",
14 changes: 7 additions & 7 deletions labs/Lab 1B.ipynb
@@ -46,7 +46,7 @@
"id": "7b674d35",
"metadata": {},
"source": [
"## 2.12 Lab 1B: Non-Linear Regression\n",
"## 3.12 Lab 1B: Non-Linear Regression\n",
"\n",
"In this lab, we will keep using the same [Auto MPG Dataset](https://archive.ics.uci.edu/ml/datasets/auto+mpg), and we'll be building upon the previous lab (Lab 1A).\n",
"\n",
@@ -66,7 +66,7 @@
"\n",
"The following section offers a quick recap of the work done in the previous lab. You're welcome to use your own solution as starting point, but please keep in mind that you may need to do some adjustments in this case. We suggest you work on this lab using the suggested recap first and, only once you're finished try replacing the recap with your own code.\n",
"\n",
"### 2.12.1 Recap\n",
"### 3.12.1 Recap\n",
"\n",
"Let's recap what we did in the last lab to properly load and preprocess our dataset, so we can use it to train a non-linear regression in PyTorch. You may run all the cells in this section as they are.\n",
"\n",
@@ -276,7 +276,7 @@
"id": "1259b83e"
},
"source": [
"### 2.12.2 Embeddings: From Categorical to Continuous\n",
"### 3.12.2 Embeddings: From Categorical to Continuous\n",
"\n",
"Write code to create a list of embedding layers, each layer configured to handle one particular attribute, that is, one layer to embed `cyl` and another one to embed `origin`. You're free to choose the number of elements/dimensions that the resulting arrays will have."
]
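One possible sketch — one `nn.Embedding` per attribute, sized from the encoder fitted in the recap (three output dimensions is an arbitrary choice):

```python
import torch.nn as nn

# Assumption: encoder is an OrdinalEncoder fitted on ['cyl', 'origin'].
embedding_layers = [
    nn.Embedding(num_embeddings=len(cats), embedding_dim=3)
    for cats in encoder.categories_
]
```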
@@ -420,19 +420,19 @@
"id": "cfbf0b18"
},
"source": [
"### 2.12.3 Custom Model\n",
"### 3.12.3 Custom Model\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step1.png)\n",
"\n",
"Your next task is to build a custom model that can handle continuous and categorical features (via embeddings), and that is non-linear in nature. Before moving on, let's briefly discuss two topics: `ModuleList` and the importance of non-linearities.\n",
"\n",
"#### 2.12.3.1 `ModuleList`\n",
"#### 3.12.3.1 `ModuleList`\n",
"\n",
"`ModuleList` is a special type of list, one that allows PyTorch to recursively look for learnable parameters of layers and model inside its contents. As it turns out, if the class attribute of your custom model is a regular Python list, any layers or models inside it will be ignore by PyTorch during training. By explicitly making a `ModuleList` out of a regular Python list we ensure that its parameters are also accounted for.\n",
"\n",
"In our custom model, we have a list of embedding layers, one for each categorical attribute. Therefore, if we want our model to properly learn these embeeddings, we need to make it a `ModuleList`.\n",
"\n",
"#### 2.12.3.2 Methods\n",
"#### 3.12.3.2 Methods\n",
"\n",
"A custom model class must implement a couple of methods:\n",
"- `__init__(self)`\n",
@@ -538,7 +538,7 @@
"id": "ce91b059"
},
"source": [
"### 2.12.4 Training\n",
"### 3.12.4 Training\n",
"\n",
"Now it is time to write your own training loop. First, you need to instantiate your model.\n",
"\n",
8 changes: 4 additions & 4 deletions labs/Lab 2.ipynb
@@ -48,7 +48,7 @@
"id": "bd8d017e"
},
"source": [
"## 3.4 Lab 2: Price Prediction\n",
"## 4.4 Lab 2: Price Prediction\n",
"\n",
"In this lab, we'll keep using the [100,000 UK Used Car Dataset](https://www.kaggle.com/datasets/adityadesai13/used-car-dataset-ford-and-mercedes) from Kaggle. It contains scraped data of used car listings split into CSV files according to the manufacturer: Audi, BMW, Ford, Hyundai, Mercedes, Skoda, Toyota, Vauxhall, and VW. It also contains a few extra files of particular models (`cclass.csv`, `focus.csv`, `unclean_cclass.csv`, and `unclean_focus.csv`) that we won't be using.\n",
"\n",
@@ -81,7 +81,7 @@
"id": "6e5527f1",
"metadata": {},
"source": [
"### 3.4.1 Recap\n",
"### 4.4.1 Recap\n",
"\n",
"Let's recap what we did in Chapter 3 to load our data into a datapipe, so we can use it to train a new model in PyTorch. You may run all the cells in this section as they are."
]
@@ -240,7 +240,7 @@
"id": "d14b9812"
},
"source": [
"### 3.4.3 Custom Model\n",
"### 4.4.3 Custom Model\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step1.png)\n",
"\n",
@@ -355,7 +355,7 @@
"id": "d11d1d4f",
"metadata": {},
"source": [
"### 3.4.4 Training\n",
"### 4.4.4 Training\n",
"\n",
"Now it is time to write your own training loop once again. First, you need to instantiate your model.\n",
"\n",
14 changes: 7 additions & 7 deletions labs/Lab 3.ipynb
@@ -48,7 +48,7 @@
"id": "cfb26a63"
},
"source": [
"## 4.8 Lab 3: Classifying Images\n",
"## 5.8 Lab 3: Classifying Images\n",
"\n",
"Now it is YOUR turn to classify some images! First, you will need to choose and load a [model for image classification](https://pytorch.org/vision/stable/models.html#classification) and its corresponding [weights](https://pytorch.org/vision/stable/models.html#table-of-all-available-classification-weights).\n",
"\n",
@@ -64,7 +64,7 @@
"id": "3a26e4c6"
},
"source": [
"### 4.8.1 Load Weights\n",
"### 5.8.1 Load Weights\n",
"\n",
"Load the weights from the model of your choice into its own object:"
]
@@ -91,7 +91,7 @@
"id": "94318ce8"
},
"source": [
"### 4.8.2 Load Model\n",
"### 5.8.2 Load Model\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step1.png)\n",
"\n",
@@ -125,7 +125,7 @@
"id": "e0a1aa48"
},
"source": [
"### 4.8.3 Extract Metadata\n",
"### 5.8.3 Extract Metadata\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step3.png)\n",
"\n",
@@ -197,7 +197,7 @@
"id": "1912b6da"
},
"source": [
"### 4.8.4 Making Predictions\n",
"### 5.8.4 Making Predictions\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step5.png)\n",
"\n",
@@ -379,7 +379,7 @@
"id": "9c06de70"
},
"source": [
"#### 4.8.4.1 Probabilities\n",
"#### 5.8.4.1 Probabilities\n",
"\n",
"In many cases, it may be interesting to return the probabilities next to the predictions. Convert the logits produced by the model into probabilities:"
]
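A sketch (the `logits` tensor name is illustrative): softmax over the class dimension yields probabilities, and `topk` picks out the most likely classes:

```python
import torch

probabilities = torch.softmax(logits, dim=-1)        # each row now sums to 1
top_probs, top_idx = probabilities.topk(5, dim=-1)   # five most likely classes
```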
@@ -464,7 +464,7 @@
"id": "0d26d132",
"metadata": {},
"source": [
"#### 4.8.4.2 Testing\n",
"#### 5.8.4.2 Testing\n",
"\n",
"In a real-world deployment, you won't have the input data neatly assembled as a dataset. You will have to create a mini-batch of the user's input data, feed it to the model to get its predicted logits, and then convert them into one or more predictions and probabilities that need to be returned to the user.\n",
"\n",
18 changes: 9 additions & 9 deletions labs/Lab 4.ipynb
@@ -70,7 +70,7 @@
"id": "b17bded5"
},
"source": [
"## 6.5 Lab 4: Sentiment Analysis"
"## 7.5 Lab 4: Sentiment Analysis"
]
},
{
@@ -90,7 +90,7 @@
"id": "585fd02c"
},
"source": [
"### 6.5.1 Model\n",
"### 7.5.1 Model\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step1.png)\n",
"\n",
@@ -130,7 +130,7 @@
"id": "52f10c74"
},
"source": [
"### 6.5.2 Dataset\n",
"### 7.5.2 Dataset\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step1.png)\n",
"\n",
@@ -199,7 +199,7 @@
"id": "6eebde7e"
},
"source": [
"### 6.5.3 Transforms\n",
"### 7.5.3 Transforms\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step3.png)\n",
"\n",
@@ -552,7 +552,7 @@
"id": "171714e6"
},
"source": [
"### 6.5.4 Training"
"### 7.5.4 Training"
]
},
{
@@ -570,7 +570,7 @@
"id": "a08043f4",
"metadata": {},
"source": [
"#### 6.5.4.1 Loss Function\n",
"#### 7.5.4.1 Loss Function\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step2.png)\n",
"\n",
@@ -596,7 +596,7 @@
"id": "1b2bc46a",
"metadata": {},
"source": [
"#### 6.5.4.2 Optimizer\n",
"#### 7.5.4.2 Optimizer\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step3.png)"
]
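For example (the optimizer and learning rate are illustrative choices):

```python
import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=1e-4)
```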
@@ -629,7 +629,7 @@
"id": "3b3c47f2",
"metadata": {},
"source": [
"#### 6.4.4.2 Training Loop"
"#### 7.4.4.2 Training Loop"
]
},
{
Expand Down Expand Up @@ -781,7 +781,7 @@
"id": "724660c3"
},
"source": [
"### 6.5.5 Inference\n",
"### 7.5.5 Inference\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/model_step5.png)\n",
"\n",
14 changes: 7 additions & 7 deletions labs/Lab 5A.ipynb
@@ -48,7 +48,7 @@
"id": "888c5e9f"
},
"source": [
"## 10.8 Lab 5A: Fine-Tuning Object Detection Models\n",
"## 11.8 Lab 5A: Fine-Tuning Object Detection Models\n",
"\n",
"In this lab, you'll build a dataset, including data augmentation, and fine-tune a custom object detection model by replacing its standard backbone with a different computer vision model. In the end, you'll evaluate the model using metrics from the COCO challenge."
]
@@ -60,7 +60,7 @@
"id": "3ce02517"
},
"source": [
"### 10.8.1 Oxford-IIIT Pet Dataset\n",
"### 11.8.1 Oxford-IIIT Pet Dataset\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step1.png)\n",
"\n",
@@ -96,7 +96,7 @@
"id": "2a68cf78"
},
"source": [
"### 10.8.2 Annotations\n",
"### 11.8.2 Annotations\n",
"\n",
"The annotations follow the Pascal VOC challenge format, and are stored as individual XML files, one for each annotated image, inside the `oxford-iiit-pet/annotations/xmls` subfolder. Use the `xml_to_csv()` helper function to convert all these files into a Pandas dataframe and inspect its contents."
]
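For reference, a minimal hand-rolled parser for one such file — independent of the provided `xml_to_csv()` helper, whose exact signature isn't shown here:

```python
import xml.etree.ElementTree as ET

def parse_voc(path):
    root = ET.parse(path).getroot()
    filename = root.find('filename').text
    boxes = []
    for obj in root.iter('object'):
        name = obj.find('name').text                  # class label
        bb = obj.find('bndbox')
        boxes.append((name, [int(float(bb.find(tag).text))
                             for tag in ('xmin', 'ymin', 'xmax', 'ymax')]))
    return filename, boxes
```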
@@ -449,7 +449,7 @@
"id": "9e6986c8"
},
"source": [
"### 10.8.3 Train-Validation Split\n",
"### 11.8.3 Train-Validation Split\n",
"\n",
"The original list of files does not give any indication regarding the split between training and validation sets, so you'll have to do it yourself.\n",
"\n",
@@ -497,7 +497,7 @@
"id": "5082d0b1"
},
"source": [
"### 10.8.4 Loading Model's Weights\n",
"### 11.8.4 Loading Model's Weights\n",
"\n",
"You're using a new backbone for your Faster R-CNN model, so you need to pick one that's different from ResNet50. You could, for example, choose a smaller model from the ResNet family, but it's likely more fun to choose a completely different model instead. We suggest you use MobileNet V2 as the new backbone.\n",
"\n",
@@ -553,7 +553,7 @@
"id": "425ab7a6"
},
"source": [
"### 10.8.5 Data Augmentation\n",
"### 11.8.5 Data Augmentation\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step3.png)\n",
"\n",
@@ -616,7 +616,7 @@
"id": "b5bfd44b"
},
"source": [
"### 10.8.6 Datasets and DataLoaders\n",
"### 11.8.6 Datasets and DataLoaders\n",
"\n",
"![](https://raw.githubusercontent.com/dvgodoy/assets/main/PyTorchInPractice/images/ch0/data_step4.png)\n",
"\n",