Commit

Reformated lists
otavioon committed Feb 2, 2024
1 parent e4ec230 commit 4875885
Showing 4 changed files with 45 additions and 93 deletions.
6 changes: 6 additions & 0 deletions notebooks/01_structuring_input.ipynb
Original file line number Diff line number Diff line change
@@ -17,6 +17,7 @@
"The return type of the `__getitem__` method is not specified, but it is usually a 2-element tuple of input and target. The input is the data used to make predictions, and the target is the data the model tries to predict.\n",
"\n",
"For now, this framework provides `Dataset` implementations for time-series data, where data is organized in two different ways:\n",
"\n",
"- A directory with several CSV files, where each file contains a time-series. Each row in a CSV file is a time-step, each column is a feature, and the whole file is a time-series. This is handled by the `SeriesFolderCSVDataset` class.\n",
"- A single CSV file with a windowed time-series. Each row in the CSV file is a window, and each column is a feature. This is handled by the `MultiModalSeriesCSVDataset` class.\n",
"\n",
@@ -70,6 +71,7 @@
"Note that each feature (column) represents a dimension of the time-series, while the rows represent the time-steps.\n",
"\n",
"Thus, the `SeriesFolderCSVDataset` class minimally requires:\n",
"\n",
"- `data_path`: the path to the directory containing the CSV files\n",
"- `features`: a list of strings with the names of the features columns, e.g. `['accel-x', 'accel-y', 'accel-z', 'gyro-x', 'gyro-y', 'gyro-z']`\n",
"- `label`: a string with the name of the label column, e.g. `'class'`"
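To make these three arguments concrete, here is a small, self-contained sketch — not the library's actual `SeriesFolderCSVDataset` (whose import path and internals may differ), but a standard-library stand-in that mimics the described layout: each CSV file in the folder is one time-series, rows are time-steps, and columns are features plus a label.

```python
import csv
import os
import tempfile

class FolderCSVDatasetSketch:
    """Minimal stand-in for a folder-of-CSVs time-series dataset.

    Each CSV file is one time-series: rows are time-steps, columns are
    features, plus an optional label column.
    """

    def __init__(self, data_path, features, label=None):
        self.features = features
        self.label = label
        self.files = sorted(
            os.path.join(data_path, f)
            for f in os.listdir(data_path)
            if f.endswith(".csv")
        )

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        with open(self.files[idx], newline="") as fh:
            rows = list(csv.DictReader(fh))
        # Shape (num_features, time_steps): one inner list per feature.
        data = [[float(r[f]) for r in rows] for f in self.features]
        if self.label is None:
            return data
        labels = [r[self.label] for r in rows]  # one label per time-step
        return data, labels

# Build a toy directory with one 3-step, 2-feature time-series.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "user1.csv"), "w", newline="") as fh:
    w = csv.DictWriter(fh, fieldnames=["accel-x", "accel-y", "class"])
    w.writeheader()
    for t in range(3):
        w.writerow({"accel-x": t, "accel-y": t * 10, "class": "walk"})

ds = FolderCSVDatasetSketch(tmp, features=["accel-x", "accel-y"], label="class")
data, labels = ds[0]
print(len(ds), len(data), len(data[0]))  # 1 series, 2 features, 3 time-steps
```

Here the returned `data` is a plain list of lists with shape `(num_features, time_steps)`; the real class returns numpy arrays.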
@@ -121,6 +123,7 @@
"source": [
"We can get the number of samples in the dataset with the `len` function, and we can retrieve a sample with the `__getitem__` method, that is, using `[]`, such as `dataset[0]`.\n",
"The dataset may return:\n",
"\n",
"- A 2-element tuple, where the first element is a 2D numpy array with shape `(num_features, time_steps)`, and the second element is a 1D tensor with shape `(time_steps,)`.\n",
"- A 2D numpy array with shape `(num_features, time_steps)`, if `label` was `None` when the dataset object was created.\n",
"\n",
@@ -226,6 +229,7 @@
"Note that each feature (column) represents a dimension of the time-series, while the rows represent the samples.\n",
"\n",
"The `MultiModalSeriesCSVDataset` class minimally requires:\n",
"\n",
"- `data_path`: the path to the CSV file\n",
"- `feature_prefixes`: a list of strings with the prefixes of the feature columns, e.g. `['accel-x', 'accel-y']`. The class will look for columns with these prefixes and will consider them as features of a modality.\n",
"- `label`: a string with the name of the label column, e.g. `'class'`\n",
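A sketch of the windowed layout, using only the standard library. The `prefix-<index>` column naming (e.g. `accel-x-0`, `accel-x-1`) is an assumption for illustration — the real `MultiModalSeriesCSVDataset` may group prefixed columns differently.

```python
import csv
import io

def read_windowed_csv(text, feature_prefixes, label):
    """Parse a windowed time-series CSV (illustrative sketch).

    Each row is one window; columns sharing a prefix (e.g. accel-x-0,
    accel-x-1, ...) form the time-steps of one feature/modality.
    """
    rows = list(csv.DictReader(io.StringIO(text)))
    samples = []
    for row in rows:
        window = []
        for prefix in feature_prefixes:
            cols = sorted(c for c in row if c.startswith(prefix + "-"))
            window.append([float(row[c]) for c in cols])  # (time_steps,)
        y = row[label] if label else None
        samples.append((window, y))  # window: (num_features, time_steps)
    return samples

csv_text = (
    "accel-x-0,accel-x-1,accel-y-0,accel-y-1,class\n"
    "1,2,3,4,walk\n"
    "5,6,7,8,run\n"
)
samples = read_windowed_csv(csv_text, ["accel-x", "accel-y"], "class")
print(len(samples), samples[0][0], samples[0][1])
```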
@@ -273,6 +277,7 @@
"source": [
"We can get the number of samples in the dataset with the `len` function, and we can retrieve a sample with the `__getitem__` method, that is, using `[]`, such as `dataset[0]`.\n",
"The dataset may return:\n",
"\n",
"- A 2-element tuple, where the first element is a 2D numpy array with shape `(num_features, time_steps)`, and the second element is a 1D tensor with shape `(time_steps,)`.\n",
"- A 2D numpy array with shape `(num_features, time_steps)`, if `label` was `None` when the dataset object was created.\n",
"\n",
@@ -460,6 +465,7 @@
"## Summary\n",
"\n",
"First, we need to check how our dataset's data is organized.\n",
"\n",
"- If data is organized in a directory with several CSV files, we use the `SeriesFolderCSVDataset` class. \n",
"- If data is organized in a single CSV file with a windowed time-series, we use the `MultiModalSeriesCSVDataset` class.\n",
"\n",
1 change: 1 addition & 0 deletions notebooks/02_training_model.ipynb
@@ -55,6 +55,7 @@
"The `train_dataloader` method will use `train.csv`, `val_dataloader` will use `validation.csv` and `test_dataloader` will use `test.csv`.\n",
"\n",
"To create a `MultiModalHARSeriesDataModule`, we must pass:\n",
"\n",
"- `data_path`: the path to the `KuHar` folder;\n",
"- `feature_prefixes`: the prefixes of the features in the CSV files. In this case, we have `accel-x`, `accel-y`, `accel-z`, `gyro-x`, `gyro-y` and `gyro-z`;\n",
"- `batch_size`: the batch size for the data loaders; and\n",
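The split convention described above (fixed file names per dataloader) can be sketched like this — a hypothetical stand-in, not the real `MultiModalHARSeriesDataModule`, which returns actual `DataLoader`s over the datasets.

```python
import os

class HARSeriesDataModuleSketch:
    """Sketch of the split convention: the train/val/test dataloaders
    each read a fixed CSV file under ``data_path``."""

    SPLIT_FILES = {"train": "train.csv", "val": "validation.csv", "test": "test.csv"}

    def __init__(self, data_path, feature_prefixes, batch_size=32):
        self.data_path = data_path
        self.feature_prefixes = feature_prefixes
        self.batch_size = batch_size

    def _split_path(self, split):
        return os.path.join(self.data_path, self.SPLIT_FILES[split])

    # In the real class these return DataLoaders; here we just return
    # the file each split would read, to show the convention.
    def train_dataloader(self):
        return self._split_path("train")

    def val_dataloader(self):
        return self._split_path("val")

    def test_dataloader(self):
        return self._split_path("test")

dm = HARSeriesDataModuleSketch("KuHar", ["accel-x", "accel-y"], batch_size=64)
print(dm.train_dataloader(), dm.test_dataloader())
```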
6 changes: 6 additions & 0 deletions notebooks/03_training_ssl_model.ipynb
@@ -18,6 +18,7 @@
"3. Train the model using PyTorch Lightning.\n",
"\n",
"We can instantiate the model in two ways:\n",
"\n",
"1. Instantiate each element separately, such as the encoder and the autoregressive model, and then pass them to the CPC model; or\n",
"2. Using builder methods to instantiate the model. In this case, we do not need to instantiate each element separately, but we can still customize the model by passing the desired parameters to the builder methods. This is the approach we will use in this notebook.\n",
"\n",
@@ -71,6 +72,7 @@
"This class will create a `DataLoader` of `SeriesFolderCSVDataset` for each split (train, validation, and test), and will set up the data correctly.\n",
"\n",
"In this notebook, we will use the `UserActivityFolderDataModule` to create the `LightningDataModule` for us. This class minimally requires:\n",
"\n",
"- `data_path`: the root directory of the data;\n",
"- `features`: the name of the features columns;\n",
"- `pad`: a boolean indicating whether the samples should be padded to the same length, that is, the length of the longest sample in the dataset. The padding scheme replicates each sample from its beginning until the length of the longest sample is reached. "
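The replication padding described for `pad` can be sketched with a small helper (hypothetical name; the real implementation may differ in details such as how partial repetitions are handled):

```python
def pad_replicate(sample, target_len):
    """Replicate a sample from its beginning until it has target_len steps.

    Sketch of the padding scheme described above: [1, 2, 3] padded to
    length 7 becomes [1, 2, 3, 1, 2, 3, 1].
    """
    out = list(sample)
    i = 0
    while len(out) < target_len:
        out.append(sample[i % len(sample)])  # wrap around from the start
        i += 1
    return out

samples = [[1, 2, 3], [4, 5, 6, 7, 8, 9, 10]]
longest = max(len(s) for s in samples)  # pad everything to the longest sample
padded = [pad_replicate(s, longest) for s in samples]
print(padded[0])  # [1, 2, 3, 1, 2, 3, 1]
```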
@@ -121,6 +123,7 @@
"This will instantiate a CPC self-supervised model with the default encoder (`ssl_tools.models.layers.gru.GRUEncoder`), a GRU followed by a linear layer, and the default autoregressive model (`torch.nn.GRU`).\n",
"\n",
"We can parametrize the creation of the model by passing the desired parameters to the builder method. The `build_cpc` method can be parametrized with the following parameters:\n",
"\n",
"- `encoding_size`: the size of the encoded representation;\n",
"- `in_channels`: number of input features;\n",
"- `gru_hidden_size`: number of features in the hidden state of the GRU;\n",
@@ -497,6 +500,7 @@
"It is worth mentioning that the `SSLDiscriminator` class's `forward` method receives the input data and the labels, and returns the predictions. This differs from the `forward` method of the self-supervised models, which receives only the input data and returns the latent representations of the input data.\n",
"\n",
"It is also worth noting that the fine-tuning process can be done in two ways: \n",
"\n",
"1. Fine-tuning the whole model, that is, backbone (encoder) and classifier, with the labeled data; or \n",
"2. Fine-tuning only the classifier, with the labeled data.\n",
"The `SSLDiscriminator` class can handle both cases through the `update_backbone` parameter. If `update_backbone` is `True`, the whole model is fine-tuned (case 1 above); otherwise, only the classifier is fine-tuned (case 2 above).\n",
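What the `update_backbone` switch amounts to can be sketched as selecting which parameter groups get updated (hypothetical helper and names; the real class presumably wires this into the optimizer or `requires_grad` flags):

```python
def trainable_parameters(backbone_params, head_params, update_backbone):
    """Sketch of the update_backbone switch: choose which parameter
    groups will be updated during fine-tuning."""
    if update_backbone:
        return backbone_params + head_params  # case 1: fine-tune everything
    return head_params                        # case 2: classifier only

# Toy parameter names standing in for real model parameters.
backbone = ["encoder.weight", "encoder.bias"]
head = ["classifier.weight", "classifier.bias"]
print(trainable_parameters(backbone, head, update_backbone=False))
```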
@@ -554,6 +558,7 @@
"source": [
"We will create the `SSLDiscriminator` model. \n",
"The `SSLDiscriminator` minimally requires:\n",
"\n",
"- `backbone`: the backbone model, that is, the pre-trained model;\n",
"- `head`: the prediction head model;\n",
"- `loss_fn`: the loss function to be used to train the model;\n",
@@ -931,6 +936,7 @@
"metadata": {},
"source": [
"Finally, if we want to get the predictions of the model, we can:\n",
"\n",
"1. Call the `forward` method of the model, passing the input data (iterating over all batches of the dataloader); or \n",
"2. Use the `Trainer.predict` method, passing the data module. If you use the `Trainer.predict` method, the model will be set to evaluation mode, and the predictions will be done using the `predict_dataloader` defined in the `LightningDataModule`. This is usually the test set (`test_dataloader`)."
]
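Option 1 above amounts to a loop like the following sketch, with toy stand-ins for the model's `forward` and the dataloader:

```python
def predict_all(model_forward, dataloader):
    """Option 1: iterate over all batches and collect the model's outputs
    (option 2 delegates this loop to Trainer.predict)."""
    preds = []
    for batch_inputs in dataloader:
        preds.extend(model_forward(batch_inputs))
    return preds

# Toy stand-ins: the "model" scores each window by its mean value.
model_forward = lambda batch: [sum(x) / len(x) for x in batch]
dataloader = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9]]]  # two batches of windows
print(predict_all(model_forward, dataloader))  # [2.0, 5.0, 8.0]
```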
125 changes: 32 additions & 93 deletions notebooks/04_using_experiments.ipynb
@@ -61,8 +61,9 @@
"\n",
"The `LightningTrain` class allows training arbitrary models. \n",
"The `LightningSSLTrain` class is a derived class used to train models with self-supervised learning. It adds four new methods: \n",
"- `get_pretrain_model` and `get_pretrain_data_module`: the user must return the model and data module used to pretrain the model.\n",
"- `get_finetune_model` and `get_finetune_data_module`: the user must return the model and data module used to finetune the model.\n",
"\n",
"* `get_pretrain_model` and `get_pretrain_data_module`: the user must return the model and data module used to pretrain the model.\n",
"* `get_finetune_model` and `get_finetune_data_module`: the user must return the model and data module used to finetune the model.\n",
"\n",
"The `training_mode` variable is used to indicate if the model is in pretrain or finetune mode. In fact, the `get_model` and `get_data_module` methods will call the `get_pretrain_model` and `get_pretrain_data_module` methods if `training_mode` is `pretrain`, and the `get_finetune_model` and `get_finetune_data_module` methods if `training_mode` is `finetune`. \n",
"\n",
@@ -88,13 +89,14 @@
"As `CPCTrain` is a derived class, we can pass the parameters from all parent classes (`epochs`, `accelerator`, `batch_size`, *etc.*), as well as the parameters from the `CPCTrain` class (`window_size`, `num_classes`, *etc.*) in the class constructor.\n",
"\n",
"The `CPCTrain` includes parameters to create the model as well as the data module. These parameters include:\n",
"- `data`: the path to the dataset folder. For pretrain, the data must be the path to a dataset where the samples are the whole time-series of an user. For finetune, the data must be the path to a dataset where the samples are the windows of the time-series, as in previous notebooks.\n",
"- `encoding_size`: the size of the latent representation of the CPC model.\n",
"- `in_channel`: the number of features in the input data.\n",
"- `window_size`: size of the input windows (`X_t`) to be fed to the encoder.\n",
"- `pad_length`: boolean indicating if the input windows should be padded to the `window_size` or not.\n",
"- `num_classes`: number of classes in the dataset.\n",
"- `update_backbone`: boolean indicating if the backbone should be updated during finetuning (only useful for fine-tuning process).\n",
"\n",
"* `data`: the path to the dataset folder. For pretraining, the data must be the path to a dataset where the samples are the whole time-series of a user. For fine-tuning, the data must be the path to a dataset where the samples are the windows of the time-series, as in previous notebooks.\n",
"* `encoding_size`: the size of the latent representation of the CPC model.\n",
"* `in_channel`: the number of features in the input data.\n",
"* `window_size`: size of the input windows (`X_t`) to be fed to the encoder.\n",
"* `pad_length`: boolean indicating if the input windows should be padded to the `window_size` or not.\n",
"* `num_classes`: number of classes in the dataset.\n",
"* `update_backbone`: boolean indicating if the backbone should be updated during finetuning (only useful for fine-tuning process).\n",
"\n",
"Only the `data` parameter is required; the others have default values. Please check the documentation of the `CPCTrain` class for more details.\n",
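The parameter surface can be pictured as a config object where only `data` lacks a default. The default values below are illustrative placeholders, not the `CPCTrain` class's actual defaults:

```python
from dataclasses import dataclass

@dataclass
class CPCTrainConfigSketch:
    """Sketch of the parameter surface described above: only ``data`` is
    required; every other field carries a (here, invented) default."""
    data: str                      # path to the dataset folder (required)
    encoding_size: int = 150       # size of the latent representation
    in_channel: int = 6            # number of input features
    window_size: int = 4           # size of input windows fed to the encoder
    pad_length: bool = False      # pad windows to window_size?
    num_classes: int = 6           # number of classes in the dataset
    update_backbone: bool = True   # fine-tune the backbone too?

cfg = CPCTrainConfigSketch(data="data/KuHar")
print(cfg.data, cfg.encoding_size)
```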
"\n",
Expand All @@ -114,7 +116,7 @@
{
"data": {
"text/plain": [
"LightningExperiment(experiment_dir=logs/pretrain/CPC/2024-02-01_22-29-11, model=CPC, run_id=2024-02-01_22-29-11, finished=False)"
"LightningExperiment(experiment_dir=logs/pretrain/CPC/2024-02-01_23-52-39, model=CPC, run_id=2024-02-01_23-52-39, finished=False)"
]
},
"execution_count": 1,
@@ -157,22 +159,8 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/pretrain/CPC/2024-02-01_22-29-11 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n",
"GPU available: True (cuda), used: True\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Setting up experiment: CPC...\n",
"Running experiment: CPC...\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/pretrain/CPC/2024-02-01_23-52-39 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n",
"GPU available: True (cuda), used: True\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
"HPU available: False, using: 0 HPUs\n",
@@ -184,8 +172,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Setting up experiment: CPC...\n",
"Running experiment: CPC...\n",
"Training will start\n",
"\tExperiment path: logs/pretrain/CPC/2024-02-01_22-29-11\n"
"\tExperiment path: logs/pretrain/CPC/2024-02-01_23-52-39\n"
]
},
{
@@ -244,7 +234,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a0c0cc6d18374062a4e91d2d0c6bc11c",
"model_id": "d3d79b4d1870487a9225f9f9afdc262c",
"version_major": 2,
"version_minor": 0
},
@@ -265,11 +255,11 @@
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">--&gt; Overall fit time: 20.050 seconds\n",
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">--&gt; Overall fit time: 19.928 seconds\n",
"</pre>\n"
],
"text/plain": [
"--> Overall fit time: 20.050 seconds\n"
"--> Overall fit time: 19.928 seconds\n"
]
},
"metadata": {},
@@ -303,7 +293,7 @@
"output_type": "stream",
"text": [
"Training finished\n",
"Last checkpoint saved at: logs/pretrain/CPC/2024-02-01_22-29-11/checkpoints/last.ckpt\n",
"Last checkpoint saved at: logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt\n",
"Teardown experiment: CPC...\n"
]
}
@@ -351,7 +341,7 @@
{
"data": {
"text/plain": [
"PosixPath('logs/pretrain/CPC/2024-02-01_22-29-11/checkpoints/last.ckpt')"
"PosixPath('logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt')"
]
},
"execution_count": 3,
@@ -388,7 +378,7 @@
{
"data": {
"text/plain": [
"LightningExperiment(experiment_dir=logs/finetune/CPC/2024-02-01_22-38-32, model=CPC, run_id=2024-02-01_22-38-32, finished=False)"
"LightningExperiment(experiment_dir=logs/finetune/CPC/2024-02-02_00-03-12, model=CPC, run_id=2024-02-02_00-03-12, finished=False)"
]
},
"execution_count": 4,
@@ -430,7 +420,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/finetune/CPC/2024-02-01_22-38-32 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n",
"/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:198: Experiment logs directory logs/finetune/CPC/2024-02-02_00-03-12 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!\n",
"GPU available: True (cuda), used: True\n",
"TPU available: False, using: 0 TPU cores\n",
"IPU available: False, using: 0 IPUs\n",
@@ -445,10 +435,10 @@
"text": [
"Setting up experiment: CPC...\n",
"Running experiment: CPC...\n",
"Loading model from: logs/pretrain/CPC/2024-02-01_22-29-11/checkpoints/last.ckpt...\n",
"Loading model from: logs/pretrain/CPC/2024-02-01_23-52-39/checkpoints/last.ckpt...\n",
"Model loaded successfully\n",
"Training will start\n",
"\tExperiment path: logs/finetune/CPC/2024-02-01_22-38-32\n"
"\tExperiment path: logs/finetune/CPC/2024-02-02_00-03-12\n"
]
},
{
@@ -505,7 +495,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d4283777cdd14ff1ba3978d107c1d23a",
"model_id": "f478640308374e5894df3302aa127e92",
"version_major": 2,
"version_minor": 0
},
@@ -528,58 +518,6 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"`Trainer.fit` stopped: `max_epochs=10` reached.\n"
]
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">--&gt; Overall fit time: 9.757 seconds\n",
"</pre>\n"
],
"text/plain": [
"--> Overall fit time: 9.757 seconds\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
],
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
"</pre>\n"
],
"text/plain": [
"\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training finished\n",
"Last checkpoint saved at: logs/finetune/CPC/2024-02-01_22-38-32/checkpoints/last.ckpt\n",
"Teardown experiment: CPC...\n"
]
}
],
"source": [
Expand All @@ -596,7 +534,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -631,7 +569,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -899,8 +837,9 @@
"## Other advantages of using `LightningExperiment`\n",
"\n",
"The `LightningExperiment` class also provides other advantages, such as:\n",
"- Automatically generate CLI applications for the experiments, using the `jsonargparse` library. In fact, every parameter in the class constructor is automatically converted to a command line argument. This allows the user to run the experiment from the command line, using the same parameters as in the class constructor.\n",
"- Default `metrics.csv` and `hparams.yaml` files are created, and the hyperparameters are logged in the `hparams.yaml` file. The `metrics.csv` file contains the metrics logged during training, and can be used to analyze the performance of the model."
"\n",
"* Automatically generate CLI applications for the experiments, using the `jsonargparse` library. In fact, every parameter in the class constructor is automatically converted to a command line argument. This allows the user to run the experiment from the command line, using the same parameters as in the class constructor.\n",
"* Default `metrics.csv` and `hparams.yaml` files are created, and the hyperparameters are logged in the `hparams.yaml` file. The `metrics.csv` file contains the metrics logged during training, and can be used to analyze the performance of the model."
]
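The idea behind the auto-generated CLI can be sketched with the standard library's `argparse` and `inspect`; the real implementation relies on `jsonargparse` and is considerably richer. All class and flag names below are illustrative.

```python
import argparse
import inspect

def cli_from_init(cls):
    """Turn every parameter of cls.__init__ into a command-line flag.

    Sketch of the auto-CLI idea: parameters without defaults become
    required flags; parameters with defaults keep them.
    """
    parser = argparse.ArgumentParser(description=cls.__name__)
    sig = inspect.signature(cls.__init__)
    for name, p in sig.parameters.items():
        if name == "self":
            continue
        required = p.default is inspect.Parameter.empty
        parser.add_argument(
            f"--{name}",
            required=required,
            default=None if required else p.default,
            # Infer the flag's type from the default value when possible.
            type=type(p.default) if not required and p.default is not None else str,
        )
    return parser

class DemoExperiment:
    def __init__(self, data, batch_size=32):
        self.data = data
        self.batch_size = batch_size

parser = cli_from_init(DemoExperiment)
args = parser.parse_args(["--data", "data/KuHar", "--batch_size", "64"])
print(args.data, args.batch_size)
```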
}
],
