minerva.models.nets.deep_conv_lstm


Classes

ConvLSTMCell

DeepConvLSTM
    Simple pipeline for supervised models.

SimpleSupervisedModel
    Simple pipeline for supervised models.


Module Contents

class minerva.models.nets.deep_conv_lstm.ConvLSTMCell(input_shape)

Bases: torch.nn.Module

Parameters:

input_shape (tuple)

_calculate_conv_output_shape(backbone, input_shape)

Parameters:

input_shape (Tuple[int, int, int])

Return type:

int

forward(x)
class minerva.models.nets.deep_conv_lstm.DeepConvLSTM(input_shape=(1, 6, 60), num_classes=6, learning_rate=0.001)

Bases: minerva.models.nets.base.SimpleSupervisedModel


Simple pipeline for supervised models.

This class implements a very common deep learning pipeline, which is composed of the following steps:

1. Make a forward pass with the input data on the backbone model;
2. Make a forward pass with the backbone output on the fc model;
3. Compute the loss between the output and the label data;
4. Optimize the model (backbone and FC) parameters with respect to the loss.
This reduces the code duplication for autoencoder models and makes it easier to implement new models by only changing the backbone model. More complex models that do not follow this pipeline should not inherit from this class. Note that, for this class, the input data is a tuple of tensors, where the first tensor is the input data and the second tensor is the mask or label.

Initialize the model with the backbone, fc, loss function and metrics. Metrics are used to evaluate the model during training, validation, testing or prediction. They will be logged using the lightning logger at the end of each epoch. Metrics should implement the torchmetrics.Metric interface.
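For orientation, the four steps above correspond roughly to the plain-PyTorch loop below. This is an illustrative sketch of what the class automates, not Minerva's actual code; all objects passed in are assumed to be built by the caller.

```python
import torch


def train_one_epoch(backbone: torch.nn.Module,
                    fc: torch.nn.Module,
                    loss_fn: torch.nn.Module,
                    optimizer: torch.optim.Optimizer,
                    dataloader) -> None:
    """Illustrative sketch of the supervised pipeline the class automates."""
    for x, y in dataloader:
        features = backbone(x)                      # 1. forward pass on the backbone
        y_hat = fc(features.flatten(start_dim=1))   # 2. forward pass on the fc head
        loss = loss_fn(y_hat, y)                    # 3. loss between output and label

        optimizer.zero_grad()                       # 4. optimize backbone and fc parameters
        loss.backward()
        optimizer.step()
```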

Parameters

backbone : torch.nn.Module
    The backbone model. Usually the encoder/decoder part of the model.

fc : torch.nn.Module
    The fully connected model, usually used for classification tasks. Use torch.nn.Identity() if no FC model is needed.

loss_fn : torch.nn.Module
    The function used to compute the loss.

learning_rate : float, optional
    The learning rate for the Adam optimizer, by default 1e-3.

flatten : bool, optional
    If True, the input data will be flattened before passing through the fc model, by default True.

train_metrics : Dict[str, Metric], optional
    The metrics to be used during training, by default None.

val_metrics : Dict[str, Metric], optional
    The metrics to be used during validation, by default None.

test_metrics : Dict[str, Metric], optional
    The metrics to be used during testing, by default None.

predict_metrics : Dict[str, Metric], optional
    The metrics to be used during prediction, by default None.
_calculate_fc_input_features(backbone, input_shape)

Run a single forward pass with a random input to get the number of features after the convolutional layers.

Parameters

backbone : torch.nn.Module
    The backbone of the network.

input_shape : Tuple[int, int, int]
    The input shape of the network.

Returns

int
    The number of features after the convolutional layers.

Parameters:

• backbone (torch.nn.Module)
• input_shape (Tuple[int, int, int])

Return type:

int
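A sketch of the dummy-forward technique described above (a hypothetical re-implementation for illustration, not Minerva's source code):

```python
import torch


def fc_input_features(backbone: torch.nn.Module, input_shape: tuple) -> int:
    """Hypothetical sketch: run one forward pass with a random input and count
    the features that a fully connected head would receive after flattening."""
    with torch.no_grad():
        dummy = torch.randn(1, *input_shape)    # a single random sample
        out = backbone(dummy)                   # forward pass through the conv layers
    return out.flatten(start_dim=1).shape[1]    # number of features per sample
```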

_create_backbone(input_shape)
Parameters:

input_shape (Tuple[int, int])

Return type:

torch.nn.Module

_create_fc(input_features, num_classes)
Parameters:

• input_features (int)
• num_classes (int)
Return type:

torch.nn.Module

Parameters:

• input_shape (Tuple[int, int, int])
• num_classes (int)
• learning_rate (float)
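A minimal usage sketch of DeepConvLSTM. The batch layout is an assumption inferred from the default input_shape=(1, 6, 60); adapt it to your data.

```python
import torch

from minerva.models.nets.deep_conv_lstm import DeepConvLSTM

model = DeepConvLSTM(input_shape=(1, 6, 60), num_classes=6, learning_rate=1e-3)

# Hypothetical dummy batch of 8 windows shaped like input_shape.
x = torch.randn(8, 1, 6, 60)
logits = model(x)       # forward pass: backbone -> (flatten) -> fc
print(logits.shape)     # expected (8, num_classes) if the assumptions above hold
```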
class minerva.models.nets.deep_conv_lstm.SimpleSupervisedModel(backbone, fc, loss_fn, learning_rate=0.001, flatten=True, train_metrics=None, val_metrics=None, test_metrics=None)

Bases: lightning.LightningModule


Simple pipeline for supervised models.

This class implements a very common deep learning pipeline, which is composed of the following steps:

1. Make a forward pass with the input data on the backbone model;
2. Make a forward pass with the backbone output on the fc model;
3. Compute the loss between the output and the label data;
4. Optimize the model (backbone and FC) parameters with respect to the loss.

This reduces the code duplication for autoencoder models and makes it easier to implement new models by only changing the backbone model. More complex models that do not follow this pipeline should not inherit from this class. Note that, for this class, the input data is a tuple of tensors, where the first tensor is the input data and the second tensor is the mask or label.

Initialize the model with the backbone, fc, loss function and metrics. Metrics are used to evaluate the model during training, validation, testing or prediction. They will be logged using the lightning logger at the end of each epoch. Metrics should implement the torchmetrics.Metric interface.

Parameters

backbone : torch.nn.Module
    The backbone model. Usually the encoder/decoder part of the model.

fc : torch.nn.Module
    The fully connected model, usually used for classification tasks. Use torch.nn.Identity() if no FC model is needed.

loss_fn : torch.nn.Module
    The function used to compute the loss.

learning_rate : float, optional
    The learning rate for the Adam optimizer, by default 1e-3.

flatten : bool, optional
    If True, the input data will be flattened before passing through the fc model, by default True.

train_metrics : Dict[str, Metric], optional
    The metrics to be used during training, by default None.

val_metrics : Dict[str, Metric], optional
    The metrics to be used during validation, by default None.

test_metrics : Dict[str, Metric], optional
    The metrics to be used during testing, by default None.

predict_metrics : Dict[str, Metric], optional
    The metrics to be used during prediction, by default None.
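A construction sketch showing how these parameters fit together. The toy backbone and fc head below are illustrative assumptions, not Minerva components.

```python
import torch
from torch import nn
from torchmetrics import Accuracy

from minerva.models.nets.deep_conv_lstm import SimpleSupervisedModel

# Toy backbone: a small conv stack that maps (B, 1, H, W) inputs to 16 feature maps.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=(1, 5)),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)),
)
fc = nn.Linear(16, 6)   # 16 features after flattening, 6 classes out

model = SimpleSupervisedModel(
    backbone=backbone,
    fc=fc,
    loss_fn=nn.CrossEntropyLoss(),
    learning_rate=1e-3,
    flatten=True,   # flatten the backbone output before the fc head
    train_metrics={"acc": Accuracy(task="multiclass", num_classes=6)},
)
```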

_compute_metrics(y_hat, y, step_name)

Calculate the metrics for the given step.

Parameters

y_hat : torch.Tensor
    The output data from the forward pass.

y : torch.Tensor
    The input data/label.

step_name : str
    Name of the step. It will be used to get the metrics from the self.metrics attribute.

Returns

Dict[str, torch.Tensor]
    A dictionary with the metrics values.

Parameters:

• y_hat (torch.Tensor)
• y (torch.Tensor)
• step_name (str)

Return type:

Dict[str, torch.Tensor]
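A sketch of what such a per-step metric computation typically looks like with torchmetrics. This is a hypothetical illustration; the "{step_name}_{metric}" key format is an assumption.

```python
from typing import Dict

import torch
import torchmetrics


def compute_metrics(metrics: Dict[str, torchmetrics.Metric],
                    y_hat: torch.Tensor,
                    y: torch.Tensor,
                    step_name: str) -> Dict[str, torch.Tensor]:
    """Hypothetical sketch: evaluate each configured metric and key it by step name."""
    return {
        f"{step_name}_{name}": metric.to(y_hat.device)(y_hat, y)
        for name, metric in metrics.items()
    }
```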

_loss_func(y_hat, y)

Calculate the loss between the output and the input data.

Parameters

y_hat : torch.Tensor
    The output data from the forward pass.

y : torch.Tensor
    The input data/label.

Returns

torch.Tensor
    The loss value.

Parameters:

• y_hat (torch.Tensor)
• y (torch.Tensor)

Return type:

torch.Tensor

_single_step(batch, batch_idx, step_name)

Perform a single train/validation/test step. It consists of making a forward pass with the input data on the backbone model, computing the loss between the output and the input data, and logging the loss.

Parameters

batch : torch.Tensor
    The input data. It must be a 2-element tuple of tensors, where the first tensor is the input data and the second tensor is the mask.

batch_idx : int
    The index of the batch.

step_name : str
    The name of the step. It will be used to log the loss. The possible values are: “train”, “val” and “test”. The loss will be logged as “{step_name}_loss”.

Returns

torch.Tensor
    A tensor with the loss value.

Parameters:

• batch (torch.Tensor)
• batch_idx (int)
• step_name (str)

Return type:

torch.Tensor
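A sketch of the behaviour documented above, written against the described contract rather than Minerva's source; the attribute names used on self are assumptions.

```python
import torch


def _single_step(self, batch, batch_idx: int, step_name: str) -> torch.Tensor:
    """Hypothetical sketch of a single train/val/test step."""
    x, y = batch                           # 2-element tuple: input data and mask/label
    y_hat = self.forward(x)                # forward pass (backbone, then fc)
    loss = self._loss_func(y_hat, y)       # compute the loss
    self.log(f"{step_name}_loss", loss)    # logged as "{step_name}_loss"
    return loss
```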

configure_optimizers()
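The constructor documentation says learning_rate feeds an Adam optimizer, so this hook presumably resembles the sketch below; the self.learning_rate attribute name is an assumption.

```python
import torch


def configure_optimizers(self):
    """Hypothetical sketch: one Adam optimizer over all parameters,
    using the documented learning_rate (default 1e-3)."""
    return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
```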
forward(x)

Perform a forward pass with the input data on the backbone model.

Parameters

x : torch.Tensor
    The input data.

Returns

torch.Tensor
    The output data from the forward pass.

Parameters:

x (torch.Tensor)

Return type:

torch.Tensor

predict_step(batch, batch_idx, dataloader_idx=None)
test_step(batch, batch_idx)
Parameters:

• batch (torch.Tensor)
• batch_idx (int)
training_step(batch, batch_idx)
Parameters:

• batch (torch.Tensor)
• batch_idx (int)
validation_step(batch, batch_idx)
Parameters:

• batch (torch.Tensor)
• batch_idx (int)
Parameters:

• backbone (torch.nn.Module)
• fc (torch.nn.Module)
• loss_fn (torch.nn.Module)
• learning_rate (float)
• flatten (bool)
• train_metrics (Dict[str, torchmetrics.Metric])
• val_metrics (Dict[str, torchmetrics.Metric])
• test_metrics (Dict[str, torchmetrics.Metric])
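Since SimpleSupervisedModel (and therefore DeepConvLSTM) is a lightning.LightningModule, training follows the usual Lightning workflow. An end-to-end sketch with random data; the shapes, sizes and the assumption that the default loss expects integer class labels are illustrative only.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L

from minerva.models.nets.deep_conv_lstm import DeepConvLSTM

# Illustrative random data: 128 windows shaped like the default input_shape=(1, 6, 60),
# with integer class labels in [0, 6).
x = torch.randn(128, 1, 6, 60)
y = torch.randint(0, 6, (128,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

model = DeepConvLSTM(input_shape=(1, 6, 60), num_classes=6)
trainer = L.Trainer(max_epochs=1, accelerator="cpu", logger=False)
trainer.fit(model, train_loader)
```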