fix(optimizer): derive clone
drewxs committed Sep 3, 2023
1 parent 2497b39 commit 5d8fff8
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions src/optimizer.rs
@@ -8,7 +8,7 @@ use crate::Tensor;

/// Stochastic Gradient Descent (SGD): A basic optimizer that updates the weights based on
/// the gradients of the loss function with respect to the weights multiplied by a learning rate.
-#[derive(Debug)]
+#[derive(Clone, Debug)]
pub struct SGD {
learning_rate: f64,
}
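
For reference, the update described in the SGD doc comment above is simply w <- w - learning_rate * grad. A minimal sketch of that rule follows; it uses plain slices rather than the crate's `Tensor` type, which is an assumption made only to keep the example self-contained.

// Minimal SGD sketch: each weight moves against its gradient, scaled by the learning rate.
// Illustrative only; the crate's actual update operates on its `Tensor` type.
fn sgd_step(weights: &mut [f64], grads: &[f64], learning_rate: f64) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= learning_rate * g;
    }
}
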
@@ -28,7 +28,7 @@ impl SGD
/// This allows the learning rate to decrease for parameters that have consistently large gradients
/// and increase for parameters that have consistently small gradients.
/// Includes an option to apply weight decay regularization to the gradients.
-#[derive(Debug)]
+#[derive(Clone, Debug)]
pub struct Adagrad {
learning_rate: f64,
epsilon: f64,
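
The Adagrad behaviour described in the doc comment above (per-parameter scaling by accumulated squared gradients, with optional weight decay applied to the gradients) can be sketched the same way. The function signature and field names below are assumptions for illustration, not the crate's API; the real update lives in src/optimizer.rs.

// Adagrad sketch: each parameter's step is scaled by its accumulated squared gradients.
// Illustrative only; slices stand in for the crate's `Tensor` type.
fn adagrad_step(
    weights: &mut [f64],
    grads: &[f64],
    accum: &mut [f64], // running sum of squared gradients, one entry per parameter
    learning_rate: f64,
    epsilon: f64,
    weight_decay: f64,
) {
    for ((w, g), a) in weights.iter_mut().zip(grads).zip(accum.iter_mut()) {
        let g = *g + weight_decay * *w;                  // weight decay applied to the gradient
        *a += g * g;                                     // accumulate squared gradient
        *w -= learning_rate * g / (a.sqrt() + epsilon);  // large history -> smaller step
    }
}
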
@@ -72,7 +72,7 @@ impl Adagrad {
}

/// Optimizer enum that allows for different optimizers to be used with neural networks.
-#[derive(Debug)]
+#[derive(Clone, Copy, Debug)]
pub enum Optimizer {
SGD {
learning_rate: f64,
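
Deriving `Clone` (and `Copy` on the enum) is what this commit adds: an optimizer value can now be duplicated rather than only moved or borrowed, for example to hand the same configuration to several networks. A self-contained sketch of that usage follows; it uses a stand-in enum mirroring only the variant and field visible in this diff, so the real `Optimizer` certainly has more variants and fields than shown here.

// Stand-in mirroring the shape of the crate's `Optimizer` enum (field set is an assumption).
#[derive(Clone, Copy, Debug)]
enum Optimizer {
    SGD { learning_rate: f64 },
}

fn main() {
    let opt = Optimizer::SGD { learning_rate: 0.01 };
    let for_second_model = opt; // `Copy`: duplicated implicitly, `opt` stays usable
    let explicit = opt.clone(); // `Clone` also works explicitly
    println!("{:?} {:?} {:?}", opt, for_second_model, explicit);
}
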
