diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index f718aca..0a6734f 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-10-01T08:55:00","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.5","generation_timestamp":"2024-10-01T10:02:51","documenter_version":"1.7.0"}}
\ No newline at end of file
diff --git a/dev/bibliography/index.html b/dev/bibliography/index.html
index 5b05b50..4f38605 100644
--- a/dev/bibliography/index.html
+++ b/dev/bibliography/index.html
@@ -1,2 +1,2 @@
-
We want to minimize the sum of training_loss and reg. For this task we can use FastForwardBackward, which implements the fast proximal gradient method (also known as fast forward-backward splitting, or FISTA). We construct the algorithm, then apply it to our problem by providing a starting point and the objective terms f = training_loss (smooth) and g = reg (nonsmooth).
ffb = ProximalAlgorithms.FastForwardBackward()
solution, iterations = ffb(x0 = zeros(n_features + 1), f = training_loss, g = reg)
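To make the iteration behind FastForwardBackward concrete, here is a minimal standalone sketch of FISTA in Python (not the library's implementation): each step takes a gradient step on the smooth term f, applies the proximal operator of the nonsmooth term g, and adds Nesterov-style momentum. The demo problem, step size, and iteration count are illustrative choices, not values from the source; with g an L1 penalty, the prox is soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_f, prox_g, x0, step, iters=200):
    # Fast proximal gradient method (FISTA):
    # forward (gradient) step on f, backward (prox) step on g, plus momentum.
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox_g(y - step * grad_f(y), step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy problem: minimize 0.5 * ||x - b||^2 + lam * ||x||_1,
# whose exact solution is soft_threshold(b, lam).
b = np.array([3.0, -0.5, 0.1])
lam = 1.0
grad_f = lambda x: x - b                      # gradient of the smooth term
prox_g = lambda v, s: soft_threshold(v, lam * s)  # prox of the L1 term
x_star = fista(grad_f, prox_g, np.zeros(3), step=1.0)
# x_star → [2.0, 0.0, 0.0]
```

In the library call above, f and g play the roles of grad_f and prox_g here: ProximalAlgorithms.jl queries the gradient of f and the prox of g for you, so the user only supplies the two objective terms and a starting point x0.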