diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index e5d92b2..65c3cba 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-21T11:06:02","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-21T12:04:52","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/dev/bibliography/index.html b/dev/bibliography/index.html
index 96b6534..4448515 100644
--- a/dev/bibliography/index.html
+++ b/dev/bibliography/index.html
@@ -1,2 +1,2 @@
-
We want to minimize the sum of training_loss and reg, and for this task we can use FastForwardBackward, which implements the fast proximal gradient method (also known as fast forward-backward splitting, or FISTA). We therefore construct the algorithm, then apply it to our problem by providing a starting point, together with the objective terms f=training_loss (smooth) and g=reg (nonsmooth).
ffb = ProximalAlgorithms.FastForwardBackward()
solution, iterations = ffb(x0 = zeros(n_features + 1), f = training_loss, g = reg)
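For illustration, the same call pattern on a fully self-contained toy problem might look as follows. This is a minimal sketch, assuming ProximalOperators.jl is available to supply the objective terms: the design matrix A, the targets b, and the regularization weight 0.1 are made-up illustration data, not part of the example above.
using ProximalOperators
using ProximalAlgorithms

A = [1.0 2.0; 3.0 4.0; 5.0 6.0]    # toy design matrix (3 samples, 2 features; made up)
b = [1.0, 2.0, 3.0]                # toy target vector (made up)
smooth_term = LeastSquares(A, b)   # smooth term: 1/2 * ||A*x - b||^2
nonsmooth_term = NormL1(0.1)       # nonsmooth term: 0.1 * ||x||_1

ffb = ProximalAlgorithms.FastForwardBackward()
solution, iterations = ffb(x0 = zeros(2), f = smooth_term, g = nonsmooth_term)
As in the snippet above, the algorithm object is called with a starting point x0 matching the number of features, the smooth term as f, and the nonsmooth term as g, and it returns the computed solution together with the number of iterations performed.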