diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index e5d92b2..65c3cba 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-21T11:06:02","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-21T12:04:52","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/dev/bibliography/index.html b/dev/bibliography/index.html index 96b6534..4448515 100644 --- a/dev/bibliography/index.html +++ b/dev/bibliography/index.html @@ -1,2 +1,2 @@ -Bibliography · ProximalAlgorithms.jl

Bibliography

[1]
[2]
[3]
P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis 16, 964–979 (1979).
[4]
J. Eckstein and D. P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 55, 293–318 (1992).
[5]
P. Tseng. Accelerated proximal gradient methods for convex optimization (Technical report, University of Washington, Seattle, 2008).
[6]
A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2, 183–202 (2009).
[7]
L. Stella, A. Themelis, P. Sopasakis and P. Patrinos. A simple and efficient algorithm for nonlinear model predictive control. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC) (IEEE, 2017); pp. 1939–1944.
[8]
A. Themelis, L. Stella and P. Patrinos. Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone linesearch algorithms. SIAM Journal on Optimization 28, 2274–2303 (2018).
[9]
A. Themelis, L. Stella and P. Patrinos. Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms. Computational Optimization and Applications 82, 395–440 (2022).
[10]
A. De Marchi and A. Themelis. Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity. Journal of Optimization Theory and Applications 194, 771–794 (2022).
[11]
D. Davis and W. Yin. A three-operator splitting scheme and its optimization applications. Set-Valued and Variational Analysis 25, 829–858 (2017).
[12]
A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
[13]
B. C. Vũ. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Advances in Computational Mathematics 38, 667–681 (2013).
[14]
L. Condat. A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications 158, 460–479 (2013).
[15]
P. Latafat and P. Patrinos. Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Computational Optimization and Applications 68, 57–93 (2017).
+Bibliography · ProximalAlgorithms.jl

Bibliography

[1]
[2]
[3]
P.-L. Lions and B. Mercier. Splitting algorithms for the sum of two nonlinear operators. SIAM Journal on Numerical Analysis 16, 964–979 (1979).
[4]
J. Eckstein and D. P. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 55, 293–318 (1992).
[5]
P. Tseng. Accelerated proximal gradient methods for convex optimization (Technical report, University of Washington, Seattle, 2008).
[6]
A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2, 183–202 (2009).
[7]
L. Stella, A. Themelis, P. Sopasakis and P. Patrinos. A simple and efficient algorithm for nonlinear model predictive control. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC) (IEEE, 2017); pp. 1939–1944.
[8]
A. Themelis, L. Stella and P. Patrinos. Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone linesearch algorithms. SIAM Journal on Optimization 28, 2274–2303 (2018).
[9]
A. Themelis, L. Stella and P. Patrinos. Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms. Computational Optimization and Applications 82, 395–440 (2022).
[10]
A. De Marchi and A. Themelis. Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity. Journal of Optimization Theory and Applications 194, 771–794 (2022).
[11]
D. Davis and W. Yin. A three-operator splitting scheme and its optimization applications. Set-Valued and Variational Analysis 25, 829–858 (2017).
[12]
A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
[13]
B. C. Vũ. A splitting algorithm for dual monotone inclusions involving cocoercive operators. Advances in Computational Mathematics 38, 667–681 (2013).
[14]
L. Condat. A primal–dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications 158, 460–479 (2013).
[15]
P. Latafat and P. Patrinos. Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Computational Optimization and Applications 68, 57–93 (2017).
diff --git a/dev/examples/sparse_linear_regression/index.html b/dev/examples/sparse_linear_regression/index.html index 2b94e38..3c84cf4 100644 --- a/dev/examples/sparse_linear_regression/index.html +++ b/dev/examples/sparse_linear_regression/index.html @@ -45,4 +45,4 @@ reg = ProximalOperators.NormL1(1)
ProximalOperators.NormL1{Int64}(1)

We want to minimize the sum of training_loss and reg, and for this task we can use FastForwardBackward, which implements the fast proximal gradient method (also known as fast forward-backward splitting, or FISTA). Therefore we construct the algorithm, then apply it to our problem by providing a starting point, and the objective terms f=training_loss (smooth) and g=reg (nonsmooth).

ffb = ProximalAlgorithms.FastForwardBackward()
 solution, iterations = ffb(x0 = zeros(n_features + 1), f = training_loss, g = reg)
([0.0, -9.84468723961304, 23.777300627851535, 12.957770377540996, -4.7470469711076575, 0.0, -11.013138882753212, 0.0, 24.351386511781616, 3.2906368579992815, 151.01169590643272], 285)

We can now check how well the trained model performs on the test portion of our data.

test_output = standardized_linear_model(solution, test_input)
-mean_squared_error(test_label, test_output)
1369.3780923676934

This page was generated using Literate.jl.

+mean_squared_error(test_label, test_output)
1369.3780923676934

This page was generated using Literate.jl.

diff --git a/dev/guide/custom_algorithms/index.html b/dev/guide/custom_algorithms/index.html index c1caa6f..0f6fea9 100644 --- a/dev/guide/custom_algorithms/index.html +++ b/dev/guide/custom_algorithms/index.html @@ -1,2 +1,2 @@ -Custom algorithms · ProximalAlgorithms.jl

Custom algorithms

Warning

This page is under construction, and may be incomplete.

ProximalAlgorithms.IterativeAlgorithmType
IterativeAlgorithm(T; maxit, stop, solution, verbose, freq, display, kwargs...)

Wrapper for an iterator type T, adding termination and verbosity options on top of it.

This is a convenience constructor to allow for "partial" instantiation of an iterator of type T. The resulting "algorithm" object alg can be called on a set of keyword arguments, which will be merged with kwargs and passed on to T to construct an iterator which will be looped over. Specifically, if an algorithm is constructed as

alg = IterativeAlgorithm(T; maxit, stop, solution, verbose, freq, display, kwargs...)

then calling it with

alg(; more_kwargs...)

will internally loop over an iterator constructed as

T(; alg.kwargs..., more_kwargs...)

Note

This constructor is not meant to be used directly: instead, algorithm-specific constructors, setting appropriate default functions for stop, solution, and display, should be defined on top of it and exposed to the user.

Arguments

  • T::Type: iterator type to use
  • maxit::Int: maximum number of iterations
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool: whether the algorithm state should be displayed
  • freq::Int: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: keyword arguments to pass on to T when constructing the iterator
source
+Custom algorithms · ProximalAlgorithms.jl

Custom algorithms

Warning

This page is under construction, and may be incomplete.

ProximalAlgorithms.IterativeAlgorithmType
IterativeAlgorithm(T; maxit, stop, solution, verbose, freq, display, kwargs...)

Wrapper for an iterator type T, adding termination and verbosity options on top of it.

This is a convenience constructor to allow for "partial" instantiation of an iterator of type T. The resulting "algorithm" object alg can be called on a set of keyword arguments, which will be merged with kwargs and passed on to T to construct an iterator which will be looped over. Specifically, if an algorithm is constructed as

alg = IterativeAlgorithm(T; maxit, stop, solution, verbose, freq, display, kwargs...)

then calling it with

alg(; more_kwargs...)

will internally loop over an iterator constructed as

T(; alg.kwargs..., more_kwargs...)

Note

This constructor is not meant to be used directly: instead, algorithm-specific constructors, setting appropriate default functions for stop, solution, and display, should be defined on top of it and exposed to the user.

Arguments

  • T::Type: iterator type to use
  • maxit::Int: maximum number of iterations
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool: whether the algorithm state should be displayed
  • freq::Int: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: keyword arguments to pass on to T when constructing the iterator
source
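
As a minimal sketch of this pattern, the following defines a toy gradient-descent iterator and wraps it into a user-facing algorithm via IterativeAlgorithm. Everything named GD* below, as well as the chosen defaults, is illustrative and not part of the package; the call pattern in the final comment assumes the behavior documented above.

using ProximalAlgorithms
using LinearAlgebra: norm

# A toy iterator: plain gradient descent on the quadratic ½‖x - b‖².
# It yields its own state and never terminates on its own, as described above.
Base.@kwdef struct GDIteration{Tb,Tx,R}
    b::Tb
    x0::Tx
    stepsize::R = 0.5
end

struct GDState{Tx}
    x::Tx
    grad::Tx
end

function Base.iterate(iter::GDIteration, state = GDState(copy(iter.x0), iter.x0 - iter.b))
    x = state.x - iter.stepsize * state.grad    # gradient step
    next = GDState(x, x - iter.b)               # new iterate and its gradient
    return next, next                           # the yielded element is the state itself
end

# Algorithm-specific constructor built on top of IterativeAlgorithm, setting
# default termination, solution, and display functions.
GD(; kwargs...) = ProximalAlgorithms.IterativeAlgorithm(
    GDIteration;
    maxit = 1_000,
    stop = (iter, state) -> norm(state.grad) <= 1e-6,
    solution = (iter, state) -> state.x,
    verbose = false,
    freq = 100,
    display = (it, iter, state) -> println(it, " ", norm(state.grad)),
    kwargs...,
)

# Expected usage, assuming the documented call pattern:
# solution, iterations = GD()(b = ones(5), x0 = zeros(5))
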
diff --git a/dev/guide/custom_objectives/425b8cd6.svg b/dev/guide/custom_objectives/b98d945f.svg similarity index 93% rename from dev/guide/custom_objectives/425b8cd6.svg rename to dev/guide/custom_objectives/b98d945f.svg index fb756c0..d60291c 100644 (regenerated plot SVG; markup omitted)
diff --git a/dev/guide/custom_objectives/index.html b/dev/guide/custom_objectives/index.html index 81fa954..3803fab 100644 --- a/dev/guide/custom_objectives/index.html +++ b/dev/guide/custom_objectives/index.html @@ -1,5 +1,5 @@ -Custom objective terms · ProximalAlgorithms.jl

Custom objective terms

ProximalAlgorithms relies on the first-order primitives defined in ProximalCore. While ProximalOperators provides a rich library of function types implementing such primitives, one may need to formulate problems using custom objective terms. When that is the case, one only needs to implement the right first-order primitive, $\nabla f$ or $\operatorname{prox}_{\gamma f}$ or both, for algorithms to be able to work with $f$.

Defining the proximal mapping for a custom function type requires adding a method for ProximalCore.prox!.

To compute gradients, algorithms use ProximalAlgorithms.value_and_gradient_closure: this relies on AbstractDifferentiation, for automatic differentiation with any of its supported backends, when functions are wrapped in ProximalAlgorithms.AutoDifferentiable, as the examples below show.

If however you would like to provide your own gradient implementation (e.g. for efficiency reasons), you can simply implement a method for ProximalAlgorithms.value_and_gradient_closure on your own function type.

ProximalCore.proxFunction
prox(f, x, gamma=1)

Proximal mapping for f, evaluated at x, with stepsize gamma.

The proximal mapping is defined as

\[\mathrm{prox}_{\gamma f}(x) = \arg\min_z \left\{ f(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \right\}.\]

Returns a tuple (y, fy) consisting of

  • y: the output of the proximal mapping of f at x with stepsize gamma
  • fy: the value of f at y

See also: prox!.

source
ProximalCore.prox!Function
prox!(y, f, x, gamma=1)

In-place proximal mapping for f, evaluated at x, with stepsize gamma.

The proximal mapping is defined as

\[\mathrm{prox}_{\gamma f}(x) = \arg\min_z \left\{ f(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \right\}.\]

The result is written to the (pre-allocated) array y, which should have the same shape/size as x.

Returns the value of f at y.

See also: prox.

source

Example: constrained Rosenbrock

Let's try to minimize the celebrated Rosenbrock function, but constrained to the unit norm ball. The cost function is

using Zygote
+Custom objective terms · ProximalAlgorithms.jl

Custom objective terms

ProximalAlgorithms relies on the first-order primitives defined in ProximalCore. While ProximalOperators provides a rich library of function types implementing such primitives, one may need to formulate problems using custom objective terms. When that is the case, one only needs to implement the right first-order primitive, $\nabla f$ or $\operatorname{prox}_{\gamma f}$ or both, for algorithms to be able to work with $f$.

Defining the proximal mapping for a custom function type requires adding a method for ProximalCore.prox!.

To compute gradients, algorithms use ProximalAlgorithms.value_and_gradient_closure: this relies on AbstractDifferentiation, for automatic differentiation with any of its supported backends, when functions are wrapped in ProximalAlgorithms.AutoDifferentiable, as the examples below show.
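
For instance, a plain Julia function can be wrapped as follows; the (function, backend) argument order of AutoDifferentiable is assumed here, consistent with the Rosenbrock example further below.

using Zygote
using AbstractDifferentiation: ZygoteBackend
using ProximalAlgorithms

mean_square = ProximalAlgorithms.AutoDifferentiable(
    x -> sum(abs2, x) / length(x),   # any differentiable Julia function
    ZygoteBackend(),                 # AbstractDifferentiation backend used for its gradient
)
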

If however you would like to provide your own gradient implementation (e.g. for efficiency reasons), you can simply implement a method for ProximalAlgorithms.value_and_gradient_closure on your own function type.

ProximalCore.proxFunction
prox(f, x, gamma=1)

Proximal mapping for f, evaluated at x, with stepsize gamma.

The proximal mapping is defined as

\[\mathrm{prox}_{\gamma f}(x) = \arg\min_z \left\{ f(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \right\}.\]

Returns a tuple (y, fy) consisting of

  • y: the output of the proximal mapping of f at x with stepsize gamma
  • fy: the value of f at y

See also: prox!.

source
ProximalCore.prox!Function
prox!(y, f, x, gamma=1)

In-place proximal mapping for f, evaluated at x, with stepsize gamma.

The proximal mapping is defined as

\[\mathrm{prox}_{\gamma f}(x) = \arg\min_z \left\{ f(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \right\}.\]

The result is written to the (pre-allocated) array y, which should have the same shape/size as x.

Returns the value of f at y.

See also: prox.

source
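
As a minimal sketch of the proximal route, here is a custom proximable term, the scaled squared norm $f(x) = \tfrac{\mu}{2}\|x\|^2$, with its proximal mapping implemented by adding a method to ProximalCore.prox!. The type name ScaledSqrNorm and its field are illustrative, not part of ProximalCore or ProximalOperators.

using ProximalCore

struct ScaledSqrNorm
    mu::Float64
end

(f::ScaledSqrNorm)(x) = f.mu / 2 * sum(abs2, x)

function ProximalCore.prox!(y, f::ScaledSqrNorm, x, gamma)
    # argmin_z { (mu/2)‖z‖² + 1/(2 gamma)‖z - x‖² } has the closed form z = x / (1 + gamma * mu)
    y .= x ./ (1 + gamma * f.mu)
    return f(y)   # prox! returns the value of f at the result
end

With this in place, the out-of-place prox documented above should also work on ScaledSqrNorm objects, since it is built on top of prox!.
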

Example: constrained Rosenbrock

Let's try to minimize the celebrated Rosenbrock function, but constrained to the unit norm ball. The cost function is

using Zygote
 using AbstractDifferentiation: ZygoteBackend
 using ProximalAlgorithms
 
@@ -38,7 +38,7 @@
     color = :red,
     markershape = :star5,
     label = "computed solution",
-)
Example block output

Example: counting operations

It is often interesting to measure how many operations (gradient or prox evaluations) an algorithm is taking. In fact, in algorithms involving backtracking or some other line-search logic, the iteration count may not be entirely representative of the number of operations being performed; or maybe some specific implementations require additional operations to be performed when checking stopping conditions. All of this makes it difficult to quantify the exact iteration complexity.

We can achieve this by wrapping functions in a dedicated Counting type:

mutable struct Counting{T}
+)
Example block output

Example: counting operations

It is often interesting to measure how many operations (gradient or prox evaluations) an algorithm is taking. In fact, in algorithms involving backtracking or some other line-search logic, the iteration count may not be entirely representative of the number of operations being performed; or maybe some specific implementations require additional operations to be performed when checking stopping conditions. All of this makes it difficult to quantify the exact iteration complexity.

We can achieve this by wrapping functions in a dedicated Counting type:

mutable struct Counting{T}
     f::T
     eval_count::Int
     gradient_count::Int
@@ -65,4 +65,4 @@
 println("gradient evals: $(f.gradient_count)")
 println("    prox evals: $(g.prox_count)")
function evals: 115
 gradient evals: 107
-    prox evals: 79

This page was generated using Literate.jl.

+ prox evals: 79

This page was generated using Literate.jl.

diff --git a/dev/guide/getting_started/642d70e1.svg b/dev/guide/getting_started/0ce0ae19.svg similarity index 90% rename from dev/guide/getting_started/642d70e1.svg rename to dev/guide/getting_started/0ce0ae19.svg index 800353e..cac8939 100644 (regenerated plot SVG; markup omitted)
diff --git a/dev/guide/getting_started/61b01d91.svg b/dev/guide/getting_started/f5715fe9.svg similarity index 92% rename from dev/guide/getting_started/61b01d91.svg rename to dev/guide/getting_started/f5715fe9.svg index d1394f9..23eb5d3 100644 (regenerated plot SVG; markup omitted)
diff --git a/dev/guide/getting_started/index.html b/dev/guide/getting_started/index.html index 9e530c2..f0bd35e 100644 --- a/dev/guide/getting_started/index.html +++ b/dev/guide/getting_started/index.html @@ -31,7 +31,7 @@ color = :red, markershape = :star5, label = "computed solution", -)Example block output

Iterator interface

Under the hood, algorithms are implemented in the form of standard Julia iterators: constructing such iterator objects directly, and looping over them, allows for more fine-grained control over the termination condition, or what information from the iterations get logged.

Each iterator is constructed with the full problem description (objective terms and, if needed, additional information like Lipschitz constants) and algorithm options (usually step sizes, and any other parameter or option of the algorithm), and produces the sequence of states of the algorithm, so that one can do (almost) anything with it.

Note

Iterators only implement the algorithm iteration logic, and not additional details like stopping criteria. As such, iterators usually yield an infinite sequence of states: when looping over them, be careful to properly guard the loop with a stopping criterion.

Warning

To save on allocations, most (if not all) algorithms re-use state objects when iterating, by updating the state in place instead of creating a new one. For this reason:

  • one should not mutate the state object in any way, as this may corrupt the algorithm's logic;
  • one should not collect the sequence of states, since this will result in an array of identical objects.

Iterator types are named after the algorithm they implement, so the relationship should be obvious:

  • the ForwardBackward algorithm uses the ForwardBackwardIteration iterator type;
  • the FastForwardBackward algorithm uses the FastForwardBackwardIteration iterator type;
  • the DouglasRachford algorithm uses the DouglasRachfordIteration iterator type;

and so on.

Let's see what this means in terms of the previous example.

Example: box constrained quadratic (cont)

Let's solve the problem from the previous example by directly interacting with the underlying iterator: the FastForwardBackward algorithm internally uses a FastForwardBackwardIteration object.

ffbiter = ProximalAlgorithms.FastForwardBackwardIteration(
+)
Example block output

Iterator interface

Under the hood, algorithms are implemented in the form of standard Julia iterators: constructing such iterator objects directly, and looping over them, allows for more fine-grained control over the termination condition, or what information from the iterations get logged.

Each iterator is constructed with the full problem description (objective terms and, if needed, additional information like Lipschitz constants) and algorithm options (usually step sizes, and any other parameter or option of the algorithm), and produces the sequence of states of the algorithm, so that one can do (almost) anything with it.

Note

Iterators only implement the algorithm iteration logic, and not additional details like stopping criteria. As such, iterators usually yield an infinite sequence of states: when looping over them, be careful to properly guard the loop with a stopping criterion.
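
For instance, a minimal way of guarding the loop is to bound the number of iterations by hand, as in the following sketch (ffbiter stands for the iterator constructed further below; the bound of 1_000 iterations is illustrative):

for (it, state) in enumerate(ffbiter)
    # inspect or log `state` here, without mutating or collecting it
    it >= 1_000 && break   # guard: the iterator itself would never terminate
end
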

Warning

To save on allocations, most (if not all) algorithms re-use state objects when iterating, by updating the state in place instead of creating a new one. For this reason:

  • one should not mutate the state object in any way, as this may corrupt the algorithm's logic;
  • one should not collect the sequence of states, since this will result in an array of identical objects.

Iterator types are named after the algorithm they implement, so the relationship should be obvious:

  • the ForwardBackward algorithm uses the ForwardBackwardIteration iterator type;
  • the FastForwardBackward algorithm uses the FastForwardBackwardIteration iterator type;
  • the DouglasRachford algorithm uses the DouglasRachfordIteration iterator type;

and so on.

Let's see what this means in terms of the previous example.

Example: box constrained quadratic (cont)

Let's solve the problem from the previous example by directly interacting with the underlying iterator: the FastForwardBackward algorithm internally uses a FastForwardBackwardIteration object.

ffbiter = ProximalAlgorithms.FastForwardBackwardIteration(
     x0 = ones(2),
     f = quadratic_cost,
     g = box_indicator,
@@ -64,4 +64,4 @@
     color = :red,
     markershape = :star5,
     label = "computed solution",
-)
Example block output
Note

Since each algorithm iterator type has its own logic, it will also have its own dedicated state structure. Interacting with the state then requires being familiar with its structure, and with the nature of its attributes.


This page was generated using Literate.jl.

+)Example block output
Note

Since each algorithm iterator type has its own logic, it will also have its own dedicated state structure. Interacting with the state then requires being familiar with its structure, and with the nature of its attributes.


This page was generated using Literate.jl.

diff --git a/dev/guide/implemented_algorithms/index.html b/dev/guide/implemented_algorithms/index.html index eb55434..41f5f8f 100644 --- a/dev/guide/implemented_algorithms/index.html +++ b/dev/guide/implemented_algorithms/index.html @@ -1,2 +1,2 @@ -Problem types and algorithms · ProximalAlgorithms.jl

Problem types and algorithms

Warning

This page is under construction, and may be incomplete.

Depending on the structure a problem can be reduced to, different types of algorithms will apply. The major distinctions are in the number of objective terms, whether any of them is differentiable, and whether they are composed with some linear mapping (which in general complicates evaluating the proximal mapping). Based on this we can split problems, and the algorithms that apply to them, into three categories:

In what follows, the list of available algorithms is given, with links to the documentation for their constructors and their underlying iterator type.

Two-terms: $f + g$

This is the most popular model, by far the most thoroughly studied, and an abundance of algorithms exist to solve problems in this form.

Algorithm | Assumptions | Oracle | Implementation | References
Proximal gradient | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ForwardBackward | [3]
Douglas-Rachford | | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DouglasRachford | [4]
Fast proximal gradient | $f$ convex, smooth, $g$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | FastForwardBackward | [5], [6]
PANOC | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOC | [7]
ZeroFPR | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ZeroFPR | [8]
Douglas-Rachford line-search | $f$ smooth | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DRLS | [9]
PANOC+ | $f$ locally smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOCplus | [10]
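
As a quick usage sketch of this model, the following solves a small lasso-type problem with the fast proximal gradient method. The data are random and illustrative, and the smooth term is wrapped for automatic differentiation as described in the custom objectives guide.

using Zygote
using AbstractDifferentiation: ZygoteBackend
using ProximalAlgorithms
using ProximalOperators: NormL1

A, b = randn(30, 50), randn(30)
f = ProximalAlgorithms.AutoDifferentiable(x -> 0.5 * sum(abs2, A * x - b), ZygoteBackend())
g = NormL1(1.0)

ffb = ProximalAlgorithms.FastForwardBackward()
solution, iterations = ffb(x0 = zeros(50), f = f, g = g)
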
ProximalAlgorithms.ForwardBackwardFunction
ForwardBackward(; <keyword-arguments>)

Constructs the forward-backward splitting algorithm [1].

This algorithm solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{ForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ForwardBackwardIteration constructor upon call

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
ProximalAlgorithms.ForwardBackwardIterationType
ForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the forward-backward splitting algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: ForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use, defaults to 1/Lf if not set (but Lf is).
  • adaptive=false: forces the method stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
ProximalAlgorithms.DouglasRachfordFunction
DouglasRachford(; <keyword-arguments>)

Constructs the Douglas-Rachford splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x).

The returned object has type IterativeAlgorithm{DouglasRachfordIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DouglasRachfordIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DouglasRachfordIteration constructor upon call

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
ProximalAlgorithms.DouglasRachfordIterationType
DouglasRachfordIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x).

See also: DouglasRachford.

Arguments

  • x0: initial point.
  • f=Zero(): proximable objective term.
  • g=Zero(): proximable objective term.
  • gamma: stepsize to use.

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
ProximalAlgorithms.FastForwardBackwardFunction
FastForwardBackward(; <keyword-arguments>)

Constructs the accelerated forward-backward splitting algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{FastForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: FastForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the FastForwardBackwardIteration constructor upon call

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
ProximalAlgorithms.FastForwardBackwardIterationType
FastForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the accelerated forward-backward splitting algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: FastForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=0: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize, defaults to 1/Lf if Lf is set, and nothing otherwise.
  • adaptive=true: makes gamma adaptively adjust during the iterations; this is the default when gamma === nothing.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • extrapolation_sequence=nothing: sequence (iterator) of extrapolation coefficients to use for acceleration.

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
ProximalAlgorithms.PANOCFunction
PANOC(; <keyword-arguments>)

Constructs the PANOC algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCIteration constructor upon call

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
ProximalAlgorithms.PANOCIterationType
PANOCIteration(; <keyword-arguments>)

Iterator implementing the PANOC algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: PANOC.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use, defaults to 1/Lf if not set (but Lf is).
  • adaptive=false: forces the method stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
ProximalAlgorithms.ZeroFPRFunction
ZeroFPR(; <keyword-arguments>)

Constructs the ZeroFPR algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{ZeroFPRIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ZeroFPRIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ZeroFPRIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
ProximalAlgorithms.ZeroFPRIterationType
ZeroFPRIteration(; <keyword-arguments>)

Iterator implementing the ZeroFPR algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: ZeroFPR.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use, defaults to 1/Lf if not set (but Lf is).
  • adaptive=false: forces the method stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
ProximalAlgorithms.DRLSFunction
DRLS(; <keyword-arguments>)

Constructs the Douglas-Rachford line-search algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DRLSIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DRLSIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DRLSIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", Computational Optimization and Applications, vol. 82, no. 2, pp. 395-440 (2022).
source
ProximalAlgorithms.DRLSIterationType
DRLSIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford line-search algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: DRLS.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=nothing: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma: stepsize to use, chosen appropriately based on Lf and mf by default.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", Computational Optimization and Applications, vol. 82, no. 2, pp. 395-440 (2022).
source
ProximalAlgorithms.PANOCplusFunction
PANOCplus(; <keyword-arguments>)

Constructs the PANOCplus algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCplusIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCplusIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCplusIteration constructor upon call

References

  1. De Marchi, Themelis, "Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity", Journal of Optimization Theory and Applications, vol. 194, no. 3, pp. 771-794 (2022).
source
ProximalAlgorithms.PANOCplusIterationType
PANOCplusIteration(; <keyword-arguments>)

Iterator implementing the PANOCplus algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

See also: PANOCplus.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use, defaults to 1/Lf if not set (but Lf is).
  • adaptive=false: forces the method stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. De Marchi, Themelis, "Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity", Journal of Optimization Theory and Applications, vol. 194, no. 3, pp. 771-794 (2022).
source

Three-terms: $f + g + h$

When the objective contains more than one non-differentiable term, algorithms from the previous section do not in general apply out of the box, since $\operatorname{prox}_{\gamma (g + h)}$ does not have a closed form except in particular cases. Therefore, ad-hoc iteration schemes have been studied.

Algorithm | Assumptions | Oracle | Implementation | References
Davis-Yin | $f$ convex and smooth, $g, h$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$ | DavisYin | [11]
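
As a usage sketch of this model, the following combines a smooth data-fitting term, an $\ell_1$ penalty, and a box constraint with Davis-Yin splitting. The problem data are illustrative, the smooth term is wrapped for automatic differentiation as in the custom objectives guide, and the same keyword call pattern as the other algorithms on this page is assumed.

using Zygote
using AbstractDifferentiation: ZygoteBackend
using ProximalAlgorithms
using ProximalOperators: NormL1, IndBox

b = randn(20)
f = ProximalAlgorithms.AutoDifferentiable(x -> 0.5 * sum(abs2, x - b), ZygoteBackend())
g = NormL1(0.1)
h = IndBox(0.0, 1.0)

dy = ProximalAlgorithms.DavisYin()
solution, iterations = dy(x0 = zeros(20), f = f, g = g, h = h, Lf = 1.0)
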
ProximalAlgorithms.DavisYinFunction
DavisYin(; <keyword-arguments>)

Constructs the Davis-Yin splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DavisYinIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DavisYinIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DavisYinIteration constructor upon call

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829–858 (2017).
source
ProximalAlgorithms.DavisYinIterationType
DavisYinIteration(; <keyword-arguments>)

Iterator implementing the Davis-Yin splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

See also DavisYin.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use, defaults to 1/Lf if not set (but Lf is).

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829-858 (2017).
source

Primal-dual: $f + g + h \circ L$

When a function $h$ is composed with a linear operator $L$, the proximal operator of $h \circ L$ does not have a closed form in general. For this reason, specific algorithms by the name of "primal-dual" splitting schemes are often applied to this model.

Algorithm | Assumptions | Oracle | Implementation | References
Chambolle-Pock | $f\equiv 0$, $g, h$ convex, $L$ linear operator | $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | ChambollePock | [12]
Vu-Condat | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | VuCondat | [13], [14]
AFBA | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | AFBA | [15]
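
As a usage sketch of this model, the following solves $\min_x \|x\|_1 + \tfrac{1}{2}\|Ax - b\|^2$ with Chambolle-Pock, treating the second term as $h(Lx)$ with $L = A$. Data and stepsizes are illustrative, the stepsizes are chosen so that gamma1 * gamma2 * ‖A‖² < 1, and the keyword call pattern (arguments forwarded to AFBAIteration) is assumed from the docstrings below; the first return value is whatever the default solution mapping identifies.

using ProximalAlgorithms
using ProximalOperators: NormL1, SqrNormL2, Translate
using LinearAlgebra: opnorm

A, b = randn(40, 60), randn(40)
g = NormL1(1.0)
h = Translate(SqrNormL2(1.0), -b)   # h(z) = ½‖z - b‖², proximable
L = A

cp = ProximalAlgorithms.ChambollePock()
sigma = 0.99 / opnorm(A)
solution, iterations = cp(x0 = zeros(60), y0 = zeros(40), g = g, h = h, L = L,
                          gamma1 = sigma, gamma2 = sigma)
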
ProximalAlgorithms.ChambollePockFunction
ChambollePock(; <keyword-arguments>)

Constructs the Chambolle-Pock primal-dual algorithm [1].

This algorithm solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ChambollePockIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
ProximalAlgorithms.ChambollePockIterationFunction
ChambollePockIteration(; <keyword-arguments>)

Iterator implementing the Chambolle-Pock primal-dual algorithm [1].

This iterator solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

See also: AFBAIteration, ChambollePock.

This iteration is equivalent to AFBAIteration with theta=2, f=Zero(), l=IndZero(); for all other arguments see AFBAIteration.

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
ProximalAlgorithms.VuCondatFunction
VuCondat(; <keyword-arguments>)

Constructs the Vũ-Condat primal-dual algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: VuCondatIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
ProximalAlgorithms.VuCondatIterationFunction
VuCondatIteration(; <keyword-arguments>)

Iterator implementing the Vũ-Condat primal-dual algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

This iteration is equivalent to AFBAIteration with theta=2; for all other arguments see AFBAIteration.

See also: AFBAIteration, VuCondat.

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
ProximalAlgorithms.AFBAFunction
AFBA(; <keyword-arguments>)

Constructs the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
source
ProximalAlgorithms.AFBAIterationType
AFBAIteration(; <keyword-arguments>)

Iterator implementing the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

Points x0 and y0 are the initial primal and dual iterates, respectively. If unspecified, functions f, g, and h default to the identically zero function, l defaults to the indicator of the set {0}, and L defaults to the identity. Important keyword arguments, in case f and l are set, are the Lipschitz constants beta_f and beta_l (see below).

The iterator implements Algorithm 3 of [1] with constant stepsize (α_n=λ) for several prominent special cases:

  1. θ = 2 ==> Corresponds to the Vu-Condat Algorithm [3, 4].
  2. θ = 1, μ=1
  3. θ = 0, μ=1
  4. θ ∈ [0,∞), μ=0

See [2, Section 5.2] and [1, Figure 1] for stepsize conditions, special cases, and relation to other algorithms.

See also: AFBA.

Arguments

  • x0: initial primal point.
  • y0: initial dual point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • l=IndZero(): strongly convex function.
  • L=I: linear operator (e.g. a matrix).
  • beta_f=0: Lipschitz constant of the gradient of f.
  • beta_l=0: Lipschitz constant of the gradient of the conjugate of l.
  • theta=1: nonnegative algorithm parameter.
  • mu=1: algorithm parameter in the range [0,1].
  • gamma1: primal stepsize (see [1] for the default choice).
  • gamma2: dual stepsize (see [1] for the default choice).

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
  3. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  4. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
+Problem types and algorithms · ProximalAlgorithms.jl

Problem types and algorithms

Warning

This page is under construction, and may be incomplete.

Depending on the structure a problem can be reduced to, different types of algorithms will apply. The major distinctions are in the number of objective terms, whether any of them is differentiable, and whether they are composed with some linear mapping (which in general complicates evaluating the proximal mapping). Based on this we can split problems, and the algorithms that apply to them, into three categories:

In what follows, the list of available algorithms is given, with links to the documentation for their constructors and their underlying iterator type.

Two-terms: $f + g$

This is the most popular model, by far the most thoroughly studied, and an abundance of algorithms exist to solve problems in this form.

Algorithm | Assumptions | Oracle | Implementation | References
Proximal gradient | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ForwardBackward | [3]
Douglas-Rachford | | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DouglasRachford | [4]
Fast proximal gradient | $f$ convex, smooth, $g$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | FastForwardBackward | [5], [6]
PANOC | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOC | [7]
ZeroFPR | $f$ smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | ZeroFPR | [8]
Douglas-Rachford line-search | $f$ smooth | $\operatorname{prox}_{\gamma f}$, $\operatorname{prox}_{\gamma g}$ | DRLS | [9]
PANOC+ | $f$ locally smooth | $\nabla f$, $\operatorname{prox}_{\gamma g}$ | PANOCplus | [10]
ProximalAlgorithms.ForwardBackwardFunction
ForwardBackward(; <keyword-arguments>)

Constructs the forward-backward splitting algorithm [1].

This algorithm solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{ForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol::1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ForwardBackwardIteration constructor upon call

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
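To make the call pattern concrete, here is a minimal sketch of solving a lasso-type problem with this constructor. It assumes that ProximalOperators provides the LeastSquares and NormL1 terms, and that the constructed algorithm object accepts the problem data (x0, f, g, Lf) as keyword arguments and returns the found solution together with the iteration count.

using LinearAlgebra
using ProximalAlgorithms
using ProximalOperators  # assumed source of LeastSquares and NormL1

A, b = randn(30, 50), randn(30)
f = LeastSquares(A, b)   # smooth term: 1/2 ||A x - b||^2
g = NormL1(1.0)          # nonsmooth term: ||x||_1
Lf = opnorm(A)^2         # Lipschitz constant of ∇f

fb = ProximalAlgorithms.ForwardBackward(maxit=5_000, tol=1e-6)
solution, iterations = fb(x0=zeros(50), f=f, g=g, Lf=Lf)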
ProximalAlgorithms.ForwardBackwardIterationType
ForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the forward-backward splitting algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: ForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use; when unset, it defaults to 1/Lf (provided Lf is set).
  • adaptive=false: if true, forces the stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.

References

  1. Lions, Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM Journal on Numerical Analysis, vol. 16, pp. 964–979 (1979).
source
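For finer control, the iterator can also be stepped through directly with Julia's standard iteration tools. The sketch below assumes the iterator follows the usual iteration protocol, and it deliberately avoids touching the state's internal fields, which are implementation details; LeastSquares and NormL1 are again assumed to come from ProximalOperators.

using ProximalAlgorithms
using ProximalOperators

f = LeastSquares(randn(10, 20), randn(10))
g = NormL1(0.1)

iter = ProximalAlgorithms.ForwardBackwardIteration(x0=zeros(20), f=f, g=g, adaptive=true)

# Run a fixed number of iterations by composing with Iterators.take.
for (k, state) in enumerate(Iterators.take(iter, 100))
    k % 25 == 0 && println("completed iteration ", k)
end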
ProximalAlgorithms.DouglasRachfordFunction
DouglasRachford(; <keyword-arguments>)

Constructs the Douglas-Rachford splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x).

The returned object has type IterativeAlgorithm{DouglasRachfordIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DouglasRachfordIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DouglasRachfordIteration constructor upon call

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
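A hedged usage sketch along the same lines as above: both terms are accessed through their proximal mappings, and gamma (required by the underlying iteration) is passed at construction so that it is forwarded to DouglasRachfordIteration. LeastSquares and NormL1 are assumed to come from ProximalOperators, with the same call/return assumptions as before.

using ProximalAlgorithms
using ProximalOperators

A, b = randn(20, 40), randn(20)
f = LeastSquares(A, b)   # accessed through prox_γf
g = NormL1(0.5)          # accessed through prox_γg

dr = ProximalAlgorithms.DouglasRachford(gamma=1.0, maxit=2_000)
solution, iterations = dr(x0=zeros(40), f=f, g=g)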
ProximalAlgorithms.DouglasRachfordIterationType
DouglasRachfordIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x).

See also: DouglasRachford.

Arguments

  • x0: initial point.
  • f=Zero(): proximable objective term.
  • g=Zero(): proximable objective term.
  • gamma: stepsize to use.

References

  1. Eckstein, Bertsekas, "On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators", Mathematical Programming, vol. 55, no. 1, pp. 293-318 (1992).
source
ProximalAlgorithms.FastForwardBackwardFunction
FastForwardBackward(; <keyword-arguments>)

Constructs the accelerated forward-backward splitting algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{FastForwardBackwardIteration}, and can be called with the problem's arguments to trigger its solution.

See also: FastForwardBackwardIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the FastForwardBackwardIteration constructor upon call

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
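The accelerated variant is invoked in the same way as ForwardBackward above; in this sketch the Lipschitz constant of ∇f is supplied explicitly, under the same assumptions on the terms (from ProximalOperators) and on the call/return pattern.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(40, 100), randn(40)
f = LeastSquares(A, b)
g = NormL1(1.0)

ffb = ProximalAlgorithms.FastForwardBackward(tol=1e-6)
solution, iterations = ffb(x0=zeros(100), f=f, g=g, Lf=opnorm(A)^2)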
ProximalAlgorithms.FastForwardBackwardIterationType
FastForwardBackwardIteration(; <keyword-arguments>)

Iterator implementing the accelerated forward-backward splitting algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: FastForwardBackward.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=0: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize, defaults to 1/Lf if Lf is set, and nothing otherwise.
  • adaptive=true: makes gamma adaptively adjust during the iterations; this is enabled by default, since gamma === nothing by default.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • extrapolation_sequence=nothing: sequence (iterator) of extrapolation coefficients to use for acceleration.

References

  1. Tseng, "On Accelerated Proximal Gradient Methods for Convex-Concave Optimization" (2008).
  2. Beck, Teboulle, "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems", SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202 (2009).
source
ProximalAlgorithms.PANOCFunction
PANOC(; <keyword-arguments>)

Constructs the PANOC algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCIteration constructor upon call

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
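Since PANOC handles the composite smooth term f(Ax), the linear operator is passed separately. In this sketch, 1/2 ||A x - b||^2 is expressed as Translate(SqrNormL2(), -b) composed with A; Translate and SqrNormL2 are assumed to come from ProximalOperators, and Lf is the Lipschitz constant of the gradient of x ↦ f(Ax).

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(30, 60), randn(30)
f = Translate(SqrNormL2(), -b)   # f(Ax) = 1/2 ||A x - b||^2
g = NormL1(1.0)

panoc = ProximalAlgorithms.PANOC()
solution, iterations = panoc(x0=zeros(60), f=f, A=A, g=g, Lf=opnorm(A)^2)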
ProximalAlgorithms.PANOCIterationType
PANOCIteration(; <keyword-arguments>)

Iterator implementing the PANOC algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: PANOC.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; when unset, it defaults to 1/Lf (provided Lf is set).
  • adaptive=false: if true, forces the stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Stella, Themelis, Sopasakis, Patrinos, "A simple and efficient algorithm for nonlinear model predictive control", 56th IEEE Conference on Decision and Control (2017).
source
ProximalAlgorithms.ZeroFPRFunction
ZeroFPR(; <keyword-arguments>)

Constructs the ZeroFPR algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{ZeroFPRIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ZeroFPRIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the ZeroFPRIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
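ZeroFPR is called like PANOC above; the sketch below additionally swaps in a larger L-BFGS memory for the quasi-Newton directions, assuming the LBFGS direction strategy is accessible through the package namespace, as the default value LBFGS(5) suggests. The problem terms are assumed from ProximalOperators as before.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(30, 60), randn(30)
f = Translate(SqrNormL2(), -b)
g = NormL1(1.0)

zerofpr = ProximalAlgorithms.ZeroFPR(directions=ProximalAlgorithms.LBFGS(10))
solution, iterations = zerofpr(x0=zeros(60), f=f, A=A, g=g, Lf=opnorm(A)^2)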
ProximalAlgorithms.ZeroFPRIterationType
ZeroFPRIteration(; <keyword-arguments>)

Iterator implementing the ZeroFPR algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is smooth and A is a linear mapping (for example, a matrix).

See also: ZeroFPR.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; when unset, it defaults to 1/Lf (provided Lf is set).
  • adaptive=false: if true, forces the stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Forward-backward envelope for the sum of two nonconvex functions: Further properties and nonmonotone line-search algorithms", SIAM Journal on Optimization, vol. 28, no. 3, pp. 2274-2303 (2018).
source
ProximalAlgorithms.DRLSFunction
DRLS(; <keyword-arguments>)

Constructs the Douglas-Rachford line-search algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DRLSIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DRLSIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DRLSIteration constructor upon call

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", Computational Optimization and Applications, vol. 82, no. 2, pp. 395-440 (2022).
source
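As the table above indicates, DRLS accesses both terms through their proximal mappings, so the smooth term must also be prox-friendly; LeastSquares satisfies this. The default stepsize is derived from Lf (and mf), which is why Lf is supplied in this sketch; the usual assumptions on ProximalOperators and on the call/return pattern apply.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(25, 50), randn(25)
f = LeastSquares(A, b)   # smooth, with an available prox
g = NormL1(0.3)

drls = ProximalAlgorithms.DRLS()
solution, iterations = drls(x0=zeros(50), f=f, g=g, Lf=opnorm(A)^2)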
ProximalAlgorithms.DRLSIterationType
DRLSIteration(; <keyword-arguments>)

Iterator implementing the Douglas-Rachford line-search algorithm [1].

This iterator solves optimization problems of the form

minimize f(x) + g(x),

where f is smooth.

See also: DRLS.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • mf=nothing: convexity modulus of f.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma: stepsize to use; by default, it is chosen appropriately based on Lf and mf.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. Themelis, Stella, Patrinos, "Douglas-Rachford splitting and ADMM for nonconvex optimization: Accelerated and Newton-type linesearch algorithms", Computational Optimization and Applications, vol. 82, no. 2, pp. 395-440 (2022).
source
ProximalAlgorithms.PANOCplusFunction
PANOCplus(; <keyword-arguments>)

Constructs the PANOCplus algorithm [1].

This algorithm solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

The returned object has type IterativeAlgorithm{PANOCplusIteration}, and can be called with the problem's arguments to trigger its solution.

See also: PANOCplusIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=1_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=10: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the PANOCplusIteration constructor upon call

References

  1. De Marchi, Themelis, "Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity", Journal of Optimization Theory and Applications, vol. 194, no. 3, pp. 771-794 (2022).
source
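Because PANOCplus only requires local smoothness, no global Lipschitz constant is supplied in this sketch; the stepsize is adjusted adaptively instead (adaptive=true). The same assumptions as in the earlier sketches apply.

using ProximalAlgorithms, ProximalOperators

A, b = randn(30, 60), randn(30)
f = Translate(SqrNormL2(), -b)
g = NormL1(1.0)

panocplus = ProximalAlgorithms.PANOCplus()
solution, iterations = panocplus(x0=zeros(60), f=f, A=A, g=g, adaptive=true)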
ProximalAlgorithms.PANOCplusIterationType
PANOCplusIteration(; <keyword-arguments>)

Iterator implementing the PANOCplus algorithm [1].

This iterator solves optimization problems of the form

minimize f(Ax) + g(x),

where f is locally smooth and A is a linear mapping (for example, a matrix).

See also: PANOCplus.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • A=I: linear operator (e.g. a matrix).
  • g=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of x ↦ f(Ax).
  • gamma=nothing: stepsize to use; when unset, it defaults to 1/Lf (provided Lf is set).
  • adaptive=false: if true, forces the stepsize to be adaptively adjusted.
  • minimum_gamma=1e-7: lower bound to gamma in case adaptive == true.
  • max_backtracks=20: maximum number of line-search backtracks.
  • directions=LBFGS(5): strategy to use to compute line-search directions.

References

  1. De Marchi, Themelis, "Proximal Gradient Algorithms under Local Lipschitz Gradient Continuity", Journal of Optimization Theory and Applications, vol. 194, no. 3, pp. 771-794 (2022).
source

Three-terms: $f + g + h$

When the objective contains more than one non-differentiable term, algorithms from the previous section do not in general apply out of the box, since $\operatorname{prox}_{\gamma (g + h)}$ has a closed form only in particular cases. Therefore, ad-hoc iteration schemes have been studied.

Algorithm | Assumptions | Oracle | Implementation | References
Davis-Yin | $f$ convex and smooth, $g, h$ convex | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$ | DavisYin | [11]
ProximalAlgorithms.DavisYinFunction
DavisYin(; <keyword-arguments>)

Constructs the Davis-Yin splitting algorithm [1].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

The returned object has type IterativeAlgorithm{DavisYinIteration}, and can be called with the problem's arguments to trigger its solution.

See also: DavisYinIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-8: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the DavisYinIteration constructor upon call

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829–858 (2017).
source
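A sketch with one smooth and two proximable terms; IndBox is assumed to come from ProximalOperators, and the call/return pattern is assumed to match the other algorithms on this page (the exact content of the returned solution is an implementation detail).

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(20, 40), randn(20)
f = LeastSquares(A, b)   # smooth term
g = NormL1(0.2)          # first proximable term
h = IndBox(-1.0, 1.0)    # second proximable term (box constraint)

dy = ProximalAlgorithms.DavisYin()
solution, iterations = dy(x0=zeros(40), f=f, g=g, h=h, Lf=opnorm(A)^2)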
ProximalAlgorithms.DavisYinIterationType
DavisYinIteration(; <keyword-arguments>)

Iterator implementing the Davis-Yin splitting algorithm [1].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + h(x),

where f is smooth.

See also: DavisYin.

Arguments

  • x0: initial point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • Lf=nothing: Lipschitz constant of the gradient of f.
  • gamma=nothing: stepsize to use; when unset, it defaults to 1/Lf (provided Lf is set).

References

  1. Davis, Yin. "A Three-Operator Splitting Scheme and its Optimization Applications", Set-Valued and Variational Analysis, vol. 25, no. 4, pp. 829-858 (2017).
source

Primal-dual: $f + g + h \circ L$

When a function $h$ is composed with a linear operator $L$, the proximal operator of $h \circ L$ does not have a closed form in general. For this reason, specific algorithms, known as "primal-dual" splitting schemes, are often applied to this model.

Algorithm | Assumptions | Oracle | Implementation | References
Chambolle-Pock | $f\equiv 0$, $g, h$ convex, $L$ linear operator | $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | ChambollePock | [12]
Vu-Condat | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | VuCondat | [13], [14]
AFBA | $f$ convex and smooth, $g, h$ convex, $L$ linear operator | $\nabla f$, $\operatorname{prox}_{\gamma g}$, $\operatorname{prox}_{\gamma h}$, $L$, $L^*$ | AFBA | [15]
ProximalAlgorithms.ChambollePockFunction
ChambollePock(; <keyword-arguments>)

Constructs the Chambolle-Pock primal-dual algorithm [1].

This algorithm solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: ChambollePockIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
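A sketch for the composite model g(x) + h(Lx): here h(Lx) = 1/2 ||L x - b||^2 via Translate(SqrNormL2(), -b), and g is an ℓ1 norm. The primal and dual stepsizes gamma1 and gamma2 are chosen so that gamma1 * gamma2 * ||L||^2 < 1; passing them here, and the return pattern, follow the same assumptions as the earlier sketches.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

L, b = randn(20, 40), randn(20)
g = NormL1(1.0)                  # accessed through its prox
h = Translate(SqrNormL2(), -b)   # accessed through its prox

nL = opnorm(L)
cp = ProximalAlgorithms.ChambollePock(gamma1=0.99/nL, gamma2=0.99/nL)
solution, iterations = cp(x0=zeros(40), y0=zeros(20), g=g, h=h, L=L)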
ProximalAlgorithms.ChambollePockIterationFunction
ChambollePockIteration(; <keyword-arguments>)

Iterator implementing the Chambolle-Pock primal-dual algorithm [1].

This iterator solves convex optimization problems of the form

minimize g(x) + h(L x),

where g and h are possibly nonsmooth, and L is a linear mapping.

See also: AFBAIteration, ChambollePock.

This iteration is equivalent to AFBAIteration with theta=2, f=Zero(), l=IndZero(); for all other arguments see AFBAIteration.

References

  1. Chambolle, Pock, "A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging", Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145 (2011).
source
ProximalAlgorithms.VuCondatFunction
VuCondat(; <keyword-arguments>)

Constructs the Vũ-Condat primal-dual algorithm [1, 2].

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: VuCondatIteration, AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
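A sketch with a smooth term in addition to the composite one; beta_f (the Lipschitz constant of ∇f) is supplied, while the stepsizes are left to their documented defaults. The terms are assumed from ProximalOperators, with the usual call/return assumptions.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(20, 40), randn(20)
Lmat = randn(15, 40)

f = LeastSquares(A, b)   # smooth term
g = NormL1(0.1)          # proximable term
h = NormL1(1.0)          # proximable term, composed with Lmat

vc = ProximalAlgorithms.VuCondat()
solution, iterations = vc(x0=zeros(40), y0=zeros(15), f=f, g=g, h=h, L=Lmat, beta_f=opnorm(A)^2)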
ProximalAlgorithms.VuCondatIterationFunction
VuCondatIteration(; <keyword-arguments>)

Iterator implementing the Vũ-Condat primal-dual algorithm [1, 2].

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

This iteration is equivalent to AFBAIteration with theta=2; for all other arguments see AFBAIteration.

See also: AFBAIteration, VuCondat.

References

  1. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  2. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
ProximalAlgorithms.AFBAFunction
AFBA(; <keyword-arguments>)

Constructs the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This algorithm solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

The returned object has type IterativeAlgorithm{AFBAIteration}, and can be called with the problem's arguments to trigger its solution.

See also: AFBAIteration, IterativeAlgorithm.

Arguments

  • maxit::Int=10_000: maximum number of iterations
  • tol=1e-5: tolerance for the default stopping criterion
  • stop::Function: termination condition, stop(::T, state) should return true when to stop the iteration
  • solution::Function: solution mapping, solution(::T, state) should return the identified solution
  • verbose::Bool=false: whether the algorithm state should be displayed
  • freq::Int=100: every how many iterations to display the algorithm state
  • display::Function: display function, display(::Int, ::T, state) should display a summary of the iteration state
  • kwargs...: additional keyword arguments to pass on to the AFBAIteration constructor upon call

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
source
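AFBA exposes the parameters theta and mu, whose special cases are listed in the AFBAIteration docstring below; this sketch keeps the default values and adds a nonnegativity constraint through IndNonnegative (assumed from ProximalOperators), again under the same call/return assumptions as the earlier sketches.

using LinearAlgebra, ProximalAlgorithms, ProximalOperators

A, b = randn(20, 40), randn(20)
Lmat = randn(15, 40)

f = LeastSquares(A, b)   # smooth term
g = IndNonnegative()     # proximable term: nonnegativity constraint
h = NormL1(1.0)          # proximable term, composed with Lmat

afba = ProximalAlgorithms.AFBA(theta=1, mu=1)
solution, iterations = afba(x0=zeros(40), y0=zeros(15), f=f, g=g, h=h, L=Lmat, beta_f=opnorm(A)^2)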
ProximalAlgorithms.AFBAIterationType
AFBAIteration(; <keyword-arguments>)

Iterator implementing the asymmetric forward-backward-adjoint algorithm (AFBA, see [1]).

This iterator solves convex optimization problems of the form

minimize f(x) + g(x) + (h □ l)(L x),

where f is smooth, g and h are possibly nonsmooth, and l is strongly convex. The symbol □ denotes the infimal convolution, and L is a linear mapping.

Points x0 and y0 are the initial primal and dual iterates, respectively. If unspecified, functions f, g, and h default to the identically zero function, l defaults to the indicator of the set {0}, and L defaults to the identity. Important keyword arguments, in case f and l are set, are the Lipschitz constants beta_f and beta_l (see below).

The iterator implements Algorithm 3 of [1] with constant stepsize (α_n=λ) for several prominent special cases:

  1. θ = 2 ==> Corresponds to the Vu-Condat Algorithm [3, 4].
  2. θ = 1, μ=1
  3. θ = 0, μ=1
  4. θ ∈ [0,∞), μ=0

See [2, Section 5.2] and [1, Figure 1] for stepsize conditions, special cases, and relation to other algorithms.

See also: AFBA.

Arguments

  • x0: initial primal point.
  • y0: initial dual point.
  • f=Zero(): smooth objective term.
  • g=Zero(): proximable objective term.
  • h=Zero(): proximable objective term.
  • l=IndZero(): strongly convex function.
  • L=I: linear operator (e.g. a matrix).
  • beta_f=0: Lipschitz constant of the gradient of f.
  • beta_l=0: Lipschitz constant of the gradient of the conjugate of l.
  • theta=1: nonnegative algorithm parameter.
  • mu=1: algorithm parameter in the range [0,1].
  • gamma1: primal stepsize (see [1] for the default choice).
  • gamma2: dual stepsize (see [1] for the default choice).

References

  1. Latafat, Patrinos, "Asymmetric forward-backward-adjoint splitting for solving monotone inclusions involving three operators", Computational Optimization and Applications, vol. 68, no. 1, pp. 57-93 (2017).
  2. Latafat, Patrinos, "Primal-dual proximal algorithms for structured convex optimization: a unifying framework", In Large-Scale and Distributed Optimization, Giselsson and Rantzer, Eds. Springer International Publishing, pp. 97-120 (2018).
  3. Condat, "A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms", Journal of Optimization Theory and Applications, vol. 158, no. 2, pp 460-479 (2013).
  4. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators", Advances in Computational Mathematics, vol. 38, no. 3, pp. 667-681 (2013).
source
diff --git a/dev/index.html b/dev/index.html index 7029fa6..3d6a1d3 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,3 +1,3 @@ Home · ProximalAlgorithms.jl

ProximalAlgorithms.jl

A Julia package for non-smooth optimization algorithms. Link to GitHub repository.

This package provides algorithms for the minimization of objective functions that include non-smooth terms, such as constraints or non-differentiable penalties. Implemented algorithms include:

  • (Fast) Proximal gradient methods
  • Douglas-Rachford splitting
  • Three-term splitting
  • Primal-dual splitting algorithms
  • Newton-type methods

Check out this section for an overview of the available algorithms.

Algorithms rely on:

  • automatic differentiation, to compute gradients of smooth terms (but you can easily bring your own gradients)
  • proximal mappings, to handle non-differentiable terms (see for example ProximalOperators for an extensive collection of functions)

Note

ProximalOperators needs to be >=0.15 in order to work with ProximalAlgorithms >=0.5. Make sure to update ProximalOperators if you have been using versions <0.15.

Installation

julia> ]
-pkg> add ProximalAlgorithms

Citing

If you use any of the algorithms from ProximalAlgorithms in your research, you are kindly asked to cite the relevant bibliography. Please check this section of the manual for algorithm-specific references.

Contributing

Contributions are welcome in the form of issue notifications or pull requests. When contributing new algorithms, we highly recommend looking at already implemented ones to get inspiration on how to structure the code.

+pkg> add ProximalAlgorithms

Citing

If you use any of the algorithms from ProximalAlgorithms in your research, you are kindly asked to cite the relevant bibliography. Please check this section of the manual for algorithm-specific references.

Contributing

Contributions are welcome in the form of issue notifications or pull requests. When contributing new algorithms, we highly recommend looking at already implemented ones to get inspiration on how to structure the code.