fix typos #398

Merged: 1 commit, Jun 28, 2024
2 changes: 1 addition & 1 deletion .vale.ini
@@ -44,7 +44,7 @@ TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n
BasedOnStyles = Vale, Google
; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
TokenIgnores = (\$+[^\n$]+\$+)
- Google.We = false # For tutorials we want to adress the user directly.
+ Google.We = false # For tutorials we want to address the user directly.

[docs/src/tutorials/*.md]
; ignore since they are derived files
6 changes: 6 additions & 0 deletions Changelog.md
@@ -5,6 +5,12 @@ All notable Changes to the Julia package `Manopt.jl` will be documented in this
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [0.4.67] – unreleased
+
+ ### Fixed
+
+ * a few typos in the documentation

## [0.4.66] June 27, 2024

### Changed
2 changes: 1 addition & 1 deletion docs/make.jl
@@ -7,7 +7,7 @@ if "--help" ∈ ARGS
"""
docs/make.jl

- Render the `Manopt.jl` documenation with optional arguments
+ Render the `Manopt.jl` documentation with optional arguments

Arguments
* `--exclude-tutorials` - exclude the tutorials from the menu of Documenter,
2 changes: 1 addition & 1 deletion docs/src/tutorials/GeodesicRegression.md
@@ -154,7 +154,7 @@ t = map(d -> inner(S, m, pca1, log(S, m, d)), data)
-0.2259012492666664

And we can call the gradient descent. Note that since `gradF!` works in place of `Y`, we have to set the
- `evalutation` type accordingly.
+ `evaluation` type accordingly.

``` julia
y = gradient_descent(
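For context on the fixed sentence: in Manopt.jl an in-place gradient is announced via the `evaluation` keyword. A minimal sketch, assuming the tutorial's setup (the names `F`, `gradF!`, and `x0` are placeholders for its cost, gradient, and start point):

``` julia
# Sketch: gradF! writes its result into the tangent vector in place,
# so the solver must be told not to expect an allocating gradient.
y = gradient_descent(
    S, F, gradF!, x0;
    evaluation=InplaceEvaluation(),
)
```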
6 changes: 3 additions & 3 deletions src/plans/debug.jl
@@ -1075,8 +1075,8 @@ This collected vector is added to the `:Iteration => [...]` pair.
If necessary, these pairs are created

For each `Pair` of a `Symbol` and a `Vector`, the [`DebugGroupFactory`](@ref)
- is called for the `Vector` and the result is added to the debug dictonaries entry
- with said symbold. This is wrapped into the [`DebugWhenActive`](@ref),
+ is called for the `Vector` and the result is added to the debug dictionary's entry
+ with said symbol. This is wrapped into the [`DebugWhenActive`](@ref),
when the `:WhenActive` symbol is present

# Return value
@@ -1160,7 +1160,7 @@ If this results in more than one [`DebugAction`](@ref) a [`DebugGroup`](@ref) of
If any integers are present, the last of these is used to wrap the group in a
[`DebugEvery`](@ref)`(k)`.

- If `:WhenActive` is present, the resulting Action is wrappedn in [`DebugWhenActive`](@ref),
+ If `:WhenActive` is present, the resulting Action is wrapped in [`DebugWhenActive`](@ref),
making it deactivatable by its parent solver.
"""
function DebugGroupFactory(a::Vector; activation_offset=1)
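A hedged sketch of the factory input the docstring above describes (the solver call and the particular symbol combination are illustrative, not taken from the diff):

``` julia
# Illustrative: the vector of symbols and strings is collected into the
# :Iteration => [...] pair; the integer wraps the group in DebugEvery,
# and :WhenActive wraps the result in DebugWhenActive.
gradient_descent(
    M, f, grad_f, p0;
    debug=[:Iteration, :Cost, " | ", 25, :WhenActive],
)
```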
2 changes: 1 addition & 1 deletion src/plans/record.jl
@@ -790,7 +790,7 @@ This collected vector is added to the `:Iteration => [...]` pair.
If any of these two pairs does not exist, the pairs are created when adding the corresponding symbols

For each `Pair` of a `Symbol` and a `Vector`, the [`RecordGroupFactory`](@ref)
- is called for the `Vector` and the result is added to the debug dictionaries entry
+ is called for the `Vector` and the result is added to the debug dictionary's entry
with said symbol. This is wrapped into the [`RecordWhenActive`](@ref),
when the `:WhenActive` symbol is present

4 changes: 2 additions & 2 deletions src/solvers/cma_es.jl
@@ -620,7 +620,7 @@ function status_summary(c::StopWhenBestCostInGenerationConstant)
end
function get_reason(c::StopWhenBestCostInGenerationConstant)
if c.at_iteration >= 0
- return "At iteration $(c.at_iteration): for the last $(c.iterations_since_change) generatiosn the best objective value in each generation was equal to $(c.best_objective_at_last_change).\n"
+ return "At iteration $(c.at_iteration): for the last $(c.iterations_since_change) generations the best objective value in each generation was equal to $(c.best_objective_at_last_change).\n"
end
return ""
end
@@ -880,7 +880,7 @@ function status_summary(c::StopWhenPopulationCostConcentrated)
end
function get_reason(c::StopWhenPopulationCostConcentrated)
if c.at_iteration >= 0
- return "Range of best objective function values in the last $(length(c.best_value_history)) gnerations and all values in the current generation is below $(c.tol)\n"
+ return "Range of best objective function values in the last $(length(c.best_value_history)) generations and all values in the current generation is below $(c.tol)\n"
end
return ""
end
6 changes: 3 additions & 3 deletions src/solvers/difference-of-convex-proximal-point.jl
@@ -431,7 +431,7 @@ function initialize_solver!(::AbstractManoptProblem, dcps::DifferenceOfConvexPro
return dcps
end
#=
- Varant I: allocating closed form of the prox
+ Variant I: allocating closed form of the prox
=#
function step_solver!(
amp::AbstractManoptProblem,
@@ -450,7 +450,7 @@ function step_solver!
end

#=
- Varant II: in-place closed form of the prox
+ Variant II: in-place closed form of the prox
=#
function step_solver!(
amp::AbstractManoptProblem,
@@ -468,7 +468,7 @@ function step_solver!
return dcps
end
#=
- Varant III: subsolver variant of the prox
+ Variant III: subsolver variant of the prox
=#
function step_solver!(
amp::AbstractManoptProblem,
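The three variants named in the comments above differ only in how the proximal map of the second summand is supplied. A minimal sketch under Euclidean assumptions (the prox formula and names here are purely illustrative, not from the package):

``` julia
# Variant I style: allocating closed-form prox, returns a new point
prox_g(M, λ, p) = p ./ (1 + λ)
# Variant II style: in-place closed-form prox, writes into q
prox_g!(M, q, λ, p) = (q .= p ./ (1 + λ); q)
# Variant III instead delegates to a subsolver when no closed form exists.
```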
2 changes: 1 addition & 1 deletion src/solvers/truncated_conjugate_gradient_descent.jl
@@ -744,7 +744,7 @@ function step_solver!(
tcgs.model_value = new_model_value
return tcgs
end
- # otherweise accept step
+ # otherwise accept step
copyto!(M, tcgs.Y, p, new_Y)
tcgs.model_value = new_model_value
copyto!(M, tcgs.HY, p, new_HY)
4 changes: 2 additions & 2 deletions test/plans/test_cache.jl
@@ -282,7 +282,7 @@ A `SimpleManifoldCachedObjective`""",
)
# undecorated / recursive cost -> exactly f
@test Manopt.get_cost_function(obj) === Manopt.get_cost_function(c_obj, true)
- # otherise different
+ # otherwise different
f1 = Manopt.get_cost_function(c_obj)
@test f1 != f
@test f1(M, p) == f(M, p)
@@ -331,7 +331,7 @@ A `SimpleManifoldCachedObjective`""",
s_obj = Manopt.SimpleManifoldCachedObjective(M, obj_g; p=similar(p), X=similar(X))
# undecorated / recursive cost -> exactly f
@test Manopt.get_cost_function(obj_g) === Manopt.get_cost_function(s_obj, true)
- # otherise different
+ # otherwise different
f1 = Manopt.get_cost_function(s_obj)
@test f1 != f
@test f1(M, p) == f(M, p)
2 changes: 1 addition & 1 deletion test/plans/test_counts.jl
@@ -71,7 +71,7 @@ include("../utils/dummy_types.jl")
c_obj = ManifoldCountObjective(M, obj, [:Cost, :Gradient, :Hessian])
# undecorated / recursive cost -> exactly f
@test Manopt.get_cost_function(obj) === Manopt.get_cost_function(c_obj, true)
- # otherise different
+ # otherwise different
f1 = get_cost_function(c_obj)
@test f1 != f
@test f1(M, p) == f(M, p)
2 changes: 1 addition & 1 deletion test/plans/test_embedded.jl
@@ -91,7 +91,7 @@ using Manifolds, Manopt, Test, LinearAlgebra, Random
e_obj = EmbeddedManifoldObjective(obj)
# undecorated / recursive cost -> exactly f
@test Manopt.get_cost_function(obj) === Manopt.get_cost_function(e_obj, true)
- # otherise different
+ # otherwise different
f1 = Manopt.get_cost_function(e_obj)
@test f1 != f
@test f1(M, p) == f(M, p)
2 changes: 1 addition & 1 deletion tutorials/GeodesicRegression.qmd
@@ -188,7 +188,7 @@ t = map(d -> inner(S, m, pca1, log(S, m, d)), data)
```

And we can call the gradient descent. Note that since `gradF!` works in place of `Y`, we have to set the
- `evalutation` type accordingly.
+ `evaluation` type accordingly.

```{julia}
y = gradient_descent(
4 changes: 2 additions & 2 deletions tutorials/HowToDebug.qmd
@@ -76,7 +76,7 @@ p1 = exact_penalty_method(
While in the last step, we specified what to print, this can be extended to even specify _when_ to print it. Currently the following four “places” are available, ordered by when they appear
in an algorithm run.

- * `:Start` to print something at the start of the algorith. At this place all other (the following) places are “reset”, by triggering each of them with an iteration number `0`
+ * `:Start` to print something at the start of the algorithm. At this place all other (the following) places are “reset”, by triggering each of them with an iteration number `0`
* `:BeforeIteration` to print something before an iteration starts
* `:Iteration` to print something _after_ an iteration. For example the group of prints from
the last code block `[:Iteration, :Cost, " | ", :ϵ, 25,]` is added to this entry.
@@ -101,7 +101,7 @@ p1 = exact_penalty_method(
);
```

- This also illustrates, that instead of `Symbol`s we can also always pass down a [`DebugAction`](@ref) directly, for example when there is a reason to create or configure the action more individually thatn the default from the symbol.
+ This also illustrates, that instead of `Symbol`s we can also always pass down a [`DebugAction`](@ref) directly, for example when there is a reason to create or configure the action more individually than the default from the symbol.
Note that the number (`25`) yields that all but `:Start` and `:Stop` are only displayed every twenty-fifth iteration.

## Subsolver debug
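As a hedged sketch tying the four “places” from this tutorial section together (the place symbols follow the tutorial; the solver arguments and concrete actions are illustrative):

```{julia}
p1 = exact_penalty_method(
    M, f, grad_f, p0;
    debug=[
        :Start => ["Initializing\n"],          # printed once, resets the others
        :BeforeIteration => [:Iteration],      # before each iteration starts
        :Iteration => [:Cost, " | ", :ϵ, 25],  # after each iteration, every 25th
        :Stop => [:Stop],                      # once a stopping criterion fires
    ],
);
```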
4 changes: 2 additions & 2 deletions tutorials/ImplementOwnManifold.qmd
@@ -47,7 +47,7 @@ Random.seed!(42)
#| echo: false
#| code-fold: true
#| output: false
- # to keep the output and usage simple let's dactivate tutorial mode here
+ # to keep the output and usage simple let's deactivate tutorial mode here
Manopt.set_manopt_parameter!(:Mode, "None")
```

@@ -183,7 +183,7 @@ Let's discuss these in the next steps.
q1 = gradient_descent(M, f, grad_f, p0;
retraction_method = ProjectionRetraction(), # state, that we use the retraction from above
stepsize = DecreasingStepsize(M; length=1.0), # A simple step size
- stopping_criterion = StopAfterIteration(10), # A simple stopping crtierion
+ stopping_criterion = StopAfterIteration(10), # A simple stopping criterion
X = zeros(d+1), # how we define/represent tangent vectors
)
f(M,q1)