From 4e3562e26037f9886fcf058318f29f207d5073a3 Mon Sep 17 00:00:00 2001
From: spaette
Date: Fri, 28 Jun 2024 11:06:17 -0500
Subject: [PATCH] typos

---
 .vale.ini                                           | 2 +-
 Changelog.md                                        | 6 ++++++
 docs/make.jl                                        | 2 +-
 docs/src/tutorials/GeodesicRegression.md            | 2 +-
 src/plans/debug.jl                                  | 6 +++---
 src/plans/record.jl                                 | 2 +-
 src/solvers/cma_es.jl                               | 4 ++--
 src/solvers/difference-of-convex-proximal-point.jl  | 6 +++---
 src/solvers/truncated_conjugate_gradient_descent.jl | 2 +-
 test/plans/test_cache.jl                            | 4 ++--
 test/plans/test_counts.jl                           | 2 +-
 test/plans/test_embedded.jl                         | 2 +-
 tutorials/GeodesicRegression.qmd                    | 2 +-
 tutorials/HowToDebug.qmd                            | 4 ++--
 tutorials/ImplementOwnManifold.qmd                  | 4 ++--
 15 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/.vale.ini b/.vale.ini
index ad2f0bbe69..18596bbabf 100644
--- a/.vale.ini
+++ b/.vale.ini
@@ -44,7 +44,7 @@ TokenIgnores = \$(.+)\$,\[.+?\]\(@(ref|id|cite).+?\),`.+`,``.*``,\s{4}.+\n
 BasedOnStyles = Vale, Google
 ; ignore (1) math (2) ref and cite keys (3) code in docs (4) math in docs (5,6) indented blocks
 TokenIgnores = (\$+[^\n$]+\$+)
-Google.We = false # For tutorials we want to adress the user directly.
+Google.We = false # For tutorials we want to address the user directly.
 
 [docs/src/tutorials/*.md]
 ; ignore since they are derived files
diff --git a/Changelog.md b/Changelog.md
index 95f0ed343a..0d89a2c5b5 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -5,6 +5,12 @@ All notable Changes to the Julia package `Manopt.jl` will be documented in this
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.4.67] – unreleased
+
+### Fixed
+
+* a few typos in the documentation
+
 ## [0.4.66] June 27, 2024
 
 ### Changed
diff --git a/docs/make.jl b/docs/make.jl
index e2674bfe6d..e9afa93ec9 100755
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -7,7 +7,7 @@ if "--help" ∈ ARGS
     """
 docs/make.jl
 
-Render the `Manopt.jl` documenation with optional arguments
+Render the `Manopt.jl` documentation with optional arguments
 
 Arguments
 * `--exclude-tutorials` - exclude the tutorials from the menu of Documenter,
diff --git a/docs/src/tutorials/GeodesicRegression.md b/docs/src/tutorials/GeodesicRegression.md
index db294c9e8e..ec1ff1e61d 100644
--- a/docs/src/tutorials/GeodesicRegression.md
+++ b/docs/src/tutorials/GeodesicRegression.md
@@ -154,7 +154,7 @@ t = map(d -> inner(S, m, pca1, log(S, m, d)), data)
 -0.2259012492666664
 
 And we can call the gradient descent. Note that since `gradF!` works in place of `Y`, we have to set the
-`evalutation` type accordingly.
+`evaluation` type accordingly.
 
 ``` julia
 y = gradient_descent(
diff --git a/src/plans/debug.jl b/src/plans/debug.jl
index 7e04e0ea10..dccba59f3f 100644
--- a/src/plans/debug.jl
+++ b/src/plans/debug.jl
@@ -1075,8 +1075,8 @@ This collected vector is added to the `:Iteration => [...]` pair.
 If necessary, these pairs are created
 
 For each `Pair` of a `Symbol` and a `Vector`, the [`DebugGroupFactory`](@ref)
-is called for the `Vector` and the result is added to the debug dictonaries entry
-with said symbold. This is wrapped into the [`DebugWhenActive`](@ref),
+is called for the `Vector` and the result is added to the debug dictionary's entry
+with said symbol. This is wrapped into the [`DebugWhenActive`](@ref),
 when the `:WhenActive` symbol is present
 
 # Return value
@@ -1160,7 +1160,7 @@ If this results in more than one [`DebugAction`](@ref) a [`DebugGroup`](@ref) of
 If any integers are present, the last of these is used to wrap the group
 in a [`DebugEvery`](@ref)`(k)`.
 
-If `:WhenActive` is present, the resulting Action is wrappedn in [`DebugWhenActive`](@ref),
+If `:WhenActive` is present, the resulting Action is wrapped in [`DebugWhenActive`](@ref),
 making it deactivatable by its parent solver.
 """
 function DebugGroupFactory(a::Vector; activation_offset=1)
diff --git a/src/plans/record.jl b/src/plans/record.jl
index a66bb0ddb9..c961e13dad 100644
--- a/src/plans/record.jl
+++ b/src/plans/record.jl
@@ -790,7 +790,7 @@ This collected vector is added to the `:Iteration => [...]` pair.
 If any of these two pairs does not exist, it is pairs are created when adding the corresponding symbols
 
 For each `Pair` of a `Symbol` and a `Vector`, the [`RecordGroupFactory`](@ref)
-is called for the `Vector` and the result is added to the debug dictionaries entry
+is called for the `Vector` and the result is added to the debug dictionary's entry
 with said symbol. This is wrapped into the [`RecordWhenActive`](@ref),
 when the `:WhenActive` symbol is present
 
diff --git a/src/solvers/cma_es.jl b/src/solvers/cma_es.jl
index 16e49edbf3..952560edea 100644
--- a/src/solvers/cma_es.jl
+++ b/src/solvers/cma_es.jl
@@ -620,7 +620,7 @@ function status_summary(c::StopWhenBestCostInGenerationConstant)
 end
 function get_reason(c::StopWhenBestCostInGenerationConstant)
     if c.at_iteration >= 0
-        return "At iteration $(c.at_iteration): for the last $(c.iterations_since_change) generatiosn the best objective value in each generation was equal to $(c.best_objective_at_last_change).\n"
+        return "At iteration $(c.at_iteration): for the last $(c.iterations_since_change) generations the best objective value in each generation was equal to $(c.best_objective_at_last_change).\n"
     end
     return ""
 end
@@ -880,7 +880,7 @@ function status_summary(c::StopWhenPopulationCostConcentrated)
 end
 function get_reason(c::StopWhenPopulationCostConcentrated)
     if c.at_iteration >= 0
-        return "Range of best objective function values in the last $(length(c.best_value_history)) gnerations and all values in the current generation is below $(c.tol)\n"
+        return "Range of best objective function values in the last $(length(c.best_value_history)) generations and all values in the current generation is below $(c.tol)\n"
     end
     return ""
 end
diff --git a/src/solvers/difference-of-convex-proximal-point.jl b/src/solvers/difference-of-convex-proximal-point.jl
index 2048e1981f..b43fc0a002 100644
--- a/src/solvers/difference-of-convex-proximal-point.jl
+++ b/src/solvers/difference-of-convex-proximal-point.jl
@@ -431,7 +431,7 @@ function initialize_solver!(::AbstractManoptProblem, dcps::DifferenceOfConvexPro
     return dcps
 end
 #=
-    Varant I: allocating closed form of the prox
+    Variant I: allocating closed form of the prox
 =#
 function step_solver!(
     amp::AbstractManoptProblem,
@@ -450,7 +450,7 @@ function step_solver!(
     end
 
 #=
-    Varant II: in-place closed form of the prox
+    Variant II: in-place closed form of the prox
 =#
 function step_solver!(
     amp::AbstractManoptProblem,
@@ -468,7 +468,7 @@ function step_solver!(
     return dcps
 end
 #=
-    Varant III: subsolver variant of the prox
+    Variant III: subsolver variant of the prox
 =#
 function step_solver!(
     amp::AbstractManoptProblem,
diff --git a/src/solvers/truncated_conjugate_gradient_descent.jl b/src/solvers/truncated_conjugate_gradient_descent.jl
index cdb575dfa9..2af4f1e352 100644
--- a/src/solvers/truncated_conjugate_gradient_descent.jl
+++ b/src/solvers/truncated_conjugate_gradient_descent.jl
@@ -744,7 +744,7 @@ function step_solver!(
         tcgs.model_value = new_model_value
         return tcgs
     end
-    # otherweise accept step
+    # otherwise accept step
     copyto!(M, tcgs.Y, p, new_Y)
     tcgs.model_value = new_model_value
     copyto!(M, tcgs.HY, p, new_HY)
diff --git a/test/plans/test_cache.jl b/test/plans/test_cache.jl
index bb1a291442..bdcd33e63f 100644
--- a/test/plans/test_cache.jl
+++ b/test/plans/test_cache.jl
@@ -282,7 +282,7 @@ A `SimpleManifoldCachedObjective`""",
     )
     # undecorated / recursive cost -> exactly f
     @test Manopt.get_cost_function(obj) === Manopt.get_cost_function(c_obj, true)
-    # otherise different
+    # otherwise different
     f1 = Manopt.get_cost_function(c_obj)
     @test f1 != f
     @test f1(M, p) == f(M, p)
@@ -331,7 +331,7 @@ A `SimpleManifoldCachedObjective`""",
     s_obj = Manopt.SimpleManifoldCachedObjective(M, obj_g; p=similar(p), X=similar(X))
     # undecorated / recursive cost -> exactly f
     @test Manopt.get_cost_function(obj_g) === Manopt.get_cost_function(s_obj, true)
-    # otherise different
+    # otherwise different
     f1 = Manopt.get_cost_function(s_obj)
     @test f1 != f
     @test f1(M, p) == f(M, p)
diff --git a/test/plans/test_counts.jl b/test/plans/test_counts.jl
index 1490de8458..e724bd7dea 100644
--- a/test/plans/test_counts.jl
+++ b/test/plans/test_counts.jl
@@ -71,7 +71,7 @@ include("../utils/dummy_types.jl")
     c_obj = ManifoldCountObjective(M, obj, [:Cost, :Gradient, :Hessian])
     # undecorated / recursive cost -> exactly f
     @test Manopt.get_cost_function(obj) === Manopt.get_cost_function(c_obj, true)
-    # otherise different
+    # otherwise different
     f1 = get_cost_function(c_obj)
     @test f1 != f
     @test f1(M, p) == f(M, p)
diff --git a/test/plans/test_embedded.jl b/test/plans/test_embedded.jl
index 339b0f21c2..0f9c8ac003 100644
--- a/test/plans/test_embedded.jl
+++ b/test/plans/test_embedded.jl
@@ -91,7 +91,7 @@ using Manifolds, Manopt, Test, LinearAlgebra, Random
     e_obj = EmbeddedManifoldObjective(obj)
     # undecorated / recursive cost -> exactly f
     @test Manopt.get_cost_function(obj) === Manopt.get_cost_function(e_obj, true)
-    # otherise different
+    # otherwise different
     f1 = Manopt.get_cost_function(e_obj)
     @test f1 != f
     @test f1(M, p) == f(M, p)
diff --git a/tutorials/GeodesicRegression.qmd b/tutorials/GeodesicRegression.qmd
index 3ac3de6d0b..1f5e815775 100644
--- a/tutorials/GeodesicRegression.qmd
+++ b/tutorials/GeodesicRegression.qmd
@@ -188,7 +188,7 @@ t = map(d -> inner(S, m, pca1, log(S, m, d)), data)
 ```
 
 And we can call the gradient descent. Note that since `gradF!` works in place of `Y`, we have to set the
-`evalutation` type accordingly.
+`evaluation` type accordingly.
 
 ```{julia}
 y = gradient_descent(
diff --git a/tutorials/HowToDebug.qmd b/tutorials/HowToDebug.qmd
index dce626fb77..1683140aaf 100644
--- a/tutorials/HowToDebug.qmd
+++ b/tutorials/HowToDebug.qmd
@@ -76,7 +76,7 @@ p1 = exact_penalty_method(
 While in the last step, we specified what to print, this can be extend to even specify _when_ to print it. Currently the following four “places” are available, ordered by when they appear in an algorithm run.
 
-* `:Start` to print something at the start of the algorith. At this place all other (the following) places are “reset”, by triggering each of them with an iteration number `0`
+* `:Start` to print something at the start of the algorithm. At this place all other (the following) places are “reset”, by triggering each of them with an iteration number `0`
 * `:BeforeIteration` to print something before an iteration starts
 * `:Iteration` to print something _after_ an iteration. For example the group of prints from the last code block
   `[:Iteration, :Cost, " | ", :ϵ, 25,]` is added to this entry.
 
@@ -101,7 +101,7 @@ p1 = exact_penalty_method(
 );
 ```
 
-This also illustrates, that instead of `Symbol`s we can also always pass down a [`DebugAction`](@ref) directly, for example when there is a reason to create or configure the action more individually thatn the default from the symbol.
+This also illustrates, that instead of `Symbol`s we can also always pass down a [`DebugAction`](@ref) directly, for example when there is a reason to create or configure the action more individually than the default from the symbol.
 Note that the number (`25`) yields that all but `:Start` and `:Stop` are only displayed every twenty-fifth iteration.
 
 ## Subsolver debug
diff --git a/tutorials/ImplementOwnManifold.qmd b/tutorials/ImplementOwnManifold.qmd
index c3902be336..cf4a8f48f2 100644
--- a/tutorials/ImplementOwnManifold.qmd
+++ b/tutorials/ImplementOwnManifold.qmd
@@ -47,7 +47,7 @@ Random.seed!(42)
 #| echo: false
 #| code-fold: true
 #| output: false
-# to keep the output and usage simple let's dactivate tutorial mode here
+# to keep the output and usage simple let's deactivate tutorial mode here
 Manopt.set_manopt_parameter!(:Mode, "None")
 ```
 
@@ -183,7 +183,7 @@ Let's discuss these in the next steps.
 q1 = gradient_descent(M, f, grad_f, p0;
     retraction_method = ProjectionRetraction(), # state, that we use the retraction from above
     stepsize = DecreasingStepsize(M; length=1.0), # A simple step size
-    stopping_criterion = StopAfterIteration(10), # A simple stopping crtierion
+    stopping_criterion = StopAfterIteration(10), # A simple stopping criterion
     X = zeros(d+1), # how we define/represent tangent vectors
 )
 f(M,q1)