Use referencable tables
michaelosthege committed Sep 29, 2024
1 parent d642265 commit 9d08e51
Showing 2 changed files with 6 additions and 3 deletions.
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -42,6 +42,7 @@
 ]
 myst_enable_extensions = [
     "amsmath", # needed for LaTeX math environments
+    "colon_fence",
     "dollarmath", # needed for $ and $$ math
     "html_image",
     "replacements",
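For context, `colon_fence` is the MyST extension that enables `:::`-delimited directives; combined with a `(label)=` target, this is what makes a table referencable via a Markdown link, as done in the second file of this commit. A minimal sketch of the pattern (label, caption, and contents here are illustrative, not from the repository):

```markdown
(tab_example)=
:::{table} __Table:__ An example caption for a referencable table.

| parameter | unit |
| --------- | ---- |
| mean      | min  |
:::

The values are listed in [Table](#tab_example).
```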
8 changes: 5 additions & 3 deletions docs/source/markdown/PeakPerformance_validation.md
@@ -4,11 +4,12 @@
 Several stages of validation were employed to prove the suitability of `PeakPerformance` for chromatographic peak data analysis.
 The goals were to showcase the efficacy of `PeakPerformance` utilizing noisy synthetic data, to investigate cases where a peak could reasonably be fit with either of the single peak models, and finally to use experimental data to compare results obtained with `PeakPerformance` to those from the commercial vendor software Sciex MultiQuant.
 
-For the first test, 500 random data sets were generated with the NumPy random module {cite}`harris2020array` by drawing from the normal distributions detailed in Table 1 except for the mean parameter which was held constant at a value of 6.
+For the first test, 500 random data sets were generated with the NumPy random module {cite}`harris2020array` by drawing from the normal distributions detailed in [Table 1](#tab_v1) except for the mean parameter which was held constant at a value of 6.
 Subsequently, normally distributed random noise ($\mathcal{N}(0, 0.6)$ or $\mathcal{N}(0, 1.2)$ for data sets with the tag "higher noise") was added to each data point.
 The amount of data points per time was chosen based on an LC-MS/MS method routinely utilized by the authors and accordingly set to one data point per 1.8 s.
 
-__Table 1:__ Normal distributions from which parameters were drawn randomly to create synthetic data sets for the validation of `PeakPerformance`.
+(tab_v1)=
+:::{table} __Table 1:__ Normal distributions from which parameters were drawn randomly to create synthetic data sets for the validation of `PeakPerformance`.
 
 | **parameter** | **model (1st test)** | **model (2nd test)** |
 | ------------------ | ----------------------- | ----------------------- |
@@ -17,10 +18,11 @@
 | skewness | $\mathcal{N}(0, 2)$ | - |
 | baseline intercept | $\mathcal{N}(25, 1)$ | $\mathcal{N}(25, 1)$ |
 | baseline slope | $\mathcal{N}(0, 1)$ | $\mathcal{N}(0, 1)$ |
+:::
 
 In marginal cases when the shape of a single peak had a slight skew, the automated model selection would at times settle on a normal or a skew normal model.
 Therefore, it was relevant to investigate whether this choice would lead to a significant discrepancy in estimated peak parameters.
-Accordingly, for the second test synthetic data sets were generated with the NumPy random module according to Table 1 and noise was added as described before.
+Accordingly, for the second test synthetic data sets were generated with the NumPy random module according to [Table 1](#tab_v1) and noise was added as described before.
 The residual parameters were held constant, i.e. the mean was fixed to 6, the area to 8, and the skewness parameter $\alpha$ to 1.
 
 For the third and final test, experimental peak data was analyzed with both `PeakPerformance` (version 0.7.0) and Sciex MultiQuant (version 3.0.3) with human supervision, i.e. the results were visually inspected and corrected if necessary.
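The data-generation recipe described in the changed paragraphs can be sketched roughly as follows. This is a sketch under stated assumptions, not PeakPerformance's actual routine: it assumes a skew normal peak shape via `scipy.stats.skewnorm`, a 12 min time window, and a placeholder peak-width (`std`) value, since that row of Table 1 falls outside the lines shown in this diff.

```python
# Rough sketch of the synthetic-data generation described above.
# Assumptions (not from the diff): scipy's skew normal as the peak shape,
# a 12 min window, and placeholder values for std and area.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# one data point per 1.8 s, time axis in minutes
n_points = 400
t = np.arange(n_points) * 1.8 / 60.0

mean = 6.0                         # held constant in the first test
area = 8.0                         # placeholder; drawn from Table 1 in the first test
std = 0.5                          # placeholder; its Table 1 row is not shown here
alpha = rng.normal(0.0, 2.0)       # skewness ~ N(0, 2)
intercept = rng.normal(25.0, 1.0)  # baseline intercept ~ N(25, 1)
slope = rng.normal(0.0, 1.0)       # baseline slope ~ N(0, 1)

# skew normal peak scaled to the target area, on top of a linear baseline
peak = area * stats.skewnorm.pdf(t, alpha, loc=mean, scale=std)
baseline = intercept + slope * t
noise = rng.normal(0.0, 0.6, size=t.size)  # sigma = 1.2 for "higher noise" sets

intensity = peak + baseline + noise
```

Repeating this with fresh parameter draws would reproduce the scale of the first test (500 data sets); swapping the noise scale to 1.2 gives the "higher noise" variant.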
