diff --git a/_pkgdown.yml b/_pkgdown.yml
index 71c56d7..32731cb 100644
--- a/_pkgdown.yml
+++ b/_pkgdown.yml
@@ -73,8 +73,6 @@ reference:
       - starts_with("loglik")
       - lmvgamma
       - lmvgamma_deriv
-      - maltipoofit
-      - maltipoo_fit
       - starts_with("miniclo")
       - mongrel-deprecated
       - starts_with("name")
diff --git a/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-14-1.png b/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-14-1.png
new file mode 100644
index 0000000..0585a3c
Binary files /dev/null and b/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-14-1.png differ
diff --git a/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-16-1.png b/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-16-1.png
new file mode 100644
index 0000000..5490912
Binary files /dev/null and b/docs/articles/non-linear-models_files/figure-html/unnamed-chunk-16-1.png differ
diff --git a/docs/authors.html b/docs/authors.html
index cb30575..e75bd91 100644
--- a/docs/authors.html
+++ b/docs/authors.html
@@ -76,7 +76,7 @@

Citation

Silverman, JD, Roche, K, Holmes, ZC, David, LA, and Mukherjee, S. Bayesian Multinomial Logistic Normal Models through Marginally Latent Matrix-T Processes. 2022, Journal of Machine Learning Research.

@Article{,
   title = {Bayesian Multinomial Logistic Normal Models through Marginally Latent Matrix-T Processes},
-  author = {Justin D Silverman and Kim Roche and Zachary C Holmes and Lawrence A David and Sayan Mukherjee},
+  author = {Justin D. Silverman and Kimberly Roche and Zachary C. Holmes and Lawrence A. David and Sayan Mukherjee},
   year = {2022},
   volume = {23},
   journal = {Journal of Machine Learning Research},
diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml
index 9de52f1..e7db4d4 100644
--- a/docs/pkgdown.yml
+++ b/docs/pkgdown.yml
@@ -7,7 +7,7 @@ articles:
   non-linear-models: non-linear-models.html
   orthus: orthus.html
   picking_priors: picking_priors.html
-last_built: 2024-05-14T14:13Z
+last_built: 2024-05-15T17:40Z
 urls:
   reference: https://jsilve24.github.io/fido/reference
   article: https://jsilve24.github.io/fido/articles
diff --git a/docs/reference/basset.html b/docs/reference/basset.html
new file mode 100644
index 0000000..82055c8
--- /dev/null
+++ b/docs/reference/basset.html
@@ -0,0 +1,8 @@
+
+  
+    
+    
+    
+  
+
+
diff --git a/docs/reference/basset_fit.html b/docs/reference/basset_fit.html
index a864e32..8acd241 100644
--- a/docs/reference/basset_fit.html
+++ b/docs/reference/basset_fit.html
@@ -141,22 +141,22 @@ 

Value

Details

the full model is given by:
-$$Y_j \sim Multinomial(\Pi_j)$$
-$$\Pi_j = \Phi^{-1}(\Eta_j)$$
-$$\Eta \sim MN_{D-1 x N}(\Lambda, \Sigma, I_N)$$
-$$\Lambda \sim GP_{D-1 x Q}(\Theta(X), \Sigma, \Gamma(X))$$
-$$\Sigma \sim InvWish(\upsilon, \Xi)$$
+$$Y_j \sim Multinomial(Pi_j)$$
+$$Pi_j = Phi^{-1}(Eta_j)$$
+$$Eta \sim MN_{D-1 \times N}(Lambda, Sigma, I_N)$$
+$$Lambda \sim GP_{D-1 \times Q}(Theta(X), Sigma, Gamma(X))$$
+$$Sigma \sim InvWish(upsilon, Xi)$$
 Where Gamma(X) is shorthand for the Gram matrix of the kernel function.

Alternatively can be used to fit an additive GP of the form:
-$$Y_j \sim Multinomial(\Pi_j)$$
-$$\Pi_j = \Phi^{-1}(\Eta_j)$$
-$$\Eta \sim MN_{D-1 x N}(\Lambda, \Sigma, I_N)$$
-$$\Lambda = \Lambda_1 + ... + \Lambda_p + \Beta X$$
-$$\Lambda_1 \sim GP_{D-1 x Q}(\Theta_1(X), \Sigma, \Gamma_p(X))$$
+$$Y_j \sim Multinomial(Pi_j)$$
+$$Pi_j = Phi^{-1}(Eta_j)$$
+$$Eta \sim MN_{D-1 \times N}(Lambda, Sigma, I_N)$$
+$$Lambda = Lambda_1 + ... + Lambda_p + Beta X$$
+$$Lambda_1 \sim GP_{D-1 \times Q}(Theta_1(X), Sigma, Gamma_p(X))$$
 ...
-$$\Lambda_p \sim GP_{D-1 x Q}(\Theta_1(X), \Sigma, \Gamma_1(X))$$
-$$\Beta \sim MN(\Theta_B, \Sigma, \Gamma_B)$$
-$$\Sigma \sim InvWish(\upsilon, \Xi)$$
+$$Lambda_p \sim GP_{D-1 \times Q}(Theta_1(X), Sigma, Gamma_1(X))$$
+$$Beta \sim MN(Theta_B, Sigma, Gamma_B)$$
+$$Sigma \sim InvWish(upsilon, Xi)$$
 Where Gamma(X) is shorthand for the Gram matrix of the kernel function.

Default behavior is to use the MAP estimate for uncollapsing the LTP model if the Laplace approximation is not performed.

diff --git a/docs/reference/conjugateLinearModel.html b/docs/reference/conjugateLinearModel.html
index 650a5ac..2dff1f7 100644
--- a/docs/reference/conjugateLinearModel.html
+++ b/docs/reference/conjugateLinearModel.html
@@ -108,9 +108,9 @@

Value

Details

-$$Y ~ MN_{D-1 x N}(Lambda*X, Sigma, I_N)$$
-$$Lambda ~ MN_{D-1 x Q}(Theta, Sigma, Gamma)$$
-$$Sigma ~ InvWish(upsilon, Xi)$$
+$$Y \sim MN_{D-1 \times N}(Lambda*X, Sigma, I_N)$$
+$$Lambda \sim MN_{D-1 \times Q}(Theta, Sigma, Gamma)$$
+$$Sigma \sim InvWish(upsilon, Xi)$$
 This function provides a means of sampling from the posterior distribution of Lambda and Sigma.

diff --git a/docs/reference/index.html b/docs/reference/index.html
index dd4c22f..89c11ee 100644
--- a/docs/reference/index.html
+++ b/docs/reference/index.html
@@ -284,11 +284,6 @@

 Helper functions
 Transform Lambda into IQLR (Inter-Quantile Log-Ratio)
-loglikMaltipooCollapsed() gradMaltipooCollapsed() hessMaltipooCollapsed()
-Calculations for the Collapsed Maltipoo Model
 loglikPibbleCollapsed() gradPibbleCollapsed() hessPibbleCollapsed()
 Calculations for the Collapsed Pibble Model
@@ -304,16 +299,6 @@

 Derivative of Log of Multivariate Gamma Function - Gamma_p(a)
-maltipoofit()
-Create maltipoofit object
-maltipoo()
-Interface to fit maltipoo models
 miniclo()
 Closure operator
@@ -354,11 +339,6 @@

 Generic method for accessing model fit dimensions
-optimMaltipooCollapsed()
-Function to Optimize the Collapsed Maltipoo Model
 optimPibbleCollapsed()
 Function to Optimize the Collapsed Pibble Model
@@ -414,11 +394,6 @@

 Provide random initialization for pibble model
-req(<maltipoofit>)
-require elements to be non-null in pibblefit or throw error
 req(<orthusfit>)
 require elements to be non-null in orthusfit or throw error
@@ -464,11 +439,6 @@

 Simple verification of passed bassetfit object
-verify(<maltipoofit>)
-Simple verification of passed multipoo object
 verify(<orthusfit>)
 Simple verification of passed orthusfit object
diff --git a/docs/reference/pibble_fit.html b/docs/reference/pibble_fit.html
index a64b5cd..ffcca7f 100644
--- a/docs/reference/pibble_fit.html
+++ b/docs/reference/pibble_fit.html
@@ -165,8 +165,8 @@

Details
 the full model is given by:
 $$Y_j \sim Multinomial(Pi_j)$$
 $$Pi_j = Phi^{-1}(Eta_j)$$
-$$Eta \sim MN_{D-1 x N}(Lambda*X, Sigma, I_N)$$
-$$Lambda \sim MN_{D-1 x Q}(Theta, Sigma, Gamma)$$
+$$Eta \sim MN_{D-1 \times N}(Lambda*X, Sigma, I_N)$$
+$$Lambda \sim MN_{D-1 \times Q}(Theta, Sigma, Gamma)$$
 $$Sigma \sim InvWish(upsilon, Xi)$$
 Where Gamma is a Q x Q covariance matrix, and \(Phi^{-1}\) is ALRInv_D transform.
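The \(Phi^{-1}\) (ALRInv_D) transform named in the Details above can be sketched numerically. This is a minimal illustration, not fido's implementation; `alr_inv` is a hypothetical helper name, with the D-th component taken as the reference:

```python
import numpy as np

def alr_inv(eta):
    """Hypothetical sketch of the inverse additive log-ratio transform:
    maps eta in R^{D-1} to a composition pi on the D-simplex, treating
    the D-th component as the reference (log-value fixed at 0)."""
    e = np.exp(np.append(eta, 0.0))  # reference component appended
    return e / e.sum()               # closure: components sum to 1

pi = alr_inv(np.array([0.0, 0.0]))   # D = 3; eta = 0 gives the uniform composition
print(pi)                            # each component is 1/3
```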

diff --git a/docs/reference/uncollapsePibble.html b/docs/reference/uncollapsePibble.html
index 8ddc3bb..18498ea 100644
--- a/docs/reference/uncollapsePibble.html
+++ b/docs/reference/uncollapsePibble.html
@@ -141,7 +141,7 @@

Value

Details

Notation: Let Z_j denote the J-th row of a matrix Z.
 While the collapsed model is given by:
-$$Y_j sim Multinomial(Pi_j)$$
+$$Y_j \sim Multinomial(Pi_j)$$
 $$Pi_j = Phi^{-1}(Eta_j)$$
 $$Eta \sim T_{D-1, N}(upsilon, Theta*X, K, A)$$
 Where A = I_N + X * Gamma * X', K = Xi is a (D-1)x(D-1) covariance
diff --git a/docs/reference/uncollapsePibble_sigmaKnown.html b/docs/reference/uncollapsePibble_sigmaKnown.html
index 23137ca..f0c07b2 100644
--- a/docs/reference/uncollapsePibble_sigmaKnown.html
+++ b/docs/reference/uncollapsePibble_sigmaKnown.html
@@ -165,8 +165,8 @@

Details
 The uncollapsed model (Full pibble model) is given by:
 $$Y_j \sim Multinomial(Pi_j)$$
 $$Pi_j = Phi^{-1}(Eta_j)$$
-$$Eta \sim MN_{D-1 x N}(Lambda*X, Sigma, I_N)$$
-$$Lambda \sim MN_{D-1 x Q}(Theta, Sigma, Gamma)$$
+$$Eta \sim MN_{D-1 \times N}(Lambda*X, Sigma, I_N)$$
+$$Lambda \sim MN_{D-1 \times Q}(Theta, Sigma, Gamma)$$
 $$Sigma \sim InvWish(upsilon, Xi)$$
 This function provides a means of sampling from the posterior distribution of Lambda and Sigma given posterior samples of Eta from
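The matrix-normal draws written as MN_{D-1 x Q}(Theta, Sigma, Gamma) throughout these reference pages follow the standard Cholesky construction. A minimal NumPy sketch under assumed toy dimensions (variable names and sizes are illustrative, not fido internals):

```python
import numpy as np

# Assumed toy dimensions: D-1 = 3 ALR coordinates, Q = 2 covariates
rng = np.random.default_rng(0)
Dm1, Q = 3, 2
Theta = np.zeros((Dm1, Q))   # mean matrix
Sigma = np.eye(Dm1)          # row covariance (among ALR coordinates)
Gamma = np.eye(Q)            # column covariance (among covariates)

# Lambda ~ MN_{D-1 x Q}(Theta, Sigma, Gamma) via the Cholesky identity:
# Lambda = Theta + L_Sigma @ Z @ L_Gamma', with Z an iid standard-normal matrix
Z = rng.standard_normal((Dm1, Q))
Lambda = Theta + np.linalg.cholesky(Sigma) @ Z @ np.linalg.cholesky(Gamma).T
print(Lambda.shape)  # (3, 2)
```

With identity row and column covariances, as here, the draw reduces to Theta + Z; non-diagonal Sigma or Gamma would correlate rows or columns respectively.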