
devel stuff: notes for test update (bladen)


"C:/Users/Work/Desktop/mO Work/Test Ground Truths//.RData"

Test types:

  • basic
    • just tests that the simplest use case works for ALL possible input objects (see the sketch after this list)
  • data
    • similar to basic, but uses different input datasets
  • parameter
    • tests the functionality of a specific parameter (or set of parameters)
  • edge case
    • tests for warnings or odd scenarios
  • error
    • tests that a specific error is raised in the appropriate scenario
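
To make the distinction concrete, here is a minimal testthat sketch contrasting a basic test with an error test. The use of pca on liver.toxicity, and the assumption that an oversized ncomp raises an error, are illustrative choices rather than anything taken from the existing suite.

```r
library(testthat)
library(mixOmics)

test_that("basic: pca runs on the simplest input", {
  data(liver.toxicity)
  X <- liver.toxicity$gene
  # simplest possible call
  res <- pca(X, ncomp = 2)
  expect_s3_class(res, "pca")
})

test_that("error: pca rejects an impossible ncomp", {
  data(liver.toxicity)
  X <- liver.toxicity$gene
  # assumption: requesting more components than available dimensions should error
  expect_error(pca(X, ncomp = min(dim(X)) + 1))
})
```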

Things to test for:

  • names of output components (see the sketch after this list)
  • dimensions of output
  • test full dataframes numerically (explore this a bit)
  • test pass-through of input parameters
  • test type of output components
  • test when certain errors should be raised
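
Most of these checks map onto one-line testthat expectations. A rough sketch using spls on liver.toxicity follows; the element names (loadings, variates, ncomp, keepX) follow the usual mixOmics object structure but should be treated as assumptions, and the ground-truth fixture path in the commented lines is a placeholder.

```r
library(testthat)
library(mixOmics)

test_that("spls output: names, dimensions, types, parameter pass-through", {
  data(liver.toxicity)
  X <- liver.toxicity$gene
  Y <- liver.toxicity$clinic
  res <- spls(X, Y, ncomp = 2, keepX = c(10, 10))

  # names of output components (assumed element names)
  expect_true(all(c("loadings", "variates", "ncomp", "keepX") %in% names(res)))
  # dimensions of output
  expect_equal(dim(res$variates$X), c(nrow(X), 2))
  # type of output components
  expect_true(is.numeric(res$loadings$X))
  # pass-through of input parameters
  expect_equal(res$ncomp, 2)
  expect_equal(unname(res$keepX), c(10, 10))
  # full numerical comparison against a stored ground truth (placeholder path)
  # expect_equal(res$loadings$X,
  #              readRDS(test_path("fixtures", "spls_loadings.rds")),
  #              tolerance = 1e-6)
})
```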

What do we already have:

  • auroc
    • only tests per-component AUROC values on one dataset
    • different parameters, different datasets, test for more values
  • background_predict
    • only tests one dataset and only two numerical values
    • different parameters, different datasets, test for more values
  • cim
    • tests for matrices, rcc, spca, spls (x2) and multilevel; two datasets. fairly sufficient for now
    • could maybe test with a variety of different parameters
  • circosPlot
    • two datasets; only tests that circosPlot "works" (i.e. that the output is of matrix type)
    • check different parameter usage and wider range of numerical testing
  • diablo
    • wide range of tests but only one dataset. small number of numerical values evaluated
    • different parameters, another dataset
  • internals-.get-pch
    • doesn't use any mixOmics datasets
    • explore different parameters and error states
  • internals
    • tests .get.ind.colors, .are.colors, .get.colors, .get.character.vector, .check_test.keepX and .check_ncomp
    • see if there are more internals to test, explore different datasets with these tests
  • network
    • two datasets, only 1 numerical value for each
    • variety of parameters to be tested
  • pca
    • only one test with one dataset and no numerical values assessed; a lot of work needed here (see the sketch after this list)
  • perf.diablo
    • only one dataset, need to test edge cases (low numbers of repeats/folds - catch errors)
    • test wider range of the numerical output
  • perf.mint.splsda
    • only tests choice.ncomp and how alpha affects results; much work needed here
  • plotIndiv
    • one of the better ones, though only one or two values are tested for each case; maybe add a few more parameter tests and any remaining methods (i.e. pca), and expand how many components each test checks
    • test for rcc x2, (s)pls x2, (s)plsda, mint.(s)plsda, sgcca x2 and sgccada x3
    • 4 datasets
  • plotLoadings
    • tests spls, splsda, block.splsda, mint.splsda. 3 datasets
    • add some more parameter tests + more numerical values
  • plotVar
    • needs tons of work, only one test
  • predict
    • 4 datasets, mint.splsda, block.splsda, pls, plsda
    • add any remaining methods (e.g. splsda) and test for more parameters
  • tune.block.splsda
    • only one test (one dataset), no parameter tests
  • tune.mint.splsda
    • two tests, one dataset. test more parameters (not just signif.threshold)
  • tune.spls
    • one test, one dataset, with/without parallel
    • needs work
  • tune.splsda
    • one test, one dataset
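
As an example of what the numerical side could look like for the pca item above, here is a sketch; the variates$X element name follows the usual mixOmics object structure, the near-zero-column-mean check is simply a property of centred scores, and the fixture path in the commented lines is hypothetical.

```r
library(testthat)
library(mixOmics)

test_that("pca: dimensions and numerical properties of the scores", {
  data(liver.toxicity)
  X <- liver.toxicity$gene
  res <- pca(X, ncomp = 3, center = TRUE, scale = FALSE)

  # dimensions of the score matrix (assumed element name variates$X)
  expect_equal(dim(res$variates$X), c(nrow(X), 3))
  # property-based numerical check: centred scores have near-zero column means
  expect_true(all(abs(colMeans(res$variates$X)) < 1e-8))
  # or compare the full matrix against a stored ground truth
  # (hypothetical fixture, in line with the ground-truth .RData noted at the top)
  # expect_equal(res$variates$X,
  #              readRDS(test_path("fixtures", "pca_liver_variates.rds")),
  #              tolerance = 1e-6)
})
```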

What do we need new test files for:

Front end functions:

  • biplot
  • block.(s)pls
  • block.(s)plsda
  • cimDiablo
  • ipca
  • mint.block.(s)pls
  • mint.block.(s)plsda
  • mint.pca
  • mint.(s)pls
  • mint.(s)plsda
  • network
  • nipals
  • all perf variants
  • all plot variants
  • all plot.tune variants
  • plotMarkers
  • (s)pls
  • (s)plsda
  • rcc
  • selectVar
  • sipca
  • spca
  • all tune variants (pca, rcc, spca, splslevel)
  • wrapper.rgcca
  • wrapper.sgcca

Back end functions:

  • explained_variance
  • get.confusion_matrix
  • impute.nipals
  • logratio-transformations
  • map
  • nearZeroVar
  • study_split
  • unmap
  • vip
  • withinVariation