diff --git a/articles/create-a-new-targets-pipeline-for-lpc.html b/articles/create-a-new-targets-pipeline-for-lpc.html
index 184e0a6..e496ea6 100644
--- a/articles/create-a-new-targets-pipeline-for-lpc.html
+++ b/articles/create-a-new-targets-pipeline-for-lpc.html
@@ -79,7 +79,7 @@
 library(levinmisc)
- + Render DataTable to HTML — render_datatable • levinmisc + Skip to contents
@@ -106,8 +106,8 @@

Value

Examples

render_datatable(datasets::mtcars)
-
-
+
+ diff --git a/search.json b/search.json index 9a4d64f..dc84fb6 100644 --- a/search.json +++ b/search.json @@ -1 +1 @@ -[{"path":"https://mglev1n.github.io/levinmisc/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2022 Michael Levin Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"Create a New Targets Pipeline for LPC","text":"targets package tool developing reproducible research workflows R. Details package motivation tools described detail: https://books.ropensci.org/targets/ https://docs.ropensci.org/targets/.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"example-workflow","dir":"Articles","previous_headings":"Introduction","what":"Example Workflow","title":"Create a New Targets Pipeline for LPC","text":"Create new R project Run levinmisc::populate_targets_proj() function R console - function described detail, initialize project helpful files/folders start building targets pipeline. Modify Pipelines.qmd file specify analyses. targets documentation can useful specifics. Knit/Render Pipelines.qmd convert markdown document series R scripts actually run analyses. Run submit-targets.sh terminal window submit targets pipeline LPC execution. Details possible command line arguments script described . Modify Results.qmd present results analyses. Rendering file allows mix text describing analyses/methods include citations alongside actual results pipeline. targets::tar_read() function used heavily load pre-computed results pipeline.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"populate_targets_proj","dir":"Articles","previous_headings":"","what":"populate_targets_proj()","title":"Create a New Targets Pipeline for LPC","text":"populate_targets_proj function can run within new project folder initialize project files/folders necessary deploying targets pipeline LPC Penn. includes creating LSF templates, Pipelines.qmd file containing boilerplate running analyses, Results.qmd file can used visualize results. populate_targets_proj() function creates several files/folders within project directory. .make-targets.sh .clustermq_lsf.tmpl hidden helper files designed user interaction, necessary submission jobs LSF scheduler. files designed edited/used user: Pipeline.qmd - Quarto markdown file can used create Target Markdown document specifies targets pipeline analyses. See: https://books.ropensci.org/targets/literate-programming.html#target-markdown details. Remember knit/render document order generate pipeline. 
Results.qmd - Quarto markdown file can used display results generated targets pipeline specified Pipeline.qmd build_logs/ - Directory jobs logs stored submit-targets.sh- bash script can used run targets pipeline, Pipeline.qmd knit/rendered. script can run directly submission host.","code":"#' \\dontrun{ #' populate_targets_proj(\"test\") #' }"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"submit-targets-sh","dir":"Articles","previous_headings":"","what":"submit-targets.sh","title":"Create a New Targets Pipeline for LPC","text":"script used actually submit pipeline LPC Pipeline.qmd knit/rendered. run terminal session root directory project. function script submit pipeline LPC analysis, can run directly submission host (eg. scisub7). script can accept command line arguments, can useful parallelizing pipeline multiple workers/CPUs:","code":"Usage: ./submit-targets.sh [-n NUM_WORKERS] [-j JOB_NAME] [-o OUTPUT_LOG] [-e ERROR_LOG] [-q QUEUE] [-m MEMORY] [-s SLACK] [-h HELP] Submit a job using the LSF scheduler with the specified number of CPUs and memory usage. Options: -n NUM_WORKERS Number of workers (cpu cores) to request for running the targets pipeline (default: 1) -j JOB_NAME Name of the job (default: make_targets) -o OUTPUT_LOG Path to the output log file (default: build_logs/targets_%J.out) -e ERROR_LOG Path to the error log file (default: build_logs/targets_%J.err) -q QUEUE Name of the queue to submit the job to (default: voltron_normal) -m MEMORY Memory usage for the job in megabytes (default: 16000) -s SLACK Enable slack notifications; requires setup using slackr::slack_setup() (default: false) -h HELP Display this help message and exit"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"slack-notifications","dir":"Articles","previous_headings":"submit-targets.sh","what":"Slack Notifications","title":"Create a New Targets Pipeline for LPC","text":"Slack can used automatically notify user pipeline start/finish using -s true command line flag: Slack notifications provided using slackr package. package must configured separately Slack notifications enabled. See https://mrkaye97.github.io/slackr/index.html information slackr setup generation Slack API token.","code":"./submit-targets.sh -s true"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"use-crew-for-parallelization","dir":"Articles","previous_headings":"submit-targets.sh","what":"Use {crew} for parallelization","title":"Create a New Targets Pipeline for LPC","text":"default, populate_targets_proj() creates targets pipeline uses tar_make_clustermq() function execute pipeline using LPC resources. function creates cluster persistent workers, therefore flexible terms varying number resources allocated complete individual targets. recently, crew crew.cluster packages enabled use heterogenous workers (https://books.ropensci.org/targets/crew.html#heterogeneous-workers), can deployed either locally HPC resources. use_crew_lsf() function designed return block code rapidly enable use heterogeneous workers Penn LPC. default, function creates workers submit different queues (eg. voltron_normal, voltron_long), allocate different resources (eg. 
“normal” worker use 1 core 16GB memory, “long” worker use 1 core 10GB memory).","code":"#' \\dontrun{ #' use_crew_lsf() #' }"},{"path":"https://mglev1n.github.io/levinmisc/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Michael Levin. Author, maintainer.","code":""},{"path":"https://mglev1n.github.io/levinmisc/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Levin M (2023). levinmisc: Miscellaneous Convenience Functions. https://github.com/mglev1n/levinmisc, https://mglev1n.github.io/levinmisc/.","code":"@Manual{, title = {levinmisc: Miscellaneous Convenience Functions}, author = {Michael Levin}, year = {2023}, note = {https://github.com/mglev1n/levinmisc, https://mglev1n.github.io/levinmisc/}, }"},{"path":"https://mglev1n.github.io/levinmisc/index.html","id":"levinmisc","dir":"","previous_headings":"","what":"Miscellaneous Convenience Functions","title":"Miscellaneous Convenience Functions","text":"set miscellaneous convenience functions.","code":""},{"path":"https://mglev1n.github.io/levinmisc/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Miscellaneous Convenience Functions","text":"can install development version levinmisc GitHub :","code":"# install.packages(\"devtools\") devtools::install_github(\"mglev1n/levinmisc\")"},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":null,"dir":"Reference","previous_headings":"","what":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"function can used rapidly add rsids GWAS summary statistics dataframe containing genomic coordinates (eg. chromosome position). 
rapid function explicitly account differences variants position, strand flips, etc.)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"","code":"annotate_rsids( df, dbSNP = SNPlocs.Hsapiens.dbSNP144.GRCh37::SNPlocs.Hsapiens.dbSNP144.GRCh37, chrom_col = Chromosome, pos_col = Position )"},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"df Dataframe containing genomic coordinates annotate rsid dbSNP Bioconductor object containing SNP locations alleles used annotation (default: SNPlocs.Hsapiens.dbSNP144.GRCh37::SNPlocs.Hsapiens.dbSNP144.GRCh37) chrom_col Chromosome column pos_col Position column","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"dataframe containing original contents, additional rsid column.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"","code":"if (FALSE) { annotate_rsids(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":null,"dir":"Reference","previous_headings":"","what":"Perform Bayesian finemapping — calc_credset","title":"Perform Bayesian finemapping — calc_credset","text":"Description","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Perform Bayesian finemapping — calc_credset","text":"","code":"calc_credset( df, locus_marker_col = locus_marker, effect_col = effect, se_col = std_err, samplesize_col = samplesize, cred_interval = 0.99 )"},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Perform Bayesian finemapping — calc_credset","text":"df Dataframe containing GWAS summary statistics locus_marker_col Column containing locus-level identifier effect_col Column containing effect estimates se_col Column containing standard errors fo effect estimates samplesize_col Column containing sample sizes cred_interval Credible interval fine-mapped credible sets (default = 0.99; 0.95 another common artbitrarily determined interval)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Perform Bayesian finemapping — calc_credset","text":"data.frame containing credible sets locus. 
variant within credible set, prior probability casual variant provided.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Perform Bayesian finemapping — calc_credset","text":"","code":"if (FALSE) { calc_credset(gwas_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":null,"dir":"Reference","previous_headings":"","what":"Run Bayesian enumeration colocalization using Coloc — coloc_run","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"function wrapper around coloc::coloc.abf() takes dataframe input, performs colocalization single-causal-variant assumption. Coloc described Giambartolomei et al. (PLOS Genetics 2014; https://doi.org/10.1371/journal.pgen.1004383).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"","code":"coloc_run( df, trait_col = trait, variant_col = rsid, beta_col = beta, se_col = se, samplesize_col = samplesize, maf_col = maf, type_col = type, case_prop_col = case_prop, p1 = 1e-04, p2 = 1e-04, p12 = 1e-05, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"df Dataframe containing summary statistics single locus two traits \"long\" format, one row per variant per trait. trait_col Column containing trait names variant_col Column containing unique variant identifiers (Eg. rsids, chr:pos) beta_col Column containing effect estimates se_col Column containing standard errors samplesize_col Column containing sample sizes maf_col Column containing minor allele frequencies type_col Column containing type trait (\"quant\" quantitative traits, \"cc\" binary traits) case_prop_col Column containing proportion cases case control studies; column ignored quantitative traits p1 Prior probability SNP associated trait 1, default 1e-4 p2 Prior probability SNP associated trait 2, default 1e-4 p12 Prior probability SNP associated traits, default 1e-5 ... Arguments passed coloc::coloc.abf()","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"list containing coloc results. 
summary named vector containing number snps, posterior probabilities 5 colocalization hypotheses results annotated version input data containing log approximate Bayes Factors posterior probability SNP causal H4 true.","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"","code":"if (FALSE) { coloc_run(locus_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a Manhattan Plot — gg_manhattan_df","title":"Create a Manhattan Plot — gg_manhattan_df","text":"function wrapper around ggfastman::fast_manhattan allows creation Manhattan plot dataframe containing GWAS summary statistics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a Manhattan Plot — gg_manhattan_df","text":"","code":"gg_manhattan_df( sumstats_df, annotation_df = NULL, chr_col = chromosome, pos_col = position, pval_col = p_value, pval_threshold = 0.001, label_col = gene, build = \"hg19\", color1 = \"#045ea7\", color2 = \"#82afd3\", speed = \"slow\", ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a Manhattan Plot — gg_manhattan_df","text":"sumstats_df Dataframe containing GWAS summary statistics annotation_df Optional dataframe containing chromosome, position, annotation labels chr_col Name chromosome column pos_col Name position column pval_col Name p-value column pval_threshold Threshold plotting p-values (p-values greater value excluded plot; default = 0.001) label_col Name column annotation_df containing annotations include plot build (string) One \"hg18\", \"hg19\", \"hg38\" (passed ggfastman) color1 (string) Color odd-numbered chromosomes (passed ggfastman) color2 (string) Color even-numbered chromosomes (passed ggfastman) speed (string) One \"slow\", \"fast\", \"ultrafast\"; passed ggfastman control plotting speed ... 
Arguments passed ggfastman::fast_manhattan","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a Manhattan Plot — gg_manhattan_df","text":"ggplot2 object","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a Manhattan Plot — gg_manhattan_df","text":"","code":"if (FALSE) { gg_manhattan_df(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a QQ plot — gg_qq_df","title":"Create a QQ plot — gg_qq_df","text":"function wrapper around ggfastman::fast_qq allows creation QQ plot dataframe containing GWAS summary statistics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a QQ plot — gg_qq_df","text":"","code":"gg_qq_df(sumstats_df, pval_col = p_value, ...)"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a QQ plot — gg_qq_df","text":"sumstats_df Dataframe containing GWAS summary statistics pval_col Name p-value column ... Arguments passed ggfastman::fast_qq","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a QQ plot — gg_qq_df","text":"ggplot2 object","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a QQ plot — gg_qq_df","text":"","code":"if (FALSE) { gg_qq_df(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"function convenience wrapper around hyprcoloc::hyprcoloc() takes dataframe input, performs mutli-trait colocalization. Details HyPrColoc method described Foley et al. (Nature Communications 2021; https://doi.org/10.1038/s41467-020-20885-8).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"","code":"hyprcoloc_df( df, trait_col = trait, snp_col = rsid, beta_col = beta, se_col = se, type_col = type, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"df Dataframe containing summary statistics single locus, \"long\" format, one row per variant per trait. trait_col Column containing trait names snp_col Column containing variant names (eg. rsid, marker_name), consistent across studies beta_col Column containing effect estimates GWAS se_col Column containing standard errors effect estimates type_col Column containing \"type\" trait - column contain 1 binary traits, 0 others ... 
Arguments passed hyprcoloc::hyprcoloc()","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"list containing data.frame HyPrColoc results: row cluster colocalized traits coded NA (colocalization identified)","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"","code":"if (FALSE) { hyprcoloc_df(gwas_results_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/ldak_h2.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate heritability using LDAK — ldak_h2","title":"Calculate heritability using LDAK — ldak_h2","text":"function wraps LDAK, command-line tool estimating heritability. tool associated reference files can download LDAK website (https://ldak.org/). method described Zhang et al. (Nature Communications 2021; %","title":"Pipe operator — %>%","text":"See magrittr::%>% details.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pipe operator — %>%","text":"","code":"lhs %>% rhs"},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pipe operator — %>%","text":"lhs value magrittr placeholder. rhs function call using magrittr semantics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pipe operator — %>%","text":"result calling rhs(lhs).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a minimal targets template in the current project — populate_targets_proj","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"function creates minimal targets template current directory. includes creating Pipelines.qmd file containing boilerplate running analyses, Results.qmd file can used visualize results. Parallelization pipeline implemented using targets::tar_make_clustermq(), using pre-filled using parameters specific LPC system Penn.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"","code":"populate_targets_proj(title, log_folder = \"build_logs\", overwrite = FALSE)"},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"title (character) base name project files (eg. 
\"title-Pipeline.qmd\" \"title-Results.qmd\") log_folder (character) directory LSF logs overwrite (logical) overwrite existing template files","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"","code":"if (FALSE) { populate_targets_proj(\"test\") }"},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":null,"dir":"Reference","previous_headings":"","what":"Render DataTable to HTML — render_datatable","title":"Render DataTable to HTML — render_datatable","text":"function wrapper around DT::datatable function, containing useful defaults. function particularly useful provide interactivity (eg. sorting, saving) tables rendering documents using RMarkdown Quarto.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Render DataTable to HTML — render_datatable","text":"","code":"render_datatable( df, extensions = c(\"Buttons\"), class = c(\"compact\", \"stripe\", \"hover\", \"row-border\"), rownames = FALSE, options = list(dom = \"Bfrtip\", buttons = c(\"copy\", \"csv\"), scrollX = TRUE, scrollY = TRUE), height = 400, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Render DataTable to HTML — render_datatable","text":"df Dataframe render extensions Extensions parameter passed DT::datatable class Class parameters passed DT::datatable rownames (logical) Include rownames output (Default = FALSE) options List options passed DT::datatable height Height output (Default = 400) ... 
Additional arguments passed DT::datatable","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Render DataTable to HTML — render_datatable","text":"HTML widget","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Render DataTable to HTML — render_datatable","text":"","code":"render_datatable(datasets::mtcars) {\"x\":{\"filter\":\"none\",\"vertical\":false,\"extensions\":[\"Buttons\"],\"data\":[[21,21,22.8,21.4,18.7,18.1,14.3,24.4,22.8,19.2,17.8,16.4,17.3,15.2,10.4,10.4,14.7,32.4,30.4,33.9,21.5,15.5,15.2,13.3,19.2,27.3,26,30.4,15.8,19.7,15,21.4],[6,6,4,6,8,6,8,4,4,6,6,8,8,8,8,8,8,4,4,4,4,8,8,8,8,4,4,4,8,6,8,4],[160,160,108,258,360,225,360,146.7,140.8,167.6,167.6,275.8,275.8,275.8,472,460,440,78.7,75.7,71.09999999999999,120.1,318,304,350,400,79,120.3,95.09999999999999,351,145,301,121],[110,110,93,110,175,105,245,62,95,123,123,180,180,180,205,215,230,66,52,65,97,150,150,245,175,66,91,113,264,175,335,109],[3.9,3.9,3.85,3.08,3.15,2.76,3.21,3.69,3.92,3.92,3.92,3.07,3.07,3.07,2.93,3,3.23,4.08,4.93,4.22,3.7,2.76,3.15,3.73,3.08,4.08,4.43,3.77,4.22,3.62,3.54,4.11],[2.62,2.875,2.32,3.215,3.44,3.46,3.57,3.19,3.15,3.44,3.44,4.07,3.73,3.78,5.25,5.424,5.345,2.2,1.615,1.835,2.465,3.52,3.435,3.84,3.845,1.935,2.14,1.513,3.17,2.77,3.57,2.78],[16.46,17.02,18.61,19.44,17.02,20.22,15.84,20,22.9,18.3,18.9,17.4,17.6,18,17.98,17.82,17.42,19.47,18.52,19.9,20.01,16.87,17.3,15.41,17.05,18.9,16.7,16.9,14.5,15.5,14.6,18.6],[0,0,1,1,0,1,0,1,1,1,1,0,0,0,0,0,0,1,1,1,1,0,0,0,0,1,0,1,0,0,0,1],[1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,1,1,1,1,1,1,1],[4,4,4,3,3,3,3,4,4,4,4,3,3,3,3,3,3,4,4,4,3,3,3,3,3,4,5,5,5,5,5,4],[4,4,1,1,2,1,4,2,2,4,4,3,3,3,4,4,4,1,2,1,1,2,2,4,2,1,2,2,4,6,8,2]],\"container\":\"\\n \\n \\n
mpg<\\/th>\\n cyl<\\/th>\\n disp<\\/th>\\n hp<\\/th>\\n drat<\\/th>\\n wt<\\/th>\\n qsec<\\/th>\\n vs<\\/th>\\n am<\\/th>\\n gear<\\/th>\\n carb<\\/th>\\n <\\/tr>\\n <\\/thead>\\n<\\/table>\",\"options\":{\"dom\":\"Bfrtip\",\"buttons\":[\"copy\",\"csv\"],\"scrollX\":true,\"scrollY\":true,\"columnDefs\":[{\"className\":\"dt-right\",\"targets\":[0,1,2,3,4,5,6,7,8,9,10]}],\"order\":[],\"autoWidth\":false,\"orderClasses\":false}},\"evals\":[],\"jsHooks\":[]}"},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":null,"dir":"Reference","previous_headings":"","what":"Integrate PrediXcan data across tissues — s_multixcan","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"function wrapper around S-MultiXcan, tool identifying genes associate trait using GWAS summary statistics. approach leverages sharing eQTLs acorss tissues improve upon tissue-specific TWAS-methods like S-PrediXcan. method uses data generated s_predixcan(). S-MultiXcan method described Barbeira et al. (PLOS Genetics 2019; https://doi.org/10.1371/journal.pgen.1007889). MetaXcan tools can found Github (https://github.com/hakyimlab/MetaXcan) PredictDB (https://predictdb.org/).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"","code":"s_multixcan( df, models_folder, models_name_filter = \".*db\", models_name_pattern, snp = SNP, effect_allele = effect_allele, other_allele = other_allele, beta = beta, eaf = eaf, chr = chr, pos = pos, se = se, pval = pval, samplesize = samplesize, regularization = 0.01, cutoff_condition_number = 30, cutoff_eigen_ratio = 0.001, cutoff_threshold = 0.4, cutoff_trace_ratio = 0.01, metaxcan_folder, metaxcan_filter, metaxcan_file_name_parse_pattern, snp_covariance, metaxcan, data )"},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"df Dataframe containing GWAS summary statistics models_folder Path folder containing MetaXcan models (eg. \"MetaXcan/data/models/eqtl/mashr/\") models_name_filter Filter used identify model files (default = \".*\\\\.db\") models_name_pattern Filter used extract trait name model (default = \"mashr_(.*)\\\\.db\") snp Column containing rsids effect_allele Column containing effect alleles other_allele Column containing non-effect alleles beta Column containing effect estimates eaf Column containing effect allele frequencies chr Column containing chromosomes pos Column containing positions se Column containing standard errors pval Column containing p-values samplesize Column containing samplesize regularization MetaXcan regularization (default = 0.01) cutoff_condition_number Numer eigen values use truncating SVD components (default = 30) cutoff_eigen_ratio Ratio eigenvalues max eigenvalue, threshold use truncating SVD components. (default = 0.001) cutoff_threshold Threshold variance eigenvalues truncating SVD (default = 0.4) cutoff_trace_ratio Ratio eigenvalues trace, use truncating SVD (default = 0.01) metaxcan_folder Path folder containing S-PrediXcan result files metaxcan_filter Regular expression filter results files metaxcan_file_name_parse_pattern Regular expression get phenotype name model name MetaXcan result files. Assumes first group matched phenotype name, second model name. 
snp_covariance Path file containing S-MultiXcan covariance (eg. \"MetaXcan/data/models/gtex_v8_expression_mashr_snp_smultixcan_covariance.txt.gz\") metaxcan Path MetaXcan (eg. \"MetaXcan/software\") data Path MetaXcan data (eg. \"MetaXcan/data\")","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"dataframe containing S-MultiXcan results","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":null,"dir":"Reference","previous_headings":"","what":"Run a TWAS using S-PrediXcan — s_predixcan","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"function wrapper around S-PrediXcan, method integrating GWAS summary statistics gene-expression/splicing data identify genes associated trait. PrediXcan/MetaXcan method described Barbeira et al. (Nature Communications 2018; https://doi.org/10.1038/s41467-018-03621-1). MetaXcan tools can found Github (https://github.com/hakyimlab/MetaXcan) PredictDB (https://predictdb.org/). S-PrediXcan run across multiple tissues, results can integrated using s_multixcan().","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"","code":"s_predixcan( df, snp = SNP, effect_allele = effect_allele, other_allele = other_allele, beta = beta, eaf = eaf, chr = chr, pos = pos, se = se, pval = pval, samplesize = samplesize, data, metaxcan, output, model_db_path, model_covariance_path, trait_name )"},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"df Dataframe containing GWAS summary statistics snp Column containing rsid effect_allele Column containing effect allele other_allele Column containing non-effect allele beta Column containing effect size eaf Column containing effect allele frequency chr Column containing chromosome pos Column containing position se Column containing standard error effect estimate pval Column containing p-value samplesize Column containing samplesize data Path MetaXcan data (eg. \"MetaXcan/data\") metaxcan Path MetaXcan (eg. 
\"MetaXcan/software\") output Output directory save S-PrediXcan results model_db_path Path PrediXcan model database model_covariance_path Path PrediXcan model covariance trait_name Name GWAS trait (used name output files)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"dataframe containing S-PrediXcan results","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"","code":"if (FALSE) { s_predixcan(df, data = \"MetaXcan/data\", metaxcan = \"MetaXcan/software\", output = \"/path/to/output\", model_db_path = \"MetaXcan/data/models/eqtl/mashr/mashr_Liver.db\", model_covariance_path = \"MetaXcan/data/models/eqtl/mashr/mashr_Liver.txt.gz\", trait_name = \"GWAS_trait\") }"},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":null,"dir":"Reference","previous_headings":"","what":"Use Crew LSF — use_crew_lsf","title":"Use Crew LSF — use_crew_lsf","text":"function returns template using crew.cluster targets project, enabling parallel execution targets workflow. default, template pre-filled using parameters specific LPC system Penn. default, function creates workers submit different queues (eg. voltron_normal, voltron_long), allocate different resources (eg. \"normal\" worker use 1 core 16GB memory, \"long\" worker use 1 core 10GB memory).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Use Crew LSF — use_crew_lsf","text":"","code":"use_crew_lsf()"},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Use Crew LSF — use_crew_lsf","text":"code block copy/paste targets project","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Use Crew LSF — use_crew_lsf","text":"","code":"if (FALSE) { use_crew_lsf() }"}] +[{"path":"https://mglev1n.github.io/levinmisc/LICENSE.html","id":null,"dir":"","previous_headings":"","what":"MIT License","title":"MIT License","text":"Copyright (c) 2022 Michael Levin Permission hereby granted, free charge, person obtaining copy software associated documentation files (“Software”), deal Software without restriction, including without limitation rights use, copy, modify, merge, publish, distribute, sublicense, /sell copies Software, permit persons Software furnished , subject following conditions: copyright notice permission notice shall included copies substantial portions Software. SOFTWARE PROVIDED “”, WITHOUT WARRANTY KIND, EXPRESS IMPLIED, INCLUDING LIMITED WARRANTIES MERCHANTABILITY, FITNESS PARTICULAR PURPOSE NONINFRINGEMENT. 
EVENT SHALL AUTHORS COPYRIGHT HOLDERS LIABLE CLAIM, DAMAGES LIABILITY, WHETHER ACTION CONTRACT, TORT OTHERWISE, ARISING , CONNECTION SOFTWARE USE DEALINGS SOFTWARE.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"introduction","dir":"Articles","previous_headings":"","what":"Introduction","title":"Create a New Targets Pipeline for LPC","text":"targets package tool developing reproducible research workflows R. Details package motivation tools described detail: https://books.ropensci.org/targets/ https://docs.ropensci.org/targets/.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"example-workflow","dir":"Articles","previous_headings":"Introduction","what":"Example Workflow","title":"Create a New Targets Pipeline for LPC","text":"Create new R project Run levinmisc::populate_targets_proj() function R console - function described detail, initialize project helpful files/folders start building targets pipeline. Modify Pipelines.qmd file specify analyses. targets documentation can useful specifics. Knit/Render Pipelines.qmd convert markdown document series R scripts actually run analyses. Run submit-targets.sh terminal window submit targets pipeline LPC execution. Details possible command line arguments script described . Modify Results.qmd present results analyses. Rendering file allows mix text describing analyses/methods include citations alongside actual results pipeline. targets::tar_read() function used heavily load pre-computed results pipeline.","code":""},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"populate_targets_proj","dir":"Articles","previous_headings":"","what":"populate_targets_proj()","title":"Create a New Targets Pipeline for LPC","text":"populate_targets_proj function can run within new project folder initialize project files/folders necessary deploying targets pipeline LPC Penn. includes creating LSF templates, Pipelines.qmd file containing boilerplate running analyses, Results.qmd file can used visualize results. populate_targets_proj() function creates several files/folders within project directory. .make-targets.sh .clustermq_lsf.tmpl hidden helper files designed user interaction, necessary submission jobs LSF scheduler. files designed edited/used user: Pipeline.qmd - Quarto markdown file can used create Target Markdown document specifies targets pipeline analyses. See: https://books.ropensci.org/targets/literate-programming.html#target-markdown details. Remember knit/render document order generate pipeline. Results.qmd - Quarto markdown file can used display results generated targets pipeline specified Pipeline.qmd build_logs/ - Directory jobs logs stored submit-targets.sh- bash script can used run targets pipeline, Pipeline.qmd knit/rendered. script can run directly submission host.","code":"#' \\dontrun{ #' populate_targets_proj(\"test\") #' }"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"submit-targets-sh","dir":"Articles","previous_headings":"","what":"submit-targets.sh","title":"Create a New Targets Pipeline for LPC","text":"script used actually submit pipeline LPC Pipeline.qmd knit/rendered. run terminal session root directory project. function script submit pipeline LPC analysis, can run directly submission host (eg. scisub7). 
script can accept command line arguments, can useful parallelizing pipeline multiple workers/CPUs:","code":"Usage: ./submit-targets.sh [-n NUM_WORKERS] [-j JOB_NAME] [-o OUTPUT_LOG] [-e ERROR_LOG] [-q QUEUE] [-m MEMORY] [-s SLACK] [-h HELP] Submit a job using the LSF scheduler with the specified number of CPUs and memory usage. Options: -n NUM_WORKERS Number of workers (cpu cores) to request for running the targets pipeline (default: 1) -j JOB_NAME Name of the job (default: make_targets) -o OUTPUT_LOG Path to the output log file (default: build_logs/targets_%J.out) -e ERROR_LOG Path to the error log file (default: build_logs/targets_%J.err) -q QUEUE Name of the queue to submit the job to (default: voltron_normal) -m MEMORY Memory usage for the job in megabytes (default: 16000) -s SLACK Enable slack notifications; requires setup using slackr::slack_setup() (default: false) -h HELP Display this help message and exit"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"slack-notifications","dir":"Articles","previous_headings":"submit-targets.sh","what":"Slack Notifications","title":"Create a New Targets Pipeline for LPC","text":"Slack can used automatically notify user pipeline start/finish using -s true command line flag: Slack notifications provided using slackr package. package must configured separately Slack notifications enabled. See https://mrkaye97.github.io/slackr/index.html information slackr setup generation Slack API token.","code":"./submit-targets.sh -s true"},{"path":"https://mglev1n.github.io/levinmisc/articles/create-a-new-targets-pipeline-for-lpc.html","id":"use-crew-for-parallelization","dir":"Articles","previous_headings":"submit-targets.sh","what":"Use {crew} for parallelization","title":"Create a New Targets Pipeline for LPC","text":"default, populate_targets_proj() creates targets pipeline uses tar_make_clustermq() function execute pipeline using LPC resources. function creates cluster persistent workers, therefore flexible terms varying number resources allocated complete individual targets. recently, crew crew.cluster packages enabled use heterogenous workers (https://books.ropensci.org/targets/crew.html#heterogeneous-workers), can deployed either locally HPC resources. use_crew_lsf() function designed return block code rapidly enable use heterogeneous workers Penn LPC. default, function creates workers submit different queues (eg. voltron_normal, voltron_long), allocate different resources (eg. “normal” worker use 1 core 16GB memory, “long” worker use 1 core 10GB memory).","code":"#' \\dontrun{ #' use_crew_lsf() #' }"},{"path":"https://mglev1n.github.io/levinmisc/authors.html","id":null,"dir":"","previous_headings":"","what":"Authors","title":"Authors and Citation","text":"Michael Levin. Author, maintainer.","code":""},{"path":"https://mglev1n.github.io/levinmisc/authors.html","id":"citation","dir":"","previous_headings":"","what":"Citation","title":"Authors and Citation","text":"Levin M (2023). levinmisc: Miscellaneous Convenience Functions. 
https://github.com/mglev1n/levinmisc, https://mglev1n.github.io/levinmisc/.","code":"@Manual{, title = {levinmisc: Miscellaneous Convenience Functions}, author = {Michael Levin}, year = {2023}, note = {https://github.com/mglev1n/levinmisc, https://mglev1n.github.io/levinmisc/}, }"},{"path":"https://mglev1n.github.io/levinmisc/index.html","id":"levinmisc","dir":"","previous_headings":"","what":"Miscellaneous Convenience Functions","title":"Miscellaneous Convenience Functions","text":"set miscellaneous convenience functions.","code":""},{"path":"https://mglev1n.github.io/levinmisc/index.html","id":"installation","dir":"","previous_headings":"","what":"Installation","title":"Miscellaneous Convenience Functions","text":"can install development version levinmisc GitHub :","code":"# install.packages(\"devtools\") devtools::install_github(\"mglev1n/levinmisc\")"},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":null,"dir":"Reference","previous_headings":"","what":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"function can used rapidly add rsids GWAS summary statistics dataframe containing genomic coordinates (eg. chromosome position). rapid function explicitly account differences variants position, strand flips, etc.)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"","code":"annotate_rsids( df, dbSNP = SNPlocs.Hsapiens.dbSNP144.GRCh37::SNPlocs.Hsapiens.dbSNP144.GRCh37, chrom_col = Chromosome, pos_col = Position )"},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"df Dataframe containing genomic coordinates annotate rsid dbSNP Bioconductor object containing SNP locations alleles used annotation (default: SNPlocs.Hsapiens.dbSNP144.GRCh37::SNPlocs.Hsapiens.dbSNP144.GRCh37) chrom_col Chromosome column pos_col Position column","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"dataframe containing original contents, additional rsid column.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/annotate_rsids.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Annotate a dataframe containing genomic coordinates with rsids — annotate_rsids","text":"","code":"if (FALSE) { annotate_rsids(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":null,"dir":"Reference","previous_headings":"","what":"Perform Bayesian finemapping — calc_credset","title":"Perform Bayesian finemapping — calc_credset","text":"Description","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Perform Bayesian finemapping — calc_credset","text":"","code":"calc_credset( df, locus_marker_col = locus_marker, effect_col = effect, se_col = std_err, samplesize_col = 
samplesize, cred_interval = 0.99 )"},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Perform Bayesian finemapping — calc_credset","text":"df Dataframe containing GWAS summary statistics locus_marker_col Column containing locus-level identifier effect_col Column containing effect estimates se_col Column containing standard errors fo effect estimates samplesize_col Column containing sample sizes cred_interval Credible interval fine-mapped credible sets (default = 0.99; 0.95 another common artbitrarily determined interval)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Perform Bayesian finemapping — calc_credset","text":"data.frame containing credible sets locus. variant within credible set, prior probability casual variant provided.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/calc_credset.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Perform Bayesian finemapping — calc_credset","text":"","code":"if (FALSE) { calc_credset(gwas_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":null,"dir":"Reference","previous_headings":"","what":"Run Bayesian enumeration colocalization using Coloc — coloc_run","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"function wrapper around coloc::coloc.abf() takes dataframe input, performs colocalization single-causal-variant assumption. Coloc described Giambartolomei et al. (PLOS Genetics 2014; https://doi.org/10.1371/journal.pgen.1004383).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"","code":"coloc_run( df, trait_col = trait, variant_col = rsid, beta_col = beta, se_col = se, samplesize_col = samplesize, maf_col = maf, type_col = type, case_prop_col = case_prop, p1 = 1e-04, p2 = 1e-04, p12 = 1e-05, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"df Dataframe containing summary statistics single locus two traits \"long\" format, one row per variant per trait. trait_col Column containing trait names variant_col Column containing unique variant identifiers (Eg. rsids, chr:pos) beta_col Column containing effect estimates se_col Column containing standard errors samplesize_col Column containing sample sizes maf_col Column containing minor allele frequencies type_col Column containing type trait (\"quant\" quantitative traits, \"cc\" binary traits) case_prop_col Column containing proportion cases case control studies; column ignored quantitative traits p1 Prior probability SNP associated trait 1, default 1e-4 p2 Prior probability SNP associated trait 2, default 1e-4 p12 Prior probability SNP associated traits, default 1e-5 ... Arguments passed coloc::coloc.abf()","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"list containing coloc results. 
summary named vector containing number snps, posterior probabilities 5 colocalization hypotheses results annotated version input data containing log approximate Bayes Factors posterior probability SNP causal H4 true.","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/coloc_run.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run Bayesian enumeration colocalization using Coloc — coloc_run","text":"","code":"if (FALSE) { coloc_run(locus_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a Manhattan Plot — gg_manhattan_df","title":"Create a Manhattan Plot — gg_manhattan_df","text":"function wrapper around ggfastman::fast_manhattan allows creation Manhattan plot dataframe containing GWAS summary statistics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a Manhattan Plot — gg_manhattan_df","text":"","code":"gg_manhattan_df( sumstats_df, annotation_df = NULL, chr_col = chromosome, pos_col = position, pval_col = p_value, pval_threshold = 0.001, label_col = gene, build = \"hg19\", color1 = \"#045ea7\", color2 = \"#82afd3\", speed = \"slow\", ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a Manhattan Plot — gg_manhattan_df","text":"sumstats_df Dataframe containing GWAS summary statistics annotation_df Optional dataframe containing chromosome, position, annotation labels chr_col Name chromosome column pos_col Name position column pval_col Name p-value column pval_threshold Threshold plotting p-values (p-values greater value excluded plot; default = 0.001) label_col Name column annotation_df containing annotations include plot build (string) One \"hg18\", \"hg19\", \"hg38\" (passed ggfastman) color1 (string) Color odd-numbered chromosomes (passed ggfastman) color2 (string) Color even-numbered chromosomes (passed ggfastman) speed (string) One \"slow\", \"fast\", \"ultrafast\"; passed ggfastman control plotting speed ... 
Arguments passed ggfastman::fast_manhattan","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a Manhattan Plot — gg_manhattan_df","text":"ggplot2 object","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_manhattan_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a Manhattan Plot — gg_manhattan_df","text":"","code":"if (FALSE) { gg_manhattan_df(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a QQ plot — gg_qq_df","title":"Create a QQ plot — gg_qq_df","text":"function wrapper around ggfastman::fast_qq allows creation QQ plot dataframe containing GWAS summary statistics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a QQ plot — gg_qq_df","text":"","code":"gg_qq_df(sumstats_df, pval_col = p_value, ...)"},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a QQ plot — gg_qq_df","text":"sumstats_df Dataframe containing GWAS summary statistics pval_col Name p-value column ... Arguments passed ggfastman::fast_qq","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Create a QQ plot — gg_qq_df","text":"ggplot2 object","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/gg_qq_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a QQ plot — gg_qq_df","text":"","code":"if (FALSE) { gg_qq_df(sumstats_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":null,"dir":"Reference","previous_headings":"","what":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"function convenience wrapper around hyprcoloc::hyprcoloc() takes dataframe input, performs mutli-trait colocalization. Details HyPrColoc method described Foley et al. (Nature Communications 2021; https://doi.org/10.1038/s41467-020-20885-8).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"","code":"hyprcoloc_df( df, trait_col = trait, snp_col = rsid, beta_col = beta, se_col = se, type_col = type, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"df Dataframe containing summary statistics single locus, \"long\" format, one row per variant per trait. trait_col Column containing trait names snp_col Column containing variant names (eg. rsid, marker_name), consistent across studies beta_col Column containing effect estimates GWAS se_col Column containing standard errors effect estimates type_col Column containing \"type\" trait - column contain 1 binary traits, 0 others ... 
Arguments passed hyprcoloc::hyprcoloc()","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"list containing data.frame HyPrColoc results: row cluster colocalized traits coded NA (colocalization identified)","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/hyprcoloc_df.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run multi-trait colocalization using HyPrColoc — hyprcoloc_df","text":"","code":"if (FALSE) { hyprcoloc_df(gwas_results_df) }"},{"path":"https://mglev1n.github.io/levinmisc/reference/ldak_h2.html","id":null,"dir":"Reference","previous_headings":"","what":"Calculate heritability using LDAK — ldak_h2","title":"Calculate heritability using LDAK — ldak_h2","text":"function wraps LDAK, command-line tool estimating heritability. tool associated reference files can download LDAK website (https://ldak.org/). method described Zhang et al. (Nature Communications 2021; %","title":"Pipe operator — %>%","text":"See magrittr::%>% details.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Pipe operator — %>%","text":"","code":"lhs %>% rhs"},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Pipe operator — %>%","text":"lhs value magrittr placeholder. rhs function call using magrittr semantics.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/pipe.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Pipe operator — %>%","text":"result calling rhs(lhs).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":null,"dir":"Reference","previous_headings":"","what":"Create a minimal targets template in the current project — populate_targets_proj","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"function creates minimal targets template current directory. includes creating Pipelines.qmd file containing boilerplate running analyses, Results.qmd file can used visualize results. Parallelization pipeline implemented using targets::tar_make_clustermq(), using pre-filled using parameters specific LPC system Penn.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"","code":"populate_targets_proj(title, log_folder = \"build_logs\", overwrite = FALSE)"},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"title (character) base name project files (eg. 
\"title-Pipeline.qmd\" \"title-Results.qmd\") log_folder (character) directory LSF logs overwrite (logical) overwrite existing template files","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/populate_targets_proj.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Create a minimal targets template in the current project — populate_targets_proj","text":"","code":"if (FALSE) { populate_targets_proj(\"test\") }"},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":null,"dir":"Reference","previous_headings":"","what":"Render DataTable to HTML — render_datatable","title":"Render DataTable to HTML — render_datatable","text":"function wrapper around DT::datatable function, containing useful defaults. function particularly useful provide interactivity (eg. sorting, saving) tables rendering documents using RMarkdown Quarto.","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Render DataTable to HTML — render_datatable","text":"","code":"render_datatable( df, extensions = c(\"Buttons\"), class = c(\"compact\", \"stripe\", \"hover\", \"row-border\"), rownames = FALSE, options = list(dom = \"Bfrtip\", buttons = c(\"copy\", \"csv\"), scrollX = TRUE, scrollY = TRUE), height = 400, ... )"},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Render DataTable to HTML — render_datatable","text":"df Dataframe render extensions Extensions parameter passed DT::datatable class Class parameters passed DT::datatable rownames (logical) Include rownames output (Default = FALSE) options List options passed DT::datatable height Height output (Default = 400) ... 
Additional arguments passed DT::datatable","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Render DataTable to HTML — render_datatable","text":"HTML widget","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/render_datatable.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Render DataTable to HTML — render_datatable","text":"","code":"render_datatable(datasets::mtcars)"}
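A hedged variation on the render_datatable() example above; the extra export button and rownames setting are ordinary DT::datatable options passed through by the wrapper, not levinmisc-specific behavior.

library(levinmisc)

# Same defaults as the usage block above, but keep row names and add an
# Excel export button (options are forwarded to DT::datatable)
render_datatable(
  datasets::mtcars,
  rownames = TRUE,
  options = list(
    dom = "Bfrtip",
    buttons = c("copy", "csv", "excel"),
    scrollX = TRUE,
    scrollY = TRUE
  )
)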
,{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":null,"dir":"Reference","previous_headings":"","what":"Integrate PrediXcan data across tissues — s_multixcan","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"function wrapper around S-MultiXcan, tool identifying genes associate trait using GWAS summary statistics. approach leverages sharing eQTLs across tissues improve upon tissue-specific TWAS-methods like S-PrediXcan. method uses data generated s_predixcan(). S-MultiXcan method described Barbeira et al. (PLOS Genetics 2019; https://doi.org/10.1371/journal.pgen.1007889). MetaXcan tools can found Github (https://github.com/hakyimlab/MetaXcan) PredictDB (https://predictdb.org/).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"","code":"s_multixcan( df, models_folder, models_name_filter = \".*db\", models_name_pattern, snp = SNP, effect_allele = effect_allele, other_allele = other_allele, beta = beta, eaf = eaf, chr = chr, pos = pos, se = se, pval = pval, samplesize = samplesize, regularization = 0.01, cutoff_condition_number = 30, cutoff_eigen_ratio = 0.001, cutoff_threshold = 0.4, cutoff_trace_ratio = 0.01, metaxcan_folder, metaxcan_filter, metaxcan_file_name_parse_pattern, snp_covariance, metaxcan, data )"},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"df Dataframe containing GWAS summary statistics models_folder Path folder containing MetaXcan models (eg. \"MetaXcan/data/models/eqtl/mashr/\") models_name_filter Filter used identify model files (default = \".*\\\\.db\") models_name_pattern Filter used extract trait name model (default = \"mashr_(.*)\\\\.db\") snp Column containing rsids effect_allele Column containing effect alleles other_allele Column containing non-effect alleles beta Column containing effect estimates eaf Column containing effect allele frequencies chr Column containing chromosomes pos Column containing positions se Column containing standard errors pval Column containing p-values samplesize Column containing samplesize regularization MetaXcan regularization (default = 0.01) cutoff_condition_number Number eigen values use truncating SVD components (default = 30) cutoff_eigen_ratio Ratio eigenvalues max eigenvalue, threshold use truncating SVD components. (default = 0.001) cutoff_threshold Threshold variance eigenvalues truncating SVD (default = 0.4) cutoff_trace_ratio Ratio eigenvalues trace, use truncating SVD (default = 0.01) metaxcan_folder Path folder containing S-PrediXcan result files metaxcan_filter Regular expression filter results files metaxcan_file_name_parse_pattern Regular expression get phenotype name model name MetaXcan result files. Assumes first group matched phenotype name, second model name. 
snp_covariance Path file containing S-MultiXcan covariance (eg. \"MetaXcan/data/models/gtex_v8_expression_mashr_snp_smultixcan_covariance.txt.gz\") metaxcan Path MetaXcan (eg. \"MetaXcan/software\") data Path MetaXcan data (eg. \"MetaXcan/data\")","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_multixcan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Integrate PrediXcan data across tissues — s_multixcan","text":"dataframe containing S-MultiXcan results","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":null,"dir":"Reference","previous_headings":"","what":"Run a TWAS using S-PrediXcan — s_predixcan","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"function wrapper around S-PrediXcan, method integrating GWAS summary statistics gene-expression/splicing data identify genes associated trait. PrediXcan/MetaXcan method described Barbeira et al. (Nature Communications 2018; https://doi.org/10.1038/s41467-018-03621-1). MetaXcan tools can found Github (https://github.com/hakyimlab/MetaXcan) PredictDB (https://predictdb.org/). S-PrediXcan run across multiple tissues, results can integrated using s_multixcan().","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"","code":"s_predixcan( df, snp = SNP, effect_allele = effect_allele, other_allele = other_allele, beta = beta, eaf = eaf, chr = chr, pos = pos, se = se, pval = pval, samplesize = samplesize, data, metaxcan, output, model_db_path, model_covariance_path, trait_name )"},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"arguments","dir":"Reference","previous_headings":"","what":"Arguments","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"df Dataframe containing GWAS summary statistics snp Column containing rsid effect_allele Column containing effect allele other_allele Column containing non-effect allele beta Column containing effect size eaf Column containing effect allele frequency chr Column containing chromosome pos Column containing position se Column containing standard error effect estimate pval Column containing p-value samplesize Column containing samplesize data Path MetaXcan data (eg. \"MetaXcan/data\") metaxcan Path MetaXcan (eg. 
\"MetaXcan/software\") output Output directory save S-PrediXcan results model_db_path Path PrediXcan model database model_covariance_path Path PrediXcan model covariance trait_name Name GWAS trait (used name output files)","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"dataframe containing S-PrediXcan results","code":""},{"path":[]},{"path":"https://mglev1n.github.io/levinmisc/reference/s_predixcan.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Run a TWAS using S-PrediXcan — s_predixcan","text":"","code":"if (FALSE) { s_predixcan(df, data = \"MetaXcan/data\", metaxcan = \"MetaXcan/software\", output = \"/path/to/output\", model_db_path = \"MetaXcan/data/models/eqtl/mashr/mashr_Liver.db\", model_covariance_path = \"MetaXcan/data/models/eqtl/mashr/mashr_Liver.txt.gz\", trait_name = \"GWAS_trait\") }"},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":null,"dir":"Reference","previous_headings":"","what":"Use Crew LSF — use_crew_lsf","title":"Use Crew LSF — use_crew_lsf","text":"function returns template using crew.cluster targets project, enabling parallel execution targets workflow. default, template pre-filled using parameters specific LPC system Penn. default, function creates workers submit different queues (eg. voltron_normal, voltron_long), allocate different resources (eg. \"normal\" worker use 1 core 16GB memory, \"long\" worker use 1 core 10GB memory).","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"ref-usage","dir":"Reference","previous_headings":"","what":"Usage","title":"Use Crew LSF — use_crew_lsf","text":"","code":"use_crew_lsf()"},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"value","dir":"Reference","previous_headings":"","what":"Value","title":"Use Crew LSF — use_crew_lsf","text":"code block copy/paste targets project","code":""},{"path":"https://mglev1n.github.io/levinmisc/reference/use_crew_lsf.html","id":"ref-examples","dir":"Reference","previous_headings":"","what":"Examples","title":"Use Crew LSF — use_crew_lsf","text":"","code":"if (FALSE) { use_crew_lsf() }"}]