
scfetch - Access and Format Single-cell RNA-seq Datasets from Public Resources


Introduction

scfetch is designed to help users download and prepare single-cell datasets from public resources. It can be used to:

  • Download fastq files from GEO/SRA and format them to the standard style recognized by 10x software (e.g. CellRanger).
  • Download bam files from GEO/SRA, including original 10x-generated bam files (with custom tags) as well as normal bam files, and convert bam files to fastq files.
  • Download scRNA-seq matrices and annotations (e.g. cell type) from GEO, PanglaoDB and the UCSC Cell Browser, and load the downloaded matrices into Seurat.
  • Download processed objects from Zenodo, CELLxGENE and the Human Cell Atlas.
  • Convert between widely used single-cell objects (SeuratObject, AnnData, SingleCellExperiment, CellDataSet/cell_data_set and loom).

Framework

scfetch_framework

Installation

scfetch is an R package available on CRAN. To install the package, start R and enter:

# install via CRAN
install.packages("scfetch")

# you can also install the development version from GitHub
# install.packages("devtools")
devtools::install_github("showteeth/scfetch")

Some packages are used conditionally; they can be installed from GitHub:

# install.packages("devtools") # in case you have not installed it
devtools::install_github("alexvpickering/GEOfastq") # download fastq
devtools::install_github("cellgeni/sceasy") # format conversion
devtools::install_github("mojaveazure/seurat-disk") # format conversion
devtools::install_github("satijalab/seurat-wrappers") # format conversion

For installation issues, please refer to INSTALL.md.

For format conversion and for downloading fastq/bam files, scfetch requires additional tools, which you can install with:

# install additional packages for format conversion
conda install -c bioconda loompy anndata 
# or
pip install anndata loompy

# install additional packages for downloading fastq/bam files
conda install -c bioconda 'parallel-fastq-dump' 'sra-tools>=3.0.0'

# install bamtofastq, the following installs linux version
wget --quiet https://github.com/10XGenomics/bamtofastq/releases/download/v1.4.1/bamtofastq_linux && chmod +x bamtofastq_linux

Docker

We also provide a docker image:

# pull the image
docker pull soyabean/scfetch:1.6

# run the image
docker run --rm -p 8888:8787 -e PASSWORD=passwd -e ROOT=TRUE -it soyabean/scfetch:1.6

Notes:

  • After running the above commands, open a browser and go to http://localhost:8888/; the user name is rstudio and the password is passwd (set by -e PASSWORD=passwd).
  • If port 8888 is already in use, change the host port in -p 8888:8787.
  • The conda.path in ExportSeurat and ImportSeurat can be set to /opt/conda (see the sketch after these notes).
  • sra-tools can be found in /opt/sratoolkit.3.0.6-ubuntu64/bin.
  • The parallel-fastq-dump path: /opt/conda/bin/parallel-fastq-dump.
  • The bamtofastq_linux path: /opt/bamtofastq_linux.
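
For example, a minimal sketch of pointing scfetch at the tools bundled in the image (the accession, objects and output paths are illustrative; the argument names prefetch.path and conda.path follow the usage sections below):

library(scfetch)
library(Seurat) # provides the pbmc_small example object

# download sra files with the bundled sra-tools
runs <- ExtractRun(acce = "GSE130636", platform = "GPL20301")
down <- DownloadSRA(
  gsm.df = runs,
  prefetch.path = "/opt/sratoolkit.3.0.6-ubuntu64/bin/prefetch",
  out.folder = "/home/rstudio/sra" # illustrative output folder
)

# format conversion with the bundled conda environment
ExportSeurat(
  seu.obj = pbmc_small, assay = "RNA", to = "AnnData",
  anndata.file = "/home/rstudio/pbmc_small.h5ad", # illustrative output file
  conda.path = "/opt/conda"
)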

Vignette

Detailed usage is available on the website.


Function list

| Type | Function | Usage |
| --- | --- | --- |
| Download and format fastq | ExtractRun | Extract runs with GEO accession number or GSM number |
| | DownloadSRA | Download sra files |
| | SplitSRA | Split sra files to fastq files and format to 10x standard style |
| Download and convert bam | DownloadBam | Download bam (supports 10x original bam) |
| | Bam2Fastq | Convert bam files to fastq files |
| Download matrix and load to Seurat | ExtractGEOMeta | Extract sample metadata from GEO |
| | ParseGEO | Download matrix from GEO and load to Seurat |
| | ExtractPanglaoDBMeta | Extract sample metadata from PanglaoDB |
| | ExtractPanglaoDBComposition | Extract cell type composition of PanglaoDB datasets |
| | ParsePanglaoDB | Download matrix from PanglaoDB and load to Seurat |
| | ShowCBDatasets | Show all available datasets in UCSC Cell Browser |
| | ExtractCBDatasets | Extract UCSC Cell Browser datasets with attributes |
| | ExtractCBComposition | Extract cell type composition of UCSC Cell Browser datasets |
| | ParseCBDatasets | Download UCSC Cell Browser datasets and load to Seurat |
| Download objects | ExtractZenodoMeta | Extract sample metadata from Zenodo with DOIs |
| | ParseZenodo | Download rds/rdata/h5ad/loom from Zenodo with DOIs |
| | ShowCELLxGENEDatasets | Show all available datasets in CELLxGENE |
| | ExtractCELLxGENEMeta | Extract metadata of CELLxGENE datasets with attributes |
| | ParseCELLxGENE | Download rds/h5ad from CELLxGENE |
| | ShowHCAProjects | Show all available projects in Human Cell Atlas |
| | ExtractHCAMeta | Extract metadata of Human Cell Atlas projects with attributes |
| | ParseHCA | Download rds/rdata/h5/h5ad/loom from Human Cell Atlas |
| Convert between different single-cell objects | ExportSeurat | Convert SeuratObject to AnnData, SingleCellExperiment, CellDataSet/cell_data_set and loom |
| | ImportSeurat | Convert AnnData, SingleCellExperiment, CellDataSet/cell_data_set and loom to SeuratObject |
| | SCEAnnData | Convert between SingleCellExperiment and AnnData |
| | SCELoom | Convert between SingleCellExperiment and loom |
| Summarize datasets based on attributes | StatDBAttribute | Summarize datasets in PanglaoDB, UCSC Cell Browser and CELLxGENE based on attributes |

Usage

Download fastq and bam

Since the downloading process is time-consuming, we only provide the commands here to illustrate the usage.

Download fastq

Prepare run number

For fastq files stored in SRA, scfetch can extract sample information and run numbers from a GEO accession number; alternatively, users can provide a dataframe containing the run numbers of the samples of interest.

Extract all samples under GSE130636 that were sequenced on platform GPL20301 (use platform = NULL for all platforms):

GSE130636.runs <- ExtractRun(acce = "GSE130636", platform = "GPL20301")
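
Alternatively, if you already know the runs of interest, you can build the dataframe yourself. A minimal sketch (the gsm/run column names mirror the gsm and run number structure described below and are an assumption; the GSM accessions are placeholders):

my.runs <- data.frame(
  gsm = c("GSM0000001", "GSM0000002"), # placeholder GSM accessions
  run = c("SRR9004346", "SRR9004351")  # run accessions used in the test below
)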

Download sra

With a dataframe containing the GSM and run numbers, scfetch will download the sra files using prefetch. The returned result is a dataframe containing the failed runs. If it is not NULL, users can re-run DownloadSRA by setting gsm.df to the returned result.

# a small test
GSE130636.runs <- GSE130636.runs[GSE130636.runs$run %in% c("SRR9004346", "SRR9004351"), ]
# download, you may need to set prefetch.path
out.folder <- tempdir()
GSE130636.down <- DownloadSRA(
  gsm.df = GSE130636.runs,
  out.folder = out.folder
)
# GSE130636.down is null or dataframe contains failed runs
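
A minimal retry sketch based on that behaviour:

# re-run only the failed runs until everything is downloaded
if (!is.null(GSE130636.down)) {
  GSE130636.down <- DownloadSRA(gsm.df = GSE130636.down, out.folder = out.folder)
}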

The out.folder structure will be: gsm_number/run_number.


Split fastq

After obtaining the sra files, scfetch provides the function SplitSRA to split sra files into fastq files using parallel-fastq-dump (parallel, fastest, gzipped output), fasterq-dump (parallel, fast, but unzipped output) or fastq-dump (slowest, gzipped output).

For fastqs generated with 10x Genomics, SplitSRA can identify the read1, read2 and index files and rename read1 and read2 to the 10x required format (sample1_S1_L001_R1_001.fastq.gz and sample1_S1_L001_R2_001.fastq.gz). In detail, a file with read length 26 or 28 is considered read1, files with read length 8 or 10 are considered index files, and the remaining file is considered read2. The read length rules come from Sequencing Requirements for Single Cell 3' and Sequencing Requirements for Single Cell V(D)J.

The returned result is a vector of the sra files that failed to split. If it is not NULL, users can re-run SplitSRA by setting sra.path to the returned result.

# parallel-fastq-dump requires sratools.path
# you may need to set split.cmd.path and sratools.path
sra.folder <- tempdir()
GSE130636.split <- SplitSRA(
  sra.folder = sra.folder,
  fastq.type = "10x", split.cmd.threads = 4
)

Download bam

Prepare run number

scfetch can extract sample information and run numbers from a GEO accession number; alternatively, users can provide a dataframe containing the run numbers of the samples of interest.

GSE138266.runs <- ExtractRun(acce = "GSE138266", platform = "GPL18573")

Download bam

With a dataframe containing the GSM and run numbers, scfetch provides DownloadBam to download bam files using prefetch. It supports both 10x-generated bam files and normal bam files:

  • 10x-generated bam: bam files generated by 10x software (e.g. CellRanger) contain custom tags that are not kept when using the default parameters of prefetch, so scfetch adds --type TenX to make sure the downloaded bam files retain these tags.
  • normal bam: for normal bam files, DownloadBam first downloads the sra files and then converts them to bam files with sam-dump. In our tests, prefetch + sam-dump is much faster than sam-dump alone (52G sra and 72G bam files):
# # use prefetch to download sra file
# prefetch -X 60G SRR1976036
# # real	117m26.334s
# # user	16m42.062s
# # sys	3m28.295s

# # use sam-dump to convert sra to bam
# time (sam-dump SRR1976036.sra | samtools view -bS - -o SRR1976036.bam)
# # real	536m2.721s
# # user	749m41.421s
# # sys	20m49.069s


# use sam-dump to download bam directly
# time (sam-dump SRR1976036 | samtools view -bS - -o SRR1976036.bam)
# # more than 36hrs only get ~3G bam files, too slow

The returned result is a dataframe containing the failed runs (for normal bam, runs that failed to download the sra files or failed to convert them to bam; for 10x-generated bam, runs that failed to download the bam files). If it is not NULL, users can re-run DownloadBam by setting gsm.df to the returned result. The following is an example of downloading a 10x-generated bam file:

# a small test
GSE138266.runs <- GSE138266.runs[GSE138266.runs$run %in% c("SRR10211566"), ]
# download, you may need to set prefetch.path
out.folder <- tempdir()
GSE138266.down <- DownloadBam(
  gsm.df = GSE138266.runs,
  out.folder = out.folder
)
# GSE138266.down is null or dataframe contains failed runs

The out.folder structure will be: gsm_number/run_number.


Convert bam to fastq

With the downloaded bam files, scfetch provides the function Bam2Fastq to convert bam files to fastq files. For bam files generated by 10x software, Bam2Fastq utilizes the bamtofastq tool developed by 10x Genomics.

The returned result is a vector of the bam files that failed to convert to fastq files. If it is not NULL, users can re-run Bam2Fastq by setting bam.path to the returned result.

bam.folder <- tempdir()
# you may need to set bamtofastq.path and bamtofastq.paras
GSE138266.convert <- Bam2Fastq(
  bam.folder = bam.folder
)
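
Following the same re-run pattern, a minimal retry sketch:

# retry only the bam files that failed to convert to fastq
if (!is.null(GSE138266.convert) && length(GSE138266.convert) > 0) {
  GSE138266.convert <- Bam2Fastq(bam.path = GSE138266.convert)
}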

Download count matrix

scfetch provides functions for users to download count matrices and annotations (e.g. cell type annotation and composition) from GEO and some single-cell databases (e.g. PanglaoDB and UCSC Cell Browser). scfetch also supports loading the downloaded data to Seurat.

The public resources currently supported and the corresponding returned results:

| Resources | URL | Download Type | Returned results |
| --- | --- | --- | --- |
| GEO | https://www.ncbi.nlm.nih.gov/geo/ | count matrix | SeuratObject or count matrix for bulk RNA-seq |
| PanglaoDB | https://panglaodb.se/index.html | count matrix | SeuratObject |
| UCSC Cell Browser | https://cells.ucsc.edu/ | count matrix | SeuratObject |

GEO

GEO is an international public repository that archives and freely distributes microarray, next-generation sequencing, and other forms of high-throughput functional genomics data submitted by the research community. It provides a very convenient way for users to explore and select scRNA-seq datasets of interest.

Extract metadata

scfetch provides ExtractGEOMeta to extract sample metadata, including sample title, source name/tissue, description, cell type, treatment, paper title, paper abstract, organism, protocol, data processing methods, etc.

# extract metadata of specified platform
GSE200257.meta <- ExtractGEOMeta(acce = "GSE200257", platform = "GPL24676")
# set VROOM_CONNECTION_SIZE to avoid error: Error: The size of the connection buffer (786432) was not large enough
Sys.setenv("VROOM_CONNECTION_SIZE" = 131072 * 60)
# extract metadata of all platforms
GSE94820.meta <- ExtractGEOMeta(acce = "GSE94820", platform = NULL)

Download matrix and load to Seurat

After manually checking the extracted metadata, users can download the count matrix and load it into Seurat with ParseGEO.

For the count matrix, ParseGEO supports downloading it from supplementary files or extracting it from the ExpressionSet; users can control the source with down.supp or let ParseGEO detect it automatically (ParseGEO first tries to extract the count matrix from the ExpressionSet; if that matrix is NULL or contains non-integer values, it downloads the supplementary files instead). Since the supplementary files come in two main types, a single count matrix file containing all cells or CellRanger-style output (barcode, matrix, feature/gene), users are required to specify the type with supp.type.

With the count matrix, ParseGEO will load the matrix into Seurat automatically. If multiple samples are available, users can choose to merge the SeuratObjects with merge.

# for cellranger output
out.folder <- tempdir()
GSE200257.seu <- ParseGEO(
  acce = "GSE200257", platform = NULL, supp.idx = 1, down.supp = TRUE, supp.type = "10x",
  out.folder = out.folder
)
# for count matrix, no need to specify out.folder, download count matrix to tmp folder
GSE94820.seu <- ParseGEO(acce = "GSE94820", platform = NULL, supp.idx = 1, down.supp = TRUE, supp.type = "count")

For bulk RNA-seq, set data.type = "bulk" in ParseGEO; this returns the count matrix.
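
A minimal sketch (the accession is a placeholder and the supplementary-file settings are illustrative assumptions):

# for bulk RNA-seq, ParseGEO returns the count matrix instead of a SeuratObject
bulk.counts <- ParseGEO(
  acce = "GSE000000", platform = NULL, data.type = "bulk", # placeholder accession
  supp.idx = 1, down.supp = TRUE, supp.type = "count"
)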


PanglaoDB

PanglaoDB is a database containing scRNA-seq datasets from mouse and human. Up to now, it holds 5,586,348 cells from 1368 datasets (1063 from Mus musculus and 305 from Homo sapiens). It has well-organized metadata for every dataset, including tissue, protocol, species, number of cells and cell type annotation (computationally identified). Daniel Osorio has developed rPanglaoDB to access PanglaoDB data; the related functions in scfetch are based on rPanglaoDB.

Since PanglaoDB is no longer maintained, scfetch has cached all metadata and cell type compositions and uses these cached data by default to speed up access. Users can access the cached data with PanglaoDBMeta (all metadata) and PanglaoDBComposition (all cell type compositions).
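
For example, a quick look at the cached data:

# cached PanglaoDB metadata and cell type composition shipped with scfetch
head(PanglaoDBMeta)
head(PanglaoDBComposition)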

Summarise attributes

scfetch provides StatDBAttribute to summarize the attributes of PanglaoDB:

# use cached metadata
StatDBAttribute(df = PanglaoDBMeta, filter = c("species", "protocol"), database = "PanglaoDB")

Extract metadata

scfetch provides ExtractPanglaoDBMeta to select datasets of interest by species, protocol, tissue and cell number (the available values of these attributes can be obtained with StatDBAttribute). Users can also choose whether to add cell type annotation to every dataset with show.cell.type.

scfetch uses the cached metadata and cell type composition by default; users can change this by setting local.data = FALSE.

hsa.meta <- ExtractPanglaoDBMeta(
  species = "Homo sapiens", protocol = c("Smart-seq2", "10x chromium"),
  show.cell.type = TRUE, cell.num = c(1000, 2000)
)

Extract cell type composition

scfetch provides ExtractPanglaoDBComposition to extract cell type annotation and composition (cached data are used by default to speed things up; users can change this by setting local.data = FALSE).

hsa.composition <- ExtractPanglaoDBComposition(species = "Homo sapiens", protocol = c("Smart-seq2", "10x chromium"))

Download matrix and load to Seurat

After manually checking the extracted metadata, scfetch provides ParsePanglaoDB to download the count matrix and load it into Seurat. With the available cell type annotation, users can filter out datasets lacking a specified cell type with cell.type. Users can also include/exclude cells expressing specified genes with include.gene/exclude.gene.

With the count matrix, ParsePanglaoDB will load the matrix into Seurat automatically. If multiple datasets are available, users can choose to merge the SeuratObjects with merge.

# small test
hsa.seu <- ParsePanglaoDB(hsa.meta[1:3, ], merge = TRUE)

UCSC Cell Browser

The UCSC Cell Browser is a web-based tool that allows scientists to interactively visualize scRNA-seq datasets. It contains 1040 single-cell datasets from 17 different species and is organized hierarchically, which helps users quickly locate the datasets they are interested in.

Show available datasets

scfetch provides ShowCBDatasets to show all available datasets. Due to the large number of datasets, ShowCBDatasets lets users lazy-load the dataset json files instead of fetching them online every time (which is time-consuming). The lazy load requires users to provide json.folder for saving the json files and to set lazy = TRUE (on the first run, ShowCBDatasets downloads the current json files to json.folder; on subsequent runs with lazy = TRUE, ShowCBDatasets loads the downloaded json files from json.folder). ShowCBDatasets also supports updating the local datasets with update = TRUE.

json.folder <- tempdir()
# first time run, the json files are stored under json.folder
# ucsc.cb.samples = ShowCBDatasets(lazy = TRUE, json.folder = json.folder, update = TRUE)

# second time run, load the downloaded json files
ucsc.cb.samples <- ShowCBDatasets(lazy = TRUE, json.folder = json.folder, update = FALSE)

# always read online
# ucsc.cb.samples = ShowCBDatasets(lazy = FALSE)

The number of datasets and all available species:

# the number of datasets
nrow(ucsc.cb.samples)

# available species
unique(unlist(sapply(unique(gsub(pattern = "\\|parent", replacement = "", x = ucsc.cb.samples$organisms)), function(x) {
  unlist(strsplit(x = x, split = ", "))
})))

Summarise attributes

scfetch provides StatDBAttribute to summarize the attributes of the UCSC Cell Browser:

StatDBAttribute(df = ucsc.cb.samples, filter = c("organism", "organ"), database = "UCSC")

Extract metadata

scfetch provides ExtractCBDatasets to filter the metadata by collection, sub-collection, organ, disease status, organism, project and cell number (the available values of these attributes, except cell number, can be obtained with StatDBAttribute). All attributes except cell number support fuzzy matching with fuzzy.match, which is useful when selecting datasets.

hbb.sample.df <- ExtractCBDatasets(all.samples.df = ucsc.cb.samples, organ = c("brain", "blood"), organism = "Human (H. sapiens)", cell.num = c(1000, 2000))

Extract cell type composition

scfetch provides ExtractCBComposition to extract cell type annotation and composition.

hbb.sample.ct <- ExtractCBComposition(json.folder = json.folder, sample.df = hbb.sample.df)

Load the online datasets to Seurat

After manually checking the extracted metadata, scfetch provides ParseCBDatasets to load the online count matrix into Seurat. All the attributes available in ExtractCBDatasets can also be used here. Please note that ParseCBDatasets loads the online count matrix instead of downloading it locally. If multiple datasets are available, users can choose to merge the SeuratObjects with merge.

hbb.sample.seu <- ParseCBDatasets(sample.df = hbb.sample.df)

Download object

scfetch provides functions to download processed single-cell RNA-seq data from Zenodo, CELLxGENE and the Human Cell Atlas, including RDS, RData, h5ad, h5 and loom objects.

The public resources currently supported and the corresponding returned results:

| Resources | URL | Download Type | Returned results |
| --- | --- | --- | --- |
| Zenodo | https://zenodo.org/ | count matrix, rds, rdata, h5ad, etc. | NULL or failed datasets |
| CELLxGENE | https://cellxgene.cziscience.com/ | rds, h5ad | NULL or failed datasets |
| Human Cell Atlas | https://www.humancellatlas.org/ | rds, rdata, h5, h5ad, loom | NULL or failed projects |

Zenodo

Zenodo contains various types of processed objects, such as SeuratObjects that have been clustered and annotated, or AnnData objects containing results processed with scanpy.

Extract metadata

scfetch provides ExtractZenodoMeta to extract dataset metadata, including the dataset title, description, available files and corresponding md5 checksums. Please note that for datasets under restricted access, the returned dataframe will be empty.

# single doi
zebrafish.df <- ExtractZenodoMeta(doi = "10.5281/zenodo.7243603")

# vector dois
multi.dois <- ExtractZenodoMeta(doi = c("1111", "10.5281/zenodo.7243603", "10.5281/zenodo.7244441"))

Download object

After manually checking the extracted metadata, users can download the specified objects with ParseZenodo. The downloaded objects are controlled by file.ext, and the provided formats should be in lower case (e.g. rds/rdata/h5ad).

The returned result is a dataframe containing the failed objects. If it is not NULL, users can re-run ParseZenodo by setting doi.df to the returned result.

out.folder <- tempdir()
multi.dois.parse <- ParseZenodo(
  doi = c("1111", "10.5281/zenodo.7243603", "10.5281/zenodo.7244441"),
  file.ext = c("rdata", "rds"), out.folder = out.folder
)

CELLxGENE

CELLxGENE is a web server containing 910 single-cell datasets; users can explore, download and upload their own datasets. The datasets provided by CELLxGENE come in two formats: h5ad (AnnData v0.8) and rds (Seurat v4).

Show available datasets

scfetch provides ShowCELLxGENEDatasets to extract dataset metadata, including dataset title, description, contact, organism, ethnicity, sex, tissue, disease, assay, suspension type, cell type, etc.

# all available datasets
all.cellxgene.datasets <- ShowCELLxGENEDatasets()

Summarise attributes

scfetch provides StatDBAttribute to summarize the attributes of CELLxGENE:

StatDBAttribute(df = all.cellxgene.datasets, filter = c("organism", "sex"), database = "CELLxGENE")

Extract metadata

scfetch provides ExtractCELLxGENEMeta to filter the dataset metadata; the available values of these attributes (except cell number) can be obtained with StatDBAttribute:

# human 10x v2 and v3 datasets
human.10x.cellxgene.meta <- ExtractCELLxGENEMeta(
  all.samples.df = all.cellxgene.datasets,
  assay = c("10x 3' v2", "10x 3' v3"), organism = "Homo sapiens"
)

Download object

After manually checking the extracted metadata, users can download the specified objects with ParseCELLxGENE. The downloaded objects are controlled by file.ext (choose from "rds" and "h5ad").

The returned result is a dataframe containing the failed datasets. If it is not NULL, users can re-run ParseCELLxGENE by setting meta to the returned result.

out.folder <- tempdir()
cellxgene.down <- ParseCELLxGENE(
  meta = human.10x.cellxgene.meta[1:5, ], file.ext = "rds",
  out.folder = out.folder
)

Format conversion

Many tools have been developed to process scRNA-seq data, such as Scanpy, Seurat, scran and Monocle. These tools have their own objects, such as AnnData for Scanpy, SeuratObject for Seurat, SingleCellExperiment for scran and CellDataSet/cell_data_set for Monocle2/Monocle3. There are also file formats designed for large omics datasets, such as loom. A comprehensive scRNA-seq analysis usually combines multiple tools, which means objects need to be converted frequently. To facilitate such analyses, scfetch provides multiple functions for object conversion between widely used tools and formats. The object conversion implemented in scfetch has two main advantages:

  • One-step conversion between different objects: there is no conversion through intermediate objects, which prevents unnecessary information loss.
  • As far as possible, the tools used for object conversion are developed by the team behind the source/destination object. For example, we use SeuratDisk to convert a SeuratObject to loom and zellkonverter to convert between SingleCellExperiment and AnnData. When no such tool exists, we use sceasy.

Test data

# library
library(Seurat) # pbmc_small
library(scRNAseq) # seger

SeuratObject:

# object
pbmc_small

SingleCellExperiment:

seger <- scRNAseq::SegerstolpePancreasData()

Convert SeuratObject to other objects

Here, we will convert a SeuratObject to SingleCellExperiment, CellDataSet/cell_data_set, AnnData and loom.

SeuratObject to SingleCellExperiment

The conversion is performed with functions implemented in Seurat:

sce.obj <- ExportSeurat(seu.obj = pbmc_small, assay = "RNA", to = "SCE")

SeuratObject to CellDataSet/cell_data_set

To CellDataSet (The conversion is performed with functions implemented in Seurat):

# BiocManager::install("monocle") # requires monocle
cds.obj <- ExportSeurat(seu.obj = pbmc_small, assay = "RNA", reduction = "tsne", to = "CellDataSet")

To cell_data_set (The conversion is performed with functions implemented in SeuratWrappers):

# remotes::install_github('cole-trapnell-lab/monocle3') # requires monocle3
cds3.obj <- ExportSeurat(seu.obj = pbmc_small, assay = "RNA", to = "cell_data_set")

SeuratObject to AnnData

AnnData is a Python object; reticulate is used to communicate between Python and R. Users should create a Python environment containing the anndata package and specify the environment path with conda.path to ensure that exactly this environment is used.

The conversion is performed with functions implemented in sceasy:

# write the AnnData object to a temporary h5ad file
anndata.file <- tempfile(pattern = "pbmc_small_", fileext = ".h5ad")
# you may need to set conda.path
ExportSeurat(
  seu.obj = pbmc_small, assay = "RNA", to = "AnnData",
  anndata.file = anndata.file
)

SeuratObject to loom

The conversion is performed with functions implemented in SeuratDisk:

loom.file <- tempfile(pattern = "pbmc_small_", fileext = ".loom")
ExportSeurat(
  seu.obj = pbmc_small, assay = "RNA", to = "loom",
  loom.file = loom.file
)

Convert other objects to SeuratObject

SingleCellExperiment to SeuratObject

The conversion is performed with functions implemented in Seurat:

seu.obj.sce <- ImportSeurat(obj = sce.obj, from = "SCE", count.assay = "counts", data.assay = "logcounts", assay = "RNA")

CellDataSet/cell_data_set to SeuratObject

CellDataSet to SeuratObject (The conversion is performed with functions implemented in Seurat):

seu.obj.cds <- ImportSeurat(obj = cds.obj, from = "CellDataSet", count.assay = "counts", assay = "RNA")

cell_data_set to SeuratObject (The conversion is performed with functions implemented in Seurat):

seu.obj.cds3 <- ImportSeurat(obj = cds3.obj, from = "cell_data_set", count.assay = "counts", data.assay = "logcounts", assay = "RNA")

AnnData to SeuratObject

AnnData is a Python object; reticulate is used to communicate between Python and R. Users should create a Python environment containing the anndata package and specify the environment path with conda.path to ensure that exactly this environment is used.

The conversion is performed with functions implemented in sceasy:

# you may need to set conda.path
seu.obj.h5ad <- ImportSeurat(
  anndata.file = anndata.file, from = "AnnData", assay = "RNA"
)

loom to SeuratObject

The conversion is performed with functions implemented in SeuratDisk and Seurat:

# loom will lose reduction
seu.obj.loom <- ImportSeurat(loom.file = loom.file, from = "loom")

Conversion between SingleCellExperiment and AnnData

The conversion is performed with functions implemented in zellkonverter.

SingleCellExperiment to AnnData

# write the converted AnnData object to a temporary h5ad file
seger.anndata.file <- tempfile(pattern = "seger_", fileext = ".h5ad")
SCEAnnData(
  from = "SingleCellExperiment", to = "AnnData", sce = seger, X_name = "counts",
  anndata.file = seger.anndata.file
)

AnnData to SingleCellExperiment

seger.anndata <- SCEAnnData(
  from = "AnnData", to = "SingleCellExperiment",
  anndata.file = seger.anndata.file
)

Conversion between SingleCellExperiment and loom

The conversion is performed with functions implemented in LoomExperiment.

SingleCellExperiment to loom

# write the converted loom file to a temporary location
seger.loom.file <- tempfile(pattern = "seger_", fileext = ".loom")
SCELoom(
  from = "SingleCellExperiment", to = "loom", sce = seger,
  loom.file = seger.loom.file
)

loom to SingleCellExperiment

seger.loom <- SCELoom(
  from = "loom", to = "SingleCellExperiment",
  loom.file = seger.loom.file
)

Contact

For any question, feature request or bug report, please send an email to songyb0519@gmail.com.


Code of Conduct

Please note that the scfetch project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.