Package: scran
Author: Aaron Lun (alun@wehi.edu.au)
Compilation date: 2017-01-18
Single-cell RNA sequencing (scRNA-seq) is a widely used technique for profiling gene expression in individual cells. This allows molecular biology to be studied at a resolution that cannot be matched by bulk sequencing of cell populations. Common analyses include detection of highly variable and correlated genes across cells, or assignment of cells to cell cycle phases. Cell-specific biases also need to be normalized in a manner that is robust to low counts and technical noise. The scran package implements methods to perform these analyses. This vignette provides a brief description of each method and some toy examples for how they are used.
We start off with a count matrix where each row is a gene and each column is a cell. These should have been obtained by mapping read sequences to a reference genome, and then counting the number of reads mapped to the exons of each gene. (See, for example, the Rsubread package to do both of these tasks.) For simplicity, though, we just simulate some counts here from a negative binomial distribution.
ngenes <- 10000
ncells <- 200
mu <- 2^runif(ngenes, 3, 10)
gene.counts <- matrix(rnbinom(ngenes*ncells, mu=mu, size=2), nrow=ngenes)
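For real data, the mapping-and-counting step mentioned above might look something like the sketch below using Rsubread; the file names and index path are hypothetical, and this chunk is not meant to be run here.
library(Rsubread)
# Build an alignment index from a reference genome FASTA (hypothetical file).
buildindex(basename="mm10_index", reference="mm10.fa")
# Align the reads from one cell's FASTQ file (hypothetical file names).
align(index="mm10_index", readfile1="cell1.fastq.gz", output_file="cell1.bam")
# Count reads overlapping the exons of each gene, using the built-in mm10 annotation.
fc <- featureCounts(files="cell1.bam", annot.inbuilt="mm10")
head(fc$counts)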
We add some arbitrary Ensembl gene IDs to give the impression that this is real (mouse) data.
library(org.Mm.eg.db)
all.ensembl <- unique(toTable(org.Mm.egENSEMBL)$ensembl_id)
rownames(gene.counts) <- sample(all.ensembl, ngenes)
We also have a set of counts for spike-in transcripts. These are appended to the counts for the endogenous genes. In practice, the reads should have been mapped to the spike-in transcripts by including the spike-in sequences in the genome index.
nspikes <- 100
ncells <- 200
mu <- 2^runif(nspikes, 3, 10)
spike.counts <- matrix(rnbinom(nspikes*ncells, mu=mu, size=2), nrow=nspikes)
rownames(spike.counts) <- paste0("ERCC-", seq_len(nspikes))
all.counts <- rbind(gene.counts, spike.counts)
Finally, we construct a SCESet object to store all of the data.
We also indicate which rows correspond to spike-in transcripts.
This is done through the calculateQCMetrics method from scater, which takes a named list of sets of control genes.
We can then easily indicate which sets of controls are spike-ins using the isSpike setter function.
(In this case, there is only one control set, so the process may seem more complicated than necessary.
The usefulness of this setup becomes more obvious when multiple control sets are present.)
This information can be easily extracted using the isSpike or spikes methods.
library(scran)
sce <- newSCESet(countData=data.frame(all.counts))
sce <- calculateQCMetrics(sce, feature_controls=list(
MySpikes=rep(c(FALSE, TRUE), c(ngenes, nspikes))
))
isSpike(sce) <- "MySpikes"
This is simulated data, so we assume that quality control has already been applied to remove low-quality cells or low-abundance genes. Check out the scater and cellity packages for more details.
Cell-specific biases can be normalized using the computeSumFactors method, which implements the deconvolution strategy for scRNA-seq normalization.
This computes size factors that are used to scale the counts in each cell.
The assumption is that most genes are not differentially expressed (DE) between cells, such that any differences in expression across the majority of genes represents some technical bias that should be removed.
sce <- computeSumFactors(sce)
summary(sizeFactors(sce))
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.9680 0.9926 1.0010 1.0000 1.0080 1.0370
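To make the role of the size factors explicit, the scaling can be applied by hand; this is only a minimal sketch, as the normalize call used later performs the equivalent operation on a log scale.
# Divide the counts in each cell by that cell's size factor
# to obtain (unlogged) normalized counts.
norm.counts <- t(t(counts(sce))/sizeFactors(sce))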
For larger data sets, clustering can be performed with the quickCluster function before normalization.
Briefly, cells are grouped into clusters of similar expression; normalization is applied within each cluster to compute size factors for each cell; and the factors are rescaled by normalization between clusters.
This reduces the risk of violating the above assumption when many genes are DE across a heterogeneous population.
larger.sce <- newSCESet(countData=data.frame(cbind(all.counts, all.counts, all.counts)))
clusters <- quickCluster(larger.sce)
larger.sce <- computeSumFactors(larger.sce, cluster=clusters)
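The cluster assignments returned by quickCluster can be inspected directly, e.g., to check that each cluster contains enough cells for stable normalization.
table(clusters)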
An alternative approach is to normalize based on the spike-in counts.
The idea is that the same quantity of spike-in RNA was added to each cell prior to library preparation.
Size factors can then be computed to scale the counts such that the total coverage of the spike-in transcripts is equal across cells.
The main practical difference is that spike-in normalization preserves differences in total RNA content between cells, whereas computeSumFactors and other non-DE methods do not.
sce2 <- computeSpikeFactors(sce)
summary(sizeFactors(sce2))
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.7457 0.9209 0.9938 1.0000 1.0720 1.2840
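To illustrate the underlying idea (a sketch of the concept, not the function's exact internal code), the spike-in size factors are proportional to the total spike-in coverage of each cell, centred so that they average to unity.
# Total spike-in coverage per cell, rescaled to a mean of one.
spike.totals <- colSums(counts(sce)[isSpike(sce),])
manual.sf <- spike.totals/mean(spike.totals)
summary(manual.sf)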
If we decide to use the deconvolution size factors, it is recommended to compute a separate set of size factors for the spike-ins.
This is because the spike-ins are not affected by total mRNA content.
Using the deconvolution size factors will over-normalize the spike-in counts, whereas the spike-in size factors are more appropriate.
To obtain the latter without overwriting the former, we can set general.use=FALSE in computeSpikeFactors such that the spike-in-based size factors will be computed and stored in the SCESet object, but will only be used by the spike-in transcripts.
(Obviously, if the spike-in size factors were already being used for normalization, e.g., in sce2, then this extra step is unnecessary.)
sce <- computeSpikeFactors(sce, general.use=FALSE)
Normalized expression values can then be calculated using the normalize method from scater.
This will use the deconvolution size factors for the endogenous genes, and the spike-in-based size factors for the spike-in transcripts.
Each expression value can be interpreted as a log-transformed “normalized count”.
sce <- normalize(sce)
We use a pre-defined classifier to assign cells into their cell cycle phases.
This classifier was constructed from a training data set by identifying pairs of genes where the difference in expression within each pair changed sign across phases.
Thus, by examining the sign of the difference in test data, the phase to which the cell belongs can be identified.
Classifiers for human and mouse data are provided with the package – for other systems, classifiers can be constructed from a training set using the sandbag function.
mm.pairs <- readRDS(system.file("exdata", "mouse_cycle_markers.rds", package="scran"))
The classification itself is done using the cyclone function, given the count data and the trained classifier.
This yields a number of scores representing the consistency of the signs with each phase.
assigned <- cyclone(sce, pairs=mm.pairs)
head(assigned$scores)
## G1 S G2M
## 1 0.190 0.829 0.390
## 2 0.150 0.708 0.849
## 3 0.159 0.919 0.380
## 4 0.191 0.895 0.468
## 5 0.090 0.522 0.513
## 6 0.124 0.843 0.465
Cells are classified as being in G1 phase if the G1 score is above 0.5 and the G2/M score is below 0.5; in G2/M phase if the G2/M score is above 0.5 and the G1 score is below 0.5; in S phase if both scores are below 0.5; and as unknown if both scores are above 0.5. Despite the availability of an S score, it tends to be more accurate to assign cells based on the G1 and G2/M scores only.
phase <- rep("S", ncol(sce))
phase[assigned$scores$G1 > 0.5] <- "G1"
phase[assigned$scores$G2M > 0.5] <- "G2M"
phase[assigned$scores$G1 > 0.5 & assigned$scores$G2M > 0.5] <- "unknown"
table(phase)
## phase
## G1 G2M S
## 2 110 88
Highly variable genes (HVGs) are detected by decomposing the total variance of each gene into its biological and technical components.
This avoids prioritizing low-abundance genes that have large variances due to technical noise.
First, we fit a mean-variance trend to the normalized log-expression values with trendVar.
By default, this is done using only the spike-in transcripts, as these should only exhibit technical noise.
fit <- trendVar(sce)
The fitted value of the trend can then be used as an estimate of the technical component. We subtract the fitted value from the total variance to obtain the biological component for each gene. HVGs can be defined as the top set of genes with the largest biological components.
decomp <- decomposeVar(sce, fit)
top.hvgs <- order(decomp$bio, decreasing=TRUE)
head(decomp[top.hvgs,])
## mean total bio tech p.value
## ENSMUSG00000033526 7.541512 2.105221 0.7729329 1.332288 3.312298e-07
## ENSMUSG00000017144 8.282977 2.026995 0.6839471 1.343048 4.495518e-06
## ENSMUSG00000066828 6.844986 2.028094 0.6643876 1.363706 9.714084e-06
## ENSMUSG00000070109 7.331460 2.000727 0.6547292 1.345998 9.973877e-06
## ENSMUSG00000065528 7.523215 1.956332 0.6229483 1.333383 1.919513e-05
## ENSMUSG00000034295 8.183719 1.939265 0.5993296 1.339935 3.718138e-05
## FDR
## ENSMUSG00000033526 0.003312298
## ENSMUSG00000017144 0.022477588
## ENSMUSG00000066828 0.024934692
## ENSMUSG00000070109 0.024934692
## ENSMUSG00000065528 0.038390256
## ENSMUSG00000034295 0.060077649
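For example, one might take the genes with significantly non-zero biological components at a false discovery rate of 5%; the threshold here is an arbitrary choice for illustration.
# Genes with significant biological variability at FDR <= 5%.
signif.hvgs <- rownames(decomp)[which(decomp$FDR <= 0.05 & decomp$bio > 0)]
length(signif.hvgs)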
This can be examined more visually by constructing a mean-variance plot. Here, the black points represent the endogenous genes; the red points represent spike-in transcripts; and the red line represents the mean-variance trend fitted to the spike-ins.
plot(decomp$mean, decomp$total, xlab="Mean log-expression", ylab="Variance")
o <- order(decomp$mean)
lines(decomp$mean[o], decomp$tech[o], col="red", lwd=2)
points(fit$mean, fit$var, col="red", pch=16)
If spike-ins are absent or of poor quality, an alternative is to fit the trend to the gene variances directly with use.spikes=FALSE.
This assumes that technical noise is the major contributor to the variance of most genes in the data set, such that the trend still represents the technical component.
The resulting fit can then be used in decomposeVar as described above.
alt.fit <- trendVar(sce, use.spikes=FALSE)
alt.decomp <- decomposeVar(sce, alt.fit)
If the data set already contains some uninteresting substructure (e.g., batch effects), we can block on this by setting the design argument in trendVar.
This ensures that the substructure does not inflate the variance estimates.
For example, if the cells were prepared in two separate batches, we could construct a design matrix incorporating this information with model.matrix and pass it to trendVar.
The same design will also be used in decomposeVar.
batch <- rep(c("1", "2"), each=100)
design <- model.matrix(~batch)
alt.fit2 <- trendVar(sce, design=design)
alt.decomp2 <- decomposeVar(sce, alt.fit2)
Alternative approaches to identifying HVGs are implemented in the DM and technicalCV2 functions.
These are based on the coefficient of variation for count data, which provides more power for rare subpopulations but is also more sensitive to outliers.
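A hypothetical call to technicalCV2 is sketched below, applied to the raw counts with the spike-in rows flagged; the exact argument names are assumptions and should be checked against ?technicalCV2.
# Sketch only: CV^2-based decomposition using the spike-ins as the technical reference.
cv2.out <- technicalCV2(counts(sce), is.spike=isSpike(sce))
head(cv2.out)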
The top set of HVGs can be used to identify significant correlations between pairs of genes.
The idea is to distinguish between HVGs caused by random stochasticity, and those that are driving systematic heterogeneity, e.g., between subpopulations.
Correlations are computed in the correlatePairs method using a slightly modified version of Spearman's rho.
Testing is performed against the null hypothesis of independent genes, using a permutation method in correlateNull to construct a null distribution.
null.dist <- correlateNull(ncol(sce))
cor.pairs <- correlatePairs(sce[top.hvgs[1:200],], null.dist=null.dist)
head(cor.pairs)
## gene1 gene2 rho p.value FDR
## 1 ENSMUSG00000028626 ENSMUSG00000028409 -0.3020806 1.799998e-05 0.3510536
## 2 ENSMUSG00000074361 ENSMUSG00000075380 0.2799790 6.999993e-05 0.5460835
## 3 ENSMUSG00000022773 ENSMUSG00000016344 -0.2758479 8.999991e-05 0.5460835
## 4 ENSMUSG00000075085 ENSMUSG00000016664 0.2692162 1.119999e-04 0.5460835
## 5 ENSMUSG00000020386 ENSMUSG00000022285 -0.2544319 2.579997e-04 0.8715815
## 6 ENSMUSG00000025156 ENSMUSG00000071890 -0.2469827 3.799996e-04 0.8715815
As with variance estimation, if uninteresting substructure is present, this should be blocked on using the design argument in both correlateNull and correlatePairs.
null.dist2 <- correlateNull(design=design, iter=1e5) # fewer iterations, to speed it up.
cor.pairs2 <- correlatePairs(sce[top.hvgs[1:200],], null.dist=null.dist2, design=design)
Significant correlations between pairs of genes can be defined at a false discovery rate (FDR) threshold of, e.g., 5%. In this case, no correlations are significant as the counts were randomly generated for each gene. In other situations when correlated gene pairs are present, these can be used to construct heatmaps to verify whether subpopulations exist; for choosing marker genes in experimental validation; and to construct gene-gene association networks.
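As a quick check, the number of gene pairs passing that threshold can be counted directly from the correlatePairs output (none are expected here, given the simulated counts).
# Number of gene pairs with significant correlations at FDR <= 5%.
sum(cor.pairs$FDR <= 0.05)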
The SCESet object can be easily converted into other formats using the convertTo method.
This allows analyses to be performed using other pipelines and packages.
For example, if DE analyses were to be performed using edgeR, the count data in sce could be used to construct a DGEList.
y <- convertTo(sce, type="edgeR")
By default, rows corresponding to spike-in transcripts are dropped when get.spikes=FALSE.
As such, the rows of y may not correspond directly to the rows of sce – users should match by row name to ensure correct cross-referencing between objects.
Normalization factors are also automatically computed from the size factors.
The same conversion strategy roughly applies to the other supported formats.
DE analyses can be performed using DESeq2 by converting the object to a DESeqDataSet.
Cells can be ordered on pseudotime with monocle by converting the object to a CellDataSet (in this case, normalized unlogged expression values are stored).
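These conversions use the same call with a different type argument; the snippet below is a sketch, assuming the DESeq2 and monocle packages are installed.
dds <- convertTo(sce, type="DESeq2")   # DESeqDataSet for DESeq2
cds <- convertTo(sce, type="monocle")  # CellDataSet for monocle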
This vignette describes the main functions in the scran package for basic analysis of single-cell RNA-seq data.
We cover normalization, cell cycle phase assignment, HVG detection and correlation testing.
Conversion to other formats can also be performed in preparation for analyses with other packages in the Bioconductor project.
Further information can be obtained by reading the documentation for each function (e.g., ?convertTo), or asking for help on the Bioconductor support site (please read the posting guide beforehand).
sessionInfo()
## R version 3.3.2 (2016-10-31)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.1 LTS
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats4 parallel stats graphics grDevices utils datasets
## [8] methods base
##
## other attached packages:
## [1] org.Mm.eg.db_3.4.0 AnnotationDbi_1.36.1 IRanges_2.8.1
## [4] S4Vectors_0.12.1 scran_1.2.2 scater_1.2.0
## [7] ggplot2_2.2.1 Biobase_2.34.0 BiocGenerics_0.20.0
## [10] BiocParallel_1.8.1 BiocStyle_2.2.1 knitr_1.15.1
##
## loaded via a namespace (and not attached):
## [1] tximport_1.2.0 statmod_1.4.27 beeswarm_0.2.3
## [4] zoo_1.7-14 locfit_1.5-9.1 reshape2_1.4.2
## [7] lattice_0.20-34 rhdf5_2.18.0 colorspace_1.3-2
## [10] htmltools_0.3.5 XML_3.98-1.5 DBI_0.5-1
## [13] matrixStats_0.51.0 plyr_1.8.4 stringr_1.1.0
## [16] zlibbioc_1.20.0 munsell_0.4.3 gtable_0.2.0
## [19] memoise_1.0.0 evaluate_0.10 biomaRt_2.30.0
## [22] httpuv_1.3.3 vipor_0.4.4 highr_0.6
## [25] Rcpp_0.12.9 xtable_1.8-2 edgeR_3.16.5
## [28] scales_0.4.1 limma_3.30.8 mime_0.5
## [31] gridExtra_2.2.1 rjson_0.2.15 digest_0.6.11
## [34] stringi_1.1.2 dplyr_0.5.0 shiny_1.0.0
## [37] grid_3.3.2 tools_3.3.2 bitops_1.0-6
## [40] magrittr_1.5 lazyeval_0.2.0 RCurl_1.95-4.8
## [43] tibble_1.2 RSQLite_1.1-2 dynamicTreeCut_1.63-1
## [46] Matrix_1.2-7.1 data.table_1.10.0 ggbeeswarm_0.5.3
## [49] shinydashboard_0.5.3 assertthat_0.1 viridis_0.3.4
## [52] R6_2.2.0