Using scran to perform basic analyses of single-cell RNA-seq data

Package: scran
Author: Aaron Lun (alun@wehi.edu.au)
Compilation date: 2017-07-17

Introduction

Single-cell RNA sequencing (scRNA-seq) is a widely used technique for profiling gene expression in individual cells. This allows molecular biology to be studied at a resolution that cannot be matched by bulk sequencing of cell populations. Common analyses include detection of highly variable and correlated genes across cells, or assignment of cells to cell cycle phases. Cell-specific biases also need to be normalized in a manner that is robust to low counts and technical noise. The scran package implements methods to perform these analyses. This vignette provides brief descriptions of these methods and some toy examples to demonstrate their use.

Setting up the data

We start off with a count matrix where each row is a gene and each column is a cell. These can be obtained by mapping read sequences to a reference genome, and then counting the number of reads mapped to the exons of each gene. (See, for example, the Rsubread package to do both of these tasks.) Alternatively, pseudo-alignment methods can be used to quantify the abundance of each transcript in each cell. For simplicity, though, we'll just simulate some counts here from a negative binomial distribution.

ngenes <- 10000
ncells <- 200
mu <- 2^runif(ngenes, 3, 10)
gene.counts <- matrix(rnbinom(ngenes*ncells, mu=mu, size=2), nrow=ngenes)

We add some arbitrary Ensembl gene IDs to give the impression that this is real (mouse) data.

library(org.Mm.eg.db)
all.ensembl <- unique(toTable(org.Mm.egENSEMBL)$ensembl_id)
rownames(gene.counts) <- sample(all.ensembl, ngenes)

We also have a set of counts for spike-in transcripts. These are appended to the counts for the endogenous genes. In practice, the reads should have been mapped to the spike-in transcripts by including the spike-in sequences in the genome index.

nspikes <- 100
ncells <- 200
mu <- 2^runif(nspikes, 3, 10)
spike.counts <- matrix(rnbinom(nspikes*ncells, mu=mu, size=2), nrow=nspikes)
rownames(spike.counts) <- paste0("ERCC-", seq_len(nspikes))
all.counts <- rbind(gene.counts, spike.counts)

Finally, we construct a SCESet object to store all of the data. We also indicate which rows correspond to spike-in transcripts. This is done through the calculateQCMetrics method from scater, which takes a named list of sets of control genes. We indicate which sets of controls are spike-ins using the setSpike setter function. (In this case, there is only one control set, so the process may seem more complicated than necessary. The usefulness of this setup becomes more obvious when multiple control sets are present.) This information can be easily extracted later on using the isSpike, spikes and whichSpike methods.

library(scran)
sce <- newSCESet(countData=data.frame(all.counts))
sce <- calculateQCMetrics(sce, feature_controls=list(
    MySpikes=rep(c(FALSE, TRUE), c(ngenes, nspikes))
))
setSpike(sce) <- "MySpikes"
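
As a quick check, the spike-in annotation can be retrieved with the accessors mentioned above. For instance, isSpike should return a logical vector marking the spike-in rows:

# Logical vector with one entry per row; should contain 'nspikes' TRUE values.
summary(isSpike(sce))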

This is simulated data, so we assume that quality control has already been applied to remove low-quality cells or low-abundance genes. Check out the scater and cellity packages for more details. Also see https://www.bioconductor.org/help/workflows/simpleSingleCell/ for a workflow where all these steps are used in real data analyses.

Cell cycle phase assignment

We use a pre-defined classifier to assign cells into their cell cycle phases. This classifier was constructed from a training data set by identifying pairs of genes where the difference in expression within each pair changed sign across phases. Thus, by examining the sign of the difference in test data, the phase to which the cell belongs can be identified. Classifiers for human and mouse data are provided with the package – for other systems, classifiers can be constructed from a training set using the sandbag function.

mm.pairs <- readRDS(system.file("exdata", "mouse_cycle_markers.rds", package="scran"))

The classification itself is done using the cyclone function, given the count data and the trained classifier. This yields a number of scores representing the consistency of the signs with each phase.

assigned <- cyclone(sce, pairs=mm.pairs)
head(assigned$scores)
##      G1     S   G2M
## 1 0.441 0.468 0.758
## 2 0.451 0.297 0.732
## 3 0.444 0.159 0.894
## 4 0.590 0.337 0.774
## 5 0.620 0.338 0.615
## 6 0.252 0.475 0.870

Cells are considered to be in G1 phase if the G1 score is above 0.5 and the G2/M score is below 0.5; in G2/M phase if the G2/M score is above 0.5 and the G1 score is below 0.5; in S phase if both scores are below 0.5; and of unknown phase if both scores are above 0.5. Despite the availability of an S score, it tends to be more accurate to assign cells based on the G1 and G2/M scores only.

table(assigned$phases)
## 
##  G1 G2M   S 
##  23 176   1
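
For illustration, the decision rule described above can be applied directly to the scores with base R. This is only a sketch; in practice, the assignments reported in assigned$phases should be used, as they may include additional tie-breaking.

# Manually applying the G1/G2M decision rule (sketch only).
g1 <- assigned$scores$G1
g2m <- assigned$scores$G2M
manual <- ifelse(g1 > 0.5 & g2m < 0.5, "G1",
          ifelse(g2m > 0.5 & g1 < 0.5, "G2M",
          ifelse(g1 < 0.5 & g2m < 0.5, "S", "unknown")))
table(manual)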

Note that it is generally best practice to perform cell cycle phase assignment before filtering out low-abundance genes, as filtering may remove genes that the classifier relies on.

Normalizing cell-specific biases

Based on the gene counts

Cell-specific biases are normalized using the computeSumFactors method, which implements the deconvolution strategy for scRNA-seq normalization. This computes size factors that are used to scale the counts in each cell. The assumption is that most genes are not differentially expressed (DE) between cells, such that any differences in expression across the majority of genes represent technical bias that should be removed.

sce <- computeSumFactors(sce)
summary(sizeFactors(sce))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.9729  0.9941  1.0004  1.0000  1.0050  1.0282

For larger data sets, clustering should be performed with the quickCluster function before normalization. Briefly, cells are grouped into clusters of similar expression; normalization is applied within each cluster to compute size factors for each cell; and the factors are rescaled by normalization between clusters. This reduces the risk of violating the above assumption when many genes are DE between clusters in a heterogeneous population.

larger.sce <- newSCESet(countData=data.frame(cbind(all.counts, all.counts, all.counts)))
clusters <- quickCluster(larger.sce)
larger.sce <- computeSumFactors(larger.sce, cluster=clusters)

Based on the spike-in counts

An alternative approach is to normalize based on the spike-in counts. The idea is that the same quantity of spike-in RNA was added to each cell prior to library preparation. Size factors are computed to scale the counts such that the total coverage of the spike-in transcripts is equal across cells. The main practical difference is that spike-in normalization preserves differences in total RNA content between cells, whereas computeSumFactors and other non-DE methods do not.

sce2 <- computeSpikeFactors(sce)
summary(sizeFactors(sce2))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.7457  0.9209  0.9938  1.0000  1.0717  1.2835

Even if we decide to use the deconvolution size factors, it is strongly recommended to compute a separate set of size factors for the spike-ins. This is because the spike-ins are not affected by the total mRNA content of each cell. Using the deconvolution size factors will over-normalize the spike-in counts, whereas the spike-in size factors are more appropriate. To obtain the latter without overwriting the former, we set general.use=FALSE in our call to computeSpikeFactors. This means that the spike-in-based size factors will be computed and stored in the SCESet object, but will only be used for the spike-in transcripts. (Obviously, if the spike-in size factors were already being used for normalization, e.g., in sce2, then this extra step is unnecessary.)

sce <- computeSpikeFactors(sce, general.use=FALSE)

Computing normalized expression values

Normalized expression values are calculated using the normalize method from scater. This will use the deconvolution size factors for the endogenous genes, and the spike-in-based size factors for the spike-in transcripts. Each expression value can be interpreted as a log-transformed “normalized count”, and can be used in downstream applications like clustering or dimensionality reduction.

sce <- normalize(sce)
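
For downstream use, the normalized log-expression matrix can be retrieved with the standard exprs accessor (assuming, as in this version of scater, that normalize stores the values in that slot):

# Matrix of log-transformed normalized expression values, one row per feature.
dim(exprs(sce))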

Detecting highly variable genes

Highly variable genes (HVGs) are detected by decomposing the total variance of each gene into its biological and technical components. This avoids prioritizing low-abundance genes that have large variances due to technical noise. First, we fit a mean-variance trend to the normalized log-expression values with trendVar. By default, this is done using only the spike-in transcripts, as these should only exhibit technical noise.

fit <- trendVar(sce)

The fitted value of the trend is used as an estimate of the technical component. We subtract the fitted value from the total variance to obtain the biological component for each gene. HVGs are defined as the top set of genes with the largest biological components.

decomp <- decomposeVar(sce, fit)
top.hvgs <- order(decomp$bio, decreasing=TRUE)
head(decomp[top.hvgs,])
##                        mean    total       bio     tech      p.value
## ENSMUSG00000041961 7.541418 2.107359 0.7642261 1.343133 5.059379e-07
## ENSMUSG00000108654 8.282958 2.031944 0.6779703 1.353973 6.068966e-06
## ENSMUSG00000008892 6.844954 2.031455 0.6566542 1.374801 1.347917e-05
## ENSMUSG00000097437 7.331400 1.999182 0.6422291 1.356953 1.562552e-05
## ENSMUSG00000030173 7.523211 1.960417 0.6161856 1.344232 2.576489e-05
## ENSMUSG00000045991 8.183682 1.938980 0.5881451 1.350835 5.472659e-05
##                            FDR
## ENSMUSG00000041961 0.005059379
## ENSMUSG00000108654 0.030344828
## ENSMUSG00000008892 0.039063812
## ENSMUSG00000097437 0.039063812
## ENSMUSG00000030173 0.051529775
## ENSMUSG00000045991 0.087711450

We can examine this in more detail by constructing a mean-variance plot. Here, the black points represent the endogenous genes; the red points represent spike-in transcripts; and the red line represents the mean-variance trend fitted to the spike-ins.

plot(decomp$mean, decomp$total, xlab="Mean log-expression", ylab="Variance")
o <- order(decomp$mean)
lines(decomp$mean[o], decomp$tech[o], col="red", lwd=2)
points(fit$mean, fit$var, col="red", pch=16)

(Figure: mean-variance plot of the normalized log-expression values, with endogenous genes in black, spike-in transcripts in red, and the fitted technical trend as a red line.)

If spike-ins are absent or of poor quality, an alternative is to fit the trend to the gene variances directly with use.spikes=FALSE. This assumes that technical noise is the major contributor to the variance of most genes in the data set, such that the trend still represents the technical component. The resulting fit can then be used in decomposeVar as described above.

alt.fit <- trendVar(sce, use.spikes=FALSE) 
alt.decomp <- decomposeVar(sce, alt.fit)

If the data set already contains some uninteresting substructure (e.g., batch effects), we can block on this by setting the design argument in trendVar. This ensures that the substructure does not inflate the variance estimates. For example, if the cells were prepared in two separate batches, we could construct a design matrix incorporating this information with model.matrix and pass it to trendVar. The same design will also be used in decomposeVar.

batch <- rep(c("1", "2"), each=100)
design <- model.matrix(~batch)
alt.fit2 <- trendVar(sce, design=design)
alt.decomp2 <- decomposeVar(sce, alt.fit2)

Alternative approaches to identifying HVGs are implemented in the DM and technicalCV2 functions. These are based on the coefficient of variation for count data, which provides more power for rare subpopulations but is also more sensitive to outliers. Also see the improvedCV2 function, which is intended as a more stable counterpart of technicalCV2.
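
A rough sketch of the CV2-based approach is shown below. It assumes that the SCESet method accepts the name of the spike-in set via a spike.type argument; check ?technicalCV2 for the exact interface in your version.

# Fit a technical trend to the CV2 of the spike-ins and test each gene against it (sketch only).
cv2.out <- technicalCV2(sce, spike.type="MySpikes")
head(cv2.out)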

Detecting correlated genes

Another useful procedure is to identify significant correlations between pairs of HVGs. The idea is to distinguish between HVGs caused by random stochasticity and those driving systematic heterogeneity, e.g., between subpopulations. Correlations are computed in the correlatePairs method using a slightly modified version of Spearman's rho. Testing is performed against the null hypothesis of independent genes, using a permutation method in correlateNull to construct a null distribution.

null.dist <- correlateNull(ncol(sce))
# Only using the first 200 genes as a demonstration.
cor.pairs <- correlatePairs(sce[top.hvgs[1:200],], null.dist=null.dist)
head(cor.pairs)
##                gene1              gene2        rho      p.value       FDR
## 1 ENSMUSG00000000326 ENSMUSG00000020435 -0.3036121 2.199998e-05 0.4290656
## 2 ENSMUSG00000031433 ENSMUSG00000063715  0.2795710 6.799993e-05 0.5460835
## 3 ENSMUSG00000035051 ENSMUSG00000028619 -0.2732108 9.799990e-05 0.5460835
## 4 ENSMUSG00000032264 ENSMUSG00000073973  0.2715023 1.119999e-04 0.5460835
## 5 ENSMUSG00000038370 ENSMUSG00000038797 -0.2531133 2.839997e-04 0.8458721
## 6 ENSMUSG00000061080 ENSMUSG00000038481 -0.2477072 3.899996e-04 0.8458721
##   limited
## 1   FALSE
## 2   FALSE
## 3   FALSE
## 4   FALSE
## 5   FALSE
## 6   FALSE

As with variance estimation, if uninteresting substructure is present, this should be blocked on using the design argument in both correlateNull and correlatePairs.

null.dist2 <- correlateNull(design=design, iter=1e5) # fewer iterations, to speed it up.
cor.pairs2 <- correlatePairs(sce[top.hvgs[1:200],], null.dist=null.dist2, design=design)

The pairs can be used to choose marker genes for experimental validation, or to construct gene-gene association networks. In other situations, the pairs themselves may not be of direct interest; rather, we just want to know whether a gene is correlated with any other gene. This is often the case when selecting a set of correlated HVGs for downstream steps like clustering or dimensionality reduction. To do so, we set per.gene=TRUE to compute a single set of statistics for each gene, rather than for each pair.

cor.genes <- correlatePairs(sce[top.hvgs[1:200],], null.dist=null.dist, per.gene=TRUE)

Significant correlations are defined at a false discovery rate (FDR) threshold of, e.g., 5%. Note that the p-values are calculated by permutation and will have a lower bound. If there are insufficient permutation iterations, a warning will be issued suggesting that more iterations should be performed.
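
For example, the significant pairs at a 5% FDR can be extracted directly from the output above:

# Keep only the gene pairs that are significantly correlated at a 5% FDR.
sig.pairs <- cor.pairs[cor.pairs$FDR <= 0.05,]
nrow(sig.pairs)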

Converting to other formats

The SCESet object can be easily converted into other formats using the convertTo method. This allows analyses to be performed using other pipelines and packages. For example, if DE analyses were to be performed using edgeR, the count data in sce could be used to construct a DGEList.

y <- convertTo(sce, type="edgeR")

By default (i.e., with get.spikes=FALSE), rows corresponding to spike-in transcripts are dropped. As such, the rows of y may not correspond directly to the rows of sce, so users should match by row name to ensure correct cross-referencing between objects. Normalization factors are also automatically computed from the size factors.
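
A simple way to do this is to match on the feature names, here using the featureNames accessor from Biobase for the SCESet:

# Map each row of the DGEList back to the corresponding row of the SCESet.
m <- match(rownames(y), featureNames(sce))
stopifnot(!any(is.na(m)))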

Much the same conversion strategy applies to the other supported formats. DE analyses can be performed using DESeq2 by converting the object to a DESeqDataSet, and cells can be ordered on pseudotime with monocle by converting the object to a CellDataSet (in this case, normalized unlogged expression values are stored).
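
For example, assuming these type values are accepted as suggested above and that the corresponding packages are installed:

# Conversions for a DESeq2-based DE analysis and a monocle-based pseudotime analysis.
dds <- convertTo(sce, type="DESeq2")
cds <- convertTo(sce, type="monocle")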

Graphical interaction with the data

scran also provides several functions for graphical exploration of scRNA-seq data; these generate and load Shiny apps for interactive data exploration.

Summary

This vignette describes the main functions in the scran package for basic analysis of single-cell RNA-seq data. We cover normalization, cell cycle phase assignment, HVG detection and correlation testing. Conversion to other formats can also be performed in preparation for analyses with other packages in the Bioconductor project. Further information can be obtained by examining the documentation for each function (e.g., ?convertTo); reading the workflow at https://www.bioconductor.org/help/workflows/simpleSingleCell; or asking for help on the Bioconductor support site (please read the posting guide beforehand).

Session information

sessionInfo()
## R version 3.4.1 (2017-06-30)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.2 LTS
## 
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.5-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.5-bioc/R/lib/libRlapack.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats4    parallel  stats     graphics  grDevices utils     datasets 
## [8] methods   base     
## 
## other attached packages:
##  [1] org.Mm.eg.db_3.4.1   AnnotationDbi_1.38.1 IRanges_2.10.2      
##  [4] S4Vectors_0.14.3     scran_1.4.5          scater_1.4.0        
##  [7] ggplot2_2.2.1        Biobase_2.36.2       BiocGenerics_0.22.0 
## [10] BiocParallel_1.10.1  BiocStyle_2.4.0      knitr_1.16          
## 
## loaded via a namespace (and not attached):
##  [1] viridis_0.4.0         dynamicTreeCut_1.63-1 edgeR_3.18.1         
##  [4] splines_3.4.1         bit64_0.9-7           viridisLite_0.2.0    
##  [7] shiny_1.0.3           assertthat_0.2.0      statmod_1.4.30       
## [10] highr_0.6             blob_1.1.0            vipor_0.4.5          
## [13] yaml_2.1.14           RSQLite_2.0           backports_1.1.0      
## [16] lattice_0.20-35       glue_1.1.1            limma_3.32.3         
## [19] digest_0.6.12         colorspace_1.3-2      htmltools_0.3.6      
## [22] httpuv_1.3.5          Matrix_1.2-10         plyr_1.8.4           
## [25] XML_3.98-1.9          pkgconfig_2.0.1       biomaRt_2.32.1       
## [28] zlibbioc_1.22.0       xtable_1.8-2          scales_0.4.1         
## [31] tibble_1.3.3          DT_0.2                lazyeval_0.2.0       
## [34] magrittr_1.5          mime_0.5              memoise_1.1.0        
## [37] evaluate_0.10.1       FNN_1.1               beeswarm_0.2.3       
## [40] shinydashboard_0.6.1  tools_3.4.1           data.table_1.10.4    
## [43] matrixStats_0.52.2    stringr_1.2.0         munsell_0.4.3        
## [46] locfit_1.5-9.1        bindrcpp_0.2          compiler_3.4.1       
## [49] rlang_0.1.1           rhdf5_2.20.0          grid_3.4.1           
## [52] RCurl_1.95-4.8        tximport_1.4.0        htmlwidgets_0.9      
## [55] rjson_0.2.15          igraph_1.1.1          bitops_1.0-6         
## [58] rmarkdown_1.6         gtable_0.2.0          DBI_0.7              
## [61] reshape2_1.4.2        R6_2.2.2              gridExtra_2.2.1      
## [64] zoo_1.8-0             dplyr_0.7.1           bit_1.1-12           
## [67] bindr_0.1             rprojroot_1.2         stringi_1.1.5        
## [70] ggbeeswarm_0.5.3      Rcpp_0.12.12