Abstract
A basic task in the analysis of count data from RNA-seq is the detection of differentially expressed genes. The count data are presented as a table which reports, for each sample, the number of sequence fragments that have been assigned to each gene. Analogous data also arise for other assay types, including comparative ChIP-Seq, HiC, shRNA screening, and mass spectrometry. An important analysis question is the quantification and statistical inference of systematic changes between conditions, as compared to within-condition variability. The package DESeq2 provides methods to test for differential expression by use of negative binomial generalized linear models; the estimates of dispersion and logarithmic fold changes incorporate data-driven prior distributions. This vignette explains the use of the package and demonstrates typical workflows. An RNA-seq workflow on the Bioconductor website covers similar material to this vignette but at a slower pace, including the generation of count matrices from FASTQ files. DESeq2 package version: 1.18.1
Note: if you use DESeq2 in published research, please cite:
Love, M.I., Huber, W., Anders, S. (2014) Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biology, 15:550. 10.1186/s13059-014-0550-8
Other Bioconductor packages with similar aims are edgeR, limma, DSS, EBSeq, and baySeq.
Here we show the most basic steps for a differential expression analysis. There are a variety of steps upstream of DESeq2 that result in the generation of counts or estimated counts for each sample, which we will discuss in the sections below. This code chunk assumes that you have a count matrix called cts and a table of sample information called coldata. The design indicates how to model the samples, here, that we want to measure the effect of the condition, controlling for batch differences. The two factor variables batch and condition should be columns of coldata.
dds <- DESeqDataSetFromMatrix(countData = cts,
                              colData = coldata,
                              design = ~ batch + condition)
dds <- DESeq(dds)
res <- results(dds, contrast=c("condition","treat","ctrl"))
resultsNames(dds)
res <- lfcShrink(dds, coef=2)
The starting functions used above will be explained in the sections below.
Any and all DESeq2 questions should be posted to the Bioconductor support site, which serves as a searchable knowledge base of questions and answers:
https://support.bioconductor.org
Posting a question and tagging with “DESeq2” will automatically send an alert to the package authors to respond on the support site. See the first question in the list of Frequently Asked Questions (FAQ) for information about how to construct an informative post.
You should not email your question to the package authors, as we will just reply that the question should be posted to the Bioconductor support site.
As input, the DESeq2 package expects count data as obtained, e.g., from RNA-seq or another high-throughput sequencing experiment, in the form of a matrix of integer values. The value in the i-th row and the j-th column of the matrix tells how many reads can be assigned to gene i in sample j. Analogously, for other types of assays, the rows of the matrix might correspond e.g. to binding regions (with ChIP-Seq) or peptide sequences (with quantitative mass spectrometry). We will list methods for obtaining count matrices in the sections below.
The values in the matrix should be un-normalized counts or estimated counts of sequencing reads (for single-end RNA-seq) or fragments (for paired-end RNA-seq). The RNA-seq workflow describes multiple techniques for preparing such count matrices. It is important to provide count matrices as input, as the statistical model of DESeq2 (Love, Huber, and Anders 2014) only holds for raw counts; only the count values allow the measurement precision to be assessed correctly. The DESeq2 model internally corrects for library size, so transformed or normalized values such as counts scaled by library size should not be used as input.
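As a quick sanity check that a matrix contains raw counts rather than normalized values, you can test whether it consists of non-negative integers; a minimal sketch, assuming your count matrix is named cts (note that estimated counts from upstream quantifiers may be non-integer, in which case the tximport path described below handles them appropriately):

all(cts == floor(cts) & cts >= 0)  # TRUE for a matrix of raw counts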
The object class used by the DESeq2 package to store the read counts and the intermediate estimated quantities during statistical analysis is the DESeqDataSet, which will usually be represented in the code here as an object dds.
A technical detail is that the DESeqDataSet class extends the RangedSummarizedExperiment class of the SummarizedExperiment package. The “Ranged” part refers to the fact that the rows of the assay data (here, the counts) can be associated with genomic ranges (the exons of genes). This association facilitates downstream exploration of results, making use of other Bioconductor packages’ range-based functionality (e.g. find the closest ChIP-seq peaks to the differentially expressed genes).
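For instance, once a dds object has been created, any attached ranges can be inspected directly; a minimal sketch (the ranges will be empty unless range information was provided when the object was built):

# genomic ranges associated with the rows of the count matrix
rowRanges(dds)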
A DESeqDataSet object must have an associated design formula. The design formula expresses the variables which will be used in modeling. The formula should be a tilde (~) followed by the variables with plus signs between them (it will be coerced into a formula if it is not already). The design can be changed later; however, all differential analysis steps should then be repeated, as the design formula is used to estimate the dispersions and to estimate the log2 fold changes of the model.
Note: In order to benefit from the default settings of the package, you should put the variable of interest at the end of the formula and make sure the control level is the first level.
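For example, a minimal sketch of setting the control level first, assuming a sample table coldata whose condition column labels control samples "ctrl" (a hypothetical level name):

coldata$condition <- relevel(factor(coldata$condition), ref = "ctrl")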
We will now show four ways of constructing a DESeqDataSet, depending on which pipeline was used upstream of DESeq2 to generate counts or estimated counts:
A newer and recommended pipeline is to use fast transcript abundance quantifiers upstream of DESeq2, and then to create gene-level count matrices for use with DESeq2 by importing the quantification data using the tximport package. This workflow allows users to import transcript abundance estimates from a variety of external software, including methods such as Salmon, Sailfish, kallisto, and RSEM.
Some advantages of using the above methods for transcript abundance estimation are: (i) this approach corrects for potential changes in gene length across samples (e.g. from differential isoform usage) (Trapnell et al. 2013), (ii) some of these methods (Salmon, Sailfish, kallisto) are substantially faster and require less memory and disk usage compared to alignment-based methods that require creation and storage of BAM files, and (iii) it is possible to avoid discarding those fragments that can align to multiple genes with homologous sequence, thus increasing sensitivity (Robert and Watson 2015).
Full details on the motivation and methods for importing transcript level abundance and count estimates, summarizing to gene-level count matrices and producing an offset which corrects for potential changes in average transcript length across samples are described in (Soneson, Love, and Robinson 2015). Note that the tximport-to-DESeq2 approach uses estimated gene counts from the transcript abundance quantifiers, but not normalized counts.
A tutorial on how to use the Salmon software for quantifying transcript abundance can be found here. We recommend using the --gcBias flag, which estimates a correction factor for systematic biases commonly present in RNA-seq data (Love, Hogenesch, and Irizarry 2016; Patro et al. 2017), unless you are certain that your data do not contain such bias.
Here, we demonstrate how to import transcript abundances and construct a gene-level DESeqDataSet object from Salmon quant.sf files, which are stored in the tximportData package. You do not need the tximportData package for your analysis; it is only used here for demonstration.
Note that, instead of locating dir using system.file, a user would typically just provide a path, e.g. /path/to/quant/files. For a typical use, the condition information should already be present as a column of the sample table samples, while here we construct artificial condition labels for demonstration.
library("tximport")
library("readr")
library("tximportData")
dir <- system.file("extdata", package="tximportData")
samples <- read.table(file.path(dir,"samples.txt"), header=TRUE)
samples$condition <- factor(rep(c("A","B"),each=3))
rownames(samples) <- samples$run
samples[,c("pop","center","run","condition")]
## pop center run condition
## ERR188297 TSI UNIGE ERR188297 A
## ERR188088 TSI UNIGE ERR188088 A
## ERR188329 TSI UNIGE ERR188329 A
## ERR188288 TSI UNIGE ERR188288 B
## ERR188021 TSI UNIGE ERR188021 B
## ERR188356 TSI UNIGE ERR188356 B
Next we specify the path to the files using the appropriate columns of samples, and we read in a table that links transcripts to genes for this dataset.
files <- file.path(dir,"salmon", samples$run, "quant.sf")
names(files) <- samples$run
tx2gene <- read.csv(file.path(dir, "tx2gene.csv"))
We import the necessary quantification data for DESeq2 using the tximport function. For further details on the use of tximport, including the construction of the tx2gene table for linking transcripts to genes in your dataset, please refer to the tximport package vignette.
txi <- tximport(files, type="salmon", tx2gene=tx2gene)
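If you need to build a tx2gene table yourself, one option is to derive it from the same annotation used for quantification; a minimal sketch, assuming the GenomicFeatures package and a hypothetical annotation.gtf file matching your transcriptome index:

library("GenomicFeatures")
txdb <- makeTxDbFromGFF("annotation.gtf")  # hypothetical GTF path
k <- keys(txdb, keytype = "TXNAME")
tx2gene <- AnnotationDbi::select(txdb, keys = k,
                                 keytype = "TXNAME", columns = "GENEID")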
Finally, we can construct a DESeqDataSet from the txi object and sample information in samples.
library("DESeq2")
ddsTxi <- DESeqDataSetFromTximport(txi,
                                   colData = samples,
                                   design = ~ condition)
The ddsTxi object here can then be used as dds in the following analysis steps.
Alternatively, the function DESeqDataSetFromMatrix can be used if you already have a matrix of read counts prepared from another source. Another method for quickly producing count matrices from alignment files is the featureCounts function (Liao, Smyth, and Shi 2013) in the Rsubread package. To use DESeqDataSetFromMatrix, the user should provide the counts matrix, the information about the samples (the columns of the count matrix) as a DataFrame or data.frame, and the design formula.
To demonstrate the use of DESeqDataSetFromMatrix, we will read in count data from the pasilla package. We read in a count matrix, which we will name cts, and the sample information table, which we will name coldata. Further below we describe how to extract these objects from, e.g., featureCounts output.
library("pasilla")
pasCts <- system.file("extdata",
"pasilla_gene_counts.tsv",
package="pasilla", mustWork=TRUE)
pasAnno <- system.file("extdata",
"pasilla_sample_annotation.csv",
package="pasilla", mustWork=TRUE)
cts <- as.matrix(read.csv(pasCts,sep="\t",row.names="gene_id"))
coldata <- read.csv(pasAnno, row.names=1)
coldata <- coldata[,c("condition","type")]
We examine the count matrix and column data to see if they are consistent in terms of sample order.
head(cts,2)
## untreated1 untreated2 untreated3 untreated4 treated1 treated2
## FBgn0000003 0 0 0 0 0 0
## FBgn0000008 92 161 76 70 140 88
## treated3
## FBgn0000003 1
## FBgn0000008 70
coldata
## condition type
## treated1fb treated single-read
## treated2fb treated paired-end
## treated3fb treated paired-end
## untreated1fb untreated single-read
## untreated2fb untreated single-read
## untreated3fb untreated paired-end
## untreated4fb untreated paired-end
Note that these are not in the same order with respect to samples!
It is absolutely critical that the columns of the count matrix and the rows of the column data (information about samples) are in the same order. DESeq2 will not make guesses as to which column of the count matrix belongs to which row of the column data; these must be provided to DESeq2 already in consistent order.
Since they are not in consistent order as given, we need to re-arrange one or the other so that they are consistent in terms of sample order (if we do not, later functions would produce an error). We additionally need to chop off the "fb" suffix of the row names of coldata, so the naming is consistent.
rownames(coldata) <- sub("fb", "", rownames(coldata))
all(rownames(coldata) %in% colnames(cts))
## [1] TRUE
all(rownames(coldata) == colnames(cts))
## [1] FALSE
cts <- cts[, rownames(coldata)]
all(rownames(coldata) == colnames(cts))
## [1] TRUE
If you have used the featureCounts function (Liao, Smyth, and Shi 2013) in the Rsubread package, the matrix of read counts can be directly provided from the "counts" element in the list output. The count matrix and column data can typically be read into R from flat files using base R functions such as read.csv or read.delim. For htseq-count files, see the dedicated input function below.
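For reference, a minimal sketch of producing such a matrix with featureCounts, assuming hypothetical BAM files and a GTF annotation:

library("Rsubread")
fc <- featureCounts(files = c("sample1.bam", "sample2.bam"),  # hypothetical BAMs
                    annot.ext = "genes.gtf",                  # hypothetical annotation
                    isGTFAnnotationFile = TRUE)
cts <- fc$counts  # genes x samples matrix of raw counts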
With the count matrix, cts, and the sample information, coldata, we can construct a DESeqDataSet:
library("DESeq2")
dds <- DESeqDataSetFromMatrix(countData = cts,
                              colData = coldata,
                              design = ~ condition)
dds
## class: DESeqDataSet
## dim: 14599 7
## metadata(1): version
## assays(1): counts
## rownames(14599): FBgn0000003 FBgn0000008 ... FBgn0261574
## FBgn0261575
## rowData names(0):
## colnames(7): treated1 treated2 ... untreated3 untreated4
## colData names(2): condition type
If you have additional feature data, it can be added to the DESeqDataSet by adding to the metadata columns of a newly constructed object. (Here we add redundant data just for demonstration, as the gene names are already the rownames of the dds.)
featureData <- data.frame(gene=rownames(cts))
mcols(dds) <- DataFrame(mcols(dds), featureData)
mcols(dds)
## DataFrame with 14599 rows and 1 column
## gene
## <factor>
## 1 FBgn0000003
## 2 FBgn0000008
## 3 FBgn0000014
## 4 FBgn0000015
## 5 FBgn0000017
## ... ...
## 14595 FBgn0261571
## 14596 FBgn0261572
## 14597 FBgn0261573
## 14598 FBgn0261574
## 14599 FBgn0261575
You can use the function DESeqDataSetFromHTSeqCount if you have used htseq-count from the HTSeq python package (Anders, Pyl, and Huber 2014). For an example of using the python scripts, see the pasilla data package. First you will want to specify a variable which points to the directory in which the htseq-count output files are located.
directory <- "/path/to/your/files/"
However, for demonstration purposes only, the following line of code points to the directory of the demo htseq-count output files packaged with the pasilla package.
directory <- system.file("extdata", package="pasilla",
mustWork=TRUE)
We specify which files to read in using list.files, and select those files which contain the string "treated" using grep. The sub function is used to chop up the sample filename to obtain the condition status, or you might alternatively read in a phenotypic table using read.table.
sampleFiles <- grep("treated",list.files(directory),value=TRUE)
sampleCondition <- sub("(.*treated).*","\\1",sampleFiles)
sampleTable <- data.frame(sampleName = sampleFiles,
                          fileName = sampleFiles,
                          condition = sampleCondition)
Then we build the DESeqDataSet using the following function:
library("DESeq2")
ddsHTSeq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable,
                                       directory = directory,
                                       design = ~ condition)
ddsHTSeq
## class: DESeqDataSet
## dim: 70463 7
## metadata(1): version
## assays(1): counts
## rownames(70463): FBgn0000003:001 FBgn0000008:001 ...
## FBgn0261575:001 FBgn0261575:002
## rowData names(0):
## colnames(7): treated1fb.txt treated2fb.txt ... untreated3fb.txt
## untreated4fb.txt
## colData names(1): condition
An example of the steps to produce a RangedSummarizedExperiment can be found in the RNA-seq workflow and in the vignette for the data package airway. Here we load the RangedSummarizedExperiment from that package in order to build a DESeqDataSet.
library("airway")
data("airway")
se <- airway
The constructor function below shows the generation of a DESeqDataSet from a RangedSummarizedExperiment se.
library("DESeq2")
ddsSE <- DESeqDataSet(se, design = ~ cell + dex)
ddsSE
## class: DESeqDataSet
## dim: 64102 8
## metadata(2): '' version
## assays(1): counts
## rownames(64102): ENSG00000000003 ENSG00000000005 ... LRG_98 LRG_99
## rowData names(0):
## colnames(8): SRR1039508 SRR1039509 ... SRR1039520 SRR1039521
## colData names(9): SampleName cell ... Sample BioSample
While it is not necessary to pre-filter low count genes before running the DESeq2 functions, there are two reasons which make pre-filtering useful: by removing rows in which there are very few reads, we reduce the memory size of the dds data object, and we increase the speed of the transformation and testing functions within DESeq2. Here we perform a minimal pre-filtering to keep only rows that have at least 10 reads total. Note that stricter filtering to increase power is automatically applied via independent filtering on the mean of normalized counts within the results function.
keep <- rowSums(counts(dds)) >= 10
dds <- dds[keep,]
By default, R will choose a reference level for factors based on alphabetical order. Then, if you never tell the DESeq2 functions which level you want to compare against (e.g. which level represents the control group), the comparisons will be based on the alphabetical order of the levels. There are two solutions: you can either explicitly tell results which comparison to make using the contrast argument (this will be shown later), or you can explicitly set the factor levels. You should only change the factor levels of variables in the design before running the DESeq2 analysis, not during or afterward. Setting the factor levels can be done in two ways, either using factor:
dds$condition <- factor(dds$condition, levels = c("untreated","treated"))
…or using relevel, just specifying the reference level:
dds$condition <- relevel(dds$condition, ref = "untreated")
If you need to subset the columns of a DESeqDataSet, i.e., when removing certain samples from the analysis, it is possible that all the samples for one or more levels of a variable in the design formula would be removed. In this case, the droplevels function can be used to remove those levels which do not have samples in the current DESeqDataSet:
dds$condition <- droplevels(dds$condition)
DESeq2 provides a function collapseReplicates which can assist in combining the counts from technical replicates into single columns of the count matrix. The term technical replicate implies multiple sequencing runs of the same library. You should not collapse biological replicates using this function. See the manual page for an example of the use of collapseReplicates.
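For illustration, a minimal sketch, assuming colData columns sample (the biological sample) and run (the sequencing run), both hypothetical here:

# columns sharing the same value of dds$sample are summed into one column
ddsColl <- collapseReplicates(dds, groupby = dds$sample, run = dds$run)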
We continue with the pasilla data constructed from the count matrix method above. This dataset is from an experiment on Drosophila melanogaster cell cultures that investigated the effect of RNAi knock-down of the splicing factor pasilla (Brooks et al. 2011). A detailed transcript of the production of the pasilla data is provided in the vignette of the data package pasilla.
The standard differential expression analysis steps are wrapped into a single function, DESeq. The estimation steps performed by this function are described below, in the manual page for ?DESeq, and in the Methods section of the DESeq2 publication (Love, Huber, and Anders 2014).
Results tables are generated using the function results, which extracts a results table with log2 fold changes, p values and adjusted p values. With no additional arguments to results, the log2 fold change and Wald test p value will be for the last variable in the design formula, and if this is a factor, the comparison will be the last level of this variable over the first level. However, the order of the variables of the design does not matter so long as the user specifies the comparison using the name or contrast arguments of results (described later and in ?results).
Details about the comparison are printed to the console, above the results table. The text, condition treated vs untreated, tells you that the estimates are of the logarithmic fold change log2(treated/untreated).
dds <- DESeq(dds)
res <- results(dds)
res
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 9921 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.144292 0.002276428 0.2237292 0.01017493 0.99188172
## FBgn0000014 1.056523 -0.495113878 2.1431096 -0.23102593 0.81729466
## FBgn0000017 4352.553569 -0.239918945 0.1263378 -1.89902705 0.05756092
## FBgn0000018 418.610484 -0.104673913 0.1484903 -0.70492106 0.48085936
## FBgn0000024 6.406200 0.210848562 0.6895923 0.30575830 0.75978868
## ... ... ... ... ... ...
## FBgn0261570 3208.388610 0.29553289 0.1273514 2.32061001 0.0203079
## FBgn0261572 6.197188 -0.95882276 0.7753130 -1.23669125 0.2162017
## FBgn0261573 2240.979511 0.01271946 0.1133028 0.11226079 0.9106166
## FBgn0261574 4857.680373 0.01539243 0.1925619 0.07993497 0.9362890
## FBgn0261575 10.682520 0.16356865 0.9308661 0.17571663 0.8605166
## padj
## <numeric>
## FBgn0000008 0.9972093
## FBgn0000014 NA
## FBgn0000017 0.2880108
## FBgn0000018 0.8268644
## FBgn0000024 0.9435005
## ... ...
## FBgn0261570 0.1442486
## FBgn0261572 0.6078453
## FBgn0261573 0.9826550
## FBgn0261574 0.9881787
## FBgn0261575 0.9679223
In previous versions of DESeq2, the DESeq function by default would produce moderated, or shrunken, log2 fold changes through the use of the betaPrior argument. In version 1.16 and higher, we have split the moderation of log2 fold changes into a separate function, lfcShrink, for reasons described in the changes section below.
Here we provide the dds object and the number of the coefficient we want to moderate. It is also possible to specify a contrast, instead of coef, which works the same as the contrast argument of the results function. If a results object is provided, the log2FoldChange column will be swapped out; otherwise lfcShrink returns a vector of shrunken log2 fold changes.
resultsNames(dds)
## [1] "Intercept" "condition_treated_vs_untreated"
resLFC <- lfcShrink(dds, coef=2)
resLFC
## log2 fold change (MAP): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 9921 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.144292 0.00155932 0.15353974 0.01017493 0.99188172
## FBgn0000014 1.056523 -0.01200958 0.05031619 -0.23102593 0.81729466
## FBgn0000017 4352.553569 -0.20935007 0.11026137 -1.89902705 0.05756092
## FBgn0000018 418.610484 -0.08715582 0.12358964 -0.70492106 0.48085936
## FBgn0000024 6.406200 0.03956173 0.12886543 0.30575830 0.75978868
## ... ... ... ... ... ...
## FBgn0261570 3208.388610 0.25744403 0.1109237 2.32061001 0.0203079
## FBgn0261572 6.197188 -0.14674317 0.1220335 -1.23669125 0.2162017
## FBgn0261573 2240.979511 0.01138380 0.1014129 0.11226079 0.9106166
## FBgn0261574 4857.680373 0.01150000 0.1438479 0.07993497 0.9362890
## FBgn0261575 10.682520 0.01898573 0.1043720 0.17571663 0.8605166
## padj
## <numeric>
## FBgn0000008 0.9972093
## FBgn0000014 NA
## FBgn0000017 0.2880108
## FBgn0000018 0.8268644
## FBgn0000024 0.9435005
## ... ...
## FBgn0261570 0.1442486
## FBgn0261572 0.6078453
## FBgn0261573 0.9826550
## FBgn0261574 0.9881787
## FBgn0261575 0.9679223
The above steps should take less than 30 seconds for most analyses. For experiments with many samples (e.g. 100 samples), one can take advantage of parallelized computation. Parallelizing DESeq, results, and lfcShrink can be easily accomplished by loading the BiocParallel package, and then setting the following arguments: parallel=TRUE and BPPARAM=MulticoreParam(4), for example, splitting the job over 4 cores. Note that results for coefficients or contrasts listed in resultsNames(dds) is fast and will not need parallelization. As an alternative to BPPARAM, one can register cores at the beginning of an analysis, and then just specify parallel=TRUE to the functions when called.
library("BiocParallel")
register(MulticoreParam(4))
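With cores registered as above, the parallelized calls would look like the following (a sketch, not run here):

# dds <- DESeq(dds, parallel=TRUE)
# res <- results(dds, parallel=TRUE)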
We can order our results table by the smallest p value:
resOrdered <- res[order(res$pvalue),]
We can summarize some basic tallies using the summary function.
summary(res)
##
## out of 9921 with nonzero total read count
## adjusted p-value < 0.1
## LFC > 0 (up) : 518, 5.2%
## LFC < 0 (down) : 536, 5.4%
## outliers [1] : 1, 0.01%
## low counts [2] : 1539, 16%
## (mean count < 6)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results
How many adjusted p-values were less than 0.1?
sum(res$padj < 0.1, na.rm=TRUE)
## [1] 1054
The results function contains a number of arguments to customize the results table which is generated. You can read about these arguments by looking up ?results. Note that the results function automatically performs independent filtering based on the mean of normalized counts for each gene, optimizing the number of genes which will have an adjusted p value below a given FDR cutoff, alpha. Independent filtering is further discussed below. By default the argument alpha is set to \(0.1\). If the adjusted p value cutoff will be a value other than \(0.1\), alpha should be set to that value:
res05 <- results(dds, alpha=0.05)
summary(res05)
##
## out of 9921 with nonzero total read count
## adjusted p-value < 0.05
## LFC > 0 (up) : 407, 4.1%
## LFC < 0 (down) : 431, 4.3%
## outliers [1] : 1, 0.01%
## low counts [2] : 1347, 14%
## (mean count < 5)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results
sum(res05$padj < 0.05, na.rm=TRUE)
## [1] 838
A generalization of the idea of p value filtering is to weight hypotheses to optimize power. A Bioconductor package, IHW, is available that implements the method of Independent Hypothesis Weighting (Ignatiadis et al. 2016). Here we show the use of IHW for p value adjustment of DESeq2 results. For more details, please see the vignette of the IHW package. The IHW result object is stored in the metadata.
Note: If the results of independent hypothesis weighting are used in published research, please cite:
Ignatiadis, N., Klaus, B., Zaugg, J.B., Huber, W. (2016) Data-driven hypothesis weighting increases detection power in genome-scale multiple testing. Nature Methods, 13:7. 10.1038/nmeth.3885
library("IHW")
resIHW <- results(dds, filterFun=ihw)
summary(resIHW)
##
## out of 9921 with nonzero total read count
## adjusted p-value < 0.1
## LFC > 0 (up) : 504, 5.1%
## LFC < 0 (down) : 540, 5.4%
## outliers [1] : 1, 0.01%
## [1] see 'cooksCutoff' argument of ?results
## [2] see metadata(res)$ihwResult on hypothesis weighting
sum(resIHW$padj < 0.1, na.rm=TRUE)
## [1] 1044
metadata(resIHW)$ihwResult
## ihwResult object with 9921 hypothesis tests
## Nominal FDR control level: 0.1
## Split into 6 bins, based on an ordinal covariate
If a multi-factor design is used, or if the variable in the design formula has more than two levels, the contrast argument of results can be used to extract different comparisons from the DESeqDataSet returned by DESeq. The use of the contrast argument is further discussed below.
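For example, with the current dataset the default comparison could be requested explicitly as:

res <- results(dds, contrast=c("condition","treated","untreated"))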
For advanced users, note that all the values calculated by the DESeq2 package are stored in the DESeqDataSet object, and access to these values is discussed below.
In DESeq2, the function plotMA shows the log2 fold changes attributable to a given variable over the mean of normalized counts for all the samples in the DESeqDataSet. Points will be colored red if the adjusted p value is less than 0.1. Points which fall out of the window are plotted as open triangles pointing either up or down.
plotMA(res, ylim=c(-2,2))
It is more useful to visualize the MA-plot for the shrunken log2 fold changes, which remove the noise associated with log2 fold changes from low count genes without requiring arbitrary filtering thresholds.
plotMA(resLFC, ylim=c(-2,2))
After calling plotMA, one can use the function identify to interactively detect the row number of individual genes by clicking on the plot. One can then recover the gene identifiers by saving the resulting indices:
idx <- identify(res$baseMean, res$log2FoldChange)
rownames(res)[idx]
The moderated log fold changes proposed by Love, Huber, and Anders (2014) use a normal prior distribution, centered on zero and with a scale that is fit to the data. The shrunken log fold changes are useful for ranking and visualization, without the need for arbitrary filters on low count genes. The normal prior can sometimes produce too strong shrinkage for certain datasets. In DESeq2 version 1.18, we include two additional adaptive shrinkage estimators, available via the type argument of lfcShrink. For more details, see ?lfcShrink. The options for type are:
normal is the original DESeq2 shrinkage estimator, an adaptive normal prior
apeglm is the adaptive t prior shrinkage estimator from the apeglm package
ashr is the adaptive shrinkage estimator from the ashr package (Stephens 2016). Here DESeq2 uses the ashr option to fit a mixture of normal distributions to form the prior, with method="shrinkage"
Note: if the shrinkage estimator type="ashr" is used in published research, please cite:
Stephens, M. (2016) False discovery rates: a new deal. Biostatistics, 18:2. 10.1093/biostatistics/kxw041
resApe <- lfcShrink(dds, coef=2, type="apeglm")
resAsh <- lfcShrink(dds, coef=2, type="ashr")
par(mfrow=c(1,3), mar=c(4,4,2,1))
xlim <- c(1,1e5); ylim <- c(-3,3)
plotMA(resLFC, xlim=xlim, ylim=ylim, main="normal")
plotMA(resApe, xlim=xlim, ylim=ylim, main="apeglm")
plotMA(resAsh, xlim=xlim, ylim=ylim, main="ashr")
Note: due to the nature of the statistical model and optimization approach, apeglm is usually a factor of ~5 slower than normal. For example, with 10,000 genes and 10 samples, normal may take ~3 seconds, while apeglm takes ~15 seconds (on a laptop). However, apeglm can be more than an order of magnitude slower when there are many coefficients, e.g. 10 or more coefficients in resultsNames(dds). The method ashr is fairly fast and does not depend on the number of coefficients, as it uses only the estimated MLE coefficients and their standard errors. A solution for speeding up normal and apeglm is to use multiple cores. This can be easily accomplished by loading the BiocParallel package, and then setting the following arguments of lfcShrink: parallel=TRUE and BPPARAM=MulticoreParam(4), for example, splitting the job over 4 cores. This approach can also be used with DESeq and results, as mentioned above.
Note: If there is unwanted variation present in the data (e.g. batch effects) it is always recommended to correct for this, which can be accommodated in DESeq2 by including in the design any known batch variables or by using functions/packages such as svaseq in sva (Leek 2014) or the RUV functions in RUVSeq (Risso et al. 2014) to estimate variables that capture the unwanted variation. In addition, the ashr developers have a specific method for accounting for unwanted variation in combination with ashr (Gerard and Stephens 2017).
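A rough sketch of the svaseq route, for illustration only (see the sva vignette for the recommended workflow; the choice of 2 surrogate variables here is arbitrary):

library("sva")
dat <- counts(dds, normalized=TRUE)
dat <- dat[rowMeans(dat) > 1, ]                 # drop rows with very low counts
mod <- model.matrix(~ condition, colData(dds))  # full model with the variable of interest
mod0 <- model.matrix(~ 1, colData(dds))         # null model
svseq <- svaseq(dat, mod, mod0, n.sv=2)         # estimate 2 surrogate variables
dds$SV1 <- svseq$sv[,1]
dds$SV2 <- svseq$sv[,2]
design(dds) <- ~ SV1 + SV2 + condition          # include them in the design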
It can also be useful to examine the counts of reads for a single gene across the groups. A simple function for making this plot is plotCounts, which normalizes counts by sequencing depth and adds a pseudocount of 1/2 to allow for log scale plotting. The counts are grouped by the variables in intgroup, where more than one variable can be specified. Here we specify the gene which had the smallest p value from the results table created above. You can select the gene to plot by rowname or by numeric index.
plotCounts(dds, gene=which.min(res$padj), intgroup="condition")
For customized plotting, an argument returnData specifies that the function should only return a data.frame for plotting with ggplot.
d <- plotCounts(dds, gene=which.min(res$padj), intgroup="condition",
                returnData=TRUE)
library("ggplot2")
ggplot(d, aes(x=condition, y=count)) +
  geom_point(position=position_jitter(w=0.1,h=0)) +
  scale_y_log10(breaks=c(25,100,400))
Information about which variables and tests were used can be found by calling the function mcols on the results object.
mcols(res)$description
## [1] "mean of normalized counts for all samples"
## [2] "log2 fold change (MLE): condition treated vs untreated"
## [3] "standard error: condition treated vs untreated"
## [4] "Wald statistic: condition treated vs untreated"
## [5] "Wald test p-value: condition treated vs untreated"
## [6] "BH adjusted p-values"
For a particular gene, a log2 fold change of -1 for condition treated vs untreated means that the treatment induces a multiplicative change in observed gene expression level of \(2^{-1} = 0.5\) compared to the untreated condition. If the variable of interest is continuous-valued, then the reported log2 fold change is per unit of change of that variable.
Note on p-values set to NA: some values in the results table can be set to NA for one of the following reasons:
If within a row, all samples have zero counts, the baseMean column will be zero, and the log2 fold change estimates, p value and adjusted p value will all be set to NA.
If a row contains a sample with an extreme count outlier, then the p value and adjusted p value will be set to NA. These outlier counts are detected by Cook’s distance. Customization of this outlier filtering and description of functionality for replacement of outlier counts and refitting is described below.
If a row is filtered by automatic independent filtering, for having a low mean normalized count, then only the adjusted p value will be set to NA. Description and customization of independent filtering is described below.
ReportingTools. An HTML report of the results with plots and sortable/filterable columns can be generated using the ReportingTools package on a DESeqDataSet that has been processed by the DESeq function. For a code example, see the RNA-seq differential expression vignette at the ReportingTools page, or the manual page for the publish method for the DESeqDataSet class.
regionReport. An HTML and PDF summary of the results with plots can also be generated using the regionReport package. The DESeq2Report function should be run on a DESeqDataSet that has been processed by the DESeq function. For more details see the manual page for DESeq2Report and an example vignette in the regionReport package.
Glimma. Interactive visualization of DESeq2 output, including MA-plots (also called MD-plot) can be generated using the Glimma package. See the manual page for glMDPlot.DESeqResults.
pcaExplorer. Interactive visualization of DESeq2 output, including PCA plots, boxplots of counts and other useful summaries can be generated using the pcaExplorer package. See the Launching the application section of the package vignette.
A plain-text file of the results can be exported using the base R functions write.csv or write.table. We suggest using a descriptive file name indicating the variable and levels which were tested.
write.csv(as.data.frame(resOrdered),
          file="condition_treated_results.csv")
Exporting only the results which pass an adjusted p value threshold can be accomplished with the subset function, followed by the write.csv function.
resSig <- subset(resOrdered, padj < 0.1)
resSig
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 1054 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0039155 730.5677 -4.618741 0.16912665 -27.30936 3.283533e-164
## FBgn0025111 1501.4479 2.899946 0.12735926 22.76981 9.134947e-115
## FBgn0029167 3706.0240 -2.196912 0.09792037 -22.43570 1.765125e-111
## FBgn0003360 4342.8321 -3.179541 0.14356712 -22.14672 1.121885e-108
## FBgn0035085 638.2193 -2.560242 0.13781525 -18.57735 4.901323e-77
## ... ... ... ... ... ...
## FBgn0037073 973.10163 -0.2521459 0.10099319 -2.496662 0.01253684
## FBgn0029976 2312.58850 -0.2211265 0.08858313 -2.496259 0.01255108
## FBgn0030938 24.80638 0.9576449 0.38364585 2.496169 0.01255428
## FBgn0034753 7775.27113 0.3935148 0.15767268 2.495770 0.01256840
## FBgn0039260 1088.27659 -0.2592536 0.10387902 -2.495727 0.01256994
## padj
## <numeric>
## FBgn0039155 2.751929e-160
## FBgn0025111 3.827999e-111
## FBgn0029167 4.931170e-108
## FBgn0003360 2.350629e-105
## FBgn0035085 8.215598e-74
## ... ...
## FBgn0037073 0.0999513
## FBgn0029976 0.0999513
## FBgn0030938 0.0999513
## FBgn0034753 0.0999513
## FBgn0039260 0.0999513
Experiments with more than one factor influencing the counts can be analyzed using design formulas that include the additional variables. In fact, DESeq2 can analyze any possible experimental design that can be expressed with fixed effects terms (multiple factors, designs with interactions, designs with continuous variables, splines, and so on are all possible).
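For illustration, a few example design formulas (variable names hypothetical):

~ genotype + condition                       # control for a second factor
~ genotype + condition + genotype:condition  # include an interaction
~ age + condition                            # continuous covariate plus condition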
By adding variables to the design, one can control for additional variation in the counts. For example, if the condition samples are balanced across experimental batches, including the batch factor in the design can increase the sensitivity for finding differences due to condition. There are multiple ways to analyze experiments when the additional variables are of interest and not just controlling factors (see the section on interactions).
The data in the pasilla package have a condition of interest (the column condition), as well as information on the type of sequencing which was performed (the column type), as we can see below:
colData(dds)
## DataFrame with 7 rows and 3 columns
## condition type sizeFactor
## <factor> <factor> <numeric>
## treated1 treated single-read 1.6355014
## treated2 treated paired-end 0.7612159
## treated3 treated paired-end 0.8326603
## untreated1 untreated single-read 1.1383376
## untreated2 untreated single-read 1.7935406
## untreated3 untreated paired-end 0.6494828
## untreated4 untreated paired-end 0.7516005
We create a copy of the DESeqDataSet, so that we can rerun the analysis using a multi-factor design.
ddsMF <- dds
We change the levels of type so it only contains letters (numbers, underscore and period are also allowed in design factor levels). Be careful when changing level names to use the same order as the current levels.
levels(ddsMF$type)
## [1] "paired-end" "single-read"
levels(ddsMF$type) <- sub("-.*", "", levels(ddsMF$type))
levels(ddsMF$type)
## [1] "paired" "single"
We can account for the different types of sequencing, and get a clearer picture of the differences attributable to the treatment. As condition is the variable of interest, we put it at the end of the formula. Thus the results function will by default pull the condition results unless contrast or name arguments are specified.
Then we can re-run DESeq:
design(ddsMF) <- formula(~ type + condition)
ddsMF <- DESeq(ddsMF)
Again, we access the results using the results function.
resMF <- results(ddsMF)
head(resMF)
## log2 fold change (MLE): condition treated vs untreated
## Wald test p-value: condition treated vs untreated
## DataFrame with 6 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.144292 -0.04055736 0.2200633 -0.18429862 0.85377920
## FBgn0000014 1.056523 -0.08351882 2.0760816 -0.04022906 0.96791051
## FBgn0000017 4352.553569 -0.25605701 0.1122166 -2.28181137 0.02250048
## FBgn0000018 418.610484 -0.06461523 0.1313488 -0.49193622 0.62276444
## FBgn0000024 6.406200 0.30898382 0.7560075 0.40870468 0.68275640
## FBgn0000032 989.720217 -0.04837925 0.1208420 -0.40035128 0.68889781
## padj
## <numeric>
## FBgn0000008 0.9494634
## FBgn0000014 NA
## FBgn0000017 0.1302627
## FBgn0000018 0.8593904
## FBgn0000024 0.8877697
## FBgn0000032 0.8902013
It is also possible to retrieve the log2 fold changes, p values and adjusted p values of the type variable. The contrast argument of the function results takes a character vector of length three: the name of the variable, the name of the factor level for the numerator of the log2 ratio, and the name of the factor level for the denominator. The contrast argument can also take other forms, as described in the help page for results and below.
resMFType <- results(ddsMF,
                     contrast=c("type", "single", "paired"))
head(resMFType)
## log2 fold change (MLE): type single vs paired
## Wald test p-value: type single vs paired
## DataFrame with 6 rows and 6 columns
## baseMean log2FoldChange lfcSE stat pvalue
## <numeric> <numeric> <numeric> <numeric> <numeric>
## FBgn0000008 95.144292 -0.2623745 0.2185279 -1.2006453 0.22988882
## FBgn0000014 1.056523 3.2898915 2.0531720 1.6023457 0.10907917
## FBgn0000017 4352.553569 -0.1000200 0.1120782 -0.8924127 0.37217178
## FBgn0000018 418.610484 0.2290491 0.1302603 1.7583952 0.07868028
## FBgn0000024 6.406200 0.3060704 0.7514061 0.4073303 0.68376543
## FBgn0000032 989.720217 0.2374130 0.1202745 1.9739259 0.04839017
## padj
## <numeric>
## FBgn0000008 0.5361332
## FBgn0000014 NA
## FBgn0000017 0.6831308
## FBgn0000018 0.2917133
## FBgn0000024 0.8804532
## FBgn0000032 0.2175655
If the variable is continuous or an interaction term (see the section on interactions) then the results can be extracted using the name argument to results, where the name is one of the elements returned by resultsNames(dds).
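For example, using the coefficient name shown by resultsNames(dds) earlier:

res <- results(dds, name="condition_treated_vs_untreated")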
In order to test for differential expression, we operate on raw counts and use discrete distributions as described in the previous section on differential expression. However for other downstream analyses – e.g. for visualization or clustering – it might be useful to work with transformed versions of the count data.
Maybe the most obvious choice of transformation is the logarithm. Since count values for a gene can be zero in some conditions (and non-zero in others), some advocate the use of pseudocounts, i.e. transformations of the form:
\[ y = \log_2(n + n_0) \]
where n represents the count values and \(n_0\) is a positive constant.
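As a concrete example of this family of transformations, the shifted logarithm with \(n_0 = 1\) can be computed directly from the size-factor-normalized counts (the normTransform function used at the end of this section does exactly this):

log2(counts(dds, normalized=TRUE) + 1)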
In this section, we discuss two alternative approaches that offer more theoretical justification and a rational way of choosing parameters equivalent to \(n_0\) above. One makes use of the concept of variance stabilizing transformations (VST) (Tibshirani 1988; Huber et al. 2003; Anders and Huber 2010), and the other is the regularized logarithm or rlog, which incorporates a prior on the sample differences (Love, Huber, and Anders 2014). Both transformations produce transformed data on the log2 scale which has been normalized with respect to library size or other normalization factors.
The point of these two transformations, the VST and the rlog, is to remove the dependence of the variance on the mean, particularly the high variance of the logarithm of count data when the mean is low. Both VST and rlog use the experiment-wide trend of variance over mean, in order to transform the data to remove the experiment-wide trend. Note that we do not require or desire that all the genes have exactly the same variance after transformation. Indeed, in a figure below, you will see that after the transformations the genes with the same mean do not have exactly the same standard deviations, but that the experiment-wide trend has flattened. It is those genes with row variance above the trend which will allow us to cluster samples into interesting groups.
Note on running time: if you have many samples (e.g. 100s), the rlog function might take too long, and so the vst function will be a faster choice. The rlog and VST have similar properties, but the rlog requires fitting a shrinkage term for each sample and each gene which takes time. See the DESeq2 paper for more discussion on the differences (Love, Huber, and Anders 2014).
The two functions, vst and rlog, have an argument blind, for whether the transformation should be blind to the sample information specified by the design formula. When blind equals TRUE (the default), the functions will re-estimate the dispersions using only an intercept. This setting should be used in order to compare samples in a manner wholly unbiased by the information about experimental groups, for example to perform sample QA (quality assurance) as demonstrated below.
However, blind dispersion estimation is not the appropriate choice if one expects that many or the majority of genes (rows) will have large differences in counts which are explainable by the experimental design, and one wishes to transform the data for downstream analysis. In this case, using blind dispersion estimation will lead to large estimates of dispersion, as it attributes differences due to experimental design as unwanted noise, and will result in overly shrinking the transformed values towards each other. By setting blind to FALSE, the dispersions already estimated will be used to perform transformations, or if not present, they will be estimated using the current design formula. Note that only the fitted dispersion estimates from the mean-dispersion trend line are used in the transformation (the global dependence of dispersion on mean for the entire experiment). So setting blind to FALSE is still, for the most part, not using the information about which samples were in which experimental group in applying the transformation.
These transformation functions return an object of class DESeqTransform which is a subclass of RangedSummarizedExperiment. For ~20 samples, running on a newly created DESeqDataSet, rlog may take 30 seconds, while vst takes less than 1 second. The running times are shorter when using blind=FALSE and if the function DESeq has already been run, because then it is not necessary to re-estimate the dispersion values. The assay function is used to extract the matrix of normalized values.
vsd <- vst(dds, blind=FALSE)
rld <- rlog(dds, blind=FALSE)
head(assay(vsd), 3)
## treated1 treated2 treated3 untreated1 untreated2 untreated3
## FBgn0000008 7.607799 7.834807 7.594933 7.567177 7.642058 7.844499
## FBgn0000014 6.318607 6.040987 6.040987 6.412578 6.173698 6.040987
## FBgn0000017 11.938303 12.024550 12.013558 12.045714 12.284641 12.455933
## untreated4
## FBgn0000008 7.669033
## FBgn0000014 6.040987
## FBgn0000017 12.077397
Above, we used a parametric fit for the dispersion. In this case, the closed-form expression for the variance stabilizing transformation is used by the vst function. If a local fit is used (option fitType="locfit" to estimateDispersions) a numerical integration is used instead. The transformed data should be approximately variance stabilized and also includes correction for size factors or normalization factors. The transformed data is on the log2 scale for large counts.
The function rlog, which stands for regularized log, transforms the original count data to the log2 scale by fitting a model with a term for each sample and a prior distribution on the coefficients which is estimated from the data. This is the same kind of shrinkage (sometimes referred to as regularization, or moderation) of log fold changes used by DESeq and nbinomWaldTest. The resulting data contains elements defined as:
\[ \log_2(q_{ij}) = \beta_{i0} + \beta_{ij} \]
where \(q_{ij}\) is a parameter proportional to the expected true concentration of fragments for gene i and sample j (see formula below), \(\beta_{i0}\) is an intercept which does not undergo shrinkage, and \(\beta_{ij}\) is the sample-specific effect which is shrunk toward zero based on the dispersion-mean trend over the entire dataset. The trend typically captures high dispersions for low counts, and therefore these genes exhibit higher shrinkage from the rlog.
Note that, as \(q_{ij}\) represents the part of the mean value \(\mu_{ij}\) after the size factor \(s_j\) has been divided out, it is clear that the rlog transformation inherently accounts for differences in sequencing depth. Without priors, this design matrix would lead to a non-unique solution, however the addition of a prior on non-intercept betas allows for a unique solution to be found.
The figure below plots the standard deviation of the transformed data, across samples, against the mean, using the shifted logarithm transformation, the regularized log transformation and the variance stabilizing transformation. The shifted logarithm has elevated standard deviation in the lower count range, and the regularized log to a lesser extent, while for the variance stabilized data the standard deviation is roughly constant along the whole dynamic range.
Note that the vertical axis in such plots is the square root of the variance over all samples, so including the variance due to the experimental conditions. While a flat curve of the square root of variance over the mean may seem like the goal of such transformations, this may be unreasonable in the case of datasets with many true differences due to the experimental conditions.
# this gives log2(n + 1)
ntd <- normTransform(dds)
library("vsn")
meanSdPlot(assay(ntd))
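The corresponding plots for the variance stabilizing and rlog transformed data can be produced in the same way:

meanSdPlot(assay(vsd))
meanSdPlot(assay(rld))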