Here we walk through an end-to-end gene-level RNA-seq differential expression workflow using Bioconductor packages. We will start from the FASTQ files, show how these were quantified to the reference transcripts, and prepare gene-level count datasets for downstream analysis. We will perform exploratory data analysis (EDA) for quality assessment and to explore the relationship between samples, perform differential gene expression analysis, and visually explore the results.
R version: 4.3.2 (2023-10-31)
Bioconductor version: 3.18
Package: 1.26.0
Bioconductor has many packages which support analysis of high-throughput sequence data, including RNA sequencing (RNA-seq). The packages which we will use in this workflow include core packages maintained by the Bioconductor core team for working with gene annotations (gene and transcript locations in the genome, as well as gene ID lookup). We will also use contributed packages for statistical analysis and visualization of sequencing data. Through scheduled releases every 6 months, the Bioconductor project ensures that all the packages within a release will work together in harmony (hence the “conductor” metaphor). The packages used in this workflow are loaded with the library function and can be installed by following the Bioconductor package installation instructions.
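For example, a minimal installation sketch using BiocManager (the package list here covers the main packages used below; adjust as needed for your own analysis):
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install(c("airway", "tximeta", "DESeq2", "vsn", "pheatmap", "PoiClaClu"))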
If you have questions about this workflow or any Bioconductor software, you can post them to the Bioconductor support site and tag the post with rnaseqgene. Note the posting guide for crafting an optimal question for the support site.

The data used in this workflow is stored in the airway package, which summarizes an RNA-seq experiment wherein airway smooth muscle cells were treated with dexamethasone, a synthetic glucocorticoid steroid with anti-inflammatory effects (Himes et al. 2014). Glucocorticoids are used, for example, by people with asthma to reduce inflammation of the airways. In the experiment, four primary human airway smooth muscle cell lines were treated with 1 micromolar dexamethasone for 18 hours. For each of the four cell lines, we have a treated and an untreated sample. For more description of the experiment see the PubMed entry 24926665 and for raw data see the GEO entry GSE52778.
As input, the count-based statistical methods, such as DESeq2 (Love, Huber, and Anders 2014), edgeR (Robinson, McCarthy, and Smyth 2009), limma with the voom method (Law et al. 2014), DSS (Wu, Wang, and Wu 2013), EBSeq (Leng et al. 2013) and baySeq (Hardcastle and Kelly 2010), expect input data as obtained, e.g., from RNA-seq or another high-throughput sequencing experiment, in the form of a matrix of un-normalized counts. The value in the i-th row and the j-th column of the matrix tells how many reads (or fragments, for paired-end RNA-seq) can be assigned to gene i in sample j. Analogously, for other types of assays, the rows of the matrix might correspond e.g., to binding regions (with ChIP-Seq), or peptide sequences (with quantitative mass spectrometry).
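As a purely illustrative sketch of this layout (gene names, sample names, and values below are invented):
# a toy count matrix: rows are genes, columns are samples, entries are fragment counts
toy_counts <- matrix(c(12L, 0L, 431L,
                       9L, 2L, 502L),
                     nrow = 3,
                     dimnames = list(c("geneA", "geneB", "geneC"),
                                     c("sample1", "sample2")))
toy_counts["geneC", "sample2"]   # how many fragments were assigned to geneC in sample2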
The values in the matrix should be counts or estimated counts of sequencing reads/fragments. This is important for DESeq2’s statistical model to hold, as only counts allow assessing the measurement precision correctly. It is important to never provide counts that were pre-normalized for sequencing depth/library size, as the statistical model is most powerful when applied to un-normalized counts, and is designed to account for library size differences internally.
A previous version of this workflow (including the published version) demonstrated how to align reads to the genome and then count the number of reads that are consistent with gene models. We now recommend a faster, alternative pipeline to genome alignment and read counting. This workflow will demonstrate how to import transcript-level quantification data, aggregating to the gene-level with tximport or tximeta. Transcript quantification methods such as Salmon (Patro et al. 2017), kallisto (Bray et al. 2016), or RSEM (Li and Dewey 2011) perform mapping or alignment of reads to reference transcripts, outputting estimated counts per transcript as well as effective transcript lengths which summarize bias effects. After running one of these tools, the tximport (Soneson, Love, and Robinson 2015) or tximeta (Love et al. 2020) packages can be used to assemble estimated count and offset matrices for use with Bioconductor differential gene expression packages, as will be demonstrated below.
A tutorial on how to use the Salmon software for quantifying transcript abundance can be found here. We recommend using the --gcBias flag, which estimates a correction factor for systematic biases commonly present in RNA-seq data (Love, Hogenesch, and Irizarry 2016; Patro et al. 2017), unless you are certain that your data do not contain such bias.
The advantages of using the transcript abundance quantifiers in conjunction with tximport/tximeta to produce gene-level count matrices and normalizing offsets, are: (1) this approach corrects for any potential changes in gene length across samples (e.g. from differential isoform usage) (Trapnell et al. 2013); (2) some of these methods are substantially faster and require less memory and disk usage compared to alignment-based methods; and (3) it is possible to avoid discarding those fragments that can align to multiple genes with homologous sequence (Robert and Watson 2015). Note that transcript abundance quantifiers skip the generation of large files which store read alignments, instead producing smaller files which store estimated abundances, counts, and effective lengths per transcript. For more details, see the manuscript describing this approach (Soneson, Love, and Robinson 2015), and the tximport package vignette for software details.
tximeta (Love et al. 2020) extends tximport, offering the same functionality plus the additional benefit of automatic addition of annotation metadata for commonly used transcriptomes (GENCODE, Ensembl, RefSeq for human and mouse). See the tximeta package vignette for more details. tximeta produces a SummarizedExperiment that can be loaded easily into DESeq2 using the DESeqDataSet function, which will be demonstrated below. We will also discuss the various possible inputs into DESeq2, whether using tximport, tximeta, htseq-count (Anders, Pyl, and Huber 2015), or a pre-computed count matrix.
As mentioned above, a short tutorial on how to use Salmon can be found here, so instead we will provide the code that was used to quantify the files used in this workflow. Salmon can be conveniently run on a cluster using the Snakemake workflow management system (Köster and Rahmann 2012).
The following Snakemake
file was used to quantify the eight samples that
were downloaded from the SRA (the SRR identifier is the run
identifier, and there was only one run per sample for these eight samples).
DATASETS = ["SRR1039508",
"SRR1039509",
"SRR1039512",
"SRR1039513",
"SRR1039516",
"SRR1039517",
"SRR1039520",
"SRR1039521"]
SALMON = "/path/to/salmon_0.14.1/bin/salmon"
rule all:
input: expand("quants/{dataset}/quant.sf", dataset=DATASETS)
rule salmon_quant:
input:
r1 = "fastq/{sample}_1.fastq.gz",
r2 = "fastq/{sample}_2.fastq.gz",
index = "/path/to/gencode.v29_salmon_0.14.1"
output:
"quants/{sample}/quant.sf"
params:
dir = "quants/{sample}"
shell:
"{SALMON} quant -i {input.index} -l A -p 6 --validateMappings \
--gcBias --numGibbsSamples 20 -o {params.dir} \
-1 {input.r1} -2 {input.r2}"
The last line is the key one which runs Salmon. It says to quantify using a specific index, with automatic library type detection, using 6 threads, with the validate mappings setting (this is default in versions of Salmon ≥ 0.99), with GC bias correction, and writing out 20 Gibbs samples (this is optional). The last three arguments specify the output directory and the two paired read files.
The above Snakemake file requires that an index be created at /path/to/gencode.vVV_salmon_X.Y.Z, where VV and X.Y.Z should help specify the release of the reference transcripts and the version of Salmon. For human and mouse reference transcripts, we recommend using GENCODE (Frankish et al. 2018).
The Salmon index can be created easily with the following command:
salmon index -t transcripts.fa.gz -i name_of_index
Note: Salmon can also make use of genomic decoy sequences during indexing, as described in Srivastava et al. (2020), which has been demonstrated to improve quantification accuracy. For further details on how to make use of decoy sequences within the Salmon software please consult this note in the Salmon documentation. Here, we will continue mapping reads to the transcriptome without decoy sequences. During indexing, a note will appear that the Salmon index is being built without any decoy sequences.
If the transcripts are downloaded from GENCODE, it is recommended to use something similar to the following command (which simply helps to strip the extra information from the transcript names):
salmon index --gencode -t gencode.v29.transcripts.fa.gz \
-i gencode.v29_salmon_X.Y.Z
The above Snakemake file can then be used to execute Snakemake in various ways, including submitting multiple jobs to a compute cluster or in the cloud. The above Snakemake file was executed on a cluster with SLURM scheduling, with the following line in a separate job submitted to the cluster:
snakemake -j 4 --latency-wait 30 --cluster "sbatch -N 1 -n 6"
Later in the workflow, we will load an object that contains the quantification data at the gene level for all eight samples. However, the airway package also contains two quantification directories output by Salmon, in order to demonstrate reading this data into R/Bioconductor. In order to make the data package smaller, the quant.sf files in the quantification directories have been gzipped, so below where you see quant.sf.gz, you would probably use quant.sf on your own machine.
After we demonstrate importing with tximeta, we will load the full count matrix corresponding to all samples and all data, which is already provided in the same package, and will continue the analysis with that full data object.
We first load the data package with the example data:
library("airway")
The R function system.file can be used to find out where on your computer the files from a package have been installed. Here we ask for the full path to the extdata directory, which is part of the airway package and is where R packages store external data.
dir <- system.file("extdata", package="airway", mustWork=TRUE)
In this directory, we find a number of files, including eight BAM files that were used in the previous version of this workflow demonstrating alignment and counting. We will focus on the two directories inside the quants directory, which contain the Salmon output for two of the samples.
list.files(dir)
## [1] "GSE52778_series_matrix.txt" "Homo_sapiens.GRCh37.75_subset.gtf"
## [3] "SRR1039508_subset.bam" "SRR1039509_subset.bam"
## [5] "SRR1039512_subset.bam" "SRR1039513_subset.bam"
## [7] "SRR1039516_subset.bam" "SRR1039517_subset.bam"
## [9] "SRR1039520_subset.bam" "SRR1039521_subset.bam"
## [11] "SraRunInfo_SRP033351.csv" "quants"
## [13] "sample_table.csv"
list.files(file.path(dir, "quants"))
## [1] "SRR1039508" "SRR1039509"
Typically, we have a table with detailed information for each of our samples that links samples to the associated FASTQ and Salmon directories. For your own project, you might create such a comma-separated value (CSV) file using a text editor or spreadsheet software such as Excel.
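For instance, a minimal sketch of writing such a table from R (sample names, condition labels, and the file name are hypothetical):
# two columns: one naming the samples, one recording the condition of each
my_samples <- data.frame(names = c("sampleA", "sampleB"),
                         condition = c("untrt", "trt"))
write.csv(my_samples, file = "my_sample_table.csv", row.names = FALSE)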
We load such a CSV file with read.csv:
csvfile <- file.path(dir, "sample_table.csv")
coldata <- read.csv(csvfile, row.names=1, stringsAsFactors=FALSE)
coldata
## SampleName cell dex albut Run avgLength Experiment
## SRR1039508 GSM1275862 N61311 untrt untrt SRR1039508 126 SRX384345
## SRR1039509 GSM1275863 N61311 trt untrt SRR1039509 126 SRX384346
## SRR1039512 GSM1275866 N052611 untrt untrt SRR1039512 126 SRX384349
## SRR1039513 GSM1275867 N052611 trt untrt SRR1039513 87 SRX384350
## SRR1039516 GSM1275870 N080611 untrt untrt SRR1039516 120 SRX384353
## SRR1039517 GSM1275871 N080611 trt untrt SRR1039517 126 SRX384354
## SRR1039520 GSM1275874 N061011 untrt untrt SRR1039520 101 SRX384357
## SRR1039521 GSM1275875 N061011 trt untrt SRR1039521 98 SRX384358
## Sample BioSample
## SRR1039508 SRS508568 SAMN02422669
## SRR1039509 SRS508567 SAMN02422675
## SRR1039512 SRS508571 SAMN02422678
## SRR1039513 SRS508572 SAMN02422670
## SRR1039516 SRS508575 SAMN02422682
## SRR1039517 SRS508576 SAMN02422673
## SRR1039520 SRS508579 SAMN02422683
## SRR1039521 SRS508580 SAMN02422677
To demonstrate loading Salmon quantification data into R, we will work with just the two samples that are provided in the airway package. We create a column called names and a column called files:
coldata <- coldata[1:2,]
coldata$names <- coldata$Run
coldata$files <- file.path(dir, "quants", coldata$names, "quant.sf.gz")
file.exists(coldata$files)
## [1] TRUE TRUE
Now we load the tximeta package and run its main function:
library("tximeta")
se <- tximeta(coldata)
## importing quantifications
## reading in files with read_tsv
## 1 2
## found matching transcriptome:
## [ GENCODE - Homo sapiens - release 29 ]
## useHub=TRUE: checking for TxDb via 'AnnotationHub'
## found matching TxDb via 'AnnotationHub'
## loading from cache
## Loading required package: GenomicFeatures
## Loading required package: AnnotationDbi
## generating transcript ranges
If the reference transcriptome checksum was recognized by tximeta (details on this in the tximeta vignette), and if we have a working internet connection, tximeta will locate and download the relevant annotation data from various sources. A few details: the annotation data is only downloaded and parsed once; if you later load data that was quantified against the same reference transcripts, tximeta will use locally cached versions of the metadata as needed. Also, the very first time that one uses tximeta, it will ask you to approve the default cache location (following the paradigm of the cache location used by other R and Bioconductor packages). You can change this location at any point later.
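For instance, a small sketch for querying or relocating that cache (the directory path below is hypothetical):
library("tximeta")
getTximetaBFC()                        # report the cache directory currently in use, if any
# setTximetaBFC("/path/to/new/cache")  # hypothetical path: point tximeta at another cache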
We will discuss the structure of the se object in the next section, but we can first just consider the dimensions. Note that tximeta imports data at the transcript level.
dim(se)
## [1] 205870 2
head(rownames(se))
## [1] "ENST00000456328.2" "ENST00000450305.2" "ENST00000488147.1"
## [4] "ENST00000619216.1" "ENST00000473358.1" "ENST00000469289.1"
As this workflow is concerned with gene-level analysis, we will now summarize the transcript-level quantifications to the gene level (which internally makes use of the methods in tximport (Soneson, Love, and Robinson 2015)). The correct transcript-to-gene mapping table is automatically created based on the metadata stored within the se object.
gse <- summarizeToGene(se)
## loading existing TxDb created: 2023-11-10 16:03:54
## obtaining transcript-to-gene mapping from database
## generating gene ranges
## gene ranges assigned by total range of isoforms, see `assignRanges`
## summarizing abundance
## summarizing counts
## summarizing length
Now we can check that the dimensions are reduced and the row IDs are now gene IDs:
dim(gse)
## [1] 58294 2
head(rownames(gse))
## [1] "ENSG00000000003.14" "ENSG00000000005.5" "ENSG00000000419.12"
## [4] "ENSG00000000457.13" "ENSG00000000460.16" "ENSG00000000938.12"
While the above section described the use of Salmon and tximeta, there are many possible inputs to DESeq2, each of which has its own dedicated import function. The following tools can be used to generate or compile count data for use with DESeq2: tximport (Soneson, Love, and Robinson 2015), tximeta (Love et al. 2020), htseq-count (Anders, Pyl, and Huber 2015), featureCounts (Liao, Smyth, and Shi 2014), and summarizeOverlaps (Lawrence et al. 2013).
| function | package | framework | output | DESeq2 input function |
|---|---|---|---|---|
| tximport | tximport | R/Bioconductor | list of matrices | DESeqDataSetFromTximport |
| tximeta | tximeta | R/Bioconductor | SummarizedExperiment | DESeqDataSet |
| htseq-count | HTSeq | Python | files | DESeqDataSetFromHTSeqCount |
| featureCounts | Rsubread | R/Bioconductor | matrix | DESeqDataSetFromMatrix |
| summarizeOverlaps | GenomicAlignments | R/Bioconductor | SummarizedExperiment | DESeqDataSet |
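To illustrate the tximport route from the first row of the table, the following sketch imports the two Salmon files already listed in coldata$files above; since we have not built a transcript-to-gene table here, txOut = TRUE keeps the quantifications at the transcript level, and the resulting object (named ddsTxi only for this demonstration) is not used in the rest of the workflow:
library("tximport")
library("DESeq2")
files <- setNames(coldata$files, coldata$names)
txi <- tximport(files, type = "salmon", txOut = TRUE)       # transcript-level import
ddsTxi <- DESeqDataSetFromTximport(txi,
                                   colData = coldata,
                                   design = ~ dex)          # counts plus length offsets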
We will next describe the class of object created by tximeta, which was saved above as se and gse, and how to create a DESeqDataSet object from this for use with DESeq2 (the other functions above also create a DESeqDataSet).
The component parts of a SummarizedExperiment object. The assay (pink block) contains the matrix of counts, the rowRanges (blue block) contains information about the genomic ranges, and the colData (green block) contains information about the samples. The highlighted line in each block represents the first row (note that the first row of colData lines up with the first column of the assay).
The SummarizedExperiment container is diagrammed in the Figure above
and discussed in the latest Bioconductor paper (Huber et al. 2015).
In our case, tximeta has created an object gse with three matrices: "counts" - the estimated fragment counts for each gene and sample, "abundance" - the estimated transcript abundances in TPM, and "length" - the effective gene lengths, which include changes in length due to biases as well as due to transcript usage. The names of the assays can be examined with assayNames, and the assays themselves are stored as assays (a list of matrices). The first matrix in the list can be pulled out via assay.

The rowRanges for our object is the GRanges of the genes (from the left-most position of all the transcripts to the right-most position of all the transcripts).

The component parts of the SummarizedExperiment are accessed with an R function of the same name: assay (or assays), rowRanges, and colData.
We now will load the full count matrix corresponding to all samples and all data, which is provided in the airway package, and will continue the analysis with the full data object.

We can investigate this SummarizedExperiment object by looking at the matrices in the assays slot, the phenotypic data about the samples in the colData slot, and the data about the genes in the rowRanges slot.
data(gse)
gse
## class: RangedSummarizedExperiment
## dim: 58294 8
## metadata(6): tximetaInfo quantInfo ... txomeInfo txdbInfo
## assays(3): counts abundance length
## rownames(58294): ENSG00000000003.14 ENSG00000000005.5 ...
## ENSG00000285993.1 ENSG00000285994.1
## rowData names(1): gene_id
## colnames(8): SRR1039508 SRR1039509 ... SRR1039520 SRR1039521
## colData names(3): names donor condition
The counts are the first matrix, so we can examine them with just assay:
assayNames(gse)
## [1] "counts" "abundance" "length"
head(assay(gse), 3)
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516
## ENSG00000000003.14 708.164 467.962 900.992 424.368 1188.295
## ENSG00000000005.5 0.000 0.000 0.000 0.000 0.000
## ENSG00000000419.12 455.000 510.000 604.000 352.000 583.000
## SRR1039517 SRR1039520 SRR1039521
## ENSG00000000003.14 1090.668 805.929 599.337
## ENSG00000000005.5 0.000 0.000 0.000
## ENSG00000000419.12 773.999 409.999 499.000
colSums(assay(gse))
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517 SRR1039520
## 21100805 19298584 26145537 15688246 25268618 31891456 19683767
## SRR1039521
## 21813903
The rowRanges, when printed, shows the ranges for the first five and last five genes:
rowRanges(gse)
## GRanges object with 58294 ranges and 1 metadata column:
## seqnames ranges strand | gene_id
## <Rle> <IRanges> <Rle> | <character>
## ENSG00000000003.14 chrX 100627109-100639991 - | ENSG00000000003.14
## ENSG00000000005.5 chrX 100584802-100599885 + | ENSG00000000005.5
## ENSG00000000419.12 chr20 50934867-50958555 - | ENSG00000000419.12
## ENSG00000000457.13 chr1 169849631-169894267 - | ENSG00000000457.13
## ENSG00000000460.16 chr1 169662007-169854080 + | ENSG00000000460.16
## ... ... ... ... . ...
## ENSG00000285990.1 chr14 19244904-19269380 - | ENSG00000285990.1
## ENSG00000285991.1 chr6 149817937-149896011 - | ENSG00000285991.1
## ENSG00000285992.1 chr8 47129262-47132628 + | ENSG00000285992.1
## ENSG00000285993.1 chr18 46409197-46410645 - | ENSG00000285993.1
## ENSG00000285994.1 chr10 12563151-12567351 + | ENSG00000285994.1
## -------
## seqinfo: 25 sequences (1 circular) from hg38 genome
The rowRanges also contains metadata about the sequences (chromosomes in our case) in the seqinfo slot:
seqinfo(rowRanges(gse))
## Seqinfo object with 25 sequences (1 circular) from hg38 genome:
## seqnames seqlengths isCircular genome
## chr1 248956422 FALSE hg38
## chr2 242193529 FALSE hg38
## chr3 198295559 FALSE hg38
## chr4 190214555 FALSE hg38
## chr5 181538259 FALSE hg38
## ... ... ... ...
## chr21 46709983 FALSE hg38
## chr22 50818468 FALSE hg38
## chrX 156040895 FALSE hg38
## chrY 57227415 FALSE hg38
## chrM 16569 TRUE hg38
The colData for the SummarizedExperiment reflects the data.frame that was provided to the tximeta function for importing the quantification data. Here we can see that there are columns indicating sample names, as well as the donor ID and the treatment condition (treated with dexamethasone or untreated).
colData(gse)
## DataFrame with 8 rows and 3 columns
## names donor condition
## <factor> <factor> <factor>
## SRR1039508 SRR1039508 N61311 Untreated
## SRR1039509 SRR1039509 N61311 Dexamethasone
## SRR1039512 SRR1039512 N052611 Untreated
## SRR1039513 SRR1039513 N052611 Dexamethasone
## SRR1039516 SRR1039516 N080611 Untreated
## SRR1039517 SRR1039517 N080611 Dexamethasone
## SRR1039520 SRR1039520 N061011 Untreated
## SRR1039521 SRR1039521 N061011 Dexamethasone
At this point, we have counted the fragments which overlap the genes in the gene model we specified. This is a branching point where we could use a variety of Bioconductor packages for exploration and differential expression of the count data, including edgeR (Robinson, McCarthy, and Smyth 2009), limma with the voom method (Law et al. 2014), DSS (Wu, Wang, and Wu 2013), EBSeq (Leng et al. 2013) and baySeq (Hardcastle and Kelly 2010). Schurch et al. (2016) compared performance of different statistical methods for RNA-seq using a large number of biological replicates and can help users to decide which tools make sense to use, and how many biological replicates are necessary to obtain a certain sensitivity. We will continue using DESeq2 (Love, Huber, and Anders 2014). The SummarizedExperiment object is all we need to start our analysis. In the following section we will show how to use it to create the data object used by DESeq2.
Bioconductor software packages often define and use a custom class for storing data that makes sure that all the needed data slots are consistently provided and fulfill the requirements. In addition, Bioconductor has general data classes (such as the SummarizedExperiment) that can be used to move data between packages. Additionally, the core Bioconductor classes provide useful functionality: for example, subsetting or reordering the rows or columns of a SummarizedExperiment automatically subsets or reorders the associated rowRanges and colData, which can help to prevent accidental sample swaps that would otherwise lead to spurious results. With SummarizedExperiment this is all taken care of behind the scenes.
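As a minimal illustration of this coordinated behavior with the gse object loaded above, subsetting to two samples keeps the assays, rowRanges, and colData in sync:
gse_sub <- gse[, c("SRR1039508", "SRR1039509")]   # keep two samples (columns)
dim(gse_sub)                                      # same genes, two columns
colData(gse_sub)                                  # only the two selected samples remain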
In DESeq2, the custom class is called DESeqDataSet. It is built on top of the SummarizedExperiment class, and it is easy to convert SummarizedExperiment objects into DESeqDataSet objects, which we show below. One of the two main differences is that the assay slot is instead accessed using the counts accessor function, and the DESeqDataSet class enforces that the values in this matrix are non-negative integers.
A second difference is that the DESeqDataSet has an associated design formula. The experimental design is specified at the beginning of the analysis, as it will inform many of the DESeq2 functions how to treat the samples in the analysis (one exception is the size factor estimation, i.e., the adjustment for differing library sizes, which does not depend on the design formula). The design formula tells which columns in the sample information table (colData) specify the experimental design and how these factors should be used in the analysis.
First, let's examine the columns of the colData of gse. We can see each of the columns just by using $ directly on the SummarizedExperiment or DESeqDataSet.
gse$donor
## [1] N61311 N61311 N052611 N052611 N080611 N080611 N061011 N061011
## Levels: N052611 N061011 N080611 N61311
gse$condition
## [1] Untreated Dexamethasone Untreated Dexamethasone Untreated
## [6] Dexamethasone Untreated Dexamethasone
## Levels: Untreated Dexamethasone
We can rename our variables if we want. Let's use cell to denote the donor cell line, and dex to denote the treatment condition.
gse$cell <- gse$donor
gse$dex <- gse$condition
We can also change the names of the levels. It is critical when renaming levels not to change their order. Here we will rename "Untreated" as "untrt" and "Dexamethasone" as "trt":
levels(gse$dex)
## [1] "Untreated" "Dexamethasone"
# when renaming levels, the order must be preserved!
levels(gse$dex) <- c("untrt", "trt")
The simplest design formula for differential expression would be ~ condition, where condition is a column in colData(dds) that specifies which of two (or more) groups the samples belong to. For the airway experiment, we will specify ~ cell + dex, meaning that we want to test for the effect of dexamethasone (dex) while controlling for the effect of the different cell lines (cell).
Note: it is preferred in R that the first level of a factor be the reference level (e.g. control, or untreated samples). In this case, when the colData table was assembled, the untreated samples were already set as the reference, but if this were not the case we could use relevel as shown below. While levels(...) <- above was simply for renaming the character strings associated with levels, relevel is a very different function: it decides how the variables will be coded and how contrasts will be computed. For a two-group comparison, using relevel to change the reference level would flip the sign of the coefficient associated with the contrast between the two groups.
library("magrittr")
gse$dex %<>% relevel("untrt")
gse$dex
## [1] untrt trt untrt trt untrt trt untrt trt
## Levels: untrt trt
%<>% is the compound assignment pipe operator from the magrittr package; the above line of code is a concise way of saying:
gse$dex <- relevel(gse$dex, "untrt")
For running DESeq2 models, you can use R's formula notation to express any fixed-effects experimental design. Note that DESeq2 uses the same formula notation as, for instance, the lm function of base R. If the research aim is to determine for which genes the effect of treatment is different across groups, then interaction terms can be included and tested using a design such as ~ group + treatment + group:treatment. See the manual page for ?results for more examples. We will show how to use an interaction term to test for condition-specific changes over time in a time course example below.
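As a small illustration of such an interaction design, the following sketch uses a simulated dataset created with DESeq2's makeExampleDESeqDataSet; the group and treatment factors and their level names are invented for this example only:
library("DESeq2")
set.seed(1)
# simulate a small dataset: 1000 genes, 12 samples, then attach invented factors
dds_ex <- makeExampleDESeqDataSet(n = 1000, m = 12)
dds_ex$group <- factor(rep(c("X", "Y"), each = 6))
dds_ex$treatment <- factor(rep(rep(c("ctrl", "trt"), each = 3), times = 2))
design(dds_ex) <- ~ group + treatment + group:treatment
dds_ex <- DESeq(dds_ex)
resultsNames(dds_ex)                  # the final coefficient is the interaction term
# does the treatment effect differ between groups X and Y?
res_int <- results(dds_ex, name = tail(resultsNames(dds_ex), 1))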
In the following sections, we will demonstrate the construction of the DESeqDataSet from two starting points: first, from the SummarizedExperiment object created by tximeta, and second, more generally, from a count matrix and a sample information table that have been loaded into R.
For a full example of using the HTSeq Python package for read counting, please see the pasilla vignette. For an example of generating the DESeqDataSet from files produced by htseq-count, please see the DESeq2 vignette.
Again, we can quickly check the millions of fragments that could be mapped by Salmon to the genes (the second argument of round tells how many decimal points to keep).
round( colSums(assay(gse)) / 1e6, 1 )
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517 SRR1039520
## 21.1 19.3 26.1 15.7 25.3 31.9 19.7
## SRR1039521
## 21.8
Once we have our fully annotated SummarizedExperiment object, we can construct a DESeqDataSet object from it that will then form the starting point of the analysis. We add an appropriate design for the analysis:
library("DESeq2")
dds <- DESeqDataSet(gse, design = ~ cell + dex)
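As noted above, the counts accessor (rather than assay) retrieves the count matrix from a DESeqDataSet; for example:
head(counts(dds), 3)   # the counts accessor retrieves the count matrix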
In this section, we will show how to build a DESeqDataSet supposing we only have a count matrix and a table of sample information.
Note: if you have prepared a SummarizedExperiment, you should skip this section. While the previous section would be used to construct a DESeqDataSet from a SummarizedExperiment, here we first extract the individual objects (count matrix and sample information) from the SummarizedExperiment in order to build it back up into a new object – only for demonstration purposes. In practice, the count matrix would either be read in from a file or perhaps generated by an R function like featureCounts from the Rsubread package (Liao, Smyth, and Shi 2014).
The information in a SummarizedExperiment object can be accessed with accessor functions. For example, to see the actual data, i.e., here, the fragment counts, we use the assay function. (The head function restricts the output to the first few lines.)
countdata <- round(assays(gse)[["counts"]])
head(countdata, 3)
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516
## ENSG00000000003.14 708 468 901 424 1188
## ENSG00000000005.5 0 0 0 0 0
## ENSG00000000419.12 455 510 604 352 583
## SRR1039517 SRR1039520 SRR1039521
## ENSG00000000003.14 1091 806 599
## ENSG00000000005.5 0 0 0
## ENSG00000000419.12 774 410 499
In this count matrix, each row represents a gene, each column a sequenced RNA library, and the values give the estimated counts of fragments that were probabilistically assigned to the respective gene in each library by Salmon. We also have information on each of the samples (the columns of the count matrix). If you’ve imported the count data in some other way, for example loading a pre-computed count matrix, it is very important to check manually that the columns of the count matrix correspond to the rows of the sample information table.
coldata <- colData(gse)
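As a quick sanity check on the correspondence just mentioned (a small addition for illustration):
# stops with an error if the sample order does not match the count matrix columns
stopifnot(all(colnames(countdata) == rownames(coldata)))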
We now have all the ingredients to prepare our data object in a form that is suitable for analysis, namely:

- countdata: a table with the fragment counts
- coldata: a table with information about the samples

To now construct the DESeqDataSet object from the matrix of counts and the sample information table, we use:
ddsMat <- DESeqDataSetFromMatrix(countData = countdata,
colData = coldata,
design = ~ cell + dex)
We will continue with the object generated from the SummarizedExperiment section.
There are two separate paths in this workflow; the one we will see first involves transformations of the counts in order to visually explore sample relationships. In the second part, we will go back to the original raw counts for statistical testing. This is critical because the statistical testing methods rely on original count data (not scaled or transformed) for calculating the precision of measurements.
Our count matrix with our DESeqDataSet contains many rows with only zeros, and additionally many rows with only a few fragments total. In order to reduce the size of the object, and to increase the speed of our functions, we can remove the rows that have no or nearly no information about the amount of gene expression. Here we perform pre-filtering to keep only rows that have a count of at least 10 for a minimal number of samples. The count of 10 is a reasonable choice for bulk RNA-seq. A recommendation for the minimal number of samples is to specify the smallest group size, e.g. here there are 4 samples in each group. If there are not discrete groups, one can use the minimal number of samples where non-zero counts would be considered interesting. Additional weighting/filtering to improve power is applied at a later step in the workflow.
nrow(dds)
## [1] 58294
smallestGroupSize <- 4
keep <- rowSums(counts(dds) >= 10) >= smallestGroupSize
dds <- dds[keep,]
nrow(dds)
## [1] 16637
Many common statistical methods for exploratory analysis of multidimensional data, for example clustering and principal components analysis (PCA), work best for data that generally has the same range of variance at different ranges of the mean values. When the expected amount of variance is approximately the same across different mean values, the data is said to be homoskedastic. For RNA-seq counts, however, the expected variance grows with the mean. For example, if one performs PCA directly on a matrix of counts or normalized counts (e.g. correcting for differences in sequencing depth), the resulting plot typically depends mostly on the genes with highest counts because they show the largest absolute differences between samples. A simple and often used strategy to avoid this is to take the logarithm of the normalized count values plus a pseudocount of 1; however, depending on the choice of pseudocount, now the genes with the very lowest counts will contribute a great deal of noise to the resulting plot, because taking the logarithm of small counts actually inflates their variance. We can quickly show this property of counts with some simulated data (here, Poisson counts with a range of lambda from 0.1 to 100). We plot the standard deviation of each row (genes) against the mean:
lambda <- 10^seq(from = -1, to = 2, length = 1000)
cts <- matrix(rpois(1000*100, lambda), ncol = 100)
library("vsn")
meanSdPlot(cts, ranks = FALSE)
And for logarithm-transformed counts:
log.cts.one <- log2(cts + 1)
meanSdPlot(log.cts.one, ranks = FALSE)
The logarithm with a small pseudocount amplifies differences when the values are close to 0. The low count genes with low signal-to-noise ratio will overly contribute to sample-sample distances and PCA plots.
As a solution, DESeq2 offers two transformations for count data that stabilize the variance across the mean: the variance stabilizing transformation (VST) for negative binomial data with a dispersion-mean trend (Anders and Huber 2010), implemented in the vst function, and the regularized-logarithm transformation or rlog (Love, Huber, and Anders 2014).
For genes with high counts, both the VST and the rlog will give similar result to the ordinary log2 transformation of normalized counts. For genes with lower counts, however, the values are shrunken towards a middle value. The VST or rlog-transformed data then become approximately homoskedastic (more flat trend in the meanSdPlot), and can be used directly for computing distances between samples, making PCA plots, or as input to downstream methods which perform best with homoskedastic data.
Which transformation to choose? The VST is much faster to compute and is less sensitive to high-count outliers than the rlog. The rlog tends to work well on small datasets (n < 30), potentially outperforming the VST when there is a wide range of sequencing depth across samples (an order of magnitude difference). We therefore recommend the VST for medium-to-large datasets (n > 30). You can perform both transformations and compare the meanSdPlot or PCA plots generated, as described below.
Note that the two transformations offered by DESeq2 are provided for applications other than differential testing. For differential testing we recommend the DESeq function applied to raw counts, as described later in this workflow, which also takes into account the dependence of the variance of counts on the mean value during the dispersion estimation step.
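As a brief preview of that testing path (run on a copy here so that the exploratory objects used below are unaffected; the full analysis is described later in the workflow):
dds_test <- DESeq(dds)            # model fitting on the raw (filtered) counts
res_preview <- results(dds_test)
summary(res_preview)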
Both vst and rlog return a DESeqTransform object which is based on the SummarizedExperiment class. The transformed values are no longer counts, and are stored in the assay slot. The colData that was attached to dds is still accessible:
vsd <- vst(dds, blind = FALSE)
head(assay(vsd), 3)
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516
## ENSG00000000003.14 10.082167 9.828058 10.152774 9.970690 10.410407
## ENSG00000000419.12 9.663489 9.900634 9.779942 9.775148 9.740478
## ENSG00000000457.13 9.417376 9.280001 9.333639 9.430438 9.250501
## SRR1039517 SRR1039520 SRR1039521
## ENSG00000000003.14 10.171994 10.300682 9.976235
## ENSG00000000419.12 9.848458 9.659469 9.817695
## ENSG00000000457.13 9.363013 9.450889 9.444291
colData(vsd)
## DataFrame with 8 rows and 5 columns
## names donor condition cell dex
## <factor> <factor> <factor> <factor> <factor>
## SRR1039508 SRR1039508 N61311 Untreated N61311 untrt
## SRR1039509 SRR1039509 N61311 Dexamethasone N61311 trt
## SRR1039512 SRR1039512 N052611 Untreated N052611 untrt
## SRR1039513 SRR1039513 N052611 Dexamethasone N052611 trt
## SRR1039516 SRR1039516 N080611 Untreated N080611 untrt
## SRR1039517 SRR1039517 N080611 Dexamethasone N080611 trt
## SRR1039520 SRR1039520 N061011 Untreated N061011 untrt
## SRR1039521 SRR1039521 N061011 Dexamethasone N061011 trt
Again, for the rlog:
rld <- rlog(dds, blind = FALSE)
head(assay(rld), 3)
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516
## ENSG00000000003.14 9.479269 9.172988 9.561792 9.347935 9.853759
## ENSG00000000419.12 8.856681 9.150490 9.003390 8.997381 8.954032
## ENSG00000000457.13 8.352320 8.165586 8.239148 8.368679 8.122726
## SRR1039517 SRR1039520 SRR1039521
## ENSG00000000003.14 9.584201 9.730416 9.353458
## ENSG00000000419.12 9.088130 8.851863 9.049900
## ENSG00000000457.13 8.279323 8.396402 8.387635
In the above function calls, we specified blind = FALSE, which means that differences between cell lines and treatment (the variables in the design) will not contribute to the expected variance-mean trend of the experiment. The experimental design is not used directly in the transformation, only in estimating the global amount of variability in the counts. For a fully unsupervised transformation, one can set blind = TRUE (which is the default).
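Following the earlier suggestion to compare the transformations, a short sketch (reusing the vsn package loaded above for the simulated counts) contrasts the mean-sd trend of the VST and rlog values:
library("vsn")
meanSdPlot(assay(vsd), ranks = FALSE)   # mean-sd trend after the VST
meanSdPlot(assay(rld), ranks = FALSE)   # mean-sd trend after the rlog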
To show the effect of the transformation, in the figure below we plot the first sample against the second, first simply using the log2 function (after adding 1, to avoid taking the log of zero), and then using the VST- and rlog-transformed values. For the log2 approach, we need to first estimate size factors to account for sequencing depth, and then specify normalized=TRUE. Sequencing depth correction is done automatically for the vst and rlog.
library("dplyr")
library("ggplot2")
dds <- estimateSizeFactors(dds)
df <- bind_rows(
as_data_frame(log2(counts(dds, normalized=TRUE)[, 1:2]+1)) %>%
mutate(transformation = "log2(x + 1)"),
as_data_frame(assay(vsd)[, 1:2]) %>% mutate(transformation = "vst"),
as_data_frame(assay(rld)[, 1:2]) %>% mutate(transformation = "rlog"))
colnames(df)[1:2] <- c("x", "y")
lvls <- c("log2(x + 1)", "vst", "rlog")
df$transformation <- factor(df$transformation, levels=lvls)
ggplot(df, aes(x = x, y = y)) + geom_hex(bins = 80) +
coord_fixed() + facet_grid( . ~ transformation)
Scatterplot of transformed counts from two samples. Shown are scatterplots using the log2 transform of normalized counts (left), using the VST (middle), and using the rlog (right). While the rlog is on roughly the same scale as the log2 counts, the VST has an upward shift for the smaller values. It is the differences between samples (deviation from y=x in these scatterplots) which will contribute to the distance calculations and the PCA plot.
We can see how genes with low counts (bottom left-hand corner) seem to be excessively variable on the ordinary logarithmic scale, while the VST and rlog compress differences for the low count genes for which the data provide little information about differential expression.
A useful first step in an RNA-seq analysis is often to assess overall similarity between samples: Which samples are similar to each other, which are different? Does this fit to the expectation from the experiment’s design?
We use the R function dist to calculate the Euclidean distance between samples. To ensure we have a roughly equal contribution from all genes, we use it on the VST data. We need to transpose the matrix of values using t, because the dist function expects the different samples to be rows of its argument, and different dimensions (here, genes) to be columns.
sampleDists <- dist(t(assay(vsd)))
sampleDists
## SRR1039508 SRR1039509 SRR1039512 SRR1039513 SRR1039516 SRR1039517
## SRR1039509 37.77776
## SRR1039512 30.25948 43.38127
## SRR1039513 49.66501 34.84097 40.18573
## SRR1039516 33.62914 46.11028 32.89674 50.61885
## SRR1039517 50.13559 39.95782 45.59332 38.73688 38.49412
## SRR1039520 29.92878 45.27463 27.72834 46.25456 34.97574 48.96752
## SRR1039521 50.08387 35.35988 45.51074 28.48458 51.19216 39.62934
## SRR1039520
## SRR1039509
## SRR1039512
## SRR1039513
## SRR1039516
## SRR1039517
## SRR1039520
## SRR1039521 41.45015
We visualize the distances in a heatmap in a figure below, using the function pheatmap from the pheatmap package.
library("pheatmap")
library("RColorBrewer")
In order to plot the sample distance matrix with the rows/columns arranged by the distances in our distance matrix, we manually provide sampleDists to the clustering_distance arguments of the pheatmap function. Otherwise the pheatmap function would assume that the matrix contains the data values themselves, and would calculate distances between the rows/columns of the distance matrix, which is not desired. We also manually specify a blue color palette using the colorRampPalette function from the RColorBrewer package.
sampleDistMatrix <- as.matrix( sampleDists )
rownames(sampleDistMatrix) <- paste( vsd$dex, vsd$cell, sep = " - " )
colnames(sampleDistMatrix) <- NULL
colors <- colorRampPalette( rev(brewer.pal(9, "Blues")) )(255)
pheatmap(sampleDistMatrix,
clustering_distance_rows = sampleDists,
clustering_distance_cols = sampleDists,
col = colors)
Heatmap of sample-to-sample distances using the variance stabilizing transformed values.
Note that we have changed the row names of the distance matrix to contain treatment type and patient number instead of sample ID, so that we have all this information in view when looking at the heatmap.
Another option for calculating sample distances is to use the Poisson Distance (Witten 2011), implemented in the PoiClaClu package. This measure of dissimilarity between counts also takes the inherent variance structure of counts into consideration when calculating the distances between samples. The PoissonDistance function takes the original count matrix (not normalized) with samples as rows instead of columns, so we need to transpose the counts in dds.
library("PoiClaClu")
poisd <- PoissonDistance(t(counts(dds)))
We plot the heatmap in a Figure below.
samplePoisDistMatrix <- as.matrix( poisd$dd )
rownames(samplePoisDistMatrix) <- paste( dds$dex, dds$cell, sep=" - " )
colnames(samplePoisDistMatrix) <- NULL
pheatmap(samplePoisDistMatrix,
clustering_distance_rows = poisd$dd,
clustering_distance_cols = poisd$dd,
col = colors)
Heatmap of sample-to-sample distances using the Poisson Distance.
Another way to visualize sample-to-sample distances is a principal components analysis (PCA). In this ordination method, the data points (here, the samples) are projected onto the 2D plane such that they spread out in the two directions that explain most of the differences (figure below). The x-axis is the direction that separates the data points the most. The values of the samples in this direction are written PC1. The y-axis is a direction (it must be orthogonal to the first direction) that separates the data the second most. The values of the samples in this direction are written PC2. The percent of the total variance that is contained in the direction is printed in the axis label. Note that these percentages do not add to 100%, because there are more dimensions that contain the remaining variance (although each of these remaining dimensions will explain less than the two that we see).
plotPCA(vsd, intgroup = c("dex", "cell"))