1 Introduction

Identifying important transcription factor (TF) motifs, as shown in the main vignette, can also be done using a regression-based approach, in which motifs have to compete against each other for selection. In this framework, the response vector can be the observed experimental measure of interest, e.g. log-fold changes of accessibility for a set of regions, and the predictors consist of the TF motif hits across those regions. In monaLisa, we implement the randomized lasso stability selection proposed by Meinshausen and Bühlmann (2010) with the improved error bounds introduced by Shah and Samworth (2013). To implement the randomized lasso, we have modified the stabs::glmnet.lasso function used by stabs::stabsel.

Lasso stability selection performs lasso regression multiple times on subsamples of the data and returns a selection probability for each predictor (the number of times it was selected divided by the number of regressions performed). With the randomized lasso, a weakness parameter is additionally used to vary the lasso penalty term \(\lambda\) to a randomly chosen value in the range [\(\lambda\), \(\lambda\)/weakness] for each predictor. This type of regularization has advantages when the number of predictors exceeds the number of observations: it selects variables more consistently, offers better error control, and does not depend strongly on the penalization parameter (Meinshausen and Bühlmann 2010).
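Conceptually, the per-predictor penalty randomization can be sketched in a few lines of base R. This is an illustration only, not monaLisa's actual implementation (internally, per-predictor penalties can be passed to glmnet via its penalty.factor argument):

```r
## Illustrative sketch of the randomized-lasso penalty reweighting,
## not monaLisa's actual implementation.
set.seed(42)
p <- 10            # number of predictors
weakness <- 0.8    # weakness parameter in (0, 1]
lambda <- 0.5      # shared lasso penalty

## For each predictor, draw a random weight in [weakness, 1]; dividing
## the shared penalty lambda by this weight yields a per-predictor
## penalty in [lambda, lambda/weakness].
w <- runif(p, min = weakness, max = 1)
perPredictorPenalty <- lambda / w
range(perPredictorPenalty)
```

Because each subsample of the data draws fresh weights, predictors whose selection depends on a particular penalization are selected less consistently across runs, which is what the stability selection exploits.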

With this approach, TF motifs compete against each other to explain the response vector, and we can also include additional predictors like GC content to compete against the TF motifs for selection. This is especially useful if the response is biased by sequence composition, for example if regions with higher GC content tend to have higher response values.

It is worth noting that, as with any regression analysis, the interpretability of the results depends strongly on the quality of the predictors. Hence, increasing the size of the motif database is not, in itself, a guarantee of more interpretable results, since the added motifs may be unrelated to the signal of interest. In addition, as discussed in section 2.3.1, a high level of redundancy, resulting in strong correlations among the motifs, may lead to more ambiguous selection probabilities in the regression approach. The same holds for the binned approach: although the motifs are evaluated independently for association with the outcome, a high degree of redundancy can lead to large collections of very similar motifs showing significant enrichments, complicating the interpretation of the results.

2 Motif selection with Randomized Lasso Stability Selection

In the example below, we select TF motifs that explain log-fold changes in chromatin accessibility (ATAC-seq) across enhancers between mouse liver and lung tissue at P0; the same approach can be applied to other data types as well (ChIP-seq, RNA-seq, methylation, etc.). Positive log2-fold changes indicate greater accessibility in the liver tissue, whereas negative values indicate greater accessibility in the lung tissue.

2.1 Load packages

We start by loading the needed packages:


2.2 Load dataset

In this example we use a dataset from ENCODE (The ENCODE Project Consortium 2012), available in monaLisa, in which ATAC-seq reads were quantified on enhancers in mouse P0 lung and liver tissues. The log2-fold change (our response vector in this example) is for liver vs lung chromatin accessibility. We use a set of 10,000 randomly sampled enhancers to illustrate how randomized lasso stability selection can be used to select TF motifs.

# load GRanges object with logFC and peaks
gr_path <- system.file("extdata", "atac_liver_vs_lung.rds", 
                       package = "monaLisa")
gr <- readRDS(gr_path)

2.3 Get TFBS per motif and peak

We will now construct the transcription factor binding site (TFBS) matrix for known motifs (from a database like JASPAR2020) in the given peak regions. We use the findMotifHits function to scan for TF motif hits. This matrix will be the predictor matrix in our regression. This step may take a while, and it may be useful to parallelize it using the BPPARAM argument (e.g. to run on n parallel threads using the multi-core backend, you can use: findMotifHits(..., BPPARAM = BiocParallel::MulticoreParam(n))).

As mentioned, this framework offers the flexibility to add additional predictors to compete against the TF motifs for selection. Here, we add the fraction of G+C and CpG observed/expected ratio as predictors, to ensure that selected TF motifs are not just detecting a simple trend in GC or CpG composition.

# get PFMs (vertebrate TFs from Jaspar)
pfms <- getMatrixSet(JASPAR2020, list(matrixtype = "PFM", 
                                      tax_group = "vertebrates"))

# randomly sample 300 PFMs for illustration purposes (for quick runtime)
pfms <- pfms[sample(length(pfms), size = 300)]

# convert PFMs to PWMs
pwms <- toPWM(pfms)

# get TFBS on given GRanges (peaks)
# suppress warnings generated by matchPWM due to the presence of Ns 
# in the sequences
peakSeq <- getSeq(BSgenome.Mmusculus.UCSC.mm10, gr)
hits <- findMotifHits(query = pwms, subject = peakSeq, min.score = 10.0,
                      BPPARAM = BiocParallel::SerialParam())

# get TFBS matrix
TFBSmatrix <- unclass(table(factor(seqnames(hits), levels = seqlevels(hits)),
                            factor(hits$pwmname, levels = name(pwms))))
TFBSmatrix[1:6, 1:6]
#>               NR3C2 Arnt LHX1 SNAI1 MAFG ZSCAN4
#>   peak_51663      0    0    0     0    0      0
#>   peak_57870      0    0    0     0    0      0
#>   peak_2986       2    0    0     0    0      0
#>   peak_124022     0    0    0     0    0      0
#>   peak_29925      0    2    0     1    0      0
#>   peak_95246      0    0    0     0    0      0

# remove TF motifs with 0 binding sites in all regions
zero_TF <- colSums(TFBSmatrix) == 0
sum(zero_TF)
#> [1] 2
TFBSmatrix <- TFBSmatrix[, !zero_TF]

# calculate G+C and CpG obs/expected
fMono <- oligonucleotideFrequency(peakSeq, width = 1L, as.prob = TRUE)
fDi <- oligonucleotideFrequency(peakSeq, width = 2L, as.prob = TRUE)
fracGC <- fMono[, "G"] + fMono[, "C"]
oeCpG <- (fDi[, "CG"] + 0.01) / (fMono[, "G"] * fMono[, "C"] + 0.01)

# add GC and oeCpG to predictor matrix
TFBSmatrix <- cbind(fracGC, oeCpG, TFBSmatrix)
TFBSmatrix[1:6, 1:6]
#>                fracGC     oeCpG NR3C2 Arnt LHX1 SNAI1
#> peak_51663  0.5155709 0.4079115     0    0    0     0
#> peak_57870  0.4963235 0.3048298     0    0    0     0
#> peak_2986   0.4008264 0.3103806     2    0    0     0
#> peak_124022 0.4572650 0.4429813     0    0    0     0
#> peak_29925  0.4675000 0.3495939     0    2    0     1
#> peak_95246  0.5144509 0.4020976     0    0    0     0

2.3.1 A note on collinearity

At this point it is useful to get an overall feeling for the collinearity structure in the TFBS matrix. Motifs that share many similar binding sites across the peaks will be highly correlated. High collinearity between predictors is a well-known problem in linear regression, and it manifests particularly in lasso regression: if several variables are equally highly correlated with the response, not all of them are co-selected, even when they are all signal variables. Instead, one is arbitrarily chosen while the others' coefficients are set to zero, the rationale being that the non-selected correlated predictors provide little additional information to explain the response. It is good to be aware of these properties of regression, and, when interpreting the results, to place more weight on the meaning of the selected motif itself rather than on the specific TF name.
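As a quick diagnostic, one could inspect pairwise correlations between predictor columns. The sketch below uses a small simulated count matrix in place of the real TFBSmatrix (the motif names and the 0.9 threshold are arbitrary choices for illustration):

```r
## Simulated stand-in for the TFBS matrix (rows = peaks, columns = motifs);
## motifB is constructed to be nearly identical to motifA.
set.seed(1)
motifA <- rpois(100, lambda = 1)
mat <- cbind(motifA = motifA,
             motifB = motifA + rbinom(100, size = 1, prob = 0.05),
             motifC = rpois(100, lambda = 1))

## Pairwise Pearson correlations between predictors
cc <- cor(mat)

## Flag motif pairs with correlation above an (arbitrary) 0.9 threshold
highCor <- which(cc > 0.9 & upper.tri(cc), arr.ind = TRUE)
highCor
```

On the real TFBS matrix, the same two lines applied to the hit counts would reveal groups of near-redundant motifs that may be worth collapsing before running the regression.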

If many cases of high correlation exist and this is a concern, one may consider selecting a representative set of predictors to use. This may, for example, be achieved by clustering the weight matrices beforehand and running the regression with only one representative motif per cluster, using tools such as RSAT (Castro-Mondragon et al. 2017). RSAT-derived clusters of JASPAR weight matrices can be found at https://jaspar.genereg.net/matrix-clusters/vertebrates/.

If the user is interested in working with all correlated motifs, the binned approach is preferable as the motifs are independently tested for significance (see the binned enrichment vignette). In the regression-based approach on the other hand, we can more clearly understand the relative contributions of TF motifs to the response in the context of each other.

2.4 Identify important TFs

We can now run randomized lasso stability selection to identify TFs that are likely to explain the log-fold changes in accessibility. The exact choice of parameter values for this approach will depend largely on how stringent the user wishes to be and how much signal there is in the data. For example, for more stringent selection, one may decrease the value of the weakness parameter, which makes it harder for a variable to be selected. The user controls the number of false discoveries with the PFER parameter, which bounds the expected number of falsely selected variables. As for the selection probability cutoff, Meinshausen and Bühlmann (2010) argue that values in the range [0.6, 0.9] should give similar results. See the randLassoStabSel function for more details and default parameter values, as well as the stabs::stabsel function for the default assumptions of the stability selection implementation by Shah and Samworth (2013).
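To build some intuition for how these parameters interact, the original Meinshausen and Bühlmann (2010) bound E(V) <= q^2 / ((2 * cutoff - 1) * p) can be rearranged to give the number q of variables selected per subsample that is compatible with a tolerated PFER. Note that this is only an illustration of the original bound; stabs::stabsel uses the tighter Shah and Samworth bounds by default, so the q it derives internally will differ:

```r
## Illustrative calculation based on the Meinshausen-Buehlmann (2010)
## error bound E(V) <= q^2 / ((2 * cutoff - 1) * p).
p <- 300        # number of predictors (motifs plus GC/CpG covariates)
cutoff <- 0.8   # selection probability cutoff
PFER <- 2       # tolerated expected number of falsely selected variables

## Largest per-subsample selection size q satisfying the bound
q <- floor(sqrt(PFER * (2 * cutoff - 1) * p))
q
```

Increasing the cutoff or the tolerated PFER allows a larger q, i.e. more variables may enter each subsample's lasso fit while keeping the same error guarantee.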

## randLassoStabSel() is stochastic, so if we run with parallelization
##   (`mc.cores` argument), we must select a random number generator that can
##   provide multiple streams of random numbers used in the `parallel` package
##   and set its seed for reproducible results
# RNGkind("L'Ecuyer-CMRG")
# set.seed(123)
# se <- randLassoStabSel(x = TFBSmatrix, y = gr$logFC_liver_vs_lung, 
#                        cutoff = 0.8, mc.preschedule = TRUE, 
#                        mc.set.seed = TRUE, mc.cores = 2L)

# if not running in parallel mode (`mc.cores = 1`), it is enough to use 
#   set.seed() before the call to ensure reproducibility
se <- randLassoStabSel(x = TFBSmatrix, y = gr$logFC_liver_vs_lung, 
                       cutoff = 0.8)
se
#> class: SummarizedExperiment 
#> dim: 10000 300 
#> metadata(12): stabsel.params.cutoff stabsel.params.selected ...
#>   stabsel.params.call randStabsel.params.weakness
#> assays(1): x
#> rownames(10000): peak_51663 peak_57870 ... peak_98880 peak_67984
#> rowData names(1): y
#> colnames(300): fracGC oeCpG ... CLOCK OLIG2
#> colData names(20): selProb selected ... regStep16 regStep17

# selected TFs
colnames(se)[se$selected]
#>  [1] "NKX2-5"       "GATA1::TAL1"  "HNF1B"        "HNF4A(var.2)" "Nr2f6"       
#>  [6] "ONECUT3"      "MYF5"         "THRB"         "ISL2"         "NR2C2"       
#> [11] "TEAD3"        "TEAD4"        "GATA3"        "RORA"         "NFIC"        
#> [16] "ZEB1"

The stability paths visualize how predictors get selected over decreasing regularization stringency (from left to right):
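Assuming the se object fitted above, the paths can be drawn with monaLisa's plotStabilityPaths function (shown as a sketch; see ?plotStabilityPaths for available options):

```r
# plot selection probabilities across regularization steps for the
# stability selection result computed above
plotStabilityPaths(se)
```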