PhenoPath models gene expression \(y\) in terms of a latent pathway score (pseudotime) \(z\). Uniquely, the evolution of a gene along the trajectory is not common to all samples but can be perturbed by sample-specific covariates \(x\) through interaction coefficients \(\beta\). For example, the covariate could be the mutational status of each sample or a drug that each sample was exposed to.
This software infers both the latent pathway scores \(z_n\) and interaction coefficients \(\beta_{pg}\) for samples \(n = 1, \ldots, N\), covariates \(p = 1, \ldots, P\) and genes \(g = 1, \ldots, G\).
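Schematically, for a single covariate (\(P = 1\)) and omitting gene-specific intercepts, the mean expression combines a covariate main effect \(\alpha_g\), a pathway loading \(\lambda_g\) and the covariate-pathway interaction \(\beta_g\); this is a sketch only (see the PhenoPath package documentation for the full specification, including priors and the multi-covariate case where the \(\alpha\) and \(\beta\) terms are summed over \(p\)):
\[ y_{ng} \sim \mathcal{N}\!\left( \alpha_g x_n + (\lambda_g + \beta_g x_n)\, z_n,\ \sigma_g^2 \right), \]
where \(\sigma_g^2\) denotes a gene-specific noise variance.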
Inference is performed using coordinate ascent mean-field variational inference. This attempts to find a set of approximating distributions \(q(\mathbf{\theta}) = \prod_i q_i(\theta_i)\), one factor \(q_i\) per latent variable \(\theta_i\), by minimising the KL divergence between the variational distribution and the true posterior. For a good overview of variational inference see Blei, Kucukelbir, and McAuliffe (2016).
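Since the marginal likelihood \(\log p(\mathbf{y})\) is a fixed constant, minimising this KL divergence is equivalent to maximising the evidence lower bound (ELBO) that is reported during fitting:
\[ \log p(\mathbf{y}) = \mathrm{ELBO}(q) + \mathrm{KL}\big(q(\mathbf{\theta}) \,\|\, p(\mathbf{\theta} \mid \mathbf{y})\big), \qquad \mathrm{ELBO}(q) = \mathbb{E}_{q}\big[\log p(\mathbf{y}, \mathbf{\theta}) - \log q(\mathbf{\theta})\big]. \]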
We can simulate data according to the PhenoPath model via a call to simulate_phenopath():
set.seed(123L)
sim <- simulate_phenopath()
This returns a list with four entries:
print(str(sim))
## List of 4
## $ parameters:Classes 'tbl_df', 'tbl' and 'data.frame': 40 obs. of 4 variables:
## ..$ alpha : num [1:40] 1 1 1 -1 1 1 1 -1 1 1 ...
## ..$ lambda: num [1:40] 0 0 0 0 0 0 0 0 0 0 ...
## ..$ beta : num [1:40] 0 0 0 0 0 0 0 0 0 0 ...
## ..$ regime: chr [1:40] "de" "de" "de" "de" ...
## $ y : num [1:100, 1:40] 1.001 0.999 1.001 1.002 0.999 ...
## $ x : num [1:100] 1 1 1 1 1 1 1 1 1 1 ...
## $ z : num [1:100] -0.713 -0.3512 1.6085 -0.0218 0.0426 ...
## NULL
- parameters is a data frame with the simulated parameters, with a column for each of the parameters \(\alpha\), \(\beta\) and \(\lambda\), and a row for each gene. There is an additional column specifying from which regime that gene has been simulated (see ?simulate_phenopath for details).
- y is the \(N \times G\) matrix of gene expression
- x is the \(N\)-length vector of covariates
- z is the vector of true latent pseudotimes

By default this simulates the model for \(N = 100\) cells and \(G = 40\) genes.
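As a quick check of these shapes:
dim(sim$y)            # 100 cells by 40 genes
length(sim$x)         # one covariate value per cell
length(sim$z)         # one true pseudotime per cell
nrow(sim$parameters)  # one row of parameters per gene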
For 8 representative genes we can visualise what the expression looks like over pseudotime:
genes_to_extract <- c(1,3,11,13,21,23,31,33)
expression_df <- as.data.frame(sim$y[,genes_to_extract])
names(expression_df) <- paste0("gene_", genes_to_extract)
df_gex <- as_tibble(expression_df) %>%
mutate(x = factor(sim[['x']]), z = sim[['z']]) %>%
gather(gene, expression, -x, -z)
ggplot(df_gex, aes(x = z, y = expression, color = x)) +
geom_point() +
facet_wrap(~ gene, nrow = 2) +
scale_color_brewer(palette = "Set1")
We see that for the first two genes there is differential expression only, genes 3 and 4 show a pseudotime dependence, genes 5 and 6 show pseudotime-covariate interactions (but marginally no differential expression), while genes 7 and 8 show differential expression, pseudotime dependence and pseudotime-covariate interactions.
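To relate these regimes back to the simulation, we can inspect the simulated parameters of the eight genes we extracted (output not shown):
sim$parameters[genes_to_extract, ]  # which of alpha, lambda and beta are non-zero for each gene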
We can further plot this in PCA space, coloured by both covariate and pseudotime to get an idea of how these look in the reduced dimension:
pca_df <- as_tibble(as.data.frame(prcomp(sim$y)$x[,1:2])) %>%
mutate(x = factor(sim[['x']]), z = sim[['z']])
ggplot(pca_df, aes(x = PC1, y = PC2, color = x)) +
geom_point() + scale_colour_brewer(palette = "Set1")
ggplot(pca_df, aes(x = PC1, y = PC2, color = z)) +
geom_point()
PhenoPath fits models with a call to the phenopath function, which requires at least an expression matrix y and a covariate vector x. The expression data should represent something comparable to logged counts, such as \(\log_2(\text{TPM}+1)\). Note that by default PhenoPath centre-scales the input expression.
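For example, starting from a hypothetical matrix tpm of transcripts-per-million estimates (cells in rows, genes in columns), a suitable input would be:
y <- log2(tpm + 1)  # expression on a scale comparable to logged counts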
For use with SummarizedExperiments see the section on input formats below. For this example we choose an ELBO tolerance of 1e-6 and ELBO calculations thinned by 40. A discussion of how to control variational inference can be found below.
fit <- phenopath(sim$y, sim$x, elbo_tol = 1e-6, thin = 40)
## Iteration ELBO Change (%)
## [ 40 ] -371.028993280217 Inf
## [ 80 ] -370.889836821024 0.000937990512128878
## [ 120 ] -370.810424921066 0.000535394197549554
## [ 160 ] -370.764023874671 0.000312874519960949
## [ 200 ] -370.736371282354 0.000186470727305164
## [ 240 ] -370.719625484181 0.0001129276481604
## [ 280 ] -370.709354564873 6.92653097470587e-05
## [ 320 ] -370.702991977763 4.29089274161446e-05
## [ 360 ] -370.699020180318 2.67858641942194e-05
## [ 400 ] -370.6965262757 1.68190449694308e-05
## [ 440 ] -370.694953393123 1.06076611159376e-05
## [ 480 ] -370.693958071552 6.71255593007937e-06
## [ 520 ] -370.693326646959 4.25840275431552e-06
## [ 560 ] -370.692925319368 2.70660406234696e-06
## [ 600 ] -370.692669877554 1.72273310974096e-06
## [ 640 ] -370.692507117574 1.09767513606556e-06
## [ 680 ] -370.692403328498 6.99967651763141e-07
print(fit)
## PhenoPath fit with 100 cells and 40 genes
The phenopath function will print progress on iterations, ELBO, and % change in ELBO; this may be turned off by setting verbose = FALSE in the call.
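For example, to fit without printing progress:
fit_quiet <- phenopath(sim$y, sim$x, verbose = FALSE)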
Once the model has been fit it is important to check convergence with a call to plot_elbo(fit) to ensure the ELBO is approximately flat:
plot_elbo(fit)
The posterior mean estimates of the pseudotimes \(z\) sit in fit$m_z and can be extracted using the trajectory function. We can visualise these compared to both the true pseudotimes and the first principal component of the data:
qplot(sim$z, trajectory(fit)) +
xlab("True z") + ylab("Phenopath z")
qplot(sim$z, pca_df$PC1) +
xlab("True z") + ylab("PC1")
We see that this is strongly (anti-)correlated with the true pseudotimes:
cor(sim$z, trajectory(fit))
## [1] -0.9976558
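The sign of \(z\) is not identifiable (flipping \(z\), \(\lambda\) and \(\beta\) together leaves the model unchanged), so the absolute correlation is the relevant quantity:
abs(cor(sim$z, trajectory(fit)))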
Next, we’re interested in the interactions between the latent space and the covariates. There are three functions to help us here:

- interaction_effects retrieves the posterior interaction effect sizes
- interaction_sds retrieves the posterior interaction standard deviations
- significant_interactions applies a Bayesian significance test to the interaction parameters

Note that if \(P=1\) (i.e. there is only one covariate) each of these will return a vector, while if \(P>1\) a matrix is returned.
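Since our example has a single covariate, each of these returns a vector with one entry per gene:
length(interaction_effects(fit))  # 40, one per gene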
Alternatively, one can call the interactions function, which returns a data frame with the following entries:

- feature: the feature (usually gene)
- covariate: the covariate, in the order originally supplied to the call to phenopath
- interaction_effect_size: the effect size of the interaction (\(\beta\) from the statistical model)
- significant: boolean for whether the interaction effect is significantly different from 0
- chi: the precision of the ARD prior on \(\beta\)
- pathway_loading: the pathway loading \(\lambda\), showing the overall effect for each gene marginalised over the covariate
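For example, the first few rows can be inspected with (output not shown):
head(interactions(fit))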
In our simulated data above, the first 20 genes were simulated with no interaction effect while the latter 20 were simulated with interaction effects. We can pull out the interaction effect sizes, standard deviations, and significance test results into a data frame and plot:
gene_names <- paste0("gene", seq_len(ncol(fit$m_beta)))
df_beta <- tibble(beta = interaction_effects(fit),
                  beta_sd = interaction_sds(fit),
                  is_sig = significant_interactions(fit),
                  gene = gene_names)
df_beta$gene <- fct_relevel(df_beta$gene, gene_names)
ggplot(df_beta, aes(x = gene, y = beta, color = is_sig)) +
geom_point() +
geom_errorbar(aes(ymin = beta - 2 * beta_sd, ymax = beta + 2 * beta_sd)) +
theme(axis.text.x = element_text(angle = 90, hjust = 1),
axis.title.x = element_blank()) +
ylab(expression(beta)) +
scale_color_brewer(palette = "Set2", name = "Significant")
A typical analysis might follow by plotting the gene with the largest interaction effect size; we can do this as follows:
which_largest <- which.max(df_beta$beta)
df_large <- tibble(
y = sim[['y']][, which_largest],
x = factor(sim[['x']]),
z = sim[['z']]
)
ggplot(df_large, aes(x = z, y = y, color = x)) +
geom_point() +
scale_color_brewer(palette = "Set1") +
stat_smooth()
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
SummarizedExperiment as input

Alternatively you might have expression values in a SummarizedExperiment. For single-cell data it is highly recommended to work with a class derived from SummarizedExperiment.
We’ll first construct an example SummarizedExperiment using our previous simulation data:
suppressPackageStartupMessages(library(SummarizedExperiment))
exprs_mat <- t(sim$y)
pdata <- data.frame(x = sim$x)
sce <- SummarizedExperiment(assays = list(exprs = exprs_mat),
colData = pdata)
sce
## class: SummarizedExperiment
## dim: 40 100
## metadata(0):
## assays(1): exprs
## rownames: NULL
## rowData names(0):
## colnames: NULL
## colData names(1): x
Note that by default PhenoPath will use whatever is in the exprs assay of a SummarizedExperiment (i.e. assay(sce, "exprs")) as gene expression, which can be changed via the sce_assay string in the call to phenopath.
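For instance, the default behaviour corresponds to naming the assay explicitly:
fit <- phenopath(sce, "x", sce_assay = "exprs")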
We can then pass the \(x\) covariates to PhenoPath in three ways:

- as a vector (or matrix), as before,
- as a character naming a column of colData(sce) to use,
- as a formula to be applied to colData(sce).

For our example, these three look like
fit <- phenopath(sce, sim$x) # 1
fit <- phenopath(sce, "x") # 2
fit <- phenopath(sce, ~ x) # 3
Note that if the covariate column of the SummarizedExperiment is a factor it is coerced into a one-hot encoding. The intercept term is then removed, as this is taken care of automatically by the \(\lambda\) coefficient, and the columns are centre-scaled.
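As a rough illustration of that preprocessing (a sketch only, not the package's internal code), for a hypothetical three-level factor covariate:
f <- factor(c("a", "b", "c", "a", "b"))     # hypothetical factor covariate
X <- model.matrix(~ f)[, -1, drop = FALSE]  # expand to indicator columns and drop the intercept
X <- scale(X)                               # centre and scale each column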
The posterior distribution is naturally multi-modal and the use of variational inference means we essentially dive straight into the nearest local maximum. Therefore, correct initialisation of the latent space is important. This is controlled through the z_init argument.
PhenoPath allows for three different initialisations of \(z\):

- an integer specifying which principal component of the expression data to initialise to,
- a vector of length \(N\) specifying the initial values directly,
- "random", in which case \(z\) is initialised randomly.

For our example these three look like
fit <- phenopath(sim$y, sim$x, z_init = 1) # 1, initialise to first principal component
fit <- phenopath(sim$y, sim$x, z_init = sim$z) # 2, initialise to true values
fit <- phenopath(sim$y, sim$x, z_init = "random") # 3, random initialisation
There are several parameters that tune the coordinate ascent variational inference (CAVI):

- maxiter: the maximum number of iterations to run CAVI for
- elbo_tol: the percentage change in the ELBO below which the model is considered converged
- thin: computing the ELBO is expensive, so only compute the ELBO every thin iterations

For example:
fit <- phenopath(sim$y, sim$x,
maxiter = 1000, # 1000 iterations max
elbo_tol = 1e-2, # consider model converged when change in ELBO < 0.01%
thin = 20 # calculate ELBO every 20 iterations
)
sessionInfo()
## R version 3.6.1 (2019-07-05)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.3 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.10-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.10-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] parallel stats4 stats graphics grDevices utils datasets
## [8] methods base
##
## other attached packages:
## [1] SummarizedExperiment_1.16.0 DelayedArray_0.12.0
## [3] BiocParallel_1.20.0 matrixStats_0.55.0
## [5] Biobase_2.46.0 GenomicRanges_1.38.0
## [7] GenomeInfoDb_1.22.0 IRanges_2.20.0
## [9] S4Vectors_0.24.0 BiocGenerics_0.32.0
## [11] phenopath_1.10.0 forcats_0.4.0
## [13] tidyr_1.0.0 ggplot2_3.2.1
## [15] dplyr_0.8.3 BiocStyle_2.14.0
##
## loaded via a namespace (and not attached):
## [1] tidyselect_0.2.5 xfun_0.10 purrr_0.3.3
## [4] lattice_0.20-38 colorspace_1.4-1 vctrs_0.2.0
## [7] htmltools_0.4.0 yaml_2.2.0 rlang_0.4.1
## [10] pillar_1.4.2 glue_1.3.1 withr_2.1.2
## [13] RColorBrewer_1.1-2 GenomeInfoDbData_1.2.2 lifecycle_0.1.0
## [16] stringr_1.4.0 zlibbioc_1.32.0 munsell_0.5.0
## [19] gtable_0.3.0 codetools_0.2-16 evaluate_0.14
## [22] labeling_0.3 knitr_1.25 Rcpp_1.0.2
## [25] backports_1.1.5 scales_1.0.0 BiocManager_1.30.9
## [28] XVector_0.26.0 digest_0.6.22 stringi_1.4.3
## [31] bookdown_0.14 grid_3.6.1 tools_3.6.1
## [34] bitops_1.0-6 magrittr_1.5 lazyeval_0.2.2
## [37] RCurl_1.95-4.12 tibble_2.1.3 crayon_1.3.4
## [40] pkgconfig_2.0.3 zeallot_0.1.0 ellipsis_0.3.0
## [43] Matrix_1.2-17 assertthat_0.2.1 rmarkdown_1.16
## [46] R6_2.4.0 compiler_3.6.1
Blei, David M., Alp Kucukelbir, and Jon D. McAuliffe. 2016. “Variational Inference: A Review for Statisticians.” arXiv preprint arXiv:1601.00670.