This vignette is intended as a short tutorial for the main functionalities of SIAMCAT. More detailed tutorials and examples of additional workflows can be found online under EMBL-microbiome tools.
The promise of elucidating associations between the microbiota and their host, with diagnostic and therapeutic potential, is fueling metagenomics research. However, there is a lack of user-friendly software tools implementing robust statistical testing and machine learning methods suitable for microbiome data. Here, we describe SIAMCAT, a solution to this problem implemented as an R package.
Associations between microbiome and host phenotypes are ideally described by quantitative models able to predict host status from microbiome composition. SIAMCAT can do so for data from hundreds of thousands of microbial taxa, gene families, or metabolic pathways over hundreds of samples. SIAMCAT produces graphical output for convenient assessment of the quality of the input data and statistical associations, for model diagnostics and inference revealing the most predictive microbial biomarkers.
First, let's load the SIAMCAT package and use the files included in SIAMCAT. The data are the same as used in the publication of Zeller et al., which demonstrated the potential of microbial markers in faecal samples to distinguish patients with colorectal cancer (CRC) from healthy controls.
library(SIAMCAT)
fn.in.feat <- system.file(
"extdata",
"feat_crc_study-pop-I_N141_tax_profile_mocat_bn_specI_clusters.tsv",
package = "SIAMCAT"
)
fn.in.label <- system.file(
"extdata",
"label_crc_study-pop-I_N141_tax_profile_mocat_bn_specI_clusters.tsv",
package = "SIAMCAT"
)
fn.in.meta <- system.file(
"extdata",
"num_metadata_crc_study-pop-I_N141_tax_profile_mocat_bn_specI_clusters.tsv",
package = "SIAMCAT"
)
We can access the files with the dedicated SIAMCAT functions and directly construct a SIAMCAT object containing the microbial features, the patients' labels, and the metadata for the patients.
feat <- read.features(fn.in.feat)
## The provided feature names were not semantically correct for
## use in R, they were updated.
label <- read.labels(fn.in.label)
meta <- read.meta(fn.in.meta)
siamcat <- siamcat(feat, label, meta)
Some information about the siamcat object can be accessed with the show function from phyloseq:
show(siamcat)
## siamcat-class object
## label() label: 88 healthy and 53 cancer samples
##
## contains phyloseq-class experiment-level object @phyloseq:
## phyloseq@otu_table() OTU Table: [ 1754 taxa and 141 samples ]
## phyloseq@sam_data() Sample Data: [ 141 samples by 8 sample variables ]
In fact, the siamcat-class object extends the phyloseq-class object:
phyloseq <- physeq(siamcat)
show(phyloseq)
## phyloseq-class experiment-level object
## otu_table() OTU Table: [ 1754 taxa and 141 samples ]
## sample_data() Sample Data: [ 141 samples by 8 sample variables ]
The validate.data function ensures that we have labels for all samples in the feature table and vice versa.
siamcat <- validate.data(siamcat, verbose=1)
## Data succesfully validated
The data can also be sub-selected based on the available metadata. For example, if we want to exclude patients that are too young or too old for the question of interest, we can do so easily with the call below (a sketch for a categorical variable follows its output):
siamcat <- select.samples(
siamcat,
filter = 'age',
allowed.set = NULL,
allowed.range = c(20, 90),
verbose = 2
)
## + starting select.samples
## +++ removed 0 samples with age not in [20, 90] (retaining 141)
## Keeping labels of 141 sample(s).
## + finished select.samples in 0.012 s
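Selection on a categorical variable works analogously via the allowed.set argument. The following is a hypothetical sketch (not run here); the metadata column 'gender' and its numeric coding are assumptions for illustration only.
## Not run; hypothetical: the column 'gender' and its coding are assumptions
siamcat.sub <- select.samples(
    siamcat,
    filter = 'gender',
    allowed.set = c(1),
    allowed.range = NULL,
    verbose = 2
)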
Since the dataset still contains quite a lot of microbial markers, we perform unsupervised feature selection using the function filter.features. Here, we filter based on overall abundance, but we could also filter based on prevalence or cumulative abundance (see the sketch after the output below).
siamcat <- filter.features(
siamcat,
filter.method = 'abundance',
cutoff = 0.001,
recomp.prop = FALSE,
rm.unmapped = TRUE,
verbose = 2
)
## + starting filter.features
## +++ before filtering, the data has 1754 features
## +++ removed 1 features corresponding to UNMAPPED reads
## +++ removed 1546 features whose values did not exceed 0.001 in any sample (retaining 207)
## + finished filter.features in 0.014 s
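As mentioned above, the same function could filter based on prevalence instead of overall abundance. The sketch below is not run here and hedged: the filter.method value 'prevalence' and the interpretation of cutoff as a minimum prevalence are assumptions.
## Not run; hedged sketch of prevalence-based filtering
siamcat.prev <- filter.features(
    siamcat,
    filter.method = 'prevalence',  # assumption: prevalence-based filtering
    cutoff = 0.05,                 # assumption: keep features present in at least 5% of samples
    recomp.prop = FALSE,
    rm.unmapped = TRUE,
    verbose = 2
)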
Other core functionalities of SIAMCAT are its modules for testing confounders and associations. Confounders are checked with the function check.confounders, which produces a plot for each potential confounder in the metadata and writes the output to a pdf file. The function can be used as follows:
## Not run here, since the function produces a pdf-file as output
check.confounders(siamcat,
fn.plot = 'conf_check.pdf')
For example, the resulting pdf would contain a plot for Age as a potential confounder.
Similarly, associations between microbial markers and the label can be tested with the check.associations function. The function computes a generalized fold change for each marker, the prevalence shift, a single-feature AUC, and the significance of the association, which is assessed with a Wilcoxon test (an illustrative sketch of the generalized fold change follows the call below). The function again produces a pdf file as output and is thus not run here, but it can be used as follows:
## Not run here, since the function produces a pdf-file as output
check.associations(
siamcat,
sort.by = 'fc',
fn.plot = 'assoc.pdf',
alpha = 0.05,
mult.corr = "fdr",
detect.lim = 10 ^ -6,
max.show = 50,
plot.type = "quantile.box",
panels = c("fc", "prevalence", "auroc"),
verbose = 2
)
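To make the generalized fold change more concrete, here is a minimal illustrative sketch for a single feature, assuming it is defined as the mean difference over a grid of quantiles of log-transformed abundances between the two groups; the quantile grid and pseudo-count are illustrative choices, not necessarily the exact values used by check.associations.
## Illustrative sketch of a generalized fold change for one feature:
## mean difference over a quantile grid of log10 abundances (assumed definition)
gen.fold.change <- function(x.case, x.ctrl, pseudo.count = 1e-06,
                            probs = seq(0.05, 0.95, by = 0.05)) {
    q.case <- quantile(log10(x.case + pseudo.count), probs = probs)
    q.ctrl <- quantile(log10(x.ctrl + pseudo.count), probs = probs)
    mean(q.case - q.ctrl)
}
## hypothetical usage for one taxon, given a logical vector 'is.cancer'
## marking the cancer samples:
## gen.fold.change(feat['taxon_of_interest', is.cancer],
##                 feat['taxon_of_interest', !is.cancer])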
Another feature of SIAMCAT is the versatile but easy-to-use interface for the construction of machine learning models on the basis of microbial markers. SIAMCAT contains functions for data normalization, splitting the data into cross-validation folds, training the model, and making predictions based on cross-validation instances and the trained models.
Data normalization is performed with the normalize.features function. Several control options are available, for example the choice of the normalization method (log.unit, log.std, rank.unit, rank.std, or log.clr) as well as additional method-specific parameters. Here, we use the log.unit method (a rank-based alternative is sketched after the output below):
siamcat <- normalize.features(
siamcat,
norm.method = "log.unit",
norm.param = list(
log.n0 = 1e-06,
n.p = 2,
norm.margin = 1
),
verbose = 2
)
## + starting normalize.features
## +++ performing de novo normalization using the log.unit method
## + feature sparsity before normalization: 45.12%
## +++ feature sparsity after normalization: 0 %
## + finished normalize.features in 0.043 s
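A rank-based alternative might look like the following sketch (not run here); the norm.param entry sd.min.q and its value are assumptions about the method-specific parameters.
## Not run; hedged sketch of a rank-based normalization
## (the norm.param entry sd.min.q is an assumption)
siamcat.rank <- normalize.features(
    siamcat,
    norm.method = "rank.std",
    norm.param = list(sd.min.q = 0.1),
    verbose = 2
)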
Preparation of the cross-validation folds is a crucial step in machine learning. SIAMCAT greatly simplifies the set-up of cross-validation schemes, including stratification of samples or keeping samples inseparable based on metadata (see the sketch after the output below). For this small example, we choose a twice-repeated 5-fold cross-validation scheme. The data split will be saved in the data_split slot of the siamcat object.
siamcat <- create.data.split(
siamcat,
num.folds = 5,
num.resample = 2,
stratify = TRUE,
inseparable = NULL,
verbose = 2
)
## + starting create.data.split
## + resampling round 1
## + resampling round 2
## + finished create.data.split in 0.01 s
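To keep related samples in the same fold (for example, repeated samples from the same individual), the inseparable argument can name a metadata column. The sketch below is hypothetical: the column 'subject_id' does not exist in this example dataset, and stratification is switched off since blocking and stratification are typically not combined.
## Not run; hypothetical: the metadata column 'subject_id' is an assumption
siamcat.block <- create.data.split(
    siamcat,
    num.folds = 5,
    num.resample = 2,
    stratify = FALSE,           # block by subject instead of stratifying
    inseparable = 'subject_id',
    verbose = 2
)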
The actual model training is performed using the function train.model. Again, multiple options for customization are available, ranging from the machine learning method to the measure used for model selection and a customizable parameter set for hyperparameter tuning. Here, we train a Lasso model and enforce at least 5 non-zero coefficients (a sketch with a different learner follows the output below).
siamcat <- train.model(
siamcat,
method = "lasso",
stratify = TRUE,
modsel.crit = list("pr"),
min.nonzero.coeff = 5,
param.set = NULL,
verbose = 3
)
## + starting train.model
## +++ preparing selection measures
## + training lasso models on 10 training sets
## +++ training on cv fold: 1
## ++++ repetition: 1
## ++++ repetition: 2
## +++ training on cv fold: 2
## ++++ repetition: 1
## ++++ repetition: 2
## +++ training on cv fold: 3
## ++++ repetition: 1
## ++++ repetition: 2
## +++ training on cv fold: 4
## ++++ repetition: 1
## ++++ repetition: 2
## +++ training on cv fold: 5
## ++++ repetition: 1
## ++++ repetition: 2
## + finished train.model in 17.6 s
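For comparison, the same interface can be pointed at a different learner. The sketch below is not run here and hedged: the method value "randomForest" and the model-selection measure "auc" are assumed to be supported, and all tuning parameters are left at their defaults.
## Not run; hedged sketch with a different learner
## (method = "randomForest" and modsel.crit = "auc" are assumptions)
siamcat.rf <- train.model(
    siamcat,
    method = "randomForest",
    stratify = TRUE,
    modsel.crit = list("auc"),
    param.set = NULL,
    verbose = 3
)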
The models are saved in the model_list slot of the siamcat object. This slot stores objects of model_list-class. To get the complete list out of the SIAMCAT object:
model_list <- model_list(siamcat)
This slot also stores information on which method was used to construct the model:
model_type(siamcat)
## [1] "lasso"
Models can also be easily accessed:
models <- models(siamcat)
models[[1]]
## Model for learner.id=classif.cvglmnet; learner.class=classif.cvglmnet
## Trained on: task.id = data; obs = 112; features = 207
## Hyperparameters: nlambda=100,alpha=1
Using the data split and the models trained in the previous step, we can use the function make.predictions to apply the models to the test instances in the data split. The predictions will be saved in the pred_matrix slot of the siamcat object.
siamcat <- make.predictions(siamcat, verbose=0)
pred_matrix <- pred_matrix(siamcat)
head(pred_matrix)
## CV_rep1 CV_rep2
## CCIS27304052ST-3-0 -0.66663330 -0.82434399
## CCIS15794887ST-4-0 -0.59495246 -0.63845637
## CCIS74726977ST-3-0 -0.36886696 -0.08185959
## CCIS16561622ST-4-0 0.19181463 -0.45406671
## CCIS79210440ST-3-0 -0.02002631 -0.57577568
## CCIS82507866ST-3-0 -0.91231450 -0.82832222
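Each column of the prediction matrix holds the scores from one repetition of the cross-validation, so a single score per sample can be obtained by averaging across the repetitions:
# average the prediction scores over the cross-validation repetitions
mean.pred <- rowMeans(pred_matrix)
head(mean.pred)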
In the final part, we want to find out how well the model performed and which microbial markers were selected in the model. To do so, we first calculate how well the predictions fit the real data using the function evaluate.predictions. This function calculates the Area Under the Receiver Operating Characteristic (ROC) Curve (AU-ROC) and the Precision-Recall (PR) Curve for each resampled cross-validation run. The results of the evaluation will be stored in the eval_data slot of the siamcat object.
siamcat <- evaluate.predictions(siamcat, verbose=2)
## + starting evaluate.predictions
## + finished evaluate.predictions in 7.47 s
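The numeric results can also be inspected directly. The plotting code further below treats eval_data$roc.average[[1]] as a pROC roc object; under that assumption, the mean AU-ROC would be accessible as sketched here.
## Hedged sketch: assumes roc.average[[1]] is a pROC 'roc' object
eval_data <- eval_data(siamcat)
as.numeric(eval_data$roc.average[[1]]$auc)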
To plot the results of the evaluation, we can use the function model.evaluation.plot, which produces a pdf file showing the ROC and PR curves for the different resampled runs as well as the mean ROC and PR curves.
## Not run here, since the function produces a pdf-file as output
model.evaluation.plot(siamcat,'eval_plot.pdf',verbose = 2)
Instead of the pdf output, we can also access the evaluation data in the siamcat object directly and plot the ROC curves:
# plot ROC Curves
plot(
NULL,
xlim = c(0, 1),
ylim = c(0, 1),
xlab = 'False positive rate',
ylab = 'True positive rate',
type = 'n'
)
title('ROC curve for the model')
abline(a = 0, b = 1, lty = 3)
# for each resampled CV run
eval_data <- eval_data(siamcat)
for (r in 1:length(eval_data$roc.all)) {
roc.c = eval_data$roc.all[[r]]
lines(1 - roc.c$specificities, roc.c$sensitivities,
col = gray(runif(1, 0.2, 0.8)))
}
# mean ROC curve
roc.summ = eval_data$roc.average[[1]]
lines(1 - roc.summ$specificities,
roc.summ$sensitivities,
col = 'black',
lwd = 2)
# plot CI
x = as.numeric(rownames(roc.summ$ci))
yl = roc.summ$ci[, 1]
yu = roc.summ$ci[, 3]
polygon(1 - c(x, rev(x)), c(yl, rev(yu)), col = '#88888844' , border = NA)
The final plot produced by SIAMCAT is the model interpretation plot, created by the model.interpretation.plot function. For the top selected features, the plot shows the model weights (and how robust they are) as a barplot, a heatmap with the z-scores or fold changes, and a boxplot showing the proportion of the total model weight that is captured by the top selected features. The function again produces a pdf file as output and is thus not run here; an example of how it can be used is shown below:
## Not run here, since the function produces a pdf-file as output
model.interpretation.plot(
siamcat,
fn.plot = 'interpretation.pdf',
consens.thres = 0.5,
norm.models = TRUE,
limits = c(-3, 3),
heatmap.type = 'zscore',
verbose = 2
)
Alternatively, only the z-score heatmap for the top selected features can be displayed (not reproduced here).
sessionInfo()
## R version 3.5.0 (2018-04-23)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.4 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.7-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.7-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] SIAMCAT_1.0.0 phyloseq_1.24.0 mlr_2.12.1 ParamHelpers_1.10
## [5] BiocStyle_2.8.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_0.12.16 ape_5.1 lattice_0.20-35
## [4] Biostrings_2.48.0 glmnet_2.0-16 rprojroot_1.3-2
## [7] digest_0.6.15 foreach_1.4.4 gridBase_0.4-7
## [10] plyr_1.8.4 backports_1.1.2 stats4_3.5.0
## [13] evaluate_0.10.1 ggplot2_2.2.1 pillar_1.2.2
## [16] zlibbioc_1.26.0 rlang_0.2.0 lazyeval_0.2.1
## [19] data.table_1.10.4-3 vegan_2.5-1 S4Vectors_0.18.0
## [22] Matrix_1.2-14 checkmate_1.8.5 rmarkdown_1.9
## [25] splines_3.5.0 stringr_1.3.0 igraph_1.2.1
## [28] munsell_0.4.3 compiler_3.5.0 xfun_0.1
## [31] pkgconfig_2.0.1 BiocGenerics_0.26.0 multtest_2.36.0
## [34] mgcv_1.8-23 BBmisc_1.11 htmltools_0.3.6
## [37] biomformat_1.8.0 tibble_1.4.2 gridExtra_2.3
## [40] bookdown_0.7 IRanges_2.14.0 codetools_0.2-15
## [43] matrixStats_0.53.1 XML_3.98-1.11 permute_0.9-4
## [46] MASS_7.3-50 grid_3.5.0 nlme_3.1-137
## [49] jsonlite_1.5 gtable_0.2.0 magrittr_1.5
## [52] pROC_1.11.0 scales_0.5.0 stringi_1.1.7
## [55] XVector_0.20.0 reshape2_1.4.3 parallelMap_1.3
## [58] PRROC_1.3 Rhdf5lib_1.2.0 RColorBrewer_1.1-2
## [61] iterators_1.0.9 tools_3.5.0 ade4_1.7-11
## [64] Biobase_2.40.0 LiblineaR_2.10-8 parallel_3.5.0
## [67] survival_2.42-3 yaml_2.1.18 colorspace_1.3-2
## [70] rhdf5_2.24.0 cluster_2.0.7-1 beanplot_1.2
## [73] knitr_1.20