Here, we describe the workflow to run variance-sensitive clustering on data from a quantitative omics experiment. In principle, this can be any multi-dimensional data set containing quantitative and optionally replicated values. This vignette is distributed under a CC BY-SA license.
vsclust 1.6.0
Clustering is a method to identify common patterns in high-dimensional data. These can be, for example, genes or proteins with similar quantitative changes, thus providing insight into the affected biological pathways.
Despite the large number of available clustering algorithms, most do not account for feature variance, i.e. the uncertainty of the measurements across the different experimental conditions. VSClust determines the characteristic patterns in high-dimensional data while accounting for the feature variance given through replicated measurements.
Here, we present an example script to run the full clustering analysis using the vsclust library. The same analysis can be carried out with the Shiny app (e.g. via its Docker image or on computproteomics.bmb.sdu.dk/Apps/VSClust), or with the corresponding command-line script. For the source code, see the package's source repository.
Use the common Bioconductor commands for installation:
if (!require("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("vsclust")
The full functionality of this vignette can be obtained by additionally installing and loading the packages matrixStats and clusterProfiler.
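A minimal setup chunk for loading the packages used below (this assumes they have already been installed via BiocManager as shown above):
library(vsclust)
library(matrixStats)      # provides rowMaxs(), used when writing the cluster results
library(clusterProfiler)  # optional, only needed for downstream functional analysis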
Here, we define the different parameters for the example data set protein_expressions. In the command-line version of VSClust (“runVSClust.R”), they can be given via a YAML file.
Comments:
A. Data sets with different numbers of replicates per condition need to be adapted to contain the same number of columns per condition. This can be done by either removing excess replicates or adding empty columns.
B. We assume the input data to be of the following format: A1, B1, C1, …, A2, B2, C2, …, where letters denote sample type and numbers are the different replicates.
C. If you prefer to estimate feature variance differently, use averages and add an estimate of the standard deviation as the last column. You will then need to set the last option of PrepareForVSClust to FALSE.
D. If you don’t have replicates, use the same format as in C. and set the standard deviations to 1 (see the sketch below).
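As a purely illustrative sketch with hypothetical random data (not part of the example data set), the two accepted input layouts could be constructed as follows:
## Layout B: replicated intensities for 4 conditions (A-D) with 3 replicates,
## columns ordered A1, B1, C1, D1, A2, B2, ... (letters = condition, numbers = replicate)
set.seed(1)
dat_repl <- matrix(rnorm(20 * 12), nrow = 20,
                   dimnames = list(paste0("prot", 1:20),
                                   paste0(rep(LETTERS[1:4], 3), rep(1:3, each = 4))))
## Layout C/D: condition averages plus a last column of standard deviations
## (set to 1 if there are no replicates); then call PrepareForVSClust with its
## last argument set to FALSE
dat_avg <- cbind(matrix(rnorm(20 * 4), nrow = 20,
                        dimnames = list(paste0("prot", 1:20), LETTERS[1:4])),
                 SD = 1)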
#### Input parameters, only read when no parameter file was provided
## All principal parameters for running VSClust can be defined as in the
## shinyapp at computproteomics.bmb.sdu.dk/Apps/VSClust
# name of study
Experiment <- "ProtExample"
# Number of replicates/sample per different experimental condition (sample
# type)
NumReps <- 3
# Number of different experimental conditions (e.g. time points or sample
# types)
NumCond <- 4
# Paired or unpaired statistical tests when carrying out LIMMA for
# statistical testing
isPaired <- FALSE
# Number of threads to accelerate the calculation (use 1 if in doubt)
cores <- 1
# If 0 (default), then automatically estimate the cluster number for the
# vsclust run from the Minimum Centroid Distance
PreSetNumClustVSClust <- 0
# If 0 (default), then automatically estimate the cluster number for the
# original fuzzy c-means from the Minimum Centroid Distance
PreSetNumClustStand <- 0
# max. number of clusters when estimating the number of clusters. Higher
# numbers can drastically extend the computation time.
maxClust <- 10
First, we load the example proteomics data set and carry out statistical testing of all conditions versus the first, based on the LIMMA moderated t-test. The data consists of mice fed with four different diets (high fat, TTA, fish oil, and TTA + fish oil).
You can learn more about the data set with ?protein_expressions
This will calculate the false discovery rates for the differentially regulated features (pairwise comparisons versus the first “high fat” condition) and most importantly, their expected individual variances, to be used in the variance-sensitive clustering. These variances can also be uploaded separately via a last column containing them as individual standard deviations.
The PrepareForVSClust function also creates a PCA plot to assess variability and to check whether the samples have been loaded correctly (replicated samples should form groups).
After estimating the standard deviations, the matrix consists of the averaged quantitative feature values and a last column for the standard deviations of the features.
data(protein_expressions)
dat <- protein_expressions
#### running statistical analysis and estimation of individual variances
statOut <- PrepareForVSClust(dat, NumReps, NumCond, isPaired, TRUE)
dat <- statOut$dat
Sds <- dat[,ncol(dat)]
cat(paste("Features:",nrow(dat),"\nMissing values:",
sum(is.na(dat)),"\nMedian standard deviations:",
round(median(Sds,na.rm=TRUE),digits=3)))
## Features: 574
## Missing values: 0
## Median standard deviations: 0.22
## Write output into file
write.csv(statOut$statFileOut,
paste("",Experiment,"statFileOut.csv",sep=""))
There is no simple way to find the optimal number of clusters in a data set. To obtain this number, we run the clustering for different cluster numbers and evaluate them via so-called validity indices, which provide information about suitable cluster numbers. VSClust mainly uses the Minimum Centroid Distance, which denotes the shortest distance between any two of the cluster centroids. Alternatively, one can inspect the Xie-Beni index.
The output of estimClustNum
contains the suggestion for the number of clusters.
We further visualize the outcome.
#### Estimate the number of clusters, with maxClust as the maximum number of
#### clusters to run the estimation with
ClustInd <- estimClustNum(dat, maxClust=maxClust, scaling="standardize", cores=cores)
## Running cluster number 3
## Running cluster number 4
## Running cluster number 5
## Running cluster number 6
## Running cluster number 7
## Running cluster number 8
## Running cluster number 9
## Running cluster number 10
#### Use estimated cluster number or set your own
if (PreSetNumClustVSClust == 0)
PreSetNumClustVSClust <- optimalClustNum(ClustInd)
if (PreSetNumClustStand == 0)
PreSetNumClustStand <- optimalClustNum(ClustInd, method="FCM")
#### Visualize
estimClust.plot(ClustInd)
Now we run the clustering again with the optimal parameters from the estimation. One can also choose alternative numbers of clusters corresponding to large drops in the Minimum Centroid Distance or low values of the Xie-Beni index, as sketched below.
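As a small sketch, an alternative cluster number could be set manually before the clustering run (the value 6 below is purely illustrative):
## Hypothetical manual choice, e.g. at another large drop of the Minimum
## Centroid Distance in the plot above (kept commented out here)
# PreSetNumClustVSClust <- 6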
First, we carry out the variance-sensitive method.
#### Run clustering (VSClust and standard fcm clustering)
ClustOut <- runClustWrapper(dat,
PreSetNumClustVSClust,
NULL,
VSClust=TRUE,
scaling="standardize",
cores=cores)
Bestcl <- ClustOut$Bestcl
VSClust_cl <- Bestcl
#ClustOut$p
## Write clustering results (VSClust)
write.csv(data.frame(cluster=Bestcl$cluster,
ClustOut$outFileClust,
isClusterMember=rowMaxs(Bestcl$membership)>0.5,
maxMembership=rowMaxs(Bestcl$membership),
Bestcl$membership),
paste(Experiment,
"FCMVarMResults",
Sys.Date(),
".csv",
sep=""))
## Write coordinates of cluster centroids
write.csv(Bestcl$centers,
paste(Experiment,
"FCMVarMResultsCentroids",
Sys.Date(),
".csv",
sep=""))
We see that most of the differences are between the TTA diets and the rest. This shows that the TTA fatty acids have a strong impact on the organism. Cluster three contains the proteins that are commonly less abundant in mice fed with fish oil and thus are related to biological processes affected by this particular diet.
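As a hedged follow-up sketch (assuming the feature names are carried as the names of the cluster vector, as used when writing the results above), the confident members of cluster three can be extracted for closer inspection:
## Proteins assigned to cluster 3 with a maximum membership value above 0.5
cluster3_members <- names(VSClust_cl$cluster)[
  VSClust_cl$cluster == 3 & rowMaxs(VSClust_cl$membership) > 0.5]
head(cluster3_members)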
For comparison, this is the clustering using standard fuzzy c-means applied to the means over the replicates.
ClustOut <- runClustWrapper(dat, PreSetNumClustStand, NULL, VSClust=FALSE,
scaling="standardize", cores=cores)
Bestcl <- ClustOut$Bestcl
## Write clustering results (standard fcm)
write.csv(data.frame(cluster=Bestcl$cluster,
ClustOut$outFileClust,
isClusterMember=rowMaxs(Bestcl$membership)>0.5,
maxMembership=rowMaxs(Bestcl$membership),
Bestcl$membership),
paste(Experiment,
"FCMResults",
Sys.Date(),
".csv",
sep=""))
## Write coordinates of cluster centroids
write.csv(Bestcl$centers, paste(Experiment,
"FCMResultsCentroids",
Sys.Date(),
".csv",
sep=""))
Here, the clusters look rather similar. VSClust performs best for larger numbers of different experimental conditions (one finds major improvements for \(D>6\)). For a 4-dimensional data set, the algorithm mostly filters out features with very high variance, which makes them unsuitable members of any particular cluster.
This analysis is then followed by evaluating the features (here proteins) of
each cluster for their biological relevance. This can be done by functional
analysis with e.g. the clusterProfiler
package.
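As a hedged sketch that is not part of the original analysis: assuming the feature names are protein identifiers that an annotation package such as org.Mm.eg.db can map (the identifier type below is an assumption), an over-representation analysis of one cluster could look like this:
## Hypothetical over-representation analysis of cluster 1 with clusterProfiler
# library(org.Mm.eg.db)
# cluster1_proteins <- names(VSClust_cl$cluster)[VSClust_cl$cluster == 1]
# ego <- enrichGO(gene          = cluster1_proteins,
#                 OrgDb         = org.Mm.eg.db,
#                 keyType       = "UNIPROT",   # adjust to the actual identifier type
#                 ont           = "BP",
#                 pAdjustMethod = "BH")
# head(ego)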
sessionInfo()
## R version 4.4.0 beta (2024-04-15 r86425)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 22.04.4 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.19-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] clusterProfiler_4.12.0 MultiAssayExperiment_1.30.0
## [3] SummarizedExperiment_1.34.0 Biobase_2.64.0
## [5] GenomicRanges_1.56.0 GenomeInfoDb_1.40.0
## [7] IRanges_2.38.0 S4Vectors_0.42.0
## [9] BiocGenerics_0.50.0 MatrixGenerics_1.16.0
## [11] matrixStats_1.3.0 vsclust_1.6.0
## [13] BiocStyle_2.32.0
##
## loaded via a namespace (and not attached):
## [1] RColorBrewer_1.1-3 jsonlite_1.8.8 magrittr_2.0.3
## [4] magick_2.8.3 farver_2.1.1 rmarkdown_2.26
## [7] fs_1.6.4 zlibbioc_1.50.0 vctrs_0.6.5
## [10] memoise_2.0.1 ggtree_3.12.0 tinytex_0.50
## [13] htmltools_0.5.8.1 S4Arrays_1.4.0 BiocBaseUtils_1.6.0
## [16] SparseArray_1.4.0 gridGraphics_0.5-1 sass_0.4.9
## [19] bslib_0.7.0 plyr_1.8.9 cachem_1.0.8
## [22] igraph_2.0.3 mime_0.12 lifecycle_1.0.4
## [25] pkgconfig_2.0.3 gson_0.1.0 Matrix_1.7-0
## [28] R6_2.5.1 fastmap_1.1.1 GenomeInfoDbData_1.2.12
## [31] shiny_1.8.1.1 digest_0.6.35 aplot_0.2.2
## [34] enrichplot_1.24.0 colorspace_2.1-0 patchwork_1.2.0
## [37] AnnotationDbi_1.66.0 RSQLite_2.3.6 fansi_1.0.6
## [40] httr_1.4.7 polyclip_1.10-6 abind_1.4-5
## [43] compiler_4.4.0 bit64_4.0.5 withr_3.0.0
## [46] BiocParallel_1.38.0 viridis_0.6.5 DBI_1.2.2
## [49] highr_0.10 ggforce_0.4.2 MASS_7.3-60.2
## [52] DelayedArray_0.30.0 HDO.db_0.99.1 tools_4.4.0
## [55] scatterpie_0.2.2 ape_5.8 httpuv_1.6.15
## [58] glue_1.7.0 nlme_3.1-164 GOSemSim_2.30.0
## [61] promises_1.3.0 shadowtext_0.1.3 grid_4.4.0
## [64] reshape2_1.4.4 fgsea_1.30.0 generics_0.1.3
## [67] gtable_0.3.5 tidyr_1.3.1 data.table_1.15.4
## [70] tidygraph_1.3.1 utf8_1.2.4 XVector_0.44.0
## [73] ggrepel_0.9.5 pillar_1.9.0 stringr_1.5.1
## [76] yulab.utils_0.1.4 limma_3.60.0 later_1.3.2
## [79] splines_4.4.0 dplyr_1.1.4 tweenr_2.0.3
## [82] treeio_1.28.0 lattice_0.22-6 bit_4.0.5
## [85] tidyselect_1.2.1 GO.db_3.19.1 Biostrings_2.72.0
## [88] knitr_1.46 gridExtra_2.3 bookdown_0.39
## [91] xfun_0.43 graphlayouts_1.1.1 statmod_1.5.0
## [94] stringi_1.8.3 UCSC.utils_1.0.0 lazyeval_0.2.2
## [97] ggfun_0.1.4 yaml_2.3.8 evaluate_0.23
## [100] codetools_0.2-20 ggraph_2.2.1 tibble_3.2.1
## [103] qvalue_2.36.0 BiocManager_1.30.22 ggplotify_0.1.2
## [106] cli_3.6.2 xtable_1.8-4 munsell_0.5.1
## [109] jquerylib_0.1.4 Rcpp_1.0.12 png_0.1-8
## [112] parallel_4.4.0 ggplot2_3.5.1 blob_1.2.4
## [115] DOSE_3.30.0 viridisLite_0.4.2 tidytree_0.4.6
## [118] scales_1.3.0 purrr_1.0.2 crayon_1.5.2
## [121] rlang_1.1.3 cowplot_1.1.3 fastmatch_1.1-4
## [124] KEGGREST_1.44.0