This vignette is intended for users who are already familiar with the basic condiments workflow described in the first vignette. Here, we show how to modify the default parameters for the first two steps of the workflow.

# For analysis
library(condiments)
library(slingshot)
set.seed(21)

Toy dataset

We rely on the same toy dataset as in the first vignette.

data("toy_dataset", package = "condiments")
df <- toy_dataset$sd
rd <- as.matrix(df[, c("Dim1", "Dim2")])
sds <- slingshot(rd, df$cl)

The topologyTest function

By default, the topologyTest function requires only two inputs: the sds object and the condition labels. To limit the run time of this vignette, we also reduce the number of permutations used to generate trajectories under the null, setting the rep argument to 10 instead of the default 100. As a result, the test statistics may be more variable.

top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10)
## Generating permuted trajectories
## Running KS-mean test
knitr::kable(top_res)
|method  | thresh| statistic| p.value|
|:-------|------:|---------:|-------:|
|KS_mean |   0.01|         0|       1|
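As a reminder of the statistic behind KS_mean, here is a plain two-sample Kolmogorov-Smirnov test in base R on simulated data. This is only an illustration of the test itself, not of the condiments internals, which average KS statistics over the permuted trajectories; the variable names are made up for the example.

```r
# Two samples drawn from the same distribution: the null of the KS test holds,
# so we expect a small KS distance and a non-extreme p-value
set.seed(21)
pseudotime_a <- rnorm(200)  # e.g. pseudotimes under condition A
pseudotime_b <- rnorm(200)  # condition B, same distribution
res <- ks.test(pseudotime_a, pseudotime_b)
res$statistic  # KS distance D between the two empirical CDFs
res$p.value
```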

Changing the method or the threshold

The topologyTest function can be relatively slow on large datasets. Moreover, when changing the method used to test the null hypothesis that a common trajectory should be fitted, the first step of generating rep trajectories under the null is identical. Therefore, we allow users to specify more than one method and more than one threshold value. Here, we use both the Kolmogorov-Smirnov test (Smirnov 1939) and the classifier test (Lopez-Paz and Oquab 2016).

top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10,
                        methods = c("KS_mean", "Classifier"),
                        threshs = c(0, .01, .05, .1))
## Generating permuted trajectories
## Running KS-mean test
## Running Classifier test
knitr::kable(top_res)
|method     | thresh| statistic| p.value|
|:----------|------:|---------:|-------:|
|KS_mean    |   0.00| 0.0070000|       1|
|KS_mean    |   0.01| 0.0000000|       1|
|KS_mean    |   0.05| 0.0000000|       1|
|KS_mean    |   0.10| 0.0000000|       1|
|Classifier |   0.00| 0.3666667|       1|
|Classifier |   0.01| 0.3466667|       1|
|Classifier |   0.05| 0.3216667|       1|
|Classifier |   0.10| 0.2750000|       1|
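The classifier test follows the idea of Lopez-Paz and Oquab (2016): train a classifier to predict the condition label and check whether its held-out accuracy beats chance. Below is a minimal base-R sketch of that idea using logistic regression; condiments relies on more elaborate classifiers internally, and all names here are invented for the example.

```r
# Simulated 2-D reduced-dimension coordinates with a condition label;
# both conditions come from the same distribution, so a classifier should
# do no better than chance on held-out cells
set.seed(21)
n <- 200
d <- data.frame(dim1 = rnorm(n), dim2 = rnorm(n),
                condition = rep(0:1, each = n / 2))
train <- sample(n, n / 2)

# Fit a logistic-regression classifier on half of the cells
fit <- glm(condition ~ dim1 + dim2, data = d[train, ], family = binomial)
# Evaluate accuracy on the held-out half
prob <- predict(fit, newdata = d[-train, ], type = "response")
correct <- sum((prob > 0.5) == d$condition[-train])
acc <- correct / (n / 2)

# Crude p-value for "accuracy better than chance"
# (the actual test is more careful about calibration)
binom.test(correct, n / 2, p = 0.5, alternative = "greater")$p.value
```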

To see all available methods, consult the help page of topologyTest (?topologyTest) and look at the methods argument.

Passing arguments to the test method

For all methods except the KS test, additional parameters can be specified using a custom argument: args_classifier, args_wass or args_mmd. See the help file of the given test for more information on these parameters. For example, since the default test based on the Wasserstein distance and a permutation test is quite slow, we can pass a fast argument.

top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10,
                        methods = "wasserstein_permutation",
                        args_wass = list(fast = TRUE, S = 100, iterations  = 10^2))
## Generating permuted trajectories
## Running wassertsein permutation test
knitr::kable(top_res)
|method                  | thresh| statistic| p.value|
|:-----------------------|------:|---------:|-------:|
|wasserstein_permutation |     NA|  1.640631|    0.65|
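For intuition, the Wasserstein-1 distance between two equal-size one-dimensional samples reduces to the mean absolute difference between their sorted values. The toy function below illustrates this; condiments itself uses dedicated machinery (with the fast subsampling approximation controlled by the S and iterations arguments above), and wasserstein_1d is a made-up helper for this sketch.

```r
# Empirical Wasserstein-1 distance in one dimension, for equal-size samples:
# sort both samples and average the absolute differences
wasserstein_1d <- function(x, y) {
  stopifnot(length(x) == length(y))
  mean(abs(sort(x) - sort(y)))
}

set.seed(21)
x <- rnorm(500)            # reference sample
y <- rnorm(500, mean = 1)  # same shape, shifted by 1
wasserstein_1d(x, x)       # identical samples: distance 0
wasserstein_1d(x, y)       # close to the true shift of 1
```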

Using parallelisation

For now, only the first part of topologyTest (generating the permuted trajectories) has been designed for parallelisation, using the BiocParallel package. For example, to run with 4 workers, you can run the following commands:

library(BiocParallel)
BPPARAM <- bpparam()
BPPARAM$progressbar <- TRUE
BPPARAM$workers <- 4
top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 100, 
                        parallel = TRUE, BPPARAM = BPPARAM)
knitr::kable(top_res)

Differential progression and fate selection

The tests used in the second step of the workflow are much less compute-intensive, so no parallelisation is provided. However, the other customisations introduced in the previous section are still possible.

Default

prog_res <- progressionTest(sds, conditions = df$conditions)
knitr::kable(prog_res)
|lineage | statistic| p.value|
|:-------|---------:|-------:|
|All     |  5.506366|       0|
dif_res <- fateSelectionTest(sds, conditions = df$conditions)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
## 
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
|pair | statistic| p.value|
|:----|---------:|-------:|
|1vs2 | 0.6836937|       0|

Changing the method and / or threshold

prog_res <- progressionTest(sds, conditions = df$conditions, method = "Classifier")
knitr::kable(prog_res)
|lineage | statistic|  p.value|
|:-------|---------:|--------:|
|All     | 0.6341441| 3.41e-05|
dif_res <- fateSelectionTest(sds, conditions = df$conditions, thresh = .05)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
## 
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
|pair | statistic|   p.value|
|:----|---------:|---------:|
|1vs2 | 0.5941441| 0.0029024|

Passing more parameters to the test methods

prog_res <- progressionTest(sds, conditions = df$conditions, method = "Classifier",
                            args_classifier = list(method = "rf"))
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(prog_res)
|lineage | statistic|   p.value|
|:-------|---------:|---------:|
|All     |  0.517027| 0.3192952|
dif_res <- fateSelectionTest(sds, conditions = df$conditions)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
## 
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
|pair | statistic|  p.value|
|:----|---------:|--------:|
|1vs2 | 0.6431532| 1.04e-05|

Conclusion

For all of the above procedures, it is important to note that we are making multiple comparisons. The p-values we obtain from these tests should be corrected for multiple testing, especially for trajectories with a large number of lineages.
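Base R's p.adjust can be used for this correction; the p-values below are hypothetical, standing in for per-lineage results of progressionTest.

```r
# Hypothetical p-values from running progressionTest on four lineages
pvals <- c(Lineage1 = 0.004, Lineage2 = 0.020,
           Lineage3 = 0.031, Lineage4 = 0.650)
p.adjust(pvals, method = "BH")          # Benjamini-Hochberg FDR adjustment
p.adjust(pvals, method = "bonferroni")  # more conservative family-wise control
```

The Benjamini-Hochberg adjustment is usually preferred here, since controlling the false discovery rate is less punishing than family-wise control when many lineages are tested.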

That said, trajectory inference is often one of the last computational methods in a very long analysis pipeline (generally including gene-level quantification, gene filtering / feature selection, and dimensionality reduction). Hence, we strongly discourage the reader from putting too much faith in any p-value that comes out of this analysis. Such values may be useful suggestions, indicating particular features or cells for follow-up study, but should generally not be treated as meaningful statistical quantities.

If some commands or parameters are still unclear after going through this vignette, do not hesitate to open an issue on the condiments GitHub repository.

Session Info

sessionInfo()
## R version 4.2.0 RC (2022-04-19 r82224)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.4 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.15-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.15-bioc/R/lib/libRlapack.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] caret_6.0-92                lattice_0.20-45            
##  [3] viridis_0.6.2               viridisLite_0.4.0          
##  [5] RColorBrewer_1.1-3          ggplot2_3.3.5              
##  [7] tidyr_1.2.0                 dplyr_1.0.8                
##  [9] slingshot_2.4.0             TrajectoryUtils_1.4.0      
## [11] SingleCellExperiment_1.18.0 SummarizedExperiment_1.26.0
## [13] Biobase_2.56.0              GenomicRanges_1.48.0       
## [15] GenomeInfoDb_1.32.0         IRanges_2.30.0             
## [17] S4Vectors_0.34.0            BiocGenerics_0.42.0        
## [19] MatrixGenerics_1.8.0        matrixStats_0.62.0         
## [21] princurve_2.1.6             condiments_1.4.0           
## [23] knitr_1.38                 
## 
## loaded via a namespace (and not attached):
##   [1] plyr_1.8.7                igraph_1.3.1             
##   [3] splines_4.2.0             BiocParallel_1.30.0      
##   [5] listenv_0.8.0             scater_1.24.0            
##   [7] digest_0.6.29             foreach_1.5.2            
##   [9] htmltools_0.5.2           fansi_1.0.3              
##  [11] magrittr_2.0.3            ScaledMatrix_1.4.0       
##  [13] tensor_1.5                doParallel_1.0.17        
##  [15] limma_3.52.0              recipes_0.2.0            
##  [17] globals_0.14.0            gower_1.0.0              
##  [19] hardhat_0.2.0             spatstat.sparse_2.1-1    
##  [21] colorspace_2.0-3          ggrepel_0.9.1            
##  [23] xfun_0.30                 crayon_1.5.1             
##  [25] RCurl_1.98-1.6            jsonlite_1.8.0           
##  [27] distinct_1.8.0            spatstat_2.3-4           
##  [29] spatstat.data_2.2-0       survival_3.3-1           
##  [31] iterators_1.0.14          glue_1.6.2               
##  [33] polyclip_1.10-0           gtable_0.3.0             
##  [35] ipred_0.9-12              zlibbioc_1.42.0          
##  [37] XVector_0.36.0            DelayedArray_0.22.0      
##  [39] Ecume_0.9.1               BiocSingular_1.12.0      
##  [41] kernlab_0.9-30            future.apply_1.9.0       
##  [43] abind_1.4-5               scales_1.2.0             
##  [45] DBI_1.1.2                 rngtools_1.5.2           
##  [47] spatstat.random_2.2-0     Rcpp_1.0.8.3             
##  [49] spatstat.core_2.4-2       rsvd_1.0.5               
##  [51] proxy_0.4-26              lava_1.6.10              
##  [53] prodlim_2019.11.13        ellipsis_0.3.2           
##  [55] farver_2.1.0              pkgconfig_2.0.3          
##  [57] scuttle_1.6.0             nnet_7.3-17              
##  [59] sass_0.4.1                deldir_1.0-6             
##  [61] utf8_1.2.2                labeling_0.4.2           
##  [63] tidyselect_1.1.2          rlang_1.0.2              
##  [65] reshape2_1.4.4            munsell_0.5.0            
##  [67] tools_4.2.0               cli_3.3.0                
##  [69] generics_0.1.2            evaluate_0.15            
##  [71] stringr_1.4.0             fastmap_1.1.0            
##  [73] yaml_2.3.5                goftest_1.2-3            
##  [75] ModelMetrics_1.2.2.2      transport_0.12-2         
##  [77] randomForest_4.7-1        purrr_0.3.4              
##  [79] RANN_2.6.1                pbapply_1.5-0            
##  [81] future_1.25.0             nlme_3.1-157             
##  [83] doRNG_1.8.2               sparseMatrixStats_1.8.0  
##  [85] compiler_4.2.0            beeswarm_0.4.0           
##  [87] e1071_1.7-9               spatstat.utils_2.3-0     
##  [89] spatstat.linnet_2.3-2     tibble_3.1.6             
##  [91] bslib_0.3.1               stringi_1.7.6            
##  [93] highr_0.9                 Matrix_1.4-1             
##  [95] vctrs_0.4.1               pillar_1.7.0             
##  [97] lifecycle_1.0.1           spatstat.geom_2.4-0      
##  [99] jquerylib_0.1.4           BiocNeighbors_1.14.0     
## [101] irlba_2.3.5               data.table_1.14.2        
## [103] bitops_1.0-7              R6_2.5.1                 
## [105] gridExtra_2.3             vipor_0.4.5              
## [107] parallelly_1.31.1         codetools_0.2-18         
## [109] MASS_7.3-57               assertthat_0.2.1         
## [111] withr_2.5.0               GenomeInfoDbData_1.2.8   
## [113] mgcv_1.8-40               parallel_4.2.0           
## [115] grid_4.2.0                rpart_4.1.16             
## [117] beachmat_2.12.0           timeDate_3043.102        
## [119] class_7.3-20              rmarkdown_2.14           
## [121] DelayedMatrixStats_1.18.0 pROC_1.18.0              
## [123] lubridate_1.8.0           ggbeeswarm_0.6.0

References

Lopez-Paz, David, and Maxime Oquab. 2016. “Revisiting Classifier Two-Sample Tests.” arXiv, October, 1–15. http://arxiv.org/abs/1610.06545.

Smirnov, Nikolai V. 1939. “On the Estimation of the Discrepancy Between Empirical Curves of Distribution for Two Independent Samples.” Bull. Math. Univ. Moscou 2 (2): 3–14.