This vignette demonstrates the use of the pengls package for high-dimensional data with spatial or temporal autocorrelation. The method consists of an iterative loop around the nlme and glmnet packages. Currently, only continuous outcomes are supported, with \(R^2\) and MSE as performance measures.
The pengls package is available from Bioconductor, and can be installed as follows:
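# Standard Bioconductor installation
if(!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("pengls")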
Once installed, it can be loaded and version info printed.
suppressPackageStartupMessages(library(pengls))
cat("pengls package version", as.character(packageVersion("pengls")), "\n")
## pengls package version 1.4.0
We first create a toy dataset with spatial coordinates.
library(nlme)
n <- 25 #Sample size
p <- 50 #Number of features
g <- 15 #Size of the grid
#Generate grid
Grid <- expand.grid("x" = seq_len(g), "y" = seq_len(g))
# Sample points from grid without replacement
GridSample <- Grid[sample(nrow(Grid), n, replace = FALSE),]
#Generate outcome and regressors
b <- matrix(rnorm(p*n), n, p)
a <- rnorm(n, mean = b %*% rbinom(p, size = 1, prob = 0.25), sd = 0.1) # On average 25% of features carry signal
#Combine outcome, regressors and coordinates into a data frame
df <- data.frame("a" = a, "b" = b, GridSample)
The pengls method requires prespecification of a functional form for the autocorrelation. This is done through the corStruct objects defined by the nlme package. We specify a correlation that decays as a Gaussian curve with distance, with a nugget parameter. The nugget parameter is a proportion indicating how much of the correlation structure is explained by independent errors; the rest is attributed to spatial autocorrelation. The starting values are chosen as reasonable guesses; they are updated during the fitting process.
# Define the correlation structure (see ?nlme::gls), with initial nugget 0.5 and range 5
corStruct <- corGaus(form = ~ x + y, nugget = TRUE, value = c("range" = 5, "nugget" = 0.5))
Finally, the model is fitted with a single outcome variable and a large number of regressors, using the chosen correlation structure and a prespecified penalty parameter \(\lambda=0.2\).
#Fit the pengls model, for simplicity for a simple lambda
penglsFit <- pengls(data = df, outVar = "a",
                    xNames = grep(names(df), pattern = "b", value = TRUE),
                    glsSt = corStruct, lambda = 0.2, verbose = TRUE)
## Starting iterations...
## Iteration 1
## Iteration 2
## Iteration 3
## Iteration 4
## Iteration 5
## Iteration 6
## Iteration 7
## Iteration 8
## Iteration 9
## Iteration 10
## Iteration 11
## Iteration 12
## Iteration 13
## Iteration 14
## Iteration 15
## Iteration 16
## Iteration 17
## Iteration 18
## Iteration 19
## Iteration 20
## Iteration 21
## Iteration 22
## Iteration 23
## Iteration 24
## Iteration 25
## Iteration 26
## Iteration 27
## Iteration 28
## Iteration 29
## Iteration 30
## Warning in pengls(data = df, outVar = "a", xNames = grep(names(df), pattern = "b", : No convergence achieved in pengls!
Standard extraction functions like print(), coef() and predict() are defined for the new “pengls” object.
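For instance, printing the fitted model:

print(penglsFit)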
## pengls model with correlation structure: corGaus
## and 49 non-zero coefficients
The method can also account for temporal autocorrelation by defining another correlation structure from the nlme package, e.g. an autoregressive correlation structure of order 1:
set.seed(354509)
n <- 100 #Sample size
p <- 10 #Number of features
#Generate outcome and regressors
b <- matrix(rnorm(p*n), n, p)
a <- rnorm(n, mean = b %*% rbinom(p, size = 1, prob = 0.25), sd = 0.1) # On average 25% of features carry signal
#Combine outcome, regressors and time into a data frame
dfTime <- data.frame("a" = a, "b" = b, "t" = seq_len(n))
corStructTime <- corAR1(form = ~ t, value = 0.5)
The fitting command is similar, but this time the \(\lambda\) parameter is found through cross-validation of the naive glmnet fit (for full cross-validation, see below). We choose \(\alpha = 0.5\), fitting an elastic net model.
penglsFitTime <- pengls(data = dfTime, outVar = "a", verbose = TRUE,
xNames = grep(names(dfTime), pattern = "b", value =TRUE),
glsSt = corStructTime, nfolds = 5, alpha = 0.5)
## Fitting naieve model...
## Starting iterations...
## Iteration 1
## Iteration 2
Show the output:
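print(penglsFitTime)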
## pengls model with correlation structure: corAR1
## and 2 non-zero coefficients
The pengls package also provides cross-validation for finding the optimal \(\lambda\) value. If the tuning parameter \(\lambda\) is not supplied, the optimal \(\lambda\) according to cross-validation with the naive glmnet function (the one that ignores the dependence) is used. Hence we recommend using the following function for full cross-validation. Multithreading is supported through the BiocParallel package:
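For example, a multicore backend can be registered (on Windows, a SnowParam backend could be used instead):

library(BiocParallel)
register(MulticoreParam(2)) # Register 2 parallel workers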
The function is called similarly to cv.glmnet:
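A call along the following lines, reusing the spatial toy data and Gaussian correlation structure from above with three folds:

nfolds <- 3 # Number of cross-validation folds
penglsFitCV <- cv.pengls(data = df, outVar = "a",
    xNames = grep(names(df), pattern = "b", value = TRUE),
    glsSt = corStruct, nfolds = nfolds)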
Check the result:
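print(penglsFitCV)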
## Cross-validated pengls model with correlation structure: corGaus
## and 0 non-zero coefficients.
## 3 fold cross-validation yielded an estimated R2 of -0.2438712 .
By default, the one-standard-error rule is used to determine the optimal value of \(\lambda\):
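The relevant elements can be read off the fitted object; the lambda.1se name below is an assumption following the cv.glmnet convention, while cvOpt is the element used later in this vignette:

penglsFitCV$lambda.1se # Optimal lambda according to the one-standard-error rule (name assumed)
penglsFitCV$cvOpt # Corresponding cross-validated R2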
## [1] 1.468952
## [1] -0.2438712
Extract the coefficients and fold IDs:
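For instance (the foldid element name is an assumption, mirroring cv.glmnet):

head(coef(penglsFitCV)) # First few coefficients
penglsFitCV$foldid # Fold assignment of each observation (element name assumed)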
## [1] 0 0 0 0 0 0
## 204 76 17 221 192 56 52 201 18 15 54 7 190 16 143 186 220 74 80 133
## 1 3 3 1 1 2 2 1 3 2 2 3 1 3 1 1 1 2 3 1
## 132 1 71 43 127
## 1 3 2 2 1
By default, blocked cross-validation is used, but random cross-validation is also available (although it is not recommended for time course or spatial data). First we illustrate the two fold assignments graphically, again using the time course example:
set.seed(5657)
randomFolds <- makeFolds(nfolds = nfolds, dfTime, "random", "t")
blockedFolds <- makeFolds(nfolds = nfolds, dfTime, "blocked", "t")
plot(dfTime$t, randomFolds, xlab ="Time", ylab ="Fold")
points(dfTime$t, blockedFolds, col = "red")
legend("topleft", legend = c("random", "blocked"), pch = 1, col = c("black", "red"))
To perform random cross-validation, a call like the following can be used:
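penglsFitCVrandom <- cv.pengls(data = dfTime, outVar = "a",
    xNames = grep(names(dfTime), pattern = "b", value = TRUE),
    glsSt = corStructTime, nfolds = nfolds, cvType = "random")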
To negate baseline differences at different timepoints, it may be useful to center or scale the outcomes in the cross-validation. For instance, for centering only:
penglsFitCVtimeCenter <- cv.pengls(data = dfTime, outVar = "a",
    xNames = grep(names(dfTime), pattern = "b", value = TRUE),
    glsSt = corStructTime, nfolds = nfolds, cvType = "blocked",
    transFun = function(x) x - mean(x))
penglsFitCVtimeCenter$cvOpt #Better performance
## [1] 0.9949131
Alternatively, the mean squared error (MSE) can be used as the loss function, rather than the default \(R^2\):
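A sketch of such a call; the loss argument and its "MSE" value are assumed here:

penglsFitCVtimeMSE <- cv.pengls(data = dfTime, outVar = "a",
    xNames = grep(names(dfTime), pattern = "b", value = TRUE),
    glsSt = corStructTime, nfolds = nfolds,
    cvType = "blocked", loss = "MSE") # loss = "MSE" is an assumed argument value

Finally, the session info:

sessionInfo()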
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.5 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.16-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.16-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] BiocParallel_1.32.0 nlme_3.1-160 pengls_1.4.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.9 knitr_1.40 magrittr_2.0.3 splines_4.2.1
## [5] lattice_0.20-45 R6_2.5.1 rlang_1.0.6 fastmap_1.1.0
## [9] foreach_1.5.2 highr_0.9 stringr_1.4.1 tools_4.2.1
## [13] parallel_4.2.1 grid_4.2.1 glmnet_4.1-4 xfun_0.34
## [17] cli_3.4.1 jquerylib_0.1.4 htmltools_0.5.3 iterators_1.0.14
## [21] survival_3.4-0 yaml_2.3.6 digest_0.6.30 Matrix_1.5-1
## [25] sass_0.4.2 codetools_0.2-18 shape_1.4.6 cachem_1.0.6
## [29] evaluate_0.17 rmarkdown_2.17 stringi_1.7.8 compiler_4.2.1
## [33] bslib_0.4.0 jsonlite_1.8.3