We report a method to identify specific transcriptomic phenotypes, based on an elitist non-dominated sorting genetic algorithm, that combines the advantages of clustering methods with the exploratory properties of genetic algorithms to discover biologically and clinically relevant molecular subtypes in different cancers.
In the era of omics data, precision medicine has become the new paradigm of cancer treatment. Among all available omics techniques, gene expression profiling in particular has been increasingly used to classify tumor subtypes with different biological behavior. Cancer subtype discovery is usually approached from one of two perspectives:
- Using the molecular data alone with unsupervised techniques such as clustering analysis.
- Using supervised techniques focusing entirely on survival data.
The problem of finding patient subgroups with survival differences while maintaining cluster consistency can be viewed as a bi-objective problem, in which there is a trade-off between the separability of the different groups and the ability of a given signature to consistently distinguish patients with different clinical outcomes. This gives rise to a set of optimal solutions, also known as Pareto-optimal solutions. To address this trade-off, we combined the advantages of clustering methods for grouping heterogeneous omics data with the search properties of genetic algorithms in GSgalgoR: a flexible yet robust multi-objective meta-heuristic for disease subtype discovery based on an elitist non-dominated sorting genetic algorithm (NSGA-II), driven by the underlying premise of maximizing survival differences between groups while achieving high consistency and robustness of the obtained clusters.
In the GSgalgoR package, the NSGA-II framework is used to find multiple Pareto-optimal solutions that classify patients according to their gene expression patterns. Basically, NSGA-II starts with a population of competing individuals, which are evaluated under a set of fitness functions that estimate the survival differences and the cohesiveness of the different transcriptomic groups. Solutions are then ranked and sorted according to their non-domination level, which affects how they are chosen to be submitted to the so-called “evolutionary operators” such as crossover and mutation. Once a set of well-suited solutions is selected and reproduced, a new offspring of individuals, composed of a mixture of the “genetic information” of the parents, is obtained. Parents and offspring are pooled, and the best-ranked solutions are selected and passed to the next generation, which starts the same process over again.
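To make the ranking step concrete, the following minimal sketch computes non-domination levels (Pareto ranks) for a toy set of two-objective fitness values. It is illustrative only: the pareto_rank() function and the fitness values are made up for this example and are not part of the GSgalgoR API.
# Toy non-dominated sorting over two objectives (both to be maximized)
# Illustrative only; not the GSgalgoR implementation
pareto_rank <- function(obj) {
    n <- nrow(obj)
    rank <- rep(NA_integer_, n)
    remaining <- seq_len(n)
    level <- 1L
    while (length(remaining) > 0) {
        dominated <- sapply(remaining, function(i) {
            any(sapply(remaining, function(j) {
                all(obj[j, ] >= obj[i, ]) && any(obj[j, ] > obj[i, ])
            }))
        })
        rank[remaining[!dominated]] <- level  # current Pareto front
        remaining <- remaining[dominated]     # dominated solutions go to the next level
        level <- level + 1L
    }
    rank
}
# Six hypothetical solutions scored on (survival fitness, cohesiveness fitness)
fit <- cbind(surv = c(0.9, 0.7, 0.5, 0.8, 0.4, 0.6),
             coh = c(0.2, 0.6, 0.9, 0.5, 0.3, 0.7))
pareto_rank(fit)
#> [1] 1 1 1 1 2 1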
To install the GSgalgoR package, start R and enter:
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("GSgalgoR")
library(GSgalgoR)
Alternatively, you can install GSgalgoR from GitHub using the devtools package:
devtools::install_github("https://github.com/harpomaxx/GSgalgoR")
library(GSgalgoR)
To standardize the structure of genomic data, we use the ExpressionSet structure for the examples given in this guide. ExpressionSet objects are formed mainly by:
- assayData: the expression matrix, accessed with exprs().
- phenoData: sample-level (clinical) information, accessed with pData().
- featureData: feature (probe/gene) annotation, accessed with fData().
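As a minimal illustration of how these pieces fit together, a toy ExpressionSet (not used elsewhere in this guide) can be assembled from its three components:
library(Biobase)
# Toy example: 3 features measured on 4 samples
expr_matrix <- matrix(rnorm(12), nrow = 3,
                      dimnames = list(paste0("gene", 1:3), paste0("sample", 1:4)))
pheno <- data.frame(time = c(100, 250, 300, 80), event = c(1, 0, 1, 1),
                    row.names = colnames(expr_matrix))
features <- data.frame(Gene.symbol = rownames(expr_matrix),
                       row.names = rownames(expr_matrix))
toy_eset <- ExpressionSet(assayData = expr_matrix,
                          phenoData = AnnotatedDataFrame(pheno),
                          featureData = AnnotatedDataFrame(features))
exprs(toy_eset)  # the expression matrix
pData(toy_eset)  # the clinical (phenotype) data
fData(toy_eset)  # the feature annotation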
To start testing GSgalgoR, we will use two breast cancer datasets, namely the UPP and TRANSBIG datasets. Additionally, we will use the PAM50 centroids to perform breast cancer sample classification. The datasets can be accessed from the following Bioconductor packages:
BiocManager::install("breastCancerUPP",version = "devel")
BiocManager::install("breastCancerTRANSBIG",version = "devel")
library(breastCancerTRANSBIG)
library(breastCancerUPP)
Also, some additional packages are needed to run the examples in this vignette:
library(GSgalgoR)
library(Biobase)
library(genefu)
library(survival)
library(survminer)
library(ggplot2)
data(pam50)
To access the ExpressionSets we use:
data(upp)
Train<- upp
rm(upp)
data(transbig)
Test<- transbig
rm(transbig)
#To access gene expression data
train_expr<- exprs(Train)
test_expr<- exprs(Test)
#To access feature data
train_features<- fData(Train)
test_features<- fData(Test)
#To access clinical data
train_clinic <- pData(Train)
test_clinic <- pData(Test)
Galgo can accept any numeric data, such as probe intensities from microarray experiments or normalized RNA-seq counts; nevertheless, features are expected to be scaled across the dataset before being fed into the Galgo framework. For PAM50 classification, gene symbols are expected, so probesets are mapped to their respective gene symbols. Probesets mapping to multiple genes are expanded, while genes mapped to multiple probesets are collapsed by keeping, for each duplicated gene, the probeset with the highest variance.
# Custom function to drop duplicated genes (keep the highest-variance probe per gene)
DropDuplicates <- function(eset, map = "Gene.symbol"){
    # Drop features with a missing gene symbol
    drop <- which(is.na(fData(eset)[, map]))
    if (length(drop) > 0) eset <- eset[-drop, ]
    # Drop duplicates, keeping the probe with the highest variance for each gene
    drop <- NULL
    Dup <- as.character(unique(fData(eset)[which(duplicated(fData(eset)[, map])), map]))
    Var <- apply(exprs(eset), 1, var)
    for (j in Dup) {
        pos <- which(fData(eset)[, map] == j)
        drop <- c(drop, pos[-which.max(Var[pos])])
    }
    if (length(drop) > 0) eset <- eset[-drop, ]
    featureNames(eset) <- fData(eset)[, map]
    return(eset)
}
# Custom function to expand probesets mapping to multiple genes
expandProbesets <- function(eset, sep = "///", map = "Gene.symbol"){
    x <- lapply(featureNames(eset), function(x) strsplit(x, sep)[[1]])
    y <- lapply(as.character(fData(eset)[, map]), function(x) strsplit(x, sep))
    eset <- eset[order(sapply(x, length)), ]
    x <- lapply(featureNames(eset), function(x) strsplit(x, sep)[[1]])
    y <- lapply(as.character(fData(eset)[, map]), function(x) strsplit(x, sep))
    # Repeat each row index once per gene mapped to that probeset
    idx <- unlist(sapply(1:length(x), function(i) rep(i, length(x[[i]]))))
    idy <- unlist(sapply(1:length(y), function(i) rep(i, length(y[[i]]))))
    # Keep only the first occurrence of each gene symbol
    xx <- !duplicated(unlist(x))
    idx <- idx[xx]
    idy <- idy[xx]
    x <- unlist(x)[xx]
    y <- unlist(y)[xx]
    eset <- eset[idx, ]
    featureNames(eset) <- x
    fData(eset)[, map] <- x
    fData(eset)$gene <- y
    return(eset)
}
Train <- DropDuplicates(Train)
Train <- expandProbesets(Train)
# Drop samples with NAs in survival
Train <- Train[, !is.na(
    survival::Surv(time = pData(Train)$t.rfs, event = pData(Train)$e.rfs))]
Test <- DropDuplicates(Test)
Test <- expandProbesets(Test)
# Drop samples with NAs in survival
Test <- Test[, !is.na(survival::Surv(
    time = pData(Test)$t.rfs, event = pData(Test)$e.rfs))]
# Determine common probes (genes)
Int <- intersect(rownames(Train), rownames(Test))
Train <- Train[Int, ]
Test <- Test[Int, ]
identical(rownames(Train),rownames(Test))
#> [1] TRUE
For simplicity and speed, we will create a reduced expression matrix for the examples.
#First we will get PAM50 centroids from genefu package
PAM50Centroids <- pam50$centroids
PAM50Genes <- pam50$centroids.map$probe
PAM50Genes<- featureNames(Train)[ featureNames(Train) %in% PAM50Genes]
#Now we sample 200 random genes from expression matrix
Non_PAM50Genes<- featureNames(Train)[ !featureNames(Train) %in% PAM50Genes]
Non_PAM50Genes <- sample(Non_PAM50Genes,200, replace=FALSE)
reduced_set <- c(PAM50Genes, Non_PAM50Genes)
#Now we get the reduced training and test sets
Train<- Train[reduced_set,]
Test<- Test[reduced_set,]
Next, we apply robust linear scaling to each feature, as proposed in the original publication:
exprs(Train) <- t(apply(exprs(Train),1,genefu::rescale,na.rm=TRUE,q=0.05))
exprs(Test) <- t(apply(exprs(Test),1,genefu::rescale,na.rm=TRUE,q=0.05))
train_expr <- exprs(Train)
test_expr <- exprs(Test)
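As an optional sanity check, we can inspect the distribution of the rescaled values; with q = 0.05, the bulk of the values is expected to fall roughly within the [0, 1] interval, with only the most extreme values lying outside it.
# Optional check of the rescaled training values
summary(as.vector(train_expr))
range(train_expr, na.rm = TRUE)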
The ‘Surv’ object is created by the Surv() function of the survival package, using the phenotypic data contained in the corresponding datasets (accessed with pData()).
train_clinic <- pData(Train)
test_clinic <- pData(Test)
train_surv <- survival::Surv(time=train_clinic$t.rfs,event=train_clinic$e.rfs)
test_surv <- survival::Surv(time=test_clinic$t.rfs,event=test_clinic$e.rfs)
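We can take a quick look at the resulting object; in the printed Surv object, a “+” suffix marks censored observations.
head(train_surv)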
The main function in this package is galgo(). It accepts an expression matrix and a survival object to find robust gene expression signatures related to a given outcome. The function has several parameters that can be adjusted according to the characteristics of the analysis to be performed (see ?GSgalgoR::galgo for the complete list of arguments). The principal parameters are:
- population: the number of candidate solutions (individuals) in each generation.
- generations: the number of iterations (generations) the algorithm is run for.
- nCV: the number of cross-validation folds used to assess the consistency of the clusters.
- distancetype: the distance metric used for clustering (e.g. "pearson").
- TournamentSize: the number of solutions that compete in each tournament selection round.
- period: the time period (in days) over which survival differences are evaluated.
# For testing reasons it is set to a low number but ideally should be above 100
population <- 30
# For testing reasons it is set to a low number but ideally should be above 150
generations <- 15
nCV <- 5
distancetype <- "pearson"
TournamentSize <- 2
period <- 3650
set.seed(264)
output <- GSgalgoR::galgo(generations = generations,
population = population,
prob_matrix = train_expr,
OS = train_surv,
nCV = nCV,
distancetype = distancetype,
TournamentSize = TournamentSize,
period = period)
print(class(output))
#> [1] "galgo.Obj"
#> attr(,"package")
#> [1] "GSgalgoR"
The output of the galgo() function is an object of class galgo.Obj with two slots:
- Solutions: an l x (n + 5) matrix, where n is the number of features evaluated and l is the number of solutions obtained.
- ParetoFront: a list of length equal to the number of generations run in the algorithm; each element is an l x 2 matrix, where l is the number of solutions obtained and the columns are the SC Fitness and the Survival Fitness values, respectively.
For easier interpretation of the galgo.Obj, the output can be transformed into a list or a data.frame object.
The to_list() function restructures a galgo.Obj into an easier-to-understand and easier-to-use list. This output is particularly useful if one wants to select a given solution and use its components in a new classifier. The resulting list has length equal to the number of solutions obtained by the galgo algorithm. Basically, this output is a list of lists, where each element is named after the corresponding solution (Solution.n, where n is the number assigned to that solution) and contains all the constituents of that given solution, with the following structure:
outputList <- to_list(output)
head(names(outputList))
#> [1] "Solution.1" "Solution.2" "Solution.3" "Solution.4" "Solution.5"
#> [6] "Solution.6"
To evaluate the structure of the first solution we can run:
outputList[["Solution.1"]]
#> $Genes
#> [1] "MYBL2" "RRM2" "BIRC5" "CDC20" "CDC6" "ESR1"
#> [7] "MAPT" "PGR" "GRB7" "MKI67" "NAT1" "CCNB1"
#> [13] "FOXA1" "MDM2" "BTN2A3" "WDR13" "RPS4XP13" "KCMF1"
#> [19] "CLEC4A" "NT5C2" "NARFL" "POLR1D" "SFN" "PCDHGA10"
#> [25] "MB" "WTAP" "ATXN3" "DHRS11" "SH3GLB1" "ELOVL4"
#> [31] "TRIB1" "PODNL1" "ADRA1A" "HPR" "DPF2" "DDX25"
#> [37] "TCL1A" "ZNF135" "PYY" "NDUFS8" "SCML2" "CRBN"
#> [43] "SEC61B" "RB1CC1" "IL19" "GOLGA8F" "SNRPD3" "CCDC69"
#> [49] "THAP10" "KIAA0087" "MORF4" "TMEM143" "NDUFA5" "ERH"
#> [55] "PAPOLA" "ADAMTS5" "AQP7" "FRAS1" "GFM1" "ELSPBP1"
#> [61] "ACTG1" "KIAA0090" "HIBCH" "FHOD1" "CCNL1" "SLC16A6"
#> [67] "NFATC3" "ZNF292" "OBFC2B" "HEG1" "MTOR" "ZNF32"
#> [73] "MOBKL1B" "TGM2" "ARHGAP19" "TPSB2" "DPYSL2" "PDF"
#> [79] "NUPR1" "DENND2D" "OAZ3" "SSX1" "PIN1" "HAGH"
#> [85] "KLHL9" "FLJ13224" "MGC31957" "POFUT1" "HIST1H4B" "RPS14P3"
#> [91] "RPS28P6" "SEPX1" "TPCN1" "HBP1" "ATP9B" "RXFP3"
#> [97] "LMAN1" "AGPAT4" "SEMA4A"
#>
#> $k
#> [1] 10
#>
#> $SC.Fit
#> [1] 0.03629928
#>
#> $Surv.Fit
#> [1] 860.0639
#>
#> $rank
#> [1] 1
#>
#> $CrowD
#> [1] Inf
The to_dataframe() function restructures a galgo.Obj into an easier-to-use data.frame. The output data frame has m x n dimensions, where the row names (m) are the solutions obtained by the galgo algorithm and the columns (n) are:
- Genes: the genes included in the solution.
- k: the number of partitions (clusters) of the solution.
- SC.Fit: the cluster cohesiveness (SC) fitness value of the solution.
- Surv.Fit: the survival fitness value of the solution.
- Rank: the non-domination (Pareto) rank of the solution.
- CrowD: the crowding distance of the solution.
outputDF <- to_dataframe(output)
head(outputDF)
#> Genes k SC.Fit Surv.Fit Rank CrowD
#> Solutions.1 MYBL2, R.... 10 0.03629928 860.0639 1 Inf
#> Solutions.2 RRM2, BI.... 2 0.17500850 470.7737 1 Inf
#> Solutions.3 RRM2, BI.... 4 0.09019771 612.8622 1 0.6369769
#> Solutions.4 RRM2, EX.... 2 0.14824914 587.8288 1 0.5689651
#> Solutions.5 RRM2, SL.... 4 0.08172465 790.7670 1 0.5240323
#> Solutions.6 PHGDH, M.... 6 0.04684246 834.5560 1 0.3308023
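As an illustrative usage example (not part of the original workflow), one could select, among the rank-1 solutions, the one with the highest survival fitness and retrieve its gene signature; this assumes the Genes column stores the gene lists, as suggested by the abbreviated printout above.
# Illustrative: pick the rank-1 solution with the highest survival fitness
best <- outputDF[outputDF$Rank == 1, ]
best <- best[which.max(best$Surv.Fit), ]
best$k               # number of partitions of the selected solution
unlist(best$Genes)   # genes included in the selected signature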
Once we obtain the galgo.Obj from the output of galgo(), we can plot the obtained Pareto front and see how it evolved through the tested number of generations:
plot_pareto(output)
Breast cancer (BRCA) is the most common neoplasm in women to date and one of the best-studied cancer types. Currently, numerous molecular alterations for this type of cancer are well known and many transcriptomic signatures have been developed for it. In this regard, Perou et al. proposed one of the first molecular subtype classifications according to the transcriptomic profiles of the tumor, which recapitulates naturally-occurring gene expression patterns that encompass different functional pathways and patient outcomes. These subtypes (LumA, LumB, Basal-like, HER2 and Normal-like) have a strong overlap with the classical histopathological classification of BRCA tumors and might affect decision making when used to decide on chemotherapy in certain cases.
To evaluate Galgo’s performance alongside PAM50 classification, we will use the two already scaled and reduced BRCA gene expression datasets and compare Galgo’s performance with the widely used intrinsic molecular subtype PAM50 classification. Galgo performs feature selection by design, so this step is not strictly necessary to use GSgalgoR (although feature selection might speed up GSgalgoR runs); nevertheless, appropriate gene expression scaling is critical when running GSgalgoR.
The scaled expression values of each patient are compared with the prototypical centroids using a correlation-based similarity (Spearman’s rank correlation in the example below), and the closest centroid to each patient is used to assign the corresponding subtype label.
#The reduced UPP dataset will be used as training set
train_expression <- exprs(Train)
train_clinic<- pData(Train)
train_features<- fData(Train)
train_surv<- survival::Surv(time=train_clinic$t.rfs,event=train_clinic$e.rfs)
#The reduced TRANSBIG dataset will be used as test set
test_expression <- exprs(Test)
test_clinic<- pData(Test)
test_features<- fData(Test)
test_surv<- survival::Surv(time=test_clinic$t.rfs,event=test_clinic$e.rfs)
#PAM50 centroids
centroids<- pam50$centroids
#Extract features from both data.frames
inBoth<- Reduce(intersect, list(rownames(train_expression),rownames(centroids)))
#Classify samples
PAM50_train<- cluster_classify(train_expression[inBoth,],centroids[inBoth,],
method = "spearman")
table(PAM50_train)
#> PAM50_train
#> 1 2 3 4 5
#> 22 30 94 73 15
PAM50_test<- cluster_classify(test_expression[inBoth,],centroids[inBoth,],
method = "spearman")
table(PAM50_test)
#> PAM50_test
#> 1 2 3 4 5
#> 45 26 80 44 3
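For readability, the numeric labels can be mapped back to the PAM50 subtype names; this assumes (illustrative only) that the labels returned by cluster_classify() index the columns of the centroid matrix.
# Illustrative: map numeric labels to centroid (subtype) names
colnames(centroids)
table(factor(PAM50_train, levels = seq_len(ncol(centroids)),
             labels = colnames(centroids)))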
# Classify samples using genefu
#annot<- fData(Train)
#colnames(annot)[3]="Gene.Symbol"
#PAM50_train<- molecular.subtyping(sbt.model = "pam50",
# data = t(train_expression), annot = annot,do.mapping = TRUE)
Once the patients are classified according to their closest centroid, we can evaluate the survival curves of the different subtypes in each of the datasets:
surv_formula <-
as.formula("Surv(train_clinic$t.rfs,train_clinic$e.rfs)~ PAM50_train")
tumortotal1 <- surv_fit(surv_formula,data=train_clinic)
tumortotal1diff <- survdiff(surv_formula)
tumortotal1pval<- pchisq(tumortotal1diff$chisq, length(tumortotal1diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal1,
data=train_clinic,
risk.table=TRUE,
pval=TRUE,
palette="dark2",
title="UPP breast cancer \n PAM50 subtypes survival",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,censor=FALSE)
print(p)
surv_formula <-
as.formula("Surv(test_clinic$t.rfs,test_clinic$e.rfs)~ PAM50_test")
tumortotal2 <- surv_fit(surv_formula,data=test_clinic)
tumortotal2diff <- survdiff(surv_formula)
tumortotal2pval<- pchisq(tumortotal2diff$chisq, length(tumortotal2diff$n) - 1,
lower.tail = FALSE)
p<-ggsurvplot(tumortotal2,
data=test_clinic,
risk.table=TRUE,
pval=TRUE,
palette="dark2",
title="TRANSBIG breast cancer \n PAM50 subtypes survival",
surv.scale="percent",
conf.int=FALSE,
xlab="time (days)",
ylab="survival(%)",
xlim=c(0,3650),
break.time.by = 365,
ggtheme = theme_minimal(),
risk.table.y.text.col = TRUE,
risk.table.y.text = FALSE,
censor=FALSE)
print(p)