The identification of novel compound-protein interactions (CPIs) is important in drug discovery. Revealing unknown CPIs makes it possible to design a new drug for a target protein by screening candidate compounds, and accurate CPI prediction thus supports an effective drug discovery process. To identify potential CPIs efficiently, prediction methods based on machine learning and deep learning have been developed. Sequences are provided as discrete symbolic data: compounds are represented as SMILES (simplified molecular-input line-entry system) strings, and proteins as sequences whose characters are amino acids. The outcome is a variable that indicates how strongly two molecules interact with each other, or whether there is an interaction between them at all. In this package, a deep-learning-based model that takes only the sequence information of compounds and proteins as input, and the outcome as output, is used to predict CPIs. The model is implemented using compound and protein encoders with useful features. It also supports other modeling tasks, including protein-protein interactions (PPIs), chemical-chemical interactions (CCIs), and single compounds or proteins. Although the model is designed for proteins, DNA and RNA can be used as well if they are represented as sequences.
The multilayer perceptron (MLP) is the simplest form of neural network, consisting only of fully connected layers. A convolutional neural network (CNN) replaces the fully connected layers in the initial phase of the network with convolutional layers. Recurrent neural networks (RNNs) are distinguished from other classes by the presence of components with memory, such as long short-term memory (LSTM) units and gated recurrent units (GRUs). The graph neural network (GNN) is a class of deep learning methods designed to perform inference on graph data. The graph convolutional network (GCN) is a type of GNN that works directly on graphs and takes advantage of their structural information; it learns hidden-layer representations that capture both graph structure and node features. Compounds can be represented as graphs, so a GCN can be used to learn compound-protein interactions. In molecular graph representations, nodes represent atoms and edges represent bonds. Besides graphs, a molecular fingerprint can be extracted from a SMILES string. A molecular fingerprint is a way of encoding the structural features of a molecule: fingerprints are special kinds of descriptors that characterize a molecule and its properties as a binary bit vector representing the presence or absence of particular substructures in the molecule. The Chemistry Development Kit (CDK) is an open-source toolkit for bioinformatics and cheminformatics. It accepts the SMILES notation of a molecule as input to calculate molecular descriptors and fingerprints or to obtain structural information, and it is available in R through the rcdk package.
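For illustration, a fingerprint can be computed directly with rcdk. The following minimal sketch parses a SMILES string and extracts a binary fingerprint; the aspirin SMILES string and the "standard" fingerprint type are chosen only as examples.
mol <- rcdk::parse.smiles("CC(=O)Oc1ccccc1C(=O)O")[[1]]  # aspirin, for illustration
fp <- rcdk::get.fingerprint(mol, type = "standard")      # 1024-bit path-based fingerprint
fp@nbit  # length of the bit vector
fp@bits  # positions of the bits set to 1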
The “example_cpi” dataset contains compound-protein pairs and their interactions. Each line has a SMILES string, an amino acid sequence, and a label, where the label 1 means that the pair has an interaction and 0 means that it does not. In order to avoid overfitting, the data are split into a training sample and a validation sample. The training sample is used to estimate the parameters or weights of the deep learning model. The validation sample is an independent sample, set aside to monitor the misclassification error. Here, 70% of the data are used for training and 30% for validation.
if (keras::is_keras_available() & reticulate::py_available()) {
  library(DeepPINCS)
  example_cpi <- example_cpi[1:500,]
  validation_split <- 0.3
  idx <- sample(seq_len(length(example_cpi[,1])))
  train_idx <- seq_len(length(example_cpi[,1])) %in%
    idx[seq_len(round(length(example_cpi[,1]) * (1 - validation_split)))]
}
## Loading required package: keras
The input sequences are fed into encoders. We need to provide a list of arguments for the compound and protein encoder networks and for the fully connected layers. For the compound and protein networks, the input and output tensors of the encoders are required. Here, we can choose among the graph convolutional network (GCN), recurrent neural network (RNN), convolutional neural network (CNN), and multilayer perceptron (MLP) as encoders, or use our own encoders. Note that the GCN is only available for compounds. The arguments “compound” and “protein” take encoder functions that build the input and output tensors, and “compound_args” and “protein_args” are the arguments of those functions. The compound and protein encoders are concatenated and passed to the dense layers; a dense layer is a fully connected layer. We then need to compile the model. This step configures the learning process for our neural network architecture and requires three arguments: the optimization algorithm to be used, the loss function to be optimized, and the metrics for evaluation.
if (keras::is_keras_available() & reticulate::py_available()) {
  net_args <- list(
    compound = "gcn_in_out",
    compound_args = list(
      gcn_units = c(128, 64),
      gcn_activation = c("relu", "relu"),
      fc_units = c(10),
      fc_activation = c("relu")),
    protein = "cnn_in_out",
    protein_args = list(
      cnn_filters = c(32),
      cnn_kernel_size = c(3),
      cnn_activation = c("relu"),
      fc_units = c(10),
      fc_activation = c("relu")),
    fc_units = c(1),
    fc_activation = c("sigmoid"),
    loss = "binary_crossentropy",
    optimizer = keras::optimizer_adam(),
    metrics = "accuracy")
}
For example, consider the GCN for compounds and the CNN for proteins. For the GCN, we have to set the maximum number of atoms, since compounds do not all have the same number of atoms. The degree of an atom in the graph representation, together with the atomic symbol and implicit hydrogen count of the atom, is used as molecular features. Similarly, we define the maximum number of amino acids, because proteins come in different lengths. An embedding layer helps us map strings to vectors, and for it we need to provide the dimension of the dense embedding. The n-gram is available only for protein sequences. We can also use “callbacks”: callback functions can terminate the training process, modify the learning rate, and save snapshots of the best version of our model during training.
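As a rough illustration of the embedding step (this sketch is not the package's internal code; the vocabulary size of 21 and the shapes are assumptions), an embedding layer turns a length-100 vector of integer-encoded residues into a 100 x 16 matrix of dense vectors:
inp <- keras::layer_input(shape = c(100))  # integer-encoded sequence of length 100
emb <- keras::layer_embedding(inp,
  input_dim = 21,   # assumed vocabulary size (20 amino acids plus padding)
  output_dim = 16)  # dimension of the dense embedding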
if (keras::is_keras_available() & reticulate::py_available()) {
  compound_max_atoms <- 50
  protein_embedding_dim <- 16
  protein_length_seq <- 100
  gcn_cnn_cpi <- fit_cpi(
    smiles = example_cpi[train_idx, 1],
    AAseq = example_cpi[train_idx, 2],
    outcome = example_cpi[train_idx, 3],
    compound_type = "graph",
    compound_max_atoms = compound_max_atoms,
    protein_length_seq = protein_length_seq,
    protein_embedding_dim = protein_embedding_dim,
    protein_ngram_max = 2,
    protein_ngram_min = 1,
    smiles_val = example_cpi[!train_idx, 1],
    AAseq_val = example_cpi[!train_idx, 2],
    outcome_val = example_cpi[!train_idx, 3],
    net_args = net_args,
    epochs = 20,
    batch_size = 64,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  ttgsea::plot_model(gcn_cnn_cpi$model)
}
## checking sequences...
## preprocessing for compounds...
## preprocessing for proteins...
## fitting model...
## Epoch 1/20
## 6/6 - 2s - loss: 0.6965 - accuracy: 0.5000 - val_loss: 0.6924 - val_accuracy: 0.5933 - 2s/epoch - 282ms/step
## Epoch 2/20
## 6/6 - 0s - loss: 0.6913 - accuracy: 0.5657 - val_loss: 0.6882 - val_accuracy: 0.6200 - 130ms/epoch - 22ms/step
## Epoch 3/20
## 6/6 - 0s - loss: 0.6863 - accuracy: 0.6171 - val_loss: 0.6840 - val_accuracy: 0.6000 - 89ms/epoch - 15ms/step
## Epoch 4/20
## 6/6 - 0s - loss: 0.6814 - accuracy: 0.6771 - val_loss: 0.6804 - val_accuracy: 0.6333 - 77ms/epoch - 13ms/step
## Epoch 5/20
## 6/6 - 0s - loss: 0.6768 - accuracy: 0.6857 - val_loss: 0.6796 - val_accuracy: 0.6200 - 71ms/epoch - 12ms/step
## Epoch 6/20
## 6/6 - 0s - loss: 0.6712 - accuracy: 0.6743 - val_loss: 0.6739 - val_accuracy: 0.6467 - 72ms/epoch - 12ms/step
## Epoch 7/20
## 6/6 - 0s - loss: 0.6652 - accuracy: 0.6800 - val_loss: 0.6690 - val_accuracy: 0.6467 - 85ms/epoch - 14ms/step
## Epoch 8/20
## 6/6 - 0s - loss: 0.6588 - accuracy: 0.6886 - val_loss: 0.6644 - val_accuracy: 0.6400 - 92ms/epoch - 15ms/step
## Epoch 9/20
## 6/6 - 0s - loss: 0.6516 - accuracy: 0.7000 - val_loss: 0.6578 - val_accuracy: 0.6600 - 80ms/epoch - 13ms/step
## Epoch 10/20
## 6/6 - 0s - loss: 0.6433 - accuracy: 0.7086 - val_loss: 0.6504 - val_accuracy: 0.6733 - 83ms/epoch - 14ms/step
## Epoch 11/20
## 6/6 - 0s - loss: 0.6343 - accuracy: 0.7143 - val_loss: 0.6445 - val_accuracy: 0.6533 - 73ms/epoch - 12ms/step
## Epoch 12/20
## 6/6 - 0s - loss: 0.6235 - accuracy: 0.7171 - val_loss: 0.6343 - val_accuracy: 0.6533 - 81ms/epoch - 14ms/step
## Epoch 13/20
## 6/6 - 0s - loss: 0.6114 - accuracy: 0.7171 - val_loss: 0.6239 - val_accuracy: 0.6667 - 87ms/epoch - 15ms/step
## Epoch 14/20
## 6/6 - 0s - loss: 0.5975 - accuracy: 0.7257 - val_loss: 0.6135 - val_accuracy: 0.6733 - 85ms/epoch - 14ms/step
## Epoch 15/20
## 6/6 - 0s - loss: 0.5847 - accuracy: 0.7286 - val_loss: 0.6035 - val_accuracy: 0.6800 - 87ms/epoch - 15ms/step
## Epoch 16/20
## 6/6 - 0s - loss: 0.5679 - accuracy: 0.7743 - val_loss: 0.5892 - val_accuracy: 0.7133 - 83ms/epoch - 14ms/step
## Epoch 17/20
## 6/6 - 0s - loss: 0.5555 - accuracy: 0.7800 - val_loss: 0.5809 - val_accuracy: 0.7067 - 81ms/epoch - 14ms/step
## Epoch 18/20
## 6/6 - 0s - loss: 0.5421 - accuracy: 0.7857 - val_loss: 0.5692 - val_accuracy: 0.7200 - 82ms/epoch - 14ms/step
## Epoch 19/20
## 6/6 - 0s - loss: 0.5251 - accuracy: 0.7829 - val_loss: 0.5731 - val_accuracy: 0.6733 - 81ms/epoch - 13ms/step
## Epoch 20/20
## 6/6 - 0s - loss: 0.5090 - accuracy: 0.7829 - val_loss: 0.5521 - val_accuracy: 0.7200 - 86ms/epoch - 14ms/step
Using the trained model, we can predict whether a pair of SMILES and amino acid sequences has an interaction or not. A very convenient way to evaluate the accuracy of a model is a table that summarizes the performance of our algorithm against the provided data. The Receiver Operating Characteristic (ROC) curve is a quantitative analysis technique for binary classification and a helpful diagnostic for a model. The Area Under the Curve (AUC) can be calculated to provide a single score that summarizes the plot and can be used to compare models. An alternative to the ROC curve is the precision-recall curve, which can be useful for imbalanced data.
if (keras::is_keras_available() & reticulate::py_available()) {
  pred <- predict_cpi(gcn_cnn_cpi,
    smiles = example_cpi[!train_idx, 1],
    AAseq = example_cpi[!train_idx, 2],
    batch_size = 32)
  pred_class <- ifelse(pred$values > 0.5, 1, 0)
  table(pred_class, example_cpi[!train_idx, 3])
  roc <- PRROC::roc.curve(scores.class0 = pred$values[example_cpi[!train_idx, 3] == 1],
    scores.class1 = pred$values[example_cpi[!train_idx, 3] == 0],
    curve = TRUE)
  plot(roc)
  pr <- PRROC::pr.curve(scores.class0 = pred$values[example_cpi[!train_idx, 3] == 1],
    scores.class1 = pred$values[example_cpi[!train_idx, 3] == 0],
    curve = TRUE)
  plot(pr)
}
## checking sequences...
## preprocessing for compounds...
## preprocessing for proteins...
## predicting model...
## 5/5 - 0s - 188ms/epoch - 38ms/step
Chemical-chemical interactions (CCIs) such as drug-drug interactions (DDIs) have become one of the emerging topics of clinical drug development. Predictions of DDIs are fueled by the recent growth of knowledge in molecular biology, computation-based simulation and prediction, and a better understanding of inhibition and induction mechanisms. Here, molecular fingerprints computed from SMILES strings are used.
if (keras::is_keras_available() & reticulate::py_available()) {
  library(DeepPINCS)
  validation_split <- 0.3
  idx <- sample(seq_len(length(example_cci[,1])))
  train_idx <- seq_len(length(example_cci[,1])) %in%
    idx[seq_len(round(length(example_cci[,1]) * (1 - validation_split)))]
  mlp_mlp_cci <- fit_cpi(
    smiles = example_cci[train_idx, 1:2],
    outcome = example_cci[train_idx, 3],
    compound_type = "fingerprint",
    smiles_val = example_cci[!train_idx, 1:2],
    outcome_val = example_cci[!train_idx, 3],
    net_args = list(
      compound = "mlp_in_out",
      compound_args = list(
        fc_units = c(10, 5),
        fc_activation = c("relu", "relu")),
      fc_units = c(1),
      fc_activation = c("sigmoid"),
      loss = "binary_crossentropy",
      optimizer = keras::optimizer_adam(),
      metrics = "accuracy"),
    epochs = 20, batch_size = 64,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  ttgsea::plot_model(mlp_mlp_cci$model)
  pred <- predict_cpi(mlp_mlp_cci,
    smiles = example_cci[!train_idx, 1:2],
    batch_size = 32)
  pred_class <- ifelse(pred$values > 0.5, 1, 0)
  table(pred_class, example_cci[!train_idx, 3])
}
## checking sequences...
## preprocessing for compounds...
## fitting model...
## Epoch 1/20
## 11/11 - 1s - loss: 0.6835 - accuracy: 0.5557 - val_loss: 0.6685 - val_accuracy: 0.6000 - 741ms/epoch - 67ms/step
## Epoch 2/20
## 11/11 - 0s - loss: 0.6202 - accuracy: 0.7057 - val_loss: 0.6418 - val_accuracy: 0.6433 - 62ms/epoch - 6ms/step
## Epoch 3/20
## 11/11 - 0s - loss: 0.5696 - accuracy: 0.7714 - val_loss: 0.6162 - val_accuracy: 0.6733 - 60ms/epoch - 5ms/step
## Epoch 4/20
## 11/11 - 0s - loss: 0.5161 - accuracy: 0.8129 - val_loss: 0.5890 - val_accuracy: 0.7033 - 59ms/epoch - 5ms/step
## Epoch 5/20
## 11/11 - 0s - loss: 0.4588 - accuracy: 0.8500 - val_loss: 0.5674 - val_accuracy: 0.7167 - 62ms/epoch - 6ms/step
## Epoch 6/20
## 11/11 - 0s - loss: 0.4044 - accuracy: 0.8829 - val_loss: 0.5539 - val_accuracy: 0.7233 - 62ms/epoch - 6ms/step
## Epoch 7/20
## 11/11 - 0s - loss: 0.3569 - accuracy: 0.9000 - val_loss: 0.5407 - val_accuracy: 0.7367 - 62ms/epoch - 6ms/step
## Epoch 8/20
## 11/11 - 0s - loss: 0.3148 - accuracy: 0.9200 - val_loss: 0.5398 - val_accuracy: 0.7567 - 61ms/epoch - 6ms/step
## Epoch 9/20
## 11/11 - 0s - loss: 0.2844 - accuracy: 0.9343 - val_loss: 0.5223 - val_accuracy: 0.7567 - 59ms/epoch - 5ms/step
## Epoch 10/20
## 11/11 - 0s - loss: 0.2509 - accuracy: 0.9443 - val_loss: 0.5289 - val_accuracy: 0.7433 - 60ms/epoch - 5ms/step
## Epoch 11/20
## 11/11 - 0s - loss: 0.2240 - accuracy: 0.9471 - val_loss: 0.5241 - val_accuracy: 0.7567 - 61ms/epoch - 6ms/step
## Epoch 12/20
## 11/11 - 0s - loss: 0.2047 - accuracy: 0.9529 - val_loss: 0.5138 - val_accuracy: 0.7567 - 60ms/epoch - 5ms/step
## Epoch 13/20
## 11/11 - 0s - loss: 0.1835 - accuracy: 0.9586 - val_loss: 0.5299 - val_accuracy: 0.7500 - 60ms/epoch - 5ms/step
## Epoch 14/20
## 11/11 - 0s - loss: 0.1657 - accuracy: 0.9714 - val_loss: 0.5259 - val_accuracy: 0.7633 - 60ms/epoch - 5ms/step
## Epoch 15/20
## 11/11 - 0s - loss: 0.1505 - accuracy: 0.9671 - val_loss: 0.5271 - val_accuracy: 0.7533 - 61ms/epoch - 6ms/step
## Epoch 16/20
## 11/11 - 0s - loss: 0.1354 - accuracy: 0.9786 - val_loss: 0.5416 - val_accuracy: 0.7600 - 58ms/epoch - 5ms/step
## Epoch 17/20
## 11/11 - 0s - loss: 0.1236 - accuracy: 0.9800 - val_loss: 0.5334 - val_accuracy: 0.7633 - 56ms/epoch - 5ms/step
## Epoch 18/20
## 11/11 - 0s - loss: 0.1122 - accuracy: 0.9814 - val_loss: 0.5454 - val_accuracy: 0.7700 - 61ms/epoch - 6ms/step
## Epoch 19/20
## 11/11 - 0s - loss: 0.1012 - accuracy: 0.9857 - val_loss: 0.5483 - val_accuracy: 0.7700 - 58ms/epoch - 5ms/step
## Epoch 20/20
## 11/11 - 0s - loss: 0.0923 - accuracy: 0.9900 - val_loss: 0.5454 - val_accuracy: 0.7567 - 58ms/epoch - 5ms/step
## checking sequences...
## preprocessing for compounds...
## predicting model...
## 10/10 - 0s - 93ms/epoch - 9ms/step
##
## pred_class   0   1
##          0 127  34
##          1  39 100
Protein-protein interactions (PPIs) are biochemical events that play an important role in the functioning of the cell. The prediction of PPIs has important implications for understanding the behavioral processes of life, preventing diseases, and developing new drugs. Here, the n-gram for proteins is available. Strictly speaking, the term q-gram may be more appropriate, since a q-gram is a string of q characters; in the literature on text classification, however, the term n-gram is often used instead of q-gram.
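For instance, with protein_ngram_min = 1 and protein_ngram_max = 2, a sequence is tokenized into overlapping substrings of lengths 1 and 2. A small base R sketch (illustrative only, not the package's internal tokenizer) of the 2-grams of a short sequence:
aa <- "MSTNP"
n <- 2
starts <- seq_len(nchar(aa) - n + 1)  # start positions of overlapping windows
substring(aa, starts, starts + n - 1)
## [1] "MS" "ST" "TN" "NP"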
if (keras::is_keras_available() & reticulate::py_available()) {
  validation_split <- 0.3
  idx <- sample(seq_len(length(example_ppi[,1])))
  train_idx <- seq_len(length(example_ppi[,1])) %in%
    idx[seq_len(round(length(example_ppi[,1]) * (1 - validation_split)))]
  protein_embedding_dim <- 16
  protein_length_seq <- 100
  mlp_mlp_ppi <- fit_cpi(
    AAseq = example_ppi[train_idx, 1:2],
    outcome = example_ppi[train_idx, 3],
    protein_length_seq = protein_length_seq,
    protein_embedding_dim = protein_embedding_dim,
    AAseq_val = example_ppi[!train_idx, 1:2],
    outcome_val = example_ppi[!train_idx, 3],
    net_args = list(
      protein = "mlp_in_out",
      protein_args = list(
        fc_units = c(10, 5),
        fc_activation = c("relu", "relu")),
      fc_units = c(1),
      fc_activation = c("sigmoid"),
      loss = "binary_crossentropy",
      optimizer = keras::optimizer_adam(),
      metrics = "accuracy"),
    epochs = 20, batch_size = 64,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  ttgsea::plot_model(mlp_mlp_ppi$model)
  pred <- predict_cpi(mlp_mlp_ppi,
    AAseq = example_ppi[!train_idx, 1:2],
    batch_size = 32)
  pred_class <- ifelse(pred$values > 0.5, 1, 0)
  table(pred_class, example_ppi[!train_idx, 3])
}
## checking sequences...
## preprocessing for proteins...
## fitting model...
## Epoch 1/20
## 55/55 - 1s - loss: 0.6908 - accuracy: 0.5286 - val_loss: 0.6884 - val_accuracy: 0.5447 - 1s/epoch - 20ms/step
## Epoch 2/20
## 55/55 - 0s - loss: 0.6637 - accuracy: 0.6706 - val_loss: 0.6736 - val_accuracy: 0.5973 - 261ms/epoch - 5ms/step
## Epoch 3/20
## 55/55 - 0s - loss: 0.5913 - accuracy: 0.7480 - val_loss: 0.6601 - val_accuracy: 0.6173 - 268ms/epoch - 5ms/step
## Epoch 4/20
## 55/55 - 0s - loss: 0.4849 - accuracy: 0.7986 - val_loss: 0.6725 - val_accuracy: 0.6453 - 257ms/epoch - 5ms/step
## Epoch 5/20
## 55/55 - 0s - loss: 0.3945 - accuracy: 0.8474 - val_loss: 0.7097 - val_accuracy: 0.6513 - 276ms/epoch - 5ms/step
## Epoch 6/20
## 55/55 - 0s - loss: 0.3189 - accuracy: 0.8823 - val_loss: 0.7611 - val_accuracy: 0.6460 - 242ms/epoch - 4ms/step
## Epoch 7/20
## 55/55 - 0s - loss: 0.2641 - accuracy: 0.9129 - val_loss: 0.8178 - val_accuracy: 0.6567 - 237ms/epoch - 4ms/step
## Epoch 8/20
## 55/55 - 0s - loss: 0.2218 - accuracy: 0.9283 - val_loss: 0.8923 - val_accuracy: 0.6587 - 236ms/epoch - 4ms/step
## Epoch 9/20
## 55/55 - 0s - loss: 0.1861 - accuracy: 0.9397 - val_loss: 0.9667 - val_accuracy: 0.6627 - 245ms/epoch - 4ms/step
## Epoch 10/20
## 55/55 - 0s - loss: 0.1608 - accuracy: 0.9477 - val_loss: 1.0272 - val_accuracy: 0.6627 - 247ms/epoch - 4ms/step
## Epoch 11/20
## 55/55 - 0s - loss: 0.1399 - accuracy: 0.9589 - val_loss: 1.1028 - val_accuracy: 0.6607 - 239ms/epoch - 4ms/step
## Epoch 12/20
## 55/55 - 0s - loss: 0.1204 - accuracy: 0.9651 - val_loss: 1.1875 - val_accuracy: 0.6627 - 220ms/epoch - 4ms/step
## Epoch 13/20
## 55/55 - 0s - loss: 0.1061 - accuracy: 0.9711 - val_loss: 1.2621 - val_accuracy: 0.6593 - 217ms/epoch - 4ms/step
## Epoch 14/20
## 55/55 - 0s - loss: 0.0993 - accuracy: 0.9694 - val_loss: 1.3414 - val_accuracy: 0.6680 - 216ms/epoch - 4ms/step
## Epoch 15/20
## 55/55 - 0s - loss: 0.0880 - accuracy: 0.9737 - val_loss: 1.4193 - val_accuracy: 0.6633 - 206ms/epoch - 4ms/step
## Epoch 16/20
## 55/55 - 0s - loss: 0.0822 - accuracy: 0.9731 - val_loss: 1.4813 - val_accuracy: 0.6593 - 246ms/epoch - 4ms/step
## Epoch 17/20
## 55/55 - 0s - loss: 0.0757 - accuracy: 0.9769 - val_loss: 1.5547 - val_accuracy: 0.6607 - 257ms/epoch - 5ms/step
## Epoch 18/20
## 55/55 - 0s - loss: 0.0688 - accuracy: 0.9771 - val_loss: 1.6100 - val_accuracy: 0.6580 - 248ms/epoch - 5ms/step
## Epoch 19/20
## 55/55 - 0s - loss: 0.0645 - accuracy: 0.9791 - val_loss: 1.6766 - val_accuracy: 0.6600 - 292ms/epoch - 5ms/step
## Epoch 20/20
## 55/55 - 0s - loss: 0.0573 - accuracy: 0.9829 - val_loss: 1.7421 - val_accuracy: 0.6593 - 271ms/epoch - 5ms/step
## checking sequences...
## preprocessing for proteins...
## predicting model...
## 47/47 - 0s - 149ms/epoch - 3ms/step
##
## pred_class   0   1
##          0 540 279
##          1 232 449
Although the function “fit_cpi” is designed for amino acid sequences, we may instead use nucleic acid sequences, as long as they are composed of capital letters of the alphabet.
if (keras::is_keras_available() & reticulate::py_available()) {
  validation_split <- 0.1
  idx <- sample(seq_len(length(example_pd[,1])))
  train_idx <- seq_len(length(example_pd[,1])) %in%
    idx[seq_len(round(length(example_pd[,1]) * (1 - validation_split)))]
  protein_embedding_dim <- 16
  protein_length_seq <- 30
  mlp_mlp_pd <- fit_cpi(
    AAseq = example_pd[train_idx, 1:2],
    outcome = example_pd[train_idx, 3],
    protein_length_seq = protein_length_seq,
    protein_embedding_dim = protein_embedding_dim,
    AAseq_val = example_pd[!train_idx, 1:2],
    outcome_val = example_pd[!train_idx, 3],
    net_args = list(
      protein = "mlp_in_out",
      protein_args = list(
        fc_units = c(10, 5),
        fc_activation = c("relu", "relu")),
      fc_units = c(1),
      fc_activation = c("sigmoid"),
      loss = "binary_crossentropy",
      optimizer = keras::optimizer_adam(),
      metrics = "accuracy"),
    epochs = 30, batch_size = 16,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  pred <- predict_cpi(mlp_mlp_pd,
    AAseq = example_pd[!train_idx, 1:2],
    batch_size = 16)
  pred_class <- ifelse(pred$values > 0.5, 1, 0)
  table(pred_class, example_pd[!train_idx, 3])
}
## checking sequences...
## preprocessing for proteins...
## fitting model...
## Epoch 1/30
## 18/18 - 1s - loss: 0.6336 - accuracy: 0.8293 - val_loss: 0.5346 - val_accuracy: 0.8750 - 933ms/epoch - 52ms/step
## Epoch 2/30
## 18/18 - 0s - loss: 0.4853 - accuracy: 0.8328 - val_loss: 0.3759 - val_accuracy: 0.8750 - 69ms/epoch - 4ms/step
## Epoch 3/30
## 18/18 - 0s - loss: 0.4409 - accuracy: 0.8328 - val_loss: 0.3568 - val_accuracy: 0.8750 - 67ms/epoch - 4ms/step
## Epoch 4/30
## 18/18 - 0s - loss: 0.4301 - accuracy: 0.8328 - val_loss: 0.3480 - val_accuracy: 0.8750 - 67ms/epoch - 4ms/step
## Epoch 5/30
## 18/18 - 0s - loss: 0.4126 - accuracy: 0.8328 - val_loss: 0.3314 - val_accuracy: 0.8750 - 64ms/epoch - 4ms/step
## Epoch 6/30
## 18/18 - 0s - loss: 0.4014 - accuracy: 0.8328 - val_loss: 0.3143 - val_accuracy: 0.8750 - 65ms/epoch - 4ms/step
## Epoch 7/30
## 18/18 - 0s - loss: 0.3862 - accuracy: 0.8328 - val_loss: 0.2948 - val_accuracy: 0.8750 - 66ms/epoch - 4ms/step
## Epoch 8/30
## 18/18 - 0s - loss: 0.3797 - accuracy: 0.8328 - val_loss: 0.2824 - val_accuracy: 0.8750 - 66ms/epoch - 4ms/step
## Epoch 9/30
## 18/18 - 0s - loss: 0.3633 - accuracy: 0.8328 - val_loss: 0.2738 - val_accuracy: 0.8750 - 67ms/epoch - 4ms/step
## Epoch 10/30
## 18/18 - 0s - loss: 0.3525 - accuracy: 0.8328 - val_loss: 0.2700 - val_accuracy: 0.8750 - 69ms/epoch - 4ms/step
## Epoch 11/30
## 18/18 - 0s - loss: 0.3387 - accuracy: 0.8328 - val_loss: 0.2664 - val_accuracy: 0.8750 - 79ms/epoch - 4ms/step
## checking sequences...
## preprocessing for proteins...
## predicting model...
## 1/1 - 0s - 88ms/epoch - 88ms/step
##
## pred_class  0  1
##          0 28  4
Even though the function “fit_cpi” is designed for pairs involving compounds, proteins, or both, we may instead use single compounds alone.
if (keras::is_keras_available() & reticulate::py_available()) {
  validation_split <- 0.3
  idx <- sample(seq_len(length(example_chem[,1])))
  train_idx <- seq_len(length(example_chem[,1])) %in%
    idx[seq_len(round(length(example_chem[,1]) * (1 - validation_split)))]
  compound_length_seq <- 50
  compound_embedding_dim <- 16
  gcn_chem <- fit_cpi(
    smiles = example_chem[train_idx, 1],
    outcome = example_chem[train_idx, 2],
    compound_type = "sequence",
    compound_length_seq = compound_length_seq,
    compound_embedding_dim = compound_embedding_dim,
    smiles_val = example_chem[!train_idx, 1],
    outcome_val = example_chem[!train_idx, 2],
    net_args = list(
      compound = "mlp_in_out",
      compound_args = list(
        fc_units = c(5),
        fc_activation = c("relu")),
      fc_units = c(1),
      fc_activation = c("sigmoid"),
      loss = "binary_crossentropy",
      optimizer = keras::optimizer_adam(),
      metrics = "accuracy"),
    epochs = 20, batch_size = 16,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  ttgsea::plot_model(gcn_chem$model)
  pred <- predict_cpi(gcn_chem, smiles = example_chem[!train_idx, 1])
  pred_class <- ifelse(pred$values > 0.5, 1, 0)
  table(pred_class, smiles = example_chem[!train_idx, 2])
}
## checking sequences...
## preprocessing for compounds...
## fitting model...
## Epoch 1/20
## 9/9 - 1s - loss: 0.6856 - accuracy: 0.6071 - val_loss: 0.6692 - val_accuracy: 0.7000 - 666ms/epoch - 74ms/step
## Epoch 2/20
## 9/9 - 0s - loss: 0.6645 - accuracy: 0.6429 - val_loss: 0.6422 - val_accuracy: 0.7000 - 49ms/epoch - 5ms/step
## Epoch 3/20
## 9/9 - 0s - loss: 0.6488 - accuracy: 0.6429 - val_loss: 0.6163 - val_accuracy: 0.7000 - 48ms/epoch - 5ms/step
## Epoch 4/20
## 9/9 - 0s - loss: 0.6265 - accuracy: 0.6429 - val_loss: 0.5983 - val_accuracy: 0.7000 - 48ms/epoch - 5ms/step
## Epoch 5/20
## 9/9 - 0s - loss: 0.6124 - accuracy: 0.6429 - val_loss: 0.5823 - val_accuracy: 0.7000 - 48ms/epoch - 5ms/step
## Epoch 6/20
## 9/9 - 0s - loss: 0.6013 - accuracy: 0.6500 - val_loss: 0.5714 - val_accuracy: 0.7167 - 48ms/epoch - 5ms/step
## Epoch 7/20
## 9/9 - 0s - loss: 0.5929 - accuracy: 0.7071 - val_loss: 0.5684 - val_accuracy: 0.7667 - 52ms/epoch - 6ms/step
## Epoch 8/20
## 9/9 - 0s - loss: 0.5845 - accuracy: 0.7214 - val_loss: 0.5625 - val_accuracy: 0.7667 - 53ms/epoch - 6ms/step
## Epoch 9/20
## 9/9 - 0s - loss: 0.5762 - accuracy: 0.7429 - val_loss: 0.5614 - val_accuracy: 0.7667 - 49ms/epoch - 5ms/step
## Epoch 10/20
## 9/9 - 0s - loss: 0.5670 - accuracy: 0.7429 - val_loss: 0.5606 - val_accuracy: 0.7167 - 49ms/epoch - 5ms/step
## Epoch 11/20
## 9/9 - 0s - loss: 0.5567 - accuracy: 0.7429 - val_loss: 0.5554 - val_accuracy: 0.7167 - 47ms/epoch - 5ms/step
## Epoch 12/20
## 9/9 - 0s - loss: 0.5416 - accuracy: 0.7643 - val_loss: 0.5538 - val_accuracy: 0.7167 - 47ms/epoch - 5ms/step
## Epoch 13/20
## 9/9 - 0s - loss: 0.5266 - accuracy: 0.7786 - val_loss: 0.5600 - val_accuracy: 0.7500 - 48ms/epoch - 5ms/step
## Epoch 14/20
## 9/9 - 0s - loss: 0.5092 - accuracy: 0.7786 - val_loss: 0.5561 - val_accuracy: 0.7500 - 48ms/epoch - 5ms/step
## Epoch 15/20
## 9/9 - 0s - loss: 0.4954 - accuracy: 0.7786 - val_loss: 0.5541 - val_accuracy: 0.7500 - 48ms/epoch - 5ms/step
## Epoch 16/20
## 9/9 - 0s - loss: 0.4774 - accuracy: 0.7786 - val_loss: 0.5560 - val_accuracy: 0.7500 - 49ms/epoch - 5ms/step
## Epoch 17/20
## 9/9 - 0s - loss: 0.4641 - accuracy: 0.8000 - val_loss: 0.5545 - val_accuracy: 0.7500 - 53ms/epoch - 6ms/step
## checking sequences...
## preprocessing for compounds...
## predicting model...
## 2/2 - 0s - 59ms/epoch - 30ms/step
##           smiles
## pred_class  0  1
##          0  5  1
##          1 13 41
In a similar way, we can also use single proteins. The model can also be extended to multiclass problems.
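For a multiclass outcome, the labels must be one-hot encoded, which is done with “to_categorical” in the chunk below. As a small illustration:
keras::to_categorical(c(0, 1, 2))  # one row per label, one column per class
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1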
if (keras::is_keras_available() & reticulate::py_available()) {
  example_prot <- example_prot[1:500,]
  example_prot[,2] <- as.numeric(factor(example_prot[,2])) - 1
  validation_split <- 0.3
  idx <- sample(seq_len(length(example_prot[,1])))
  train_idx <- seq_len(length(example_prot[,1])) %in%
    idx[seq_len(round(length(example_prot[,1]) * (1 - validation_split)))]
  protein_embedding_dim <- 16
  protein_length_seq <- 100
  rnn_prot <- fit_cpi(
    AAseq = example_prot[train_idx, 1],
    outcome = to_categorical(example_prot[train_idx, 2]),
    protein_length_seq = protein_length_seq,
    protein_embedding_dim = protein_embedding_dim,
    AAseq_val = example_prot[!train_idx, 1],
    outcome_val = to_categorical(example_prot[!train_idx, 2]),
    net_args = list(
      protein = "rnn_in_out",
      protein_args = list(
        rnn_type = c("gru"),
        rnn_bidirectional = c(TRUE),
        rnn_units = c(50),
        rnn_activation = c("relu"),
        fc_units = c(10),
        fc_activation = c("relu")),
      fc_units = c(3),
      fc_activation = c("softmax"),
      loss = "categorical_crossentropy",
      optimizer = keras::optimizer_adam(clipvalue = 0.5),
      metrics = "accuracy"),
    epochs = 20, batch_size = 64,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 10,
      restore_best_weights = TRUE))
  ttgsea::plot_model(rnn_prot$model)
  val_index <- seq_len(length(example_prot[,2]))[!train_idx]
  if (!is.null(rnn_prot$preprocessing$removed_AAseq_val)) {
    pred <- predict_cpi(rnn_prot,
      AAseq = example_prot[val_index[-rnn_prot$preprocessing$removed_AAseq_val[[1]]], 1])
    pred_class <- apply(pred$values, 1, which.max) - 1
    table(pred_class, example_prot[val_index[-rnn_prot$preprocessing$removed_AAseq_val[[1]]], 2])
  } else {
    pred <- predict_cpi(rnn_prot, AAseq = example_prot[!train_idx, 1])
    pred_class <- apply(pred$values, 1, which.max) - 1
    table(pred_class, example_prot[!train_idx, 2])
  }
}
## checking sequences...
## at least one of protein sequences may not be valid
## preprocessing for proteins...
## fitting model...
## Epoch 1/20
## 6/6 - 4s - loss: 1.0933 - accuracy: 0.4686 - val_loss: 1.0865 - val_accuracy: 0.4631 - 4s/epoch - 736ms/step
## Epoch 2/20
## 6/6 - 1s - loss: 1.0797 - accuracy: 0.4743 - val_loss: 1.0738 - val_accuracy: 0.4631 - 740ms/epoch - 123ms/step
## Epoch 3/20
## 6/6 - 1s - loss: 1.0657 - accuracy: 0.4743 - val_loss: 1.0638 - val_accuracy: 0.4631 - 887ms/epoch - 148ms/step
## Epoch 4/20
## 6/6 - 1s - loss: 1.0579 - accuracy: 0.4743 - val_loss: 1.0585 - val_accuracy: 0.4631 - 847ms/epoch - 141ms/step
## Epoch 5/20
## 6/6 - 1s - loss: 1.0515 - accuracy: 0.4743 - val_loss: 1.0569 - val_accuracy: 0.4631 - 843ms/epoch - 141ms/step
## Epoch 6/20
## 6/6 - 1s - loss: 1.0501 - accuracy: 0.4743 - val_loss: 1.0557 - val_accuracy: 0.4631 - 964ms/epoch - 161ms/step
## Epoch 7/20
## 6/6 - 1s - loss: 1.0468 - accuracy: 0.4743 - val_loss: 1.0545 - val_accuracy: 0.4631 - 1s/epoch - 168ms/step
## Epoch 8/20
## 6/6 - 1s - loss: 1.0438 - accuracy: 0.4743 - val_loss: 1.0529 - val_accuracy: 0.4631 - 983ms/epoch - 164ms/step
## Epoch 9/20
## 6/6 - 1s - loss: 1.0404 - accuracy: 0.4743 - val_loss: 1.0513 - val_accuracy: 0.4631 - 873ms/epoch - 146ms/step
## Epoch 10/20
## 6/6 - 1s - loss: 1.0368 - accuracy: 0.4743 - val_loss: 1.0498 - val_accuracy: 0.4631 - 923ms/epoch - 154ms/step
## Epoch 11/20
## 6/6 - 1s - loss: 1.0320 - accuracy: 0.4743 - val_loss: 1.0480 - val_accuracy: 0.4631 - 927ms/epoch - 154ms/step
## checking sequences...
## preprocessing for proteins...
## predicting model...
## 5/5 - 1s - 519ms/epoch - 104ms/step
##
## pred_class  0  1  2
##          0 69 46 34
The process of developing drugs is extensive, laborious, expensive, and time-consuming. Drug repurposing, also referred to as repositioning, significantly reduces the cost, risk, and time compared to traditional drug development strategies by recycling already established drugs. In the drug discovery process, it is necessary to determine the cause of a disease and, thus, a potential biological target. A target can be any biological entity, from RNA to a protein to a gene, that is ‘druggable’, or accessible to binding with a drug-like compound. A drug interacting with a biological target changes the shape or conformation of some facet of the target when bound to a small molecule, altering the target’s ability to function. This conformational change ideally triggers a desired biological response involved in the particular disease process. Here, deep learning models can be used to identify candidate drugs for selected targets and diseases.
Suppose that we want to identify which existing antiviral drugs can be repurposed to target the SARS coronavirus 3C-like protease. For training the deep learning model, we use past bioassay data, such as the high-throughput screening (HTS) assay on the SARS-CoV 3CL protease, whose gene is largely conserved in SARS-CoV-2. For repurposing against COVID-19, the deep learning model trained on these data is used to rank drug candidates from an antiviral library.
if (keras::is_keras_available() & reticulate::py_available()) {
  compound_length_seq <- 50
  protein_length_seq <- 500
  compound_embedding_dim <- 16
  protein_embedding_dim <- 16
  mlp_mlp <- fit_cpi(
    smiles = example_bioassay[,1],
    AAseq = example_bioassay[,2],
    outcome = example_bioassay[,3],
    compound_type = "sequence",
    compound_length_seq = compound_length_seq,
    protein_length_seq = protein_length_seq,
    compound_embedding_dim = compound_embedding_dim,
    protein_embedding_dim = protein_embedding_dim,
    net_args = list(
      compound = "mlp_in_out",
      compound_args = list(
        fc_units = c(10, 5),
        fc_activation = c("relu", "relu")),
      protein = "mlp_in_out",
      protein_args = list(
        fc_units = c(10, 5),
        fc_activation = c("relu", "relu")),
      fc_units = c(1),
      fc_activation = c("sigmoid"),
      loss = "binary_crossentropy",
      optimizer = keras::optimizer_adam(),
      metrics = "accuracy"),
    epochs = 20, batch_size = 64,
    validation_split = 0.3,
    verbose = 0,
    callbacks = keras::callback_early_stopping(
      monitor = "val_accuracy",
      patience = 5,
      restore_best_weights = TRUE))
  ttgsea::plot_model(mlp_mlp$model)
  pred <- predict_cpi(mlp_mlp,
    antiviral_drug[,2],
    rep(SARS_CoV2_3CL_Protease, nrow(antiviral_drug)))
  Result <- data.frame(antiviral_drug[,1], pred$values)
  colnames(Result) <- c("drug", "probability")
  Result[order(Result[,2], decreasing = TRUE),]
}
## checking sequences...
## preprocessing for compounds...
## preprocessing for proteins...
## fitting model...
## checking sequences...
## preprocessing for compounds...
## preprocessing for proteins...
## predicting model...
## 3/3 - 0s - 112ms/epoch - 37ms/step
## drug probability
## 9 Bictegravir 0.997987032
## 8 Baloxavir 0.990397155
## 64 Simeprevir 0.988348424
## 46 Nelfinavir 0.978619993
## 71 Tipranavir 0.972705603
## 68 Telaprevir 0.969731331
## 55 Glecaprevir 0.945349038
## 14 Cobicistat 0.932891667
## 25 Efavirenz 0.915295124
## 41 Lopinavir 0.908460855
## 76 Vicriviroc 0.893622875
## 39 Inosine 0.848434031
## 38 Indinavir 0.743015230
## 2 Aciclovir 0.664314926
## 59 Remdesivir 0.601965487
## 27 Enfuvirtide 0.514443755
## 5 Amprenavir 0.463177770
## 56 Grazoprevir 0.450645238
## 22 Dolutegravir 0.432092249
## 1 Abacavir 0.399830431
## 17 Daclatasvir 0.399714619
## 78 Taribavirin 0.380654156
## 65 Sofosbuvir 0.366999418
## 19 Delavirdine 0.339412421
## 29 Etravirine 0.334428221
## 13 Cidofovir 0.330575556
## 67 Chloroquine 0.285256803
## 23 Doravirine 0.272628784
## 70 Tenofovir_disoproxil 0.270569414
## 28 Entecavir 0.270149380
## 24 Edoxudine 0.236046687
## 12 Boceprevir 0.208014727
## 60 Ribavirin 0.201766923
## 42 Loviride 0.198027492
## 49 Ritonavir 0.151243329
## 77 Vidarabine 0.133880466
## 51 Penciclovir 0.129042044
## 11 Tenofovir 0.121273234
## 73 Tromantadine 0.114720330
## 47 Nevirapine 0.104099296
## 37 Imunovir 0.102215081
## 61 Rilpivirine 0.086239345
## 40 Letermovir 0.082208730
## 44 Methisazone 0.080827601
## 18 Darunavir 0.070806488
## 45 Moroxydine 0.064116761
## 36 Imiquimod 0.049891487
## 21 Docosanol 0.047814403
## 32 Foscarnet 0.047663040
## 35 Idoxuridine 0.042887092
## 30 Famciclovir 0.036044225
## 58 Raltegravir 0.033715654
## 53 Pleconaril 0.032842513
## 69 Telbivudine 0.029700631
## 54 Podophyllotoxin 0.026766786
## 79 Zalcitabine 0.026025524
## 6 Arbidol 0.022905614
## 63 Saquinavir 0.021886634
## 57 Pyrimidine 0.020815611
## 31 Fosamprenavir 0.018226115
## 26 Elvitegravir 0.017937275
## 34 Ibacitabine 0.017728956
## 52 Peramivir 0.017006086
## 81 Hydroxychloroquine 0.015855389
## 15 Lamivudine 0.013522738
## 10 Emtricitabine 0.013420626
## 20 Didanosine 0.012187747
## 72 Trifluridine 0.011139869
## 74 Valacyclovir 0.011137222
## 4 Amantadine 0.010410092
## 33 Ganciclovir 0.009664205
## 43 Maraviroc 0.009295318
## 50 Oseltamivir 0.009014327
## 75 Valganciclovir 0.008720020
## 48 Nitazoxanide 0.005761216
## 66 Stavudine 0.005447453
## 3 Adefovir 0.004392643
## 16 Zidovudine 0.002167937
## 62 Rimantadine 0.001398770
## 7 Atazanavir 0.001118541
## 80 Zanamivir 0.001049359
## R version 4.4.0 beta (2024-04-15 r86425)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 22.04.4 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.19-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=en_US.UTF-8
## [9] LC_ADDRESS=en_US.UTF-8 LC_TELEPHONE=en_US.UTF-8
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=en_US.UTF-8
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] DeepPINCS_1.12.0 keras_2.15.0
##
## loaded via a namespace (and not attached):
## [1] tidyselect_1.2.1 dplyr_1.1.4 tensorflow_2.16.0
## [4] fastmap_1.1.1 promises_1.3.0 digest_0.6.35
## [7] lifecycle_1.0.4 koRpus_0.13-8 tokenizers_0.3.0
## [10] NLP_0.2-1 processx_3.8.4 magrittr_2.0.3
## [13] compiler_4.4.0 rlang_1.1.3 sass_0.4.9
## [16] tools_4.4.0 utf8_1.2.4 yaml_2.3.8
## [19] data.table_1.15.4 knitr_1.46 stopwords_2.3
## [22] htmlwidgets_1.6.4 reticulate_1.36.1 xml2_1.3.6
## [25] RColorBrewer_1.1-3 PRROC_1.3.1 websocket_1.4.1
## [28] ttgsea_1.12.0 purrr_1.0.2 itertools_0.1-3
## [31] grid_4.4.0 fansi_1.0.6 matlab_1.0.4
## [34] CatEncoders_0.1.1 tm_0.7-13 data.tree_1.1.0
## [37] iterators_1.0.14 zeallot_0.1.0 cli_3.6.2
## [40] rmarkdown_2.26 DiagrammeR_1.0.11 crayon_1.5.2
## [43] generics_0.1.3 stringdist_0.9.12 rstudioapi_0.16.0
## [46] webchem_1.3.0 httr_1.4.7 tfruns_1.5.3
## [49] visNetwork_2.1.2 fingerprint_3.5.7 cachem_1.0.8
## [52] chromote_0.2.0 stringr_1.5.1 sylly.en_0.1-3
## [55] rcdk_3.8.1 rvest_1.0.4 parallel_4.4.0
## [58] textstem_0.1.4 base64enc_0.1-3 vctrs_0.6.5
## [61] Matrix_1.7-0 jsonlite_1.8.8 slam_0.1-50
## [64] koRpus.lang.en_0.1-4 lgr_0.4.4 jquerylib_0.1.4
## [67] glue_1.7.0 ps_1.7.6 sylly_0.1-6
## [70] stringi_1.8.3 rJava_1.0-11 later_1.3.2
## [73] mlapi_0.1.1 tibble_3.2.1 pillar_1.9.0
## [76] rcdklibs_2.9 htmltools_0.5.8.1 float_0.3-2
## [79] rsparse_0.5.1 R6_2.5.1 evaluate_0.23
## [82] lattice_0.22-6 highr_0.10 png_0.1-8
## [85] SnowballC_0.7.1 RhpcBLASctl_0.23-42 bslib_0.7.0
## [88] text2vec_0.6.4 Rcpp_1.0.12 whisker_0.4.1
## [91] xfun_0.43 pkgconfig_2.0.3