2024-11-23
Fardin Akhlaghian Tab

Academic rank: Associate Professor
Education: PhD.
ScopusId: 9635715500
Faculty: Faculty of Engineering

Research

Title
Enhancing link prediction through adversarial training in deep Nonnegative Matrix Factorization
Type
JournalPaper
Keywords
Link prediction, Deep matrix factorization, Adversarial training, Generalization, Robustness
Year
2024
Journal
Engineering Applications of Artificial Intelligence
DOI
Researchers
Reza Mahmoudi, Seyed Amjad Seyedi, Alireza Abdollahpouri, Fardin Akhlaghian Tab

Abstract

Link prediction is a fundamental problem in complex network analysis, aimed at predicting missing or forthcoming connections. Recent research has investigated the potential of Nonnegative Matrix Factorization (NMF) models for reconstructing sparse networks, and of deep NMF for uncovering their hierarchical structure. Deep models have demonstrated remarkable performance in various domains, but they are susceptible to overfitting, especially on limited training data. This paper proposes a novel Link Prediction method using Adversarial Deep NMF (LPADNMF) to enhance the generalization of network reconstruction in sparse graphs. The main contribution is the introduction of an adversarial training scheme that incorporates a bounded attack on the input, leveraging the L2,1 norm to generate diverse perturbations. This adversarial training aims to improve the model's robustness and prevent overfitting, particularly in scenarios with limited training data. Additionally, the proposed method incorporates first- and second-order affinities as input to capture higher-order dependencies and encourage the extraction of informative features from the network structure. To further mitigate overfitting, a smooth L2 regularization is applied to the model parameters. To optimize the proposed model effectively, a majorization-minimization algorithm efficiently updates the perturbation and the latent factors in an iterative manner. The findings demonstrate that the proposed model not only uncovers and learns complex structures but also generalizes well. The method outperforms state-of-the-art methods across eight networks; compared with the best-performing baseline, LPADNMF improves AUC by 2.36%, Precision by 4.67%, Recall by 2.21%, and F-Measure by 3.73%.
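To make the overall pipeline concrete, the following is a minimal illustrative sketch of NMF-based link prediction with a bounded input perturbation. It is not the authors' LPADNMF algorithm: the perturbation here is a random noise direction with a fixed Frobenius-norm budget (a stand-in for the paper's L2,1-constrained adversarial attack), the second-order affinity weight `alpha` is a hypothetical parameter, and the updates are the standard multiplicative NMF rules rather than the paper's majorization-minimization scheme.

```python
import numpy as np

def adversarial_nmf_link_prediction(A, rank=2, n_iter=200, alpha=0.5,
                                    epsilon=0.1, seed=0):
    """Score candidate links by factorizing a perturbed affinity matrix.

    A       : symmetric 0/1 adjacency matrix of the observed network
    alpha   : hypothetical weight on the second-order affinity A @ A
    epsilon : norm budget of the input perturbation (illustrative only)
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # First- plus second-order affinities as input, as the abstract describes
    X = A + alpha * (A @ A)
    X = X / (X.max() + 1e-12)
    W = rng.random((n, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Bounded perturbation of the input; a random direction here,
        # where the paper uses an L2,1-norm-constrained adversarial attack
        delta = rng.standard_normal(X.shape)
        delta *= epsilon / (np.linalg.norm(delta) + 1e-12)
        Xp = np.clip(X + delta, 0.0, None)
        # Standard multiplicative NMF updates on the perturbed input
        H *= (W.T @ Xp) / (W.T @ W @ H + 1e-12)
        W *= (Xp @ H.T) / (W @ H @ H.T + 1e-12)
    return W @ H  # entry (i, j) scores the likelihood of link i-j

# Toy usage: a 4-node path graph 0-1-2-3; the observed edge (0, 1)
# should score higher than the non-edge (0, 3)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = adversarial_nmf_link_prediction(A)
```

Training on a perturbed copy of the input each iteration, rather than on the clean affinities, is what the abstract credits for the improved robustness and generalization on sparse networks.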