Networks are essential for modeling complex systems with interconnected components. Many methods have been developed to infer missing links or predict latent links from the observed network topology. However, the effectiveness of current methods is often limited by the sparsity of large-scale networks. Nonnegative Matrix Factorization (NMF) is one of the most popular low-rank approximation methods and has been successfully applied to large-scale data. However, maintaining generalization from limited observations remains an open challenge. To overcome these challenges, we propose a novel link prediction method based on adversarial NMF, which reconstructs a sparse network through an efficient adversarial training algorithm. Unlike conventional NMF methods, our model accounts for potential test-time adversaries beyond the pre-defined bounds and provides a robust reconstruction with strong generalization. In addition, to preserve the local structure of the network, we use the common-neighbor algorithm to extract node similarity and incorporate it into the low-dimensional latent representation. We also apply Frobenius-norm regularization to prevent the factorization from overfitting. We derive an effective Majorization-Minimization method to learn the model parameters. Extensive experiments on twelve real-world datasets show that our method outperforms state-of-the-art approaches.
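To make the described components concrete, one plausible form of the objective implied by this description is sketched below; the notation is illustrative and not taken from the paper (A denotes the observed adjacency matrix, W and H the nonnegative factors, Δ an adversarial perturbation bounded by ε, L a graph Laplacian built from the common-neighbor similarity matrix, and α, λ trade-off weights):

\[
\min_{W \ge 0,\, H \ge 0} \;\; \max_{\|\Delta\|_F \le \epsilon} \;\; \big\| (A + \Delta) - W H \big\|_F^2 \;+\; \alpha \,\mathrm{tr}\!\left( H L H^{\top} \right) \;+\; \lambda \left( \|W\|_F^2 + \|H\|_F^2 \right),
\]

where the inner maximization models worst-case test-time perturbations, the trace term aligns the latent representation with the common-neighbor similarity structure, and the Frobenius-norm terms guard against overfitting. Under such a formulation, the Majorization-Minimization procedure mentioned above would alternate surrogate updates for W and H.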