Abstract:
The link prediction problem is fundamental to many application domains, and deep learning-based models have recently been proposed to tackle it. The graph auto-encoder (GAE) is a framework for unsupervised learning on graph-structured data, and it achieves competitive results in link prediction tasks on citation networks. Another important problem on graph-structured data is node classification, where graph attention mechanisms have been shown to perform well. This research investigates whether graph attention mechanisms can also achieve good performance in link prediction. We propose the attentive graph auto-encoder (AGAE) model, which incorporates the graph attention mechanism into GAE. The model is compared with GAE on both real-world citation networks and synthetic datasets, and we also investigate how it performs on networks with different characteristics. In general, AGAE achieves performance competitive with GAE on citation networks, while it outperforms GAE on certain synthetic networks.
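The abstract does not specify the AGAE architecture in detail; the sketch below is only an illustration of the general idea it describes, assuming a GAT-style attention encoder paired with the inner-product decoder of the standard GAE. The class name, layer sizes, and the use of PyTorch Geometric's GATConv are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an attention-based encoder in place
# of GAE's GCN encoder, followed by GAE's usual inner-product decoder.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv  # assumed dependency for illustration


class AttentiveGraphAutoEncoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels=32, out_channels=16, heads=4):
        super().__init__()
        # Attention layers replace the graph-convolution layers of a plain GAE.
        self.conv1 = GATConv(in_channels, hidden_channels, heads=heads)
        self.conv2 = GATConv(hidden_channels * heads, out_channels, heads=1)

    def encode(self, x, edge_index):
        # Produce node embeddings Z from features x and the graph structure.
        h = F.elu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

    def decode(self, z, edge_index):
        # Inner-product decoder: edge probability ~ sigmoid(z_i . z_j),
        # as in the original GAE formulation.
        return torch.sigmoid((z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1))
```

For link prediction, such a model is typically trained by reconstructing observed edges against sampled negative edges and evaluated by ranking held-out edges, which is the usual GAE evaluation protocol on citation networks.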