Graph neural networks (GNNs) are connectionist models that capture dependencies in a graph through message passing between its nodes. Unlike standard neural networks, GNNs retain a state that can represent information from a node's neighborhood at arbitrary depth. In recent years, GNNs have found increasingly wide application across domains such as social networks, knowledge graphs, recommender systems, question answering, and even the life sciences.
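The message-passing idea above can be sketched in a few lines of NumPy. This is a minimal illustrative example, not code from any of the surveyed papers; the function and weight names (`gnn_layer`, `W`) are made up for the sketch, and aggregation is a plain neighborhood mean rather than a learned attention scheme.

```python
import numpy as np

def gnn_layer(A, X, W):
    """One message-passing step: each node averages the features of its
    neighbors (and itself, via self-loops), then applies a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # degree of each node
    H = (A_hat @ X) / deg                   # mean aggregation over the neighborhood
    return np.maximum(H @ W, 0.0)           # linear transform + ReLU

# Toy 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)               # one-hot initial node features
W = np.full((4, 2), 0.5)    # toy weight matrix

H1 = gnn_layer(A, X, W)     # after one step, each node sees its 1-hop neighbors
print(H1.shape)             # (4, 2)
```

Stacking such layers widens each node's receptive field by one hop per layer, which is how a GNN's state comes to represent neighborhood information "at arbitrary depth."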

Knowledge Collection

Graph Neural Networks (GNN): A Zhuanzhi Curated Collection

Getting Started

Surveys

  • A Comprehensive Survey on Graph Neural Networks. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu. 2019
    https://arxiv.org/pdf/1901.00596.pdf
  • Relational inductive biases, deep learning, and graph networks. Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu. 2018.
    https://arxiv.org/pdf/1806.01261.pdf
  • Attention models in graphs. John Boaz Lee, Ryan A. Rossi, Sungchul Kim, Nesreen K. Ahmed, Eunyee Koh. 2018.
    https://arxiv.org/pdf/1807.07984.pdf
  • Deep learning on graphs: A survey. Ziwei Zhang, Peng Cui and Wenwu Zhu. 2018.
    https://arxiv.org/pdf/1812.04202.pdf
  • Graph Neural Networks: A Review of Methods and Applications. Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Maosong Sun. 2018
    https://arxiv.org/pdf/1812.08434.pdf
  • Geometric deep learning: going beyond Euclidean data. Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst. 2016.
    https://arxiv.org/pdf/1611.08097.pdf

Advanced Papers

Recurrent Graph Neural Networks

Convolutional Graph Neural Networks

Spectral and Spatial

Architecture

Attention Mechanisms

Convolution

Training Methods

Pooling

Bayesian

Analysis

GAE

Spatial-Temporal Graph Neural Networks

Applications

Physics

Knowledge Graph

Recommender Systems

  • STAR-GCN: Stacked and Reconstructed Graph Convolutional Networks for Recommender Systems. Jiani Zhang, Xingjian Shi, Shenglin Zhao, Irwin King. IJCAI 2019.
    https://arxiv.org/pdf/1905.13129.pdf

  • Binarized Collaborative Filtering with Distilling Graph Convolutional Networks. Haoyu Wang, Defu Lian, Yong Ge. IJCAI 2019.
    https://arxiv.org/pdf/1906.01829.pdf

  • Graph Contextualized Self-Attention Network for Session-based Recommendation. Chengfeng Xu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Fuzhen Zhuang, Junhua Fang, Xiaofang Zhou. IJCAI 2019.
    https://www.ijcai.org/proceedings/2019/0547.pdf

  • Session-based Recommendation with Graph Neural Networks. Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, Tieniu Tan. AAAI 2019.
    https://arxiv.org/pdf/1811.00855.pdf

  • Geometric Hawkes Processes with Graph Convolutional Recurrent Neural Networks. Jin Shang, Mingxuan Sun. AAAI 2019.
    https://jshang2.github.io/pubs/geo.pdf

  • Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems. Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, Zhongyuan Wang. KDD 2019.
    https://arxiv.org/pdf/1905.04413

  • Exact-K Recommendation via Maximal Clique Optimization. Yu Gong, Yu Zhu, Lu Duan, Qingwen Liu, Ziyu Guan, Fei Sun, Wenwu Ou, Kenny Q. Zhu. KDD 2019.
    https://arxiv.org/pdf/1905.07089

  • KGAT: Knowledge Graph Attention Network for Recommendation. Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, Tat-Seng Chua. KDD 2019.
    https://arxiv.org/pdf/1905.07854

  • Knowledge Graph Convolutional Networks for Recommender Systems. Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, Minyi Guo. WWW 2019.
    https://arxiv.org/pdf/1904.12575.pdf

  • Dual Graph Attention Networks for Deep Latent Representation of Multifaceted Social Effects in Recommender Systems. Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Peng He, Paul Weng, Han Gao, Guihai Chen. WWW 2019.
    https://arxiv.org/pdf/1903.10433.pdf

  • Graph Neural Networks for Social Recommendation. Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, Dawei Yin. WWW 2019.
    https://arxiv.org/pdf/1902.07243.pdf

  • Graph Convolutional Neural Networks for Web-Scale Recommender Systems. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, Jure Leskovec. KDD 2018.
    https://arxiv.org/abs/1806.01973

  • Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks. Federico Monti, Michael M. Bronstein, Xavier Bresson. NIPS 2017.
    https://arxiv.org/abs/1704.06803

  • Graph Convolutional Matrix Completion. Rianne van den Berg, Thomas N. Kipf, Max Welling. 2017.
    https://arxiv.org/abs/1706.02263

Computer Vision

Natural Language Processing

Others

Tutorial

Video Tutorials

Code

Domain Experts


Latest Papers

Drug discovery often relies on the successful prediction of protein-ligand binding affinity. Recent advances have shown great promise in applying graph neural networks (GNNs) for better affinity prediction by learning the representations of protein-ligand complexes. However, existing solutions usually treat protein-ligand complexes as topological graph data, thus the biomolecular structural information is not fully utilized. The essential long-range interactions among atoms are also neglected in GNN models. To this end, we propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool). Specifically, PGAL iteratively performs the node-edge aggregation process to update embeddings of nodes and edges while preserving the distance and angle information among atoms. Then, PiPool is adopted to gather interactive edges with a subsequent reconstruction loss to reflect the global interactions. Exhaustive experimental study on two benchmarks verifies the superiority of SIGN.
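The two SIGN components described in this abstract can be caricatured in NumPy. This is a loose sketch under strong simplifications — random toy features instead of 3D protein-ligand geometry, sum aggregation instead of learned attention, and no reconstruction loss — and every name here (`pgal_step`, `pipool`, `W_e`, `W_n`) is hypothetical rather than taken from the authors' code.

```python
import numpy as np

def pgal_step(node_h, edge_h, edges, dist, W_e, W_n):
    """One node-edge aggregation round in the spirit of PGAL: edge embeddings
    are updated from their endpoint nodes plus a geometric (distance) feature,
    then nodes aggregate the messages of their incident edges."""
    src, dst = edges[:, 0], edges[:, 1]
    # distance-aware edge update: [h_src | h_dst | dist | h_edge] -> new h_edge
    edge_in = np.concatenate(
        [node_h[src], node_h[dst], dist[:, None], edge_h], axis=1)
    edge_h = np.tanh(edge_in @ W_e)
    # node update: sum incoming edge messages, with a residual connection
    node_new = np.zeros((node_h.shape[0], W_n.shape[1]))
    np.add.at(node_new, dst, edge_h @ W_n)
    return np.tanh(node_h + node_new), edge_h

def pipool(edge_h, edge_types, n_types):
    """Toy stand-in for PiPool: pool edge embeddings per atom-type pair,
    yielding the interaction summary used for the reconstruction objective."""
    Z = np.zeros((n_types, edge_h.shape[1]))
    np.add.at(Z, edge_types, edge_h)
    return Z

rng = np.random.default_rng(0)
node_h = rng.normal(size=(5, 4))                  # 5 atoms, 4-dim states
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
edge_h = rng.normal(size=(4, 4))                  # 4 edges, 4-dim states
dist = np.array([1.5, 2.1, 1.2, 1.8])             # inter-atomic distances
W_e = rng.normal(size=(13, 4)) * 0.1              # 4+4+1+4 -> 4
W_n = rng.normal(size=(4, 4)) * 0.1

node_h, edge_h = pgal_step(node_h, edge_h, edges, dist, W_e, W_n)
Z = pipool(edge_h, np.array([0, 1, 0, 1]), n_types=2)
print(node_h.shape, Z.shape)   # (5, 4) (2, 4)
```

The point of the sketch is the data flow: geometry enters through edge features during aggregation (rather than being discarded as in purely topological GNNs), and a separate pooling step summarizes pairwise interactions globally.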
