Relation extraction is the task of detecting and identifying semantic relations between entities in text, and of linking mentions that express the same semantic relation.
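As a minimal illustration of the task definition above, here is a toy pattern-based extractor. The patterns, relation labels, and sentences are purely illustrative (not from any system listed below); modern systems learn such indicators from labeled or distantly labeled data instead of hand-writing them.

```python
import re

# Hand-written surface patterns, each mapped to a relation label.
# Illustrative only: real extractors learn these from data.
PATTERNS = [
    (re.compile(r"(?P<head>[A-Z][a-z]+) was born in (?P<tail>[A-Z][a-z]+)"), "place_of_birth"),
    (re.compile(r"(?P<head>[A-Z][a-z]+), founder of (?P<tail>[A-Z][a-z]+)"), "founder_of"),
]

def extract_relations(sentence):
    """Return (head, relation, tail) triples for every pattern match."""
    triples = []
    for pattern, relation in PATTERNS:
        for m in pattern.finditer(sentence):
            triples.append((m.group("head"), relation, m.group("tail")))
    return triples

triples = extract_relations("Chopin was born in Warsaw.")
# Mentions expressing the same relation can then be linked by grouping
# triples that share the same normalized head, relation, and tail.
```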


Relation Extraction: A Curated Collection

Getting Started

Surveys

Papers

2007

  1. Razvan Bunescu, Raymond Mooney. Learning to Extract Relations from the Web using Minimal Supervision. ACL 2007. [https://www.aclweb.org/anthology/P07-1073/]

2009

  1. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, Stan Szpakowicz. SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals. SemEval 2010. [https://www.aclweb.org/anthology/S10-1006/]
  2. Mike Mintz, Steven Bills, Rion Snow and Dan Jurafsky. Distant supervision for relation extraction without labeled data. ACL 2009. [https://web.stanford.edu/~jurafsky/mintz.pdf]

2010

  1. Sebastian Riedel, Limin Yao, and Andrew McCallum. Modeling Relations and Their Mentions without Labeled Text. ECML 2010. [https://link.springer.com/content/pdf/10.1007%2F978-3-642-15939-8_10.pdf]

2011

  1. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, Daniel S. Weld. Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations. ACL-HLT 2011. [https://www.aclweb.org/anthology/P11-1055/]

2012

  1. Richard Socher, Brody Huval, Christopher D. Manning, Andrew Y. Ng. Semantic Compositionality through Recursive Matrix-Vector Spaces. EMNLP-CoNLL 2012. [https://ai.stanford.edu/~ang/papers/emnlp12-SemanticCompositionalityRecursiveMatrixVectorSpaces.pdf]
  2. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, Christopher D. Manning. Multi-instance Multi-label Learning for Relation Extraction. EMNLP-CoNLL 2012. [https://www.aclweb.org/anthology/D12-1042.pdf]

2013

  1. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. NIPS 2013. [https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf]
  2. ChunYang Liu, WenBo Sun, WenHan Chao, WanXiang Che. Convolution Neural Network for Relation Extraction. ADMA 2013. [https://link.springer.com/chapter/10.1007/978-3-642-53917-6_21]

2014

  1. Jeffrey Pennington, Richard Socher, Christopher D. Manning. GloVe: Global Vectors for Word Representation. EMNLP 2014. [https://www.aclweb.org/anthology/D14-1162.pdf]
  2. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou and Jun Zhao. Relation Classification via Convolutional Deep Neural Network. COLING 2014. [https://www.aclweb.org/anthology/C14-1220.pdf]
  3. Mo Yu, Matthew R. Gormley and Mark Dredze. Factor-based Compositional Embedding Models. NIPS Workshop on Learning Semantics 2014. [https://www.cs.cmu.edu/~mgormley/papers/yu+gormley+dredze.nipsw.2014.pdf]

2015

  1. Cicero Nogueira dos Santos, Bing Xiang, Bowen Zhou. Classifying Relations by Ranking with Convolutional Neural Networks. ACL 2015. [https://www.aclweb.org/anthology/P15-1061.pdf]
  2. Dongxu Zhang, Dong Wang. Relation Classification via Recurrent Neural Network. arXiv preprint arXiv:1508.01006 (2015). [https://arxiv.org/abs/1508.01006]
  3. Thien Huu Nguyen, Ralph Grishman. Relation Extraction: Perspective from Convolutional Neural Networks. NAACL-HLT 2015. [https://www.aclweb.org/anthology/W15-1506.pdf]
  4. Shu Zhang, Dequan Zheng, Xinchen Hu, Ming Yang. Bidirectional Long Short-Term Memory Networks for Relation Classification. PACLIC 2015. [https://www.aclweb.org/anthology/Y15-1009/]
  5. Daojian Zeng, Kang Liu, Yubo Chen and Jun Zhao. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. EMNLP 2015. [http://www.emnlp2015.org/proceedings/EMNLP/pdf/EMNLP203.pdf]
  6. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, Houfeng Wang. A Dependency-Based Neural Network for Relation Classification. ACL 2015. [https://www.aclweb.org/anthology/P15-2047.pdf]
  7. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin. Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path. EMNLP 2015. [https://arxiv.org/pdf/1508.03720.pdf]
  8. Kun Xu, Yansong Feng, Songfang Huang, Dongyan Zhao. Semantic Relation Classification via Convolutional Neural Networks with Simple Negative Sampling. EMNLP 2015. [https://www.aclweb.org/anthology/D15-1062.pdf]

2016

  1. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, Maosong Sun. Neural Relation Extraction with Selective Attention over Instances. ACL 2016. [http://nlp.csai.tsinghua.edu.cn/~lyk/publications/acl2016_nre.pdf]
  2. Makoto Miwa, Mohit Bansal. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. ACL 2016. [https://www.aclweb.org/anthology/P16-1105.pdf]
  3. Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, Bo Xu. Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. ACL 2016. [https://www.aclweb.org/anthology/P16-2034/]
  4. Minguang Xiao, Cong Liu. Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention. COLING 2016. [https://www.aclweb.org/anthology/C16-1119/]
  5. Xiaotian Jiang, Quan Wang, Peng Li, Bin Wang. Relation Extraction with Multi-instance Multi-label Convolutional Neural Networks. COLING 2016. [https://www.aclweb.org/anthology/C16-1139.pdf]
  6. Yatian Shen, Xuanjing Huang. Attention-Based Convolutional Neural Network for Semantic Relation Extraction. COLING 2016. [https://www.aclweb.org/anthology/C16-1238.pdf]
  7. Linlin Wang, Zhu Cao, Gerard de Melo and Zhiyuan Liu. Relation Classification via Multi-Level Attention CNNs. ACL 2016. [https://www.aclweb.org/anthology/P16-1123.pdf]
  8. Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin. Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation. COLING 2016. [https://arxiv.org/pdf/1601.03651.pdf]
  9. Rui Cai, Xiaodong Zhang and Houfeng Wang. Bidirectional Recurrent Convolutional Neural Network for Relation Classification. ACL 2016. [https://www.aclweb.org/anthology/P16-1072.pdf]

2017

  1. Yi Wu, David Bamman, Stuart Russell. Adversarial Training for Relation Extraction. EMNLP 2017. [https://www.aclweb.org/anthology/D17-1187.pdf]
  2. Tianyu Liu, Kexiang Wang, Baobao Chang, Zhifang Sui. A Soft-label Method for Noise-tolerant Distantly Supervised Relation Extraction. EMNLP 2017. [https://www.aclweb.org/anthology/D17-1189.pdf]
  3. Wenyuan Zeng, Yankai Lin, Zhiyuan Liu, Maosong Sun. Incorporating Relation Paths in Neural Relation Extraction. EMNLP 2017. [https://www.aclweb.org/anthology/D17-1186.pdf]
  4. Ji Young Lee, Franck Dernoncourt, Peter Szolovits. MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks. SemEval 2017. [https://arxiv.org/pdf/1704.01523.pdf]
  5. Desh Raj, Sunil Kumar Sahu, Ashish Anand. Learning local and global contexts using a convolutional recurrent network model for relation classification in biomedical text. CoNLL 2017. [https://www.aclweb.org/anthology/K17-1032.pdf]
  6. Hai Ye, Wenhan Chao, Zhunchen Luo, Zhoujun Li. Jointly Extracting Relations with Class Ties via Effective Deep Ranking. ACL 2017. [https://www.aclweb.org/anthology/P17-1166.pdf]
  7. Meishan Zhang, Yue Zhang, Guohong Fu. End-to-End Neural Relation Extraction with Global Optimization. EMNLP 2017. [https://www.aclweb.org/anthology/D17-1182.pdf]
  8. Fei Li, Meishan Zhang, Guohong Fu, Donghong Ji. A neural joint model for entity and relation extraction from biomedical text. BMC Bioinformatics 2017. [https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-017-1609-9]
  9. Yuntian Feng, Hongjun Zhang, Wenning Hao, Gang Chen. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning. Computational Intelligence and Neuroscience 2017. [https://www.hindawi.com/journals/cin/2017/7643065/]

2018

  1. Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou. A Walk-based Model on Entity Graphs for Relation Extraction. ACL 2018. [https://www.aclweb.org/anthology/P18-2014.pdf]
  2. Pengda Qin, Weiran Xu, William Yang Wang. DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction. ACL 2018. [https://www.aclweb.org/anthology/P18-1046.pdf]
  3. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, Xiaoyan Zhu. Reinforcement Learning for Relation Classification from Noisy Data. AAAI 2018. [https://tianjun.me/static/essay_resources/RelationExtraction/Paper/AAAI2018Denoising.pdf]
  4. Pengda Qin, Weiran Xu, William Yang Wang. Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning. ACL 2018. [https://arxiv.org/pdf/1805.09927.pdf]
  5. Xu Han, Zhiyuan Liu, Maosong Sun. Neural Knowledge Acquisition via Mutual Attention between Knowledge Graph and Text. AAAI 2018. [http://nlp.csai.tsinghua.edu.cn/~lzy/publications/aaai2018_jointnre.pdf]
  6. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li. Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention. EMNLP 2018. [https://www.aclweb.org/anthology/D18-1247.pdf]
  7. Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Partha Talukdar, Chiranjib Bhattacharyya. RESIDE: Improving Distantly-Supervised Neural Relation Extraction using Side Information. EMNLP 2018. [https://www.aclweb.org/anthology/D18-1157.pdf]
  8. Tianyi Liu, Xinsong Zhang, Wanhao Zhou, Weijia Jia. Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning. EMNLP 2018. [https://arxiv.org/pdf/1808.06738.pdf]
  9. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. EMNLP 2018. [https://arxiv.org/pdf/1810.10147.pdf]
  10. Zhengqiu He, Wenliang Chen, Zhenghua Li, Meishan Zhang, Wei Zhang, Min Zhang. SEE: Syntax-aware Entity Embedding for Neural Relation Extraction. AAAI 2018. [https://arxiv.org/pdf/1801.03603.pdf]

2019

  1. Joohong Lee, Sangwoo Seo, Yong Suk Choi. Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing. arXiv 2019. [https://arxiv.org/pdf/1901.08163.pdf]
  2. Shanchan Wu, Yifan He. Enriching Pre-trained Language Model with Entity Information for Relation Classification. arXiv 2019. [https://arxiv.org/pdf/1905.08284.pdf]
  3. Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, Shiliang Pu, Fei Wu, Xiang Ren. Cross-relation Cross-bag Attention for Distantly-supervised Relation Extraction. AAAI 2019. [https://arxiv.org/pdf/1812.10604.pdf]
  4. Shanchan Wu, Kai Fan, Qiong Zhang. Improving Distantly Supervised Relation Extraction with Neural Noise Converter and Conditional Optimal Selector. AAAI 2019. [https://arxiv.org/pdf/1811.05616.pdf]
  5. Xinsong Zhang, Pengshuai Li, Weijia Jia, Hai Zhao. Multi-labeled Relation Extraction with Attentive Capsule Network. AAAI 2019. [https://arxiv.org/pdf/1811.04354.pdf]
  6. Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, Minlie Huang. A Hierarchical Framework for Relation Extraction with Reinforcement Learning. AAAI 2019. [https://arxiv.org/pdf/1811.03925.pdf]
  7. Sahil Garg, Aram Galstyan, Greg Ver Steeg, Irina Rish, Guillermo Cecchi, Shuyang Gao. Kernelized Hashcode Representations for Relation Extraction. AAAI 2019. [https://arxiv.org/pdf/1711.04044.pdf]

2020

  1. Yang Li, Guodong Long, Tao Shen, Tianyi Zhou, Lina Yao, Huan Huo, Jing Jiang. Self-Attention Enhanced Selective Gate with Entity-Aware Embedding for Distantly Supervised Relation Extraction. AAAI 2020. [https://arxiv.org/pdf/1911.11899.pdf]
  2. Tapas Nayak, Hwee Tou Ng. Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction. AAAI 2020. [https://arxiv.org/pdf/1911.09886.pdf]

Video Tutorials

  1. CS 124: From Languages to Information, Dan Jurafsky. [Week 5: Relation Extraction and Question Answering]
  2. University of Michigan (Coursera), Dragomir R. Radev. [Lecture 48: Relation Extraction]
  3. Stanford CS224U: Natural Language Understanding | Lecture 7: Relation Extraction. [https://www.bilibili.com/video/av56067156?p=7]

Code

  1. OpenNRE: https://github.com/thunlp/OpenNRE
  2. FewRel, a large-scale human-annotated relation extraction dataset released by the Tsinghua University Natural Language Processing Lab: https://github.com/ProKil/FewRel#fewrel-dataset-toolkits-and-baseline-models

Domain Experts

  1. University of Washington: Luke Zettlemoyer
  2. The University of Texas at Austin: Raymond J. Mooney
  3. Stanford University: Dan Jurafsky, Bill MacCartney, Christopher Potts
  4. New York University: Ralph Grishman
  5. Institute of Automation, Chinese Academy of Sciences: Jun Zhao (趙軍)

Datasets

  1. ACE 2005 Multilingual Training Corpus: [https://catalog.ldc.upenn.edu/LDC2006T06]
  2. SemEval-2010 Task 8 Dataset: multi-way classification of semantic relations between pairs of nominals. [http://semeval2.fbk.eu/semeval2.php?location=tasks#T11]
  3. NYT Dataset: generated by aligning Freebase relations with the New York Times corpus, with sentences from 2005-2006 used as the training corpus and sentences from 2007 as the test corpus. [http://iesl.cs.umass.edu/riedel/ecml/]
  4. Few-shot Datasets: FewRel, a relation classification dataset with 100 relation types and 70,000 instances, substantially larger than previous human-annotated datasets of its kind. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. [https://www.aclweb.org/anthology/D18-1514/]
  5. TAC Relation Extraction Dataset: TACRED, developed by the Stanford NLP Group, is a large-scale relation extraction dataset with 106,264 examples built over English newswire and web text used in the NIST TAC KBP English slot-filling evaluations from 2009 to 2014. [https://nlp.stanford.edu/projects/tacred/]
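The NYT dataset above is built by distant supervision: any sentence mentioning both arguments of a known knowledge-base triple is treated as a (noisy) positive example of that relation (Mintz et al. 2009; Riedel et al. 2010). A minimal sketch of that alignment step follows; the knowledge-base triples and corpus sentences are toy examples invented for illustration.

```python
# Toy knowledge base of (head, tail) -> relation, standing in for Freebase.
KB = {
    ("Barack Obama", "Hawaii"): "born_in",
    ("Apple", "Cupertino"): "headquartered_in",
}

def distant_label(sentences):
    """Label every sentence containing both entities of a KB pair.

    Intentionally naive: any co-occurrence is labeled, which is exactly
    the noise that multi-instance learning and selective attention
    (see the 2012-2016 papers above) try to mitigate.
    """
    labeled = []
    for sent in sentences:
        for (head, tail), relation in KB.items():
            if head in sent and tail in sent:
                labeled.append((sent, head, relation, tail))
    return labeled

corpus = [
    "Barack Obama was born in Hawaii.",
    "Barack Obama visited Hawaii last year.",   # noisy positive
    "Apple opened a new campus in Cupertino.",
]
examples = distant_label(corpus)
```

Note that the second sentence is labeled `born_in` even though it expresses no such relation; this is the label noise that motivates much of the distantly supervised work listed above.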

VIP Content

Knowledge graphs have long been a focus of both academia and industry, yet books about them remain scarce. Mayank Kejriwal, a computer scientist at the University of Southern California, has written "Domain-Specific Knowledge Graph Construction", a 115-page book covering what knowledge graphs are, information extraction, entity linking, knowledge graph completion, and example knowledge graphs. Well worth reading!

Domain-Specific Knowledge Graph Construction

Domain-specific knowledge graphs have emerged as a research direction in their own right and are developing rapidly. Graph-based methods have been part of artificial intelligence since the field's earliest days, but automatically representing large volumes of data as a graph is a relatively modern invention. With the advent of the Web and the demand for smarter search engines, the Google Knowledge Graph was born. It has changed how we interact with search engines, even if we often do not realize it. For example, it is no longer unusual for a user to search for something without clicking any link at all; increasingly, the search engine itself can answer the user's question directly. Organically combining traditional search with images, news, and video adds rich elements to these interactions.

Domain-specific knowledge graph construction (KGC) is an active research area whose recent impressive progress has been driven by machine learning techniques such as deep neural networks and word embeddings. The book synthesizes knowledge graph construction over Web data in an engaging and accessible way.

[Figure: example knowledge graphs]

[Figure: the Google Knowledge Graph construction pipeline]

Table of Contents:

1. What is a Knowledge Graph?
   1.1 Introduction
   1.2 Example 1: Academia
   1.3 Example 2: Products and Companies
   1.4 Example 3: Geopolitical Events
   1.5 Conclusion
2. Information Extraction
   2.1 Introduction
   2.2 Challenges in IE
   2.3 Categories of IE Tasks
       2.3.1 Named Entity Recognition
       2.3.2 Relation Extraction
       2.3.3 Event Extraction
       2.3.4 Web IE
   2.4 Evaluating IE Performance
   2.5 Summary
3. Entity Resolution
   3.1 Introduction
   3.2 Challenges and Requirements
   3.3 The Two-Step Framework
   3.4 Performance Metrics
   3.5 Extensions to the Two-Step Workflow
   3.6 Overview of Related Work
   3.7 Summary
4. Advanced Topic: Knowledge Graph Completion
   4.1 Introduction
   4.2 Knowledge Graph Embeddings
       4.2.1 TransE
       4.2.2 TransE Extensions and Alternatives
       4.2.3 Limitations
       4.2.4 Frontiers and Related Work
       4.2.5 Applications of KGEs
   4.3 Conclusion
5. Ecosystems
   5.1 Introduction
   5.2 Linked Data on the Web
       5.2.1 Linked Data Principles
       5.2.2 The Technology Stack
       5.2.3 Linked Open Data
       5.2.4 Example: DBpedia
   5.3 Google Knowledge Graph
   5.4 Schema.org
   5.5 Future Outlook

Download link: https://pan.baidu.com/s/1vnyVBRn8GclvwEOH_eqM2g (extraction code: 4y44)


Latest Papers

Reader reviews of literary fiction on social media, especially those in persistent, dedicated forums, create and are in turn driven by underlying narrative frameworks. In their comments about a novel, readers generally include only a subset of characters and their relationships, thus offering a limited perspective on that work. Yet in aggregate, these reviews capture an underlying narrative framework comprising different actants (people, places, things), their roles, and interactions, which we label the "consensus narrative framework". We represent this framework in the form of an actant-relationship story graph. Extracting this graph is a challenging computational problem, which we pose as a latent graphical model estimation problem. Posts and reviews are viewed as samples of subgraphs/networks of the hidden narrative framework. Inspired by the qualitative narrative theory of Greimas, we formulate a graphical generative Machine Learning (ML) model where nodes represent actants, and multi-edges and self-loops among nodes capture context-specific relationships. We develop a pipeline of interlocking automated methods to extract key actants and their relationships, and apply it to thousands of reviews and comments posted on Goodreads.com. We manually derive the ground truth narrative framework from SparkNotes, and then use word embedding tools to compare relationships in ground truth networks with our extracted networks. We find that our automated methodology generates highly accurate consensus narrative frameworks: for our four target novels, with approximately 2,900 reviews per novel, we report average coverage/recall of important relationships of > 80% and an average edge detection rate of > 89%. These extracted narrative frameworks can generate insight into how people (or classes of people) read and how they recount what they have read to others.
