Data Analysis and Knowledge Discovery  2023, Vol. 7 Issue (7): 1-17    DOI: 10.11925/infotech.2096-3467.2023.0498
Review of Knowledge Elements Extraction in Scientific Literature Based on Deep Learning
Li Guangjian, Yuan Yue
Department of Information Management, Peking University, Beijing 100871, China
Abstract  

[Objective] This paper reviews research on extracting knowledge elements from scientific literature with deep learning techniques. [Coverage] We searched Web of Science, Google Scholar, and CNKI with keywords such as “knowledge elements” and “deep learning”, and manually selected 71 representative articles for the review. [Methods] We first outline the concepts and characteristics of knowledge elements in scientific literature, and then summarize the deep learning techniques used for knowledge element extraction in existing studies. [Results] Existing methods extract knowledge elements at the word level or the sentence level; learning and capturing the distinct characteristics of these two levels is the key to applying deep learning to knowledge element extraction. [Limitations] The review is based on the selected sample of literature and may not fully cover all work in the field. [Conclusions] Deep learning has improved the accuracy, coverage, and robustness of knowledge element extraction. Future studies should go beyond the structured information of scientific literature and focus on understanding its internal knowledge content and inherent logic.
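As a concrete illustration of the word-level extraction discussed in the review, the sketch below frames knowledge element extraction as token classification over a pre-trained encoder. It is a minimal example, not the method of any surveyed paper; the model name and the tag set (method and dataset mentions) are illustrative assumptions.

```python
# Minimal, illustrative sketch: word-level knowledge element extraction as
# token classification with a pre-trained encoder (Hugging Face transformers).
# The model name and the tag set below are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-METHOD", "I-METHOD", "B-DATASET", "I-DATASET"]  # hypothetical tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

sentence = "We train a BiLSTM-CRF model on the SciERC corpus."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(f"{token}\t{labels[label_id]}")        # classification head is untrained here
```

Sentence-level extraction follows the same pattern, with a sentence classification head (e.g., rhetorical role or structure-function labels) in place of the token classification head.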

Key words: Knowledge Elements; Knowledge Elements Extraction; Scientific Literature; Deep Learning
Received: 25 May 2023      Published: 07 September 2023
Chinese Library Classification (ZTFLH): G353
Corresponding Author: Yuan Yue, ORCID: 0000-0002-5224-6630, E-mail: yuanyue@stu.pku.edu.cn

Cite this article:

Li Guangjian, Yuan Yue. Review of Knowledge Elements Extraction in Scientific Literature Based on Deep Learning. Data Analysis and Knowledge Discovery, 2023, 7(7): 1-17.

URL:

https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2023.0498     OR     https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2023/V7/I7/1

Figures:
An Example of Static Word Embedding
An Example of Dynamic Word Embedding
Enhancement Process of Training Annotated Data Based on Distant Supervision
Enhancement Process of Knowledge Feature Based on Domain Transfer
Example of Short-Term Dependency
Example of Long-Term Dependency
Feature Recognition Process Based on Syntactic Dependency Analysis
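The first two figures contrast static and dynamic (contextual) word embeddings. The sketch below illustrates the distinction under stated assumptions (gensim and Hugging Face transformers installed; the toy corpus and "bert-base-uncased" are illustrative choices): a static model assigns one vector per word type, while a contextual encoder produces different vectors for the same word in different sentences.

```python
# Illustrative contrast between a static embedding (one vector per word) and a
# dynamic, contextual embedding (vector depends on the sentence). Assumptions:
# gensim and transformers are installed; corpus and model names are toy choices.
import torch
from gensim.models import Word2Vec
from transformers import AutoTokenizer, AutoModel

# Static: the word "extraction" always maps to the same 50-dimensional vector.
corpus = [["knowledge", "element", "extraction"],
          ["deep", "learning", "for", "scientific", "literature"]]
w2v = Word2Vec(sentences=corpus, vector_size=50, min_count=1)
print(w2v.wv["extraction"].shape)

# Dynamic: the vector for "bank" changes with its sentential context.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence, word):
    ids = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**ids).last_hidden_state[0]    # (seq_len, 768)
    pos = tok.tokenize(sentence).index(word) + 1  # +1 skips the [CLS] token
    return out[pos]

v1 = embed("the river bank was muddy", "bank")
v2 = embed("the bank approved the loan", "bank")
print(torch.cosine_similarity(v1, v2, dim=0))     # < 1: representation is context-dependent
```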
Related Articles:
[1] Wu Jialun, Zhang Ruonan, Kang Wulin, Yuan Puwei. Deep Learning Model of Drug Recommendation Based on Patient Similarity Analysis[J]. Data Analysis and Knowledge Discovery, 2023, 7(6): 148-160.
[2] Wang Xiaofeng, Sun Yujie, Wang Huazhen, Zhang Hengzhang. Construction and Verification of Type-Controllable Question Generation Model Based on Deep Learning and Knowledge Graphs[J]. Data Analysis and Knowledge Discovery, 2023, 7(6): 26-37.
[3] Wang Nan, Wang Qi. Evaluating Student Engagement with Deep Learning[J]. Data Analysis and Knowledge Discovery, 2023, 7(6): 123-133.
[4] Liu Yang, Zhang Wen, Hu Yi, Mao Jin, Huang Fei. Hotel Stock Prediction Based on Multimodal Deep Learning[J]. Data Analysis and Knowledge Discovery, 2023, 7(5): 21-32.
[5] Huang Xuejian, Ma Tinghuai, Wang Gensheng. Detecting Weibo Rumors Based on Hierarchical Semantic Feature Learning Model[J]. Data Analysis and Knowledge Discovery, 2023, 7(5): 81-91.
[6] Wang Yinqiu, Yu Wei, Chen Junpeng. Automatic Question-Answering in Chinese Medical Q & A Community with Knowledge Graph[J]. Data Analysis and Knowledge Discovery, 2023, 7(3): 97-109.
[7] Zhang Zhengang, Yu Chuanming. Knowledge Graph Completion Model Based on Entity and Relation Fusion[J]. Data Analysis and Knowledge Discovery, 2023, 7(2): 15-25.
[8] Shen Lining, Yang Jiayi, Pei Jiaxuan, Cao Guang, Chen Gongzheng. A Fine-Grained Sentiment Recognition Method Based on OCC Model and Triggering Events[J]. Data Analysis and Knowledge Discovery, 2023, 7(2): 72-85.
[9] Wang Weijun, Ning Zhiyuan, Du Yi, Zhou Yuanchun. Identifying Interdisciplinary Sci-Tech Literature Based on Multi-Label Classification[J]. Data Analysis and Knowledge Discovery, 2023, 7(1): 102-112.
[10] Nie Weimin, Ou Shiyan. A Modified Hybrid Method to Identify Cited Spans[J]. Data Analysis and Knowledge Discovery, 2023, 7(1): 113-127.
[11] Xiao Yuhan, Lin Huiping. Mining Differentiated Demands with Aspect Word Extraction: Case Study of Smartphone Reviews[J]. Data Analysis and Knowledge Discovery, 2023, 7(1): 63-75.
[12] Cheng Quan, She Dexin. Drug Recommendation Based on Graph Neural Network with Patient Signs and Medication Data[J]. Data Analysis and Knowledge Discovery, 2022, 6(9): 113-124.
[13] Wang Lu, Le Xiaoqiu. Research Progress on Citation Analysis of Scientific Papers[J]. Data Analysis and Knowledge Discovery, 2022, 6(4): 1-15.
[14] Zheng Xiao, Li Shuqing, Zhang Zhiwang. Measuring User Item Quality with Rating Analysis for Deep Recommendation Model[J]. Data Analysis and Knowledge Discovery, 2022, 6(4): 39-48.
[15] Yu Chuanming, Lin Hongjun, Zhang Zhengang. Joint Extraction Model for Entities and Events with Multi-task Deep Learning[J]. Data Analysis and Knowledge Discovery, 2022, 6(2/3): 117-128.