Data Analysis and Knowledge Discovery, 2020, Vol. 4 Issue (5): 1-14     https://doi.org/10.11925/infotech.2096-3467.2019.1317
Review of Attention Mechanism in Natural Language Processing
Shi Lei1,Wang Yi2,Cheng Ying2,3,Wei Ruibin1()
1School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
2School of Information Management, Nanjing University, Nanjing 210023, China
3School of Chinese Language and Literature, Shandong Normal University, Jinan 250014, China

Abstract

[Objective] This paper summarizes the evolution and application of attention mechanisms in natural language processing. [Coverage] We searched the title/topic fields of WoS, the ACM Digital Library, arXiv and CNKI for "attention" (and its Chinese equivalent "注意力") from January 2015 to October 2019, then manually screened the results for literature in the field of natural language processing, obtaining 68 relevant papers. [Methods] We first summarized the general form of the attention mechanism and sorted out its derivatives. We then thoroughly reviewed their applications in natural language processing tasks. [Results] Applications of attention mechanisms in natural language processing concentrate on sequence labeling, text classification, reasoning, and generative tasks, and there are adaptation rules between tasks and the various types of attention. [Limitations] Some of the adaptations between mechanisms and tasks were inferred indirectly from overall model performance; the performance differences among attention mechanisms require further study. [Conclusions] Research on attention mechanisms has effectively advanced natural language processing, but how attention works is not yet fully understood. Future research should improve its interpretability and bring it closer to genuine human attention.

Keywords: Attention Mechanism; Self-Attention; Machine Translation; Machine Reading Comprehension; Sentiment Analysis
Received: 2019-12-10      Published: 2020-06-15
CLC number: TP391.1
Funding: This work is a partial result of the Major Program of the National Social Science Fund of China, "Construction and Research of a Full-Text Database of Modern Chinese Literary Periodicals (1872-1949)" (Grant No. 17ZDA276).
Corresponding author: Wei Ruibin, E-mail: rbwxy@126.com
Cite this article:
Shi Lei,Wang Yi,Cheng Ying,Wei Ruibin. Review of Attention Mechanism in Natural Language Processing. Data Analysis and Knowledge Discovery, 2020, 4(5): 1-14.
Link to this article:
http://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2019.1317      or      http://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2020/V4/I5/1
Fig.1  Schematic of an NMT model with an attention mechanism
Fig.2  General form of the attention mechanism
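The general form of attention sketched in Fig.2 scores each key against a query, normalizes the scores with a softmax, and returns the weighted sum of the values. A minimal NumPy illustration of this pattern, using the scaled dot-product score of Vaswani et al.[6] (function and variable names are ours; Bahdanau-style attention[3] instead computes scores with a small feed-forward network):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """General attention: score -> softmax weights -> weighted sum of values."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # compatibility of the query with each key
    weights = softmax(scores)            # attention distribution over source elements
    return weights @ values, weights     # context vector and the weights themselves

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))           # 5 source elements of dimension 8
values = rng.normal(size=(5, 8))
context, w = attention(rng.normal(size=8), keys, values)
assert np.isclose(w.sum(), 1.0)          # the weights form a probability distribution
assert context.shape == (8,)
```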
Attention type | Scope
Global attention | All source elements
Local attention | A window centered on the aligned position
Hard attention | A single element
Sparse attention | A sparsely distributed subset of elements
Structured attention | A set of structurally related elements
Table 1  Classification of attention mechanisms by scope
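The scope taxonomy of Table 1 can largely be expressed as different masks over the same score vector. A minimal NumPy sketch of three of the variants (the scores, window center and window size are illustrative values of ours, not from any cited model):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([0.1, 2.0, 0.3, 1.5, -0.5])  # compatibility scores over 5 elements

# Global attention: distribute weight over all elements.
global_w = softmax(scores)

# Local attention: restrict attention to a window around the aligned position p.
p, D = 1, 1                                    # window center and half-width
mask = np.full_like(scores, -np.inf)
mask[max(0, p - D):p + D + 1] = 0.0            # exp(-inf) = 0 outside the window
local_w = softmax(scores + mask)

# Hard attention: all weight on a single element (here the argmax;
# in practice the element is usually sampled during training).
hard_w = np.zeros_like(scores)
hard_w[scores.argmax()] = 1.0

assert np.isclose(local_w[3:].sum(), 0.0)      # elements outside the window get no weight
```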
Author | Model | Restaurant (%) | Laptop (%) | Twitter (%) | Attention
Wang et al.[32] | LSTM | 74.3 | 66.5 | 66.5 | —
Tang et al.[33] | TD-LSTM | 75.6 | 68.1 | 70.8 | Contextualized attention
Wang et al.[32] | ATAE-LSTM | 77.2 | 68.7 | — | Aspect-embedding attention
Ma et al.[21] | IAN | 78.6 | 72.1 | — | Coarse-grained interactive attention
Liu et al.[34] | BiLSTM-ATT-G | 79.7 | 73.1 | 70.4 | Contextualized attention
Huang et al.[35] | AOA-LSTM | 81.2 | 74.5 | — | Fine-grained bidirectional attention
Fan et al.[36] | MGAN | 81.2 | 75.4 | 72.5 | Multi-grained bidirectional attention
Zheng et al.[37] | LCR-Rot | 81.3 | 75.2 | 72.7 | Contextualized coarse-grained bidirectional attention
Li et al.[38] | HAPN | 82.2 | 77.3 | — | Hierarchical attention
Song et al.[39] | AEN-BERT | 83.1 | 80.0 | 74.7 | Multi-head self-attention
Table 2  Performance of selected aspect-level sentiment analysis models (sentiment polarity accuracy, %)
Author | Model | Exact Match (%) | F1 (%) | Attention
Wang et al.[43] | Match-LSTM | 64.7 | 73.7 | —
Xiong et al.[44] | DCN | 66.2 | 75.9 | Coattention
Seo et al.[17] | BiDAF | 68.0 | 77.3 | Bidirectional attention
Gong et al.[45] | Ruminating Reader | 70.6 | 79.5 | Bidirectional multi-hop attention
Wang et al.[42] | R-Net | 72.3 | 80.7 | Self-matching attention
Peters et al.[46] | BiDAF+Self-Attention | 72.1 | 81.1 | Bidirectional attention + self-attention
Liu et al.[47] | PhaseCond | 72.6 | 81.4 | K2Q attention + self-attention
Yu et al.[48] | QANet | 76.2 | 84.6 | Coattention + self-attention
Wang et al.[49] | SLQA+ | 80.4 | 87.0 | Coattention + self-attention
Table 3  Performance of selected machine reading comprehension models on SQuAD
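The bidirectional attention that dominates Table 3 (e.g. BiDAF[17]) flows in both directions over a shared context-question similarity matrix: context-to-query and query-to-context. A minimal sketch with a plain dot-product similarity (BiDAF itself uses a trainable similarity function; the dimensions and names here are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
n_c, n_q, d = 6, 4, 5
C = rng.normal(size=(n_c, d))   # context word representations
Q = rng.normal(size=(n_q, d))   # question word representations

S = C @ Q.T                     # (n_c, n_q) shared similarity matrix

# Context-to-query: each context word attends over the question words.
c2q = softmax(S, axis=1) @ Q    # (n_c, d)

# Query-to-context: attend over context words by their best question match.
b = softmax(S.max(axis=1))      # (n_c,) importance of each context word
q2c = b @ C                     # (d,) single attended context vector

assert c2q.shape == (n_c, d) and q2c.shape == (d,)
```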
Author | Model | Train acc. (%) | Test acc. (%) | Attention
Bowman et al.[50] | 300D LSTM Encoders | 83.9 | 80.6 | —
Rocktaschel et al.[19] | 100D LSTM with Attention | 85.3 | 83.5 | Two-way attention
Lin et al.[27] | 300D Structured Self-Attentive Sentence Embedding | — | 84.4 | Self-attention
Shen et al.[28] | 300D Directional Self-Attention Network (DiSAN) | 91.1 | 85.6 | Directional self-attention
Cheng et al.[22] | 300D LSTMN Deep Fusion | — | 85.7 | Inter-attention + intra-attention
Im et al.[51] | 300D Distance-based Self-Attention Network | 89.6 | 86.3 | Directional + distance self-attention
Shen et al.[52] | 300D ReSAN | 92.6 | 86.3 | Hybrid hard/soft self-attention
Parikh et al.[53] | 300D Intra-Sentence Attention | 90.5 | 86.8 | Inter-attention + intra-attention
Tay et al.[54] | 300D CAFE (AVGMAX+300D HN) | 89.8 | 88.5 | Inter-attention + intra-attention
Table 4  Performance of selected NLI models on SNLI
Author | Model | Network | BLEU (%) En-De | BLEU (%) En-Fr | FLOPs En-De | FLOPs En-Fr
Wu et al.[59] | GNMT+RL | LSTM | 24.6 | 39.92 | 2.3×10^19 | 1.4×10^20
Wu et al.[59] | GNMT+RL (ensemble) | LSTM | 26.3 | 41.16 | 1.8×10^20 | 1.1×10^21
Gehring et al.[60] | ConvS2S | CNN | 25.16 | 40.46 | 9.6×10^18 | 1.5×10^20
Gehring et al.[60] | ConvS2S (ensemble) | CNN | 26.36 | 41.29 | 7.7×10^19 | 1.2×10^21
Vaswani et al.[6] | Transformer (big) | Multi-head self-attention | 28.4 | 41.0 | 2.3×10^19 | —
Table 5  Performance of selected NMT models on WMT14 (training cost in FLOPs)
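The multi-head self-attention of the Transformer[6] in Table 5 lets every position attend over the whole sequence, with each head operating in a lower-dimensional subspace. A minimal NumPy sketch of a single layer (toy dimensions and random weights; a real implementation uses learned parameters, masking, and batching):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, h):
    """One layer of Transformer-style multi-head self-attention:
    project to queries/keys/values, attend per head, concatenate, project."""
    n, d = X.shape
    dk = d // h                                          # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for i in range(h):
        s = slice(i * dk, (i + 1) * dk)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dk))   # (n, n) weights per head
        heads.append(A @ V[:, s])
    return np.concatenate(heads, axis=1) @ Wo            # merge heads

rng = np.random.default_rng(1)
n, d, h = 4, 8, 2
X = rng.normal(size=(n, d))                              # 4 tokens of dimension 8
W = [rng.normal(size=(d, d)) for _ in range(4)]          # Wq, Wk, Wv, Wo
out = multi_head_self_attention(X, *W, h)
assert out.shape == (n, d)                               # same shape as the input
```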
Author | Corpus (avg. document/summary words) | Attention | ROUGE-1 (%) | ROUGE-2 (%) | ROUGE-L (%)
Nallapati et al.[15] | CNN/Daily Mail (766/53) | Global attention | 32.49 | 11.84 | 29.47
Nallapati et al.[15] | CNN/Daily Mail (766/53) | Hierarchical attention (word-sentence) | 32.75 | 12.21 | 29.01
Cohan et al.[57] | arXiv (4,938/220) | Global attention | 32.06 | 9.04 | 25.16
Cohan et al.[57] | arXiv (4,938/220) | Hierarchical attention (word-discourse) | 35.80 | 11.05 | 31.80
Table 6  Performance of hierarchical attention on selected abstractive summarization tasks
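The word-sentence hierarchical attention in Table 6 (in the style of Yang et al.[14]) first attends over the words of each sentence to build sentence vectors, then attends over those sentence vectors to build a document vector. A minimal sketch with learned context vectors replaced by random ones (shapes and names are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(H, u):
    """One attention layer: weight each row of H by its score against u."""
    w = softmax(H @ u)
    return w @ H

rng = np.random.default_rng(2)
d = 6
doc = [rng.normal(size=(5, d)), rng.normal(size=(7, d))]  # word states of 2 sentences
u_word = rng.normal(size=d)                               # word-level context vector
u_sent = rng.normal(size=d)                               # sentence-level context vector

# Word-level attention yields one vector per sentence;
# sentence-level attention then yields the document representation.
sent_vecs = np.stack([attend(H, u_word) for H in doc])
doc_vec = attend(sent_vecs, u_sent)
assert doc_vec.shape == (d,)
```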
[1] Kastner S, Ungerleider L G. Mechanisms of Visual Attention in the Human Cortex[J]. Annual Review of Neuroscience, 2000,23(1):315-341.
[2] Mnih V, Heess N, Graves A, et al. Recurrent Models of Visual Attention [C]// Proceedings of the Conference of Neural Information Processing Systems 2014, Montreal, Canada. 2014.
[3] Bahdanau D, Cho K, Bengio Y. Neural Machine Translation by Jointly Learning to Align and Translate [C]// Proceedings of the International Conference on Learning Representations, San Diego, USA. 2015.
[4] Hu D. An Introductory Survey on Attention Mechanisms in NLP Problems[OL]. arXiv Preprint, arXiv:1811.05544.
[5] Chaudhari S, Polatkan G, Ramanath R, et al. An Attentive Survey of Attention Models[OL]. arXiv Preprint, arXiv: 1904.02874.
[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is All You Need [C]// Proceedings of Conference of Neural Information Processing Systems, Long Beach, USA. 2017: 6000-6010.
[7] Luong M T, Pham H, Manning C D. Effective Approaches to Attention-based Neural Machine Translation [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal. 2015: 1412-1421.
[8] Li Y, Kaiser L, Bengio S, et al. Area Attention[OL]. arXiv Preprint, arXiv:1810.10126.
[9] Mirsamadi S, Barsoum E, Zhang C. Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention [C]// Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, New Orleans, USA. 2017.
[10] Yang B, Tu Z, Wong D F, et al. Modeling Localness for Self-Attention Networks [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 4449-4458.
[11] Xu K, Ba J, Kiros R, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention [C]// Proceedings of the International Conference on Machine Learning, Lille, France. 2015: 2048-2057.
[12] Martins A F T, Astudillo R F. From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification [C]// Proceedings of the International Conference on Machine Learning, New York, USA. 2016.
[13] Kim Y, Denton C, Hoang L, et al. Structured Attention Networks [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[14] Yang Z, Yang D, Dyer C, et al. Hierarchical Attention Networks for Document Classification [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, USA. 2016.
[15] Nallapati R, Zhou B, Gulcehre C, et al. Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond [C]// Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, Berlin, Germany. 2016: 280-290.
[16] Celikyilmaz A, Bosselut A, He X, et al. Deep Communicating Agents for Abstractive Summarization [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 1: Long Papers, New Orleans, Louisiana, USA. 2018: 1662-1675.
[17] Seo M, Kembhavi A, Farhadi A, et al. Bi-directional Attention Flow for Machine Comprehension [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[18] Lu J, Yang J, Batra D, et al. Hierarchical Question-Image Co-Attention for Visual Question Answering [C]// Proceedings of the Neural Information Processing Systems, Barcelona, Spain. 2016: 289-297.
[19] Rocktaschel T, Grefenstette E, Hermann K M, et al. Reasoning About Entailment with Neural Attention [C]// Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico. 2016.
[20] dos Santos C, Tan M, Xiang B, et al. Attentive Pooling Networks[OL]. arXiv Preprint, arXiv:1602.03609.
[21] Ma D, Li S, Zhang X, et al. Interactive Attention Networks for Aspect-Level Sentiment Classification [C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia. 2017: 4068-4074.
[22] Cheng J, Dong L, Lapata M. Long Short-term Memory-networks for Machine Reading [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 551-561.
[23] Cui Y, Chen Z, Wei S, et al. Attention-over-Attention Neural Networks for Reading Comprehension [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, Vancouver, Canada. 2017: 593-602.
[24] Li J, Tu Z, Yang B, et al. Multi-Head Attention with Disagreement Regularization [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 2897-2903.
[25] Li J, Yang B, Dou Z Y, et al. Information Aggregation for Multi-Head Attention with Routing-by-Agreement[OL]. arXiv Preprint, arXiv: 1904.03100.
[26] Sabour S, Frosst N, Hinton G E. Dynamic Routing Between Capsules [C]// Proceedings of the Conference on Neural Information Processing Systems, Long Beach, USA. 2017.
[27] Lin Z, Feng M, dos Santos C N, et al. A Structured Self-attentive Sentence Embedding [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[28] Shen T, Zhou T, Long G, et al. DiSAN: Directional Self-attention Network for RNN/CNN-free Language Understanding [C]// Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA. 2018: 5446-5455.
[29] Shaw P, Uszkoreit J, Vaswani A. Self-attention with Relative Position Representations [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, USA. 2018: 464-468.
[30] Xu Guanhua, Zhao Jingxiu, Yang Hongya, et al. A Review of Text Feature Extraction Methods[J]. Software Guide, 2018,17(5):13-18. (in Chinese)
[31] Li Hui, Chai Yaqing. Fine-Grained Sentiment Analysis Based on Convolutional Neural Network[J]. Data Analysis and Knowledge Discovery, 2019,3(1):95-103. (in Chinese)
[32] Wang Y, Huang M, Zhao L, et al. Attention-based LSTM for Aspect-level Sentiment Classification [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 606-615.
[33] Tang D, Qin B, Feng X, et al. Effective LSTMs for Target-Dependent Sentiment Classification [C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan. 2016: 3298-3307.
[34] Liu J, Zhang Y. Attention Modeling for Targeted Sentiment [C]// Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain. 2017: 572-577.
[35] Huang B, Ou Y, Carley K M. Aspect Level Sentiment Classification with Attention-over-Attention Neural Networks [C]// Proceedings of International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, Washington, DC, USA. 2018: 197-206.
[36] Fan F, Feng Y, Zhao D. Multi-grained Attention Network for Aspect-Level Sentiment Classification [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 3433-3442.
[37] Zheng S, Xia R. Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention[OL]. arXiv Preprint, arXiv:1802.00892.
[38] Li L, Liu Y, Zhou A. Hierarchical Attention Based Position-aware Network for Aspect-level Sentiment Analysis [C]// Proceedings of the 22nd Conference on Computational Natural Language Learning, Brussels, Belgium. 2018: 181-189.
[39] Song Y, Wang J, Jiang T, et al. Attentional Encoder Network for Targeted Sentiment Classification[OL]. arXiv Preprint, arXiv:1902.09314.
[40] Pontiki M, Galanis D, Pavlopoulos J, et al. Semeval-2014 Task 4: Aspect Based Sentiment Analysis [C]// Proceedings of the 8th International Workshop on Semantic Evaluation, Dublin, Ireland. 2014: 27-35.
[41] Dong L, Wei F, Tan C, et al. Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification [C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics Volume 2: Short Papers, Baltimore, USA. 2014: 49-54.
[42] Wang W, Yang N, Wei F. Gated Self-Matching Networks for Reading Comprehension and Question Answering [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. 2017: 189-198.
[43] Wang S, Jiang J. Machine Comprehension Using Match-LSTM and Answer Pointer[OL]. arXiv Preprint, arXiv:1608.07905.
[44] Xiong C, Zhong V, Socher R. Dynamic Coattention Networks for Question Answering[OL]. arXiv Preprint, arXiv:1611.01604.
[45] Gong Y, Bowman S R. Ruminating Reader: Reasoning with Gated Multi-hop Attention [C]// Proceedings of the Workshop on Machine Reading for Question Answering, Melbourne, Australia. 2018: 1-11.
[46] Peters M E, Neumann M, Iyyer M, et al. Deep Contextualized Word Representations [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, USA. 2018: 2227-2237.
[47] Liu R, Wei W, Mao W, et al. Phase Conductor on Multi-Layered Attentions for Machine Comprehension[OL]. arXiv Preprint, arXiv:1710.10504.
[48] Yu A W, Dohan D, Luong M T, et al. QANET: Combining Local Convolution with Global Self-Attention for Reading Comprehension [C]// Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. 2018.
[49] Wang W, Yan M, Wu C. Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), Melbourne, Australia. 2018: 1705-1714.
[50] Bowman S R, Gauthier J, Rastogi A, et al. A Fast Unified Model for Parsing and Sentence Understanding [C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, Berlin, Germany. 2016: 1466-1474.
[51] Im J, Cho S. Distance-based Self-Attention Network for Natural Language Inference[OL]. arXiv Preprint, arXiv:1712.02047.
[52] Shen T, Zhou T, Long G, et al. Reinforced Self-Attention Network: A Hybrid of Hard and Soft Attention for Sequence Modeling [C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden. 2018: 4345-4352.
[53] Parikh A P, Täckström O, Das D, et al. A Decomposable Attention Model for Natural Language Inference [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 2249-2255.
[54] Tay Y, Tuan L A, Hui S C. Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference[OL]. arXiv Preprint, arXiv:1801.00102.
[55] Domhan T. How Much Attention do You Need? A Granular Analysis of Neural Machine Translation Architectures [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Long Papers, Melbourne, Australia. 2018: 1799-1808.
[56] Ling J, Rush A. Coarse-to-Fine Attention Models for Document Summarization [C]// Proceedings of the Workshop on New Frontiers in Summarization, Copenhagen, Denmark. 2017: 33-42.
[57] Cohan A, Dernoncourt F, Kim D S, et al. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 2: Short Papers, New Orleans, Louisiana, USA. 2018: 615-621.
[58] Miculicich L, Ram D, Pappas N, et al. Document-Level Neural Machine Translation with Hierarchical Attention Networks [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 2947-2954.
[59] Wu Y, Schuster M, Chen Z, et al. Google’s Neural Machine Translation System: Bridging the Gap Between Human and Machine Translation[OL]. arXiv Preprint, arXiv:1609.08144.
[60] Gehring J, Auli M, Grangier D, et al. Convolutional Sequence to Sequence Learning [C]// Proceedings of the International Conference on Machine Learning, Cancun, Mexico. 2017: 1243-1252.
[61] Cao P, Chen Y, Liu K, et al. Adversarial Transfer Learning for Chinese Named Entity Recognition with Self-Attention Mechanism [C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 182-192.
[62] Cai X, Dong S, Hu J. A Deep Learning Model Incorporating Part of Speech and Self-Matching Attention for Named Entity Recognition of Chinese Electronic Medical Records[J]. BMC Medical Informatics and Decision Making, 2019,19(S2):101-109.
[63] Tan Z, Wang M, Xie J, et al. Deep Semantic Role Labeling with Self-Attention[OL]. arXiv Preprint, arXiv:1712.01586.
[64] Zhang Z, He S, Li Z, et al. Attentive Semantic Role Labeling with Boundary Indicator[OL]. arXiv Preprint, arXiv:1809.02796.
[65] Strubell E, Verga P, Andor D, et al. Linguistically-Informed Self-Attention for Semantic Role Labeling[OL]. arXiv Preprint, arXiv:1804.08199.
[66] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. 2019: 4171-4186.
[67] Ebesu T, Fang Y. Neural Citation Network for Context-Aware Citation Recommendation [C]// Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Tokyo, Japan. 2017: 1093-1096.
[68] Yang L, Zhang Z, Cai X, et al. Attention-Based Personalized Encoder-Decoder Model for Local Citation Recommendation[J]. Computational Intelligence and Neuroscience, 2019. Article ID 1232581.
[69] Ji T, Chen Z, Self N, et al. Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks[OL]. arXiv Preprint, arXiv:1905.10022.
[70] Chi Y, Liu Y. Link Prediction Based on Supernetwork Model and Attention Mechanism [C]// Proceedings of the 19th International Symposium on Knowledge and Systems Sciences, Tokyo, Japan. 2018: 201-214.
[71] Brochier R, Guille A, Velcin J. Link Prediction with Mutual Attention for Text-Attributed Networks[OL]. arXiv Preprint, arXiv:1902.11054.
[72] Munkhdalai T, Lalor J, Yu H. Citation Analysis with Neural Attention Models [C]// Proceedings of the 7th International Workshop on Health Text Mining and Information Analysis, Austin, USA. 2016: 69-77.
[73] Jain S, Wallace B C. Attention is not Explanation[OL]. arXiv Preprint, arXiv:1902.10186.
[74] Serrano S, Smith N A. Is Attention Interpretable? [OL]. arXiv Preprint, arXiv: 1906.03731.
[75] Zhang Y, Zhang C. Unsupervised Keyphrase Extraction in Academic Publications Using Human Attention [C]// Proceedings of the 17th International Conference on Scientometrics and Informetrics, Rome, Italy. 2019.
[76] Zhang Y, Zhang C. Using Human Attention to Extract Keyphrase from Microblog Post [C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. 2019.