Data Analysis and Knowledge Discovery  2020, Vol. 4 Issue (5): 1-14    DOI: 10.11925/infotech.2096-3467.2019.1317
Review of Attention Mechanism in Natural Language Processing
Shi Lei 1, Wang Yi 2, Cheng Ying 2,3, Wei Ruibin 1 (Corresponding Author)
1School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu 233030, China
2School of Information Management, Nanjing University, Nanjing 210023, China
3School of Chinese Language and Literature, Shandong Normal University, Jinan 250014, China
Abstract  

[Objective] This paper summarizes the evolution and applications of the attention mechanism in natural language processing. [Coverage] We searched for "attention" in the title/topic fields of WoS, the ACM Digital Library, arXiv, and CNKI, covering January 2015 to October 2019, then manually screened the results for literature in the field of natural language processing and retained 68 relevant papers. [Methods] We first summarized the general form of the attention mechanism and sorted out its derived variants, and then reviewed their applications in natural language processing tasks in detail. [Results] Applications of the attention mechanism in natural language processing concentrate on sequence labeling, text classification, reasoning, and generation tasks, and there are adaptation patterns between tasks and the various attention mechanisms. [Limitations] Some of the matches between mechanisms and tasks were inferred from overall model performance; more controlled studies are needed to isolate the contribution of individual mechanisms. [Conclusions] Research on the attention mechanism has effectively promoted the development of natural language processing, but its mechanism of action is not yet clear. Future research should focus on bringing computational attention mechanisms closer to human attention.

Key words: Attention Mechanism; Self-Attention; Machine Translation; Machine Reading Comprehension; Sentiment Analysis
Received: 10 December 2019      Published: 15 June 2020
ZTFLH:  TP391.1  
Corresponding Authors: Wei Ruibin     E-mail: rbwxy@126.com

Cite this article:

Shi Lei,Wang Yi,Cheng Ying,Wei Ruibin. Review of Attention Mechanism in Natural Language Processing. Data Analysis and Knowledge Discovery, 2020, 4(5): 1-14.

URL:

https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2019.1317     OR     https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2020/V4/I5/1

Figure: Schematic Diagram of NMT Model with Attention Mechanism
Figure: The General Form of Attention Mechanism
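To make the general form concrete, the following is a minimal NumPy sketch of soft attention as it is usually formulated: a query is scored against a set of keys, the scores are normalized with softmax, and the output is the weighted sum of the corresponding values. The function names, the scaling choice, and the toy data are illustrative assumptions, not taken from any specific reviewed model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Soft (global) attention: score -> normalize -> weighted sum.

    query:  (d,)      the element we attend from
    keys:   (n, d)    the elements we attend over
    values: (n, d_v)  the information we aggregate
    """
    scores = keys @ query / np.sqrt(query.shape[-1])  # scaled dot-product scores, shape (n,)
    weights = softmax(scores)                          # attention distribution over the n elements
    context = weights @ values                         # context vector, shape (d_v,)
    return context, weights

# Toy usage: attend from one 4-dim query over 5 key/value pairs.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(5, 4)), rng.normal(size=(5, 8))
ctx, w = attention(q, K, V)
print(w.sum())  # ~1.0: soft attention spreads probability mass over all elements
```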
Attention | Attention range
Global attention | All elements
Local attention | A window centered on the aligned position
Hard attention | A single element
Sparse attention | A sparsely distributed subset of elements
Structured attention | A set of structurally related elements
Table: Classification of Attention Mechanisms by Attention Range
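The classes in the table above differ mainly in which elements may receive probability mass. The sketch below (illustrative names, a shared toy score vector) contrasts global, local, and hard attention by how they restrict the softmax; sparse and structured attention further constrain the distribution in analogous ways.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(scores):
    # all elements receive weight
    return softmax(scores)

def local_attention(scores, center, width):
    # only a window centered on the aligned position receives weight
    masked = np.full_like(scores, -np.inf)
    lo, hi = max(0, center - width), min(len(scores), center + width + 1)
    masked[lo:hi] = scores[lo:hi]
    return softmax(masked)

def hard_attention(scores):
    # a single element receives all the weight
    one_hot = np.zeros_like(scores)
    one_hot[np.argmax(scores)] = 1.0
    return one_hot

scores = np.array([0.2, 1.5, 0.3, 2.0, -0.5])
print(global_attention(scores))        # non-zero everywhere
print(local_attention(scores, 1, 1))   # non-zero only at positions 0-2
print(hard_attention(scores))          # one-hot at the argmax
```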
Author | Model | Restaurant acc. (%) | Laptop acc. (%) | Twitter acc. (%) | Attention
Wang et al. [32] | LSTM | 74.3 | 66.5 | 66.5 |
Tang et al. [33] | TD-LSTM | 75.6 | 68.1 | 70.8 | Contextualized attention
Wang et al. [32] | ATAE-LSTM | 77.2 | 68.7 | - | Aspect-embedding attention
Ma et al. [21] | IAN | 78.6 | 72.1 | - | Coarse-grained interactive attention
Liu et al. [34] | BiLSTM-ATT-G | 79.7 | 73.1 | 70.4 | Contextualized attention
Huang et al. [35] | AOA-LSTM | 81.2 | 74.5 | - | Fine-grained bidirectional attention
Fan et al. [36] | MGAN | 81.2 | 75.4 | 72.5 | Multi-grained bidirectional attention
Zheng et al. [37] | LCR-Rot | 81.3 | 75.2 | 72.7 | Contextualized coarse-grained bidirectional attention
Li et al. [38] | HAPN | 82.2 | 77.3 | - | Hierarchical attention
Song et al. [39] | AEN-BERT | 83.1 | 80.0 | 74.7 | Multi-head self-attention network
Table: The Performance of Aspect-Level Sentiment Analysis Models (sentiment polarity accuracy, %)
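Several of the models above condition the attention weights on the aspect term. As a hedged illustration of this idea, in the spirit of the aspect-embedding attention of ATAE-LSTM [32], the sketch below concatenates an aspect embedding with every hidden state before scoring, so the same sentence yields different attention distributions, and hence different representations, for different aspects. All dimensions and parameter names are placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aspect_attention(hidden, aspect, W, w):
    """Aspect-aware attention sketch in the spirit of ATAE-LSTM.

    hidden: (n, d)         hidden states of the sentence encoder
    aspect: (d_a,)         embedding of the aspect term
    W:      (d + d_a, d_p) projection for the concatenated features
    w:      (d_p,)         scoring vector
    """
    n = hidden.shape[0]
    aspect_tiled = np.tile(aspect, (n, 1))                 # repeat the aspect for every position
    m = np.tanh(np.concatenate([hidden, aspect_tiled], axis=1) @ W)
    alpha = softmax(m @ w)                                 # aspect-conditioned attention weights
    return alpha @ hidden                                  # aspect-specific sentence representation

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 8))                                # 6 tokens, 8-dim hidden states
a_food, a_service = rng.normal(size=4), rng.normal(size=4)
W, w = rng.normal(size=(12, 10)), rng.normal(size=10)
# The same sentence gets different representations for different aspects.
print(np.allclose(aspect_attention(H, a_food, W, w),
                  aspect_attention(H, a_service, W, w)))   # False
```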
Author | Model | Exact Match (%) | F1 (%) | Attention
Wang et al. [43] | Match-LSTM | 64.7 | 73.7 |
Xiong et al. [44] | DCN | 66.2 | 75.9 | Co-attention
Seo et al. [17] | BiDAF | 68.0 | 77.3 | Bidirectional attention
Gong et al. [45] | Ruminating Reader | 70.6 | 79.5 | Bidirectional multi-hop attention
Wang et al. [42] | R-Net | 72.3 | 80.7 | Self-matching attention
Peters et al. [46] | BiDAF+Self-Attention | 72.1 | 81.1 | Bidirectional attention + self-attention
Liu et al. [47] | PhaseCond | 72.6 | 81.4 | K2Q + self-attention
Yu et al. [48] | QANet | 76.2 | 84.6 | Co-attention + self-attention
Wang et al. [49] | SLQA+ | 80.4 | 87.0 | Co-attention + self-attention
Table: The Performance of Machine Reading Comprehension Models on SQuAD
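Most of the readers above align the question and the context in both directions. The sketch below is a simplified, illustrative version of the bidirectional attention flow of BiDAF [17]: a similarity matrix between context and question positions yields context-to-query attention (each context token attends over question tokens) and query-to-context attention (the question selects the most relevant context tokens). The plain dot-product similarity here stands in for the trainable similarity function of the original model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_attention(context, question):
    """Simplified BiDAF-style bidirectional attention.

    context:  (T, d) context token representations
    question: (J, d) question token representations
    """
    S = context @ question.T                            # similarity matrix, (T, J)
    c2q = softmax(S, axis=1) @ question                 # context-to-query attended vectors, (T, d)
    b = softmax(S.max(axis=1))                          # query-to-context weights over context, (T,)
    q2c = np.tile(b @ context, (context.shape[0], 1))   # query-to-context vector, tiled to (T, d)
    # query-aware context representation, as in the BiDAF fusion step
    return np.concatenate([context, c2q, context * c2q, context * q2c], axis=1)

rng = np.random.default_rng(2)
C, Q = rng.normal(size=(10, 6)), rng.normal(size=(4, 6))
print(bidirectional_attention(C, Q).shape)   # (10, 24)
```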
Author | Model | Training accuracy (%) | Test accuracy (%) | Attention
Bowman et al. [50] | 300D LSTM Encoders | 83.9 | 80.6 |
Rocktaschel et al. [19] | 100D LSTM with Attention | 85.3 | 83.5 | Two-way attention
Lin et al. [27] | 300D Structured Self-Attentive Sentence Embedding | - | 84.4 | Self-attention
Shen et al. [28] | 300D Directional Self-Attention Network (DiSAN) | 91.1 | 85.6 | Directional self-attention
Cheng et al. [22] | 300D LSTMN Deep Fusion | - | 85.7 | Inter-attention + intra-attention
Im et al. [51] | 300D Distance-based Self-Attention Network | 89.6 | 86.3 | Directional + distance-based self-attention
Shen et al. [52] | 300D ReSAN | 92.6 | 86.3 | Hybrid soft-hard self-attention
Parikh et al. [53] | 300D Intra-Sentence Attention | 90.5 | 86.8 | Inter-attention + intra-attention
Tay et al. [54] | 300D CAFE (AVGMAX + 300D HN) | 89.8 | 88.5 | Inter-attention + intra-attention
Table: The Performance of NLI Models on SNLI
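Several of the sentence encoders above rely on self-attention over the sentence itself rather than on an external query. The sketch below is an illustrative version of the structured self-attentive embedding of Lin et al. [27]: r attention hops over the same hidden states produce a matrix sentence embedding whose rows can focus on different parts of the sentence. Parameter names and sizes are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attentive_embedding(hidden, W_s1, W_s2):
    """Structured self-attention sketch in the spirit of Lin et al. [27].

    hidden: (n, d)   hidden states of the sentence
    W_s1:   (d, da)  first projection
    W_s2:   (da, r)  one column per attention hop
    """
    A = softmax(np.tanh(hidden @ W_s1) @ W_s2, axis=0).T   # (r, n): r attention distributions over tokens
    return A @ hidden                                       # (r, d): matrix sentence embedding

rng = np.random.default_rng(3)
H = rng.normal(size=(12, 8))                 # 12 tokens, 8-dim hidden states
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
print(self_attentive_embedding(H, W1, W2).shape)    # (4, 8): four hops, each a weighted view of the sentence
```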
Author | Model | Network | BLEU EN-DE (%) | BLEU EN-FR (%) | Training cost EN-DE (FLOPs) | Training cost EN-FR (FLOPs)
Wu et al. [59] | GNMT+RL | LSTM | 24.6 | 39.92 | 2.3×10^19 | 1.4×10^20
Wu et al. [59] | GNMT+RL (ensemble) | LSTM | 26.3 | 41.16 | 1.8×10^20 | 1.1×10^21
Gehring et al. [60] | ConvS2S | CNN | 25.16 | 40.46 | 9.6×10^18 | 1.5×10^20
Gehring et al. [60] | ConvS2S (ensemble) | CNN | 26.36 | 41.29 | 7.7×10^19 | 1.2×10^21
Vaswani et al. [6] | Transformer (big) | Multi-head self-attention | 28.4 | 41 | 2.3×10^19 |
Table: The Performance of NMT Models on WMT14
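The Transformer row relies entirely on multi-head self-attention. The following is a minimal single-layer sketch of multi-head scaled dot-product self-attention in the sense of Vaswani et al. [6], without masking, positional encoding, residual connections, or layer normalization; the random projection matrices are placeholders for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Multi-head scaled dot-product self-attention (single layer, no mask).

    X:          (n, d_model) token representations
    Wq, Wk, Wv: (d_model, d_model) projections, split evenly across heads
    Wo:         (d_model, d_model) output projection
    """
    n, d_model = X.shape
    d_head = d_model // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)    # (n, n) per-head attention scores
        heads.append(softmax(scores, axis=-1) @ V[:, s])  # (n, d_head) per-head context
    return np.concatenate(heads, axis=1) @ Wo             # (n, d_model)

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv, Wo = (rng.normal(size=(16, 16)) for _ in range(4))
print(multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads=4).shape)  # (5, 16)
```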
Author | Corpus | Attention | ROUGE-1 (%) | ROUGE-2 (%) | ROUGE-L (%)
Nallapati et al. [15] | CNN/Daily Mail (avg. document/summary length: 766/53 words) | Global attention | 32.49 | 11.84 | 29.47
Nallapati et al. [15] | CNN/Daily Mail | Hierarchical attention (word-sentence) | 32.75 | 12.21 | 29.01
Cohan et al. [57] | arXiv (avg. document/summary length: 4,938/220 words) | Global attention | 32.06 | 9.04 | 25.16
Cohan et al. [57] | arXiv | Hierarchical attention (word-discourse) | 35.80 | 11.05 | 31.80
Table: The Performance of Hierarchical Attention Mechanisms on Abstractive Summarization Tasks
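The hierarchical rows above apply attention at two levels: word-level attention builds a vector for each sentence (or discourse section), and a second attention over those vectors builds the document representation. The sketch below illustrates this word-then-sentence scheme with a shared helper; the context vectors and the toy document are illustrative placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(states, u):
    """One attention level: weight states by their similarity to a context vector u."""
    return softmax(states @ u) @ states

def hierarchical_attention(doc, u_word, u_sent):
    """Word-level then sentence-level attention over a document.

    doc:    list of (n_words_i, d) arrays, one per sentence
    u_word: (d,) word-level context vector
    u_sent: (d,) sentence-level context vector
    """
    sent_vecs = np.stack([attend(sent, u_word) for sent in doc])  # (n_sentences, d)
    return attend(sent_vecs, u_sent)                              # (d,) document vector

rng = np.random.default_rng(5)
document = [rng.normal(size=(n, 8)) for n in (5, 7, 4)]   # 3 sentences of different lengths
print(hierarchical_attention(document, rng.normal(size=8), rng.normal(size=8)).shape)  # (8,)
```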
[1] Kastner S, Ungerleider L G. Mechanisms of Visual Attention in the Human Cortex[J]. Annual Review of Neuroscience, 2000,23(1):315-341.
[2] Mnih V, Heess N, Graves A, et al. Recurrent Models of Visual Attention [C]// Proceedings of the Conference on Neural Information Processing Systems, Montreal, Canada. 2014.
[3] Bahdanau D, Cho K, Bengio Y. Neural Machine Translation by Jointly Learning to Align and Translate [C]// Proceedings of the International Conference on Learning Representations, San Diego, USA. 2015.
[4] Hu D. An Introductory Survey on Attention Mechanisms in NLP Problems[OL]. arXiv Preprint, arXiv:1811.05544.
[5] Chaudhari S, Polatkan G, Ramanath R, et al. An Attentive Survey of Attention Models[OL]. arXiv Preprint, arXiv: 1904.02874.
[6] Vaswani A, Shazeer N, Parmar N, et al. Attention is All You Need [C]// Proceedings of the Conference on Neural Information Processing Systems, Long Beach, USA. 2017: 6000-6010.
[7] Luong M T, Pham H, Manning C D. Effective Approaches to Attention-based Neural Machine Translation [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal. 2015: 1412-1421.
[8] Li Y, Kaiser L, Bengio S, et al. Area Attention[OL]. arXiv Preprint, arXiv:1810.10126.
[9] Mirsamadi S, Barsoum E, Zhang C. Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention [C]// Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, New Orleans, USA. 2017.
[10] Yang B, Tu Z, Wong D F, et al. Modeling Localness for Self-Attention Networks [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 4449-4458.
[11] Xu K, Ba J, Kiros R, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention [C]// Proceedings of the International Conference on Machine Learning, Lille, France. 2015: 2048-2057.
[12] Martins A F T, Astudillo R F. From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification [C]// Proceedings of the International Conference on Machine Learning, New York, USA. 2016.
[13] Kim Y, Denton C, Hoang L, et al. Structured Attention Networks [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[14] Yang Z, Yang D, Dyer C, et al. Hierarchical Attention Networks for Document Classification [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, USA. 2016.
[15] Nallapati R, Zhou B, Gulcehre C, et al. Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond [C]// Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, Berlin, Germany. 2016: 280-290.
[16] Celikyilmaz A, Bosselut A, He X, et al. Deep Communicating Agents for Abstractive Summarization [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 1: Long Papers, New Orleans, Louisiana, USA. 2018: 1662-1675.
[17] Seo M, Kembhavi A, Farhadi A, et al. Bi-directional Attention Flow for Machine Comprehension [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[18] Lu J, Yang J, Batra D, et al. Hierarchical Question-Image Co-Attention for Visual Question Answering [C]// Proceedings of the Neural Information Processing Systems, Barcelona, Spain. 2016: 289-297.
[19] Rocktaschel T, Grefenstette E, Hermann K M, et al. Reasoning About Entailment with Neural Attention [C]// Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico. 2016.
[20] dos Santos C, Tan M, Xiang B, et al. Attentive Pooling Networks[OL]. arXiv Preprint, arXiv:1602.03609.
[21] Ma D, Li S, Zhang X, et al. Interactive Attention Networks for Aspect-Level Sentiment Classification [C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia. 2017: 4068-4074.
[22] Cheng J, Dong L, Lapata M. Long Short-term Memory-networks for Machine Reading [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 551-561.
[23] Cui Y, Chen Z, Wei S, et al. Attention-over-Attention Neural Networks for Reading Comprehension [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, Vancouver, Canada. 2017: 593-602.
[24] Li J, Tu Z, Yang B, et al. Multi-Head Attention with Disagreement Regularization [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 2897-2903.
[25] Li J, Yang B, Dou Z Y, et al. Information Aggregation for Multi-Head Attention with Routing-by-Agreement[OL]. arXiv Preprint, arXiv: 1904.03100.
[26] Sabour S, Frosst N, Hinton G E. Dynamic Routing Between Capsules [C]// Proceedings of the Conference on Neural Information Processing Systems, Long Beach, USA. 2017.
[27] Lin Z, Feng M, dos Santos C N, et al. A Structured Self-attentive Sentence Embedding [C]// Proceedings of the International Conference on Learning Representations, Toulon, France. 2017.
[28] Shen T, Zhou T, Long G, et al. DiSAN: Directional Self-attention Network for RNN/CNN-free Language Understanding [C]// Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA. 2018: 5446-5455.
[29] Shaw P, Uszkoreit J, Vaswani A. Self-attention with Relative Position Representations [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, USA. 2018: 464-468.
[30] 徐冠华, 赵景秀, 杨红亚, 等. 文本特征提取方法研究综述[J]. 软件导刊, 2018,17(5):13-18.
[30] (Xu Guanhua, Zhao Jingxiu, Yang Hongya, et al. A Review of Text Feature Extraction Methods[J]. Software Guide, 2018,17(5):13-18.)
[31] 李慧, 柴亚青. 基于卷积神经网络的细粒度情感分析方法[J]. 数据分析与知识发现, 2019,3(1):95-103.
[31] (Li Hui, Chai Yaqing. Fine-Grained Sentiment Analysis Based on Convolutional Neural Network[J]. Data Analysis and Knowledge Discovery, 2019,3(1):95-103.)
[32] Wang Y, Huang M, Zhao L, et al. Attention-based LSTM for Aspect-level Sentiment Classification [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 606-615.
[33] Tang D, Qin B, Feng X, et al. Effective LSTMs for Target-Dependent Sentiment Classification [C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan. 2016: 3298-3307.
[34] Liu J, Zhang Y. Attention Modeling for Targeted Sentiment [C]// Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain. 2017: 572-577.
[35] Huang B, Ou Y, Carley K M. Aspect Level Sentiment Classification with Attention-over-Attention Neural Networks [C]// Proceedings of International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, Washington, DC, USA. 2018: 197-206.
[36] Fan F, Feng Y, Zhao D. Multi-grained Attention Network for Aspect-Level Sentiment Classification [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 3433-3442.
[37] Zheng S, Xia R. Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention[OL]. arXiv Preprint, arXiv:1802.00892.
[38] Li L, Liu Y, Zhou A. Hierarchical Attention Based Position-aware Network for Aspect-level Sentiment Analysis [C]// Proceedings of the 22nd Conference on Computational Natural Language Learning, Brussels, Belgium. 2018: 181-189.
[39] Song Y, Wang J, Jiang T, et al. Attentional Encoder Network for Targeted Sentiment Classification[OL]. arXiv Preprint, arXiv:1902.09314.
[40] Pontiki M, Galanis D, Pavlopoulos J, et al. Semeval-2014 Task 4: Aspect Based Sentiment Analysis [C]// Proceedings of the 8th International Workshop on Semantic Evaluation, Dublin, Ireland. 2014: 27-35.
[41] Dong L, Wei F, Tan C, et al. Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification [C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics Volume 2: Short Papers, Baltimore, USA. 2014: 49-54.
[42] Wang W, Yang N, Wei F. Gated Self-Matching Networks for Reading Comprehension and Question Answering [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. 2017: 189-198.
[43] Wang S, Jiang J. Machine Comprehension Using Match-LSTM and Answer Pointer[OL]. arXiv Preprint, arXiv:1608.07905.
[44] Xiong C, Zhong V, Socher R. Dynamic Coattention Networks for Question Answering[OL]. arXiv Preprint, arXiv:1611.01604.
[45] Gong Y, Bowman S R. Ruminating Reader: Reasoning with Gated Multi-hop Attention [C]// Proceedings of the Workshop on Machine Reading for Question Answering, Melbourne, Australia. 2018: 1-11.
[46] Peters M E, Neumann M, Iyyer M, et al. Deep Contextualized Word Representations [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, USA. 2018: 2227-2237.
[47] Liu R, Wei W, Mao W, et al. Phase Conductor on Multi-Layered Attentions for Machine Comprehension[OL]. arXiv Preprint, arXiv:1710.10504.
[48] Yu A W, Dohan D, Luong M T, et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension [C]// Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. 2018.
[49] Wang W, Yan M, Wu C. Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), Melbourne, Australia. 2018: 1705-1714.
[50] Bowman S R, Gauthier J, Rastogi A, et al. A Fast Unified Model for Parsing and Sentence Understanding [C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, Berlin, Germany. 2016: 1466-1474.
[51] Im J, Cho S. Distance-based Self-Attention Network for Natural Language Inference[OL]. arXiv Preprint, arXiv:1712.02047.
[52] Shen T, Zhou T, Long G, et al. Reinforced Self-Attention Network: A Hybrid of Hard and Soft Attention for Sequence Modeling [C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden. 2018: 4345-4352.
[53] Parikh A P, Täckström O, Das D, et al. A Decomposable Attention Model for Natural Language Inference [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Austin, Texas, USA. 2016: 2249-2255.
[54] Tay Y, Tuan L A, Hui S C. Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference[OL]. arXiv Preprint, arXiv:1801.00102.
[55] Domhan T. How Much Attention do You Need? A Granular Analysis of Neural Machine Translation Architectures [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics: Long Papers, Melbourne, Australia. 2018: 1799-1808.
[56] Ling J, Rush A. Coarse-to-Fine Attention Models for Document Summarization [C]// Proceedings of the Workshop on New Frontiers in Summarization, Copenhagen, Denmark. 2017: 33-42.
[57] Cohan A, Dernoncourt F, Kim D S, et al. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Volume 2: Short Papers, New Orleans, Louisiana, USA. 2018: 615-621.
[58] Miculicich L, Ram D, Pappas N, et al. Document-Level Neural Machine Translation with Hierarchical Attention Networks [C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 2947-2954.
[59] Wu Y, Schuster M, Chen Z, et al. Google’s Neural Machine Translation System: Bridging the Gap Between Human and Machine Translation[OL]. arXiv Preprint, arXiv:1609.08144.
[60] Gehring J, Auli M, Grangier D, et al. Convolutional Sequence to Sequence Learning [C]// Proceedings of the International Conference on Machine Learning, Sydney, Australia. 2017: 1243-1252.
[61] Cao P, Chen Y, Liu K, et al. Adversarial Transfer Learning for Chinese Named Entity Recognition with Self-Attention Mechanism [C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. 2018: 182-192.
[62] Cai X, Dong S, Hu J. A Deep Learning Model Incorporating Part of Speech and Self-Matching Attention for Named Entity Recognition of Chinese Electronic Medical Records[J]. BMC Medical Informatics and Decision Making, 2019,19(S2):101-109.
[63] Tan Z, Wang M, Xie J, et al. Deep Semantic Role Labeling with Self-Attention[OL]. arXiv Preprint, arXiv:1712.01586.
[64] Zhang Z, He S, Li Z, et al. Attentive Semantic Role Labeling with Boundary Indicator[OL]. arXiv Preprint, arXiv:1809.02796.
[65] Strubell E, Verga P, Andor D, et al. Linguistically-Informed Self-Attention for Semantic Role Labeling[OL]. arXiv Preprint, arXiv:1804.08199.
[66] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [C]// Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. 2019: 4171-4186.
[67] Ebesu T, Fang Y. Neural Citation Network for Context-Aware Citation Recommendation [C]// Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Tokyo, Japan. 2017: 1093-1096.
[68] Yang L, Zhang Z, Cai X, et al. Attention-Based Personalized Encoder-Decoder Model for Local Citation Recommendation[J]. Computational Intelligence and Neuroscience, 2019. Article ID 1232581.
[69] Ji T, Chen Z, Self N, et al. Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks[OL]. arXiv Preprint, arXiv:1905.10022.
[70] Chi Y, Liu Y. Link Prediction Based on Supernetwork Model and Attention Mechanism [C]// Proceedings of the 19th International Symposium on Knowledge and Systems Sciences, Tokyo, Japan. 2018: 201-214.
[71] Brochier R, Guille A, Velcin J. Link Prediction with Mutual Attention for Text-Attributed Networks[OL]. arXiv Preprint, arXiv:1902.11054.
[72] Munkhdalai T, Lalor J, Yu H. Citation Analysis with Neural Attention Models [C]// Proceedings of the 7th International Workshop on Health Text Mining and Information Analysis, Austin, USA. 2016: 69-77.
[73] Jain S, Wallace B C. Attention is not Explanation[OL]. arXiv Preprint, arXiv:1902.10186.
[74] Serrano S, Smith N A. Is Attention Interpretable? [OL]. arXiv Preprint, arXiv: 1906.03731.
[75] Zhang Y, Zhang C. Unsupervised Keyphrase Extraction in Academic Publications Using Human Attention [C]// Proceedings of the 17th International Conference on Scientometrics and Informetrics, Rome, Italy. 2019.
[76] Zhang Y, Zhang C. Using Human Attention to Extract Keyphrase from Microblog Post [C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. 2019.