Data Analysis and Knowledge Discovery, 2020, Vol. 4, Issue 4: 100-108.  https://doi.org/10.11925/infotech.2096-3467.2019.0896
Research Paper
Identifying Noun Metaphors with Transformer and BERT
Zhang Dongyu1, Cui Zijuan2, Li Yingxia1, Zhang Wei1, Lin Hongfei3
1 School of Software, Dalian University of Technology, Dalian 116620, China
2 International Office, Dalian University of Technology, Dalian 116024, China
3 School of Computer Science and Technology, Dalian University of Technology, Dalian 116023, China
Abstract

[Objective] This paper proposes a new method to address the under-utilization of semantic information and the difficulty of representing relationships between words in noun metaphor recognition, aiming to improve recognition performance. [Methods] First, we used the BERT model in place of static word vectors, so that the semantic representation also encodes information such as the positional relationships among words. Then, we used the Transformer model to extract features. Finally, we identified noun metaphors with a neural network classifier. [Results] The proposed model achieved the best scores on all four metrics: accuracy (0.900 0), precision (0.896 4), recall (0.885 8), and F1 (0.891 0). It attends to multiple key points in a sentence, which improves the classification of noun metaphors. [Limitations] The method has difficulty judging rare words, idioms and archaic expressions, and distractor words in Chinese texts. [Conclusions] The proposed method identifies noun metaphors more effectively than existing classifiers based on hand-crafted features and mainstream deep learning models.
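
The Methods above describe a three-stage pipeline: BERT replaces static word vectors with contextual, position-aware representations, a Transformer extracts features, and a neural network classifier makes the final decision. The paper does not publish code, so the following PyTorch sketch is only one plausible arrangement of that pipeline; the HuggingFace transformers library, the bert-base-chinese checkpoint, and all hyperparameters shown are assumptions, not details from the article.

# Hedged sketch of a BERT + Transformer-encoder metaphor classifier.
# Assumed (not from the paper): HuggingFace transformers, PyTorch,
# the bert-base-chinese checkpoint, and every hyperparameter below.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertTransformerClassifier(nn.Module):
    def __init__(self, num_classes=2, num_layers=2, num_heads=8):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Step 1: BERT yields contextual embeddings that already encode
        # positional relationships among words.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Step 2: additional Transformer layers act as the feature extractor.
        feats = self.encoder(hidden,
                             src_key_padding_mask=(attention_mask == 0))
        # Step 3: mean-pool non-padding tokens and classify metaphor/literal.
        mask = attention_mask.unsqueeze(-1)
        pooled = (feats * mask).sum(1) / mask.sum(1)
        return self.classifier(pooled)

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertTransformerClassifier()
batch = tokenizer(["他像孔雀一样高傲"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])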

Key words: Metaphor Recognition; Noun Metaphor; Semantic Comprehension; Transformer Model; BERT
Received: 2019-07-30      Published: 2020-06-01
CLC Number: TP391
Funding: This work was supported by the Humanities and Social Sciences Fund of the Ministry of Education of China, "Research on Affective Metaphor Recognition Based on Machine Learning" (16YJCZH141); the National Natural Science Foundation of China, "Research on Affective Metaphor Recognition Methods Based on Semantic Resources and Deep Learning" (61602079); and the Key Program of the National Natural Science Foundation of China, "Theories and Methods for Sentiment Semantic Computing of Social Media Texts" (61632011).
Corresponding author: Lin Hongfei, E-mail: hflin@dlut.edu.cn
Cite this article:
Zhang Dongyu, Cui Zijuan, Li Yingxia, Zhang Wei, Lin Hongfei. Identifying Noun Metaphors with Transformer and BERT[J]. Data Analysis and Knowledge Discovery, 2020, 4(4): 100-108.
Link to this article:
https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2019.0896  or  https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2020/V4/I4/100
Fig.1  Workflow of noun metaphor recognition with BERT + Transformer
Fig.2  Training process of the BERT model
Fig.3  Architecture of the Transformer model
Category         Count    Proportion
Verb metaphor    2 040    46.43%
Noun metaphor    2 035    46.31%
Non-metaphor       319     7.26%
Total            4 394   100.00%
Table 1  Composition of the dataset
Category         Example
Verb metaphor    知了在树上唱歌 (The cicada sings in the tree)
Noun metaphor    他像孔雀一样高傲 (He is as proud as a peacock)
Non-metaphor     对任何不屈服于美国的国家实行制裁 (Impose sanctions on any country that does not yield to the United States)
Table 2  Sample data (partial)
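
To make the input format concrete, the snippet below shows how the noun-metaphor example from Table 2 would be tokenized for BERT; it assumes the HuggingFace tokenizer for bert-base-chinese, an illustrative choice rather than a detail reported in the paper.

# Illustrative only: tokenizing a Table 2 sentence for BERT input.
# Assumes HuggingFace transformers and bert-base-chinese.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
sentence = "他像孔雀一样高傲"  # noun metaphor: "He is as proud as a peacock"
encoded = tokenizer(sentence, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))
# Chinese BERT segments character by character:
# ['[CLS]', '他', '像', '孔', '雀', '一', '样', '高', '傲', '[SEP]']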
                Predicted True    Predicted False
Actual True     TP                FN
Actual False    FP                TN
Table 3  Meaning of the confusion-matrix symbols
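
All four metrics in Table 4 are derived from these four counts. The definitions below are the standard ones, not anything specific to this paper; the counts in the example are hypothetical, chosen only to illustrate the arithmetic.

# Standard metric definitions over the Table 3 confusion-matrix counts.
def metrics(tp, fn, fp, tn):
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    p = tp / (tp + fp)                      # precision
    r = tp / (tp + fn)                      # recall
    f1 = 2 * p * r / (p + r)                # F1: harmonic mean of P and R
    return acc, p, r, f1

# Hypothetical counts for illustration:
print(metrics(tp=90, fn=10, fp=10, tn=90))  # -> (0.9, 0.9, 0.9, 0.9)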
Model              Acc       P         R         F1
CNN                0.870 9   0.879 6   0.834 6   0.856 5
LSTM               0.843 6   0.850 0   0.803 1   0.825 9
NN                 0.746 7   0.742 8   0.743 1   0.747 8
LSTM+ATT           0.850 9   0.870 6   0.795 2   0.831 2
DBi-LSTM           0.744 8   0.743 0   0.743 8   0.744 5
CNN+SVM            0.784 0   0.781 2   0.780 2   0.784 6
Capsule            0.878 1   0.875 5   0.858 2   0.866 7
Transformer        0.856 3   0.895 9   0.779 5   0.833 6
BERT               0.883 6   0.874 0   0.874 0   0.874 0
BERT+Transformer   0.900 0   0.896 4   0.885 8   0.891 0
Table 4  Experimental results for noun metaphor recognition