Data Analysis and Knowledge Discovery (数据分析与知识发现), 2021, Vol. 5, Issue 4: 134-141    https://doi.org/10.11925/infotech.2096-3467.2020.0714
Research Paper
A Multi-Perspective Co-Matching Model for Machine Reading Comprehension
Duan Jianyong1,2(),Wei Xiaopeng1,Wang Hao1,2
1School of Information, North China University of Technology, Beijing 100144, China
2CNONIX National Standard Application and Promotion Laboratory, North China University of Technology, Beijing 100144, China

Abstract

[Objective] This paper proposes a multi-perspective co-matching model for multiple-choice machine reading comprehension, and explores how question type and answer length affect the machine's ability to find the correct answer. [Methods] First, we used a multi-perspective matching mechanism to obtain the correlations between the article and the question and between the article and each candidate answer. Then, we multiplied these correlations with the article representation to obtain vector representations of the question and the candidate answers. Third, we extracted sentence-level and document-level features, which were used to select the correct answer. Fourth, we categorized the data by question type and by answer length and measured accuracy on each category. Finally, we analyzed the impacts of question type and answer length on the machine's choice of the correct answer. [Results] The accuracy of our model on the RACE-M, RACE-H and RACE datasets reached 72.5%, 63.1% and 66.1%, respectively. [Limitations] The multi-perspective matching mechanism combines four matching strategies with multiple perspectives, so the model consumes substantial memory and time at the interaction layer. [Conclusions] The multi-perspective matching mechanism enables richer interaction between the article, the question, and the candidate answers; the model's accuracy is affected mainly by question type rather than by answer length.
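The "multi-perspective matching mechanism" in the abstract builds on the multi-perspective cosine matching of Wang et al. (reference [13] below). As a rough illustration only, here is a minimal PyTorch sketch of that matching function; the tensor shapes, names, and toy usage are assumptions for exposition, not the authors' implementation.

```python
# A minimal sketch of multi-perspective cosine matching (the mechanism of
# Wang et al.'s bilateral multi-perspective matching that this paper builds
# on); names and shapes are illustrative assumptions, not released code.
import torch
import torch.nn.functional as F

def multi_perspective_match(v1: torch.Tensor, v2: torch.Tensor,
                            W: torch.Tensor) -> torch.Tensor:
    """Compare two hidden vectors from l perspectives.

    v1, v2: (hidden,) encoder states, e.g. a passage token state and a
            question/option summary vector.
    W:      (l, hidden) trainable perspective weights.
    Returns an (l,)-dimensional matching vector with one cosine score per
    perspective, m_k = cosine(W_k * v1, W_k * v2).
    """
    # Element-wise reweight each vector under every perspective, then take
    # the cosine similarity perspective by perspective.
    p1 = W * v1.unsqueeze(0)                      # (l, hidden)
    p2 = W * v2.unsqueeze(0)                      # (l, hidden)
    return F.cosine_similarity(p1, p2, dim=-1)    # (l,)

# Toy usage: 4 perspectives over an 8-dimensional hidden space.
hidden, l = 8, 4
W = torch.randn(l, hidden)
passage_state = torch.randn(hidden)
question_state = torch.randn(hidden)
print(multi_perspective_match(passage_state, question_state, W))
```

In the full model, such matching vectors would be computed between the article and the question/candidate answers under the paper's four matching strategies before the sentence-level and document-level features are pooled.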

Key words: Machine Reading Comprehension; Multi-Choice; Multi-Perspective Matching; Attention Mechanism
Received: 2020-07-21      Published online: 2021-05-17
CLC number: TP391
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672040 and 61972003).
Corresponding author: Duan Jianyong, E-mail: duanjy@ncut.edu.cn
Cite this article:
Duan Jianyong,Wei Xiaopeng,Wang Hao. A Multi-Perspective Co-Matching Model for Machine Reading Comprehension. Data Analysis and Knowledge Discovery, 2021, 5(4): 134-141.
Links to this article:
https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2020.0714      or      https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2021/V5/I4/134
Fig. 1  Overall architecture of the model
Fig. 2  Different matching strategies
Dataset   Subset   Articles   Questions
RACE-M    Train      6,409      25,421
RACE-M    Dev          368       1,436
RACE-M    Test         362       1,436
RACE-H    Train     18,728      62,445
RACE-H    Dev        1,021       3,451
RACE-H    Test       1,045       3,498
RACE      Train     25,137      87,866
RACE      Dev        1,389       4,887
RACE      Test       1,407       4,934
RACE      All       27,933      97,687
Table 1  Dataset statistics
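The split sizes in Table 1 come from the RACE corpus of Lai et al. (reference [3]), which is publicly distributed. Below is a hedged sketch of how the question-level counts could be re-derived with the Hugging Face datasets package; the package and its "middle"/"high"/"all" configuration names are assumptions external to this paper, and versions may differ.

```python
# Hedged sketch: counting RACE questions per split with the Hugging Face
# `datasets` package. The config names ("middle", "high", "all") are an
# assumption about that package, not something specified in this paper.
from datasets import load_dataset

for config in ("middle", "high", "all"):
    ds = load_dataset("race", config)
    counts = {split: len(ds[split]) for split in ds}
    print(config, counts)   # question-level counts, cf. Table 1
```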
Model             RACE-M   RACE-H   RACE
SAR                 44.2     43.0    43.2
GA                  43.7     44.2    44.1
ElimiNet              -        -     44.7
HAF                 45.3     47.9    47.2
MUSIC               51.5     45.7    47.4
HCM                 55.8     48.2    50.4
MRU                 57.7     47.4    50.4
BERT-base           71.1     62.3    65.0
BERT-base+MPCM      72.5     63.1    66.1
Table 2  Comparison of experimental results (accuracy, %)
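Table 2 reports accuracy, which presupposes that each question's candidate answers are reduced to comparable scores and the highest-scoring option is chosen. The sketch below illustrates that final scoring step only; the pooled matching features and the linear scorer are placeholders standing in for the BERT-base+MPCM pipeline, not the authors' released code.

```python
# Hedged sketch of multiple-choice scoring: each (passage, question, option)
# triple is reduced to a pooled feature vector, scored, and the option with
# the highest probability is selected. The encoder is deliberately omitted.
import torch
import torch.nn as nn

class OptionScorer(nn.Module):
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.score = nn.Linear(hidden, 1)   # document-level feature -> scalar

    def forward(self, option_features: torch.Tensor) -> torch.Tensor:
        """option_features: (batch, num_options, hidden) pooled matching
        features, one vector per candidate answer."""
        logits = self.score(option_features).squeeze(-1)   # (batch, num_options)
        return torch.log_softmax(logits, dim=-1)

# Toy usage with 4 candidate answers per question.
scorer = OptionScorer()
feats = torch.randn(2, 4, 768)      # stand-in for pooled matching features
log_probs = scorer(feats)
print(log_probs.argmax(dim=-1))     # predicted option index per question
```

Training would then minimize the negative log-probability assigned to the gold option.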
Question type   Questions   Accuracy (%)
blank              44,577     67.3
who                 1,149     55.4
when                  733     56.1
where               1,064     64.3
what               16,419     60.9
why                 3,932     67.2
which              10,697     60.4
how                 3,397     63.5
title               1,495     66.7
others              5,801     60.5
Table 3  Number of questions and accuracy for each question type
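A per-type breakdown like Table 3 can be obtained by bucketing the evaluation set before computing accuracy. The sketch below is illustrative only: the keyword-based question_type_of heuristic is a hypothetical stand-in for whatever classification rules the authors applied.

```python
# Illustrative only: computing a Table 3-style per-question-type accuracy
# breakdown from gold labels and model predictions.
from collections import defaultdict

WH_WORDS = ("who", "when", "where", "what", "why", "which", "how")

def question_type_of(question: str) -> str:
    """Hypothetical typing heuristic: cloze blanks, title questions,
    wh-words, and a catch-all 'others' bucket."""
    q = question.lower()
    if "_" in q:
        return "blank"
    if "title" in q:
        return "title"
    for w in WH_WORDS:
        if w in q.split():
            return w
    return "others"

def accuracy_by_type(examples, predictions):
    """examples: iterable of (question_text, gold_label); predictions: labels."""
    correct, total = defaultdict(int), defaultdict(int)
    for (question, gold), pred in zip(examples, predictions):
        t = question_type_of(question)
        total[t] += 1
        correct[t] += int(pred == gold)
    return {t: correct[t] / total[t] for t in total}
```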
Fig. 3  Accuracy for questions with different answer lengths
[1] Tang M, Cai J, Zhuo H. Multi-Matching Network for Multiple Choice Reading Comprehension[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. 2019,33:7088-7095.
[2] Zhu H, Wei F, Qin B, et al. Hierarchical Attention Flow for Multiple-Choice Reading Comprehension[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. 2018.
[3] Lai G, Xie Q, Liu H, et al. RACE: Large-Scale Reading Comprehension Dataset from Examinations[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 795-804.
[4] Hermann K M, Kocisky T, Grefenstette E, et al. Teaching Machines to Read and Comprehend[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. 2015: 1693-1701.
[5] Nguyen T, Rosenberg M, Song X, et al. MS MARCO: A Human-Generated MAchine Reading Comprehension Dataset[OL]. arXiv Preprint, arXiv:1611.09268.
[6] Rajpurkar P, Zhang J, Lopyrev K, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2016: 2383-2392.
[7] Kadlec R, Schmid M, Bajgar O, et al. Text Understanding with the Attention Sum Reader Network[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016: 908-918.
[8] Dhingra B, Liu H, Yang Z, et al. Gated-Attention Readers for Text Comprehension[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 1832-1846.
[9] Seo M, Kembhavi A, Farhadi A, et al. Bidirectional Attention Flow for Machine Comprehension[OL]. arXiv Preprint, arXiv:1611.01603.
[10] Wang W, Yang N, Wei F, et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 189-198.
[11] Chaturvedi A, Pandit O, Garain U. CNN for Text-Based Multiple Choice Question Answering[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 272-277.
[12] Wang S, Yu M, Chang S, et al. A Co-Matching Model for Multi-Choice Reading Comprehension[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 746-751.
[13] Wang Z, Hamza W, Florian R. Bilateral Multi-Perspective Matching for Natural Language Sentences[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017: 4144-4150.
[14] Devlin J, Chang M, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. 2019: 4171-4186.
[15] Chen D, Bolton J, Manning C D. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany. 2016: 2358-2367.
[16] Parikh S, Sai A B, Nema P, et al. ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions[C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence. 2018: 4272-4278.
[17] Xu Y, Liu J, Gao J, et al. Towards Human-Level Machine Reading Comprehension: Reasoning and Inference with Multiple Strategies[OL]. arXiv Preprint, arXiv:1711.04964.
[18] Tay Y, Tuan L A, Hui S C. Multi-range Reasoning for Machine Comprehension[OL]. arXiv Preprint, arXiv:1803.09074.