Data Analysis and Knowledge Discovery  2023, Vol. 7 Issue (9): 64-77     https://doi.org/10.11925/infotech.2096-3467.2022.0825
Research Paper
Few-Shot Language Understanding Model for Task-Oriented Dialogues
Xiang Zhuoyuan1, Chen Hao1, Wang Qian1, Li Na2
1School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan 430073, China
2Hubei Tobacco Company, Huangshi City Company, Huangshi 435000, China
Full Text: PDF (1251 KB)   HTML
Export: BibTeX | EndNote (RIS)

Abstract

[Objective] This paper applies dialogue language understanding to dialogue systems whose domains are updated frequently, where there is not enough annotated data to support model learning. [Methods] We propose the Information Augmentation Model for Few-Shot Spoken Language Understanding (IAM-FSLU), a joint model that uses few-shot learning to address data scarcity and poor model adaptability when new-domain and cross-domain scenarios involve differing intent types and numbers. The model also builds a more effective explicit relationship between the two sub-tasks of few-shot intent recognition and few-shot slot extraction. [Results] Compared with non-joint models, IAM-FSLU improved the slot-extraction F1 score by nearly 30 percentage points and sentence accuracy by nearly 10 percentage points in the 1-shot setting, and improved the slot-extraction F1 score by nearly 35 percentage points and sentence accuracy by 12 to 16 percentage points in the 3-shot setting. [Limitations] IAM-FSLU still needs better performance on the intent-recognition sub-task, and although it substantially outperforms models with implicit relationship modeling on slot extraction, its gains in sentence accuracy over them are limited. [Conclusions] Comparative experiments under different few-shot settings confirm that the overall performance of IAM-FSLU is better than that of other mainstream models.

Key words: Deep Learning    Dialogue System    Intention Detection    Dialogue Language Understanding    Joint Modeling
Received: 2022-08-06      Online: 2023-10-24
CLC Number: TP391
Fund Support: National Natural Science Foundation of China (61702553); Science and Technology Project of Hubei Provincial Tobacco Company (027Y2022-031); Higher Education Discipline Innovation and Talent Introduction Program (B21038)
Corresponding author: Li Na, ORCID: 0000-0003-2749-5958, E-mail: 549655554@qq.com.
Cite this article:
Xiang Zhuoyuan, Chen Hao, Wang Qian, Li Na. Few-Shot Language Understanding Model for Task-Oriented Dialogues. Data Analysis and Knowledge Discovery, 2023, 7(9): 64-77.
Link to this article:
https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2022.0825      or      https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2023/V7/I9/64
Fig.1  Intent recognition model based on the improved prototypical network
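The full text describes an improved prototypical network for few-shot intent recognition. As background, the sketch below is a minimal PyTorch implementation of the plain prototypical-network idea only: class prototypes are the mean support embedding per intent, and queries are scored by distance to each prototype. The encoder, dimensions, and toy data are illustrative assumptions, not the paper's design.

import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, n_classes):
    # One prototype per intent: the mean of that intent's support embeddings.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_emb, protos):
    # Score queries by negative squared Euclidean distance to each prototype.
    dists = torch.cdist(query_emb, protos) ** 2      # [n_query, n_classes]
    return F.log_softmax(-dists, dim=-1)             # log-probabilities over intents

# Toy 1-shot episode: 3 intents, 64-dimensional sentence embeddings.
support = torch.randn(3, 64)
labels = torch.tensor([0, 1, 2])
query = torch.randn(5, 64)
pred_intents = classify(query, prototypes(support, labels, n_classes=3)).argmax(dim=-1)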
Fig.2  Framework of the information-augmentation-based collaborative dialogue language understanding model
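Fig.2's framework makes the relationship between intent recognition and slot extraction explicit rather than implicit. One plausible way to wire such a link, sketched below, is to condition every token representation on a predicted-intent vector before slot tagging; this concatenation scheme is an assumption for illustration, not the paper's exact information-augmentation mechanism.

import torch
import torch.nn as nn

class IntentConditionedTagger(nn.Module):
    # Sketch: append the predicted intent's vector to every token representation,
    # so the slot tagger sees the intent decision explicitly.
    def __init__(self, token_dim, intent_dim, n_slot_labels):
        super().__init__()
        self.proj = nn.Linear(token_dim + intent_dim, n_slot_labels)

    def forward(self, token_emb, intent_vec):
        # token_emb: [seq_len, token_dim]; intent_vec: [intent_dim]
        cond = intent_vec.unsqueeze(0).expand(token_emb.size(0), -1)
        return self.proj(torch.cat([token_emb, cond], dim=-1))  # per-token slot logits

tagger = IntentConditionedTagger(token_dim=64, intent_dim=64, n_slot_labels=9)
slot_logits = tagger(torch.randn(12, 64), torch.randn(64))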
Fig.3  Framework of the few-shot slot extraction model based on conditional random fields
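The slot extractor in Fig.3 decodes label sequences with a conditional random field. The core of any CRF tagger is Viterbi decoding over emission and transition scores, sketched below with random placeholder scores rather than the paper's trained parameters.

import torch

def viterbi_decode(emissions, transitions):
    # emissions: [seq_len, n_labels] per-token scores;
    # transitions: [n_labels, n_labels] score of moving from label i to label j.
    seq_len, n_labels = emissions.shape
    score = emissions[0]                     # best score ending in each label so far
    backpointers = []
    for t in range(1, seq_len):
        total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        score, best_prev = total.max(dim=0)  # best previous label for each current label
        backpointers.append(best_prev)
    best_label = int(score.argmax())         # trace the best path backwards
    path = [best_label]
    for bp in reversed(backpointers):
        best_label = int(bp[best_label])
        path.append(best_label)
    return path[::-1]

slot_labels = viterbi_decode(torch.randn(10, 5), torch.randn(5, 5))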
Fig.4  Example of constructing a dynamic attention vector
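The exact construction of the paper's dynamic attention vectors is given in the full text. As a generic illustration only, the sketch below builds a query-dependent context vector with scaled dot-product attention; using a single query vector over token embeddings is an assumption, not the paper's formula.

import math
import torch
import torch.nn.functional as F

def dynamic_attention_vector(query, tokens):
    # tokens: [seq_len, dim]; query: [dim]. The token weights change with the
    # query, so each input yields a different ("dynamic") context vector.
    scores = tokens @ query / math.sqrt(tokens.size(-1))  # [seq_len]
    weights = F.softmax(scores, dim=-1)
    return weights @ tokens                               # [dim]

context = dynamic_attention_vector(torch.randn(64), torch.randn(12, 64))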
Statistic                   Value
Total utterances            6,694
Average utterance length    9.9
Total domains               52
Domains in training set     38
Domains in validation set   5
Domains in test set         9
Total intents               141
Average intents per domain  2.38
Total slots                 416
Average slots per domain    8
Table 1  Statistics of the original dataset
Setting  Split           Support set size  Query set size  Avg. intents  Avg. slots
1-shot   Training set    7,464             7,600           2.2           8.1
         Validation set  22                556             3.2           7.0
         Test set        55                1,068           4.5           9.1
3-shot   Training set    22,338            7,600           2.2           8.1
         Validation set  66                511             3.2           7.0
         Test set        147               1,061           4.5           9.1
Table 2  Statistics of the reconstructed data under different few-shot settings
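Table 2 reports the data after reorganizing the original corpus into K-shot episodes, each pairing a small support set with a larger query set. A simplified sketch of K-shot sampling follows; it draws k support examples per intent within one domain, which is a simplifying assumption (the paper's episodes must also cover slot labels).

import random
from collections import defaultdict

def sample_k_shot_episode(dataset, k, n_query):
    # dataset: list of (utterance, intent) pairs from a single domain.
    by_intent = defaultdict(list)
    for example in dataset:
        by_intent[example[1]].append(example)
    support, used = [], set()
    for intent, examples in by_intent.items():
        chosen = random.sample(examples, k)   # k support examples per intent
        support.extend(chosen)
        used.update(id(e) for e in chosen)
    remaining = [e for e in dataset if id(e) not in used]
    query = random.sample(remaining, min(n_query, len(remaining)))
    return support, query

toy_domain = [(f"utterance {i}", i % 3) for i in range(60)]  # 3 intents, 20 each
support, query = sample_k_shot_episode(toy_domain, k=1, n_query=20)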
Item                     Specification
Operating system         Windows 10
Disk storage             1.5 TB
Memory                   16 GB
CPU                      AMD Ryzen 5 2600 6-Core Processor
GPU                      NVIDIA GeForce RTX 2070 Super
GPU memory               11 GB
Programming language     Python 3.6
Deep learning framework  PyTorch 1.7.1
Table 3  Experimental environment
Model      Intent accuracy/%  Slot extraction F1/%  Sentence accuracy/%
Proto-IS   75.77              26.30                 19.98
FSLU       67.95              61.38                 32.10
SAMGM-SLU  68.12              62.13                 33.21
SGM-SLU    69.35              62.42                 34.65
IAM-FSLU   70.31              62.99                 35.30
Table 4  Comparison experiments under the 3-shot setting
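In Tables 4 and 5, sentence accuracy is the strictest of the three metrics: an utterance counts as correct only when the predicted intent and the entire slot sequence both match the gold annotation. Below is a small sketch of this joint metric, assuming gold and predicted structures shaped as shown.

def sentence_accuracy(gold, pred):
    # gold/pred: lists of (intent, slot_label_sequence) per utterance.
    correct = sum(
        1 for (g_int, g_slots), (p_int, p_slots) in zip(gold, pred)
        if g_int == p_int and g_slots == p_slots
    )
    return correct / len(gold)

gold = [("play_music", ["O", "B-song", "I-song"]), ("weather", ["O", "B-city"])]
pred = [("play_music", ["O", "B-song", "I-song"]), ("weather", ["O", "O"])]
print(sentence_accuracy(gold, pred))  # 0.5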
Model      Intent accuracy/%  Slot extraction F1/%  Sentence accuracy/%
Proto-IS   67.88              20.86                 16.10
FSLU       64.79              48.93                 24.34
SAMGM-SLU  65.02              49.14                 25.58
SGM-SLU    64.58              49.55                 25.63
IAM-FSLU   64.70              50.45                 26.12
Table 5  Comparison experiments under the 1-shot setting
Model      Accuracy/%  F1/%   Training time
IAM-FSLU   70.31       -      3h07m
-FSTM-CRF  62.17       71.40  1h20m
-Bi-GRU    58.38       67.25  54m34s
-CNN       61.02       70.05  1h19m
Table 6  Ablation results