An Information Augmentation Based Improved Model for Intent Detection and Slot Tagging in Few-Shot Learning

Xiang Zhuoyuan, Chen Hao, Wang Qian, Li Na

(Faculty of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan 430073, China)
(Hubei Tobacco Company, Huangshi City Company, Huangshi 435000, China)
Abstract
[Objective] Intent detection and slot tagging are two subtasks of spoken language understanding. This paper focuses on how to apply spoken language understanding to dialogue systems whose domains are updated frequently, where labeled data are insufficient to support model learning. [Methods] This paper proposes an Information Augmentation Model for Few-Shot Spoken Language Understanding (IAM-FSLU). It uses few-shot learning to address new-domain and cross-domain settings in which intents differ in type and number, data are scarce, and models transfer poorly, and it constructs a more effective explicit relationship between the two tasks of few-shot intent detection and few-shot slot extraction. [Results] Compared with models that do not model the two tasks jointly, under the 1-shot setting the F1 score of slot extraction improves by nearly 30% and sentence accuracy by nearly 10%; under the 3-shot setting, the F1 score of slot extraction improves by nearly 35% and sentence accuracy by nearly 13-16%.
[Limitations] The IAM-FSLU model still needs further improvement on the intent detection subtask. Compared with implicit relationship modeling models, it improves slot extraction substantially, but the gain in sentence accuracy is limited, leaving considerable room for improvement. [Conclusions] Comparative experiments under different few-shot settings verify the effectiveness of the IAM-FSLU model, and the results show that its overall performance is better than that of other mainstream models.
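To make the few-shot setting concrete, the sketch below shows a prototypical-network-style classifier, a common baseline approach in few-shot intent detection: each intent's support utterances (k-shot) are averaged into a class prototype, and a query is assigned to the nearest prototype. This is only an illustration of the general few-shot setup, not the IAM-FSLU model; the intent labels and embeddings are hypothetical toy data.

```python
# Few-shot intent classification via class prototypes (prototypical-network
# style). Illustrative sketch only; NOT the IAM-FSLU model from the paper.
import math

def prototype(vectors):
    """Mean of the support embeddings for one intent class."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, support):
    """Assign the query embedding to the intent with the nearest prototype.

    support: dict mapping intent label -> list of k support embeddings.
    """
    protos = {label: prototype(vecs) for label, vecs in support.items()}
    return min(protos, key=lambda label: euclidean(query, protos[label]))

# Toy 1-shot example with 2-dimensional "embeddings" (hypothetical intents).
support = {
    "BookFlight": [[1.0, 0.0]],
    "PlayMusic":  [[0.0, 1.0]],
}
print(classify([0.9, 0.2], support))  # nearest prototype: BookFlight
```

In a real system the embeddings would come from a sentence encoder, and a joint model such as the one described above would additionally condition slot extraction on the predicted intent rather than treating the two tasks independently.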
Published: 17 March 2023