Construction and Verification of Type-Controllable Question Generation Model Based on Deep Learning and Knowledge Graphs
Wang Xiaofeng1, Sun Yujie2, Wang Huazhen2 (Corresponding author), Zhang Hengzhang2
1 The Academy of Chinese Language and Culture Education, Huaqiao University, Xiamen 361021, China
2 College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
Abstract [Objective] This research aims to generate questions automatically, reducing the workload of manual question writing. It also addresses the uncontrollable difficulty and limited dimensions of questions produced by collaborative question generation, and encourages learners to engage in deep reading comprehension through intelligent questioning. [Methods] We propose TCQG (Type-Controllable Question Generation), a question generation model based on the Transformer and knowledge graphs, which automatically generates type-controllable questions. First, a knowledge graph is fed into the Graph Transformer module of TCQG for graph representation learning, yielding a subgraph vector. Then, matching external questions are retrieved for each subgraph using similarity measures. Next, the 4MAT question-type parameters and the retrieved external questions are fed into a BiLSTM network to obtain externally enhanced vectors. Finally, the subgraph vector and the externally enhanced vector are fed into the Pointer-Generator Network of TCQG to generate questions. [Results] The TCQG model achieves better representation learning of the knowledge graph through the Graph Transformer. It reaches a BLEU score of 39.62 on the one-hop triple dataset and 38.63 on "what is" questions, both surpassing the baseline models. [Limitations] This research covers only a limited set of question types and cannot cover all question types in human language. In addition, it does not match answers to the generated questions, which limits its real-world applications. [Conclusions] The proposed model generates the diverse, semantically rich, and naturally expressed questions needed in educational scenarios, enabling learners to benefit from the generated questions and engage in deeper reading comprehension.
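Of the pipeline steps above, the retrieval of matching external questions is the most self-contained: each subgraph vector is compared against a bank of embedded external questions, and the closest one is selected for the enhancement stage. The sketch below illustrates that step under stated assumptions: the abstract says only "similarity measures", so the use of cosine similarity is an assumption, and the question bank, embedding dimensionality, and function names are hypothetical placeholders, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_external_question(subgraph_vec, question_bank):
    """Return the external question whose embedding is most similar to the
    subgraph vector. question_bank is a list of (question_text, embedding)."""
    return max(question_bank, key=lambda qe: cosine(subgraph_vec, qe[1]))[0]

# Toy example with hypothetical 3-d embeddings (real embeddings would come
# from the Graph Transformer / a sentence encoder, not hand-written values).
bank = [
    ("What is photosynthesis?", [0.9, 0.1, 0.0]),
    ("Why do leaves change color?", [0.1, 0.8, 0.2]),
]
subgraph = [0.85, 0.15, 0.05]
print(match_external_question(subgraph, bank))  # → What is photosynthesis?
```

In the full model, the retrieved question text would then be encoded by the BiLSTM together with the 4MAT type parameter to form the externally enhanced vector; that stage depends on learned weights and is not reproduced here.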
Received: 21 September 2022
Published: 22 March 2023
Fund: Fundamental Research Funds for the Central Universities (17SKGC-QG13)
Corresponding Author:
Wang Huazhen, ORCID: 0000-0002-6548-9957, E-mail: wanghuazhen@hqu.edu.cn