Data Analysis and Knowledge Discovery  2020, Vol. 4 Issue (8): 28-40    DOI: 10.11925/infotech.2096-3467.2019.1222
A Comparative Study of Word Representation Models Based on Deep Learning
Yu Chuanming1, Wang Manyi2, Lin Hongjun1, Zhu Xingyu1, Huang Tingting2, An Lu3
1School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan 430073, China
2School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China
3School of Information Management, Wuhan University, Wuhan 430072, China
Abstract

[Objective] This study systematically explores the principles of traditional deep representation models and the latest pre-trained ones, aiming to examine their performance in text mining tasks. [Methods] We compared the models' text mining performance from both the model perspective and the experimental perspective. All tests were conducted on six datasets: CR, MR, MPQA, Subj, SST-2, and TREC. [Results] The XLNet model achieved the best average F1 value (0.9186), higher than ELMo (0.8090), BERT (0.8983), Word2Vec (0.7692), GloVe (0.7576), and FastText (0.7506). [Limitations] Our research focused on text classification tasks and did not compare the performance of word representation methods in machine translation, question answering, and other tasks. [Conclusions] The traditional deep representation learning models and the latest pre-trained ones yield different results in text mining tasks.
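The evaluation pipeline summarized in the abstract — represent each sentence with word vectors, train a classifier, and score it with F1 — can be illustrated with a minimal sketch. The hand-crafted two-dimensional vectors, the toy sentences, and the nearest-centroid classifier below are all illustrative stand-ins (the paper uses pretrained models such as Word2Vec, GloVe, and BERT on real benchmark datasets), but the embed-classify-score structure mirrors the comparison being described.

```python
import numpy as np

# Toy word vectors: axis 0 loosely encodes sentiment, axis 1 topic.
# Real experiments would load pretrained embeddings instead.
vocab = {
    "good":  np.array([1.0, 0.0]),  "great": np.array([1.0, 0.1]),
    "fine":  np.array([0.9, 0.0]),  "bad":   np.array([-1.0, 0.0]),
    "poor":  np.array([-1.0, -0.1]), "awful": np.array([-0.9, 0.0]),
    "movie": np.array([0.0, 1.0]),  "plot":  np.array([0.0, 0.9]),
}

def embed(sentence):
    """Sentence vector = mean of its (known) word vectors."""
    vecs = [vocab[w] for w in sentence.split() if w in vocab]
    return np.mean(vecs, axis=0)

# Tiny labeled training set (1 = positive, 0 = negative).
train = [("good great movie", 1), ("fine good plot", 1),
         ("bad poor movie", 0), ("awful bad plot", 0)]
X = np.stack([embed(s) for s, _ in train])
y = np.array([label for _, label in train])

# Nearest-centroid classifier: one mean vector per class.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(sentence):
    v = embed(sentence)
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

def f1_score(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

test = [("great fine movie", 1), ("poor awful plot", 0)]
preds = [predict(s) for s, _ in test]
print(f1_score([label for _, label in test], preds))  # → 1.0
```

Swapping the embedding function (static Word2Vec/GloVe/FastText averages versus contextual ELMo/BERT/XLNet representations) while holding the classifier and F1 scoring fixed is the essence of the comparison the paper reports.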

Received: 08 November 2019      Published: 14 September 2020
 ZTFLH: TP391
Corresponding Authors: Yu Chuanming     E-mail: yucm@zuel.edu.cn