1 Beijing Key Laboratory of Internet Culture and Digital Dissemination Research, Beijing Information Science and Technology University, Beijing 100101, China
2 General Key Laboratory of Complex System Simulation, Institute of Systems Engineering, Academy of Military Sciences, Beijing 100101, China
[Objective] This paper proposes a model combining head-tail pointers with active learning, which addresses the sparse-sample problem in identifying long weapon-related terms. [Methods] First, we used the BERT pre-trained language model to obtain word vector representations. Second, we extracted long terms with a head-tail pointer network. Third, we designed a new active learning sampling strategy to select high-quality unlabeled samples. Finally, we trained the model iteratively to reduce its dependence on data scale. [Results] The F1 value for long term extraction improved by 0.50%. With active learning sampling, about 50% of the high-quality data achieved the same F1 value as training on 100% of it. [Limitations] Due to limited computing power, the data set in this paper was small, and the active learning sampling strategy requires extra processing time. [Conclusions] The head-tail pointer and active learning method can extract long terms effectively and reduce the cost of data annotation.
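The two core steps of the pipeline, pairing predicted head and tail pointers into term spans and selecting uncertain samples for annotation, can be sketched as follows. This is a minimal illustration only: the decoding threshold, the nearest-tail pairing rule, the maximum span length, and the least-confidence criterion are assumptions, since the abstract does not specify the paper's exact strategy.

```python
from typing import List, Tuple

def decode_spans(head_probs: List[float], tail_probs: List[float],
                 threshold: float = 0.5, max_len: int = 10) -> List[Tuple[int, int]]:
    """Pair each predicted head token with the nearest tail at or after it.

    head_probs / tail_probs are per-token probabilities (e.g. sigmoid outputs
    of two linear layers on top of BERT); values and pairing rule are
    illustrative assumptions.
    """
    tails = [i for i, p in enumerate(tail_probs) if p >= threshold]
    spans = []
    for h, p in enumerate(head_probs):
        if p < threshold:
            continue
        for t in tails:
            if h <= t <= h + max_len:  # tail must follow head within max_len tokens
                spans.append((h, t))
                break
    return spans

def least_confidence_sample(confidences: List[float], k: int) -> List[int]:
    """Return indices of the k unlabeled samples the model is least confident
    about (a standard active-learning criterion, assumed here for illustration)."""
    return sorted(range(len(confidences)), key=lambda i: confidences[i])[:k]
```

For example, with head probabilities peaking at tokens 0 and 3 and tail probabilities peaking at tokens 1 and 4, `decode_spans` yields the spans (0, 1) and (3, 4); the selected low-confidence samples would then be sent for annotation in the next training iteration.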
Lyu Xueqiang, Yang Yuting, Xiao Gang, Li Yuxian, You Xindong. Extracting Long Terms from Sparse Samples[J]. Data Analysis and Knowledge Discovery, 2024, 8(1): 135-145.