1 National Science Library, Chinese Academy of Sciences, Beijing 100190, China
2 Department of Library, Information and Archives Management, School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China
3 Hubei Key Laboratory of Big Data in Science and Technology, Wuhan 430071, China
4 Wuhan Library, Chinese Academy of Sciences, Wuhan 430071, China
[Objective] This paper compares the classification of Chinese medical literature by the BERT-Base-Chinese model and a BERT model re-pretrained on Chinese medical text (BERT-Re-Pretraining-Med-Chi), and analyzes their differences. [Methods] We built a medical pre-training corpus of 340,000 abstracts of Chinese medical literature. We then constructed two training sets of 16,000 and 32,000 abstracts, and a test set of another 3,200 abstracts. Finally, we compared the performance of the two models, using an SVM classifier as the benchmark. [Results] Both BERT models outperformed the SVM model, with average F1-scores about 5% higher. The BERT-Re-Pretraining-Med-Chi model achieved F1-scores of 0.8390 and 0.8607 on the two training sets, the best results among the three models. [Limitations] This study only examined papers from 16 medical and health categories of the Chinese Library Classification; the remaining four categories were excluded because of insufficient data. [Conclusions] The BERT-Re-Pretraining-Med-Chi model improves the classification of medical literature, and BERT-based deep learning methods yield better results with large-scale training sets.
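The SVM benchmark described above can be sketched as a standard text-classification pipeline: TF-IDF features fed to a linear SVM, scored with the macro-averaged F1 metric the paper reports. This is an illustrative sketch only, not the authors' code; the toy corpus and category labels below are hypothetical stand-ins for the Chinese Library Classification categories and the 340,000-abstract corpus.

```python
# Hedged sketch of an SVM text-classification baseline with F1 evaluation.
# The tiny English corpus here is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Hypothetical training abstracts and their (stand-in) category labels.
train_texts = [
    "heart rate and blood pressure study",
    "antibiotic treatment of bacterial infection",
    "cardiac arrhythmia diagnosis methods",
    "penicillin resistance in bacterial strains",
]
train_labels = ["cardiology", "pharmacology", "cardiology", "pharmacology"]

# Held-out test abstracts, analogous to the paper's separate test sample.
test_texts = ["blood pressure monitoring", "antibiotic dosage guidelines"]
test_labels = ["cardiology", "pharmacology"]

# TF-IDF vectorization followed by a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

pred = model.predict(test_texts)
# Macro-averaged F1, the metric used to compare the models in the paper.
score = f1_score(test_labels, pred, average="macro")
print(score)
```

In the paper's setting, the vectorizer would tokenize Chinese abstracts (e.g. after word segmentation) and the labels would be the 16 medical and health categories; the pipeline structure is otherwise the same.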
Zhao Yang, Zhang Zhixiong, Liu Huan, Ding Liangping. Classification of Chinese Medical Literature with BERT Model. Data Analysis and Knowledge Discovery, 2020, 4(8): 41-49.