Data Analysis and Knowledge Discovery, 2021, Vol. 5, Issue 7: 91-100.     https://doi.org/10.11925/infotech.2096-3467.2020.1173
Research Article
A Multi-Label Classification Model with Two-Stage Transfer Learning
Lu Quan 1,2, He Chao 1, Chen Jing 3, Tian Min 1, Liu Ting 1
1Center for Studies of Information Resources, Wuhan University, Wuhan 430072, China
2Big Data Research Institute, Wuhan University, Wuhan 430072, China
3School of Information Management, Central China Normal University, Wuhan 430079, China

Abstract

[Objective] This paper builds a multi-label classification model based on two-stage transfer learning, addressing two problems of existing models: the difficulty of sampling multi-label data and the scarcity of common features in cross-domain transfer. [Methods] We constructed a two-stage transfer learning model following the path "general domain - single-label data in the target domain - multi-label data". The model is first trained in the general domain, then transferred to and fine-tuned on target-domain single-label data balanced with an over-sampling method, and finally transferred to the multi-label data to perform multi-label classification. [Results] We evaluated the model on image annotation for medical literature. It performed well on both image and text multi-label classification tasks, improving the F1 score by more than 50% over the one-stage transfer learning model. [Limitations] How to select the base model and the sampling method for different tasks remains to be studied. [Conclusions] This study can inform research on big-data annotation, retrieval, and utilization in domains with limited datasets.
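The staged pipeline described above ("general domain - balanced single-label target data - multi-label data") can be sketched as follows. This is an illustrative, framework-free skeleton, not the authors' code: a model is reduced to a dict of parameter blocks and actual training is omitted, so only the transfer logic (which weights are reused, which head is replaced) is visible. All function and field names here are hypothetical.

```python
# Stage 0: weights learned on a general-domain corpus (e.g. ImageNet for ResNet,
# or a large text corpus for BERT). The "body" stands in for the feature extractor.
def pretrain_general(num_features: int) -> dict:
    return {"body": [0.0] * num_features,
            "head": {"type": "softmax", "classes": 1000}}

# Stage 1: transfer to balanced single-label target data.
# The body is kept; the classification head is re-sized to the 30 target classes.
def finetune_single_label(model: dict, num_classes: int) -> dict:
    return {"body": model["body"],
            "head": {"type": "softmax", "classes": num_classes}}

# Stage 2: transfer to multi-label data.
# The body is kept again; the softmax head is swapped for independent sigmoids,
# so each label gets its own yes/no decision.
def finetune_multi_label(model: dict, num_labels: int) -> dict:
    return {"body": model["body"],
            "head": {"type": "sigmoid", "labels": num_labels}}

base = pretrain_general(num_features=768)
stage1 = finetune_single_label(base, num_classes=30)
stage2 = finetune_multi_label(stage1, num_labels=30)
```

The point of the skeleton is that the feature-extractor weights flow through both transfer steps unchanged, while only the output head is rebuilt at each stage.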

Keywords: Multi-Label Classification; Transfer Learning; Data Equalization; BERT Model; ResNet Model
Received: 2020-11-27      Published: 2021-03-08
CLC number: G203
Funding: *Innovative Research Group Project of the National Natural Science Foundation of China (71921002); 2020 Construction Project of the National School of Secrecy, Wuhan University
Corresponding author: Chen Jing, ORCID: 0000-0002-6444-2962, E-mail: dancinglulu@sina.com
Cite this article:
Lu Quan, He Chao, Chen Jing, Tian Min, Liu Ting. A Multi-Label Classification Model with Two-Stage Transfer Learning. Data Analysis and Knowledge Discovery, 2021, 5(7): 91-100.
Link to this article:
https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2020.1173      or      https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2021/V5/I7/91
Fig. 1  The multi-label classification model based on two-stage transfer learning
No. | Code | Class name | Images (unbalanced) | Images (balanced) | Texts (unbalanced) | Texts (balanced)
1 | D3DR | 3D reconstruction | 369 | 200 | 67 | 301
2 | DMEL | Electron microscopy | 367 | 200 | 66 | 301
3 | DMFL | Fluorescence microscopy | 1,256 | 200 | 201 | 299
4 | DMLI | Light microscopy | 1,313 | 200 | 55 | 301
5 | DMTR | Transmission microscopy | 462 | 200 | 66 | 301
6 | DRAN | Angiography | 165 | 200 | 19 | 301
7 | DRCO | Combined modalities in one image | 73 | 200 | 281 | 174
8 | DRCT | Computed tomography | 431 | 200 | 174 | 301
9 | DRMR | Magnetic resonance imaging | 470 | 200 | 19 | 301
10 | DRPE | Positron emission tomography | 48 | 200 | 136 | 138
11 | DRUS | Ultrasound | 300 | 200 | 391 | 301
12 | DRXR | X-ray radiography | 484 | 200 | 391 | 390
13 | DSEC | Electrocardiography | 143 | 200 | 117 | 301
14 | DSEE | Electroencephalography | 41 | 200 | 29 | 301
15 | DSEM | Electromyography | 30 | 200 | 18 | 301
16 | DVDM | Dermatology images | 145 | 200 | 104 | 301
17 | DVEN | Endoscopy | 108 | 200 | 75 | 301
18 | DVOR | Other organ images | 238 | 200 | 146 | 301
19 | GCHE | Chemical structures | 156 | 200 | 69 | 255
20 | GFIG | Statistical figures and charts | 5,243 | 200 | 186 | 301
21 | GFLO | Flowcharts | 165 | 200 | 106 | 301
22 | GGEL | Gel chromatography | 653 | 200 | 80 | 301
23 | GGEN | Gene sequences | 418 | 200 | 80 | 301
24 | GHDR | Hand-drawn sketches | 285 | 200 | 93 | 301
25 | GMAT | Mathematical formulas | 43 | 200 | 24 | 218
26 | GNCP | Non-clinical photos | 241 | 200 | 116 | 301
27 | GPLI | Program listings | 53 | 200 | 48 | 301
28 | GSCR | Screenshots | 150 | 200 | 105 | 301
29 | GSYS | System overviews | 271 | 200 | 93 | 301
30 | GTAB | Tables | 186 | 200 | 87 | 301
Table 1  Distribution of the single-label image and text data across the 30 classes, before and after balancing
Fig. 2  The data equalization method
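The balancing in Table 1 (e.g. GFIG reduced from 5,243 to 200 images, DSEM raised from 30 to 200) is consistent with sampling every class toward a fixed target size. Below is a minimal sketch of such an equalization step, assuming random down-sampling of majority classes and random up-sampling with replacement of minority classes; the paper's exact procedure may differ, and the function and variable names are illustrative.

```python
import random

def equalize(samples_by_class: dict, target: int, seed: int = 0) -> dict:
    """Balance every class to exactly `target` samples: classes above the
    target are randomly down-sampled; classes below it are up-sampled
    with replacement (duplicates of existing samples)."""
    rng = random.Random(seed)
    balanced = {}
    for label, samples in samples_by_class.items():
        if len(samples) >= target:
            # majority class: draw `target` samples without replacement
            balanced[label] = rng.sample(samples, target)
        else:
            # minority class: keep everything, then duplicate at random
            extra = rng.choices(samples, k=target - len(samples))
            balanced[label] = list(samples) + extra
    return balanced

# toy data mirroring two classes from Table 1 (GFIG: 5,243 items; DSEM: 30 items)
data = {"GFIG": list(range(5243)), "DSEM": list(range(30))}
balanced = equalize(data, target=200)
```

After the call, both classes hold exactly 200 samples: GFIG as a random subset, DSEM padded with duplicates of its original 30 items.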
Fig. 3  Framework of the multi-label classification model for medical literature images
Method | Model | Description
No transfer | Text BERT; Image ResNet | No transfer learning; the multi-label data are classified directly.
One-stage transfer | Text BERT; Image ResNet | One-stage transfer learning: pre-trained weights are loaded and the multi-label data are classified directly, skipping training on single-label data.
Two-stage text transfer | Text CNN (unbalanced); Text CNN (balanced); Text BERT (unbalanced); Text BERT (balanced) | Two-stage transfer learning: pre-trained CNN or BERT weights are loaded, then the model is trained on single-label and multi-label text data; models are marked "balanced" or "unbalanced" according to whether the single-label text data were equalized.
Two-stage image transfer | Image ResNet (unbalanced); Image ResNet (balanced) | Two-stage transfer learning: pre-trained ResNet weights are loaded, then the model is trained on single-label and multi-label image data; "balanced"/"unbalanced" as above.
Table 2  Settings of the comparison experiments
Method | Model | HLoss | F1_macro | AUC_micro | AUC_macro
No transfer | Text BERT | 0.0328 | 0.0415 | 0.9205 | 0.7267
No transfer | Image ResNet | 0.0603 | 0.0284 | 0.6282 | 0.6051
One-stage transfer | Text BERT | 0.0251 | 0.2917 | 0.9441 | 0.8411
One-stage transfer | Image ResNet | 0.0242 | 0.2370 | 0.7516 | 0.7220
Two-stage text transfer | Text CNN (unbalanced) | 0.0239 | 0.1850 | 0.8552 | 0.8307
Two-stage text transfer | Text CNN (balanced) | 0.0230 | 0.1920 | 0.8734 | 0.8601
Two-stage text transfer | Text BERT (unbalanced) | 0.0185 | 0.4501 | 0.9609 | 0.8733
Two-stage text transfer | Text BERT (balanced) | 0.0179 | 0.4623 | 0.9757 | 0.8997
Two-stage image transfer | Image ResNet (unbalanced) | 0.0160 | 0.4820 | 0.7644 | 0.7516
Two-stage image transfer | Image ResNet (balanced) | 0.0158 | 0.4891 | 0.7753 | 0.7820
Table 3  Results of the multi-label classification models
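For reference, the HLoss (Hamming loss) and F1_macro columns of Table 3 can be computed from binary label matrices as below. This is a sketch following the standard definitions of these metrics, not the paper's evaluation code; the AUC columns are omitted because they require continuous prediction scores rather than hard labels.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, averaged over all
    samples and labels (lower is better)."""
    n, num_labels = len(y_true), len(y_true[0])
    wrong = sum(t != p
                for ts, ps in zip(y_true, y_pred)
                for t, p in zip(ts, ps))
    return wrong / (n * num_labels)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-label F1 scores (F1_macro).
    A label with no true positives contributes an F1 of 0."""
    num_labels = len(y_true[0])
    f1s = []
    for j in range(num_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum(not t[j] and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and not p[j] for t, p in zip(y_true, y_pred))
        f1s.append(0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn))
    return sum(f1s) / num_labels

# toy example: 2 samples, 3 labels, one false negative on label 2
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0]]
```

With this toy input, one of the six label slots is wrong, so the Hamming loss is 1/6; labels 0 and 1 score F1 = 1 and label 2 scores 0, so F1_macro is 2/3.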