Data Analysis and Knowledge Discovery  2022, Vol. 6 Issue (10): 93-102    DOI: 10.11925/infotech.2096-3467.2022.0071
Classification Model for Scholarly Articles Based on Improved Graph Neural Network
Huang Xuejian1,2, Liu Yuyang3, Ma Tinghuai1
1College of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China
2VR College of Modern Industry, Jiangxi University of Finance and Economics, Nanchang 330013, China
3College of Humanities, Jiangxi University of Finance and Economics, Nanchang 330013, China
Abstract  

[Objective] This paper addresses the over-smoothing issue of traditional graph neural networks and adaptively allocates weights across network depths and neighbor nodes, aiming to improve the performance of academic literature classification. [Methods] We propose an improved graph neural network model for academic paper classification. First, using a multi-head attention mechanism, the model learns multiple types of correlation features among documents and adaptively assigns weights to different neighbor nodes. Second, based on a residual network structure, the model aggregates the outputs of every layer for each node, enabling an adaptive aggregation radius to be learned. Finally, the improved graph neural network learns a feature representation for each node in the paper citation graph, which is fed into a multi-layer fully connected network to obtain the final classification. [Results] On a large-scale real-world dataset, the model reaches an accuracy of 0.61, which is 0.04 and 0.14 higher than that of the GCN and Transformer models, respectively. [Limitations] Further research is needed to improve classification accuracy on small categories and hard-to-distinguish samples. [Conclusions] The improved graph neural network can effectively classify academic articles.
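The adaptive neighbor weighting described in [Methods] can be sketched as a GAT-style multi-head attention layer. The following NumPy illustration is a simplified assumption, not the authors' code: the function names (`gat_layer`, `softmax`), the LeakyReLU slope, and the head-averaging step are all hypothetical choices. Each head projects node features, scores every neighbor with a learned attention vector, normalizes the scores with softmax to obtain adaptive neighbor weights, and aggregates; the heads' outputs are then averaged.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(h, adj, W_heads, a_heads):
    """One multi-head graph-attention layer (hypothetical sketch).

    h: (N, F) node features; adj: (N, N) 0/1 adjacency (self-loops included);
    W_heads: list of (F, F') projections; a_heads: list of (2*F',) attention
    vectors. Head outputs are averaged, mirroring adaptive neighbor weighting.
    """
    N = h.shape[0]
    outs = []
    for W, a in zip(W_heads, a_heads):
        z = h @ W                                  # project node features
        out = np.zeros_like(z)
        for i in range(N):
            nbrs = np.nonzero(adj[i])[0]
            # unnormalized attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
            logits = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
            logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
            alpha = softmax(logits)                # adaptive neighbor weights
            out[i] = alpha @ z[nbrs]               # weighted aggregation
        outs.append(out)
    return np.mean(outs, axis=0)                   # combine attention heads
```

In a full model, the per-head outputs could also be concatenated rather than averaged; the paper reports using 3 heads (see the parameter table below in the page).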

Key words: Graph Neural Network; Attention Mechanism; Residual Network; Deep Learning; Paper Classification; Text Classification
Received: 23 January 2022      Published: 16 November 2022
Chinese Library Classification (ZTFLH): G202, TP319
Fund: National Key R&D Program of China (2021YFE0104400); Jiangxi Provincial Humanities and Social Sciences Research Project (JY21253); Youth Project of the 14th Five-Year Plan of Jiangxi Educational Science (21QN012)
Corresponding Authors: Ma Tinghuai,ORCID:0000-0003-2320-1692      E-mail: thma@nuist.edu.cn

Cite this article:

Huang Xuejian, Liu Yuyang, Ma Tinghuai. Classification Model for Scholarly Articles Based on Improved Graph Neural Network. Data Analysis and Knowledge Discovery, 2022, 6(10): 93-102.

URL:

https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2022.0071     OR     https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2022/V6/I10/93

Model Architecture
Subgraph Sampling Process
Aggregation Based on Residual Structure
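The "Aggregation Based on Residual Structure" idea (summing the outputs of every layer so shallow and deep neighborhood features both reach the classifier, giving an adaptive aggregation radius and countering over-smoothing) can be sketched as follows. This is a hypothetical simplification: `mean_aggregate` stands in for the paper's attention-based layer, and the unweighted sum stands in for whatever learned combination the model actually uses.

```python
import numpy as np

def mean_aggregate(h, adj):
    """Plain mean-over-neighbors aggregation standing in for one GNN layer."""
    deg = adj.sum(axis=1, keepdims=True)
    return adj @ h / np.maximum(deg, 1)

def res_gnn(h0, adj, num_layers=3):
    """Residual-style GNN readout (hypothetical sketch).

    The final representation sums the input h^(0) and every layer output
    h^(1)..h^(L), so shallow features bypass deep layers instead of being
    smoothed away -- the 'adaptive aggregation radius' of the paper.
    """
    outputs = [h0]
    h = h0
    for _ in range(num_layers):
        h = np.tanh(mean_aggregate(h, adj))   # one message-passing layer
        outputs.append(h)
    return np.sum(outputs, axis=0)            # aggregate across all depths
```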
Type                 Count
Total nodes          3,063,061
Total edges          29,168,650
Isolated nodes       535,347
Labeled nodes        1,044,417
Unlabeled nodes      2,018,644
Node classes         23
Max in-degree        15,979
Average in-degree    13.6
Max out-degree       9,102
Average out-degree   10.5
Statistical Analysis of Data Sets
Sample Category Distribution
Category              Parameter                                Value
Model parameters      Neighbor sampling depth (hops)           3
                      Max sampled neighbors per layer          10, 10, 10
                      GNN hidden-layer dimension               256
                      Attention dropout rate                   0.1
                      Number of attention heads                3
                      Fully connected layers                   5
                      Fully connected hidden dimensions        512, 256, 128, 64, 23
                      Fully connected dropout rate             0.1
Training parameters   Learning rate                            0.001
                      Regularization coefficient               1e-6
                      Max epochs                               100
                      Early-stopping patience                  20
                      Batch size                               1024
Main Parameter Setting
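The sampling settings in the table above (3 hops, at most 10 sampled neighbors per hop) suggest GraphSAGE-style neighbor sampling to keep minibatches tractable on a 3-million-node citation graph. The sketch below is an assumption about how such sampling could look; the function name `sample_subgraph` and the adjacency-list representation are hypothetical, not taken from the paper.

```python
import random

def sample_subgraph(adj_list, seeds, fanouts=(10, 10, 10), seed=42):
    """GraphSAGE-style neighbor sampling (hypothetical sketch).

    adj_list: dict node -> list of neighbor ids; seeds: target nodes;
    fanouts: max sampled neighbors per hop (3 hops of 10, per the table).
    Returns the set of nodes in the sampled computation subgraph.
    """
    rng = random.Random(seed)
    frontier = set(seeds)
    visited = set(seeds)
    for fanout in fanouts:                    # one sampling round per hop
        nxt = set()
        for u in frontier:
            nbrs = adj_list.get(u, [])
            if len(nbrs) > fanout:            # cap neighbors per node
                nbrs = rng.sample(nbrs, fanout)
            nxt.update(nbrs)
        frontier = nxt - visited              # expand only to new nodes
        visited |= nxt
    return visited
```

Capping the fanout bounds the subgraph at roughly 1 + 10 + 100 + 1000 nodes per seed, which is what makes minibatch training on a graph of this size feasible.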
Experimental Comparison of Different Graph Neural Networks at Different Aggregation Radii
Experimental Comparison of ResGAT Under Different Attention Counts
Experimental Comparison with Supervised Text Classification Models
Error Visualization
Class   Proportion   Precision   Recall   F1
A 0.3% 0.00 0.00 0.00
B 6.2% 0.61 0.75 0.68
C 10.3% 0.74 0.71 0.72
D 10.3% 0.65 0.67 0.66
E 4.6% 0.44 0.73 0.55
F 3.3% 0.60 0.73 0.66
G 4.1% 0.62 0.51 0.56
H 6.6% 0.59 0.32 0.42
I 2.1% 0.73 0.68 0.71
J 2.3% 0.33 0.10 0.16
K 3.1% 0.66 0.44 0.52
L 5.2% 0.57 0.66 0.61
M 8.3% 0.52 0.56 0.54
N 9.7% 0.81 0.80 0.80
O 1.8% 0.35 0.35 0.35
P 4.9% 0.65 0.52 0.57
Q 2.0% 0.73 0.76 0.74
R 3.1% 0.53 0.53 0.53
S 2.2% 0.72 0.94 0.81
T 2.1% 0.57 0.77 0.66
U 2.3% 0.51 0.49 0.50
V 4.0% 0.39 0.30 0.34
W 1.3% 0.51 0.34 0.41
Classification Results of Different Classes