Data Analysis and Knowledge Discovery  2021, Vol. 5 Issue (7): 101-110    DOI: 10.11925/infotech.2096-3467.2020.1216
Detecting Rumors with Uncertain Loss and Task-level Attention Mechanism
Yang Hanxun, Zhou Dequn, Ma Jing, Luo Yongcong
College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
Abstract  

[Objective] This paper proposes a new model based on an uncertainty loss function and a task-level attention mechanism, aiming to address the issue of setting main and auxiliary tasks in rumor detection. [Methods] First, we integrated domain knowledge from rumor exploration, stance classification, and rumor detection. Then, we constructed a modified model with a task-level attention mechanism. Third, we used the uncertainty loss function to explore the weight relationship among the tasks and obtain better detection results. Finally, we evaluated the model's performance on the Pheme4 and Pheme5 datasets. [Results] Compared with existing models, the Macro-F of our model increased by 4.2 and 7.6 percentage points on Pheme4 and Pheme5, respectively. [Limitations] We only examined our model with the Pheme dataset. [Conclusions] The proposed method can effectively detect rumors without dividing the tasks into main and auxiliary ones.
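The uncertainty loss in the abstract refers to homoscedastic-uncertainty weighting in the style of Kendall et al. [19]: each task's loss is scaled by a learned log-variance term, so task weights need not be set by hand. A minimal sketch of that combination rule; the per-task loss values and log-variance settings below are illustrative, not taken from the paper:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weighting:
    task i contributes exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    would be a learnable parameter (here just a plain number)."""
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        total += np.exp(-s) * loss + s
    return float(total)

# Three tasks: rumor exploration, stance classification, rumor detection.
losses = [0.9, 0.6, 1.2]      # hypothetical per-task loss values
log_vars = [0.0, 0.0, 0.0]    # s_i = 0 -> equal weights, zero penalty
print(uncertainty_weighted_loss(losses, log_vars))  # -> 2.7
```

During training the `s_i` terms are optimized jointly with the network weights: a noisy task drives its `s_i` up, down-weighting its loss, while the `+ s_i` penalty stops all weights from collapsing to zero.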

Key words: Uncertain Loss; Multi-task Learning; Rumor Detection; Attention Mechanism
Received: 06 December 2020      Published: 09 April 2021
CLC Number: TP393
Fund:Forward Development Strategy Research Fund Project(NW2020001);National Social Science Fund of China(20ZDA092);Open Fund Project of Nanjing University of Aeronautics and Astronautics Graduate Innovation Base (Laboratory)(kfjj20200901)
Corresponding Author: Ma Jing, ORCID: 0000-0001-8472-2518, E-mail: majing5525@126.com

Cite this article:

Yang Hanxun, Zhou Dequn, Ma Jing, Luo Yongcong. Detecting Rumors with Uncertain Loss and Task-level Attention Mechanism. Data Analysis and Knowledge Discovery, 2021, 5(7): 101-110.

URL:

https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2020.1216     OR     https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2021/V5/I7/101

Rumor Identification Process
Multitask Model
Branch-LSTM Model
Attention Mechanisms Based on Task Hierarchy
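The figure above only names the task-level attention mechanism, so here is a generic sketch of attention over task-specific representations: each task vector is scored against a query, the scores are normalized, and the weighted sum forms the shared representation. The dimensions, query vector, and pooling below are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def task_level_attention(task_reprs, query):
    """Score each task-specific representation against a query vector,
    normalize the scores into attention weights, and return the
    attention-pooled shared vector."""
    scores = task_reprs @ query          # (n_tasks,) relevance scores
    weights = softmax(scores)            # attention distribution over tasks
    pooled = weights @ task_reprs        # weighted sum of task vectors
    return weights, pooled

# Hypothetical 4-dimensional representations for the three tasks.
reprs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
query = np.array([2.0, 0.0, 0.0, 0.0])   # most similar to the first task
weights, pooled = task_level_attention(reprs, query)
```

The attention weights make the contribution of each task to the shared layer explicit, which is what lets the model dispense with a fixed main/auxiliary split.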
| Event | Source tweets | Replies | Suspected rumors | Non-suspected rumors | Rumors | Non-rumors | Unverified |
|---|---|---|---|---|---|---|---|
| Charlie Hebdo | 2,079 | 38,268 | 458 | 1,621 | 193 | 116 | 149 |
| Sydney Siege | 1,221 | 23,996 | 522 | 699 | 382 | 86 | 54 |
| Ferguson | 1,143 | 24,175 | 284 | 859 | 10 | 8 | 266 |
| Ottawa Shooting | 890 | 12,284 | 470 | 420 | 329 | 72 | 69 |
| Germanwings-crash | 469 | 4,489 | 238 | 231 | 94 | 111 | 33 |
| Putin Missing | 238 | 835 | 126 | 112 | 0 | 9 | 117 |
| Prince Toronto | 233 | 902 | 229 | 4 | 0 | 222 | 7 |
| Gurlitt | 138 | 179 | 61 | 77 | 59 | 0 | 2 |
| Ebola-Essien | 14 | 226 | 14 | 0 | 0 | 14 | 0 |
| Total | 6,425 | 105,354 | 2,402 | 4,023 | 1,067 | 638 | 697 |
PHEME Dataset
A Tree Structure with Three Branches of Rumor Text and Comments
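The tree-to-branch decomposition in the figure is the input unit used by Branch-LSTM [24]: each root-to-leaf path through the reply tree becomes one training sequence. A minimal sketch of that enumeration, with a hypothetical adjacency mapping standing in for a real conversation thread:

```python
def branches(node, children):
    """Enumerate every root-to-leaf path in a reply tree given as an
    adjacency mapping {tweet_id: [reply_ids, ...]}."""
    kids = children.get(node, [])
    if not kids:
        return [[node]]
    return [[node] + path for k in kids for path in branches(k, children)]

# Source tweet "s" with three branches, mirroring the figure's structure.
tree = {"s": ["a", "b", "d"], "a": ["c"]}
print(branches("s", tree))  # -> [['s', 'a', 'c'], ['s', 'b'], ['s', 'd']]
```

Each resulting branch is then encoded token-by-token (or tweet-by-tweet) with an LSTM, so shared prefixes like the source tweet appear in several sequences.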
| Configuration | Parameter / version |
|---|---|
| CPU | Xeon(R) Gold 5218 |
| GPU | NVIDIA T4 (16 GB) |
| Python | 3.6 |
| TensorFlow | 1.13.1 |
| Keras | 2.3.1 |
| Memory | 1 TB |
Experimental Environment
| Experiment | Algorithm | Acc | Macro-F |
|---|---|---|---|
| Experiment 1 | Majority (True) | 0.591 | 0.247 |
| | NileTMRG* | 0.444 | 0.205 |
| | Branch-LSTM | 0.466 | 0.362 |
| | MTL3 | 0.462 | 0.322 |
| | ES-ATT-MTL3 | 0.395 | 0.263 |
| | Task-ATT-MTL3 | 0.494 | 0.333 |
| | Un-Task-ATT-MTL3 | 0.425 | 0.364 |
| Experiment 2 | Majority (True) | 0.511 | 0.226 |
| | NileTMRG* | 0.438 | 0.339 |
| | Branch-LSTM | 0.454 | 0.336 |
| | MTL3 | 0.492 | 0.396 |
| | ES-ATT-MTL3 | 0.459 | 0.280 |
| | Task-ATT-MTL3 | 0.505 | 0.372 |
| | Un-Task-ATT-MTL3 | 0.467 | 0.472 |
| Experiment 3 | Majority (True) | 0.444 | 0.205 |
| | NileTMRG* | 0.360 | 0.297 |
| | Branch-LSTM | 0.314 | 0.259 |
| | MTL3 | 0.405 | 0.405 |
| | ES-ATT-MTL3 | 0.356 | 0.240 |
| | Task-ATT-MTL3 | 0.418 | 0.347 |
| | Un-Task-ATT-MTL3 | 0.385 | 0.393 |
Results of Rumor Detection Task
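Macro-F in the tables above is the unweighted mean of the per-class F1 scores over the three labels (rumor, non-rumor, unverified), so small classes count as much as large ones. A small self-contained sketch with made-up labels and predictions:

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores (the Macro-F metric)."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical predictions over the three veracity classes.
y_true = ["rumor", "non-rumor", "unverified", "rumor"]
y_pred = ["rumor", "rumor", "unverified", "non-rumor"]
print(macro_f1(y_true, y_pred, ["rumor", "non-rumor", "unverified"]))  # -> 0.5
```

This explains why accuracy and Macro-F can diverge in the tables: a model that over-predicts the majority class can score high accuracy while its Macro-F stays low.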
| Event | Acc | Macro-F | Rumor avg F1 | Non-rumor avg F1 | Unverified avg F1 |
|---|---|---|---|---|---|
| Charlie Hebdo | 0.292 | 0.213 | 0.147 | 0.131 | 0.362 |
| Sydney Siege | 0.339 | 0.204 | 0.400 | 0.162 | 0.501 |
| Ferguson | 0.697 | 0.268 | 0 | 0.004 | 0.801 |
| Ottawa Shooting | 0.675 | 0.306 | 0.806 | 0.107 | 0.010 |
| Germanwings-crash | 0.356 | 0.245 | 0.310 | 0.355 | 0.091 |
Results of Single Event
[1] Zubiaga A, Aker A, Bontcheva K, et al. Detection and Resolution of Rumours in Social Media: A Survey[J]. ACM Computing Surveys (CSUR), 2018, 51(2):1-36.
[2] 陈燕方, 李志宇, 梁循, 等. 在线社会网络谣言检测综述[J]. 计算机学报, 2018, 41(7):1648-1676.
[2] (Chen Yanfang, Li Zhiyu, Liang Xun, et al. Review on Rumor Detection of Online Social Networks[J]. Chinese Journal of Computers, 2018, 41(7):1648-1676.)
[3] Qazvinian V, Rosengren E, Radev D, et al. Rumor Has It: Identifying Misinformation in Microblogs[C]// Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. 2011: 1589-1599.
[4] Liang G, He W B, Xu C, et al. Rumor Identification in Microblogging Systems Based on Users' Behavior[J]. IEEE Transactions on Computational Social Systems, 2015, 2(3): 99-108. DOI: 10.1109/TCSS.2016.2517458.
[5] Kwon S, Cha M, Jung K, et al. Prominent Features of Rumor Propagation in Online Social Media[C]// Proceedings of the 13th International Conference on Data Mining. IEEE, 2013: 1103-1108.
[6] Kochkina E, Liakata M, Zubiaga A. All-in-One: Multi-task Learning for Rumour Verification[OL]. arXiv Preprint, arXiv: 1806.03713.
[7] Sejeong K, Meeyoung C, Kyomin J, et al. Rumor Detection over Varying Time Windows[J]. PLoS One, 2017, 12(1): e0168344. DOI: 10.1371/journal.pone.0168344.
[8] Yang F, Liu Y, Yu X H, et al. Automatic Detection of Rumor on Sina Weibo[C]// Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. 2012: 13.
[9] Chang C, Zhang Y H, Szabo C, et al. Extreme User and Political Rumor Detection on Twitter[C]// Proceedings of International Conference on Advanced Data Mining and Applications. 2016: 751-763.
[10] Ma J, Gao W, Mitra P, et al. Detecting Rumors from Microblogs with Recurrent Neural Networks[C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence. 2016: 3818-3824.
[11] Chen T, Li X, Yin H Z, et al. Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks for Early Rumor Detection[C]// Proceedings of Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, Cham, 2018: 40-52.
[12] Collobert R, Weston J. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning[C]// Proceedings of the 25th International Conference on Machine Learning. 2008: 160-167.
[13] Liu P F, Qiu X P, Huang X J. Recurrent Neural Network for Text Classification with Multi-task Learning[OL]. arXiv Preprint, arXiv: 1605.05101.
[14] Dong D X, Wu H, He W, et al. Multi-task Learning for Multiple Language Translation[C]// Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2015: 1723-1732.
[15] Firat O, Cho K, Bengio Y. Multi-way, Multilingual Neural Machine Translation with a Shared Attention Mechanism[OL]. arXiv Preprint, arXiv: 1601.01073.
[16] Ma J, Gao W, Wong K F. Detect Rumor and Stance Jointly by Neural Multi-task Learning[C]// Companion Proceedings of the Web Conference 2018. 2018:585-593.
[17] Li Q Z, Zhang Q, Si L. Rumor Detection by Exploiting User Credibility Information, Attention and Multi-task Learning[C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 1173-1179.
[18] Sener O, Koltun V. Multi-task Learning as Multi-objective Optimization[C]// Proceedings of the 32nd Conference on Neural Information Processing Systems. 2018: 527-538.
[19] Kendall A, Gal Y, Cipolla R. Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018: 7482-7491.
[20] Kendall A, Gal Y. What Uncertainties do We Need in Bayesian Deep Learning for Computer Vision?[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017: 5580-5590.
[21] Gal Y. Uncertainty in Deep Learning[D]. University of Cambridge, 2016.
[22] Staudemeyer R C, Morris E R. Understanding LSTM - a Tutorial into Long Short-Term Memory Recurrent Neural Networks[OL]. arXiv Preprint, arXiv: 1909.09586.
[23] Fabbri M, Moro G. Dow Jones Trading with Deep Learning: The Unreasonable Effectiveness of Recurrent Neural Networks[C]// Proceedings of the 7th International Conference on Data Science, Technology and Applications. 2018: 142-153.
[24] Kochkina E, Liakata M, Augenstein I. Turing at Semeval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM[OL]. arXiv Preprint, arXiv: 1704.07221.
[25] Mikolov T, Chen K, Corrado G, et al. Efficient Estimation of Word Representations in Vector Space[OL]. arXiv Preprint, arXiv: 1301.3781.
[26] Enayet O, El-Beltagy S R. NileTMRG at SemEval-2017 Task 8: Determining Rumour and Veracity Support for Rumours on Twitter[C]// Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 2017: 470-474.