Data Analysis and Knowledge Discovery, 2022, Vol. 6, Issue 10: 68-78. DOI: 10.11925/infotech.2096-3467.2022.0009
Quantifying Logical Relations of Financial Risks with BERT and Mutual Information
Jia Minghua1,2, Wang Xiuli1,3
1School of Information, Central University of Finance and Economics, Beijing 102206, China
2Peking University Library, Beijing 100871, China
3Engineering Research Center of State Financial Security, Ministry of Education, Beijing 102206, China
Abstract  

[Objective] This paper aims to help prevent and control financial risks by quantifying their logical relations, which also improves the reliability of word-frequency-based processing of financial events. [Methods] We proposed a quantitative analysis method for the logical relations of financial risks based on BERT and mutual information, combined with domain knowledge. We then quantified the relations on the COPA and financial data sets. [Results] The proposed model effectively addressed the unreliability of word-frequency-based quantization. Its accuracy reached 80.1%, which was 3.1% to 37.4% higher than the benchmark models. [Limitations] More research is needed to examine the new model on non-financial and other corpora. [Conclusions] Our method can reveal the evolutionary path of financial risk events and improve the quantitative presentation of their logical relations.
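At the core of the method is (pointwise) mutual information between event descriptions: how much more often two events co-occur than independence would predict. The following is a minimal sketch of a generic PMI estimator, not the authors' exact formulation, and the counts in the usage example are made-up for illustration:

```python
import math

def pmi(count_xy: int, count_x: int, count_y: int, n: int) -> float:
    """Pointwise mutual information PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ),
    estimated from raw counts over n observations. A positive score means
    x and y co-occur more often than chance, hinting at a logical link."""
    return math.log((count_xy / n) / ((count_x / n) * (count_y / n)))

# Hypothetical counts for two financial events in a news corpus
# (illustrative numbers only, e.g. x = "RMB depreciation", y = "capital outflow"):
print(pmi(count_xy=120, count_x=400, count_y=900, n=100_000))  # ~= 3.5
```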

Key words: Financial Risk; Relationship Quantization; Domain Knowledge; BERT; Mutual Information
Received: 05 January 2022      Published: 16 November 2022
CLC Number: TP391
Corresponding Author: Jia Minghua, ORCID: 0000-0003-0859-7502, E-mail: 2020212349@email.cufe.edu.cn

Cite this article:

Jia Minghua, Wang Xiuli. Quantifying Logical Relations of Financial Risks with BERT and Mutual Information. Data Analysis and Knowledge Discovery, 2022, 6(10): 68-78.

URL:

https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2022.0009     OR     https://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2022/V6/I10/68

Model | Core Idea | Variants
BERT [17] | Built on the Transformer encoder (the full Transformer comprises an encoder and a decoder; BERT uses only the encoder stack) | BERT-base-uncased; BERT-large-uncased
XLNet [18] | Upgrades the Transformer structure to Transformer-XL | XLNet-base-cased; XLNet-large-cased
RoBERTa [19] | Retains the BERT architecture and optimizes the masked language model [20] | RoBERTa-base; RoBERTa-large
ERNIE [21-22] | Optimizes the masked language model and next sentence prediction | ERNIE (Baidu); ERNIE (Tsinghua)
ALBERT [23] | Optimizes next sentence prediction | ALBERT-base; ALBERT-large
Comparison of Common BERT Models
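For readers who want to reproduce the comparison, the variants in the table correspond to public checkpoints on the Hugging Face Hub. The checkpoint IDs below are our mapping, not names given in the paper (for instance, the Hub hosts ALBERT as v1/v2 releases):

```python
from transformers import AutoModel, AutoTokenizer

# Hub checkpoint IDs that plausibly match the table's variants (our assumption).
CHECKPOINTS = [
    "bert-base-uncased", "bert-large-uncased",
    "xlnet-base-cased", "xlnet-large-cased",
    "roberta-base", "roberta-large",
    "albert-base-v2", "albert-large-v2",
]

for name in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    # Report the parameter count, comparable to the parameter table below.
    print(name, sum(p.numel() for p in model.parameters()))
```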
[Figure] Quantization Model of Event Relationship Based on BERT and Mutual Information
[Figure] BERT Model
[Figure] Embedding Representation of BERT
[Figure] Transformer Encoding Unit
Type | Text
Premise | The man broke his toe. What was the CAUSE of this?
Alternative 1 | He got a hole in his sock.
Alternative 2 | He dropped a hammer on his foot.
Premise | I tipped the bottle. What happened as a RESULT?
Alternative 1 | The liquid in the bottle froze.
Alternative 2 | The liquid in the bottle poured out.
Premise | I knocked on my neighbor's door. What happened as a RESULT?
Alternative 1 | My neighbor invited me in.
Alternative 2 | My neighbor left his house.
Data Example of COPA
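A COPA item is resolved by picking the alternative with the stronger association to the premise. One way to operationalize a "BERT + PMI" score on such an item is sketched below, under our own assumptions: sentences are scored with pseudo-log-likelihood (masking each token in turn), and the PMI-style interaction term log p(premise, alt) - log p(premise) - log p(alt) measures how much the premise raises the alternative's likelihood. The paper's exact scoring function may differ.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(text: str) -> float:
    """Mask each token in turn and sum the log-probability BERT assigns
    to the original token (pseudo-log-likelihood scoring; an assumption
    here, not necessarily the authors' scoring function)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    score = 0.0
    for i in range(1, ids.size(0) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        score += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return score

premise = "I tipped the bottle."
alternatives = ["The liquid in the bottle froze.",
                "The liquid in the bottle poured out."]
# PMI-style interaction term, approximated with pseudo-log-likelihoods.
scores = [pseudo_log_likelihood(premise + " " + alt)
          - pseudo_log_likelihood(premise) - pseudo_log_likelihood(alt)
          for alt in alternatives]
print(alternatives[scores.index(max(scores))])     # expect: "...poured out."
```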
ID | Abstract Topic Event | No. of Generalized Events | No. of Effect Events | No. of Cause Events
E1 | Excessive money issuance | 3 | 10 | 10
E2 | Sharp stock market decline | 3 | 10 | 10
E3 | Federal Reserve interest rate hike | 3 | 10 | 10
E4 | RMB appreciation | 3 | 10 | 10
E5 | RMB depreciation | 3 | 10 | 10
E6 | China-US trade friction | 3 | 10 | 3
E7 | Brexit | 3 | 10 | 1
E8 | Stock market rise | 3 | 10 | 10
Abstract Topic Events
ID | One Cause, Multiple Effects (Method A) | One Cause, Multiple Effects (Method B) | Tracing Cause from Effect (Method A) | Tracing Cause from Effect (Method B)
E1 | 0.73 | 1.00 | 0.59 | 1.00
E2 | 0.73 | 1.00 | 0.75 | 1.00
E3 | 0.95 | 1.00 | 0.92 | 1.00
E4 | 0.79 | 1.00 | 0.85 | 1.00
E5 | 0.88 | 1.00 | 0.79 | 1.00
E6 | 0.86 | 1.00 | 0.37 | 1.00
E7 | 0.59 | 1.00 | 0.04 | 1.00
E8 | 0.58 | 1.00 | 0.69 | 1.00
Comparison Results of Relational Quantization Values
[Figure] The Distribution of Relational Quantization Values for Reasoning from Cause to Effect
[Figure] The Distribution of Relational Quantization Values for Finding the Cause by the Effect
Model | Parameters | Layers | Hidden Size | Batch Size | Vocabulary Size
BERT-base-uncased | 108M | 12 | 768 | 16 | 30,522
BERT-large-uncased | 334M | 24 | 1,024 | 4 | 30,522
RoBERTa-base | 123M | 12 | 768 | 16 | 30,522
RoBERTa-large | 355M | 24 | 1,024 | 4 | 50,265
ALBERT-base | 12M | 12 | 768 | 32 | 30,000
ALBERT-large | 18M | 24 | 1,024 | 12 | 30,522
Parameter Settings of the BERT Models
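Note that the per-model batch sizes in the table shrink as parameter count grows, consistent with fitting larger checkpoints into fixed GPU memory. Transcribed as a lookup for reuse (checkpoint IDs are our Hub mapping, as before):

```python
# Batch sizes transcribed from the table above; larger checkpoints get
# smaller batches, presumably to fit GPU memory. Hub IDs are our assumption.
BATCH_SIZE = {
    "bert-base-uncased": 16,
    "bert-large-uncased": 4,
    "roberta-base": 16,
    "roberta-large": 4,
    "albert-base-v2": 32,
    "albert-large-v2": 12,
}
```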
Method | Test Set | Dev Set | Dev + Test
Covariance* | 50.2% | 49.0% | 49.6%
Co-occurrence frequency | 50.0% | 51.8% | 50.9%
Mutual information | 57.8% | 58.8% | 58.3%
BERT-base-uncased + PMI | 58.2% | 62.0% | 60.1%
BERT-large-uncased + PMI | 71.6% | 68.6% | 70.1%
RoBERTa-base + PMI | 71.4% | 76.8% | 74.1%
RoBERTa-large + PMI | 68.8% | 70.6% | 69.7%
ALBERT-base + PMI | 57.6% | 58.4% | 58.0%
ALBERT-large + PMI | 78.8% | 81.4% | 80.1%
Accuracy Results of Relational Quantization Reasoning Tasks on COPA
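Assuming COPA's standard 500-item dev and 500-item test splits (our assumption; the split sizes are not stated on this page), the Dev + Test column is the unweighted mean of the two splits, which a quick check of the best row confirms:

```python
# ALBERT-large + PMI row: equal-weight mean of the two splits.
test, dev = 78.8, 81.4
print(round((test + dev) / 2, 1))   # -> 80.1, matching the Dev + Test column
```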
[1] Singhal A. Introducing the Knowledge Graph: Things, Not Strings[EB/OL]. (2012-05-16). [2020-03-01]. https://www.blog.google/products/search/introducing-knowledge-graph-things-not/.
[2] Liu Zongtian, Huang Meili, Zhou Wen, et al. Research on Event-oriented Ontology Model[J]. Computer Science, 2009, 36(11): 189-192. (in Chinese)
[3] Lee S. Simulation Modeling with Event Graphs[J]. Communications of the ACM, 1983, 26(11): 957-963. DOI: 10.1145/182.358460.
[4] Buss A H. Modeling with Event Graphs[C]// Proceedings of the 28th Conference on Winter Simulation. 1996: 153-160.
[5] Yang C C, Shi X D. Discovering Event Evolution Graphs from Newswires[C]// Proceedings of the 15th International Conference on World Wide Web. 2006: 945-946.
[6] Yang C C, Shi X D, Wei C P. Discovering Event Evolution Graphs from News Corpora[J]. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 2009, 39(4): 850-863. DOI: 10.1109/TSMCA.2009.2015885.
[7] Li Z Y, Zhao S D, Ding X, et al. EEG: Knowledge Base for Event Evolutionary Principles and Patterns[C]// Proceedings of the 6th National Conference on Social Media Processing. 2017: 40-52.
[8] Li Z Y, Ding X, Liu T. Constructing Narrative Event Evolutionary Graph for Script Event Prediction[C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence. 2018: 4201-4207.
[9] Ding X, Li Z Y, Liu T, et al. ELG: An Event Logic Graph[OL]. arXiv Preprint, arXiv: 1907.08015.
[10] Hu Yang, Yan Hongfei, Chen Chong. Joint Entity and Relation Extraction for Constructing Financial Knowledge Graph[J]. Journal of Chongqing University of Technology (Natural Science), 2020, 34(5): 139-149. (in Chinese)
[11] Li Jianglong, Lyu Xueqiang, Zhou Jianshe, et al. Event Sentence Extraction in Financial Field[J]. Application Research of Computers, 2017, 34(10): 2915-2918. (in Chinese)
[12] Quinlan J R. C4.5: Programs for Machine Learning[M]. San Mateo, CA: Morgan Kaufmann, 1993.
[13] Cheng Xingguo, Xiao Nanfeng. Parallel Implementation for Co-occurrence Statistics with MapReduce Model[J]. Journal of Chongqing University of Technology (Natural Science), 2013, 27(11): 53-57. (in Chinese)
[14] Zhong Maosheng, Liu Hui, Liu Lei. Method of Semantic Relevance Relation Measurement Between Words[J]. Journal of Chinese Information Processing, 2009, 23(2): 115-122. (in Chinese)
[15] Huang Jin, Ruan Tong, Jiang Ruiquan. Sentiment Analysis in Financial Domain Based on SVM with Dependency Syntax[J]. Computer Engineering and Applications, 2015, 51(23): 230-235. (in Chinese)
[16] Zhang Hongkuan, Song Hui, Wang Shuyi, et al. A BERT-Based End-to-End Model for Chinese Document-level Event Extraction[C]// Proceedings of the 19th Chinese National Conference on Computational Linguistics. 2020: 390-401. (in Chinese)
[17] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2019: 4171-4186.
[18] Yang Z L, Dai Z H, Yang Y M, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019: 5753-5763.
[19] Liu Y H, Ott M, Goyal N, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach[OL]. arXiv Preprint, arXiv: 1907.11692.
[20] Liu Huan, Zhang Zhixiong, Wang Yufei. A Review on Main Optimization Methods of BERT[J]. Data Analysis and Knowledge Discovery, 2021, 5(1): 3-15. (in Chinese)
[21] Sun Y, Wang S H, Li Y K, et al. ERNIE: Enhanced Representation Through Knowledge Integration[OL]. arXiv Preprint, arXiv: 1904.09223.
[22] Zhang Z Y, Han X, Liu Z Y, et al. ERNIE: Enhanced Language Representation with Informative Entities[C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 1441-1451.
[23] Lan Z Z, Chen M D, Goodman S, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations[C]// Proceedings of the 8th International Conference on Learning Representations. 2020: 1-17.
[24] Do Q X, Chan Y S, Roth D. Minimally Supervised Event Causality Identification[C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing. 2011: 294-303.
[25] Hashimoto C, Torisawa K, Kloetzer J, et al. Generating Event Causality Hypotheses through Semantic Relations[C]// Proceedings of the 29th AAAI Conference on Artificial Intelligence. 2015: 2396-2403.
[26] Luo Z Y, Sha Y C, Zhu K Q, et al. Commonsense Causal Reasoning Between Short Texts[C]// Proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning. 2016: 421-430.
[27] Staliūnaitė I, Gorinski P J, Iacobacci I. Improving Commonsense Causal Reasoning by Adversarial Training and Data Augmentation[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. 2021: 13834-13842.
[28] Sharma A, Kiciman E. Causal Inference and Counterfactual Reasoning[C]// Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. 2020: 369-370.
[29] Han M Y, Wang Y L. Doing Good or Doing Right? Exploring the Weakness of Commonsense Causal Reasoning Models[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 2021: 151-157.
[30] Zhang Dongdong, Peng Dunlu. ENT-BERT: Entity Relation Classification Model Combining BERT and Entity Information[J]. Journal of Chinese Computer Systems, 2020, 41(12): 2557-2562. (in Chinese)
[31] Bai C Y, Pan L M, Luo S L, et al. Joint Extraction of Entities and Relations by a Novel End-to-End Model with a Double-Pointer Module[J]. Neurocomputing, 2020, 377: 325-333. DOI: 10.1016/j.neucom.2019.09.097.