Table of Contents

25 November 2024, Volume 8 Issue 11
    

  • Li Jiawei, Zhang Shunxiang, Li Shuyu, Duan Wenjie, Wang Yuqing, Deng Jinke
    Data Analysis and Knowledge Discovery. 2024, 8(11): 1-10. https://doi.org/10.11925/infotech.2096-3467.2023.1005

    [Objective] This paper proposes a Chinese implicit sentiment analysis model based on text graph representation. It fully utilizes external knowledge and context to enhance implicit sentiment text and achieve word-level semantic interaction. [Methods] First, we modeled the target sentence and context as a text graph with words as nodes. Then, we obtained the semantic expansion of the word nodes in the graph through external knowledge linking. Finally, we used the Graph Attention Network to transfer semantic information between the nodes of this text graph. We also obtained the text graph representation through the Readout function. [Results] We evaluated the model on the publicly available implicit sentiment analysis dataset SMP2019-ECISA. Its F1 score reached 78.8%, at least 1.2% higher than the existing model. [Limitations] The size of the generated text graph is related to the length of the text, leading to significant memory and computational overhead for processing long text. [Conclusions] The proposed model uses graph structure to model the relationship between external knowledge, context, and the target sentence at the word level. It effectively represents text semantics and enhances the accuracy of implicit sentiment analysis.
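
    As a rough illustration of the word-level graph encoding and readout steps described above, the sketch below implements a minimal single-head graph attention layer and a mean readout in PyTorch; the node features, adjacency matrix, and dimensions are hypothetical placeholders, not the paper's actual configuration.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleGATLayer(nn.Module):
        """Minimal single-head graph attention layer (sketch, not the paper's exact model)."""
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)
            self.a = nn.Linear(2 * out_dim, 1, bias=False)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x: (N, in_dim) word-node features; adj: (N, N) 0/1 adjacency of the text graph
            h = self.W(x)                                        # (N, out_dim)
            n = h.size(0)
            pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                               h.unsqueeze(0).expand(n, n, -1)], dim=-1)
            e = F.leaky_relu(self.a(pairs).squeeze(-1))          # (N, N) attention logits
            e = e.masked_fill(adj == 0, float("-inf"))           # only attend along graph edges
            alpha = torch.softmax(e, dim=-1)
            return F.elu(alpha @ h)                              # aggregated node representations

    def readout(node_states: torch.Tensor) -> torch.Tensor:
        """Mean-pool node states into a single text-graph representation."""
        return node_states.mean(dim=0)

    # Toy usage: 5 word nodes with 16-dim features in a small chain-shaped text graph.
    x = torch.randn(5, 16)
    adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
    graph_vec = readout(SimpleGATLayer(16, 32)(x, adj))
    ```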

  • Li Hui, Pang Jingwei
    Data Analysis and Knowledge Discovery. 2024, 8(11): 11-21. https://doi.org/10.11925/infotech.2096-3467.2023.0744

    [Objective] To effectively utilize audio- and video-bearing information and fully capture the multi-modal interaction among text, image, and audio, this study proposes a multi-modal sentiment analysis model for online users (TIsA) incorporating text, image, and STFT-CNN audio feature extraction. [Methods] First, we separated the video data into audio and image data. Then, we used BERT and BiLSTM to obtain text feature representations and applied STFT to convert audio time-domain signals to the frequency domain. We also utilized CNN to extract audio and image features. Finally, we fused the features from the three modalities. [Results] We conducted empirical research using the “9.5 Luding Earthquake” public sentiment data from Sina Weibo. The proposed TIsA model achieved an accuracy, macro-averaged recall, and macro-averaged F1 score of 96.10%, 96.20%, and 96.10%, respectively, outperforming related baseline models. [Limitations] This study did not explore in depth the effects of different fusion strategies on sentiment recognition results. [Conclusions] The proposed TIsA model demonstrates high accuracy in processing audio-containing videos, effectively supporting online public opinion analysis.
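
    A minimal sketch of the STFT-based audio feature extraction step, assuming a mono waveform tensor; the FFT size, hop length, and CNN layout are illustrative choices, not the TIsA model's actual settings.

    ```python
    import torch
    import torch.nn as nn

    def stft_spectrogram(waveform: torch.Tensor, n_fft: int = 512, hop: int = 128) -> torch.Tensor:
        """Convert a mono time-domain signal (1D tensor) into a magnitude spectrogram."""
        spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                          window=torch.hann_window(n_fft), return_complex=True)
        return spec.abs().unsqueeze(0)          # (1, freq_bins, frames) for a 2D CNN

    class AudioCNN(nn.Module):
        """Small CNN over the spectrogram producing a fixed-size audio feature vector."""
        def __init__(self, out_dim: int = 128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)),
            )
            self.fc = nn.Linear(16 * 8 * 8, out_dim)

        def forward(self, spec: torch.Tensor) -> torch.Tensor:
            return self.fc(self.conv(spec.unsqueeze(0)).flatten(1)).squeeze(0)

    # Toy usage: one second of fake 16 kHz audio.
    audio_vec = AudioCNN()(stft_spectrogram(torch.randn(16000)))
    ```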

  • Yu Bengong, Xing Yu, Zhang Shuwen
    Data Analysis and Knowledge Discovery. 2024, 8(11): 22-32. https://doi.org/10.11925/infotech.2096-3467.2023.0746

    [Objective] To fully extract features from multiple modalities, align and integrate multimodal features, and design downstream tasks, we propose an aspect-based sentiment analysis model of multimodal collaborative contrastive learning (MCCL-ABSA). [Methods] Firstly, on the text side, we utilized the similarity between aspect words and their encoding within sentences. On the image side, the model used the similarity of images encoded in different sequences after random cropping to construct positive and negative samples required for contrastive learning. Secondly, we designed the loss function for contrastive learning tasks to learn more distinguishable feature representation. Finally, we fully integrated text and image features for multimodal aspect-based sentiment analysis while dynamically fine-tuning the encoder by combining contrastive learning tasks. [Results] On the TWITTER-2015 dataset, our model’s accuracy and F1 scores improved by 0.82% and 2.56%, respectively, compared to the baseline model. On the TWITTER-2017 dataset, the highest accuracy and F1 scores were 0.82% and 0.25% higher than the baseline model. [Limitations] We need to examine the model’s generalization on other datasets. [Conclusions] The MCCL-ABSA model effectively improves feature extraction quality, achieves feature integration with a simple and efficient downstream structure, and enhances the efficacy of multimodal sentiment classification.
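
    To make the contrastive objective concrete, here is a minimal NT-Xent-style loss for paired views (e.g., two random crops of the same image) in PyTorch; the temperature and batch layout are illustrative assumptions rather than the MCCL-ABSA settings.

    ```python
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """Contrastive loss for paired views: z1[i] and z2[i] are positives,
        every other sample in the concatenated batch acts as a negative."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2B, D)
        sim = z @ z.t() / temperature                         # cosine similarities
        sim.fill_diagonal_(float("-inf"))                     # never contrast a view with itself
        batch = z1.size(0)
        targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(0, batch)])
        return F.cross_entropy(sim, targets)

    # Toy usage: 8 embeddings and their augmented counterparts.
    loss = nt_xent_loss(torch.randn(8, 64), torch.randn(8, 64))
    ```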

  • Teng Fei, Zhang Qi, Qu Jiansheng, Li Haiying, Liu Jiangfeng, Liu Boyu
    Data Analysis and Knowledge Discovery. 2024, 8(11): 33-46. https://doi.org/10.11925/infotech.2096-3467.2023.0767

    [Objective] This study utilizes big data analytics to identify key and core technologies, improving the accuracy of identification results and providing robust data support for future technological innovation and large-scale applications. [Methods] We proposed a key and core technology identification method using the patent competitiveness index and the Doc-LDA topic model, based on the definitions of key and core technology concepts. The method distinguished topics by evaluating topic strength, topic co-occurrence strength, and the effective cohesion constraint coefficient. [Results] Taking new energy vehicles (NEVs) as an empirical example, we identified 10 key and core technologies: fuel cells, solid-state power batteries, high-efficiency high-density motor drive systems, lightweight plastic and composite materials, cellular communication, electro-mechatronics integration, multi-gear transmission, vehicle operations, intelligent control, and autonomous driving. We also conducted further trend analysis. [Limitations] Due to the limited granularity of topic refinement, some potential micro-mechanisms have not been fully revealed. [Conclusions] Using the patent competitiveness index and the Doc-LDA topic model provides a comprehensive assessment of the market value and competitive advantage of technologies. The proposed method also enhances the accuracy of technology development trend predictions.
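
    As a simplified illustration of the topic-modeling side of the pipeline (the Doc-LDA variant and the patent competitiveness index themselves are not reproduced here), this sketch fits a standard LDA model over toy patent abstracts with scikit-learn and scores each topic by its average document proportion, a common proxy for topic strength.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical mini-corpus of patent abstracts.
    docs = [
        "fuel cell stack membrane hydrogen storage",
        "solid state battery electrolyte energy density",
        "motor drive system high efficiency power electronics",
        "autonomous driving perception planning control",
    ]

    vec = CountVectorizer()
    X = vec.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topic = lda.fit_transform(X)              # (n_docs, n_topics) topic proportions

    # Topic strength: average share of each topic across the corpus.
    topic_strength = doc_topic.mean(axis=0)

    # Top words per topic, for labeling the identified technology themes.
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"topic {k}: strength={topic_strength[k]:.2f}, top words={top}")
    ```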

  • Xie Jun, Gao Jing, Xu Xinying, Hao Shufeng, Liu Yuxin
    Data Analysis and Knowledge Discovery. 2024, 8(11): 47-58. https://doi.org/10.11925/infotech.2096-3467.2023.0793

    [Objective] Most GCN-based models for aspect-based sentiment analysis (ABSA) ignore affective knowledge when constructing the syntactic dependency graph, introduce noise through excessive dependencies, and lose performance when modeling long-distance or incoherent words. To address these shortcomings, this paper proposes a knowledge-enhanced dual-transformer network for aspect-based sentiment analysis (DTNKE). [Methods] The sentiment scores in SenticNet7 are used to improve the syntactic dependency graph, and noise reduction is applied across the various syntactic dependency types. A dual-transformer network then improves the handling of long-distance words, while the improved syntactic dependency graph enhances the representation learning of semantic features. [Results] Experiments conducted on five public datasets showed that the DTNKE model achieves F1 scores of 74.97%, 76.13%, 74.83%, 68.01%, and 74.54%, respectively. Compared to the average F1 scores of various baseline models, the improvements are 3.85%, 5.22%, 3.48%, 6.80%, and 7.49%. [Limitations] Because the datasets contain a certain proportion of implicit sentiment sentences, the proposed model cannot learn accurate implicit sentiment features, which limits the analysis results. [Conclusions] The proposed model combines affective commonsense knowledge with denoised syntactic relations to reconstruct the dual-transformer network, improving the effectiveness of ABSA.
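
    A small sketch of the knowledge-enhancement idea: edge weights of a syntactic dependency graph are adjusted using word-level sentiment scores. The dependency edges and the sentiment lexicon below are hypothetical stand-ins for a parser's output and SenticNet7.

    ```python
    import numpy as np

    # Hypothetical token list, dependency edges (head -> dependent), and sentiment lexicon.
    tokens = ["the", "battery", "life", "is", "surprisingly", "poor"]
    dep_edges = [(1, 0), (2, 1), (3, 2), (3, 5), (5, 4)]
    sentiment = {"surprisingly": 0.31, "poor": -0.79}          # stand-in for SenticNet7 scores

    n = len(tokens)
    adj = np.eye(n)                                             # self-loops
    for head, dep in dep_edges:
        # Base dependency edge, strengthened when either endpoint carries affective knowledge.
        w = 1.0 + abs(sentiment.get(tokens[head], 0.0)) + abs(sentiment.get(tokens[dep], 0.0))
        adj[head, dep] = adj[dep, head] = w

    # Simple noise reduction: prune edges whose weight stays at the affect-free baseline.
    adj_denoised = np.where(adj > 1.0, adj, np.eye(n))
    ```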

  • Du Jialin, Wang Xizi, Hu Guangwei
    Data Analysis and Knowledge Discovery. 2024, 8(11): 59-71. https://doi.org/10.11925/infotech.2096-3467.2023.0778

    [Objective] This study investigates the factors influencing public satisfaction with government-citizen interaction platforms. We constructed an analysis model for factors affecting public satisfaction. [Methods] We extracted micro-level variables from the leadership mailbox corpus and combined them with macroeconomic variables to establish a public satisfaction analysis model using the Gradient Boosting Decision Tree (GBDT) method. We also eliminated less influential variables with SHAP analysis to optimize the model. [Results] The proposed model outperformed comparison models across accuracy, recall, precision, and F1-score. Key features affecting public satisfaction with the leadership mailbox include GDP growth rate, PCDI growth rate, CPI growth rate, message topic, message type, and response mode. [Limitations] The study did not explore a broader range of influencing factors or more extensive government-citizen interaction scenarios. [Conclusions] The new model optimizes the variable selection process and visualizes the level, direction, and manner in which each feature influences public satisfaction with government responses. The model is a data-driven tool for administrative decision-making.
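
    A condensed sketch of the GBDT-plus-SHAP workflow described above, using scikit-learn and the shap package on synthetic data; the feature names and the pruning threshold are illustrative assumptions, not the study's actual variables.

    ```python
    import numpy as np
    import pandas as pd
    import shap                                  # assumes the shap package is installed
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-ins for macro indicators and message-level variables.
    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "gdp_growth": rng.normal(size=500),
        "cpi_growth": rng.normal(size=500),
        "message_topic": rng.integers(0, 5, size=500),
        "response_mode": rng.integers(0, 3, size=500),
    })
    y = (X["gdp_growth"] + 0.5 * X["response_mode"] + rng.normal(size=500) > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)

    # SHAP values quantify each feature's contribution; drop weak features and refit.
    shap_values = shap.TreeExplainer(model).shap_values(X)
    importance = np.abs(shap_values).mean(axis=0)
    keep = X.columns[importance > importance.mean() * 0.25]     # illustrative threshold
    model_pruned = GradientBoostingClassifier().fit(X[keep], y)
    ```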

  • Yang Ning, Huang Feihu, Zhao Shuang, Li Shan, Hu Wei
    Data Analysis and Knowledge Discovery. 2024, 8(11): 72-82. https://doi.org/10.11925/infotech.2096-3467.2024.0750

    [Objective] The existing Sleeping Beauty literature recognition methods rely on long-term citation curves. We explore new methods for predicting Sleeping Beauty coefficients from early-stage citation curves. [Methods] This paper proposed a prediction method based on the ts2net model. Firstly, we transformed the citation curve of each paper into three types of complex networks: NVG, HVG, and QG. Secondly, we extracted five features from each network: average degree, average path length, clustering coefficient, number of communities, and modularity. Finally, we used machine learning models to construct the prediction method. [Results] We examined the new method with 89,681 computer science papers retrieved from the Web of Science. We found that the B and Bcp coefficients correlated with the complex network features. Among the prediction methods built using machine learning models, MLP and GBRT performed the best. MLP achieved the optimal accuracy in predicting the Bcp coefficient with an error rate of 5.90%, while GBRT predicted the B coefficient with an error rate of 31.18%. [Limitations] The prediction accuracy of the new method decreased for literature with high fluctuations in citation frequency or long dormant periods. Additionally, the predicted Sleeping Beauty coefficient serves only as an indicator of potential Sleeping Beauty literature, which needs further validation through downstream Sleeping Beauty literature recognition models or tasks. [Conclusions] This study demonstrates the feasibility of converting citation curves into complex networks and constructing Sleeping Beauty coefficient predictions using network features.
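
    To illustrate the curve-to-network step, the sketch below builds a natural visibility graph (NVG) from a toy early-stage citation curve with networkx and extracts the five features named above; it is a simplified reading of the method, not the authors' exact ts2net pipeline.

    ```python
    import networkx as nx

    def natural_visibility_graph(series):
        """Connect years i and j when the line between them clears every intermediate point."""
        g = nx.Graph()
        g.add_nodes_from(range(len(series)))
        for i in range(len(series)):
            for j in range(i + 1, len(series)):
                visible = all(
                    series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                    for k in range(i + 1, j)
                )
                if visible:
                    g.add_edge(i, j)
        return g

    # Toy early-stage citation counts for the first 8 years after publication.
    citations = [0, 1, 0, 2, 3, 1, 5, 4]
    g = natural_visibility_graph(citations)

    communities = nx.algorithms.community.greedy_modularity_communities(g)
    features = {
        "avg_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
        "avg_path_length": nx.average_shortest_path_length(g),
        "clustering": nx.average_clustering(g),
        "n_communities": len(communities),
        "modularity": nx.algorithms.community.modularity(g, communities),
    }
    ```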

  • Hu Zhongyi, Qin Wei, Wu Jiang
    Data Analysis and Knowledge Discovery. 2024, 8(11): 83-90. https://doi.org/10.11925/infotech.2096-3467.2023.0838

    [Objective] This paper aims to expand the application of diffusion models in text generation and to address the problem of monotonous and redundant information generated by existing models. [Methods] The TextRank algorithm is used to extract keyword information from the original text, and the keywords are then integrated into the sequence diffusion model DiffuSeq to construct a keyword-integrated sequence diffusion model (K-DiffuSeq). [Results] Compared to the benchmark models, the K-DiffuSeq model shows improvements of at least 4.140% in PPL, 32.692% in ROUGE, and 1.566% in the diversity measure. [Limitations] Only text corpora related to the product were considered, while richer multimodal product information such as images and videos was ignored. [Conclusions] Integrating keywords can effectively improve the performance of marketing text generation models, and this study confirms the application potential of diffusion models in text generation.
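
    A minimal TextRank-style keyword extraction sketch (the diffusion model itself is out of scope here): words are ranked by PageRank over a sliding-window co-occurrence graph using networkx. The tokenization and window size are simplifying assumptions.

    ```python
    import networkx as nx

    def textrank_keywords(text: str, window: int = 2, top_k: int = 5):
        """Rank words by PageRank over a co-occurrence graph built with a sliding window."""
        words = text.lower().split()
        g = nx.Graph()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + window + 1, len(words))):
                if w != words[j]:
                    g.add_edge(w, words[j])
        scores = nx.pagerank(g)
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Toy usage on a short marketing blurb; the extracted keywords would then
    # condition the sequence diffusion model's generation.
    print(textrank_keywords("lightweight wireless earbuds with long battery life and fast charging"))
    ```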

  • Hu Maodi, Yu Qianqian, Qian Li, Chang Zhijun, Zhang Zhixiong
    Data Analysis and Knowledge Discovery. 2024, 8(11): 91-101. https://doi.org/10.11925/infotech.2096-3467.2023.0828

    [Objective] To fully explore the semantic information content of review papers, this study proposes a system of relevant information elements and a formal definition of their extraction tasks. We constructed a corresponding framework to explore the semantic information of review papers. [Methods] To address the issues of high specialization, sparse term distribution, and difficulty in annotation in review papers, we applied multi-task learning to achieve information complementarity across tasks. We also introduced self-supervised learning to discover latent information from unlabeled data. [Results] The proposed multi-task learning framework significantly enhanced the performance of various tasks, especially improving the accuracy of element relationship recognition tasks by 8.32%. Furthermore, the overall F1 score increased by about 2% through self-supervised learning. [Limitations] The information extraction process does not consider non-textual data such as images and tables. [Conclusions] The proposed method and process incorporate multi-task and self-supervised learning to improve the mining effect of labeled data and unlabeled data.
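
    A compact sketch of hard-parameter-sharing multi-task learning as described above: one shared encoder feeds separate heads for element extraction and element-relation recognition, and their losses are summed. The dimensions, tag counts, and equal loss weighting are hypothetical.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskExtractor(nn.Module):
        """Shared encoder with one head per task (hard parameter sharing)."""
        def __init__(self, in_dim=768, hidden=256, n_element_tags=9, n_relations=5):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.element_head = nn.Linear(hidden, n_element_tags)    # element extraction
            self.relation_head = nn.Linear(hidden, n_relations)      # element-relation recognition

        def forward(self, x):
            h = self.shared(x)
            return self.element_head(h), self.relation_head(h)

    model = MultiTaskExtractor()
    criterion = nn.CrossEntropyLoss()

    # Toy batch: 4 representations with labels for both tasks; losses are summed.
    x = torch.randn(4, 768)
    element_logits, relation_logits = model(x)
    loss = criterion(element_logits, torch.randint(0, 9, (4,))) \
         + criterion(relation_logits, torch.randint(0, 5, (4,)))
    loss.backward()
    ```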

  • Chang Bolin, Yuan Yiguo, Li Bin, Xu Zhixing, Feng Minxuan, Wang Dongbo
    Data Analysis and Knowledge Discovery. 2024, 8(11): 102-113. https://doi.org/10.11925/infotech.2096-3467.2023.0834

    [Objective] This paper proposes an integrated model incorporating radical information to improve the low accuracy and efficiency of existing automatic word segmentation and part-of-speech tagging for Classical Chinese. [Methods] Based on over 70,000 Chinese characters and their radicals, we constructed a radical vector representation model, Radical2Vector. We combined this model with SikuRoBERTa for representing Classical Chinese texts, forming an integrated BiLSTM-CRF model as the main experimental framework. Additionally, we designed a dual-layer scheme for word segmentation and part-of-speech tagging. Finally, we conducted experiments on the Zuo Zhuan dataset. [Results] The model achieved an F1 score of 95.75% for the word segmentation task and 91.65% for the part-of-speech tagging task. These scores represent improvements of 8.71% and 13.88% over the baseline model. [Limitations] The approach only incorporates a single radical for each character and does not utilize other components of the characters. [Conclusions] The proposed model successfully integrates radical information, effectively enhancing the performance of textual representation for Classical Chinese. This model demonstrates exceptional performance in word segmentation and part-of-speech tagging tasks.
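
    A toy sketch of the radical-enhanced representation: a radical embedding is concatenated with a character embedding before a BiLSTM tagging layer. The vocabulary sizes, radical lookup, and tag set are placeholders, and the CRF decoding layer and SikuRoBERTa encoder are omitted.

    ```python
    import torch
    import torch.nn as nn

    class RadicalBiLSTMTagger(nn.Module):
        """Concatenate character and radical embeddings, then tag with a BiLSTM."""
        def __init__(self, n_chars=8000, n_radicals=300, emb=64, hidden=128, n_tags=30):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, emb)
            self.radical_emb = nn.Embedding(n_radicals, emb)       # stand-in for Radical2Vector
            self.bilstm = nn.LSTM(2 * emb, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_tags)               # joint segmentation + POS tag space

        def forward(self, char_ids, radical_ids):
            x = torch.cat([self.char_emb(char_ids), self.radical_emb(radical_ids)], dim=-1)
            h, _ = self.bilstm(x)
            return self.out(h)                                      # (batch, seq_len, n_tags)

    # Toy usage: a batch of 2 sequences of 10 characters with their radical ids.
    logits = RadicalBiLSTMTagger()(torch.randint(0, 8000, (2, 10)), torch.randint(0, 300, (2, 10)))
    ```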

  • Ye Naifu, Yuan Deyu, Zhang Zhi, Hou Xiaolong
    Data Analysis and Knowledge Discovery. 2024, 8(11): 114-125. https://doi.org/10.11925/infotech.2096-3467.2023.0841

    [Objective] This paper constructs a dual-channel text relation extraction model based on cross-attention to address the partial text feature coverage of existing models. The new model aims to enhance the comprehensiveness and accuracy of text relation extraction, achieving high-performance relation extraction on domain-specific datasets. [Methods] We proposed the Dual Channel Cross Attention Model (DCCAM) for text relation extraction, designing a dual-channel structure that integrates a sequence channel and a graph channel. Then, we constructed a cross-attention mechanism combining self-attention and gated attention to fuse text features and mine latent associative information. Finally, we conducted experiments on public datasets and two constructed policing datasets. [Results] Experimental results on the NYT and WebNLG public datasets showed that the DCCAM model’s F1 values improved by 3% and 4% compared to the baseline model. Additionally, ablation experiments proved the effectiveness of each module in enhancing text extraction capability. Experimental results on the telecom fraud dataset and the aiding-cybercrime dataset in the police domain showed that the DCCAM model improves text relation extraction effectiveness in the police domain, with F1 values improving by 8.8% and 11.8% compared with the baseline model. [Limitations] We did not use large language models to explore text relation extraction techniques. [Conclusions] The DCCAM model significantly improves text relation extraction, demonstrates its effectiveness and practicality in the policing domain, and can provide text association analysis and guidance for police work.
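
    A rough sketch of fusing a sequence-channel representation with a graph-channel representation through cross-attention followed by a gate, using PyTorch's built-in multi-head attention; the dimensions and gating form are illustrative, not the DCCAM specification.

    ```python
    import torch
    import torch.nn as nn

    class CrossAttentionFusion(nn.Module):
        """Sequence features attend over graph features; a learned gate blends the two channels."""
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

        def forward(self, seq_feats, graph_feats):
            # seq_feats: (B, Ls, D) from the sequence channel; graph_feats: (B, Lg, D) from the graph channel
            attended, _ = self.cross_attn(query=seq_feats, key=graph_feats, value=graph_feats)
            g = self.gate(torch.cat([seq_feats, attended], dim=-1))
            return g * seq_feats + (1 - g) * attended              # gated fusion of the two channels

    # Toy usage: batch of 2, sequence length 12, 8 graph nodes, hidden size 256.
    fused = CrossAttentionFusion()(torch.randn(2, 12, 256), torch.randn(2, 8, 256))
    ```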

  • Zhu Xiping, Xiao Lijuan, Gao Ang, Guo Lu, Yang Huan
    Data Analysis and Knowledge Discovery. 2024, 8(11): 126-135. https://doi.org/10.11925/infotech.2096-3467.2023.0765

    [Objective] To achieve semantic association mining of carbon-neutral data and improve the overall accuracy of triplet extraction, this paper proposes an HmBER model for joint entity-relation extraction based on MacBERT. [Methods] In the HmBER model, we enhanced the joint extraction of carbon-neutral entity relationships through similarity measurement, auxiliary training with entity boundaries, and the introduction of entity category features into relation extraction. [Results] Compared with the Multi-head, CasRel, SpERT, and STER models, the F1 score of the HmBER model on the carbon-neutral dataset achieved average improvements of 2.39% and 13.84%, respectively. [Limitations] The method requires inferring sentence meaning to derive joint entity-relation extraction results, and deeper latent semantic mining was not performed. [Conclusions] The HmBER model effectively addresses data annotation omissions and entity boundary errors, providing a highly accurate approach for joint entity-relation extraction.
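
    A brief sketch of one ingredient named above, injecting entity category features into relation classification: type embeddings for the two entities are concatenated with their span representations before the relation classifier. The type inventory and dimensions are hypothetical.

    ```python
    import torch
    import torch.nn as nn

    class TypedRelationClassifier(nn.Module):
        """Classify the relation between two entity spans, conditioned on their entity categories."""
        def __init__(self, span_dim=256, n_entity_types=6, type_dim=32, n_relations=10):
            super().__init__()
            self.type_emb = nn.Embedding(n_entity_types, type_dim)
            self.classifier = nn.Linear(2 * span_dim + 2 * type_dim, n_relations)

        def forward(self, head_span, tail_span, head_type, tail_type):
            feats = torch.cat([head_span, tail_span,
                               self.type_emb(head_type), self.type_emb(tail_type)], dim=-1)
            return self.classifier(feats)

    # Toy usage: a batch of 4 entity pairs with category ids.
    logits = TypedRelationClassifier()(torch.randn(4, 256), torch.randn(4, 256),
                                       torch.randint(0, 6, (4,)), torch.randint(0, 6, (4,)))
    ```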

  • Wang Yudong, Bai Yu, Ye Na, Chen Jianjun
    Data Analysis and Knowledge Discovery. 2024, 8(11): 136-145. https://doi.org/10.11925/infotech.2096-3467.2023.0968

    [Objective] This study addresses the issue of topic drift during hyponym expansion in interactive retrieval scenarios. [Methods] We used a graph attention network to encode the nodes of the relationship graph between concept chains and texts. We modeled the concept chains through word interaction processes and obtained the relationship graph from character co-occurrence relations. By introducing the attention mechanism, our method overcomes the loss of query-scenario information in traditional text encoding. [Results] The proposed method improved the F1 score by 2.0% compared to the best baseline method, PRGC. [Limitations] The proposed method was designed for interactive scenarios and depends on the quality of the interactive data. [Conclusions] The proposed model effectively integrates the concept chains’ structural and semantic features into text features. It also calculates attention between concept chains and candidate texts, reducing the loss of scenario topic information during encoding and mitigating the topic drift problem.
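
    A small sketch of the graph-construction step described above: a relationship graph between a concept chain and candidate texts is built from character co-occurrence, ready to be encoded by a graph attention network (the GAT itself is omitted). The character-level tokenization and node scheme are simplifying assumptions.

    ```python
    import networkx as nx

    def cooccurrence_graph(concept_chain, texts):
        """Link concept-chain nodes to text nodes that share characters, weighted by overlap."""
        g = nx.Graph()
        for concept in concept_chain:
            g.add_node(concept, kind="concept")
        for idx, text in enumerate(texts):
            node = f"text_{idx}"
            g.add_node(node, kind="text")
            for concept in concept_chain:
                shared = set(concept) & set(text)      # character co-occurrence
                if shared:
                    g.add_edge(concept, node, weight=len(shared))
        return g

    # Toy usage: a hypernym-to-hyponym concept chain and two candidate texts.
    g = cooccurrence_graph(["动物", "哺乳动物"], ["哺乳动物包括鲸和蝙蝠", "植物通过光合作用生长"])
    print(g.edges(data=True))
    ```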