Data Analysis and Knowledge Discovery  2020, Vol. 4 Issue (6): 43-50    DOI: 10.11925/infotech.2096-3467.2019.1320
Generating Sentences of Contrast Relationship
Jiao Qihang, Le Xiaoqiu
National Science Library, Chinese Academy of Sciences, Beijing 100190, China
Department of Library, Information and Archives Management, School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190, China
Abstract  

[Objective] This paper tries to generate contrastive sentences from two related paragraphs, aiming to lay the groundwork for a model that creates contrastive paragraphs. [Methods] We generated contrastive sentences automatically from contrastive text sequences, using a deep learning model based on Seq2seq that incorporates contrast features alongside character vectors to represent texts. Both the Encoder and the Decoder of our model used a BiLSTM structure with an attention mechanism. [Results] We examined the proposed model on manually annotated novelty search lists and scientific papers, adopting BLEU as the evaluation metric. The final score was 12.1, which was 6.5 points higher than that of the benchmark BiLSTM + Attention model. [Limitations] Due to the cost of manual labeling, the dataset in our experiments was small. [Conclusions] The proposed model could serve as the basis for a new model for generating contrastive paragraphs.
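The representation step described above — concatenating contrast features with character vectors, then letting the decoder attend over BiLSTM encoder states — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the dimensions, the random stand-ins for BiLSTM states, and the toy contrast-feature flags are all hypothetical.

```python
import numpy as np

np.random.seed(0)

# Hypothetical dimensions (not the paper's actual values)
seq_len, char_dim, contrast_dim = 5, 8, 2

# Character vectors for an input sequence of 5 characters
char_vecs = np.random.randn(seq_len, char_dim)

# Contrast features per character, e.g. flags marking a contrast cue word;
# here a toy example where position 2 lies inside a cue
contrast_feats = np.zeros((seq_len, contrast_dim))
contrast_feats[2, 0] = 1.0

# Input representation: character vector concatenated with contrast features
inputs = np.concatenate([char_vecs, contrast_feats], axis=1)  # shape (5, 10)

# Stand-ins for BiLSTM encoder outputs and one decoder hidden state
hidden = 6
encoder_states = np.random.randn(seq_len, hidden)
decoder_state = np.random.randn(hidden)

# Dot-product attention: score each encoder state against the decoder state,
# normalize with softmax, and form the context vector fed to the decoder
scores = encoder_states @ decoder_state
weights = np.exp(scores - scores.max())
weights /= weights.sum()            # attention weights sum to 1
context = weights @ encoder_states  # shape (6,)

print(inputs.shape, round(float(weights.sum()), 6), context.shape)
```

In the actual model the encoder states come from a BiLSTM over `inputs` and the decoder state is updated at each generation step; the sketch only shows how the concatenated representation and the attention-weighted context are formed.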

Key words: Contrast Relationship; Text Generation; Text Representation; Deep Learning
Received: 10 December 2019      Published: 07 July 2020
ZTFLH:  TP391  
Corresponding Authors: Le Xiaoqiu     E-mail: lexq@mail.las.ac.cn

Cite this article:

Jiao Qihang,Le Xiaoqiu. Generating Sentences of Contrast Relationship. Data Analysis and Knowledge Discovery, 2020, 4(6): 43-50.

URL:

http://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/10.11925/infotech.2096-3467.2019.1320     OR     http://manu44.magtech.com.cn/Jwk_infotech_wk3/EN/Y2020/V4/I6/43

Examples of Contrast Relationship Text

From a scientific paper: Research on identifying discourse-level coordinate relationships between different paragraphs is still scarce. In news recommendation, Zhao et al. adopted a sequence-labeling approach that considers the position of sentences within news texts to identify sentences that hold a coordinate relationship but are not similar; however, the identified sentence groups were distributed across two papers, and no research has yet been found on identifying coordinate relationships between sentence groups within a single article.

From a novelty search report: Judging from the retrieved literature, transport ventilators have already been reported in China. Chang Jiuli reported a comprehensive emergency treatment cart for neonates, involving a ventilator and an incubator powered by storage batteries, which differs slightly from the project under review, where a vehicle-mounted power supply with inverter matching and response is used; the specific power-inversion technique was not mentioned. Nantong First People's Hospital reported …
Figure: Generation Model Framework
Model Parameters
  Batch size: 16
  Character vector dimension: 64
  Learning rate: 10^-3
  Hidden layer units: 1,024
  Input text truncation length: 600
  Output text truncation length: 200
Environment Configuration
  GPU: Tesla P100
  Operating system: Ubuntu 18.04
  RAM: 12 GB
  GPU memory: 16 GB
  Python version: 3.6.9
  TensorFlow version: 1.15.0
Model Experiment Results (BLEU)
  LSTM: 2.6
  BiLSTM: 2.9
  BiLSTM + Attention: 5.6
  Proposed method (BiLSTM + Attention + contrast features): 12.1
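BLEU, the metric used in the table above, combines clipped n-gram precision with a brevity penalty. A minimal single-reference implementation, for illustration only (the paper's evaluation script is not shown, and the smoothing constant here is an assumption), looks like:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        # tiny floor avoids log(0) when no n-gram matches (simple smoothing)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(log_precisions) / max_n)

cand = "the model generates a contrast sentence".split()
ref = "the model generates one contrast sentence".split()
print(round(bleu(cand, ref), 4))
```

An identical candidate and reference score 1.0 on this scale; published BLEU scores, such as the 12.1 above, are conventionally reported multiplied by 100.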
Sentence Generation Example of Contrast Relationship in Search List

Input (novelty search text + related text):
  Development of dedicated transport equipment for neonatal ventilators: power matching and response between the neonatal incubator and the ventilator are achieved through power-inversion technology, applied to a neonatal respiratory transport system.
  +
  A comprehensive emergency treatment cart for neonates. This utility model relates to a medical device, in particular a neonatal emergency treatment cart comprising a cart body, characterized in that the body includes four support rods, a support platform, and a bottom tray; the platform holds a monitor, a neonatal ventilator, a neonatal operating table, a neonatal incubator, and an infusion stand; the bottom tray has drawers and a storage cabinet and carries a storage battery and a waste bin.

Baseline (BiLSTM + Attention) output (largely disfluent in the original Chinese): roughly, "The above studied the care-treat equip technique for ventilator ventilator, not involving the electric-change match and of neonatal transport thing, the technique of response."

Proposed method output (minor disfluencies in the original Chinese): roughly, "The above literature report-reported a neonatal special emergency compre-comprehensive treatment cart, involving the combined use of a ventilator and incubator; the ventilator, research storage-battery powered, did not men-mention the power-inversion technique."

Human-written text: The above research reported a comprehensive emergency treatment cart designed for neonates, involving the application of a ventilator and an incubator, both powered by storage batteries; this differs slightly from the project under review, which uses a vehicle-mounted power supply with inverter matching and response, and does not mention the specific power-inversion technique.
[1] 万小军, 冯岩松, 孙薇薇. 文本自动生成研究进展与趋势[R]. 北京:北京大学, 2016: 1-2.
[1] ( Wan Xiaojun, Feng Yansong, Sun Weiwei. Research Progress and Trend of Automatic Text Generation[R]. Beijing: Peking University, 2016: 1-2.)
[2] Mihalcea R, Tarau P. TextRank: Bringing Order into Text [C]//Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. 2004: 404-411.
[3] 林汝昌, 李曼珏. 语义的对比关系和对立关系[J]. 外语教学与研究, 1987(2):15-21.
[3] ( Lin Ruchang, Li Manjue. On Semantic Opposites and Contrasts[J]. Foreign Language Teaching and Research, 1987(2):15-21.)
[4] 车竞. 现代汉语比较句论略[J]. 湖北师范学院学报:哲学社会科学版, 2005,25(3):60-63.
[4] ( Che Jing. A Brief Analysis of Comparative Sentences in Modern Chinese[J]. Journal of Hubei Normal University:Philosophy and Social Sciences, 2005,25(3):60-63.)
[5] 魏阳阳. 现代汉语三种平比句型的语义认知机制研究[J]. 理论月刊, 2017(12):75-80.
[5] ( Wei Yangyang. A Study on the Semantic Cognitive Mechanism of Three Parable Sentence Patterns in Modern Chinese[J]. Theory Monthly, 2017(12):75-80.)
[6] Jindal N, Liu B. Identifying Comparative Sentences in Text Documents [C]//Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2006: 244-251.
[7] 黄小江, 万小军, 杨建武, 等. 汉语比较句识别研究[J]. 中文信息学报, 2008,22(5):30-38.
[7] ( Huang Xiaojiang, Wan Xiaojun, Yang Jianwu, et al. Learning to Identify Chinese Comparative Sentences[J]. Journal of Chinese Information Processing, 2008,22(5):30-38.)
[8] 白林楠, 胡韧奋, 刘智颖. 基于句法语义规则系统的比较句自动识别[J]. 北京大学学报(自然科学版), 2015,51(2):275-281.
[8] ( Bai Linnan, Hu Renfen, Liu Zhiying. Recognition of Comparative Sentences Based on Syntactic and Semantic Rules-System[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2015,51(2):275-281.)
[9] 吴晨, 韦向峰. 用户评价中比较句的识别和倾向性分析[J]. 计算机科学, 2016,43(S1):435-439.
[9] ( Wu Chen, Wei Xiangfeng. Opinion Analysis and Recognition of Comparative Sentences in User Views[J]. Computer Science, 2016,43(S1):435-439.)
[10] 朱茂然, 王奕磊, 高松, 等. 中文比较关系的识别: 基于注意力机制的深度学习模型[J]. 情报学报, 2019,38(6):612-621.
[10] ( Zhu Maoran, Wang Yilei, Gao Song, et al. A Deep-Learning Model Based on Attention Mechanism for Chinese Comparative Relation Detection[J]. Journal of the China Society for Scientific and Technical Information, 2019,38(6):612-621.)
[11] Baxendale P B. Machine-made Index for Technical Literature—An Experiment[J]. IBM Journal of Research and Development, 1958,2(4):354-361.
doi: 10.1147/rd.24.0354
[12] Edmundson H P. New Methods in Automatic Extracting[J]. Journal of the ACM, 1969,16(2):264-285.
doi: 10.1145/321510.321519
[13] Gkatzia D, Lemon O, Rieser V. Natural Language Generation Enhances Human Decision-making with Uncertain Information[OL]. arXiv Preprint, arXiv:1606.03254.
[14] Lopez A. Statistical Machine Translation[J]. ACM Computing Surveys, 2008,40(3). DOI: 10.1145/1380584.1380586.
[15] Sutskever I, Vinyals O, Le Q V. Sequence to Sequence Learning with Neural Networks[OL]. arXiv Preprint, arXiv:1409.3215.
[16] Shi T, Keneshloo Y, Ramakrishnan N, et al. Neural Abstractive Text Summarization with Sequence-to-Sequence Models: A Survey[OL]. arXiv Preprint, arXiv:1812.02303.
[17] Jain P, Agrawal P, Mishra A, et al. Story Generation from Sequence of Independent Short Descriptions[OL]. arXiv Preprint, arXiv:1707.05501.
[18] Liu T, Wang K, Sha L, et al. Table-to-Text Generation by Structure-aware Seq2Seq Learning [C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence. 2018.
[19] Deng Y, Kim Y, Chiu J, et al. Latent Alignment and Variational Attention [C]//Advances in Neural Information Processing Systems. 2018: 9712-9724.
[20] Li J, Monroe W, Shi T, et al. Adversarial Learning for Neural Dialogue Generation[OL]. arXiv Preprint, arXiv:1701.06547.
[21] Al-Rfou R, Perozzi B, Skiena S. Polyglot: Distributed Word Representations for Multilingual NLP[OL]. arXiv Preprint, arXiv:1307.1662.
[22] Papineni K, Roukos S, Ward T, et al. BLEU: A Method for Automatic Evaluation of Machine Translation [C]//Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2002: 311-318.
[1] Wang Xinyun,Wang Hao,Deng Sanhong,Zhang Baolong. Classification of Academic Papers for Periodical Selection[J]. Data Analysis and Knowledge Discovery, 2020, 4(7): 96-109.
[2] Wang Mo,Cui Yunpeng,Chen Li,Li Huan. A Deep Learning-based Method of Argumentative Zoning for Research Articles[J]. Data Analysis and Knowledge Discovery, 2020, 4(6): 60-68.
[3] Deng Siyi,Le Xiaoqiu. Coreference Resolution Based on Dynamic Semantic Attention[J]. Data Analysis and Knowledge Discovery, 2020, 4(5): 46-53.
[4] Yu Chuanming,Yuan Sai,Zhu Xingyu,Lin Hongjun,Zhang Puliang,An Lu. Research on Deep Learning Based Topic Representation of Hot Events[J]. Data Analysis and Knowledge Discovery, 2020, 4(4): 1-14.
[5] Su Chuandong,Huang Xiaoxi,Wang Rongbo,Chen Zhiqun,Mao Junyu,Zhu Jiaying,Pan Yuhao. Identifying Chinese / English Metaphors with Word Embedding and Recurrent Neural Network[J]. Data Analysis and Knowledge Discovery, 2020, 4(4): 91-99.
[6] Liu Tong,Ni Weijian,Sun Yujian,Zeng Qingtian. Predicting Remaining Business Time with Deep Transfer Learning[J]. Data Analysis and Knowledge Discovery, 2020, 4(2/3): 134-142.
[7] Chuanming Yu,Haonan Li,Manyi Wang,Tingting Huang,Lu An. Knowledge Representation Based on Deep Learning: Network Perspective[J]. Data Analysis and Knowledge Discovery, 2020, 4(1): 63-75.
[8] Mengji Zhang,Wanyu Du,Nan Zheng. Predicting Stock Trends Based on News Events[J]. Data Analysis and Knowledge Discovery, 2019, 3(5): 11-18.
[9] Jingjing Pei,Xiaoqiu Le. Identifying Coordinate Text Blocks in Discourses[J]. Data Analysis and Knowledge Discovery, 2019, 3(5): 51-56.
[10] Zhixiong Zhang,Huan Liu,Liangping Ding,Pengmin Wu,Gaihong Yu. Identifying Moves of Research Abstracts with Deep Learning Methods[J]. Data Analysis and Knowledge Discovery, 2019, 3(12): 1-9.
[11] Li Yu,Li Qian,Changlei Fu,Huaming Zhao. Extracting Fine-grained Knowledge Units from Texts with Deep Learning[J]. Data Analysis and Knowledge Discovery, 2019, 3(1): 38-45.
[12] Changlei Fu,Li Qian,Huaping Zhang,Huaming Zhao,Jing Xie. Mining Innovative Topics Based on Deep Learning[J]. Data Analysis and Knowledge Discovery, 2019, 3(1): 46-54.
[13] Bengong Yu,Peihang Zhang,Qingtang Xu. Selecting Products Based on F-BiGRU Sentiment Analysis[J]. Data Analysis and Knowledge Discovery, 2018, 2(9): 22-30.
[14] Wei Lu,Mengqi Luo,Heng Ding,Xin Li. Image Annotation Tags by Deep Learning and Real Users: A Comparative Study[J]. Data Analysis and Knowledge Discovery, 2018, 2(5): 1-10.
[15] Guoming Feng,Xiaodong Zhang,Suhui Liu. Classifying Chinese Texts with CapsNet[J]. Data Analysis and Knowledge Discovery, 2018, 2(12): 68-76.
  Copyright © 2016 Data Analysis and Knowledge Discovery   Tel/Fax:(010)82626611-6626,82624938   E-mail:jishu@mail.las.ac.cn