Data Analysis and Knowledge Discovery, 2023, Vol. 7, Issue 1: 113-127     https://doi.org/10.11925/infotech.2096-3467.2022.0402
Research Paper
A Modified Hybrid Method to Identify Cited Spans
Nie Weimin, Ou Shiyan
School of Information Management, Nanjing University, Nanjing 210023, China
Abstract

[Objective] Existing two-stage hybrid methods following the "unsupervised ranking + classification" paradigm suffer from unreliable unsupervised ranking and an unstable number of cited sentences returned by the classification stage, and they identify cited spans only at the granularity of a single sentence. This study improves the hybrid method on these points and extends identification to cited spans of different granularities. [Methods] We propose a modified hybrid method for cited-span identification. In the first stage, supervised ranking selects candidate cited sentences from all sentences of the cited paper; in the second stage, a regression model determines the final cited spans. In addition, an n-sent input scheme, which groups different numbers of adjacent sentences into one input, and within-group (intraclass) normalization are introduced to identify cited spans of different granularities. [Results] On the test sets of the CL-SciSumm 2019 and 2020 shared tasks, the modified hybrid method achieved a sentence-overlap F1 of 0.167; with 3-sent input, intraclass Z-score normalization raised the sentence-overlap F1 from 0.083 to 0.158. [Limitations] The method does not use positional features of the cited paper's sentences, and its application to downstream tasks remains to be explored. [Conclusions] The proposed modified hybrid method performs well when cited spans consist of either a single sentence or multiple adjacent sentences.

Keywords: Scientific Literature; Cited Spans; Supervised Ranking; Regression; Intraclass Normalization
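The first stage described above can be sketched in a few lines of Python. This is a minimal illustration only: a toy bag-of-words cosine similarity stands in for the SBERT bi-encoder used in the paper, and all function names are our own assumptions, not the authors' code.

```python
# Stage-1 (supervised ranking) sketch: score every sentence of the cited
# paper against the citing sentence (citance) and keep the top-N as
# candidate cited sentences. A bag-of-words cosine similarity replaces
# the SBERT embedding model here purely for illustration.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_candidates(citance, ref_sentences, top_n=2):
    """Return indices of the top_n reference sentences most similar to the citance."""
    q = Counter(citance.lower().split())
    scores = [cosine(q, Counter(s.lower().split())) for s in ref_sentences]
    return sorted(range(len(ref_sentences)),
                  key=lambda i: scores[i], reverse=True)[:top_n]
```

In the paper this scoring is learned (SBERT fine-tuned on labeled citance/cited-sentence pairs); only the rank-then-truncate control flow is shown here.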
Received: 2022-04-26      Published: 2023-02-16
CLC number: G353; TP391
Funding: Supported by a Key Project of the National Social Science Fund of China (Grant No. 17ATQ001).
Corresponding author: Ou Shiyan, ORCID: 0000-0001-8617-6987, E-mail: oushiyan@nju.edu.cn.
Cite this article:
Nie Weimin, Ou Shiyan. A Modified Hybrid Method to Identify Cited Spans. Data Analysis and Knowledge Discovery, 2023, 7(1): 113-127.
Article links:
https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/10.11925/infotech.2096-3467.2022.0402 or https://manu44.magtech.com.cn/Jwk_infotech_wk3/CN/Y2023/V7/I1/113
Fig.1  An example of a citation relationship
Fig.2  Workflow of cited-span identification
Fig.3  SBERT-based supervised ranking process
Fig.4  RBERT-based cited-span determination process
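The second stage (Fig.4) can be sketched as follows, assuming the RBERT regressor has already produced one relevance score per candidate sentence. Merging adjacent selected indices into (start, end) spans is one simple way to report contiguous multi-sentence cited spans; it is our illustrative reading, not the authors' implementation.

```python
# Stage-2 sketch: keep the top_m candidates by regression score and merge
# runs of adjacent sentence indices into inclusive (start, end) spans.
# `scores` maps a sentence index in the cited paper to a predicted score;
# the names and the top-m rule are illustrative assumptions.

def select_spans(scores, top_m=2):
    """Pick the top_m highest-scoring sentence indices and merge adjacent
    indices into (start, end) spans (inclusive)."""
    chosen = sorted(sorted(scores, key=scores.get, reverse=True)[:top_m])
    spans, start = [], None
    for idx in chosen:
        if start is None:
            start = prev = idx          # open the first span
        elif idx == prev + 1:
            prev = idx                  # extend the current run
        else:
            spans.append((start, prev)) # close the run, open a new one
            start = prev = idx
    if start is not None:
        spans.append((start, prev))
    return spans
```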
Composition of cited span       Count   Proportion
Single (non-adjacent) sentence  594     78.88%
Two adjacent sentences          123     16.34%
Three adjacent sentences        29      3.85%
Four adjacent sentences         5       0.66%
Five adjacent sentences         2       0.27%
Total                           753     100.00%
Table 1  Statistics on the composition of cited spans
Fig.5  The n-sent input scheme
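The n-sent input scheme of Fig.5 amounts to sliding a window of n adjacent sentences over the cited paper so that multi-sentence candidates can be scored as single inputs. A minimal sketch, with illustrative names:

```python
# n-sent input sketch: every run of n adjacent sentences of the cited
# paper is joined into one candidate input string, mirroring the paper's
# n-sent (n元句) scheme. Names are illustrative assumptions.

def n_sent_groups(sentences, n):
    """Return every run of n adjacent sentences, joined into one input string."""
    if n < 1 or n > len(sentences):
        return []
    return [" ".join(sentences[i:i + n])
            for i in range(len(sentences) - n + 1)]
```

With n = 1 this degenerates to the original single-sentence inputs, which is why the 1-sent row of Table 5 matches the base method.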
Pre-trained language model  Top-N  SO-P  SO-R  SO-F  ROUGE-P  ROUGE-R  ROUGE-F
all-MPNet-base-v2 Top1 0.048 0.180 0.075 0.242 0.069 0.101
Top2 0.046 0.215 0.075 0.244 0.086 0.116
Top3 0.046 0.130 0.068 0.248 0.081 0.113
multi-qa-MPNet-base-dot-v1 Top1 0.122 0.115 0.118 0.148 0.165 0.146
Top2 0.108 0.204 0.142 0.254 0.080 0.113
Top3 0.088 0.249 0.130 0.311 0.047 0.076
all-MiniLM-L6-v2 Top1 0.127 0.120 0.124 0.242 0.084 0.114
Top2 0.097 0.182 0.126 0.244 0.069 0.101
Top3 0.122 0.115 0.118 0.224 0.078 0.106
Table 2  Performance of SBERT with different pre-trained language models
Pre-trained language model  Top-N  SO-P  SO-R  SO-F  ROUGE-P  ROUGE-R  ROUGE-F
BERT-base-uncased Top1 0.117 0.221 0.153 0.298 0.067 0.104
Top2 0.122 0.231 0.160 0.307 0.072 0.111
Top3 0.104 0.296 0.154 0.386 0.040 0.069
SciBERT Top1 0.105 0.199 0.138 0.277 0.068 0.102
Top2 0.116 0.220 0.152 0.299 0.072 0.108
Top3 0.114 0.215 0.149 0.290 0.064 0.099
ALBERT-base-v2 Top1 0.051 0.095 0.066 0.156 0.034 0.052
Top2 0.083 0.157 0.109 0.220 0.049 0.075
Top3 0.073 0.138 0.096 0.187 0.053 0.077
RoBERTa-base Top1 0.025 0.047 0.033 0.096 0.014 0.023
Top2 0.111 0.209 0.144 0.289 0.064 0.099
Top3 0.096 0.093 0.125 0.255 0.052 0.082
Table 3  Performance of RBERT with different pre-trained language models
Fig.6  Performance of the modified hybrid method under different combinations of m and n
System  SO-P  SO-R  SO-F  ROUGE-P  ROUGE-R  ROUGE-F
PINGAN TECH 0.132 0.246 0.172 0.298 0.113 0.147
Our modified hybrid method 0.128 0.242 0.167 0.312 0.075 0.115
uniHD 0.116 0.260 0.161 0.317 0.085 0.113
Our RBERT model 0.122 0.231 0.160 0.307 0.072 0.111
Our SBERT model 0.108 0.204 0.142 0.254 0.080 0.113
CMU 0.087 0.246 0.128 0.307 0.049 0.075
NaCTeM-UoM / / 0.126 / / 0.075
NJU / / 0.124 / / 0.090
BUPT / / 0.106 / / 0.034
Table 4  Performance comparison of different cited-span identification systems
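The SO-P, SO-R, and SO-F columns above are, to our understanding, set-based precision, recall, and F1 over predicted versus gold sentence identifiers, as in the CL-SciSumm evaluation. A minimal sketch (names are ours):

```python
# Sentence-overlap metric sketch: precision, recall, and F1 computed over
# the sets of predicted and gold sentence identifiers for one citance.

def sentence_overlap(pred, gold):
    """Return (precision, recall, F1) over two sets of sentence ids."""
    pred, gold = set(pred), set(gold)
    hit = len(pred & gold)
    p = hit / len(pred) if pred else 0.0
    r = hit / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```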
n-sent  SO-P  SO-R  SO-F  ROUGE-P  ROUGE-R  ROUGE-F
1-sent 0.128 0.242 0.167 0.312 0.075 0.115
2-sent 0.064 0.206 0.098 0.295 0.038 0.061
3-sent 0.056 0.156 0.083 0.240 0.037 0.080
Table 5  Performance of the hybrid model with n-sent input
Normalization method  SO-P  SO-R  SO-F  ROUGE-P  ROUGE-R  ROUGE-F
Mean normalization 0.104 0.218 0.141 0.293 0.071 0.108
Min-max normalization 0.061 0.185 0.092 0.279 0.041 0.065
Z-score normalization 0.119 0.233 0.158 0.311 0.074 0.113
Table 6  Performance of the modified hybrid method with different normalization methods
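The normalization methods compared in Table 6 operate within each group of candidate scores (intraclass, i.e. per citance group) rather than over the whole corpus. A sketch of the Z-score and min-max variants, with illustrative names:

```python
# Within-group (intraclass) normalization sketch: one group's scores are
# normalized against that group's own statistics, so that scores from
# different citance groups become comparable. Names are illustrative.
import statistics

def zscore_group(scores):
    """Z-score normalize one group's scores against its own mean and std."""
    mu = statistics.fmean(scores)
    sigma = statistics.pstdev(scores)
    if sigma == 0:
        return [0.0 for _ in scores]  # constant group: no spread to normalize
    return [(s - mu) / sigma for s in scores]

def minmax_group(scores):
    """Min-max normalize one group's scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```

Table 6 indicates that the Z-score variant performed best among the three, consistent with its use in the reported 3-sent result.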