Review on Normalization Methods of Citation Counts
Chen Shiji¹, Shi Liwen¹, Li Dongmei², Zuo Wenge¹
1. China Agricultural University Library, Beijing 100193, China;
2. Development Planning Department, China Agricultural University, Beijing 100193, China
Abstract: The citation count is the most representative indicator in citation analysis and is also widely used as an impact indicator in research performance evaluation. Because citation counts are sensitive to field, document type, and publication year, they must be normalized before cross-field comparison. This paper reviews various methods for normalizing citation counts and discusses some problems with citation indicators in practice.
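As a concrete illustration of the field normalization the review surveys, the sketch below (with assumed toy data, not drawn from the paper) computes a mean normalized citation score in the style of the "new crown indicator" discussed in the literature: each paper's citation count is divided by the average citation count of its reference set (papers of the same field, publication year, and document type), and the per-paper ratios are then averaged.

```python
from collections import defaultdict

def mncs(papers):
    """Mean normalized citation score for a set of papers.

    papers: list of dicts with keys 'citations', 'field', 'year', 'doctype'.
    Here the reference sets are built from the input set itself; in practice
    they would come from the full database of publications.
    """
    # Group citation counts by (field, year, doctype) to form reference sets.
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"], p["doctype"])].append(p["citations"])
    # Expected (mean) citation rate of each reference set.
    expected = {key: sum(cites) / len(cites) for key, cites in groups.items()}
    # Average the per-paper ratios of observed to expected citations.
    ratios = [p["citations"] / expected[(p["field"], p["year"], p["doctype"])]
              for p in papers]
    return sum(ratios) / len(ratios)

# Toy example: two mathematics papers and one biology paper from the same year.
papers = [
    {"citations": 10, "field": "math", "year": 2010, "doctype": "article"},
    {"citations": 2,  "field": "math", "year": 2010, "doctype": "article"},
    {"citations": 30, "field": "bio",  "year": 2010, "doctype": "article"},
]
print(mncs(papers))
```

Because the reference sets here are computed from the evaluated set itself, the ratios within each group average to 1 by construction; a real evaluation compares a unit's papers against database-wide reference values, where the score can deviate from 1.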
Chen Shiji, Shi Liwen, Li Dongmei, Zuo Wenge. Review on Normalization Methods of Citation Counts. New Technology of Library and Information Service, 2012, 28(4): 54-60.