
Do Article Influence Scores Overestimate the Citation Impact of Social Science Journals in Subfields That Are Related to Higher-Impact Natural Science Disciplines?

William H. Walters

This document is the accepted version of an article in the Journal of Informetrics, vol. 8, no. 2 (April 2014), pp. 421–430. It is also available from the publisher’s web site at http://www.sciencedirect.com/science/article/pii/S1751157714000273



Do Article Influence scores overestimate the citation impact of social science journals in subfields that are related to higher-impact natural science disciplines?

William H. Walters

Bowman Library, Menlo College, 1000 El Camino Real, Atherton, CA 94027, USA

Tel.: +1 650 543 3827

E-mail address: [email protected]

Unlike Impact Factors (IF), Article Influence (AI) scores assign greater weight to citations that appear in highly cited journals. The natural sciences tend to have higher citation rates than the social sciences. We might therefore expect that relative to IF, AI overestimates the citation impact of social science journals in subfields that are related to (and presumably cited in) higher-impact natural science disciplines. This study evaluates that assertion through a set of simple and multiple regressions covering seven social science disciplines: anthropology, communication, economics, education, library and information science, psychology, and sociology. Contrary to expectations, AI underestimates 5IF (five-year Impact Factor) for journals in science-related subfields such as scientific communication, science education, scientometrics, biopsychology, and medical sociology. Journals in these subfields have low AI scores relative to their 5IF values. Moreover, the effect of science-related status is considerable—typically 0.60 5IF units or 0.50 SD. This effect is independent of the more general finding that AI scores underestimate 5IF for higher-impact journals. It is also independent of the very modest curvilinearity in the relationship between AI and 5IF.

Keywords: bias, Eigenfactor, interdisciplinary, Journal Citation Reports, multidisciplinary, Web of Science


1. Introduction

From 1964 to 2004, Science Citation Index (SCI) and Social Sciences Citation Index (SSCI) were the only sources of reliable, large-scale citation data (Garfield 2007). The Impact Factor (IF), based on data from SCI and SSCI, was recognized by both scholars and practitioners as a standard indicator of citation impact. In recent years, however, a number of alternative indicators have been introduced. These include the Article Influence (AI) score, which is calculated from SCI and SSCI data (Bergstrom 2007), and the Source Normalized Impact per Paper (SNIP) indicator, which draws on data from Elsevier’s Scopus database (Moed 2010).

Aside from their dates of introduction, there are three major differences between the Impact Factor and the Article Influence score (Bergstrom, West, & Wiseman 2008; Franceschet 2010b; West, Bergstrom, & Bergstrom 2010b). First, IF data are available only to institutions that subscribe to Thomson Reuters’ Journal Citation Reports (JCR). In contrast, AI scores are freely available online at http://www.eigenfactor.org/.

A second difference lies in the weighting of citations. Impact Factors give equal weight to every citation; a citation in PNAS contributes no more to the IF than a citation in a regional specialty journal. In contrast, AI scores give greater weight to citations that appear in highly cited journals. “The [AI] ranking system accounts for difference in prestige among citing journals, such that citations from Nature or Cell are valued highly relative to citations from third-tier journals with narrower readership” (West et al. 2012a).
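
The weighting idea can be illustrated with a toy calculation. The sketch below applies a simplified eigenvector-style ranking to a hypothetical three-journal citation matrix; it is not the actual Article Influence algorithm, which also involves a damping factor, normalization by article counts, and the exclusion of self-citations.

```python
import numpy as np

# Hypothetical cross-citation matrix: C[i, j] = citations from journal j to journal i.
# Journals: 0 = highly cited science journal, 1 = mid-ranked journal, 2 = specialty journal.
C = np.array([[0, 8, 4],
              [6, 0, 2],
              [2, 1, 0]], dtype=float)

# Column-normalize so each citing journal distributes one unit of influence.
P = C / C.sum(axis=0)

# Power iteration: a journal's weight depends on the weights of the journals
# that cite it, scaled by how much of their citing activity points to it.
# Citations from high-weight journals therefore count for more, unlike the IF,
# which counts every citation equally.
w = np.ones(3) / 3
for _ in range(100):
    w = P @ w
    w /= w.sum()

print(np.round(w, 3))  # relative weights of the three journals
```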

A third difference is that AI scores, unlike IFs, are normalized to account for differences in impact among academic disciplines. It is well known that articles in the natural sciences and in fields with more authors tend to be cited more often. Differences in citation impact persist even among subdisciplines. (See, for example, Althouse, West, Bergstrom, & Bergstrom 2009; Franceschet 2010a; Leydesdorff 2008; Postma 2007; Smolinsky & Lercher 2012; and So 1998.) Impact Factors do not account for these disciplinary differences, and users of the IF are cautioned not to compare journals in different subject areas. In contrast, AI scores are normalized to minimize disciplinary differences in citation rates. According to its creators, the AI algorithm “automatically accounts for these differences and allows better comparison across research areas” (West et al. 2012c).

The AI algorithm does not completely eliminate disciplinary differences in citation impact, however. As Table 1 shows, subject areas differ considerably in their average AI scores—only slightly less than they differ in their average IFs. The average AI score of a medical journal, for instance, is far higher than the average AI score of an anthropology or sociology journal. This may pose a problem for the comparison of journals within fields such as anthropology and sociology, since certain subfields—biological anthropology and medical sociology, for instance—may be especially likely to be cited in the journals of biology, medicine and other high-impact disciplines. Arguably, this gives those science-related subfields an unfair advantage in terms of their AI scores, since a citation in a mid-ranked medical journal is likely to increase the AI score more than a citation in a top social science journal. After all, more than 40% of the journals in the SCI medicine category have AI scores higher than that of American Anthropologist, the flagship journal of the American Anthropological Association. There is nothing unfair about the AI score itself, since any subfield-related differences in AI reflect real differences in impact among subdisciplines. However, unfairness can easily result if differences in impact among subfields are interpreted as differences in scholarly quality, as they sometimes are.


Table 1

Average Article Influence scores and five-year Impact Factors of journals in 35 JCR subject categories (2012).

Subject area  AI  5IF  N
Developmental biology  1.83  4.12  37
Evolutionary biology  1.54  4.00  47
Psychology, biological  1.58  3.98  14
Genetics and heredity  1.57  3.97  153
Biochemistry and molecular biology  1.44  3.91  284
Chemistry, physical  1.17  3.68  134
Medicine, research and experimental  1.15  3.40  106
Medicine, general and internal  1.10  3.02  129
Psychology, experimental  1.20  2.75  78
Chemistry, organic  0.67  2.60  56
Management  1.11  2.55  115
Public, env. and occupational hlth.—SCI  0.85  2.46  134
Psychology, clinical  0.83  2.43  99
Psychology (all subfields combined)  0.97  2.38  497
Biology  0.90  2.38  76
Health care sciences and services  0.83  2.24  73
Health policy and services  0.80  2.07  53
Public, env. and occupational hlth.—SSCI  0.70  2.01  107
Physics, nuclear  0.82  1.92  21
Computer science, artificial intelligence  0.71  1.85  107
Geography  0.66  1.79  60
Social sciences, biomedical  0.59  1.70  32
Business, finance  1.34  1.60  59
Economics  1.24  1.51  276
Information science and library science  0.49  1.41  68
Communication  0.65  1.39  55
Urban studies  0.58  1.36  34
Sociology  0.70  1.34  115
Computer science, software engineering  0.68  1.33  94
Social work  0.43  1.30  31
Anthropology  0.54  1.26  70
Education, scientific disciplines  0.37  1.26  28
Education and educational research  0.51  1.23  145
Political science  0.77  1.15  116
Mathematics  0.93  0.81  258
Avg. of avg. values for 35 subject areas  0.92  2.23  —
SD of avg. values for 35 subject areas  0.37  0.98  —
SD / avg.  0.40  0.44  —


This study evaluates whether AI scores overestimate the citation impact of social science journals in subfields that are related to higher-impact natural science disciplines. “Overestimate” is a relative term, of course, and it could be argued that indicators such as IF underestimate the impact of journals in science-related subfields. In this analysis, however, overestimation is defined relative to the Impact Factor. The IF is used as a baseline simply because it predates the AI score by approximately four decades.

Two analyses are presented. The first has three components: (1) within each of seven social science disciplines, identify a subfield (and a corresponding set of journals) that is closely related to one or more of the natural sciences; (2) use AI scores to estimate IF values through a set of simple regression equations; and (3) examine the residuals to determine, for each field, whether AI scores systematically overestimate IF values for journals that are related to the natural sciences. The second analysis is much like the first, but it incorporates a dummy variable coded 1 for journals in science-related subfields. This allows us to estimate the independent effect of science-related status.

2. Previous research

The use of impact indicators weighted by the reputation or impact of the citing journal was proposed as early as 1974 (Kochen 1974: 83) and has been revisited regularly since then (Bergstrom 2007; Liebowitz & Palmer 1984; Pinski & Narin 1976). Recently, several authors have claimed that unweighted indicators such as the IF measure popularity while weighted indicators such as the AI measure prestige (Ding & Cronin 2011; Yan & Ding 2010). In fact, however, both weighted and unweighted indicators are simply measures of impact, which is influenced by a wide range of factors other than prestige (Balaban 2012; Fersht 2009).

Most measures of citation impact focus on either the article or the journal. The IF, AI, and SNIP indicators each represent the impact of a typical article in a particular journal—not the impact of the journal as a whole. That is, they do not vary systematically with differences in journal size (the number of articles published in each journal). In contrast, indicators such as the h-index, the Eigenfactor Score, and the SCImago Journal Rank (SJR) measure the impact of the entire journal (all articles combined) and are therefore sensitive to journal size.

Ideally, each individual—the author deciding where to publish, the librarian assessing the cost-effectiveness of journals, or the committee member evaluating a promotion application—would choose the citation impact indicator most appropriate to the task at hand. No single measure of reputation or impact is appropriate in all circumstances (Bar-Ilan 2012; Engemann & Wall 2009). However, the choice of AI, IF, or another indicator is often limited by the availability of data. While only a small minority of colleges and universities have access to IF data through Journal Citation Reports, AI scores are freely available online. Moreover, for better or worse, some authors and librarians may regard AI scores simply as a cost-effective alternative to the IF “gold standard” that has become familiar over the past several decades (Fersht 2009). They may really want Impact Factors, but they’ll settle for AI scores because those data are readily available. Within this context, any systematic difference between the two measures will make AI less acceptable as a substitute for IF.

As noted in Section 1, AI is weighted by the impact of the citing journal. In contrast, IF is not. Despite this distinction, weighted and unweighted measures of journal impact are closely related. Within the field of medicine, for instance, there is a 0.95 correlation between Eigenfactor scores and total citation counts (Davis 2008). Such strong correlations may understate the differences among the various impact indicators, however, since both Eigenfactors and total citation counts are heavily influenced by a common factor: journal size. Journal size is a central component of all measures, weighted and unweighted, that represent the citation impact of the journal as a whole rather than the impact of a typical article (West, Bergstrom, & Bergstrom 2010a).


However, strong correlations persist even when we compare indicators such as AI and IF, which represent the impact of a typical article. Comparing 77 journals in 15 fields, Rousseau and the STIMULATE 8 Group (2009) found AI-IF correlations of 0.90 (2004 data) and 0.92 (2006 data). Likewise, Waltman and van Eck (2010) reported a 0.93 correlation between AI and IF for a set of 6,708 journals in the natural and social sciences. Their analysis also demonstrates that strong correlations persist even when IF is compared to several different variants of the AI score. Elkins, Maher, Herbert, Moseley, and Sherrington (2010) document a close correspondence between AI and IF (Spearman’s rho = 0.79) for a set of 5,856 multidisciplinary journals. An even closer relationship (r = 0.94) can be seen among the top six journals in each of 20 natural science disciplines (Chang, McAleer, & Oxley 2011b).

One could argue that the multidisciplinary correlations presented by these authors also reflect the influence of a common factor. When AI and IF are compared across several subject areas, both may be influenced by disciplinary differences in citation impact. That is, differences in impact among the various physics journals (for example) may be swamped by the broader relationship between physics journals (higher impact) and anthropology journals (lower impact).

Fortunately, several authors have conducted single-discipline studies that compare weighted and unweighted indicators representing the impact of a typical article. For instance, Jacsó (2010b) presents an analysis of AI scores and five-year Impact Factors for a single discipline: library and information science (LIS). The two indicators are moderately related with regard to the rank ordering of LIS journals; of the top 22 journals, 18 exhibit rank-order differences of five places or fewer. At the same time, four well-known LIS journals rise or fall substantially when ranked by AI rather than five-year IF. Other studies suggest that the correlation between AI and IF is strongest in the natural sciences. Yin (2011) reports a Spearman’s rho value of 0.96 for the set of 104 chemical engineering journals, along with a similarly close relationship (0.94) for 15 of the top journals in that field. Among 40 high-impact economics journals, the relationship is not quite as strong; r = 0.91 (Chang, McAleer, & Oxley 2011a). Likewise, the Eigenfactor web site reports correlations of 0.93 for biology and medicine, with weaker relationships for mathematics (0.73) and economics (0.56) (West et al. 2012b).

Jacsó (2010b) discusses a number of factors that appear to influence the citation rankings of particular LIS journals. However, no published study has investigated the factors (such as disciplinary differences in impact) that might lead to systematic bias in the relationship between AI scores and Impact Factors.

3. Methods

The standard Impact Factor (IF or 2IF) is based on a two-year publication window and a one-year citation window. It accounts for the number of times the articles published in 2010 and 2011 were cited in 2012, for example. Because of its short publication and citation windows, 2IF varies in response to short-term trends. Some such trends signal real changes in the trajectory of a journal, while others represent anomalies that can be attributed to a single issue or even a single article. The developers of the AI score chose a five-year publication window partly to remedy the shortcomings of the two-year window used in the calculation of 2IF (West et al. 2010b, 2012c). Although the advantages of a five-year publication window have not been conclusively demonstrated, there is no reason to believe that a two-year window is preferable (Abramo, D’Angelo, & Cicero 2012; van Leeuwen 2012; Vanclay 2009). For these reasons, the five-year Impact Factor (5IF) was chosen for use in this study. AI and 5IF are both based on a five-year publication window and a one-year citation window.
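
For reference, the two windows can be written out explicitly for the 2012 JCR edition used here. These are the standard JCR definitions, restated from the description above rather than taken from a formula in the original article:

```latex
\[
\mathrm{2IF}_{2012} \;=\;
  \frac{\text{citations received in 2012 by items published in 2010--2011}}
       {\text{citable items published in 2010--2011}},
\qquad
\mathrm{5IF}_{2012} \;=\;
  \frac{\text{citations received in 2012 by items published in 2007--2011}}
       {\text{citable items published in 2007--2011}}.
\]
```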

AI and 5IF data were downloaded in October 2013 from the 2012 edition of Journal Citation Reports. Although the AI scores presented in JCR have not always matched those reported on the Eigenfactor web site (Jacsó 2010a), a comparison of 100 randomly selected values suggests that the two sources provide identical AI data for 2012.


JCR presents both AI and 5IF data for 1,226 journals in the anthropology, communication, economics, education and educational research, information science and library science, psychology, and sociology subject categories. (For purposes of analysis, the 10 JCR psychology categories—clinical, experimental, social, etc.—were treated as a single discipline.) Within those broad subject areas, the identification criteria shown in Table 2 were used to identify 56 journals in subfields that are conceptually related to the natural sciences: biological and medical anthropology, scientific and technical communication, the economics of health care, science education, scientometrics, biopsychology, and medical sociology and sociobiology. Physical geography was excluded because its methods, journals, publication patterns, and graduate programs are distinct from those of human geography. (See, for example, Castree, Rogers, & Sherman 2005; Harrison, Massey, Richards, Magilligan, Thrift, & Bender 2004; Turner 2002; and Viles 2004.)

Table 2

Criteria used to identify journals in seven social science subfields.

Biological and medical anthropology—11 journals
All journals in the JCR anthropology category that also appear in the categories of biology; evolutionary biology; genetics and heredity; health care sciences and services; health policy and services; public, environmental, and occupational health; or social sciences, biomedical. Also includes the journal Evolutionary Anthropology.

Scientific and technical communication—8 journals
All journals in the JCR communication category that focus on scientific and technical communication, subjectively determined.

Economics of health care—6 journals
All journals in the JCR economics category that also appear in the categories of health care sciences and services; health policy and services; public, environmental, and occupational health; or social sciences, biomedical.

Science education—12 journals
All journals in the JCR education and educational research category that also appear in the education, scientific disciplines category or that have “science,” “health,” or “environmental” in their titles—excluding Instructional Science and Journal of the Learning Sciences.

Scientometrics—3 journals
The three journals in the JCR information science and library science category that most often cited, or were cited by, Scientometrics from 2008 to 2012. The same three journals appear at the top of the “cited” and “cited by” lists.

Biopsychology—14 journals
All journals in the JCR psychology, biological category.

Medical sociology and sociobiology—2 journals
All journals in the JCR sociology category that also appear in the categories of biology; evolutionary biology; genetics and heredity; health care sciences and services; health policy and services; public, environmental, and occupational health; or social sciences, biomedical.


Seven OLS regressions were conducted for the first analysis (Section 4.1). Each regression uses AI scores to estimate 5IF values in a particular discipline. The residuals (error terms) were examined to evaluate the hypothesis that AI scores systematically overestimate 5IF values for journals in subfields that are related to the natural sciences. Emphasis was placed on both (a) the direction and magnitude of each error term and (b) the direction and magnitude of each error term relative to those of comparable (similar-impact) journals not in subfields related to the natural sciences.
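
A minimal sketch of this first analysis is shown below, assuming a hypothetical list of (journal, AI, 5IF, science-related) records for one discipline; the coefficients reported in Table 3 were, of course, estimated from the full JCR data rather than from toy values like these.

```python
import numpy as np

# Hypothetical records for one discipline: (journal, AI, 5IF, science_related).
journals = [
    ("Journal A", 0.45, 1.10, False),
    ("Journal B", 1.20, 3.00, True),
    ("Journal C", 0.30, 0.70, False),
    ("Journal D", 0.80, 2.40, True),
    ("Journal E", 0.55, 1.30, False),
]

ai  = np.array([j[1] for j in journals])
fif = np.array([j[2] for j in journals])

# Simple OLS: 5IF = a + b * AI (cf. the equations in Table 3).
b, a = np.polyfit(ai, fif, deg=1)
resid = fif - (a + b * ai)   # positive residual: AI underestimates 5IF for that journal

# Average residual of the science-related journals in this discipline (cf. Table 4).
sci = np.array([j[3] for j in journals])
print(round(float(resid[sci].mean()), 2))
```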

In the second analysis (Section 4.2), a dummy variable—coded 1 for journals in science-related subfields—was added to each regression equation. This allows us to estimate the independent effect of science-related status. Each new set of residuals was evaluated based on the same criteria used in Section 4.1.
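
The second model can be sketched in the same way by adding the dummy as a second regressor. Again, the data and names below are hypothetical; only the model form follows the text.

```python
import numpy as np

def fit_with_dummy(ai, fif, sci):
    """OLS fit of 5IF = a + b*AI + c*Sci; returns the coefficients and residuals."""
    X = np.column_stack([np.ones_like(ai), ai, sci.astype(float)])
    coef, *_ = np.linalg.lstsq(X, fif, rcond=None)
    return coef, fif - X @ coef

# Toy data: coef[2] estimates the independent effect of science-related status,
# in 5IF units (cf. the Sci coefficients in Table 6 and the effects in Table 7).
ai  = np.array([0.45, 1.20, 0.30, 0.80, 0.55])
fif = np.array([1.10, 3.00, 0.70, 2.40, 1.30])
sci = np.array([False, True, False, True, False])
coef, resid = fit_with_dummy(ai, fif, sci)
print(np.round(coef, 2))  # [intercept, AI slope, Sci effect]
```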

Significance tests are not appropriate here, since the analysis is based on data for the entire population of interest: all the journals, in seven social science disciplines, for which AI and 5IF data are available. No attempt is made to extend the findings to other subject areas.

4. Results

4.1. Does AI overestimate 5IF for science-related journals?

For the seven subject areas shown in Table 3, the correlations between AI and 5IF are similar to those reported in previous research. The r value for economics (0.88) is comparable to the 0.91 value reported by Chang et al. (2011a) but higher than the 0.56 value reported elsewhere (West et al. 2012b). Overall, AI is a good or excellent predictor of 5IF.

The regression residuals (Table 4) demonstrate that AI does not systematically overestimate 5IF for social science journals in subfields that are related to the natural sciences. As the table shows, each of the seven subfields has an average residual value that is positive rather than negative. That is, AI tends to underestimate 5IF for the journals in science-related subfields. (See the economics of health care category, in particular.) Only 6 of the 56 journals have negative error terms: one in biological and medical anthropology, one in scientific and technical communication, three in biopsychology, and one in medical sociology and sociobiology.

Table 3

Descriptive statistics and simple linear regressions.

Subject area  N  5IF avg.  5IF SD  Equation  SE Est.  r
Anthropology  70  1.26  1.09  5IF = 0.1373 + 2.0685 AI  0.42  0.92
Communication  55  1.39  0.93  5IF = 0.2834 + 1.7199 AI  0.41  0.90
Economics  276  1.51  1.36  5IF = 0.6732 + 0.6743 AI  0.64  0.88
Education  145  1.23  0.93  5IF = 0.3308 + 1.7759 AI  0.43  0.89
Inf. & Lib. Sci.  68  1.41  1.36  5IF = 0.2194 + 2.4458 AI  0.39  0.96
Psychology  497  2.38  2.48  5IF = 0.4230 + 2.0085 AI  0.55  0.98
Sociology  115  1.34  1.11  5IF = 0.3598 + 1.3892 AI  0.42  0.92


Table 4

Simple linear regressions. Predicted values and residuals of the 56 journals in social science subfields that are related to the natural sciences. Resid. is the residual value for each journal. Comp. resid. is the average residual value of the three journals ranked immediately higher and the three journals ranked immediately lower (on the basis of 5IF) in the relevant subject area (anthropology, communication, etc.). Diff. is the difference between Resid. and Comp. resid.

Subfield / Journal  5IF  Pred. 5IF  Resid.  Comp. resid.  Diff.

Biological and medical anthropology (average)  2.49  2.07  0.42  0.17  0.25
Evolutionary Anthropology  4.64  4.80  -0.16  0.60  -0.76
Journal of Human Evolution  4.53  3.35  1.18  -0.08  1.26
American Journal of Physical Anthropology  2.85  1.97  0.88  0.05  0.83
American Journal of Human Biology  2.39  1.73  0.66  0.18  0.47
Human Nature  2.37  2.25  0.12  0.20  -0.09
Yearbook of Physical Anthropology  2.23  1.77  0.46  0.15  0.32
Medical Anthropology Quarterly  1.82  1.51  0.31  0.19  0.12
Medical Anthropology  1.77  1.51  0.26  0.19  0.07
Annals of Human Biology  1.76  1.25  0.51  0.12  0.38
Culture, Medicine and Psychiatry  1.66  1.43  0.22  0.29  -0.07
Human Biology  1.38  1.22  0.16  -0.04  0.20

Scientific and technical communication (average)  1.49  1.14  0.35  0.02  0.33
Public Understanding of Science  2.47  1.72  0.75  0.30  0.45
Science Communication  2.42  1.70  0.72  0.27  0.46
Journal of Health Communication  2.31  1.69  0.62  -0.01  0.63
Health Communication  1.74  1.39  0.35  0.17  0.18
Environmental Communication  0.99  0.68  0.31  -0.11  0.42
IEEE Transactions on Professional Communication  0.84  0.65  0.19  -0.05  0.23
Technical Communication  0.70  0.61  0.09  -0.15  0.24
Journal of Business and Technical Communication  0.45  0.64  -0.19  -0.23  0.03

Economics of health care (average)  2.79  1.47  1.32  0.69  0.64
PharmacoEconomics  3.54  1.46  2.09  0.98  1.10
Journal of Health Economics  3.03  1.85  1.18  0.45  0.73
Value in Health  2.90  1.35  1.56  1.12  0.44
Health Economics  2.79  1.57  1.22  0.96  0.26
Economics and Human Biology  2.51  1.45  1.06  0.16  0.90
European Journal of Health Economics  1.98  1.13  0.85  0.46  0.39

Science education (average)  2.08  1.55  0.52  0.17  0.35
Journal of Research in Science Teaching  3.23  2.34  0.89  0.98  -0.09
Science Education  2.71  2.32  0.39  0.28  0.11
Advances in Health Sciences Education  2.61  2.06  0.54  0.00  0.55
Health Education Research  2.44  1.82  0.62  -0.32  0.94
Physical Review: Physics Education Research  2.13  1.24  0.89  0.10  0.79
Journal of School Health  2.01  1.41  0.61  0.37  0.24

Table 4 continues


Table 4 (continued)

Subfield / Journal  5IF  Pred. 5IF  Resid.  Comp. resid.  Diff.

Science education (continued)
Journal of American College Health  1.99  1.42  0.57  0.41  0.16
Journal of Engineering Education  1.92  1.50  0.42  0.34  0.08
International Journal of Science Education  1.80  1.34  0.46  0.17  0.29
Research in Science Education  1.58  1.29  0.29  0.15  0.14
Health Education Journal  1.29  1.07  0.22  -0.13  0.35
Chemistry Education Research and Practice  1.20  0.84  0.36  -0.29  0.65

Scientometrics (average)  2.78  2.33  0.46  0.33  0.13
Journal of Informetrics  3.99  3.21  0.78  -0.22  1.00
Scientometrics  2.21  1.68  0.52  0.52  0.00
JASIST  2.16  2.09  0.07  0.68  -0.61

Biopsychology (average)  3.98  3.60  0.38  0.08  0.30
Behavioral and Brain Sciences  23.17  22.45  0.72  0.95  -0.23
Biological Psychology  4.34  3.37  0.98  0.23  0.74
Evolution and Human Behavior  4.25  3.76  0.49  0.59  -0.10
Psychophysiology  4.01  3.17  0.84  -0.13  0.97
Physiology and Behavior  3.34  2.41  0.93  0.08  0.86
Experimental and Clinical Psychopharmacology  3.20  2.42  0.78  0.14  0.65
International Journal of Psychophysiology  2.66  2.19  0.47  -0.03  0.50
Journal of Exp. Psy.: Animal Behavior Processes  2.46  2.21  0.25  0.30  -0.05
Journal of Psychophysiology  2.13  1.94  0.19  0.02  0.17
Learning and Behavior  1.89  1.84  0.05  0.09  -0.04
Behavioural Processes  1.63  1.49  0.13  -0.27  0.40
Journal of the Experimental Analysis of Behavior  1.12  1.16  -0.04  -0.11  0.07
Learning and Motivation  0.75  1.02  -0.27  -0.36  0.09
Integrative Psychological and Behavioral Science  0.74  0.92  -0.18  -0.34  0.16

Medical sociology and sociobiology (average)  1.51  1.09  0.42  -0.15  0.57
Sociology of Health and Illness  2.44  1.53  0.91  -0.15  1.06
Health Sociology Review  0.58  0.66  -0.07  -0.15  0.08

The underestimation of 5IF for these science-related journals may result from a combination of two factors: (1) the relatively high impact (5IF) of journals in subfields that are related to the natural sciences (Table 5), and (2) the positive correlation between 5IF and the regression residuals within each of the seven disciplines. Of the 56 journals shown in Table 4, 51 are ranked at the 40th percentile or higher within their broad subject areas. All but one of those 51 journals have positive residuals. In contrast, just 5 of the 56 journals are ranked below the 40th percentile in their subject areas; all 5 have negative residuals. For the set of all 1,226 journals, the correlations between 5IF and the residuals vary by subject area, from 0.22 in psychology to 0.47 in economics, but the average correlation (0.38) indicates a moderately strong relationship.


Table 5

Average and minimum percentile ranks of science-related journals, based on 5IF within each broad subject area (anthropology, communication, etc.).

Subfield  Avg.  Min.
Biological and medical anthropology  84th  66th
Scientific and technical communication  65  33
Economics of health care  88  80
Science education  88  73
Scientometrics  84  78
Biopsychology  62  15
Medical sociology and sociobiology  66  40

Can the underestimation of 5IF for natural science-related journals be attributed to their scientific emphasis, or does it simply result from the fact that AI tends to underestimate 5IF for higher-impact journals? The two rightmost columns of Table 4 suggest that scientific emphasis does play a role. Each Comp. resid. (comparable residuals) value shows the average of the residuals for the three higher-impact journals and the three lower-impact journals that are closest in 5IF to each science-related journal, within the relevant subject category (anthropology, communication, etc.). Each Diff. value shows the difference between the journal’s residual and the average residual of the journals in the same subject category that are most similar in impact. (Diff. equals Resid. minus Comp. resid.) If the underestimation of 5IF for these 56 journals can be attributed solely to their high impact, we can expect an average Diff. value near zero within each subject area.

In fact, however, the average Diff. value is positive for each of the seven subfields. Positive values can be seen for eight of the eleven biological and medical anthropology journals, all eight of the scientific and technical communication journals, all six of the health care economics journals, eleven of the twelve science education journals, two of the three scientometrics journals, ten of the fourteen biopsychology journals, and both of the medical sociology journals. This indicates that the journals’ low AI scores (relative to their 5IF values) can be attributed at least partly to factors other than their high impact.
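
A sketch of how the Comp. resid. values used above can be computed is given below. The function and variable names are hypothetical; `residuals` is assumed to hold the residuals for one subject category, ordered by descending 5IF, and any finer points of neighbour selection follow the definitions given in Section 3 and Table 4.

```python
import numpy as np

def comparable_residual(residuals, idx, k=3):
    """Average residual of the k journals ranked immediately above and the k
    ranked immediately below journal `idx`, where `residuals` is ordered by
    descending 5IF within one subject category (cf. Comp. resid. in Table 4)."""
    above = range(max(0, idx - k), idx)
    below = range(idx + 1, min(len(residuals), idx + k + 1))
    neighbours = [residuals[i] for i in above] + [residuals[i] for i in below]
    return float(np.mean(neighbours))

# Diff. (Table 4) for journal idx is then residuals[idx] - comparable_residual(residuals, idx).
```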

Scatterplots reveal that the relationship between AI and 5IF is essentially linear for each of the seven disciplines. (See Fig. 1, for example.) Linear relationships can also be seen in several earlier studies (Elkins et al. 2010; Rousseau & STIMULATE 8 Group 2009; Waltman & van Eck 2010; West et al. 2012b; Yin 2011). Nonetheless, several types of curvilinear regression were used to investigate whether a nonlinear relationship might explain the underestimation of 5IF for journals in science-related subfields. For four of the seven disciplines, a third-order polynomial regression resulted in the best fit. For three disciplines—communication, economics, and sociology—a power curve resulted in the best fit.

The use of curvilinear regression improves the predictive power of the equations only slightly, however. It increases the r value by more than 0.02 for only one field, economics (0.04). More importantly, the use of curvilinear regression does not reduce the prevalence of positive residuals among the journals with high IFs. For the linear regressions, the average correlation between 5IF and the residuals is 0.38; for the curvilinear regressions, 0.40. Adjusting the regressions to account for nonlinearity has no real impact on the strength of the overall relationships or on the relatively high residuals of the science-related journals. That is, the very modest curvilinearity identified here cannot account for the fact that science-related journals have low AI scores relative to their Impact Factors.
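
This robustness check can be sketched as follows. It is an illustration only: the cubic polynomial is fitted by least squares and the power curve in log-log space, which may differ in detail from the fitting procedures behind the published results.

```python
import numpy as np

def fit_correlations(ai, fif):
    """Correlation between observed 5IF and the fitted values from a linear,
    a third-order polynomial, and a power-curve model of 5IF on AI."""
    fitted = {
        "linear": np.polyval(np.polyfit(ai, fif, 1), ai),
        "cubic":  np.polyval(np.polyfit(ai, fif, 3), ai),
    }
    # Power curve 5IF = c * AI**d, fitted in log-log space (requires AI, 5IF > 0).
    d, log_c = np.polyfit(np.log(ai), np.log(fif), 1)
    fitted["power"] = np.exp(log_c) * ai ** d
    return {name: float(np.corrcoef(fif, yhat)[0, 1]) for name, yhat in fitted.items()}
```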


Fig. 1. Article Influence scores and five-year Impact Factors of 70 anthropology journals.

4.2. The independent effect of science-related status

The inclusion of a dummy variable for science-related status—Sci, coded 1 for journals in subfields related to the natural sciences—improves the predictive power of the regressions only slightly. Likewise, it reduces the standard error of estimate by just 0.02, on average (Table 6). The overall impact is minor simply because the Sci variable introduces relatively little explanatory variance; 1,170 of the 1,226 journals have values of 0 for Sci.

Table 6

Linear regressions with a dummy variable, Sci, coded 1 for journals in subfields that are related to the natural sciences.

Subject area  Equation  SE Est.  r
Anthropology  5IF = 0.1301 + 1.9180 AI + 0.5649 Sci  0.38  0.94
Communication  5IF = 0.1971 + 1.7589 AI + 0.4208 Sci  0.39  0.91
Economics  5IF = 0.6431 + 0.6749 AI + 1.3544 Sci  0.61  0.90
Education  5IF = 0.3032 + 1.7361 AI + 0.5772 Sci  0.40  0.90
Inf. & Lib. Sci.  5IF = 0.2118 + 2.4170 AI + 0.4899 Sci  0.38  0.96
Psychology  5IF = 0.4164 + 2.0038 AI + 0.3959 Sci  0.55  0.98
Sociology  5IF = 0.3507 + 1.3916 AI + 0.4268 Sci  0.42  0.93


Table 7

Estimated impact of science-related status—the extent to which AI underestimates 5IF for journals in subfields that are related to the natural sciences.

Subject area  In 5IF units  In SD
Anthropology  0.56  0.52
Communication  0.42  0.45
Economics  1.35  1.00
Education  0.58  0.62
Inf. & Lib. Sci.  0.49  0.36
Psychology  0.40  0.16
Sociology  0.43  0.38

Nonetheless, the dummy variable has a considerable impact on the 5IF estimates for the science-related journals. For the seven disciplines, the average Sci coefficient is 0.60 5IF units or 0.50 SD (Table 7). In anthropology (N = 70), an effect of that magnitude is enough to move a mid-ranked journal at least 10 places up or down in the rankings. In economics, failure to consider science-related status can lead to the underestimation of 5IF by an entire standard deviation.

As Table 8 shows, the inclusion of the Sci variable eliminates the high residuals identified in the earlier regressions (Section 4.1). It also results in an average residual value of 0.00 for each of the seven science-related subfields. Within each subfield, the journals’ residuals are about evenly split between positive and negative values, and the inclusion of Sci reduces the average magnitude of the residuals from 0.55 (Table 4) to 0.29 (Table 8). This effect is consistent across the seven subfields. Likewise, the Diff. values shown in Table 8 are consistent with a better-fitting and more fully specified regression model. Without the Sci variable, the average Diff. value is 0.34. With the Sci variable, the average value is -0.17.

The correlation between 5IF and the residuals persists even when Sci is included in the regression equations. For the earlier regressions (Table 4), the average correlation between 5IF and the residuals is 0.38. For these new regressions (Table 8), it is 0.36. AI tends to underestimate 5IF for higher-impact journals even after we control for Sci. This suggests that the underestimation associated with science-related status is largely independent of the underestimation associated with high impact.

5. Conclusion

Within the seven subject areas evaluated here, Article Influence scores do not systematically overestimate the citation impact (5IF) of social science journals in subfields that are related to higher-impact natural science disciplines. In fact, AI scores tend to underestimate 5IF for journals that are related to the natural sciences. Such journals have high 5IFs relative to their AI scores—that is, low AI scores relative to their 5IFs.

This finding should be interpreted against the backdrop of a more general relationship: AI scores are especially likely to underestimate 5IF values for higher-impact journals—those with high 5IFs, whether related to the natural sciences or not. A similar result has been reported with regard to the SNIP indicator.


Table 8

Linear regressions with a dummy variable, Sci, coded 1 for journals in subfields that are related to the natural sciences. Predicted values and residuals of the 56 journals in social science subfields that are related to the natural sciences. Resid. is the residual value for each journal. Comp. resid. is the average residual value of the three journals ranked immediately higher and the three journals ranked immediately lower (on the basis of 5IF) in the relevant subject area (anthropology, communication, etc.). Diff. is the difference between Resid. and Comp. resid.

Subfield / Journal  5IF  Pred. 5IF  Resid.  Comp. resid.  Diff.

Biological and medical anthropology (average)  2.49  2.49  0.00  0.20  -0.20
Evolutionary Anthropology  4.64  5.02  -0.38  0.66  -1.04
Journal of Human Evolution  4.53  3.68  0.85  0.06  0.79
American Journal of Physical Anthropology  2.85  2.39  0.46  0.18  0.28
American Journal of Human Biology  2.39  2.17  0.22  0.16  0.05
Human Nature  2.37  2.65  -0.29  0.18  -0.46
Yearbook of Physical Anthropology  2.23  2.21  0.02  0.20  -0.18
Medical Anthropology Quarterly  1.82  1.97  -0.15  0.12  -0.27
Medical Anthropology  1.77  1.97  -0.20  0.12  -0.32
Annals of Human Biology  1.76  1.73  0.03  0.05  -0.02
Culture, Medicine and Psychiatry  1.66  1.90  -0.24  0.38  -0.62
Human Biology  1.38  1.70  -0.32  0.05  -0.37

Scientific and technical communication (average)  1.49  1.49  0.00  0.01  -0.01
Public Understanding of Science  2.47  2.09  0.38  0.27  0.11
Science Communication  2.42  2.06  0.36  0.17  0.19
Journal of Health Communication  2.31  2.06  0.25  -0.04  0.29
Health Communication  1.74  1.75  -0.01  0.23  -0.23
Environmental Communication  0.99  1.03  -0.04  -0.11  0.07
IEEE Transactions on Professional Communication  0.84  0.99  -0.16  -0.12  -0.04
Technical Communication  0.70  0.95  -0.25  -0.15  -0.10
Journal of Business and Technical Communication  0.45  0.99  -0.54  -0.15  -0.39

Economics of health care (average)  2.79  2.79  0.00  0.64  -0.64
PharmacoEconomics  3.54  2.78  0.76  1.01  -0.25
Journal of Health Economics  3.03  3.17  -0.14  0.48  -0.63
Value in Health  2.90  2.67  0.23  0.92  -0.69
Health Economics  2.79  2.89  -0.11  0.76  -0.87
Economics and Human Biology  2.51  2.77  -0.26  0.19  -0.45
European Journal of Health Economics  1.98  2.46  -0.48  0.49  -0.97

Science education (average)  2.08  2.08  0.00  0.15  -0.15
Journal of Research in Science Teaching  3.23  2.84  0.38  1.05  -0.66
Science Education  2.71  2.83  -0.11  0.26  -0.37
Advances in Health Sciences Education  2.61  2.57  0.03  -0.12  0.15
Health Education Research  2.44  2.34  0.11  -0.33  0.44
Physical Review: Physics Education Research  2.13  1.77  0.36  0.17  0.19
Journal of School Health  2.01  1.93  0.08  0.33  -0.25

Table 8 continues


Table 8 (continued)

Subfield / Journal  5IF  Pred. 5IF  Resid.  Comp. resid.  Diff.

Science education (continued)
Journal of American College Health  1.99  1.94  0.05  0.27  -0.22
Journal of Engineering Education  1.92  2.03  -0.10  0.21  -0.31
International Journal of Science Education  1.80  1.86  -0.07  0.13  -0.20
Research in Science Education  1.58  1.82  -0.24  0.20  -0.44
Health Education Journal  1.29  1.60  -0.31  -0.07  -0.24
Chemistry Education Research and Practice  1.20  1.37  -0.17  -0.23  0.06

Scientometrics (average)  2.78  2.78  0.00  0.31  -0.31
Journal of Informetrics  3.99  3.65  0.33  -0.16  0.49
Scientometrics  2.21  2.15  0.06  0.46  -0.41
JASIST  2.16  2.55  -0.39  0.62  -1.01

Biopsychology (average)  3.98  3.98  0.00  0.08  -0.08
Behavioral and Brain Sciences  23.17  22.79  0.38  1.00  -0.61
Biological Psychology  4.34  3.75  0.59  0.18  0.41
Evolution and Human Behavior  4.25  4.14  0.11  0.54  -0.43
Psychophysiology  4.01  3.55  0.46  -0.12  0.57
Physiology and Behavior  3.34  2.79  0.55  0.09  0.46
Experimental and Clinical Psychopharmacology  3.20  2.81  0.40  0.15  0.25
International Journal of Psychophysiology  2.66  2.58  0.09  -0.02  0.11
Journal of Exp. Psy.: Animal Behavior Processes  2.46  2.59  -0.14  0.31  -0.45
Journal of Psychophysiology  2.13  2.33  -0.20  0.03  -0.23
Learning and Behavior  1.89  2.22  -0.33  0.10  -0.43
Behavioural Processes  1.63  1.88  -0.25  -0.26  0.01
Journal of the Experimental Analysis of Behavior  1.12  1.55  -0.43  -0.10  -0.33
Learning and Motivation  0.75  1.41  -0.66  -0.42  -0.24
Integrative Psychological and Behavioral Science  0.74  1.31  -0.57  -0.39  -0.17

Medical sociology and sociobiology (average)  1.51  1.51  0.00  -0.14  0.14
Sociology of Health and Illness  2.44  1.95  0.49  -0.14  0.63
Health Sociology Review  0.58  1.07  -0.49  -0.14  -0.35

In comparison with measures such as the Impact Factor, SNIP “makes differences among journals smaller” (Colledge et al. 2010, p. 220). The reason for this is not clear, although it may be related to the fact that until recently, both AI and SNIP counted citations that appeared in popular magazines and lower-ranked journals—citations that are not considered in the calculation of the Impact Factor (Moed 2010; West et al. 2012c). Since October 2012, however, SNIP has been redefined to exclude citations in trade journals and in publications with relatively few references to previous work (Waltman, van Eck, van Leeuwen, & Visser 2013).

These results reveal that the underestimation of 5IF associated with science-related status is largely independent of the underestimation associated with high impact. Just as AI tends to underestimate 5IF for high-impact journals regardless of their connection (or lack of connection) to the natural sciences, AI tends to underestimate 5IF for science-related journals regardless of their 5IF values. Moreover, the effect of science-related status cannot be attributed to the modest curvilinearity of the relationship between AI and 5IF.


Why do the social science journals shown in Tables 4 and 8 have low AI scores relative to their Impact Factors? Since AI scores give more (less) credit for citations that appear in more (less) cited journals, three possibilities come to mind.

(1) Although these social science journals are conceptually related to the natural sciences, they are cited only seldom in natural science journals. Despite their subject coverage, they are not integrated into the literature of the natural sciences.

(2) Although these journals are integrated into the literature of the natural sciences, they are most often cited in the lower-ranked journals—natural science journals with AI scores lower than those of the journals’ home (social science) disciplines. Although this possibility has not been evaluated, there is some evidence to support it. Previous research has shown that articles in journals covering more than one of the natural sciences tend to be cited less often than those in journals that cover just a single natural science discipline (Levitt & Thelwall 2008). Lower citation rates may correspond to citations in lower-ranked journals.

(3) Although these journals are cited regularly in higher-impact natural science journals, they are especially unlikely to be cited in the higher-impact journals of their home disciplines.

Further research may help determine whether any of these explanations are valid.

With regard to the needs of faculty and librarians, these findings show that the careful estimation of Impact Factors from Article Influence scores involves more than a simple rescaling of the values. Although AI is closely correlated with 5IF, there are systematic differences between the two. The equations presented in Table 6—and, more generally, an awareness of the issues involved—may be useful to authors deciding where to submit their papers, to faculty serving on tenure and promotion committees, and to librarians making journal selection and deselection decisions.

Acknowledgements

I am grateful for the comments of Esther Isabelle Wilder and two anonymous reviewers.


References

Abramo, G., D’Angelo, C.A., & Cicero, T. (2012). What is the appropriate length of the publication period over which to assess research performance? Scientometrics, 93(3), 1005–1017.

Althouse, B.M., West, J.D., Bergstrom, C.T., & Bergstrom, T.C. (2009). Differences in Impact Factor across fields and over time. Journal of the American Society for Information Science and Technology, 60(1), 27–34.

Balaban, A.T. (2012). Positive and negative aspects of citation indices and journal Impact Factors. Scientometrics, 92(2), 241–247.

Bar-Ilan, J. (2012). Journal report card. Scientometrics, 92(2), 249–261.

Bergstrom, C.T. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), 314–316.

Bergstrom, C.T., West, J.D., & Wiseman, M.A. (2008). The Eigenfactor metrics. Journal of Neuroscience, 28(45), 11433–11434.

Castree, N., Rogers, A., & Sherman, D., eds. (2005). Questioning geography: Fundamental debates. Oxford: Blackwell.

Chang, C.-L., McAleer, M., & Oxley, L. (2011a). What makes a great journal great in economics? The singer not the song. Journal of Economic Surveys, 25(2), 326–361.

Chang, C.-L., McAleer, M., & Oxley, L. (2011b). What makes a great journal great in the sciences? Which came first, the chicken or the egg? Scientometrics, 87(1), 17–40.

Colledge, L., de Moya-Anegón, F., Guerrero-Bote, V.P., López-Illescas, C., El Aisati, M., & Moed, H.F. (2010). SJR and SNIP: Two new journal metrics in Elsevier’s Scopus. Serials, 23(3), 215–221.

Davis, P.M. (2008). Eigenfactor: Does the principle of repeated improvement result in better estimates than raw citation counts? Journal of the American Society for Information Science and Technology, 59(13), 2186–2188.

Ding, Y., & Cronin, B. (2011). Popular and/or prestigious? Measures of scholarly esteem. Information Processing and Management, 47(1), 80–96.

Elkins, M.R., Maher, C.G., Herbert, R.D., Moseley, A.M., & Sherrington, C. (2010). Correlation between the Journal Impact Factor and three other journal citation indices. Scientometrics, 85(1), 81–93.

Engemann, K.M., & Wall, H.J. (2009). A journal ranking for the ambitious economist. Federal Reserve Bank of St. Louis Review, 91(3), 127–139.

Fersht, A. (2009). The most influential journals: Impact Factor and Eigenfactor. Proceedings of the National Academy of Sciences, 106(17), 6883–6884.

Franceschet, M. (2010a). Journal influence factors. Journal of Informetrics, 4(3), 239–248.

Franceschet, M. (2010b). Ten good reasons to use the Eigenfactor metrics. Information Processing and Management, 46(5), 555–558.

Garfield, E. (2007). The evolution of the Science Citation Index. International Microbiology, 10(1), 65–69.

Harrison, S., Massey, D., Richards, K., Magilligan, F.J., Thrift, N., & Bender, B. (2004). Thinking across the divide: Perspectives on the conversations between physical and human geography. Area, 36(4), 435–442.

Jacsó, P. (2010a). Differences in the rank position of journals by Eigenfactor metrics and the five-year Impact Factor in the Journal Citation Reports and the Eigenfactor Project web site. Online Information Review, 34(3), 496–508.

Jacsó, P. (2010b). Eigenfactor and Article Influence scores in the Journal Citation Reports. Online Information Review, 34(2), 339–348.

Kochen, M. (1974). Principles of information retrieval. Los Angeles: Melville Publishing.

Levitt, J.M., & Thelwall, M. (2008). Is multidisciplinary research more highly cited? A macrolevel study. Journal of the American Society for Information Science and Technology, 59(12), 1973–1984.


Leydesdorff, L. (2008). Caveats for the use of citation indicators in research and journal evaluations. Journal of the American Society for Information Science and Technology, 59(2), 278–287.

Liebowitz, S.J., & Palmer, J.P. (1984). Assessing the relative impacts of economics journals. Journal of Economic Literature, 22(1), 77–88.

Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.

Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 12(5), 297–312.

Postma, E. (2007). Inflated Impact Factors? The true impact of evolutionary papers in non-evolutionary journals. PLoS ONE, 2(10), e999.

Rousseau, R., & STIMULATE 8 Group (2009). On the relation between the WoS Impact Factor, the Eigenfactor, the SCImago Journal Rank, the Article Influence score and the journal h-index. Paper presented at Nanjing University. http://eprints.rclis.org/13304/. Accessed 3 February 2014.

Smolinsky, L., & Lercher, A. (2012). Citation rates in mathematics: A study of variation by subdiscipline. Scientometrics, 91(3), 911–924.

So, C.Y.K. (1998). Citation ranking versus expert judgment in evaluating communication scholars: Effects of research specialty size and individual prominence. Scientometrics, 41(3), 325–333.

Turner, B.L. (2002). Contested identities: Human-environment geography and disciplinary implications in a restructuring academy. Annals of the Association of American Geographers, 92(1), 52–74.

van Leeuwen, T.N. (2012). Discussing some basic critique on Journal Impact Factors: Revision of earlier comments. Scientometrics, 92(2), 443–455.

Vanclay, J. (2009). Bias in the Journal Impact Factor. Scientometrics, 78(1), 3–12.

Viles, H. (2004). Does Area keep you awake at night? Area, 36(4), 337.

Waltman, L., & van Eck, N.J. (2010). The relation between Eigenfactor, Audience Factor, and Influence Weight. Journal of the American Society for Information Science and Technology, 61(7), 1476–1486.

Waltman, L., van Eck, N.J., van Leeuwen, T.N., & Visser, M.S. (2013). Some modifications to the SNIP journal impact indicator. Journal of Informetrics, 7(2), 272–285.

West, J.D., Bergstrom, C.T., Althouse, B.M., Rosvall, M., Bergstrom, T.C., & Vilhena, D. (2012a). A model of research. University of Washington. http://www.eigenfactor.org/methods.php. Accessed 3 February 2014.

West, J.D., Bergstrom, C.T., Althouse, B.M., Rosvall, M., Bergstrom, T.C., & Vilhena, D. (2012b). Correlation of Article Influence with Impact Factor. University of Washington. http://www.eigenfactor.org/stats.php. Accessed 3 February 2014.

West, J.D., Bergstrom, C.T., Althouse, B.M., Rosvall, M., Bergstrom, T.C., & Vilhena, D. (2012c). Why Eigenfactor? University of Washington. http://www.eigenfactor.org/whyeigenfactor.php. Accessed 3 February 2014.

West, J.D., Bergstrom, T.C., & Bergstrom, C.T. (2010a). Big Macs and Eigenfactor scores: Don’t let correlation coefficients fool you. Journal of the American Society for Information Science and Technology, 61(9), 1800–1807.

West, J.D., Bergstrom, T.C., & Bergstrom, C.T. (2010b). The Eigenfactor metrics: A network approach to assessing scholarly journals. College & Research Libraries, 71(3), 236–244.

Yan, E., & Ding, Y. (2010). Weighted citation: An indicator of an article’s prestige. Journal of the American Society for Information Science and Technology, 61(8), 1635–1643.

Yin, C.-Y. (2011). Do Impact Factor, h-index and Eigenfactor of chemical engineering journals correlate well with each other and indicate the journals’ influence and prestige? Current Science, 100(5), 648–653.