
Australasian Journal of Economics Education, Vol. 3, Numbers 1 & 2, 2006

THE RANKING GAME, CLASS, AND SCHOLARSHIP IN AMERICAN MAINSTREAM ECONOMICS

Frederic S. Lee

University of Missouri, Kansas City
E-mail: [email protected]

ABSTRACT

The paper starts with an analysis of the rationale and logic of ranking, which culminates in the conclusion that the criteria used to rank journals, which are then used to rank departments, are influenced by where the top departments publish. In the second section, the intellectual and social organization of mainstream economics is delineated and the role of ranking within it identified; the third section summarizes the empirical research and shows that, irrespective of the ranking criteria used, the outcomes have been a relatively stable hierarchy of journals and departments over the past three decades. Drawing on the previous discussion, the fourth section argues that the rankings of journals and departments contribute to the making and retaining of mainstream economics as a class-hierarchical dependency structured science. Finally, the last section concludes the paper with a discussion of the implications arising from economics being a class and hierarchy-based science.

Keywords: Department, Journal, Ranking, Class, Science

JEL Classification: A14.

1. INTRODUCTION

The American economics profession has been interested in the ranking of economic journals and departments for over thirty years. From the start, the rankings had significant impacts on both individual economists and departments. In fact, the first comprehensive and influential ranking of economic journals and departments, by William Moore (1972, 1973), was precipitated by the desire of neoclassical economists in his department at the University of Houston to cleanse it of Institutionalist economists (Lower, 2004). Moreover, the rankings are used in university promotion and allocation decisions, and in the recent case of Notre Dame to restructure the economics department in favor of mainstream economics.

Yet little has been written that explores in detail the implications of ranking for the organization of economics in the United States, with regard to either mainstream or heterodox economics.1 This paper is a contribution to this little-examined topic as it pertains to the former. That is, the paper explores what kind of impact rankings have on the social production of mainstream economic knowledge. Casual observation suggests that rankings have contributed to creating a scenario where neoclassical economists produce scientific knowledge as an unintended by-product of acquiring invidious social distinctions. But the rankings may also have other consequences—but what? Thus, the paper starts with an analysis of the rationale and logic of ranking, which culminates in the conclusion that the criteria used to rank journals, which are then used to rank departments, are influenced by where the top departments publish. In the second section, the intellectual and social organization of mainstream economics is delineated and the role of ranking within it identified; the third section summarizes the empirical research and shows that, irrespective of the ranking criteria used, the outcomes have been a relatively stable hierarchy of journals and departments over the past three decades. Drawing on the previous discussion, the fourth section argues that the rankings of journals and departments contribute to the making and retaining of mainstream economics as a class-hierarchical dependency structured science. Finally, the last section concludes the paper with a discussion of the implications arising from economics being a class and hierarchy-based science.

1. For a critical examination of the impact of journal and department rankings on economics in the United Kingdom, see Lee (2006).

2. RATIONALE AND LOGIC OF RANKING

In the post-war period 1945 to 1970, American economics departments made clear decisions to hire well-trained neoclassical theorists with proselytizing attitudes to transform the way economic theory was taught to their undergraduate and graduate students. Intermediate theory courses in microeconomics and macroeconomics were introduced, in some cases with a mathematical economics course as a prerequisite; mathematical economics courses became required for undergraduate majors; graduate theory courses became more mathematical; some degree of mathematical preparedness was expected of incoming graduate students; and graduate students were taught that a true scientific economist was one who discarded ideological biases, became detached and objective, and accepted the conclusions of logic and evidence. As a result, economics departments throughout the United States became increasingly homogeneous: neoclassical-theoretical in tone, attitude, and research.2 This also meant a change in the training of graduate students, with an increased focus on a common core of theory supported by extensive training in mathematical and statistical techniques.3 With these forces in place by the 1960s and into the 1970s, departments in many different universities had improved so much—having highly trained faculty publishing in the ‘conventional’ top journals and in the new journals, and having instituted doctoral programs—that it was not clear who in fact were the top departments qua doctoral programs or what were the top journals.

2.1. Ranking Departments

The significant expansion of Ph.D.-granting institutions, from approximately sixty-six in 1940 to ninety-four in 1971, led to questioning of which doctoral programs were, in terms of scholarly and educational quality, truly the top programs: had the well-known established programs retained their places, or had new upstart programs taken some of them? This question was posed, for example, by the National Science Foundation in light of its disbursement of large sums of monies to universities. Other federal agencies and private foundations were also interested in the question because they needed a mechanism to help evaluate requests for funding projects. A second reason for posing the question was to help prospective graduate students select the best program for their needs. The final reason was that the answer could assist university administrators in developing fund-raising strategies and making resource allocation decisions concerning specific graduate programs, and public officials in making policies and funding decisions concerning higher education. This latter reason became progressively more important, to the point that by the 1980s universities used rankings to decide whether departments should exist, be reorganized, or be abolished, while governments adopted educational policies that tied public funding directly to department (or university) rankings. [Cartter, 1966; Roose and Anderson, 1970; Niemi, 1975; Dolan, 1976; Jones, Lindzey, and Coggeshall, 1982; Tschirhart, 1989; Goldberger, Maher, and Flattau, 1995; Scott and Mitias, 1996; and Holcombe, 2004]

2. By 1990 the hegemony of neoclassical theory was so complete that, in line with most American mainstream economists, the American Economic Association Commission on Graduate Education in Economics simply did not recognize that economic theories other than neoclassical economic theory existed, while also noting that graduate and undergraduate programs in the United States were virtually identical in terms of the core theory taught. To put it another way, the Commission simply assumed that all economists spoke the same language, that is, were intellectually-theoretically the same—a conclusion that clearly emerges from the work of Klamer and Colander (1990). Complementary to this are the ongoing efforts to ensure that all American high school students are taught neoclassical theory. This is evident in the Test of Economic Literacy administered to high school students and in the establishment of voluntary content standards for pre-college economics education developed by the National Council on Economic Education. The content of the standards included only neoclassical economic theory and its universal truths (such as scarcity and choice, markets work, marginal analysis, and supply and demand), since the inclusion of alternatives would, it was feared, confuse the students and the teachers and hence prompt them to abandon economics altogether. [Krueger et al., 1991; Hansen, 1991; Kasper et al., 1991; Nelson and Sheffrin, 1991; Siegfried and Meszaros, 1997; Buckles and Watts, 1998; and National Council on Economic Education, 1997]

3. To make room for these changes, history of economic thought and economic history were frequently dropped as required core courses and sometimes dropped altogether from the course offerings. [Barber, 1997; and Aslanbeigui and Naples, 1997]

Underlying the desire to rank departments and their doctoral programs was the fundamental assumption that discipline-specific knowledge, while evolving, was broadly uncontested; that is, the professors and practitioners of each discipline generally accepted and engaged the same body of evolving knowledge, and produced relatively homogeneous scientific output. Without this assumption of knowledge homogeneity, department-program rankings would be largely meaningless. Moreover, by adopting department performance indicators, and hence the assumption, it becomes easy to classify departments and programs that are not keeping up with the evolving theory, techniques, and applications as “less than adequate,” “marginal,” or “not sufficient for doctoral education” without wondering whether the ranking was due to intellectual bias. If the less-than-adequate departments and programs are so classified because of a bias against the kind of economics they support, and with resource allocations directly tied to rankings that are blind to this bias, then one possible secular consequence is the decline of contested knowledge and intellectual-scholarly diversity, and the rise to dominance of an uncontested, unquestioned intellectual orthodoxy. In this situation, thinking “wrong” thoughts becomes a “scholarly” crime punishable by the withdrawal of department-program resources or worse. [Dolan, 1976]

2.2. Ranking Journals

The mainstream economists of the 1960s were generally satisfied that they as a collective knew without a doubt which economic journals were “venerable and prestigious” (Coats, 1971, p. 30) and which were not. This shared tacit knowledge flowed from the initiation of economics graduate students and newly minted Ph.D.s by their professors into the vague, opinionated, and ephemeral folklore of what were the good, not-so-good, and even non-economic journals, which meant that they published in the same journals as did their professors. For example, from 1958 to 1968, the faculty in the top fifteen economics departments (see fn. 19 below) produced 64.5% of the articles in eight core journals, including the American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics. In addition, from 1960 to 1969 these same departments produced 43% of the pages in five core journals, including those mentioned above. Finally, the graduates of these departments published heavily in the same journals, as in the case of the American Economic Review, where 57% of all the authors and 77% of the authors with American doctorates were from these departments. [Moore, 1973; Siegfried, 1972; Hogan, 1973; Eagly, 1974; and Sun, 1975]

The 1960s was also a period when the number of economic journals more than doubled; and this led to some doubts, perhaps, as to which economic journals were prestigious and hence constituted the core journals of the discipline: were the older journals such as the American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics still prestigious, or had they declined and been replaced by newer economic journals? If the concern with invidious comparisons of economic journals extended no further than a game of publication one-upmanship among colleagues, then the answer to the question would matter little. But this is not the case when departments, university administrators, and grant-giving institutions base their tenure, promotion, salary, and grant decisions on the prestige (now equated with the quality of scholarship or the scientific output) of the journals in which one publishes (Centra, 1977; Boyes, Happel, and Hogan, 1984; and Dearden, Taylor, and Thornton, 2001). If the ‘subjective’ prestige of a journal is directly equated with an article’s scholarly quality and hence scientific importance (as opposed to what the article is actually about, irrespective of where it is published), then the prestige ranking of journals becomes a short cut for administrators to evaluate academics and, more significantly, to evaluate and rank departments.

For the ‘short cut’ to be accepted by administrators and economists alike, three intermediate steps had to be taken, the first of which was to connect reputation-based rankings to publications. The initial rankings of economics departments (Hughes, 1925 and 1934; Keniston, 1959; and Cartter, 1966) used informed opinions. However, skepticism was voiced over whether informed opinion would actually identify the high-quality, scholarly productive departments. So Cartter provisionally examined the issue and demonstrated that for six major journals4 there existed a “clear correlation between reputation of a department and the scholarly productivity of its members” (Cartter, 1966, p. 81).5 Moreover, Siegfried (1972) provided evidence that publishing in selected ‘blue ribbon’ journals had a significant positive correlation (.891) with reputation. Finally, in the subsequent 1982 and 1995 National Research Council opinion surveys (Jones, Lindzey, and Coggeshall, 1982; Goldberger, Maher, and Flattau, 1995; and Thursby, 2000), the correlation between reputation and publication, measured in terms of total publications, publications per faculty, and citations per faculty, ranged from .60 to .94.6 But the correlations suggested that publications and informed opinions were less than a near-perfect match. Therefore, the high ranking-reputation accorded to Harvard’s economics department, for example, was still insufficiently ‘objectively’ grounded, which left open the possibility that a lowly regarded economics department with a sufficient number of publications, pages, and/or citations could be its ‘objective’ equal. This possibility was inferred from the studies by Siegfried (1972), Moore (1973), Hogan (1973), and Stolen and Gnuschke (1977), who showed not only that there were some differences between publication-based and reputation-based rankings but also that the former rankings were sensitive to the journals used for the rankings.7

4. The journals used were the American Economic Review, Quarterly Journal of Economics, Journal of Political Economy, Review of Economics and Statistics, Econometrica, and Southern Economic Journal.

5. The opinion surveys of 1966, 1970, 1982, and 1995 generated department rankings based on the quality of the program faculty and on the teaching effectiveness of the doctoral program. The correlation between the two for the 1966 survey was ‘high’, and for the 1970, 1982, and 1995 surveys it was .99, .98, and .98 respectively. Given such a tight fit, combined with the difficulty of easily measuring teaching effectiveness, ranking studies have ignored teaching effectiveness per se and concentrated on quality of faculty as measured by publications.
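To make the statistical claim concrete: the reputation-publication correlations cited above are ordinary Pearson correlations between a department's survey-based reputation score and some publication measure. The following is a minimal illustrative sketch of how such a coefficient is computed, using entirely hypothetical numbers rather than data from any of the cited surveys:

```python
# Minimal sketch of a reputation/publication correlation (hypothetical data).
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical survey reputation scores and publications per faculty member
# for six imaginary departments; not figures from any study cited here.
reputation = [4.8, 4.5, 4.1, 3.6, 3.0, 2.2]
pubs_per_faculty = [9.1, 7.4, 8.0, 5.2, 4.9, 2.8]

r = correlation(reputation, pubs_per_faculty)
print(f"Pearson correlation between reputation and publications: {r:.2f}")
```

A coefficient in the .60-.94 range reported above indicates that reputation and publication measures move together, but, as the text notes, not perfectly.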

A way to significantly reduce this possibility is to weight journal publications by a “journal quality index.” Thus, the second step established that the journals selected for the publication-based ranking of departments represented scholarly quality and hence were prestigious journals.8 This was achieved in all of the twenty-one articles identifying top journals (see Appendix I9) by selecting ‘blue ribbon’ journals based on author institutional affiliation, subjective evaluation such as ‘everyone would agree are core, mainstream, highly respected, quality journals,’ and/or by utilizing a citation count. Finally, the third step was to ensure that the identified prestigious journals generated a ranking of departments that was nearly the same as the reputation-based ranking. This was accomplished in part because the reputation of the ‘blue ribbon’ journals was acquired from the fact that the high-reputation departments published in them, as is especially the case with Harvard and the Quarterly Journal of Economics and with Chicago and the Journal of Political Economy; and in part because quantitative measures such as the number of articles, the number of pages, and citation counts do not indirectly ‘measure’ quality but rather reflect the reputations of journals derived from high-ranking departments.10 With these two steps in place, a tight fit between reputation-based rankings and rankings based on journal publications is assured—see below, Appendix II, and Dusansky and Vernon (1998).

6. There is also a not-insignificant correlation between the quality of program faculty and program size, as measured by the number of faculty, the number of graduate students, and the number of recent graduates. For the 1982 opinion survey the correlations are .61, .56, and .75 respectively; and for the 1995 opinion survey, the correlations are .67, .67, and .81 respectively. Moreover, Thursby (2000) makes an argument that department resources (of which size is one proxy) can account for 90 percent of the variation in the 1995 survey quality scores. Thus it would seem that program size, and more generally department resources, are as well correlated with reputation as are publications. But these correlations have not become factors in ‘explaining’ rankings, perhaps because they imply that the difference between top and low ranked departments is not scholarship and the production of scientific knowledge, or even effectively using given resources, but simply the amount of resources or money at hand.

7. This possibility was also suggested by Thursby (2000) and demonstrated by Kodrzycki and Yu (2005).

8. With this step, the question of why journal publications, rather than other kinds of publications, should count is completely forgotten. Academics who write books generally have fewer journal publications than those who do not. [Clark, Hartnett, and Baird, 1976]

9. The two Appendices referred to in the paper can be found at: http://cas.umkc.edu/econ/economics/faculty/Lee/docs/ranking2ajeeapp.pdf.
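The circularity just described can be made explicit with a small computational sketch. The following toy model uses hypothetical data and a deliberately simplified procedure; it is not the algorithm of any study cited here, though it is loosely in the spirit of iterative citation-weighting schemes such as Liebowitz and Palmer's. It alternates between deriving journal weights from the scores of the departments publishing in them and deriving department scores from weighted journal output:

```python
# Toy model of the journal-ranking/department-ranking circularity
# (all numbers hypothetical; not any cited study's actual method).

# pubs[dept][journal] = hypothetical article counts
pubs = {
    "TopDept":   {"J1": 10, "J2": 2},
    "OtherDept": {"J1": 1,  "J2": 9},
}
journals = ["J1", "J2"]

# Start from a reputation-based prior: TopDept is 'known' to be best.
dept_score = {"TopDept": 1.0, "OtherDept": 0.5}

for _ in range(20):
    # Step 1: a journal's 'quality' weight is the publication-weighted
    # average score of the departments contributing its articles.
    total = {j: sum(p[j] for p in pubs.values()) for j in journals}
    weight = {j: sum(dept_score[d] * pubs[d][j] for d in pubs) / total[j]
              for j in journals}
    # Step 2: a department's score is its output weighted by journal
    # 'quality', normalized to the leader.
    raw = {d: sum(weight[j] * pubs[d][j] for j in journals) for d in pubs}
    top = max(raw.values())
    dept_score = {d: raw[d] / top for d in pubs}

# J1 ends up 'high quality' because TopDept publishes there, and TopDept
# stays on top because its pages appear in the 'high quality' J1.
print(weight, dept_score)
```

Whatever reputational prior is fed in tends to be reproduced at the fixed point (start instead with OtherDept on top and J2 becomes the 'quality' journal), which is the sense in which the ranking criteria are incestuous.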

2.3. Ranking Journals → Ranking Departments → Ranking Journals

For the short cut to actually work, it is necessary to go inside step two and establish that the ‘prestigious’ rank of a journal indicates without question an article’s scholarly quality and importance. This requires two criteria and an assumption to be in place. The first criterion is that it is difficult to publish in the journal because of its rigorous editorial and refereeing process, which eliminates publish-favoritism.11 The second is that the journal’s subject content, whether theoretical or applied, is directly important or relevant to economists, so that the higher the journal’s ranking, the greater the importance of its content to economists.12 The necessary assumption is that the content of the journals being ranked is based on and utilizes, either explicitly and/or implicitly, the same general evolving body of knowledge, theory, and methodology. However, if either criterion fails to hold or there is contested knowledge, then the connection between the prestige of a journal and the scholarly quality of an article it publishes breaks down. In particular, if different journals represent different contested knowledge, then differences in their ranking may reflect subjective or ideological repulsion of the theory employed rather than the quality of the scholarship and its importance to economists. Moreover, if the different knowledge and theory is associated with different subject matter, then for one group of economists, journals embracing the unacceptable knowledge will have unimportant content and hence be unfamiliar to and not cited by them. Therefore in a contested discipline, such as economics, differential journal citation counts indicate which journals embrace the dominant theory most completely and which do not; citation counts do not indicate quality that is independent of the contested knowledge (Beed and Beed, 1996). Consequently, in departments, universities, and higher education systems where tenure, promotions, salaries, and department funding are in part affected by the ranking of the journals in which a professor publishes, publishing in less prestigious journals is not simply a mistake in judgment; it is punishable by the withdrawal of institutional and/or national resources from the individual and/or the department. [Coats, 1971; Berg, 1971; Moore, 1972; Hawkins, Ritter, and Walter, 1973; McDonough, 1975; Bell and Seater, 1978; Gerrity and McKenzie, 1978; Graves, Marchand, and Thompson, 1982; Liebowitz and Palmer, 1984; Tschirhart, 1989; Ellis and Durden, 1991; Laband and Piette, 1994b; Pieters and Baumgartner, 2002; and Brauninger and Haucap, 2003]

10. Citation counts do indicate which economic journals are important to economists. However, economists do not approach journals with a blank slate and simply determine which journals are quality journals and which are not by reading the articles. Rather, as suggested above, they are taught by their professors, and learn by observing where the high-reputation departments publish, which journals are the ‘venerable and prestigious’ ones. In short, economic journals are not just ‘experience’ goods; an emerging economist’s reading material is too important to be left unsupervised and undirected. Consequently, citation counts are not objective, independent indirect measures of quality, but rather a quantitative measure of informed opinion of the best, most useful, and most influential economic journals. There is a further point. Since citation counts are aggregate counts, specialist journals in areas with few practitioners relative to other areas will have smaller citation counts. Hence aggregate citation-count-based journal rankings are biased in favor of journals from larger fields. [Liebowitz and Palmer, 1988; Archibald and Finifter, 1990; Beed and Beed, 1996; Seglen, 1997; and Lubrano, Kirman, Bauwens, and Protopopescu, 2003]

11. This criterion is not, it is claimed, generally met by ‘in-house’ journals, such as the Journal of Political Economy and Quarterly Journal of Economics, or with ‘in-house’ editors, in that economists in the editor’s department are awarded some preferential treatment in terms of acceptance and the average length of their papers. It is also not met when there is non-anonymous reviewing, such as with the American Economic Review. Since this publish-favoritism is also found in other social science disciplines, it would appear difficult to ignore, but economists producing journal-based department rankings do ignore it, or at least rationalize it. [Crane, 1967; Laband, 1985b; McDowell and Amacher, 1986; Braxton, 1986; and Laband and Piette, 1994a]

12. On this ground alone, history of economic thought journals are considered non-prestigious, because mainstream economists generally believe that their content is of no importance to them, since it does not contribute to the development of economic theory or methods. One case of this is Lovell (1973, pp. 39-40), where he excluded history of economic thought articles from his citation study of the quality of economic journals; a second case is Brauninger and Haucap (2003), where they excluded history of thought journals from their study of the reputation and relevance of economic journals. Moreover, this is clearly revealed in citation impact studies, where the impact of History of Political Economy is just about zero. Therefore it is not surprising that the Social Science Citation Index sought to remove (and in fact did so for a period of time) History of Political Economy, which is the only explicit history of thought journal in the index, because of inadequate citations. [Liebowitz and Palmer, 1984; and Laband and Piette, 1994b]

3. ORGANIZATION OF MAINSTREAM ECONOMICS

3.1 Intellectual and Social Organization of Science

In light of this rather critical discussion of ranking, the question that needs to be asked is: what are the rankings actually revealing in terms of the intellectual and social organization of mainstream economics? Answering the question necessitates a brief digression on the organization of science itself. The sciences are social systems of work that produce particular outputs, called scientific knowledge, which are explanations and understandings of some set of real world phenomena. However, scientific knowledge is fallible, and perhaps historically contingent, and hence can be contested; thus it is not some immutable objective stock that grows quantitatively. Rather, scientific knowledge is a partially demarcated body of knowledge that changes unpredictably and qualitatively. In short, what constitutes scientific knowledge has a subjective and a ‘community approval’ component. In this respect, scientific knowledge is the product of an elaborate intellectual and social organization which constitutes the system of work that is, for the most part, embedded in educational systems and their employment markets, known as academic departments. The essential characteristics of a science are that its participants within this system of work see their activities as communal, and hence consider themselves members of a community of scientists, and that the scientists control the way work is carried out, the goals for which it is carried out, and who is employed to carry it out. This further implies that participants engaged in a particular science (or scientific field) are dependent, at least to some degree, on each other in the production of scientific knowledge. One component of this dependency is being able to use another scientist’s research; the second is working on common issues that are relevant to achieving the goals of the scientific community. The former requires that scientists and their research meet community-based acceptable research standards, including competently utilizing acceptable research techniques; the latter requires the existence of a community consensus on what the goal-dependent central issues for research are, so as to ensure, without administrative directive, that colleagues are working on the same and/or broadly supportive issues. Thus the structure of dependency essentially determines the structure of the system of work that produces the scientific knowledge relevant to meeting the goals of the community. And those would-be scientists who do not ‘fit’ into this structure of dependency and do not produce the right kind of knowledge are either marginalized or not permitted to be part of the community.13

Several factors affect the structure of dependency: (1) the nature of the audience for which the scientific output is intended; (2) the degree to which control over the means of production of the scientific knowledge (including the equipment, the techniques, and the laboring skills), the format by which scientific knowledge is reported, and the communication outlets, such as journals, are concentrated in the hands of a few or many; (3) the role of individual and institutional reputations in affecting both the production of scientific knowledge (and particularly what is accepted as scientific knowledge) and the goals of such knowledge; and (4) the role of state power and other organizational power outside the science community in legitimizing, supporting, or otherwise affecting particular reputations, goals, and scientific knowledge.

13. Hence it is possible in a scientific field, such as economics, for the ‘marginalized’ to form their own community of scientists.

Variations in the impact of these factors on the structure of dependency and on the goals of scientific knowledge produce quite different social systems of scientific work. Hence it is possible to have a scientific field whose social system of work is controlled by an elite, is hierarchically structured, is centralized, has a high degree of participant dependency, is legitimized and supported by state-organizational power and monies, and which has an incestuous and evolutionary relationship with the elite. And it is also possible to have a scientific field whose system of work is populated by numerous local schools that are hierarchically structured, have a low degree of participant dependency, and are not legitimized and supported by state-organizational power and monies. Hence the intellectual and social organization of scientific work, that is, the community of scientists, is not naturally given for any scientific field, but is historically and intentionally determined by its participants and the recipients of its scientific knowledge. [Whitley, 1984, 1986, and 1991; and Pickering, 1995]

3.2 Organization of Mainstream Economics

Although the methodology of mainstream economics is grounded in methodological individualism and promotes the individual actor over social interaction and social norms, the actual work activity that produces mainstream scientific output is socially organized. The structural organization of the work activity is, arguably, derived from (but not conflated with) the theoretical organization of mainstream economics. That is, neoclassical theory is arranged in a hierarchical manner. At the top is the theoretical core, which comprises primary theoretical concepts and propositions that are accepted without much disagreement.14 From them, synthetic theoretical propositions are deduced. For example, the concepts of relative scarcity, rationality, optimization, and preference structure and the propositions of convexity, equilibrium, exchange, and technology combine to produce the synthetic propositions of demand curves, supply curves, and market equilibrium. The synthetic propositions in turn are the basis for deriving ‘lower’ level propositions that directly engage issues derived from the economy and the provisioning process. These are embodied in applied economic research, whose generation of empirical (as opposed to theoretical) scientific knowledge is not used to evaluate the core concepts and propositions or even the synthetic propositions.15

14. The theoretical core is not written in stone—there are developments. Particular concepts may be created, developed, and/or modified, such as rationality and bounded rationality. But what is not possible is for any of the core concepts, such as rationality (and any modifications), to be ejected from the core. If this occurred, then all the previous scientific knowledge of mainstream economics would be called into question. Consequently, the theoretical core can (and in fact does) contain concepts that are contradictory.

For this hierarchical theoretical organization to be possible, it is necessary for economists at all levels of economic research to know and work with the same evolving theory, have the same research standards, and utilize the same evolving research techniques.16 It is also necessary that they accept the same broad goals and for the more specific research issues the same set of theoretical propositions. These two requirements are achieved through the homogeneous teaching and intellectual inculcation of graduate students. That is, as noted at the beginning of the paper, graduate students in mainstream graduate programs are uncritically introduced to a pre-established body of theory. Consequently, work involving the theoretical core and resulting synthetic propositions is accepted as eventually having relevance for lower level production of scientific knowledge; and research at the lower levels utilizes directly and/or indirectly the scientific output generated at the higher level. This hierarchical dependency structure works well when accompanied by a hierarchy of intellectual deference in that economists working at the lower levels do not expect their research to question the theory coming from a higher level or to be given the scholarly recognition awarded to the higher level research; and they do not expect that they should have the same scholarly reputation as those economists doing research at a higher level.

For this dual hierarchy dependency structure to work, mainstream economists must respect the hierarchy and maintain their places without question. This social control is, in part, achieved through the process of community indoctrination that starts when graduate programs teach their students what the hierarchy is and that they should defer to it. Specifically, as part of their graduate education, economics students are taught to discriminate among types of scientific knowledge and accordingly to value some more than others. Because this differential valuation is extended to the economists who produce the knowledge, there is also discrimination among economists, in that some are considered more valuable than others. This is strongly reinforced through the control and allocation of jobs, access to the material resources needed to carry out research, and access to the journals and publishers through which research that is more or less in conformity with neoclassical theory is made known and disseminated. Thus, built into this ‘mainstream’ hierarchical dependency structure are discriminatory relationships that are widely accepted as ‘natural’, the way things are to be, and hence are not seen as discriminatory.

15. Thus, this hierarchical theoretical organization of mainstream economics (which is very Lakatosian in structure) essentially protects the core from almost all forms of criticism, particularly those forms that are not empirically based, such as criticism emanating from economic philosophy.

16. There is a debate over whether neoclassical and mainstream economics are the same or different and whether the mainstream is hopelessly fragmented or not—see Colander, Holt, and Rosser (2004), Davis (2006), and Lawson (2006). In this paper, I proceed on the basis that there is no difference, and hence neoclassical and mainstream are treated as the same and used interchangeably. The rationale for doing so is that in Lee and Keen (2004) I argued that neoclassical and mainstream economics are fundamentally the same, and that what differences and fragmentation do exist really are no difference at all. A more detailed support for the position is being assembled in a manuscript on “Neoclassical Microeconomic Theory: A Heterodox Approach” which draws from textbooks as well as journals and research monographs. Clearly, if my position is incorrect, then many of the arguments in this paper will have to be recast.

In this context, the ranking of journals and departments is designed to visibly reflect, reinforce, and internalize the hierarchical dependency structure without explicitly acknowledging the existence of the embedded discriminatory relationships. Thus the discriminatory hierarchy of mainstream economics is transformed into a community concern about reputations based on better-than-you distinctions. Individual reputation and department progress are tied to moving up the hierarchies and to retaining one’s position at the top. Therefore ranking reflects the self-praise of existing invidious distinctions. What is not a concern is the question of the production of scientific knowledge per se; rather, it is assumed that publication in a ranked journal is equivalent to the production of knowledge. Consequently, while seeking invidious distinctions promotes the production of new and differentiated knowledge, such knowledge must remain within the general orbit of neoclassical economics.17 [Beed and Beed, 1996]

17. That is to say, mainstream economics is pluralistic with respect to different research projects that do not go outside its theoretical orbit, such as behavioral economics. But it does not maintain this same open pluralistic attitude towards research agendas that are beyond the pale—see for example Lee (2006).

4. RANKING JOURNALS AND DEPARTMENTS

Coats (1971) suggested that nine journals (see Table 1) constituted the top and leading economic journals. However, given the existence of numerous old and new economic journals, many and perhaps most economists were unconvinced by Coats’s arguments. As a result, some twenty-one different articles emerged over the next thirty years identifying the blue ribbon, core, mainstream economic journals (see Appendix I). The articles have two components: one is the selection of journals, and the second is the ranking of the journals selected. Of the twenty-three different identifications of top economic journals from the twenty-one articles, 9 include all the Social Science Citation Index journals that might be useful to economists (see Appendix I—H, K, L, M, S, T, U, V, Diamond List); 12 are based on authors’ institutional affiliation, inclusion in graduate reading lists, subjective evaluation of top journals, and/or a combination of these and other factors (see Appendix I—A, B, C, D, E, F, G, J, N, P, Q, R); and 2 use a combination of the two approaches (see Appendix I—I, O). As for ranking, eight of the identifications did not rank the journals; rather, they considered them as a whole to be the top journals (see Appendix I—F, G, I, N, O, P, Q, R). Of the remaining fifteen, 11 use various citation algorithms (Appendix I—B, C, H, Diamond List, K, L, M, S, T, U, V), and 4 use institutional affiliation or subjective evaluation (Appendix I—A, D, E, J). The outcome of the selection and ranking processes reveals a relatively stable hierarchy of high-quality, important and low-quality, unimportant economic journals. This is illustrated by reference to the list of the twenty-seven top journals generated by Diamond (1989). The list includes the nine journals of Coats’s 1971 list, the eight blue ribbon journals identified by Conroy and Dusansky (1995), and seventeen of the top journals of the most recent ranking (see Table 1). Moreover, from eight to twenty-two of the Diamond List journals are included in each of the 22 lists of top journals in Appendix I, while nine appear on 75% or more of the lists and fourteen appear on over half of the lists. Thus, there is a significant degree of commonality between the various lists; and embedded in the various lists is a core of nine top journals that does not change. The various lists also established which economic journals, such as history of thought journals, are perceived by mainstream economists as low quality and/or unimportant. [McDonough, 1975; Burton and Phimister, 1995; and Sutter and Kocher, 2001]
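The commonality figures in the preceding paragraph are simple frequency tallies: for each journal, count the number of top-journal lists on which it appears. A minimal sketch of the computation, using a few hypothetical stand-in lists rather than the actual 22 lists of Appendix I:

```python
# Tally how often each journal appears across top-journal lists
# (toy stand-in lists, not the actual lists of Appendix I).
from collections import Counter

lists = [
    {"AER", "JPE", "QJE", "Econometrica"},
    {"AER", "JPE", "QJE", "ReStud"},
    {"AER", "JPE", "QJE", "Econometrica", "ReStud"},
    {"AER", "QJE", "EJ"},
]

appearances = Counter(journal for lst in lists for journal in lst)
threshold = 0.75 * len(lists)
print(appearances)
print("on 75% or more of the lists:",
      sorted(j for j, n in appearances.items() if n >= threshold))
```

The same tally over the actual lists yields the stable core of nine journals the text describes.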

From the 1920s to the 1960s, there existed an informed opinion as to which were the top economics departments with doctoral programs (see Table 2). However, as noted above, this view was open to question by the 1960s; hence the onslaught of fifteen department ranking studies that produced nineteen different rankings. In these studies, the process by which the top Ph.D.-granting economics departments are determined has two steps: first the selection of the departments, followed by the ranking of them. The selection process consists of either including all (or nearly all) the economics departments with doctoral programs (thirteen of the nineteen rankings) or dealing only with the top fifty departments identified by Graves, Marchand, and Thompson (1982) plus an additional number of departments thought to have progressed significantly in recent years (see Appendix II).


Given the departments to rank, the ranking process consists, as noted above, of two approaches: one based on informed opinion (see Appendix II—A, B, C, H, P), and the second based on journal publications, calculated in various ways, in the top economic journals previously determined and delineated in the journal ranking studies (see Appendix II—D-G, I-O, Q-S). Although the ranking approaches differ, the end results are largely the same.18 That is, reputation-based and publication-based rankings and identifications produce the same top departments. Moreover, this continuity among the top departments exists over time, as illustrated in Table 2, which compares the top economics departments in 1925 and 1934, in 1959-1970, and in 1995 to 2003. The minimal variation in top-ranked departments can also be deduced from the fact that over the period 1959 to 2003, fifteen departments appeared among the top twenty-five departments in 16 to 19 of the rankings, while another nine appeared in 11 to 15 of the rankings.19 Thus, the inescapable conclusion is that there has been a high degree of continuity in the ranking of Ph.D.-granting economics departments over the last eighty years (see Table 2).

5. RANKINGS, CLASS, AND MAINSTREAM ECONOMICS

Together, the journal and department ranking studies establish that top departments publish in quality economic journals and quality journals publish economists from the top departments. This symbiotic relationship existed in the 1950s (Cleary and Edwards, 1960; and Yotopoulos, 1961) and, as noted above, in the 1960s. Moreover, it has replicated itself to the present day, as economists in the top twenty-four departments repeatedly directed their efforts to publishing in those quality journals and ended up contributing over fifty percent of the articles and pages, although this significantly under-estimates their dominance of the top journals. For the period 1974 to 1994, the top 15 departments contributed 40% of the pages in the top journals, while the top 24 departments contributed 51%. Moreover, of the pages produced in top journals by the top 50 departments over the period 1971 to 1983, the top 24 departments accounted for 72%. In addition, for the period 1985 to 1990, the top 15 departments contributed nearly 75% of the total pages contributed by American economics departments to the American Economic Review, Econometrica, Economic Journal, Journal of Political Economy, and Quarterly Journal of Economics. Finally, from 1977 to 1997, the top 24 departments contributed more than 51% of the American-based authors in the top fifteen journals; and in 1995 the top 24 departments contributed more than 54% of the American-based authors in the top fifteen journals and 51% in the top thirty journals.20 [Graves, Marchand, and Thompson, 1982; Hirsch, Randall, Brooks, and Moore, 1984; Laband, 1985a; Bairam, 1994; Scott and Mitias, 1996; Hodgson and Rothman, 1999; and Kocher and Sutter, 2001]

18. This conclusion is widely acknowledged—see Smith and Gold (1976), Stolen and Gnuschke (1977), Bell and Seater (1978), Liebowitz and Palmer (1988), Dusansky and Vernon (1998), Thursby (2000), and Coupe (2003).

19. The top fifteen departments include Chicago, Columbia, Harvard, Michigan, Minnesota, MIT, Northwestern, Pennsylvania, Princeton, Rochester, Stanford, UC-Berkeley, UC-Los Angeles, Wisconsin, and Yale. The next nine departments include Brown, Carnegie-Mellon, Cornell, Duke, Maryland, New York, UC-San Diego, Virginia, and Washington. See Appendix II—last column.

Adopting the publishing values of their professors, Ph.D. graduates of the top twenty-four departments contributed more than 70% of the articles and pages in the top journals. From 1970 to 1979, more than 70% of the pages in the American Economic Review, Journal of Political Economy, and Quarterly Journal of Economics were contributed by graduates of the top 24 departments; from 1975 to 1984, 84% of the articles and pages in the top journals contributed by authors who earned Ph.D.s from eighty American doctoral programs came from graduates of the top 24 departments; from 1977 to 1997, more than 67% of all authors and 83% of the authors with American Ph.D.s in the top fifteen journals graduated from the top 24 departments; and in 1995, 70% of the authors with American Ph.D.s in the top thirty journals came from the top fifteen departments. [Hogan, 1986; Laband, 1986; Hodgson and Rothman, 1999; Collins, Cox, and Stango, 2000; and Kocher and Sutter, 2001; also see Cox and Chung, 1991]

Since the top departments have historically employed each other’s graduates and exported their huge surplus to lower ranking departments (while importing very few graduates of lower ranking departments), their faculty are relatively homogeneous in terms of their graduate training, the graduate training they offer, and their publishing expectations.21 In addition, the lower ranking departments have increasingly become clones of the top ranked departments. Finally, over 40% of the editors of the top journals obtained their Ph.D.s from the top twenty-four departments, while 43% of the editors resided in them, which implies that nearly all of the top departments have an editor of a top journal (Yoels, 1974; and Hodgson and Rothman, 1999).

20. More limited but still supportive evidence shows that for the period 1995-2000 the top seven departments—Harvard, MIT, Princeton, Stanford, UC-Berkeley, Chicago, and Yale—contributed 21% to 43% of the pages of the American Economic Review, Econometrica, Journal of Political Economy, Quarterly Journal of Economics, and Review of Economic Studies. Conversely, supporting evidence shows that economists in liberal arts colleges do not generally publish in top journals. [Rupp and McKinney, 2002; and Hartley and Robinson, 1997]

21. Evidence for this is not extensive, but see footnote 2 and Berelson (1960), Crane (1970), Eagly (1974), and Kocher and Sutter (2001). It is of interest to note that in 1992, 70% of economics faculty at Ph.D.-granting American universities obtained their Ph.D.s at one of the top twenty-four departments. [Pieper and Willis, 1999]

With the symbiotic relationship between quality journals and top departments, combined with the homogeneity of graduate training and publishing expectations and the dispersion of journal editors across the top departments, economists in these departments have all the right social characteristics to be successful—they have the right training, employment location, and social connections. In short, the top departments and their faculty form, it would seem, a class with distinct social characteristics that ensure them access to publishing in the top journals, control of the journals themselves, and the prestige to have their work taken more seriously than that of others not of their class. This is evident in the case of articles appearing in top ranked journals: articles written by economists not affiliated with top ranked departments tend to receive fewer citations than articles written by economists affiliated with top departments and appearing in the same journals. Hence department (or academic-class) affiliation affects perceptions of the significance of research rather than the research itself. [Oromaner, 1983]

6. CONCLUSION

The concern guiding the analysis and discussion of this paper has been to understand the impact of the ranking of journals and departments on the way neoclassical economists produce scientific knowledge. An examination of the rationale and logic of the ranking process found it to be incestuous, in that the criteria for ranking journals, which are then used to rank departments, are affected by the journals in which the top departments publish. Thus, the ranking process essentially ensures that top departments publish in quality economic journals and that quality journals publish economists from top departments. There then followed a brief discussion of the intellectual and social organization of mainstream economics as a hierarchical dependency structured science, from which it was argued that the ranking exercises visibly reflect, reinforce, and internalize that structure within the community of mainstream economists. As a result, the community of economists pursues the production of new and differentiated scientific knowledge as a way to acquire invidious distinctions. The survey of the ranking studies clearly showed the logic of the ranking process at work, in that it revealed that over the past thirty years there has been a stable hierarchy of top journals and departments. This ‘empirical’ fact was integrated with evidence from the ranking and collateral studies to argue that economics appears to be a class-structured as well as a hierarchical dependency structured science.


So, if economics is such a science, what are the implications, somewhat speculative as they may be? Since economics is a class-based science, one implication is that there are lower classes: a second class composed of economists with only some of the social characteristics of the top class, such as the graduate training and perhaps some social connection with editors of top journals, but lacking the employment location; a third class which has, to some extent, only the graduate-training social characteristic; and finally, an ‘untouchable’ class of economists with none of the social characteristics. A second implication is that the top departments qua class dictate what the appropriate graduate training is and control access to the top journals, and hence the production of scientific knowledge. Thus scientific knowledge in economics is class-based and hence socially constructed, in that it must exhibit the social characteristics most appropriate for the continued dominance and social control of economics by the top departments (Braxton, 1986).22 Moreover, because journal editors and journal authors share much if not all of the same social characteristics, the acceptance of a paper for publication is not materially affected by whether there is single-blind or double-blind reviewing; and for the same reason publish-favoritism at a specific top journal would be difficult to identify, if it indeed exists at all. The issue is not so much journal-specific publish-favoritism as class publish-favoritism (Crane, 1967; Blank, 1991; and Laband and Piette, 1994a). Consequently, the ranking of journals and departments is ultimately a symbiotic legitimization exercise of class domination and control in economics: the top departments publish in the top journals, which are the top journals because the top departments publish in them. These actions by the top departments have given rise to charges of nepotism and of theoretical and methodological incest and inbreeding that will ultimately impede the production of scientific knowledge (Crane, 1970; Eagly, 1974; Pieper and Willis, 1999). Such charges are largely correct, because continued class dominance and control requires the reproduction of the social characteristics that constitute the top departments.

The final implication is the two-fold negative impact on economics and the production of scientific knowledge. Because emulation of the social characteristics of the top departments is considered important by the lower class departments, they place heavy weight in tenure and promotion decisions on publications in the top journals (Dearden, Taylor, and Thornton, 2001). However, as noted above, at least 50% to 70% of the publication space in the top journals is taken by [or reserved for (?)] the top departments; thus, insistence that lower class economists publish in them is a recipe for professional disappointment if not self-destruction. The point of a class-structured science is to exclude the lower classes from acquiring the characteristics of the top class and hence from becoming part of it. In addition, emulation in the context of class stability and embedded discriminatory relationships transforms neoclassical economists’ interest in knowledge production per se into knowledge production for the purpose of social climbing and creating invidious social distinctions. That is, lower class economists (and their departments) are driven to pursue knowledge production as a way to acquire at least some of the social characteristics of economists in the top departments. Hence they take on their research topics for the purpose of social climbing within their class and across classes; and they use the journal and department rankings to mark and legitimize their progress and their ‘social’ elevation above the less distinguished economists left behind. Similarly, economists in the top departments pursue new and differentiated knowledge production as a way to prevent the lower classes from rising above their station and to acquire in-class social distinctions relative to other top economists; and they too use the journal and department rankings to mark and legitimize their progress. So instead of a scientific community dedicated to the production of scientific knowledge, we have one in which economists (and their departments) are devoted to social climbing and acquiring invidious social distinctions that are publicly endorsed via the ranking game, where the production of knowledge emerges (if at all) as an unintended by-product.

22. Conversely, lower class departments may prefer economic knowledge that is different from what the top departments produce. A hint of this is suggested in Mason, Steagall, and Fabritius (1997), where economics departments that emphasize teaching, or a balance between teaching and research, rank journals somewhat differently from the top major research universities.

While this invidious invisible hand of knowledge production is viewed by mainstream economists as an acceptable, natural way of engaging in the production of scientific knowledge, it is not the only way. Scientific knowledge can be produced under conditions where class divisions, hierarchically organized and valued work, and intellectual deference are absent.23 Whether mainstream economics can be so transformed and still retain its hierarchically arranged theory is, however, an unresolved question that requires further investigation.

23 For example, with regard to heterodox economics, see Lee (2005).


APPENDIX I

Diamond List Journals

The tables below list the studies that place the Diamond List journals among the core mainstream or neoclassical journals. If a study lists fewer than 27 journals, all of them are considered. If a study lists more than 27 journals but places them in ranked groupings, then enough groups are considered to include at least 27 journals in total. If a study simply has a single unranked group of more than 27 journals, then all of the journals are considered. Finally, when a study has a ranking that includes more than 27 journals, the first 27 are taken; and when The American Economic Review and the AER Papers and Proceedings appear separately among the first 27 journals, they are combined into a single journal and the 28th journal is included. The next-to-final row lists the number of journals that were ranked or listed in each article; and the notes describe the criteria used by each study to rank or include the journals.
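To make the inclusion rule above concrete, the following Python sketch restates it. It is illustrative only: the function name, its arguments, and the group structure are hypothetical placeholders and not part of any of the studies keyed below.

```python
def journals_considered(journals, cutoff=27, groups=None, ranked=True):
    """Apply the Appendix I inclusion rule to one study's journal list.

    journals: the study's journals, in rank order when ranked=True;
    groups: optional list of group sizes when the study reports ranked
    groupings rather than a strict ordering.
    """
    # A list of at most 27 journals, or a single unranked group of any
    # size: every journal is considered.
    if len(journals) <= cutoff or not ranked:
        return list(journals)

    # Ranked groupings: add whole groups until at least 27 journals
    # are included in total.
    if groups is not None:
        taken, i = [], 0
        for size in groups:
            taken.extend(journals[i:i + size])
            i += size
            if len(taken) >= cutoff:
                break
        return taken

    # A strict ranking of more than 27 journals: take the first 27, but
    # when the AER and the AER Papers and Proceedings both appear there,
    # merge them into one journal and pull in the 28th journal as well.
    top = list(journals[:cutoff])
    if ("American Economic Review" in top
            and "AER Papers and Proceedings" in top):
        top.remove("AER Papers and Proceedings")
        top += list(journals[cutoff:cutoff + 1])
    return top
```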


Diamond List Journals, 1989*

Studies: A (1972), B (1972), C (1972), D (1973), E (1975), F (1975), G (1982), H (1984)

American Economic Review X X X X X X X X
Brookings Papers on Economic Activity X X
Canadian Journal of Economics X X X X X
Economica X X X X X X X X
Economic Inquiry X X X X X
Economic Journal X X X X X X X X
Econometrica X X X X X X X X
Economics Letters
European Economic Review
International Economic Review X X X X X X X X
Journal of Development Economics
Journal of Econometrics X
Journal of Economic Literature X X X X X
Journal of Economic Theory X X X X X X X
Journal of Financial Economics X
Journal of International Economics X X
Journal of Labor Economics
Journal of Law and Economics X X X X X X X X
Journal of Mathematical Economics X
Journal of Monetary Economics X
Journal of Political Economy X X X X X X X X
Journal of Public Economics X
Oxford Economic Papers X X X X X X X
Quarterly Journal of Economics X X X X X X X X
Rand Journal of Economics X X
Review of Economics and Statistics X X X X X X X X
Review of Economic Studies X X X X X X X X
Number of journals ranked or listed: 50 35 48 85 70 24 24 107

Note:
* Inclusion based on their citation frequency as well as citation impact and self-citation (negatively).
A. Journals chosen based on author institutional affiliation and other factors; rank based on institutional affiliation and other factors.
B. Ranking based on the number of times an article from the journal was included in graduate reading lists, 1963-1972.
C. Inclusion derived from Moore (1972); ranking based on citation counts of the journals referenced in the American Economic Review, Econometrica, and Economic Journal, which were arbitrarily chosen.
D. Inclusion and ranking based on the subjective evaluation of the journals.
E. Inclusion derived from Hawkins (1973); ranking based on a mixture of the factors used in articles A-D.
F. Inclusion based on the top journals in studies A-D.
G. Used the same journals as Niemi (1975).
H. Included all SSCI journals that might be useful to economists; ranked based on citations.


Diamond List Journals, 1989* (cont.)

Studies: I (1985), J (1987), K (1994), L (1994), M (1995), N (1995), O (1995)

American Economic Review X X X X X X X
Brookings Papers on Economic Activity X X X X
Canadian Journal of Economics
Economica X X X
Economic Inquiry X X
Economic Journal X X X X X X
Econometrica X X X X X X X
Economics Letters
European Economic Review
International Economic Review X X X X X X
Journal of Development Economics
Journal of Econometrics X X X X
Journal of Economic Literature X X X X X
Journal of Economic Theory X X X X X X X
Journal of Financial Economics X X X X
Journal of International Economics X X
Journal of Labor Economics X X X
Journal of Law and Economics X X X X
Journal of Mathematical Economics X X X
Journal of Monetary Economics X X X X X
Journal of Political Economy X X X X X X X
Journal of Public Economics X X X
Oxford Economic Papers X X
Quarterly Journal of Economics X X X X X X X
Rand Journal of Economics X X X X X
Review of Economics and Statistics X X X X X X X
Review of Economic Studies X X X X X X X
Number of journals ranked or listed: 27 130 129 129 9 8 34

Note:
* Inclusion based on their citation frequency as well as citation impact and self-citation (negatively).
I. Used the same journals as in Niemi (1975) plus 3 additional journals ranked in the top 10 by Liebowitz and Palmer (1984).
J. Inclusion and ranking based on the subjective evaluation of the journals.
K. Followed Liebowitz and Palmer (1984) and included all SSCI journals that might be useful to economists; ranked based on citations.
L. Followed Liebowitz and Palmer (1984) and included all SSCI journals that might be useful to economists; ranked based on citations.
M. Selected from the SSCI journals those that were clearly economics and had the highest citation counts.
N. Journals included derived in part from Graves, Marchand, and Thompson (1982) and Laband (1985); the journals are identified as the ‘Blue Ribbon journals’.
O. Union of the Blue Ribbon journals, Graves, Marchand, and Thompson (1982), and Liebowitz and Palmer (1984).


Diamond List Journals, 1989* (cont.)

Studies: P (1996), Q (1998), R (1999), S (1999), T (2001)

American Economic Review X X X X X
Brookings Papers on Economic Activity X X
Canadian Journal of Economics
Economica X
Economic Inquiry X
Economic Journal X X X X X
Econometrica X X X
Economics Letters
European Economic Review X X
International Economic Review
Journal of Development Economics
Journal of Econometrics X X
Journal of Economic Literature X X
Journal of Economic Theory X
Journal of Financial Economics X X X
Journal of International Economics X
Journal of Labor Economics X X
Journal of Law and Economics X X X
Journal of Mathematical Economics
Journal of Monetary Economics X X X
Journal of Political Economy X X X X X
Journal of Public Economics X
Oxford Economic Papers X X
Quarterly Journal of Economics X X X X X
Rand Journal of Economics X X X
Review of Economics and Statistics X X X
Review of Economic Studies X X X X X
Number of journals ranked or listed: 36 8 10 137 15

Note:
* Inclusion based on their citation frequency as well as citation impact and self-citation (negatively).
P. Journals included derived in part from Graves, Marchand, and Thompson (1982), with 15 added journals subjectively evaluated as newer, highly respected journals.
Q. Subjectively chosen as the core journals.
R. Subjectively chosen as the core journals.
S. Included all SSCI journals that might be useful to economists; ranked based on citations.
T. Top 15 journals selected from all SSCI journals that might be useful to economists; ranked based on citations.


Diamond List Journals, 1989* (cont.)

Studies: U (2002), V (2003); final column: total number of times each Diamond List journal is listed (out of the 22 studies)

American Economic Review X X 22
Brookings Papers on Economic Activity X 9
Canadian Journal of Economics 5
Economica 12
Economic Inquiry 8
Economic Journal X 20
Econometrica X X 21
Economics Letters X 1
European Economic Review X 3
International Economic Review X 15
Journal of Development Economics 0
Journal of Econometrics X 8
Journal of Economic Literature X X 14
Journal of Economic Theory X 16
Journal of Financial Economics X 9
Journal of International Economics 5
Journal of Labor Economics X 6
Journal of Law and Economics X 16
Journal of Mathematical Economics 4
Journal of Monetary Economics X X 11
Journal of Political Economy X X 22
Journal of Public Economics X 6
Oxford Economic Papers 11
Quarterly Journal of Economics X X 22
Rand Journal of Economics X 11
Review of Economics and Statistics X 19
Review of Economic Studies X X 22
Number of journals ranked or listed: 10 159

Note:
* Inclusion based on their citation frequency as well as citation impact and self-citation (negatively).
U. Top 10 journals selected from all SSCI journals that might be useful to economists; ranked based on citations.
V. Included all SSCI journals that might be useful to economists; ranked based on citations.


A. Moore, W. J. 1972. “The Relative Quality of Economics Journals: A Suggested Rating System.” Western Economic Journal 10.2 (June): 156 – 169, Table 1, p. 160.
B. Skeels, J. W. and Taylor, R. A. 1972. “The Relative Quality of Economic Journals: An Alternative Rating System.” Western Economic Journal 10.4 (December): 470 – 473.
C. Billings, B. B. and Viksnins, G. J. 1972. “The Relative Quality of Economic Journals: An Alternative Rating System.” Western Economic Journal 10.4 (December): 467 – 469.
D. Hawkins, R. G., Ritter, L. S., and Walter, I. 1973. “What Economists Think of Their Journals.” Journal of Political Economy 81.4 (July-August): 1017 – 1032, Table 1, pp. 1020 – 1022, Mean Rank column.
E. McDonough, C. C. 1975. “The Relative Quality of Economics Journals Revisited.” The Quarterly Review of Economics and Business 15.1 (Spring): 91 – 97, Table 3, pp. 94 – 95.
F. Niemi, A. W. 1975. “Journal Publication Performance During 1970 – 1974: The Relative Output of Southern Economics Departments.” Southern Economic Journal 42.1 (July): 97 – 106.
G. Graves, P. E., Marchand, J. R., and Thompson, R. 1982. “Economics Departmental Rankings: Research Incentives, Constraints, and Efficiency.” American Economic Review 72.5 (December): 1131 – 1141, p. 1132.
H. Liebowitz, S. J. and Palmer, J. P. 1984. “Assessing the Relative Impacts of Economics Journals.” Journal of Economic Literature 22.1 (March): 77 – 88, Table 2, column 2, pp. 84 – 85.
I. Laband, D. N. 1985. “An Evaluation of 50 ‘Ranked’ Economics Departments by Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success.” Southern Economic Journal 52.1 (July): 216 – 240.
*. Diamond, A. M. 1989. “The Core Journals in Economics.” Current Contents 21 (January): 4 – 11, Table 1, p. 6.
J. Malouin, J.-L. and Outreville, J.-F. 1987. “The Relative Impact of Economics Journals: A Cross-Country Survey and Comparison.” Journal of Economics and Business 39.3 (August): 267 – 277.
K. Laband, D. N. and Piette, M. J. 1994. “The Relative Impacts of Economics Journals: 1970 – 1990.” Journal of Economic Literature 32.2 (June): 640 – 666, Table 2, column 3, pp. 648 – 651.
L. Laband, D. N. and Piette, M. J. 1994. “The Relative Impacts of Economics Journals: 1970 – 1990.” Journal of Economic Literature 32.2 (June): 640 – 666, Table A2, column 3, pp. 663 – 666.
M. Stigler, G. J., Stigler, S. M., and Friedland, C. 1995. “The Journals of Economics.” Journal of Political Economy 103.2 (April): 331 – 359, Table 1, p. 336.
N. Conroy, M. E. and Dusansky, R. 1995. “The Productivity of Economics Departments in the U.S.: Publications in the Core Journals.” Journal of Economic Literature 33.4 (December): 1966 – 1971, p. 1966.
O. Conroy, M. E. and Dusansky, R. 1995. “The Productivity of Economics Departments in the U.S.: Publications in the Core Journals.” Journal of Economic Literature 33.4 (December): 1966 – 1971, p. 1971.
P. Scott, L. C. and Mitias, P. M. 1996. “Trends in Rankings of Economics Departments in the U.S.: An Update.” Economic Inquiry 34 (April): 378 – 400, p. 379.
Q. Elliott, C., Greenaway, D., and Sapsford, D. 1998. “Who’s Publishing Who? The National Composition of Contributors to some Core US and European Journals.” European Economic Review 42.1: 201 – 206.
R. Kalaitzidakis, P., Mamuneas, T. P., and Stengos, T. 1999. “European Economics: An Analysis Based on Publications in Core Journals.” European Economic Review 43: 1150 – 1168.
S. Hodgson, G. M. and Rothman, H. 1999. “The Editors and Authors of Economics Journals: A Case of Institutional Oligopoly?” The Economic Journal 109 (February): F165 – F186, Table 1, p. F168.
T. Kocher, M. G. and Sutter, M. 2001. “The Institutional Concentration of Authors in Top Journals of Economics During the Last Two Decades.” The Economic Journal 111 (June): F405 – F421, Table 1, p. F408.
U. Kocher, M. G., Luptacik, M., and Sutter, M. 2002. “Measuring Productivity of Research in Economics: A Cross-Country Study Using DEA.” Unpublished, Table 1, p. 4. http://homepage.uibk.ac.at/homepage/c404/c40433/kls.pdf.
V. Kalaitzidakis, P., Mamuneas, T. P., and Stengos, T. 2003. “Rankings of Academic Journals and Institutions in Economics.” Journal of the European Economic Association 1.6 (December): 1346 – 1366, Table 1, pp. 1349 – 1351.


APPENDIX II

Top 25 Economic Departments in the United States with Ph.D. Programs

The tables below list the studies that identified the top 25 economics departments in the United States with Ph.D. programs. If a study lists more than 25 departments but places them in ranked groupings, then enough groups are considered to include at least 25 departments in total. More generally, when a study has a ranking that includes more than 25 departments, the first 25 are listed. The next-to-final row lists the number of departments that are ranked or listed in each article. The studies generally included all the Ph.D. programs in the United States, but with some exceptions. The notes describe the criteria used by each study to rank or include the departments. If the criterion for ranking departments is based on selecting and ranking the top journals, the criteria for their selection and ranking are given in Appendix I.
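The ‘Total Number of Times in the Top 25’ column at the end of this appendix is, in effect, a tally of set memberships across the nineteen studies. A minimal sketch of that computation follows; the study labels and department sets are made-up placeholders standing in for the actual appendix data.

```python
from collections import Counter

# Each study maps to the set of departments it places in its top 25;
# the two studies and their membership sets here are illustrative only.
studies = {
    "A_1959": {"Chicago", "Harvard", "Yale"},
    "B_1966": {"Chicago", "MIT", "Yale"},
}

# Count, for every department, how many of the studies list it.
appearances = Counter(
    dept for top25 in studies.values() for dept in top25
)

for dept, n in appearances.most_common():
    print(f"{dept}: in the top 25 of {n} of {len(studies)} studies")
```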

Departments*
Studies: A (1959), B (1966), C (1970), D (1973), E (1975), F (1982)

Brown X X X X
Boston U
California Institute of Technology
Carnegie-Mellon X X X X X
Chicago X X X X X X
Columbia X X X X X
Cornell X X X X X
CUNY
Duke X X X
Florida
Harvard X X X X X X
Illinois-Urbana X X X X X
Indiana X
Iowa
Iowa State X X
Johns Hopkins X X X X
Maryland-College Park X X
Michigan X X X X X X
Michigan State X X X X
Minnesota X X X X X X
MIT X X X X X X
New York X X
North Carolina X X X
Northwestern X X X X X X
Ohio State X
Pennsylvania X X X X X X
Pennsylvania State X
Pittsburgh
Princeton X X X X X X
Purdue X X X X
Rice
Rochester X X X X X
Rutgers
Stanford X X X X X X
SUNY-Buffalo X
SUNY-Stony Brook
Texas
Texas A&M X
Tulane
UC-Berkeley X X X X X X
UC-Los Angeles X X X X X
UC-San Diego
UC-Santa Barbara
UNC-Chapel Hill X
USC
Vanderbilt X X X X
Virginia X X X X X
Washington X X X X X
Washington, St. Louis X
Wisconsin-Madison X X X X X X
Yale X X X X X X
Total Number of Programs: 71 91 94 88 116

Note: * Criteria for Selection and Ranking
A. In principle all doctoral programs were considered; ranking based on the opinion of the chairmen of the economics departments of 25 leading universities.
B. Selected nearly all doctoral programs; ranking is based on informed opinion derived from a questionnaire.
C. Selected nearly all doctoral programs; ranking is based on informed opinion derived from a questionnaire.
D. Selected nearly all doctoral programs; ranking is based on the number of faculty publications in the top 9 journals; for the selection of journals and their ranking see C in Appendix I.
E. Selected nearly all doctoral programs; ranking is based on the number of faculty publications in 24 journals; for the selection of the journals, see F in Appendix I.
F. Selected nearly all doctoral programs; ranking is based on the number of faculty pages in top journals; used the same journals as Niemi (1975), see F in Appendix I.


Departments
Studies: G (1982), H (1982), I (1984), J (1985), K (1985)

Brown X X
Boston U
California Institute of Technology X
Carnegie-Mellon X X X X
Chicago X X X X X
Columbia X X X X X
Cornell X X X X
CUNY X
Duke X
Florida
Harvard X X X X X
Illinois-Urbana X X
Indiana
Iowa X
Iowa State
Johns Hopkins X X X
Maryland-College Park X X
Michigan X X X
Michigan State
Minnesota X X X X X
MIT X X X X X
New York X X X X X
North Carolina X X
Northwestern X X X X X
Ohio State X X X
Pennsylvania X X X X X
Pennsylvania State X
Pittsburgh
Princeton X X X X X
Purdue X X
Rice X
Rochester X X X X
Rutgers X
Stanford X X X X X
SUNY-Buffalo
SUNY-Stony Brook X
Texas
Texas A&M X X X
Tulane X
UC-Berkeley X X X X
UC-Los Angeles X X X X X
UC-San Diego X X X X X
UC-Santa Barbara
UNC-Chapel Hill X
USC
Vanderbilt
Virginia X X X X
Washington X X X
Washington, St. Louis
Wisconsin-Madison X X X X X
Yale X X X X X
Total Number of Programs: 116 93 119 50 50

Note: * Criteria for Selection and Ranking
G. Selected nearly all doctoral programs; ranking is based on the pages per faculty member in top journals; used the same journals as Niemi (1975), see F in Appendix I.
H. Selected nearly all doctoral programs; ranking is based on informed opinion derived from a questionnaire.
I. Selected nearly all doctoral programs; ranking is based on the pages per faculty member in top journals; used the same journals as Niemi (1975), see F in Appendix I.
J. Selected the top fifty departments as identified by Graves, Marchand, and Thompson (1982); the ranking was based on the number of faculty pages published in 27 top journals; used the same journals as in Niemi (1975) plus 3 additional journals ranked in the top 10 by Liebowitz and Palmer (1984).
K. Selected the top fifty departments as identified by Graves, Marchand, and Thompson (1982); the ranking was based on the number of faculty and graduate pages published in 27 top journals, citation counts, and graduate placement; used the same journals as in Niemi (1975) plus 3 additional journals ranked in the top 10 by Liebowitz and Palmer (1984).


Departments
Studies: L (1989), M (1989), N (1995), O (1995), P (1995), Q (1996)

Brown X X X X
Boston U X X X
California Institute of Technology X X
Carnegie-Mellon X X X X X
Chicago X X X X X X
Columbia X X X X
Cornell X X X X
CUNY
Duke X X X X X X
Florida X X
Harvard X X X X X X
Illinois-Urbana X X
Indiana X
Iowa
Iowa State
Johns Hopkins X
Maryland-College Park X X X X X
Michigan X X X X X
Michigan State X
Minnesota X X X X X X
MIT X X X X X X
New York X X X X
North Carolina
Northwestern X X X X X X
Ohio State X X X
Pennsylvania X X X X X X
Pennsylvania State
Pittsburgh X X
Princeton X X X X X
Purdue
Rice
Rochester X X X X X X
Rutgers
Stanford X X X X X X
SUNY-Buffalo
SUNY-Stony Brook
Texas X X X
Texas A&M X X
Tulane
UC-Berkeley X X X X X X
UC-Los Angeles X X X X X X
UC-San Diego X X X X
UC-Santa Barbara X X
UNC-Chapel Hill X
USC X
Vanderbilt
Virginia X X X
Washington X X X X
Washington, St. Louis
Wisconsin-Madison X X X X X X
Yale X X X X X X
Total Number of Programs: 124 124 70 70 107 108

Note: * Criteria for Selection and Ranking
L. Selected nearly all doctoral programs; ranking based on the total number of faculty articles derived from the 107 journals in Liebowitz and Palmer (1984).
M. Selected nearly all doctoral programs; ranking based on articles per faculty member, derived from the 107 journals in Liebowitz and Palmer (1984).
N. Selected the top fifty departments as identified by Graves, Marchand, and Thompson (1982) plus twenty departments thought to have progressed significantly in recent years; the ranking was based on the number of faculty pages combined with pages per faculty published in the 8 Blue Ribbon journals.
O. Selected the top fifty departments as identified by Graves, Marchand, and Thompson (1982) plus twenty departments thought to have progressed significantly in recent years; the ranking was based on the number of faculty pages combined with pages per faculty published in 34 journals that were a union of the Blue Ribbon journals, Graves, Marchand, and Thompson (1982), and Liebowitz and Palmer (1984).
P. Selected nearly all doctoral programs; ranking is based on informed opinion derived from a questionnaire.
Q. Selected nearly all doctoral programs; ranking is based on total faculty pages in 36 journals; the journals included were derived in part from Graves, Marchand, and Thompson (1982), with 15 added journals subjectively evaluated as newer, highly respected journals.


Departments
Studies: R (1998), S (2003); final column: total number of times each department appears in the top 25 (out of the 19 studies)

Brown X 11
Boston U X X 5
California Institute of Technology 3
Carnegie-Mellon X 15
Chicago X X 19
Columbia X X 17
Cornell X 14
CUNY 1
Duke X 11
Florida X 3
Harvard X X 19
Illinois-Urbana 9
Indiana 2
Iowa 1
Iowa State 2
Johns Hopkins X 9
Maryland-College Park X X 11
Michigan X X 16
Michigan State X 6
Minnesota X X 19
MIT X X 19
New York X X 13
North Carolina 5
Northwestern X X 19
Ohio State X 8
Pennsylvania X X 19
Pennsylvania State 2
Pittsburgh X 3
Princeton X X 18
Purdue 6
Rice 1
Rochester X X 17
Rutgers 1
Stanford X X 19
SUNY-Buffalo 1
SUNY-Stony Brook 1
Texas X X 5
Texas A&M 6
Tulane 1
UC-Berkeley X X 18
UC-Los Angeles X X 18
UC-San Diego X X 11
UC-Santa Barbara 2
UNC-Chapel Hill 3
USC 1
Vanderbilt 4
Virginia X 13
Washington 12
Washington, St. Louis 1
Wisconsin-Madison X X 19
Yale X X 19
Total Number of Programs: 80 74

Note: * Criteria for Selection and Ranking
R. Selected the top fifty departments as identified by Graves, Marchand, and Thompson (1982) plus thirty departments thought to have progressed significantly in recent years; the ranking was based on the combination of total faculty pages and pages per faculty in the ‘Blue Ribbon journals’.
S. Selected nearly all doctoral programs; ranking is based on total faculty pages in 30 journals, derived from all SSCI journals that might be useful to economists and ranked based on citations.

A. Keniston, H. 1959. Graduate Study and Research in the Arts and Sciences at the University of Pennsylvania. Philadelphia: University of Pennsylvania Press, p. 129.
B. Cartter, A. M. 1966. An Assessment of Quality in Graduate Education. Washington, D.C.: American Council on Education, ‘Leading Departments Rated Effectiveness of Graduate Faculty’, p. 34.
C. Roose, K. D. and Andersen, C. J. 1970. A Rating of Graduate Programs. Washington, D.C.: American Council on Education, ‘Leading Institutions by Rated Quality of Graduate Faculty’, p. 58.
D. Moore, W. J. 1973. “The Relative Quality of Graduate Programs in Economics, 1958 – 1972: Who Published and Who Perished.” Western Economic Journal 11.1 (March): 1 – 23, Table 4, column 6, p. 16.
E. Niemi, A. W. 1975. “Journal Publication Performance During 1970 – 1974: The Relative Output of Southern Economics Departments.” Southern Economic Journal 42.1 (July): 97 – 106, Table II, p. 101.
F. Graves, P. E., Marchand, J. R., and Thompson, R. 1982. “Economics Departmental Rankings: Research Incentives, Constraints, and Efficiency.” American Economic Review 72.5 (December): 1131 – 1141, Table 1, p. 1133.
G. Graves, P. E., Marchand, J. R., and Thompson, R. 1982. “Economics Departmental Rankings: Research Incentives, Constraints, and Efficiency.” American Economic Review 72.5 (December): 1131 – 1141, Table 2, p. 1134.
H. Jones, L. V., Lindzey, G., and Coggeshall, P. E. (eds.) 1982. An Assessment of Research-Doctorate Programs in the United States: Social and Behavioral Sciences. Washington, D.C.: National Academy Press, Table 4.1, column 8, pp. 54 – 63.
I. Hirsch, B. T., Randall, A., Brooks, J., and Moore, J. B. 1984. “Economics Departmental Rankings: Comment.” American Economic Review 74.4 (September): 822 – 826, Table 1, pp. 823 – 824.
J. Laband, D. N. 1985. “An Evaluation of 50 ‘Ranked’ Economics Departments—By Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success.” Southern Economic Journal 52.1 (July): 216 – 240, Table I, p. 220.
K. Laband, D. N. 1985. “An Evaluation of 50 ‘Ranked’ Economics Departments—By Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success.” Southern Economic Journal 52.1 (July): 216 – 240, Table XVI, pp. 238 – 239.
L. Tschirhart, J. 1989. “Ranking Economics Departments in Areas of Expertise.” Journal of Economic Education 20.2 (Spring): 199 – 222, Table 1, column 1, pp. 203 – 206.
M. Tschirhart, J. 1989. “Ranking Economics Departments in Areas of Expertise.” Journal of Economic Education 20.2 (Spring): 199 – 222, Table 1, column 2, pp. 203 – 206.
N. Conroy, M. E. and Dusansky, R. 1995. “The Productivity of Economics Departments in the U.S.: Publications in Core Journals.” Journal of Economic Literature 33.4 (December): 1966 – 1971, Table 1, Mean Rank column, p. 1969.
O. Conroy, M. E. and Dusansky, R. 1995. “The Productivity of Economics Departments in the U.S.: Publications in Core Journals.” Journal of Economic Literature 33.4 (December): 1966 – 1971, Appendix, column C, p. 1971.
P. Goldberger, M. L., Maher, B. A., and Flattau, P. E. (eds.) 1995. Research-Doctorate Programs in the United States: Continuity and Change. Washington, D.C.: National Academy Press, Appendix Table H – 5, pp. 187 – 196.
Q. Scott, L. C. and Mitias, P. M. 1996. “Trends in Rankings of Economics Departments in the U.S.: An Update.” Economic Inquiry 34 (April): 378 – 400, Table 1, pp. 380 – 383.
R. Dusansky, R. and Vernon, C. J. 1998. “Rankings of U.S. Economics Departments.” Journal of Economic Perspectives 12.1 (Winter): 157 – 170, Table 1, first column, p. 159.
S. Kalaitzidakis, P., Mamuneas, T. P., and Stengos, T. 2003. “Rankings of Academic Journals and Institutions in Economics.” Journal of the European Economic Association 1.6 (December): 1346 – 1366, Table 3, pp. 1357 – 1360.


REFERENCES

Archibald, R. B. and Finifter, D. H. 1990. “Multivariate Citations Functions and Journal Rankings.” Eastern Economic Journal 16.2 (April-June): 151 – 158.
Aslanbeigui, N. and Naples, M. I. 1997. “The Changing Status of the History of Thought in Economics Curricula.” In Borderlands of Economics: Essays in Honor of Daniel R. Fusfeld, pp. 131 – 150. Edited by N. Aslanbeigui and Y. B. Choi. London: Routledge.
Bairam, E. I. 1994. “Institutional Affiliation of Contributors in Top Economic Journals, 1985 – 1990.” Journal of Economic Literature 32.2 (June): 674 – 679.
Barber, W. J. 1997. “Reconfiguration in American Academic Economics: A General Practitioner’s Perspective.” Daedalus 126.1 (Winter): 87 – 103.
Beed, C. and Beed, C. 1996. “Measuring the Quality of Academic Journals: The Case of Economics.” Journal of Post Keynesian Economics 18.3 (Spring): 369 – 396.
Bell, J. G. and Seater, J. J. 1978. “Publishing Performance: Departmental and Individual.” Economic Inquiry 16.4 (October): 599 – 615.
Berelson, B. 1960. Graduate Education in the United States. New York: McGraw-Hill Book Company, Inc.
Berg, S. V. 1971. “Increasing the Efficiency of the Economics Journal Market.” Journal of Economic Literature 9.3 (September): 798 – 813.
Blank, R. M. 1991. “The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from The American Economic Review.” American Economic Review 81.5 (December): 1041 – 1067.
Boyes, W. J., Happel, S. K., and Hogan, T. D. 1984. “Publish or Perish: Fact or Fiction?” Journal of Economic Education 15.2 (Spring): 136 – 141.
Brauninger, M. and Haucap, J. 2003. “Reputation and Relevance of Economics Journals.” Kyklos 56.2: 175 – 198.
Braxton, J. M. 1986. “The Normative Structure of Science: Social Control in the Academic Profession.” In Higher Education: Handbook on Theory and Research, vol. II, pp. 309 – 357. Edited by J. C. Smart. New York: Agathon Press, Inc.
Buckles, S. and Watts, M. 1998. “National Standards in Economics, History, Social Studies, Civics, and Geography: Complementarities, Competition, or Peaceful Coexistence?” Journal of Economic Education 29.2 (Spring): 157 – 166.
Burton, M. P. and Phimister, E. 1995. “Core Journals: A Reappraisal of the Diamond List.” Economic Journal 105 (March): 361 – 373.
Cartter, A. M. 1966. An Assessment of Quality in Graduate Education. Washington, D.C.: American Council on Education.
Centra, J. A. 1977. How Universities Evaluate Faculty Performance: A Survey of Department Heads. Princeton: Educational Testing Service.
Clark, M. J., Hartnett, R. T., and Baird, L. L. 1976. Assessing Dimensions of Quality in Doctoral Education: A Technical Report of a National Study in Three Fields. Princeton: Educational Testing Service.
Cleary, F. R. and Edwards, D. J. 1960. “The Origins of the Contributors to the A.E.R. During the ‘Fifties.” American Economic Review 50.5 (December): 1011 – 1014.
Coats, A. W. 1971. “The Role of Scholarly Journals in the History of Economics: An Essay.” Journal of Economic Literature 9.1 (March): 29 – 44.
Colander, D., Holt, R. P. F., and Rosser, J. B. 2004. “The Changing Face of Mainstream Economics.” Review of Political Economy 16.4 (October): 485 – 499.


Collins, J. T., Cox, R. G., and Stango, V. 2000. “The Publishing Patterns of Recent Economics Ph.D. Recipients.” Economic Inquiry 38.2 (April): 358 – 367.
Conroy, M. E. and Dusansky, R. 1995. “The Productivity of Economics Departments in the U.S.: Publications in the Core Journals.” Journal of Economic Literature 33.4 (December): 1966 – 1971.
Coupe, T. 2003. “Revealed Performances: Worldwide Rankings of Economists and Economics Departments, 1990 – 2000.” Journal of the European Economic Association 1.6 (December): 1309 – 1345.
Cox, R. A. K. and Chung, K. H. 1991. “Patterns of Research Output and Author Concentration in the Economics Literature.” The Review of Economics and Statistics 73.4 (November): 740 – 747.
Crane, D. 1967. “The Gatekeepers of Science: Some Factors Affecting the Selection of Articles for Scientific Journals.” The American Sociologist 2.4 (November): 195 – 201.
Crane, D. 1970. “The Academic Marketplace Revisited: A Study of Faculty Mobility Using the Cartter Ratings.” The American Journal of Sociology 75.6 (May): 953 – 964.
Davis, J. B. 2006. “The Turn in Economics: Neoclassical Dominance to Mainstream Pluralism?” Journal of Institutional Economics 2.1 (April): 1 – 20.
Dearden, J., Taylor, L., and Thornton, R. 2001. “A Benchmark Profile of Economics Departments in 15 Private Universities.” Journal of Economic Education 32.4 (Fall): 387 – 396.
Diamond, A. 1989. “The Core Journals in Economics.” Current Contents 21 (January): 4 – 11.
Dolan, W. P. 1976. The Ranking Game: The Power of the Academic Elite. Lincoln: Study Commission on Undergraduate Education and the Education of Teachers.
Dusansky, R. and Vernon, C. J. 1998. “Rankings of U.S. Economics Departments.” Journal of Economic Perspectives 12.1 (Winter): 157 – 170.
Eagly, R. V. 1974. “Contemporary Profile of Conventional Economists.” History of Political Economy 6.1 (Spring): 76 – 91.
Ellis, L. V. and Durden, G. C. 1991. “Why Economists Rank Their Journals the Way They Do.” Journal of Economics and Business 43.3 (August): 265 – 270.
Gerrity, D. M. and McKenzie, R. B. 1978. “The Ranking of Southern Economics Departments: New Criterion and Further Evidence.” Southern Economic Journal 45.2 (October): 608 – 614.
Goldberger, M., Maher, B. A., and Flattau, P. E. (eds.) 1995. Research-Doctorate Programs in the United States: Continuity and Change. Washington, D.C.: National Academy Press.
Graves, P. E., Marchand, J. R., and Thompson, R. 1982. “Economics Departmental Rankings: Research Incentives, Constraints, and Efficiency.” American Economic Review 72.5 (December): 1131 – 1141.
Hansen, W. L. 1991. “The Education and Training of Economics Doctorates: Major Findings of the American Economic Association’s Commission on Graduate Education in Economics.” Journal of Economic Literature 29.3 (September): 1054 – 1087.
Hartley, J. E. and Robinson, M. D. 1997. “Economic Research at National Liberal Arts Colleges: School Rankings.” Journal of Economic Education 28.4 (Fall): 337 – 349.
Hawkins, R. G., Ritter, L. S., and Walter, I. 1973. “What Economists Think of Their Journals.” Journal of Political Economy 81.4 (July-August): 1017 – 1032.
Hirsch, B. T., Randall, A., Brooks, J., and Moore, J. B. 1984. “Economics Departmental Rankings: Comment.” American Economic Review 74.4 (September): 822 – 826.
Hodgson, G. M. and Rothman, H. 1999. “The Editors and Authors of Economics Journals: A Case of Institutional Oligopoly?” The Economic Journal 109 (February): F165 – F186.


Hogan, T. D. 1973. “Ranking of Ph.D. Programs in Economics and the Relative Publishing Performance of Their Ph.D.’s: Experience of the 1960’s.” Western Economic Journal 11.4 (December): 429 – 450.
Hogan, T. D. 1986. “The Publishing Performance of U.S. Ph.D. Programs in Economics during the 1970s.” Journal of Human Resources 21.2 (Spring): 216 – 229.
Holcombe, R. G. 2004. “The National Research Council Ranking of Research Universities: Its Impact on Research in Economics.” Econ Journal Watch 1.3 (December): 498 – 514. http://www.econjournalwatch.org.
Hughes, R. M. 1925. A Study of the Graduate Schools of America. Oxford: Miami University.
Hughes, R. M. 1934. “Report of the Committee on Graduate Instruction.” Educational Record 15.2 (April): 192 – 234.
Jones, L. V., Lindzey, G., and Coggeshall, P. E. (eds.) 1982. An Assessment of Research-Doctorate Programs in the United States: Social and Behavioral Sciences. Washington, D.C.: National Academy Press.
Kalaitzidakis, P., Mamuneas, T. P., and Stengos, T. 2003. “Rankings of Academic Journals and Institutions in Economics.” Journal of the European Economic Association 1.6 (December): 1346 – 1366.
Kasper, H. et al. 1991. “The Education of Economists: From Undergraduates to Graduate Study.” Journal of Economic Literature 29.3 (September): 1054 – 1087.
Keniston, H. 1959. Graduate Study and Research in the Arts and Sciences at the University of Pennsylvania. Philadelphia: University of Pennsylvania Press.
Klamer, A. and Colander, D. 1990. The Making of an Economist. Boulder: Westview Press.
Kocher, M. G. and Sutter, M. 2001. “The Institutional Concentration of Authors in Top Journals of Economics During the Last Two Decades.” The Economic Journal 111 (June): F405 – F421.
Kodrzycki, Y. K. and Yu, P. D. 2005. “New Approaches to Ranking Economic Journals.” Contributions to Economic Analysis and Policy 5.1, Article 24. http://www.bepress.com/bejeap/contributions/Vol5/iss1/art24.
Krueger, A. O. et al. 1991. “Report of the Commission on Graduate Education in Economics.” Journal of Economic Literature 29.3 (September): 1035 – 1053.
Laband, D. N. 1985a. “An Evaluation of 50 ‘Ranked’ Economics Departments—By Quantity and Quality of Faculty Publications and Graduate Student Placement and Research Success.” Southern Economic Journal 52.1 (July): 216 – 240.
Laband, D. N. 1985b. “Publishing Favoritism: A Critique of Department Rankings Based on Quantitative Publishing Performance.” Southern Economic Journal 52.2 (October): 510 – 515.
Laband, D. N. 1986. “A Ranking of the Top U.S. Economics Departments by Research Productivity of Graduates.” Journal of Economic Education 17.1 (Winter): 70 – 76.
Laband, D. N. and Piette, M. J. 1994a. “Favoritism versus Search for Good Papers: Empirical Evidence Regarding the Behavior of Journal Editors.” Journal of Political Economy 102.1 (February): 194 – 203.
Laband, D. N. and Piette, M. J. 1994b. “The Relative Impacts of Economics Journals: 1970 – 1990.” Journal of Economic Literature 32.2 (June): 640 – 666.
Lawson, T. 2006. “The Nature of Heterodox Economics.” Cambridge Journal of Economics 30.4 (July): 483 – 505.
Lee, F. 2005. “Ranking Heterodox Economic Journals and Departments: Suggested Methodologies.” Unpublished.
Lee, F. 2006. “The Research Assessment Exercise, the State, and the Dominance of Mainstream Economics in British Universities.” Cambridge Journal of Economics, forthcoming.
Lee, F. S. and Keen, S. 2004. “The Incoherent Emperor: A Heterodox Critique of Neoclassical Microeconomic Theory.” Review of Social Economy 62.2 (June): 169 – 199.


Liebowitz, S. J. and Palmer, J. P. 1984. “Assessing the Relative Impacts of Economics Journals.” Journal of Economic Literature 22.1 (March): 77 – 88.
Liebowitz, S. J. and Palmer, J. P. 1988. “Assessing Assessments of Economics Departments.” Quarterly Review of Economics and Business 28.2 (Summer): 88 – 113.
Lovell, M. C. 1973. “The Production of Economic Literature: An Interpretation.” Journal of Economic Literature 11.1 (March): 27 – 55.
Lower, M. 2004. Personal communication. November 1.
Lubrano, M., Kirman, A., Bauwens, L., and Protopopescu, C. 2003. “Ranking Economics Departments in Europe: A Statistical Approach.” Journal of the European Economic Association 1.6 (December): 1367 – 1401.
McDonough, C. C. 1975. “The Relative Quality of Economics Journals Revisited.” The Quarterly Review of Economics and Business 15.1 (Spring): 91 – 97.
McDowell, J. M. and Amacher, R. C. 1986. “Economic Value of an In-House Editorship.” Public Choice 48.2: 101 – 112.
Moore, W. J. 1972. “The Relative Quality of Economics Journals: A Suggested Rating System.” Western Economic Journal 10.2 (June): 156 – 169.
Moore, W. J. 1973. “The Relative Quality of Graduate Programs in Economics, 1958 – 1972: Who Published and Who Perished.” Western Economic Journal 11.1 (March): 1 – 23.
National Council on Economic Education. 1997. Voluntary National Content Standards in Economics. New York: National Council on Economic Education.
Nelson, J. A. and Sheffrin, S. M. 1991. “Economic Literacy or Economic Ideology?” The Journal of Economic Perspectives 5.3 (Summer): 157 – 165.
Niemi, A. W. 1975. “Journal Publication Performance During 1970 – 1974: The Relative Output of Southern Economics Departments.” Southern Economic Journal 42.1 (July): 97 – 106.
Oromaner, M. 1983. “Professional Standing and the Reception of Contributions to Economics.” Research in Higher Education 19.3: 351 – 362.
Pickering, A. 1995. The Mangle of Practice: Time, Agency, and Science. Chicago: The University of Chicago Press.
Pieper, P. J. and Willis, R. A. 1999. “The Doctoral Origins of Economics Faculty and the Education of New Economics Doctorates.” Journal of Economic Education 30.1 (Winter): 80 – 88.
Pieters, R. and Baumgartner, H. 2002. “Who Talks to Whom? Intra- and Interdisciplinary Communication of Economic Journals.” Journal of Economic Literature 40.2 (June): 483 – 509.
Roose, K. D. and Andersen, C. J. 1970. A Rating of Graduate Programs. Washington, D.C.: American Council on Education.
Rupp, N. G. and McKinney, C. N. 2002. “The Publication Patterns of the Elite Economics Departments: 1995 – 2000.” Eastern Economic Journal 28.4 (Fall): 523 – 538.
Scott, L. C. and Mitias, P. M. 1996. “Trends in Rankings of Economics Departments in the U.S.: An Update.” Economic Inquiry 34 (April): 378 – 400.
Seglen, P. O. 1997. “Why the Impact Factor of Journals Should Not Be Used for Evaluating Research.” British Medical Journal 314 (February 15): 498 – 502.
Siegfried, J. J. 1972. “The Publishing of Economic Papers and Its Impact on Graduate Faculty Ratings, 1960 – 1969.” Journal of Economic Literature 10.1 (March): 31 – 49.
Siegfried, J. J. and Meszaros, B. T. 1997. “National Voluntary Content Standards for Pre-College Education.” The American Economic Review 87.2 (May): 247 – 253.


Smith, V. K. and Gold, S. 1976. “Alternative Views of Journal Publication Performance During 1968-71 and 1970-74.” Eastern Economic Journal 3.2 (April): 109 – 112.
Stolen, J. D. and Gnuschke, J. E. 1977. “Reflections on Alternative Rating Systems for University Economics Departments.” Economic Inquiry 15.2 (April): 277 – 282.
Sun, E. 1975. “Doctoral Origins of Contributors to the ‘American Economic Review’, 1960-72.” Journal of Economic Education 7.1 (Autumn): 50 – 55.
Sutter, M. and Kocher, M. G. 2001. “Tools for Evaluating Research Output: Are Citation-Based Rankings of Economics Journals Stable?” Evaluation Review 25.5 (October): 555 – 566.
Thursby, J. G. 2000. “What Do We Say about Ourselves and What Does It Mean? Yet Another Look at Economics Department Research.” Journal of Economic Literature 38.2 (June): 383 – 404.
Tschirhart, J. 1989. “Ranking Economics Departments in Areas of Expertise.” Journal of Economic Education 20.2 (Spring): 199 – 222.
Whitley, R. 1984. The Intellectual and Social Organization of the Sciences. Oxford: Clarendon Press.
Whitley, R. 1986. “The Structure and Context of Economics as a Scientific Field.” Research in the History of Economic Thought and Methodology 4: 179 – 209.
Whitley, R. 1991. “The Organization and Role of Journals in Economics and Other Scientific Fields.” Economic Notes 20.1: 6 – 32.
Yoels, W. C. 1974. “The Structure of Scientific Fields and the Allocation of Editorships on Scientific Journals: Some Observations on the Politics of Knowledge.” The Sociological Quarterly 15 (Spring): 264 – 276.
Yotopoulos, P. A. 1961. “Institutional Affiliation of the Contributors to Three Professional Journals.” American Economic Review 51.4 (September): 665 – 670.


Table 1
Stability in Top Neoclassical Journals, 1969 – 2003

Coats’s List (to 1971): *American Economic Review; Economica; Economic Journal; *Econometrica; *Journal of Political Economy; *Oxford Economic Papers; *Quarterly Journal of Economics; *Review of Economics & Statistics; *Review of Economic Studies.

Diamond’s List (to 1989): *American Economic Review; Brookings Papers on Economic Activity; Canadian Journal of Economics; Economica; Economic Inquiry; Economic Journal; *Econometrica; Economics Letters; European Economic Review; *International Economic Review; Journal of Development Economics; Journal of Econometrics; Journal of Economic Literature; *Journal of Economic Theory; Journal of Financial Economics; Journal of International Economics; Journal of Labor Economics; Journal of Law and Economics; Journal of Mathematical Economics; Journal of Monetary Economics; *Journal of Political Economy; Journal of Public Economics; Oxford Economic Papers; *Quarterly Journal of Economics; Rand Journal of Economics; *Review of Economics & Statistics; *Review of Economic Studies.

Kalaitzidakis et al.’s List (2003): *American Economic Review; Economic Journal; *Econometrica; Economics Letters; European Economic Review; *International Economic Review; Journal of Econometrics; Journal of Economic Literature; *Journal of Economic Theory; Journal of Labor Economics; Journal of Monetary Economics; *Journal of Political Economy; Journal of Public Economics; *Quarterly Journal of Economics; Rand Journal of Economics; *Review of Economics & Statistics; *Review of Economic Studies.

* The Blue Ribbon journals of Conroy and Dusansky (1995).
[Derived from Appendix I and Coats (1971)]
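The stability the table documents can also be read as a set intersection: the journals appearing in all three columns form the stable core of the mainstream journal hierarchy. A minimal sketch of that check, using truncated placeholder lists rather than the full columns:

```python
# Placeholder excerpts of the three lists in Table 1; the full lists come
# from Coats (1971), Diamond (1989), and Kalaitzidakis et al. (2003).
coats = {"American Economic Review", "Econometrica", "Economica"}
diamond = {"American Economic Review", "Econometrica", "Economic Inquiry"}
kalaitzidakis = {"American Economic Review", "Econometrica", "Economics Letters"}

# Journals common to all three lists; run on the full columns this yields
# the journals that have stayed at the top across the three decades.
stable_core = coats & diamond & kalaitzidakis
print(sorted(stable_core))
```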


Table 2
Top Economic Departments in the United States with Ph.D. Programs, 1925 – 2003

Hughes (1925, 1934): Brown, Chicago, Columbia, Cornell, Harvard, Illinois, Iowa, Johns Hopkins, Michigan, Minnesota, Missouri, New York, Northwestern, Ohio State, Pennsylvania, Princeton, Stanford, Texas, UC-Berkeley, Virginia, Wisconsin, Yale.

Keniston/Cartter/Roose-Andersen (1959-1970): Brown, Carnegie-Mellon, Chicago, Columbia, Cornell, Duke, Harvard, Illinois, Indiana, Iowa State, Johns Hopkins, Michigan, Michigan State, Minnesota, MIT, North Carolina, Northwestern, Pennsylvania, Princeton, Purdue, Rochester, Stanford, UC-Berkeley, UCLA, Vanderbilt, Virginia, Washington, Wisconsin, Yale.

Currently (1995-2003)¹: Brown, Boston University, Carnegie-Mellon, Chicago, Columbia, Cornell, Duke, Florida, Harvard, Maryland, Michigan, Minnesota, MIT, New York, Northwestern, Ohio State, Pennsylvania, Pittsburgh, Princeton, Rochester, Stanford, Texas, UC-Berkeley, UCLA, UC-San Diego, Wisconsin, Yale.

Total Number of Doctoral Programs considered: 53 (Hughes); 71 (Keniston/Cartter/Roose-Andersen); 108 (Currently).

¹ Derived from Rankings N through S in Appendix II.
[Appendix II; Hughes, 1925 and 1934]