

Failure Rates and Publication Status: Periodicals Reviewed in Library Journal (1980–2005) and Database Accuracy

Steve Black

Black is Serials and Reference Librarian, The College of Saint Rose, Albany, NY 12203, USA; e-mail: [email protected].

0098-7913/$ – see front matter © 2010 Elsevier Inc. All rights reserved. doi:10.1016/j.serrev.2010.08.001

Life spans of periodicals are described for all titles reviewed in Library Journal's (LJ) "Magazines" column from 1980 through 2005 using data from WorldCat, EBSCO's Serials Directory and Ulrichsweb. Fifty-four percent of the periodicals reviewed in LJ have failed. The highest rate of failure is in the first few years of publication, but there remains a substantial failure rate thereafter. The many challenges of keeping accurate records on end dates for periodicals are discussed in the context of cooperative cataloging under CONSER's leadership. The relative strengths and weaknesses of data in WorldCat, Serials Directory and Ulrichsweb are discussed and quantified. Serials Review 2010; 36:206–213. © 2010 Elsevier Inc. All rights reserved.

While every observer of the publishing industry knows that many periodicals have short lives, few studies have quantified the life spans of periodicals. This study expands upon the author's previous study of the life spans of magazines selected by Library Journal (LJ) as the "Best of the Year" (n = 224) by using the same research method for the much larger sample of all magazines reviewed in Library Journal from 1980 through 2005 (n = 2,079). It was found in the previous study that 63% of the "Best of" magazines had failed.1

The primary purpose of this study is to describe the life spans of periodicals reviewed in LJ from 1980 to 2005. Two secondary research questions that emerged from the prior study are more thoroughly investigated here. One is "How accurately are end dates recorded in WorldCat, EBSCO's Serials Directory, and Ulrichsweb?" (The rationale behind selecting these three databases and the process of using them for data collection are described below in "Methodology.") A second is "How well does cooperative cataloging work to maintain accurate end dates for periodicals?" The author found in the prior study of LJ's "Best Magazines of the Year" that records had not been closed in OCLC for almost 30% of ceased periodicals.2

The publications examined for this study were reviewed in the Library Journal column labeled "Magazines," so an explanation for why this author uses the term "periodical" instead is in order. The Cooperative Online Serials program (CONSER) defines a periodical as "a serial appearing or intended to appear indefinitely at regular or stated intervals."3 CONSER does not include "magazine" in its glossary. William (Bill) Katz (1924–2004) created Magazines for Libraries, was LJ's magazine reviewer for many years, and wrote the book Magazine Selection.4 Unfortunately for posterity, Katz used the term "magazine" instead of the more accurate term "periodical." Katz was not much interested in parsing the definitions of "magazine," "journal," "serial," or "periodical," stating flatly that he "refuse[d] to be caught up in the kibitzers' capers."5 LJ chose to label the reviews column "Magazines" even though the variety of periodicals Katz reviewed included scholarly journals, monographic series, newsletters, and indexes. Just as not every periodical is a magazine, not every publication in a magazine format is a periodical. Newsstand magazines include many publications with little or no intent to be published indefinitely and, therefore, are not periodicals by CONSER's definition. This author is avoiding the "kibitzers' capers" by including in this study all the publications reviewed in LJ's Magazines column. With the possible exception of monographic series, the publications reviewed by LJ's four magazines columnists from 1980 through 2005 meet the definition of a periodical, so "periodical" is the term used here.

Literature Review

Literature reviewed here is organized into three sections: prior studies of the life spans of periodicals; studies of the relative strengths and weaknesses of the EBSCO Serials Directory and Ulrichsweb; and descriptions of the quality of records in WorldCat, especially cooperative serials cataloging under the leadership of CONSER.

Life Spans of Periodicals

The fact that periodicals come and go is well known by publishers, serials librarians, and anyone who pays attention to the magazine market. But there have been few quantitative studies of magazine life spans. Prior to this author's study of LJ's "Best Magazines of the Year," most of the quantitative work has been conducted by Samir Husni, beginning with his 1983 doctoral dissertation.6 His follow-up study of 323 new consumer magazines launched 1981–1983 concluded that two out of ten magazines survive for at least four years, and that chances of survival "for the forseen future" are good once a magazine has reached its fifth year.7 Based on his continued observations of the magazine industry, Husni later estimated that 90% of new magazines fail.8

Husni studied consumer magazines, a category well represented in but not synonymous with the periodicals reviewed in LJ. Importantly, Husni includes in his figures many glossy newsstand titles that observers would not expect the publishers to continue indefinitely. For example, his "New This Month" includes Collector's Edition World Champions New Orleans Saints, Fix-It and Forget-It 5-Ingredient Slow-Cooker Recipes, and Marvel Comic's Iron Man Special Edition.9 In this author's estimation many titles Husni lists are not periodicals, so title selection directly impacts the substantial difference between Husni's estimated 90% failure rate and the overall failure rate found in this study.

Periodicals are run with myriad business models and succeed or fail for a variety of reasons. On a macroeconomic level magazines face opportunities and challenges from the overall economy and from competition with other media. Economic downturns cause retrenchment in the magazine industry. The Magazine Publishers of America notes that consumer magazine closings correlate more closely with the overall economy and advertising spending than with consumer demand for magazines. Since the 1980s, the highest rates of magazine closings occurred in the recession years of 1992–1993 and 2000–2001.10 Competition from other media can negatively affect magazines, but new media opens up opportunities for publishers. The impact of television on magazines is an example of creative destruction whereby displacement of existing magazines is offset by creation of new titles.11

Important general interest mass market magazines including Collier's, Look, and Life did not survive competition from television, but TV created a market for new titles like TV Guide and Soap Opera Digest. Similar combinations of competition and new markets have occurred with cinema, radio, sound recordings, personal computers, and most recently the internet.12

EBSCO's Serials Directory and Ulrichsweb

Various sources of data on periodicals have been produced, but the five intended to be comprehensive are the Standard Periodical Directory, Magazines for Libraries, Ulrich's International Periodicals Directory (now called Ulrichsweb), the EBSCO Serials Directory, and records in OCLC's WorldCat database. Oxbridge Communications' Standard Periodical Directory (1964–present) and its online version MediaFinder are better known in the advertising industry than among librarians. Its particular strength is circulation and advertising rate data. When a title ceases, it is dropped from the MediaFinder database, so it was not helpful for this study of life spans of periodicals. Magazines for Libraries is a fine source of information, but many of the periodicals reviewed in LJ were never included. The selectivity which makes Magazines for Libraries so useful for reference and collection development made it inappropriate as a data source for this study.

Ulrich's International Periodicals Directory, now Ulrichsweb (hereafter referred to as Ulrich's), and EBSCO's The Serials Directory (hereafter referred to as EBSCO) have long been regarded by librarians as the two most important resources for information about serials. Both publications were described in some detail in two very widely used textbooks, Katz's Introduction to Reference Work and Bopp and Smith's Reference and Information Services. Katz concluded that Ulrich's tends to be more current and more complete in details.13 Richard Bopp and Linda Smith highlight Ulrich's data on where titles are indexed and the usefulness of its listing of recently ceased titles, and note the usefulness of EBSCO's data on peer review, book reviews, and whether advertising is accepted. Bopp and Smith did not evaluate the relative thoroughness or accuracy of data in the two sources.14

Marcia Tuttle directly compared the relative strengths and weaknesses of Ulrich's and EBSCO in a review published soon after the 1986 release of the first edition of the Serials Directory. Tuttle described how CONSER data formed the basis of the Serials Directory, noting that the CONSER project of creating and maintaining a database for serials publications "is probably the most important event ever to occur in serials librarianship."15 Her review's focus on problems in the printed directory serves as a reminder of how dramatically the production of reference works has changed since 1986 and how much ongoing work is done by OCLC and its member libraries. None of the serious problems Tuttle pointed out concerning EBSCO's indexes apply to today's online Serials Directory, and while inconsistencies and duplications remain in CONSER records, cooperative cataloging provides an ongoing process for cleaning up records. Tuttle's review included a comparison of Ulrich's to EBSCO. She stated that Ulrich's included more ceased titles and a higher ratio of ceased titles to active (1:11.8 for Ulrich's versus 1:28.5 for EBSCO).16 She discussed pros and cons of basing the EBSCO directory on CONSER data, concluding that the more carefully one looks at either EBSCO or Ulrich's, "the more it becomes apparent that it is impossible to keep [either of] them up to date."17

Four research studies have directly compared the accuracy and thoroughness of data in Ulrich's and EBSCO. Jonathan Eldredge compared them for accuracy of indexing service coverage, noting the difficulty of maintaining accuracy since coverage is in a state of constant flux. For the biomedical journals in his sample Eldredge found 92% accuracy in Ulrich's and 95% accuracy in EBSCO, and concluded that the authority of both sources should be treated with a touch of skepticism.18 A few years later he investigated the accuracy of Ulrich's and EBSCO's identification of whether periodicals are peer reviewed and found 46% of a sample of clinical medicine journals tagged as peer reviewed (n = 784) were unique to one directory, indicating significant discrepancies in what is labeled as peer reviewed.19 He found Ulrich's and EBSCO to have very similar title lists and rejected that as a cause of the discrepancies. Eldredge discussed the causes for inconsistent identification of peer reviewed titles and hypothesized that the causes were inconsistencies in publishers' self-reported data and the lack of a standard definition of "peer review."20 Robert Bachand and Pamela Sawallis followed Eldredge's work on the accuracy of Ulrich's and EBSCO's identification of peer reviewed periodicals and also found significant discrepancies. Their study mostly focused on the broad range of issues surrounding the meaning and quality of "peer review" but did conclude that Ulrich's identifications of peer-reviewed and scholarly journals matched publishers' information more consistently than did EBSCO's.21 Like Eldredge, they advocated that users maintain a touch of skepticism regarding the accuracy of data in both directories and recommended cross-checking the data with information from publishers.

Recent work by Marybeth Grimes and Sara Morris compared entries in Ulrich's and EBSCO's for five widely held periodicals. They found that records in the two directories agreed on only 40% of ten selected data fields (price, address, URL, subscription, editor, e-mail, phone, fax, circulation, whether refereed).22 After describing the surprising level of discrepancies in data, they note that "[i]n all fairness, both directories get their information directly from the serials, but it would seem prudent for the two companies to be more diligent in providing accurate and reliable data to their customers. [Therefore] librarians should check multiple sources."23

The presumption is that any one data source may contain errors, but accurate data should be obtainable if one checks multiple sources, including publishers. Every close examination of Ulrich's and EBSCO has found discrepancies and has concluded with the advice to cross-check data for accuracy.

Catalog Record Quality and CONSER

A good data source to cross-check for accuracy of some information in Ulrich's or EBSCO is bibliographic records in OCLC's WorldCat; however, the EBSCO and Ulrich's directories record data that bibliographic records in OCLC do not, including editorial contact information, current subscription price, advertising rates, and circulation. Furthermore, maintaining the accuracy of serial records in OCLC has long been known to be a difficult challenge. A 1987 OCLC survey of librarians reported by Carol Davis measured the degree to which librarians perceived OCLC to be a "dirty database."24 The survey found the most serious problems to be duplicate records and inconsistencies in name and subject authority control, and reported that "most users indicated that the reason they did not report errors was that it took too much time."25 Davis' article concludes on the upbeat note that OCLC data was perceived as improving and efforts were underway to continue to correct errors.

Quality control for serials records created in the United States is guided by Cooperative Online Serials (CONSER). CONSER is based at the Library of Congress, is organized within the Program for Cooperative Cataloging (PCC), and works closely with OCLC. CONSER-authenticated records reside in OCLC's WorldCat database, the embodiment of cooperative cataloging that incorporates records created by 27,000 member libraries in 171 countries.26

Librarians' cooperative struggle to keep serials records up to date involved adding 25,096 new CONSER records and updating another 31,023 in 2008.27 CONSER works to maintain quality by publishing the authoritative CONSER Cataloging Manual and CONSER Editing Guide, conducting Serials Cataloging Cooperative Training Program (SCCTP) workshops, and requiring a thorough training and mentoring process for becoming a CONSER library. CONSER authorization allows a library to create and edit master serials records in OCLC. Liping Song describes the training and review process to become a CONSER library trusted to create and maintain high quality records.28

Several articles have addressed the accuracy of OCLC records as one element in the broad context of factors affecting the quality of cataloging. Good entry points to the literature on cataloging quality are Thomas' 1996 overview of the history of quality in bibliographic control29 and Paiste's 2003 literature review of cataloging quality seen through the lens of Total Quality Management.30 Both Marsha Thomas and Sarah Paiste highlight the sometimes tense dialog between librarians who emphasize quality as adherence to rules and preventing errors versus librarians more concerned with timeliness, adapting to changing technology, and meeting user expectations. These tensions are apparent in the Library of Congress' 1994 symposium on the topic "Quality cataloging is . . .," summarized by Tillett as "accurate bibliographic information that meets the users' needs and provides appropriate access in a timely fashion."31 The thirty-two contributors were Library of Congress catalogers whose definitions of quality focused primarily on maintaining good authority control and accuracy in the face of pressure to reduce backlogs. CONSER Coordinator Jean Hirons stated simply, "Quality cataloging is dynamic and able to be shared."32 Aside from Hirons' point about sharing, cooperative cataloging was mentioned in the symposium only in the context of the global need for trust in the quality of Library of Congress records. In a more recent discussion of cooperative cataloging and quality, Turner reaffirms that in the 1990s the de facto definition of high quality was "like a Library of Congress record," but she argues that today more flexible standards are needed to help librarians maintain the ever expanding WorldCat database.33

Cooperative cataloging is based on the premise that qualified catalogers can be trusted to create good records. In her description of the evolution of the CONSER program and the development of cataloging standards for serials, Bartley emphasizes trust and interaction as the core of cooperative cataloging, noting that CONSER "members trust each other's work more than they trust the constancy of the serials they attempt to catalog together."34

Tuttle summarized the issue of cataloging quality and CONSER in these words: "CONSER participants do not give high priority to updating records for titles that cease publication or change title. Despite the stated high cataloging standards employed in the CONSER project, not all records are complete and without error, even at the time they are entered in the database. However, these data are the best means of bibliographic control of the thousands of serials being published today."35

Since libraries serve a broad range of patrons with different needs and libraries have different levels of resources devoted to cataloging, not all serials records in the OCLC database are as complete as full records created at the Library of Congress. The CONSER standards for cataloging serials specify minimum requirements for three levels of records: full, core, and minimal. Serials records at all three levels of cataloging are required to include beginning date and end date if applicable in MARC 008, and require end dates if applicable in MARC 260 (publication, distribution, etc.) and MARC 362 (dates of publication).36 The rules take into account what to do if information is missing from issues in hand or if the issue is not available to the cataloger. Dates from other sources are to be entered in brackets in MARC 260 or as notes in MARC 362 if the item is not in hand, and if the date is uncertain a question mark is added. If the information is not obtained from an issue in hand, the source of information is described in a MARC 500 note field. For a ceased periodical, the rules presume that the cataloger has information describing the last issue published. If the information is not available, the proper action is to leave the bibliographic record as is, rather than to guess at an end date. This presents a problem when a publisher does not stay on schedule and cannot be contacted. The rules call for an end date to be recorded if a periodical has ceased, but an end date should not be entered if the cataloger lacks reliable information.
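The closure convention described above can be sketched as a small routine. This is only an illustration of how end dates are carried in MARC 008 (positions 07–10 for the beginning date, 11–14 for the end date, with "9999" denoting a currently published title); the records, field contents, and publisher name below are hypothetical, not actual cataloging data.

```python
# Simplified sketch of how a serial record signals cessation.
# Positions 11-14 of MARC 008 hold the end date; '9999' means current.

def is_closed(record):
    """True when the 008 field carries a real end date."""
    date2 = record["008"][11:15]
    return date2 not in ("9999", "    ")

def end_date(record):
    """Return the recorded end year, or None for an open record.
    Per the rules above, an uncertain date is bracketed or flagged
    with '?' in 260/362 rather than guessed."""
    return record["008"][11:15] if is_closed(record) else None

# Hypothetical record for a periodical that ran 1980-1987.
ceased = {
    "008": "800101d19801987nyuqr-p-------0---a0eng-d",
    "260": "Albany, N.Y. : Example Press, 1980-[1987?]",  # bracketed: date not from an issue in hand
    "362": "Vol. 1 (1980)-v. 8 (1987).",
}
# Hypothetical record for a title still publishing.
active = {
    "008": "800101c19809999nyuqr-p-------0---a0eng-d",
    "260": "Albany, N.Y. : Example Press, 1980-",
    "362": "Vol. 1 (1980)-",
}

assert is_closed(ceased) and end_date(ceased) == "1987"
assert not is_closed(active) and end_date(active) is None
```

An open record for a title that has in fact ceased, as discussed above, would look like `active` here even though no further issues exist, which is exactly the condition this study had to detect.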

CONSER training, support, and membership requirements are well designed to control the quality of serials records, but the sheer volume of record maintenance for ever-changing serials can exceed CONSER members' ability to keep up. Song found herself "thinking out loud, 'why hasn't anyone updated this record?'"37 In his reflections on working at a CONSER library and then experiencing loss of his ability to fix records after moving to a position at a non-CONSER library, Christopher Walker notes that "since there are 71,000 institutions represented in the OCLC WorldCat community, and only about fifty of them have catalogers who are allowed to edit an authenticated serial record, it can take a long time before a record needing enhancement attracts attention, especially if it is not a widely held current title."38 The literature on cooperative cataloging and CONSER demonstrates a sound system for creating and maintaining serials records straining under the sheer volume of work involved. The challenge of keeping up is exacerbated by a constant stream of changes and difficulties in getting accurate information about changes from publishers, including when or if a periodical has ceased.

Method for This Study

The sample for this study of life spans of periodicals is all titles reviewed in the "Magazines" column of Library Journal (LJ) from 1980 through 2005. The sample was chosen based on its relationship to the author's previously published work on life spans, the relevance of LJ reviews to libraries' periodical collection development, and a range of dates capable of capturing life spans of periodicals in the contemporary publishing market. Examining all periodicals reviewed in LJ during the chosen time frame expands the sample beyond the previously studied life spans of "Best Magazines of the Year" while maintaining consistency in the method of selecting titles. LJ's magazine columnist is responsible for regular reviews and the annual "Best Magazines of the Year" feature article. The purpose of both is to alert LJ's readers to noteworthy periodicals. The magazine columnists never explicitly described selection criteria. They simply reviewed periodicals they believed would be of interest to collection development librarians. This liberal policy gives the sample broad variety: popular and scholarly, stapled little magazines and thick glossy glamour titles, general interest and niche magazines. It also has the virtue of excluding periodicals of little interest to most libraries such as puzzle series, pornography, or super-specialized niche titles with tiny intended audiences. But the resulting sample is not random, and therefore the degree to which this sample correlates with the life spans of all periodicals is not known.

Choosing a time span for the sample requires a trade-off. A long time frame allows better charting of failure rates. But the older the periodical, the less the conditions surrounding its launch correspond with today's publishing environment. The pre-internet periodical market of the 1980s was different from today's market, and titles continue to fail after surviving five years. The time frame 1980–2005 is a compromise intended to represent the contemporary market while giving titles a reasonable period of time to survive or fail. The sample is a bit skewed to older periodicals because more titles were reviewed per year in the 1980s and because LJ reviews have always included some titles that have been in publication for many years. Figure 1 shows who the reviewers were when, with average number of reviews per year. So while this is a reasonable time frame for sampling periodicals found in libraries, the chosen dates are a second reason why it is not known whether the results from this study correlate with life spans of all periodicals.

Data for the reviewed periodicals were entered by the author from September 2009 through March 2010 from four sources: the LJ reviews, WorldCat (FirstSearch version), EBSCO's Serials Directory via EBSCOhost, and Ulrichsweb. The sample consisted of every periodical reviewed in the LJ "Magazines" column, excluding titles mentioned in the theme-based narratives that occasionally made up all or part of the column.

Reviewers for Library Journal's "Magazines" Column, 1980–2005

Reviewer(s)                  Dates      Average reviews per year
William A. "Bill" Katz       1980–1992  98
Bill Katz and Eric Bryant    1993–1994  111
Eric Bryant                  1995–1999  51
Michael Colford              1999–2005  54
Clayton Couch                2005–2008  43

Figure 1. Library Journal's Magazine Reviewers.

The data entered from LJ for every periodical reviewed were:

• Title
• Year started
• Year reviewed
• Frequency of publication (1 = annual, 2 = semi-annual, 4 = quarterly, etc.)

Next, the FirstSearch version of WorldCat (a version of the OCLC bibliographic database designed for use by reference librarians and library patrons) was used to enter:

• Year ended
• How many libraries have holdings
• ISSN

In cases where the start date shown in WorldCat differed from the start date listed in the LJ review, the date in WorldCat was entered. The LJ reviews usually list a month or season as a start date, but only the year was entered. End date 9999 was entered for active titles. If a periodical changed its title, records for successive titles were checked and an end date was entered only if the successive title(s) ceased. Merged titles were counted as ceased. A notes field in the spreadsheet recorded if a title had changed one or more times. For periodicals that have ceased publishing in print but continue online, for this study the title was considered active if the online version had current updates and appeared to continue the same work.
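The end-date coding rules in the paragraph above (9999 as the sentinel for active titles, title changes followed to the last successor, merged titles counted as ceased) can be summarized in a small helper. This is a sketch of the stated rules, not the author's actual spreadsheet workflow, and the titles and years below are invented for illustration.

```python
# Sketch of the study's end-date coding rules (hypothetical data).

ACTIVE = 9999  # sentinel used in the study for titles still publishing

def coded_end_date(successor_chain):
    """Given records for a title and its successive titles in order,
    return ACTIVE if the final title is still published, else the end
    year of the last title. A merger is treated as a cessation."""
    last = successor_chain[-1]
    if last.get("merged_into") is not None:
        return last["end"]                    # merged titles count as ceased
    return ACTIVE if last["end"] is None else last["end"]

# A title that changed its name once; the successor is still active:
chain = [
    {"title": "Example Review", "end": 1991, "merged_into": None},
    {"title": "New Example Review", "end": None, "merged_into": None},
]
assert coded_end_date(chain) == ACTIVE

# A title absorbed by another periodical in 1995:
merged = [{"title": "Example Quarterly", "end": 1995,
           "merged_into": "Bigger Quarterly"}]
assert coded_end_date(merged) == 1995
```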

WorldCat was the data source for ISSNs because LJ reviews included no ISSNs before 1994, and after then many reviews listed no ISSN. When no ISSN was listed in WorldCat, the field was left blank. OCLC has the most complete records for U.S.-assigned ISSNs, but U.S. periodicals from international publishers may be under the responsibility of other ISSN centers. So it is possible that some of the reviewed periodicals have an ISSN not listed in the OCLC record. Finally, end dates (when applicable) were entered from EBSCO and Ulrich's. If OCLC, EBSCO, or Ulrich's had no entry for the periodical, "no record" indicated the omission.

Discrepancies in end dates among WorldCat, EBSCO, and Ulrich's were found to be very common. By using three sources the author was able to detect titles that may have ceased even though OCLC recorded no end date. If EBSCO or Ulrich's listed an end date or indicated suspension of publication (e.g., status listed as "unable to contact publisher" or "researched/unresolved") and the OCLC record was not closed, the author checked the holdings listed in WorldCat. Not every library displays its holdings in WorldCat. If no end date was apparent from the WorldCat holdings data listed in "libraries worldwide," the author followed the more time-consuming process of checking individual libraries' catalogs. Frequently, one or more libraries will close a record and give a clear note on the source of information in their local bibliographic record, but not report the update to OCLC. An end date was recorded in the spreadsheet when no library showed holdings beyond an end date listed in EBSCO or Ulrich's, even if the OCLC record remained open (no ending in MARC 008 or 362). The same process was used when OCLC indicated with "?" that a title ceased at an unknown date.

If EBSCO or Ulrich's suggested a title had ceased and library holdings indicated an end date, yet no end date was specified in any of the three sources, the end date from holdings was entered in the spreadsheet with the note "end inferred from holdings." For example, Bluefish (ISSN 0741-5028) had no end date in OCLC and no record in EBSCO, but Ulrich's listed its status as "researched/unresolved." Bluefish has forty-three holdings in OCLC, nine of which show an end date of 1987. None of the libraries' catalogs indicating current holdings in OCLC showed issues received beyond 1987, so 1987 was entered into the spreadsheet as the end date. Despite checking the three data sources and tracking library holdings, there remained some ceased titles for which no end date could be determined.

Figure 3. Life Spans of Failed Periodicals. [Bar chart: number failed by years published, 1–20.]

Once the data were entered for the 2,079 periodicals reviewed in LJ from 1980 through 2005, life spans were calculated by subtracting the start date from the end date. Separate columns calculated how many years the periodical had been in publication when the review was published, and how many years the periodical remained in publication after it was reviewed.
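A minimal sketch of that spreadsheet arithmetic follows; the function names are hypothetical, not the actual spreadsheet's column headings.

```python
# Illustrative life-span arithmetic for the three derived columns.

def life_span(start_year, end_year):
    """Years in publication; None while the end date is unknown."""
    if end_year is None:
        return None
    return end_year - start_year

def years_at_review(start_year, review_year):
    """How long the periodical had been published when it was reviewed."""
    return review_year - start_year

def years_after_review(review_year, end_year):
    """How long the periodical remained in publication after the review."""
    if end_year is None:
        return None
    return end_year - review_year
```

For instance, the Yale Review, reviewed in 1992 at its centenary, would yield a value of 100 for the years-at-review column.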

Results: Magazine Failure Rates

Results for the primary research objective of describing the life spans of periodicals reviewed in Library Journal from 1980 to 2005 are depicted in figure 2.

End dates were inferred from holdings using the procedure described above for 11% of the periodicals. The 7% with unknown end dates are denoted in figure 2 as "?"; therefore, 18% of the periodicals in this sample have ceased without OCLC, EBSCO, or Ulrich's having a record of the year ceased. The range of years in publication for current periodicals extends from the oldest title reviewed in LJ (Yale Review, noted in 1992 in recognition of its centenary) to the titles launched and reviewed in 2005, the cutoff date for this study.

Figure 3 charts the life spans of periodicals that failed. The trend line imposed on top of the bar graph indicates the gradual decrease in chance of failure further into the life of a periodical. The gradual slope of that trend is notable, as it shows a continued risk of failure even after several years of publication. Also of note is that among the failed periodicals, relatively few folded in their first year. Conventional wisdom holds that failure peaks in the first year of publication, but the data shown in figure 3 contradict that commonly held view.

Which publication frequencies are most common? Are periodicals with particular frequencies of publication more likely to fail?

Figure 2. Life Spans of Reviewed Periodicals: currently published (5-118 years), 46%; one year or less, 7%; 2 years, 4%; 3-5 years, 10%; 6-10 years, 13%; 11-20 years, 13%; unknown ("?"), 7%.


To answer these questions, failure rates were calculated for ten frequencies shown in figure 4.

Quarterly publication schedules were the most common, followed by bi-monthlies and semi-annuals. The data show relatively little difference in the failure rates among publication frequencies, with the exception of irregulars. While the variation in failure rates is small, the most successful frequencies are semi-annual and more than once per month (predominantly bi-weeklies and weeklies). Irregular publications fail at the greatest rate among periodicals reviewed in LJ, followed by those published between five and eleven times per year. These results differ from Husni's finding, from a sample of sixty-five periodicals, that bimonthly publications have the best chance of survival.39

The data allow additional characterizations of the periodicals reviewed in LJ 1980–2005. It was found that 161 of 2,079 (8%) changed title at least once. Although all periodicals should be assigned an International Standard Serial Number, 131 (6%) titles never received an ISSN. Having an ISSN may indicate a publisher's knowledge of the periodicals business, but no attempt was made to correlate failure rate with the presence of an ISSN because half of all serials are assigned an ISSN from sources other than the publisher's request.40

Reviewed Periodicals' Frequencies and Failure Rates

FREQUENCY      % OF SAMPLE    % FAILED
annual         4%             51%
semi-annual    16%            47%
3x/year        7%             49%
quarterly      34%            49%
5x/year        1%             56%
bi-monthly     17%            55%
7-11x/year     3%             53%
monthly        12%            50%
>12x/year      2%             44%
irregular      2%             77%

Figure 4. Frequencies and Failure Rates.
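The figure 4 percentages amount to a grouped tabulation: for each frequency, the share of the sample and the share of those titles that failed. The following sketch shows one way such a tabulation could be computed; the record layout is an assumption, not the study's actual data format.

```python
from collections import defaultdict

def failure_rates_by_frequency(records):
    """records: iterable of (frequency, has_failed) pairs.
    Returns {frequency: (share_of_sample, share_failed)}."""
    counts = defaultdict(lambda: [0, 0])  # frequency -> [total, failed]
    for frequency, has_failed in records:
        counts[frequency][0] += 1
        if has_failed:
            counts[frequency][1] += 1
    n = sum(total for total, _ in counts.values())
    return {
        freq: (total / n, failed / total)
        for freq, (total, failed) in counts.items()
    }
```

Applied to the full sample of 2,079 titles, this kind of grouping yields the per-frequency shares and failure rates shown in figure 4.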

Failure Rates and Publication Status, Volume 36, Number 4, 2010

A recent phenomenon in periodical publishing is halting production of print but continuing publication online. Among the titles reviewed in LJ, continuing as an online publication after print has ceased is not as common as one might expect. Only 2% of the ceased titles were found to be current at an active Web site.

Results: Characteristics of Data Sources

The two secondary research questions for this study are "How accurately are end dates recorded in WorldCat, EBSCO's Serials Directory, and Ulrichsweb?" and "How well does cooperative cataloging work to maintain accurate end dates for periodicals?" The answers to these questions are quantified in figure 5.

The first row shows the incidence of there being no record at all in the data source. On this score OCLC is superior, followed by Ulrich's and then EBSCO, which lacked records for almost a third of the reviewed periodicals. OCLC also has the fewest records that remain open even though they should be closed. OCLC and EBSCO are tied for fewest records with indeterminate end dates, but this should be taken in the context of EBSCO lacking any record at all for a third of the titles. Unknown or indeterminate end dates are not errors; they are indicators that required information was not available. Ulrich's is much more likely than OCLC or EBSCO to indicate a title is still active when it has in fact ceased. This is, in part, because Ulrich's employs a latest entry approach to describe periodicals, whereas OCLC and EBSCO use the successive entry approach whereby records are closed when a periodical changes its title, merges into a title, or changes physical format. The author tracked the title changes and checked URLs to account for the different approaches but found Ulrich's to be very liberal in choosing what constitutes a succeeding title. As the data show, Ulrich's tends to err on the side of leaving a title active. Conversely, cataloging rules make it no surprise that OCLC has very few records that state a title has ceased when it is in fact still active.

Of the three data sources, OCLC has the most complete and accurate records overall. The finding in the author's prior work on LJ's "Best Magazines of the Year" that OCLC records for 30% of ceased periodicals had not been closed is not replicated. Among the sample of 2,079 periodicals reviewed in LJ, OCLC lacked end dates for eighty ceased titles, or 4% of the total. So the incidence of open records for ceased titles was much lower for the larger sample. (One might expect that all the "best of" titles would have been previously reviewed and, therefore, overlap with the larger sample, but in fact many of the "best" titles never appeared in the regular reviews column.) So the problem of open records in OCLC for ceased periodicals is much less prevalent than suggested by the prior study.

Ghosts of Periodicals Past: Incomplete or Inaccurate End Date Data in OCLC, EBSCO, and Ulrich's (n = 2,079)

                                                    WorldCat (OCLC)   EBSCO Serials Directory   Ulrichsweb
No record                                           56 (3%)           669 (32%)                 221 (11%)
Ceased, but no end date recorded                    74 (4%)           123 (6%)                  165a (8%)
Ceased, indeterminate end date (e.g. ? or 199?)     65 (3%)           61 (3%)                   103 (6%)
Periodical has ceased, but listed as still active   6 (0%)            34 (2%)                   168b (8%)

Notes: (a) Ulrich's listed a last volume or issue but no date for 4 titles. (b) Includes 32 titles Ulrich's counted as active online, but the listed URL was dead or linked to unrelated content.

Figure 5. Database Discrepancies, Inaccuracies, and Omissions.

Discussion

The overall failure rate of 54% for periodicals reviewed in LJ from 1980 through 2005 is lower than the 63% failure rate for LJ's "Best Magazines of the Year"41 and much lower than Husni's estimated 90% failure rate.42 Two reasons may be hypothesized to explain the greater success of all periodicals reviewed in LJ versus those chosen as "Best Magazines of the Year." One is simply error due to sample size. The 224 titles chosen as best of the year may be too few to reflect periodicals' life spans. Another reason could be that magazine columnists chose as "best" those unusual, distinctive titles that are inherently riskier to publish and therefore more likely to fail.

Reasons for the large difference in failure rates between the periodicals reviewed in LJ and Husni's estimates are more obvious. First, Husni studies consumer magazines, most of which rely very heavily on advertising revenue. The Magazine Publishers of America report that loss of ad revenue tends to cause more failures than does loss of circulation.43 The LJ sample includes many consumer magazines reliant upon advertising revenue, but also includes literary reviews, scholarly journals, newsletters, zines, and little magazines that have other means of support: grants, library subscriptions, memberships, even the unpaid dedication of a handful of individuals. Second, as described above in the literature review, Husni counts newsstand publications that librarians would not classify as periodicals. Husni's choice of what to count is reflected in Ulrichsweb, which has entries for many magazines that don't necessarily fit the definition of a periodical. For example, annual publications like Dream Landscapes and Ultimate Casseroles are reported to Ulrich's by the publisher (Meredith Corp.) as new magazines for 2010. To count these as "failed" if they are not issued under the same title in 2011 will inflate failure rates.

A noteworthy finding in this study is the gradually decreasing number of failures by years in publication shown in figure 3. The data support the notion that periodicals are most likely to fail in their first few years of publication, but also make clear that at no time is a periodical "safe." This is a blinding flash of the obvious to publishers, who know very well that without constant vigilance, hard work, and responsiveness to advertisers and readers even a well-established periodical can fail.

Similar to the findings from prior studies of EBSCO and Ulrich's described in the literature review, this study found


discrepancies, errors of omission, and inaccurate data. As shown in figure 5, despite EBSCO's and Ulrich's sustained efforts to obtain thorough and accurate data from publishers, both databases miss some information. Incomplete or inaccurate data is inevitable with any data based on self-reported surveys. Self-reported data from publishers can be problematic because of varying interpretations of the meaning of terms like "peer reviewed," the time and energy spent to complete the survey, and good intentions publishers are not able to fulfill (e.g., not able to maintain the publication schedule). The problems inherent in self-reported data become acute in the case of ceased titles, because publishers may simply disappear. Ulrich's lists the publication status as "researched/unresolved" for 124 (6%) of the titles. Using three data sources helped the author deduce end dates, but there were still twenty-seven periodicals reviewed in LJ that had no record in OCLC, EBSCO, or Ulrich's, and 7% of the sample had no firm end date recorded in any of the three data sources. In addition, twenty-four bibliographic records in WorldCat were held by zero libraries, making it impossible to use holdings to infer end dates. (It may seem strange to keep records in WorldCat for titles not held in any OCLC-participating library, but those records allowed this study of life spans to be more complete, and historical records of past periodicals may be of value to others in unpredictable ways.)

The OCLC approach of using the Anglo-American Cataloguing Rules and CONSER standards may miss information EBSCO and Ulrich's can obtain by surveying publishers. But the data in figure 5 suggest that cooperative cataloging following rules that place highest priority on items in hand can produce more accurate and complete records than EBSCO or Ulrich's can maintain from surveying publishers. Overall these data suggest that, while cooperative cataloging under the guidance of CONSER is not perfect, it is the most effective way to create and maintain records for serials. However, surveying publishers has merit, and CONSER recognizes the expertise and contributions of EBSCO and Ulrich's. EBSCO Publishing and Serials Solutions (owner of Ulrichsweb) are both CONSER members authorized to maintain records, and CONSER's current mission statement includes a commitment to collaborate with publishers and other information providers to create "cost-effective, timely, shareable, and authoritative metadata."44

Deficiencies in OCLC records such as unknown or missing end dates demonstrate the difficulty of maintaining records for ever-changing serials. Since only three public libraries (Cleveland, New York, Brooklyn) are CONSER members, consumer magazines may be especially vulnerable to out-of-date records. That may partly explain the much higher rate of open records for ceased periodicals chosen as "Best Magazines of the Year." Perhaps something can be done to make it easier for public librarians or even library patrons to report problems. Davis' report that "most users indicated that the reason they did not report errors was that it took too much time"45 probably remains true today. Even if every library diligently reports errors, there will always be missing information due to ephemeral titles that are never cataloged, periodicals that fall behind schedule but are not declared dead, and publishers who vanish without a trace. Records for periodicals will always be time consuming to maintain and will inevitably include some inaccuracies.

Further Study

No attempt was made in this study to classify periodicals by type. It could be useful to go back through the data and tag each title as "popular," "scholarly," or other categories. Literary magazines have a reputation for high failure rates. Perhaps those can be tagged and compared to other types of periodicals reviewed in LJ. A separate study using a different sample might also be used to compare the life spans of different types of periodicals.

The degree to which failure rates for the periodicals reviewed in LJ 1980–2005 correlate with failure rates of all periodicals is unknown. A follow-up study using a sample of similar size and scope drawn from a different source or sources could provide a basis of comparison. Unfortunately, since no data source contains information on every periodical, it is not possible to take a random sample. The population cannot be defined because no master list of all periodicals exists. Since EBSCO's Serials Directory lacked records for a third of the periodicals reviewed in LJ and suffers from other omissions depicted in figure 5, it would be rather unreliable for studies of life spans. Ulrich's is a better choice for life span studies. It was found here to have records for 89% of the periodicals, and the data is searchable by country of origin, whether peer reviewed, and type (e.g., scholarly, consumer, trade, newspaper). All the cited studies of Ulrich's discussed above note error rates in the data, but sufficient sample sizes can mitigate the effect of periodicals being misrepresented or excluded due to errors and omissions.

An investigation into what factors impede libraries from contributing record corrections to CONSER could be worthwhile. One can reasonably presume that the press of local needs bumps error reporting to lower priority, but perhaps there are other factors that can be identified. The recent special issue of Cataloging and Classification Quarterly (v. 48, no. 2/3, 2010) devoted to cooperative cataloging is a welcome source of information, but more studies and commentaries on the experience of earning CONSER authorization and of working in a CONSER library would be useful additions to the literature. Likewise, investigations of why libraries do or do not choose to display holdings in WorldCat would be welcome.

The entire concept of holdings is being disrupted as more libraries provide access to periodicals via aggregated databases and publisher packages. A fertile area for research is the various ways serials and interlibrary loan librarians use holdings in their work and how aggregations and packages affect library holdings records and the processes that rely on holdings information. More work can be done on the relationship between printed periodicals and online integrating resources. Good work has been done on preserving scholarly online publications, but more could be done regarding what libraries can and should do to preserve online popular magazines.

A range of research questions belongs more to the discipline of media studies. Under what conditions does it make editorial and economic sense to replace a print periodical with a Web site? When is it best to have one medium supplement the other, and what are the best ways to do that? What are the characteristics of successful periodicals, including those not heavily dependent on advertising for survival?

This study has shown that 54% of the periodicals reviewed in Library Journal have failed. But that means 46% have survived for at least 5 years. Those 925 surviving periodicals are valuable reflections of and contributors to our culture, and their editors and publishers deserve due credit for their creativity and hard work.

Notes

1. Steve Black, "Life Spans of Library Journal's 'Best Magazines of the Year,'" Serials Review 35, no. 4 (December 2009): 213–217.

2. Ibid.

3. Cooperative Online Serials (CONSER) definition of a periodical, http://www.itsmarc.com/crs/manl1706.htm (accessed May 20, 2010).

4. William A. (Bill) Katz, Magazine Selection (New York: R. R. Bowker, 1971).

5. Ibid., 1.


6. Samir Husni, Success and Failure of New Consumer Magazines in the United States 1979-1983 (Columbia, Missouri: University of Missouri School of Journalism, 1983).

7. Husni, "Influences on the Survival of New Consumer Magazines," Journal of Media Economics 1, no. 1 (1998): 39–49.

8. Husni quoted in Dale Buss, "10 Reasons new magazines fail," Folio: The Magazine for Magazine Management, Feb. 1, 2001: 42.

9. Husni, “Mr. Magazine's New Titles, February 2010,” http://www.mrmagazine.com/newtitles/2010Titles/February2010/index.html.

10. Magazine Publishers of America, "Clearing Up Misperceptions About Magazine Closings," August 2009, http://www.magazine.org/ASSETS/ACC5AFCF184843B9-B8A4CE13080DB232/misperceptions-about-magazine-closings-082009.pdf.

11. A.J. van Zuilen, The Life Cycle of Magazines: A Historical Study of the Decline and Fall of the General Interest Mass Audience Magazine in the United States during the Period 1946-1972 (Uithoorn: Graduate Press, 1977).

12. Quint Randle, "A Historical Overview of the Effects of New Mass Media: Introductions in Magazine Publishing During the Twentieth Century," First Monday 6 (September 2001): 9.

13. Katz, Introduction to Reference Work, Vol. 1 (New York: McGraw-Hill, 1992).

14. Richard E. Bopp and Linda C. Smith, Reference and Information Services: An Introduction, 3rd ed. (Englewood, CO: Libraries Unlimited, 2001).

15. Marcia Tuttle, "The Serials Directory: An International Reference Book," Serials Review 13, no. 2 (Summer 1987): 6.

16. Ibid., 12.

17. Ibid., 13.

18. Jonathan D. Eldredge, "Accuracy of Indexing Coverage Information as Reported by Serials Sources," Bulletin of the Medical Library Association 81, no. 4 (October 1993): 364–370.

19. Eldredge, "Identifying Peer-reviewed Journals in Clinical Medicine," Bulletin of the Medical Library Association 85, no. 4 (October 1997): 418–422.

20. Ibid., 421.

21. Robert B. Bachand and Pamela P. Sawallis, "Accuracy in the Identification of Scholarly and Peer-reviewed Journals and the Peer-review Process Across Disciplines," Serials Librarian 45, no. 2 (2003): 39–59.

22. Marybeth Grimes and Sara E. Morris, "Is Accuracy Everything? A Study of Two Serials Directories," Reference & User Services Quarterly 46, no. 2 (2006): 45–49.

23. Ibid., 48.


24. Carol C. Davis, "Results of a Survey on Record Quality in the OCLC Database," Technical Services Quarterly 7, no. 2 (1989): 43–53.

25. Ibid., 50.

26. OCLC, “Community.” http://www.oclc.org/about/community/default.htm.

27. CONSER, "2008 CONSER Report," http://www.loc.gov/acq/conser/CONSER-annual-report-2008.pdf.

28. Liping Song, "The Road to CONSER–Taken by the Health Sciences Library System, University of Pittsburgh," Cataloging & Classification Quarterly 48 (2010): 143–152.

29. Sarah E. Thomas, "Quality in Bibliographic Control," Library Trends 44, no. 3 (Winter 1996): 491–506.

30. Marsha Starr Paiste, "Defining and Achieving Quality in Cataloging in Academic Libraries: A Literature Review," Library Collections, Acquisitions, and Technical Services 27 (2003): 327–338.

31. Barbara B. Tillett, Cataloging Quality: A Library of Congress Symposium (Washington, D.C.: Cataloging Forum, Library of Congress, 1995).

32. Ibid., 16.

33. Amy H. Turner, "OCLC WorldCat as a Cooperative Catalog," Cataloging & Classification Quarterly 48 (2010): 271–278.

34. Linda K. Bartley, "The CONSER Model: A Personal View," Cataloging & Classification Quarterly 17, no. 3/4 (1993): 75–86.

35. Tuttle, “The Serials Directory,” 11.

36. Jean L. Hirons, ed., CONSER Cataloging Manual (Washington, D.C.: Library of Congress, Cataloging Distribution Service, 2002).

37. Song, “The Road to CONSER,” 151.

38. Christopher H. Walker, "Record Authentication as a Barrier: Reflections on Returning to CONSER," Cataloging & Classification Quarterly 48 (2010): 161–168.

39. Husni, “Influences on the Survival,” 42.

40. Regina Reynolds, e-mail message to author, March 17, 2010.

41. Black, “Life Spans of Library Journal's ‘Best Magazines of the Year,’” 215.

42. Husni quoted in Dale Buss, "10 Reasons new magazines fail," 42.

43. Magazine Publishers of America, Clearing Up Misperceptions, August 2009.

44. Program for Cooperative Cataloging Mission Statement 2010, http://www.loc.gov/catdir/pcc/stratdir-2010.pdf.

45. Davis, “Results of a Survey on Record Quality,” 50.